Install Ansible on Raspberry Pi (Raspbian)

Raspbian is based on Debian rather than Ubuntu, so we need a few adjustments in order to install the Ansible PPA on Raspbian. Here are my notes from the installation.

First, update the Raspbian release info so that Raspbian is aware of its codename “bullseye”

sudo nano /usr/share/python-apt/templates/Raspbian.info

Suite: bullseye
RepositoryType: deb
BaseURI: http://raspbian.raspberrypi.org/raspbian/
MatchURI: http://raspbian.raspberrypi.org/raspbian/

Next, we need to install the required packages

sudo apt install software-properties-common
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 93C4A3FD7BB9C367

Next, we need to add the apt source manually; we must create the file ourselves instead of using apt-add-repository

sudo nano /etc/apt/sources.list.d/ansible.list

deb [arch=armhf] http://ppa.launchpad.net/ansible/ansible/ubuntu focal main

Finally, update the package lists and use “apt install” to install

sudo apt update
sudo apt install ansible

And verify with

ansible --version

 

This server setup

I use a pre-crafted image from my cloud provider, so I need to handle the iptables rules first.

sudo sh -c "iptables -I INPUT -p tcp -m tcp --dport 80 -j ACCEPT && iptables -I INPUT -p tcp -m tcp --dport 443 -j ACCEPT && service iptables save"

1. GRUB

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nopti noibrs noibpb nospec nospectre_v2 nospec_store_bypass_disable mitigations=off"
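
This line goes in /etc/default/grub on a standard Ubuntu image; after editing it, the boot configuration has to be regenerated (a minimal sketch):

sudo nano /etc/default/grub
# set GRUB_CMDLINE_LINUX_DEFAULT to the value above, then regenerate the boot config
sudo update-grub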

2. FSTAB

/etc/fstab
LABEL=cloudimg-rootfs / ext4 noatime,defaults 0 1
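
To pick up the noatime option without a reboot, the root filesystem can simply be remounted (assuming nothing else in fstab was changed):

sudo mount -o remount /
mount | grep ' / '    # confirm noatime is now listed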

3. SYSCTL

/etc/sysctl.conf

vm.swappiness = 1
vm.dirty_background_ratio = 10
vm.dirty_ratio = 5
fs.inotify.max_user_watches = 524288
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1

sudo crontab -e
@reboot /sbin/sysctl --load=/etc/sysctl.conf
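
The cron entry above only runs at boot; to apply the new values immediately, run the same command once by hand:

sudo sysctl --load=/etc/sysctl.conf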

4. IPTables

/etc/iptables.conf

-A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT
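
How these rules get loaded depends on the image. Assuming /etc/iptables.conf is a full iptables-save format file (i.e. the rules above sit inside the usual *filter ... COMMIT block), a hedged way to load and persist them is:

sudo sh -c "iptables-restore < /etc/iptables.conf && service iptables save"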

5. MariaDB

sudo apt-get install software-properties-common dirmngr apt-transport-https
sudo apt-key adv --fetch-keys 'https://mariadb.org/mariadb_release_signing_key.asc'
sudo add-apt-repository 'deb [arch=amd64,arm64,ppc64el,s390x] https://mirrors.xtom.com.hk/mariadb/repo/10.7/ubuntu focal main'

sudo apt update
sudo apt install mariadb-server

Then, inside the MariaDB shell (sudo mariadb), set the root password and create the phpMyAdmin account:

ALTER USER 'root'@'localhost' IDENTIFIED BY 'wong2903';
CREATE USER 'phpmyadmin'@'localhost' IDENTIFIED BY 'wong2903';
GRANT SELECT, INSERT, UPDATE, DELETE ON phpmyadmin.* TO 'phpmyadmin'@'localhost';

6. PHP + Nginx

sudo apt install php php-cli php-fpm php-json php-common php-mysql php-zip php-gd php-mbstring php-curl php-xml php-pear php-bcmath
sudo apt install nginx php-curl php-gd php-intl php-mbstring php-soap php-xml php-xmlrpc php-zip
See nginx.conf and sites-available/default.
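
Those two files are not reproduced here; after editing them, a quick validation cycle looks like this (the php-fpm socket path under /run/php/ is an assumption, check your installed PHP version):

ls /run/php/                 # confirm the php-fpm socket name referenced in the nginx config
sudo nginx -t                # validate the configuration
sudo systemctl reload nginx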

7. Certbot

mkdir -p .xxxxx
touch .xxxxx/cloudflare.ini
chmod 600 .xxxxx/cloudflare.ini
sudo snap install core; sudo snap refresh core
sudo snap install --classic certbot
sudo snap set certbot trust-plugin-with-root=ok
sudo snap install certbot-dns-cloudflare
sudo certbot run --dns-cloudflare --dns-cloudflare-credentials ~/.xxxxx/cloudflare.ini -d '*.jimmysyss.com' -d jimmysyss.com -i nginx
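
The cloudflare.ini file created above holds the DNS API credential used by the certbot-dns-cloudflare plugin; a minimal sketch of its contents (the token value is a placeholder):

cat > ~/.xxxxx/cloudflare.ini <<'EOF'
# Cloudflare API token with Zone:DNS:Edit permission
dns_cloudflare_api_token = YOUR_CLOUDFLARE_API_TOKEN
EOF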

8. PHPMyAdmin

# Create a symlink in /var/www/html to use phpmyadmin
sudo apt-get --no-install-recommends install phpmyadmin
sudo ln -s /usr/share/phpmyadmin /var/www/html/phpmyadmin

In phpMyAdmin, create a new user phpmyadmin together with its own schema, then select that schema, go to the Operations tab, and create the configuration storage.

9. Wireguard with Algo VPN

git clone https://github.com/trailofbits/algo.git
sudo apt install -y --no-install-recommends python3-virtualenv
cd algo

python3 -m virtualenv --python="$(command -v python3)" .env &&
source .env/bin/activate &&
python3 -m pip install -U pip virtualenv &&
python3 -m pip install -r requirements.txt
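
From here, the deployment itself is driven by the project's wrapper script; per the Algo README it is started with ./algo (the entry point may change between versions):

# still inside the activated .env virtualenv, from the algo directory
./algo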

10. PostgreSQL

# Create the file repository configuration:
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'

# Import the repository signing key:
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -

# Update the package lists:
sudo apt-get update

# Install the latest version of PostgreSQL.
# If you want a specific version, use 'postgresql-12' or similar instead of 'postgresql':
sudo apt-get -y install postgresql
curl -L https://packagecloud.io/timescale/timescaledb/gpgkey | sudo apt-key add -
sudo sh -c "echo 'deb https://packagecloud.io/timescale/timescaledb/debian/ $(lsb_release -c -s) main' > /etc/apt/sources.list.d/timescaledb.list"
sudo apt update
sudo apt install timescaledb-2-postgresql-14
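
After installation, TimescaleDB still needs to be added to PostgreSQL's shared_preload_libraries; the packages ship a timescaledb-tune helper that takes care of this (a sketch, exact flags may differ between versions):

sudo timescaledb-tune           # updates postgresql.conf, including shared_preload_libraries
sudo systemctl restart postgresql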

11. Docker

sudo apt-get install ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
sudo usermod -aG docker $USER
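
The group change only takes effect on a new login session; a quick way to pick it up and verify the installation:

newgrp docker               # or log out and log back in
docker run hello-world      # the engine should pull and run the test container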

Installing SQL Server with Tools

After the previous post introducing SQL Server, it is time to install SQL Server and its tools.

We are going to use SQL Server Developer Edition; it is basically the full Enterprise feature set, but it comes without support and cannot be used in production. No harm here, since we are only testing and learning. There is plenty of new software that we will deal with and show in the next post.

First of all, we install SQL Server with a Chocolatey (choco) command.

choco install sql-server-2019 -y

However, because we need to install some additional SQL Server features, we also need to download the SQL Server installation media.

https://www.microsoft.com/en-us/sql-server/sql-server-downloads

Mount the SQL Server ISO, start the installation program, and select the options for SSRS / SSIS / SSAS and MDS.

For SSRS, download and install from the following path.

https://www.microsoft.com/en-us/download/details.aspx?id=100122

And then we can install the tools with Choco again.


choco install visualstudio2019professional -y
choco install ssrs-2019 -y
choco install ssis-vs2019 -y

For SSMA, it is not available in Chocolatey, so we need to install it from here:

https://docs.microsoft.com/en-us/sql/ssma/mysql/installing-ssma-for-mysql-client-mysqltosql?view=sql-server-ver15

 

SQL Server 2019 – More than CRUD – Overview

SQL Server is just another database in many people's minds, especially developers': nothing different from Sybase, Oracle or PostgreSQL. In fact, we should look at it from another angle: SQL Server is an enterprise information management platform with a lot of companion tools that enable business users to manage their data in a low-code / no-code environment.

Here is a list of SQL Server add-on functionality that can relieve developers from hunting for 3rd-party solutions. More often than not, these come for FREE and no additional license is needed.

SQL Server MDS + DQS

Master Data Services is an application that manages static data within an enterprise, such as customer and product data. It can also store images, business rules and fee tables. SQL Server MDS models master data with a flexible data model, in which each entity is a set of attributes.

SQL Server MDS also supports various forms of entitlement, object versioning, temporal data and audit trails, which fit enterprise needs. The data is exposed as database views or as a RESTful service, which is friendly to developers, who can take either a direct DB approach or an SOA approach.

Data Quality Services defines the business rules that check the validity of data and flags inconsistencies as early as possible, so that invalid data does not cause errors in other systems.

SSAS

SQL Server Analysis Services is the offering that supports Business Intelligence reporting and querying. Traditionally, BI requires an OLAP service working on a data warehouse, which may in turn use a columnar database. SQL Server can provide all of these out of the box with suitable configuration.

SSAS supports various forms of OLAP structure, riding on SQL Server's columnstore functionality, which acts as an index type in SQL Server table design. Of course, a good data warehouse still requires properly designed fact and dimension structures.

SSIS

In an enterprise ecosystem, it is inevitable that we need an ETL service to move data around. SSIS is SQL Server's default ETL solution. The ETL packages are defined in SSDT (which became part of Visual Studio 2017) and executed inside the SSIS service.

SSRS

Similar to the ETL service, the reporting service is another service that works with the database. SQL Server provides SSRS, which can generate reports on demand or on a schedule. The reports are likewise defined in Visual Studio 2017 or SSDT.

A great system is garbage without companion tools. Here is the set of tools SQL Server offers for different personas.

SSMS

SQL Server Management Studio targets database administrators, who can manage DB accounts, configure DB features, and manage DB backup and restore there.

SSDT (Part of Visual Studio)

SQL Server Data Tools targets developers. With the latest Visual Studio, SSDT lives as different project types available through the Visual Studio marketplace. It also promotes the database-design-as-code concept: SSDT can compare a target DB against the DB scripts and apply only the delta to the DB.

SSMA

SQL Server Migration Assistant is a platform-specific tool for migrating data from other database types to SQL Server. It goes through the migration in three steps: analyse, migrate the schema, then migrate the data.

Excel / Power BI

Business users work with Excel / Power BI every day; these are their primary tools. SQL Server has excellent integration with Excel and Power BI, moving data both to and from the database. As a result, SQL Server can solve the last-mile data presentation problem easily.

SQL Server has the following built-in capabilities supporting the services above, targeting enterprise-scale data management needs. The border between NoSQL and RDBMS is fading, and SQL Server positions itself as a comprehensive data management solution.

Dynamic Data Masking

Row Level Security

JSON / XML Object

Full Text Search

Columnar Index

 

EnergizedProtection host.gz list

EnergizedProtection is a well-known ad blocker to use after you root your Android phone.

The concept is really simple: a script gathers a list of advertisement hosts and puts them into a giant hosts file that redirects them to 0.0.0.0, so the advertisements never load and your precious bandwidth is saved.

It requires root because the hosts file can only be modified by root. The beauty of it is that it only makes use of native Linux infrastructure; it doesn't need any special app.

Unfortunately, the server was recently shut down, so the installed Magisk module no longer updates the ad hostnames automatically. We need to compile the host list (host.gz) on our own, put it into /sdcard/EnergizedProtection/host.gz, and let the Magisk module consume it. The steps are below, with a condensed shell version after the list.

1. Check out the source from this link:
https://github.com/EnergizedProtection/block

2. Execute /assets/source/filter.sh; it will prompt you for a GitHub token, which you can skip. This step downloads the raw host lists into your checkout, under /block/assets/sources/filter.

3. Select a host pack and compile it. For example, I use the blu pack, so I go to /block/blu/ and execute build.sh; the compiled host.gz file will be in the “formats” folder.

4. Copy the compiled block/blu/formats/hosts.gz to /sdcard/EnergizedProtection/hosts.gz on your mobile.

5. Re-run “energize -m”, select a pack, and let it consume the hosts.gz. Remember to reboot after it is installed.

6. Enjoy
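
A condensed shell version of steps 1-4 (a sketch; the script paths and pack name are taken from the steps above and may differ between repo versions):

git clone https://github.com/EnergizedProtection/block.git
cd block
bash assets/source/filter.sh     # downloads the raw host lists; the GitHub token prompt can be skipped
cd blu && bash build.sh          # compile the "blu" pack; output lands in the formats folder
adb push formats/hosts.gz /sdcard/EnergizedProtection/hosts.gz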

Getting MicroG to run on Old Galaxy S5 with LineageOS

I have been quite busy with my new job for a year. It is time to continue my research.

Google has been blamed for constantly sniffing and collecting user behaviour. Custom ROMs and microG come to the rescue. Android is so light without the Google Play framework: excellent battery life and a fast, snappy response.

It may be useful if your phone is only for work, e.g. a Microsoft-only phone. However, without the Google Play framework, Gmail, Google Maps and tons of other Google apps won't work.

Here is a highlight of the steps to install MicroG on LineageOS.

1. Assume you have an unlocked Bootloader and TWRP installed
2. Wipe Cache, Data and System, and clean install LineageOS (I use 7.1.2) in TWRP
3. Flash Magisk in TWRP
4. Flash Xposed Framework (Depending on Android version EdXposed works for >8.0.0)
5. Boot into LineageOS, install Riru module in Magisk
6. Install Xposed Installer App
7. Install FakeGapps and HiddenCore Module in Xposed Installer, they are used to provide Signature Spoofing capability.
8. You may also want to install NanoDroid for a large package of open-source alternatives to common services

 

Running MacOS on Ubuntu – Sosumi

Ubuntu has a Snap package that can run macOS on KVM. It is a pre-built script that brings macOS up and running automatically.

There are plenty of notes on the web, this passage is just my personal notes.

Pre-requisite – install the KVM utilities and add your regular user to the kvm and libvirt groups

sudo apt-get install cpu-checker qemu-utils
sudo usermod -a -G kvm,libvirt jimmy
sudo chown root:kvm /dev/kvm
sudo chmod 666 /dev/kvm
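
The cpu-checker package ships the kvm-ok utility, which is a quick way to confirm the host can actually use KVM before going further:

kvm-ok    # should report "KVM acceleration can be used"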

Install Sosumi

sudo snap install sosumi --edge

Adjust the default CPU cores, RAM size and disk image size

# Try to launch the VM as a normal user
sosumi 
# immediately close the VM at Clover
# go to snap folder and edit the launch file
cd ~/snap/sosumi/common
nano launch
# modify -m 8G / -smp 8,cores=4
qemu-img resize macos.qcow2 +20G

Launch the VM and carry out the standard installation process.

sosumi

Some notes about NewSQL – CockroachDB

I have done some research on CockroachDB recently, which made me understand a new class of databases called NewSQL.

NewSQL has a few key features which are very attractive, especially to SQL developers. The corresponding solutions are GCP Cloud Spanner and AWS Aurora.
1. ACID compliance, but with global locking trade-off
2. Auto-recovery and auto-rebalance under node failure
3. Global Distributed database with localized access of data
4. No phantom reads, which maintains global consistency

It sounds pretty attractive at first glance. However, it must be carefully designed in order to enjoy the benefits. Let's look at how it works first.

0. Define your database cluster topology, tagging each running instance with attributes like Region, AZ (in AWS terms), Data Center and Country. This information is used for locating table data and indexes (see the startup sketch after this list).

1. Each table is partitioned by one or more columns. The partitioning can be done with ENUM values for discrete data or ranges for continuous data.

2. Additional secondary (non-primary) indexes must also be designed with partitioning in mind; the best approach is to share the partition key with the table data and add extra fields to improve searching.

3. Each index or data partition maps to a list of hints which determine where that piece of data is stored. CockroachDB chooses the final location by honouring the hints first. However, if no living instance satisfies the hints, it will just pick a node, which could be anywhere across the globe.

4. CockroachDB maintains a network latency matrix internally, which keeps track of the performance between any two nodes. It is an important input for CockroachDB when deciding which data partition to update or read.

5. Each slice of data, which has a few replicas among the living nodes, periodically elects a “leaseholder” based on the table definition hints and usage statistics. All read-write operations MUST go through the leaseholder in order to achieve global consistency and data locking. Since the leaseholder is just a pointer among the partition replicas, shifting the leaseholder is a cheap operation and can change frequently (~10 sec) to cope with the shape of the traffic.

6. For table READs, the detailed mechanism is shown here. The key takeaway is to avoid global queries and keep queries local, for example by including part of the partition key in your search criteria. The query is routed to the leaseholder for processing, so the primary concern is the latency between the gateway node and the leaseholder. The performance is excellent when everything happens locally.

7. For table WRITEs, the detailed mechanism is shown here. The key performance trick is the location of the replicas that form the majority. For example, in a 3-replica environment, the leaseholder has to commit 2 out of 3 replicas in order to declare the update successful; the delay is therefore governed by the network latency between the leaseholder and its 2nd-closest replica.

8. When a node fails and recovery is needed, CockroachDB does a great job. It regenerates the replica on a node chosen to satisfy the hints. Since live replicas remain, the performance hit is minimal, and it self-heals when a new instance comes online.

9. The schema and partition configuration can be changed by DDL; CockroachDB will migrate the slices based on the partition hints.
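
For point 0, the topology tags are supplied when each node is started; a hedged sketch of what that can look like with the cockroach CLI (the addresses, store path and tag values are placeholders, not from my notes):

cockroach start \
  --locality=country=hk,region=asia,zone=az1 \
  --store=/mnt/cockroach \
  --join=node1:26257,node2:26257,node3:26257 \
  --certs-dir=certs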

Based on the implementation above, there are some pitfalls to keep an eye on.

1. The database topology design may require clustering some regions together and trying to place data locally.

2. Data and indexes have to be partitioned separately; we should place them as close together as possible to make read-write operations localized.

3. Data is committed when the majority of replicas report committed to the leaseholder. This means you need to place the replicas wisely and strike a balance between 1) placing the majority of replicas on nodes that are close to each other, and 2) placing data wide apart so that it achieves regional replication.

4. Average latency measurements
Same city – ~5 ms (inter-DC dedicated line / AZ)
Same country, inter-city – ~20 ms
Cross-country, e.g. HK-SG – ~50 ms
Cross-continent, e.g. Asia vs EMEA vs US – ~200 ms

Based on these latencies, we should be able to predict the expected performance of an individual query or operation fairly precisely. For example, with the leaseholder in Hong Kong and its 2nd-closest replica in Singapore, a single write commit adds roughly ~50 ms on top of local processing.

HAPPY CODING!