Category Archives: Server Admin

Server Config, Operations, Tips and Tricks

PostgreSQL + TimescaleDB + MadLib installation

PostgreSQL is a very powerful database engine by itself; it has many advanced functions that go far beyond standard SQL queries.

The following combo is a great starting point, adding this functionality:

TimescaleDB – a time-series database extension that supports time-based partitioning and sharding of data.
https://docs.timescale.com/latest/getting-started

MADlib – a DB extension that runs common machine learning algorithms inside the database.
https://cwiki.apache.org/confluence/display/MADLIB/Architecture

Here are the installation steps.

# install PostgreSQL and PGXN client
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ bionic-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update
sudo apt-get -y install postgresql-10 libpq-dev postgresql-server-dev-10 postgresql-plpython-10 pgxnclient cmake g++ m4

# Include the TimescaleDB PPA
sudo add-apt-repository ppa:timescale/timescaledb-ppa

sudo apt-get update

# Install the TimescaleDB
sudo apt install timescaledb-postgresql-10
sudo timescaledb-tune --quiet --yes
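
# timescaledb-tune edits postgresql.conf, so restart PostgreSQL afterwards to
# pick up the tuned settings and the timescaledb shared library (assuming a
# systemd-based Ubuntu)
sudo systemctl restart postgresql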

# Install MADlib
sudo pgxnclient install madlib
sudo pgxnclient load madlib 

# Install CStore
sudo apt-get install bison flex git libreadline-dev libz-dev git libpq-dev libprotobuf-c0-dev make protobuf-c-compiler
sudo pgxn install cstore_fdw
sudo pgxn load cstore_fdw

After you have installed all of them, you can play around with the example:
https://docs.timescale.com/latest/tutorials/tutorial-hello-nyc

CStore references:
https://www.citusdata.com/blog/2014/04/03/columnar-store-for-analytics/
https://info.citusdata.com/rs/235-CNE-301/images/Columnar_Store_for_PostgreSQL_Using_cstore_fdw_Webinar_Slides_0915.pdf

For each newly created database, we need to enable the extensions. For example:

CREATE USER jimmy WITH PASSWORD 'xxxxxx';
CREATE DATABASE testdb OWNER=jimmy LC_COLLATE='C' LC_CTYPE='C' template=template0;
GRANT ALL ON DATABASE testdb to jimmy;
\c testdb
CREATE EXTENSION plpythonu;
CREATE EXTENSION madlib;
CREATE EXTENSION timescaledb;
CREATE EXTENSION cstore_fdw;
CREATE SERVER cstore_server FOREIGN DATA WRAPPER cstore_fdw;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public to jimmy;
GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public to jimmy;
GRANT ALL PRIVILEGES ON ALL FUNCTIONS IN SCHEMA public to jimmy;
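
As a quick smoke test of the combo, here is a minimal sketch: a hypothetical metrics table turned into a TimescaleDB hypertable, plus a columnar archive table on the cstore_server defined above. The table and column names are made up for illustration.

-- Hypothetical time-series table, partitioned by time via TimescaleDB
CREATE TABLE metrics (
    time      TIMESTAMPTZ NOT NULL,
    device_id INTEGER,
    value     DOUBLE PRECISION
);
SELECT create_hypertable('metrics', 'time');

-- Columnar foreign table for cold data, using the cstore_fdw server above
CREATE FOREIGN TABLE metrics_archive (
    time      TIMESTAMPTZ,
    device_id INTEGER,
    value     DOUBLE PRECISION
) SERVER cstore_server OPTIONS (compression 'pglz');

-- MADlib is working if this returns a version string
SELECT madlib.version();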

Setup IBM MQ on Ubuntu

I am currently integrating with a banking solution via IBM MQ, so I need to install IBM MQ for testing. Here are my steps for the installation.

PS: IBM MQ 9.1 requires IPv6 for all communication, so I fell back to IBM MQ 8.0.

1. Download IBM MQ binary from IBM Website
https://developer.ibm.com/messaging/mq-downloads/

2. Follow the following link for the installation
https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_9.1.0/com.ibm.mq.ins.doc/q008640_.htm

3. After installation, add the ubuntu user (the current user) to the "mqm" group:

sudo usermod -aG mqm $USER
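
The group change only takes effect on a new login session, so log out and back in (or use newgrp mqm) before continuing.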

4. At this point, the installation should be complete.

5. Set up the environment and queue manager

There is a default MQ installation (Installation1) set up, and you can use it directly.

The program directory is /opt/mqm, and all commands are in /opt/mqm/bin; all subsequent commands are executed from that folder.

6. Create the queue manager:

crtmqm QM1
strmqm QM1
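
You can check that the queue manager is running with dspmq:

./dspmq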

7. Download the sample script from the following link; it creates some default queues and settings.

https://raw.githubusercontent.com/ibm-messaging/mq-dev-samples/master/gettingStarted/mqsc/mq-dev-config.mqsc

Modify the script, changing the user from "app" to "ubuntu":

DEFINE CHANNEL('DEV.APP.SVRCONN') CHLTYPE(SVRCONN) MCAUSER('ubuntu') REPLACE

Execute the script

./runmqsc QM1 < ~/mq-dev-config.mqsc

8. MQ Server uses Linux users and groups for authentication and authorization, so we need to grant additional MQ privileges to the Linux group (mqm):

./setmqaut -m QM1 -t qmgr -g mqm +connect +inq
./setmqaut -m QM1 -n DEV.** -t queue -g mqm +put +get +browse +inq

9. Finally, since IBM MQ 7.5, channels also require authentication. I have no idea how it works yet, so the simplest solution is to disable it:

./runmqsc QM1
ALTER QMGR CHLAUTH(DISABLED)

PS: We should figure out how to work properly with channel authentication.

http://www-01.ibm.com/support/docview.wss?uid=swg21577137
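
Before moving on to MQ Explorer, you can sanity-check the queue manager from the shell with the MQ sample programs (assuming the samples package is installed; DEV.QUEUE.1 is one of the queues created by the mq-dev-config.mqsc script above):

# Put a test message (type a line, then Ctrl-D), then read it back
/opt/mqm/samp/bin/amqsput DEV.QUEUE.1 QM1
/opt/mqm/samp/bin/amqsget DEV.QUEUE.1 QM1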

10. Testing with MQ Explorer

Install IBM MQ Explorer from the following link

https://www-01.ibm.com/support/docview.wss?uid=swg24021041

You can then use MQ Explorer to connect to the .39 MQ queue manager.

ActiveMQ Artemis quick start notes

I have played around with Artemis for a few days; here are the useful commands I used. The exact config is stored in my GitHub repo:

https://github.com/jimmysyss/artemis-config

1. Create 2 brokers with a port offset (the second broker needs a non-zero offset, e.g. 100, so its ports do not clash)
artemis create mybroker1 --port-offset 0
artemis create mybroker2 --port-offset 100

2. Test the producer and consumer
./artemis producer --url tcp://localhost:61616
./artemis consumer --url tcp://localhost:61616

3. Enable clustering
See jgroup.xml and broker.xml.

4. Load balancing
See broker.xml; the load-balancing settings go there.

5. Enable SSL for the HornetQ/OpenWire protocol
a. Provide keystore.jks and truststore.jks
b. Configure the acceptor in broker.xml
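
For reference, an SSL-enabled acceptor in broker.xml might look like the following sketch (keystore paths and passwords are placeholders):

<acceptor name="artemis">tcp://0.0.0.0:61616?sslEnabled=true;keyStorePath=keystore.jks;keyStorePassword=changeit;trustStorePath=truststore.jks;trustStorePassword=changeit</acceptor>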

MiniDLNA configuration

MiniDLNA is a UPnP server that allows you to play video, pictures and music over the LAN via an IP broadcast protocol.

I have installed it on my Raspberry Pi as a mini-server at home, to stream video to my TV and my XiaoMi TV box.

By default, it runs as the "minidlna" user, which is not very convenient because it is not a regular user: I have to manually copy files to the Pi and then again into a folder accessible by minidlna, which runs into user privilege issues. Funny enough, simply changing one folder to 666 doesn't solve the problem, because minidlna needs access to every folder along the path from "/".

Therefore, the only solution is to configure MiniDLNA to run as another user account instead of the default minidlna user.

The minidlna.conf is as follows:


# Specify the user name or uid to run as.
user=pi
....
#media_dir=/var/lib/minidlna
media_dir=/home/pi/DLNA
....
#db_dir=/var/cache/minidlna
db_dir=/home/pi/.dlnacache
....
#log_dir=/var/log
log_dir=/home/pi/.dlnalog
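
After changing the config, restart the service so the new user and paths take effect; minidlnad's -R flag forces a full rescan of the media directory if the library looks stale (service name assumed to be minidlna on Raspbian):

sudo service minidlna restart
sudo minidlnad -R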

Install Superset on Ubuntu 16.04 with venv


sudo apt-get install build-essential libssl-dev libffi-dev python-dev python-pip libsasl2-dev libldap2-dev python3-dev

python3 -m venv superset-venv

source superset-venv/bin/activate

pip install --upgrade setuptools pip

pip install superset

# The following is copied from https://superset.incubator.apache.org/installation.html

# Create an admin user (you will be prompted to set username, first and last name before setting a password)
fabmanager create-admin --app superset

# Initialize the database
superset db upgrade

# Load some data to play with
superset load_examples

# Create default roles and permissions
superset init

# Start the web server on port 8088, use -p to bind to another port
superset runserver
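
Superset should now be reachable at http://localhost:8088; log in with the admin user created above.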

Ubuntu KVM virtualization with GPU Passthrough

Linux is equipped with KVM, another hypervisor on the same level as VMware and VirtualBox. It also has the great capability of GPU passthrough, which grants the guest system native access to the GPU.

The function is great, but the setup is a bit complicated because it involves low-level configuration of Linux to mask the GPU away from the kernel, allowing the guest to use it exclusively.

To do so, we need the following steps.
1. Install KVM as follows:

sudo apt-get install qemu-kvm libvirt-bin virtinst bridge-utils cpu-checker virt-manager ovmf
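
You can verify that hardware virtualization is available with kvm-ok, from the cpu-checker package installed above. Note that passthrough also assumes the IOMMU is enabled in the BIOS and on the kernel command line (intel_iommu=on or amd_iommu=on):

kvm-ok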

2. Rebuild the initramfs for the kernel so that the vfio modules load before the Radeon or AMDGPU drivers do.

jimmy@jimmy-home:~$ cat /etc/modules
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
pci_stub
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
jimmy@jimmy-home:~$ 

jimmy@jimmy-home:~$ cat /etc/initramfs-tools/modules 
# List of modules that you want to include in your initramfs.
# They will be loaded at boot time in the order below.
#
# Syntax:  module_name [args ...]
#
# You must run update-initramfs(8) to effect this change.
#
# Examples:
#
# raid1
# sd_mod
pci_stub ids=1002:683d,1002:aab0
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd 
jimmy@jimmy-home:~$ 
jimmy@jimmy-home:~$ sudo update-initramfs -u

3. Configure the kernel to load vfio-pci before any GPU driver, passing it the GPU hardware IDs looked up with "lspci -nn". Afterwards, you can verify the status with "lspci -nnk", making sure the driver in use for the GPU is vfio-pci rather than radeon.

jimmy@jimmy-home:~$ vi /etc/modprobe.d/vfio.conf 
softdep radeon pre: vfio-pci
softdep amdgpu pre: vfio-pci
options vfio-pci ids=1002:683d,1002:aab0 disable_vga=1

Once the modules are set up, "lspci -nnk" shows vfio-pci in use for both the GPU and its HDMI audio function:
01:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Cape Verde XT [Radeon HD 7770/8760 / R7 250X] [1002:683d]
	Subsystem: PC Partner Limited / Sapphire Technology Cape Verde XT [Radeon HD 7770/8760 / R7 250X] [174b:e244]
	Kernel driver in use: vfio-pci
	Kernel modules: radeon, amdgpu
01:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Cape Verde/Pitcairn HDMI Audio [Radeon HD 7700/7800 Series] [1002:aab0]
	Subsystem: PC Partner Limited / Sapphire Technology Cape Verde/Pitcairn HDMI Audio [Radeon HD 7700/7800 Series] [174b:aab0]
	Kernel driver in use: vfio-pci
	Kernel modules: snd_hda_intel

4. It is not good practice to run libvirtd as root, but it is a quick way to let libvirtd access attachable storage.

Change /etc/libvirt/qemu.conf to make things work: uncomment the user/group settings to run as root, then restart libvirtd.
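
Concretely, the edit looks like this (on older Ubuntu the service may be called libvirt-bin instead of libvirtd):

# /etc/libvirt/qemu.conf -- uncomment these two lines
user = "root"
group = "root"

sudo systemctl restart libvirtd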

The following are the references I checked:

https://bbs.archlinux.org/viewtopic.php?id=162768
https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF#Using_vfio-pci
https://pve.proxmox.com/wiki/Pci_passthrough
https://ycnrg.org/vga-passthrough-with-ovmf-vfio/

Linux Software Raid 1

First, install the multi-disk admin tools:

sudo apt-get install initramfs-tools mdadm 

Next, set the partition type to "fd" (Linux raid autodetect) on both disks:

sudo fdisk /dev/sdb
sudo fdisk /dev/sdc

Create the MD device (md0):

sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

Format the array with ext4:

sudo mkfs.ext4 /dev/md0

Commands to check the RAID status:

sudo mdadm --query /dev/md0
sudo mdadm --detail /dev/md0
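
To make the array assemble automatically at boot, you can also persist its definition and rebuild the initramfs (a common extra step; the config path shown is the Debian/Ubuntu default):

sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u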

Nginx Reverse Proxy

I have been playing around with JHipster recently; one of its goals is a microservices architecture.

Under such an architecture, a reverse proxy is inevitable. Apart from HAProxy, I tried using Nginx as the reverse proxy. Here is the config I use.

##
# You should look at the following URL's in order to grasp a solid understanding
# of Nginx configuration files in order to fully unleash the power of Nginx.
# http://wiki.nginx.org/Pitfalls
# http://wiki.nginx.org/QuickStart
# http://wiki.nginx.org/Configuration
#
# Generally, you will want to move this file somewhere, and start with a clean
# file but keep this around for reference. Or just disable in sites-enabled.
#
# Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples.
##

#### JHipster SPECIFIC ROUTE ####
upstream jhipster {
	server localhost:8080 weight=10 max_fails=3 fail_timeout=30s;
	server localhost:18080 weight=10 max_fails=3 fail_timeout=30s;
	server localhost:28080 weight=10 max_fails=3 fail_timeout=30s;
}
#### JHipster SPECIFIC ROUTE ####

# Default server configuration
#
server {
	listen 80 default_server;
	listen [::]:80 default_server;

	# SSL configuration
	#
	# listen 443 ssl default_server;
	# listen [::]:443 ssl default_server;
	#
	# Note: You should disable gzip for SSL traffic.
	# See: https://bugs.debian.org/773332
	#
	# Read up on ssl_ciphers to ensure a secure configuration.
	# See: https://bugs.debian.org/765782
	#
	# Self signed certs generated by the ssl-cert package
	# Don't use them in a production server!
	#
	# include snippets/snakeoil.conf;

	root /var/www/html;

	# Add index.php to the list if you are using PHP
	index index.html index.htm index.nginx-debian.html;

	server_name _;

#	location / {
#		# First attempt to serve request as file, then
#		# as directory, then fall back to displaying a 404.
#		try_files $uri $uri/ =404;
#	}

	location / {
		proxy_pass http://jhipster;
		proxy_http_version 1.1;
		proxy_set_header Host $host;
		proxy_set_header Upgrade $http_upgrade;
		proxy_set_header Connection 'upgrade';
		proxy_set_header X-Real-IP $remote_addr;
		proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
		proxy_set_header X-Forwarded-Proto $scheme;
		proxy_cache_bypass $http_upgrade;
	}

	# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
	#
	#location ~ \.php$ {
	#	include snippets/fastcgi-php.conf;
	#
	#	# With php7.0-cgi alone:
	#	fastcgi_pass 127.0.0.1:9000;
	#	# With php7.0-fpm:
	#	fastcgi_pass unix:/run/php/php7.0-fpm.sock;
	#}

	# deny access to .htaccess files, if Apache's document root
	# concurs with nginx's one
	#
	#location ~ /\.ht {
	#	deny all;
	#}
}


# Virtual Host configuration for example.com
#
# You can move that to a different file under sites-available/ and symlink that
# to sites-enabled/ to enable it.
#
#server {
#	listen 80;
#	listen [::]:80;
#
#	server_name example.com;
#
#	root /var/www/example.com;
#	index index.html;
#
#	location / {
#		try_files $uri $uri/ =404;
#	}
#}
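
With the three JHipster instances running, a quick way to check the proxy from the shell is to issue a few requests and confirm they all come back healthy (a simple sanity test, not part of the original config):

for i in 1 2 3; do curl -s -o /dev/null -w "%{http_code}\n" http://localhost/; done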

Linux Single User Mode

I recently had a bad experience blowing up my home Ubuntu desktop by adding jimmy as a user again. Jimmy no longer belonged to the sudo group, and I failed to add anyone back to it. I needed to use Linux single user mode / recovery mode to get my account back.

The sequence is as follows.

1. In GRUB, select recovery.
2. You will be prompted with a root shell. However, the root FS is mounted read-only, so remount it read-write:

mount -o rw,remount /

3. Add jimmy back to the sudo group:

usermod -aG sudo jimmy
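
You can verify the membership before rebooting:

id jimmy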

Use Google Drive as your Linux Server offsite backup

The source code can be found here

https://github.com/jimmysyss/google-drive-backup

I have been looking for an offsite backup solution for my VPS. There are plenty of paid solutions, but it would be perfect if we could leverage something like Google Drive or Dropbox. I use Google Drive for the time being.

It is a two-step process. The first step is obtaining OAuth credentials from the Google API; the second is using the Google Drive API to upload a file.

References: the Google OAuth API and Google Drive API documentation.

There are two scripts in my GitHub repo. The first one helps you get the credential and save it in a file (Storage); it is a one-off process. The second uses the credential you got to submit the file to Google Drive.

For GetCredential.py, you need a new application secret from your Google Developers Console => Authentication page. Select Create Client ID => Installed Application => Other (NOT iOS / Android), then download the JSON file and place it in the same directory as the Python file.

Next, run the Python file with the following syntax: CLIENTSECRET_PATH is the JSON file from the previous step, and SAVE_STORAGE_NAME is the new credential storage file. Follow the steps in the script to get the application authenticated.
./GetCredential.py CLIENTSECRET_PATH SAVE_STORAGE_NAME

After you get the SAVE_STORAGE_NAME file, you can use it to upload files. You don't need to get a new SAVE_STORAGE_NAME every time; it handles the OAuth key exchange for you. The command is as follows, where FULL_FILENAME is the path to the file you want to upload.
UploadToGoogle.py STORAGE_FILE FULL_FILENAME
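
To turn this into an actual offsite backup, a cron entry along these lines could work (the paths and schedule are hypothetical; note the escaped % that cron requires):

# /etc/cron.d/gdrive-backup -- hypothetical paths
0 3 * * * root /opt/backup/UploadToGoogle.py /opt/backup/storage.dat /var/backups/backup-$(date +\%F).tar.gz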

Here is some vocabulary that helps you understand how the application runs.

Client ID and Client Secret: the Google ID for your APPLICATION, not the user identity. In OAuth, the Client ID is used to generate a URL for the user to authenticate.

Credential: the application logic that helps you add the authorization header to your HTTP clients.

Storage: the medium, either a DB or a file, that stores the credential so it can be reused later on. It also helps handle renewing the token.

Enjoy!

^_^