Category Archives: Server Admin

Server Config, Operations, Tips and Tricks

Setup IBM MQ on Ubuntu

I have recently been integrating with a banking solution via IBM MQ, so I needed to install IBM MQ for testing. Here are my steps for the installation.

PS. IBM MQ 9.1 requires IPv6 for all communication, so I fell back to IBM MQ 8.0

1. Download IBM MQ binary from IBM Website

2. Follow the linked guide for the installation

3. After installation, add ubuntu user (current user) to the “mqm” group

sudo usermod -aG mqm $USER

4. At this point, the installation should be complete.

5. Set up the environment and Queue Manager

There is a default MQ installation (Installation1) already set up; you can use it directly.

The program directory is /opt/mqm and all commands are in /opt/mqm/bin; all subsequent commands are executed from that folder.

6. Create the Queue Manager

crtmqm QM1
strmqm QM1
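To confirm the queue manager actually started, the dspmq command (also in /opt/mqm/bin) lists queue managers and their status; QM1 should show as Running.

```shell
./dspmq
```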

7. Download the sample script from the following link; it creates some default queues and settings.

Modify the script, changing the user from “app” to “ubuntu”


Execute the script

./runmqsc QM1 < ~/mq-dev-config.mqsc

8. MQ Server uses Linux users and user groups for authentication and authorization, so we need to grant additional MQ Server privileges to the Linux user group (mqm)

./setmqaut -m QM1 -t qmgr -g mqm +connect +inq
./setmqaut -m QM1 -n DEV.** -t queue -g mqm +put +get +browse +inq

9. Finally, since IBM MQ 7.5, channels also require authentication, but I have no idea how it works yet, so the simplest solution is to disable it.

./runmqsc QM1
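Inside runmqsc, channel authentication is controlled by the queue manager's CHLAUTH attribute; a minimal sketch of disabling it (acceptable on a dev box, insecure for production):

```
ALTER QMGR CHLAUTH(DISABLED)
END
```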

PS. We should figure out how to work properly with channel authentication.

10. Testing with MQ Explorer

Install IBM MQ Explorer from the following link

You can use MQ Explorer to connect to .39 MQ Queue Manager.

ActiveMQ Artemis quick start notes

I have played around with Artemis for a few days; here are the useful commands I used. The exact config is stored in my GitHub.

1. Create 2 brokers with a port offset (so the second broker's ports don't clash with the first)
artemis create mybroker1 --port-offset 0
artemis create mybroker2 --port-offset 100

2. Testing the producer and consumer
./artemis producer --url tcp://localhost:61616
./artemis consumer --url tcp://localhost:61616

3. Enable clustering
See jgroups.xml and broker.xml

4. Load balancing
See broker.xml and add the load-balancing settings to the cluster connection
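For reference, the load-balancing policy lives in the cluster-connection element of broker.xml; a sketch (the connector and discovery-group names are assumptions based on a default generated config):

```xml
<cluster-connections>
   <cluster-connection name="my-cluster">
      <connector-ref>netty-connector</connector-ref>
      <!-- ON_DEMAND only forwards messages to nodes that have consumers -->
      <message-load-balancing>ON_DEMAND</message-load-balancing>
      <max-hops>1</max-hops>
      <discovery-group-ref discovery-group-name="dg-group1"/>
   </cluster-connection>
</cluster-connections>
```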

5. Enable SSL for HornetQ OpenWire protocol
a. Provide keystore.jks and truststore.jks
b. Configure the acceptor in broker.xml
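A sketch of such an acceptor in broker.xml (the port, keystore paths, and passwords are placeholder assumptions; the keystore/truststore come from step (a)):

```xml
<acceptors>
   <!-- OpenWire over TLS on a dedicated port -->
   <acceptor name="openwire-ssl">tcp://0.0.0.0:61617?protocols=OPENWIRE;sslEnabled=true;keyStorePath=keystore.jks;keyStorePassword=changeit;trustStorePath=truststore.jks;trustStorePassword=changeit</acceptor>
</acceptors>
```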

MiniDLNA configuration

MiniDLNA is a UPnP media server that allows you to play video, pictures, and music over the LAN via an IP broadcast discovery protocol.

I have installed it on my Raspberry Pi as a mini-server at home, to stream video to my TV and my XiaoMi TV Box.

By default, it runs as the “minidlna” user, which is not very convenient: since it is not my usual user, I have to manually copy files to the Pi and then into a folder accessible by minidlna, which runs into user-privilege issues. Funny enough, simply chmod'ing one folder to 666 doesn’t solve the problem, because minidlna needs access to every folder along the path from “/”

Therefore, the only solution is to configure MiniDLNA to run under another user account rather than the default minidlna user.

The relevant part of minidlna.conf is as follows

# Specify the user name or uid to run as.
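The actual value was lost from this post; assuming the Pi's default account, the line would look like this (the user name “pi” is an assumption):

```
user=pi
```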

Install Superset on Ubuntu 16.04 with venv

sudo apt-get install build-essential libssl-dev libffi-dev python-dev python-pip libsasl2-dev libldap2-dev python3-dev

python3 -m venv superset-venv

source superset-venv/bin/activate

pip install --upgrade setuptools pip

pip install superset

# The following is copied from the Superset installation guide

# Create an admin user (you will be prompted to set username, first and last name before setting a password)
fabmanager create-admin --app superset

# Initialize the database
superset db upgrade

# Load some data to play with
superset load_examples

# Create default roles and permissions
superset init

# Start the web server on port 8088, use -p to bind to another port
superset runserver

Ubuntu KVM virtualization with GPU Passthrough

Linux is equipped with KVM, a hypervisor at the same level as VMware and VirtualBox. However, it has a great capability for GPU passthrough, which grants the guest system native access to the GPU.

The function is great, but the setup is a bit complicated because it involves low-level configuration of Linux to mask the GPU away from the kernel, allowing the guest to use it exclusively.

To do so, we need the following steps.
1. Install KVM as follows

sudo apt-get install qemu-kvm libvirt-bin virtinst bridge-utils cpu-checker virt-manager ovmf

2. Rebuild the initramfs for the kernel, so that pci_stub loads before the proper Radeon or AMDGPU driver.

jimmy@jimmy-home:~$ cat /etc/modules
# /etc/modules: kernel modules to load at boot time.
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.

jimmy@jimmy-home:~$ cat /etc/initramfs-tools/modules 
# List of modules that you want to include in your initramfs.
# They will be loaded at boot time in the order below.
# Syntax:  module_name [args ...]
# You must run update-initramfs(8) to effect this change.
# Examples:
# raid1
# sd_mod
pci_stub ids=1002:683d,1002:aab0
jimmy@jimmy-home:~$ sudo update-initramfs -u

3. Configure the kernel to load vfio-pci before any GPU driver, using the GPU hardware IDs looked up with “lspci -nnk”. Afterwards you can verify the status with “lspci -nnk”: make sure the kernel driver in use for the GPU is vfio-pci rather than radeon.

jimmy@jimmy-home:~$ vi /etc/modprobe.d/vfio.conf 
softdep radeon pre: vfio-pci
softdep amdgpu pre: vfio-pci
options vfio-pci ids=1002:683d,1002:aab0 disable_vga=1
01:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Cape Verde XT [Radeon HD 7770/8760 / R7 250X] [1002:683d]
	Subsystem: PC Partner Limited / Sapphire Technology Cape Verde XT [Radeon HD 7770/8760 / R7 250X] [174b:e244]
	Kernel driver in use: vfio-pci
	Kernel modules: radeon, amdgpu
01:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Cape Verde/Pitcairn HDMI Audio [Radeon HD 7700/7800 Series] [1002:aab0]
	Subsystem: PC Partner Limited / Sapphire Technology Cape Verde/Pitcairn HDMI Audio [Radeon HD 7700/7800 Series] [174b:aab0]
	Kernel driver in use: vfio-pci
	Kernel modules: snd_hda_intel

4. It is not good practice to run libvirtd as root, but it is a quick way to let libvirtd access an attachable storage.

Change /etc/libvirt/qemu.conf to make things work: uncomment the user/group settings so QEMU runs as root.
Then restart libvirtd
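The relevant lines, as a sketch (the restart command assumes a systemd-based Ubuntu):

```
# /etc/libvirt/qemu.conf -- uncomment and set:
user = "root"
group = "root"
```

Then restart with `sudo systemctl restart libvirtd`.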


Linux Software Raid 1

First, install the Multi Disk Admin tools

sudo apt-get install initramfs-tools mdadm 

Next, set the Partition Type to “fd Linux raid auto”

sudo fdisk /dev/sdb
sudo fdisk /dev/sdc

Create the MD device (md0)

sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

Format the partition with Ext4

sudo mkfs.ext4 /dev/md0

Command to check RAID status

sudo mdadm --query /dev/md0
sudo mdadm --detail /dev/md0
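One step not shown above: to have the array re-assembled automatically at boot, it is common to record it in mdadm.conf and refresh the initramfs; a sketch:

```shell
# Append the array definition to the mdadm config, then rebuild initramfs
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
```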

Nginx Reverse Proxy

I have been playing around with JHipster recently; one of its goals is a microservices architecture.

Under such an architecture, a reverse proxy is inevitable. Apart from HAProxy, I tried Nginx as the reverse proxy. Here is the config I use.

# You should look at the following URL's in order to grasp a solid understanding
# of Nginx configuration files in order to fully unleash the power of Nginx.
# Generally, you will want to move this file somewhere, and start with a clean
# file but keep this around for reference. Or just disable in sites-enabled.
# Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples.

#### JHipster SPECIFIC ROUTE ####
upstream jhipster {
	server localhost:8080 weight=10 max_fails=3 fail_timeout=30s;
	server localhost:18080 weight=10 max_fails=3 fail_timeout=30s;
	server localhost:28080 weight=10 max_fails=3 fail_timeout=30s;
}
#### JHipster SPECIFIC ROUTE ####

# Default server configuration
server {
	listen 80 default_server;
	listen [::]:80 default_server;

	# SSL configuration
	# listen 443 ssl default_server;
	# listen [::]:443 ssl default_server;
	# Note: You should disable gzip for SSL traffic.
	# See:
	# Read up on ssl_ciphers to ensure a secure configuration.
	# See:
	# Self signed certs generated by the ssl-cert package
	# Don't use them in a production server!
	# include snippets/snakeoil.conf;

	root /var/www/html;

	# Add index.php to the list if you are using PHP
	index index.html index.htm index.nginx-debian.html;

	server_name _;

#	location / {
#		# First attempt to serve request as file, then
#		# as directory, then fall back to displaying a 404.
#		try_files $uri $uri/ =404;
#	}

	location / {
		proxy_pass http://jhipster;
		proxy_http_version 1.1;
		proxy_set_header Host $host;
		proxy_set_header Upgrade $http_upgrade;
		proxy_set_header Connection 'upgrade';
		proxy_set_header X-Real-IP $remote_addr;
		proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
		proxy_set_header X-Forwarded-Proto $scheme;
		proxy_cache_bypass $http_upgrade;
	}

	# pass the PHP scripts to FastCGI server listening on
	#location ~ \.php$ {
	#	include snippets/fastcgi-php.conf;
	#	# With php7.0-cgi alone:
	#	fastcgi_pass;
	#	# With php7.0-fpm:
	#	fastcgi_pass unix:/run/php/php7.0-fpm.sock;
	#}

	# deny access to .htaccess files, if Apache's document root
	# concurs with nginx's one
	#location ~ /\.ht {
	#	deny all;
	#}
}

# Virtual Host configuration for
# You can move that to a different file under sites-available/ and symlink that
# to sites-enabled/ to enable it.
#server {
#	listen 80;
#	listen [::]:80;
#	server_name;
#	root /var/www/;
#	index index.html;
#	location / {
#		try_files $uri $uri/ =404;
#	}
#}

Linux Single User Mode

I recently had a bad experience blowing up my home Ubuntu desktop by re-adding jimmy as a user. Jimmy no longer belonged to the sudo group, and I failed to add anyone back to it. I had to use Linux single user mode / recovery mode to get my account back.

The sequence is as followed.

1. In GRUB, select recovery
2. You will be prompted with a root shell. However, the root FS is mounted read-only, so remount it read-write:

mount -o rw,remount /

3. Add jimmy back to the sudo group

usermod -aG sudo jimmy

Use Google Drive as your Linux Server offsite backup

The source code can be found here

I have been looking for an offsite backup solution for my VPS. There are plenty of paid solutions; however, it would be perfect if we could leverage something like Google Drive or Dropbox. I use Google Drive for the time being.

It is a two-step process: first, obtain an OAuth credential from the Google API; second, use the Google Drive API to upload a file.

Google OAuth API

Google Drive API

There are two scripts in my GitHub. The first one helps you get the credential and save it in a file (Storage); it is a one-off process. The second uses the credential you got and submits the file to Google Drive.

For the first script, you need a new Application Secret from your Google Developers Console => Authentication page. You need to select Create Client ID => Installed Application => Other (NOT iOS / Android). Then download the JSON file and place it in the same directory as the Python file.

Next, run the Python file with the following syntax: CLIENTSECRET_PATH is the JSON file from the previous step, and SAVE_STORAGE_NAME is the new credential storage file. Follow the steps in the script to get the application authenticated.

After you get the SAVE_STORAGE_NAME file, you can use it to upload files. You don’t need a new SAVE_STORAGE_NAME every time; it handles the OAuth key exchange for you. The command takes the arguments below, where FULL_FILENAME is the path to the file you want to upload.

STORAGE_FILE FULL_FILENAME

Here are a few vocabulary terms that help you understand how the application runs.

Client ID and Client Secret: the Google identity for your APPLICATION, not the user identity. In OAuth, the Client ID is used to generate a URL for the user to authenticate.

Credential: the application logic that helps you add the authorization header to your HTTP clients.

Storage: the medium, either a DB or a file, used to store the credential so it can be reused later. It also handles renewal of the token.



VPN on CentOS OpenVZ using PPTPD

VPN is a useful technique that can help you access websites or services forbidden in some countries. Furthermore, you can use it to change your apparent physical location; for example, if I want to buy something in the US, I can use a US VPN to access the site.

Of course, there are a couple of VPN services you can buy online; however, running your own VPN server gives you all kinds of flexibility, especially if you have an existing VPS.


OK, no bullshit, let’s start.

The basic idea is that the VPN client routes all traffic to the VPN server, and the server acts as a jumping board to access the other services you want.

0. Configure your VPS to enable PPP and TUN; usually you can find the settings in the SolusVM admin console, as it is a kernel setting. Another point to watch out for: on an OpenVZ VPS, the network interface is venet0 instead of eth0; that trapped me for more than 4 hours. Jesus!

1. Add the dependencies

yum install -y ppp libpcap iptables

2. Get the PPTPD install RPM; choose either 32-bit (i686) or 64-bit (x86_64)


Upgrade (-Uvh) or install (-ivh)

rpm -ivh pptpd-1.4.0-1.el6.i686.rpm

3. Edit /etc/pptpd.conf; this is the private network section of PPTPD. We will set the server IP (localip) while the clients are assigned IPs sequentially (remoteip).
PS. In my case, I only need a few connections, so I cut down the IP range and the max number of connections

connections 5
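The actual addresses were lost from this post; a typical pptpd.conf sketch using an assumed 192.168.0.x private range (pick values that don't clash with your LAN):

```
# /etc/pptpd.conf
localip 192.168.0.1
remoteip 192.168.0.234-238
connections 5
```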

4. Edit /etc/sysctl.conf to change this line from 0 to 1

net.ipv4.ip_forward = 1

5. Set the DNS servers used when a client connects; we will be using Google DNS (Google, my Lord). Edit /etc/ppp/options.pptpd
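The DNS lines themselves were lost here; in /etc/ppp/options.pptpd they are the ms-dns directives, e.g. with Google's public resolvers:

```
ms-dns
ms-dns
```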


6. Edit /etc/ppp/chap-secrets for the client credentials. Remember to change the password; “password” is at the top of every hacker’s dictionary

# Secrets for authentication using CHAP
# client        server  secret                  IP addresses
vpnuser pptpd password *

7. It is time to set up iptables. We have several things to do here:
a. Allow GRE (IP protocol 47) and TCP port 1723
b. Enable NAT, so that client traffic can be forwarded out
c. Allow input and output traffic for ppp+ (a wildcard for ppp0, ppp1, ppp2, etc.)
Edit /etc/sysconfig/iptables

-A INPUT -i ppp+ -j ACCEPT
-A INPUT -p tcp -m tcp --dport 1723 -j ACCEPT
-A INPUT -p gre -j ACCEPT
-A OUTPUT -o ppp+ -j ACCEPT
-A OUTPUT -p gre -j ACCEPT
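The NAT rule from point (b) is missing above; in /etc/sysconfig/iptables it goes in the nat table (assuming venet0 as the outbound interface, per step 0):

```
*nat
-A POSTROUTING -o venet0 -j MASQUERADE
COMMIT
```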

8. We are almost done; it is time to restart all the services.

service network reload
/etc/rc.d/init.d/iptables restart 
/etc/rc.d/init.d/pptpd restart-kill


There are plenty of VPN setup tutorials on the web, but few that help you troubleshoot. There are two main pitfalls: one in the setup, the other in the routing rules.

Troubleshooting setup params

You can enable logging of the setup params in /etc/ppp/options.pptpd by uncommenting “dump” in the file. You can find the log in /var/log/messages; it shows the most information while a client is connecting to the server.

Troubleshooting the Route

If your client can connect to the VPN server but is unable to reach the internet, the issue probably lies in the iptables rules. Trace the traffic through these 4 steps.

1. VPN Client to VPN Server
2. VPN Client to VPN Server to External Machine
3. External Machine replies to VPN Server and then translate to Internal IP
4. External Machine replies to VPN Server and then send to VPN Client

Here, tcpdump and ping are your friends; tcpdump helps you capture all packets going through the server when you specify the right parameters.

tcpdump -n -i ppp0 icmp and src host and dst host

ppp0 can be replaced by eth0 or venet0.
src host can be your client IP address or the external party (Cases 1 and 3).

For details, you may refer to this link