Monthly Archives: December 2017

Ubuntu KVM virtualization with GPU Passthrough

Linux ships with KVM, a hypervisor on the same level as VMware and VirtualBox. Unlike them, however, it has strong support for GPU passthrough, which grants the guest system native access to the GPU.

The feature is great, but the setup is a bit involved: it requires low-level Linux configuration to hide the GPU from the host kernel, which allows the guest to use it exclusively.

To do so, we need the following steps.
1. Install KVM and its supporting tools:

sudo apt-get install qemu-kvm libvirt-bin virtinst bridge-utils cpu-checker virt-manager ovmf
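Before going further, the CPU and motherboard must support an IOMMU (Intel VT-d or AMD-Vi) and it must be enabled on the kernel command line; `kvm-ok` (from the cpu-checker package installed above) verifies the virtualization side. The original post does not show this step, so the fragment below is a sketch for an Intel CPU on stock Ubuntu GRUB:

```shell
# /etc/default/grub -- enable the IOMMU (Intel VT-d shown; use amd_iommu=on for AMD).
# After editing, run `sudo update-grub` and reboot, then confirm with
# `dmesg | grep -e DMAR -e IOMMU` that the IOMMU is active.
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on"
```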

2. Add pci_stub to the initramfs and rebuild it, so that the stub driver can claim the GPU before the proper radeon or amdgpu driver loads.

jimmy@jimmy-home:~$ cat /etc/modules
# /etc/modules: kernel modules to load at boot time.
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.

jimmy@jimmy-home:~$ cat /etc/initramfs-tools/modules 
# List of modules that you want to include in your initramfs.
# They will be loaded at boot time in the order below.
# Syntax:  module_name [args ...]
# You must run update-initramfs(8) to effect this change.
# Examples:
# raid1
# sd_mod
pci_stub ids=1002:683d,1002:aab0
jimmy@jimmy-home:~$ sudo update-initramfs -u
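The vendor:device pairs passed to pci_stub (1002:683d and 1002:aab0 above) are the bracketed IDs at the end of each “lspci -nn” line. A small filter can pull them out; the grep pattern here is my own illustration, demonstrated on a line captured from the output further below:

```shell
# Sample line taken from `lspci -nn` output on this machine
line='01:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Cape Verde XT [Radeon HD 7770/8760 / R7 250X] [1002:683d]'

# Extract every [xxxx:xxxx] token; only the vendor:device pair matches
printf '%s\n' "$line" | grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]'

# On a live system, run the same filter over the real output:
#   lspci -nn | grep -Ei 'vga|audio' | grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]'
```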

3. Configure the kernel to load vfio-pci before any GPU driver and bind it to the GPU's hardware IDs, which you can look up with “lspci -nnk”. Afterwards, verify with “lspci -nnk” that the kernel driver in use for the GPU is vfio-pci rather than radeon:

jimmy@jimmy-home:~$ vi /etc/modprobe.d/vfio.conf 
softdep radeon pre: vfio-pci
softdep amdgpu pre: vfio-pci
options vfio-pci ids=1002:683d,1002:aab0 disable_vga=1
01:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Cape Verde XT [Radeon HD 7770/8760 / R7 250X] [1002:683d]
	Subsystem: PC Partner Limited / Sapphire Technology Cape Verde XT [Radeon HD 7770/8760 / R7 250X] [174b:e244]
	Kernel driver in use: vfio-pci
	Kernel modules: radeon, amdgpu
01:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Cape Verde/Pitcairn HDMI Audio [Radeon HD 7700/7800 Series] [1002:aab0]
	Subsystem: PC Partner Limited / Sapphire Technology Cape Verde/Pitcairn HDMI Audio [Radeon HD 7700/7800 Series] [174b:aab0]
	Kernel driver in use: vfio-pci
	Kernel modules: snd_hda_intel

4. Running the QEMU guest processes as root is not good practice, but it is a quick way to let libvirtd access attachable storage.

Edit /etc/libvirt/qemu.conf to make things work: uncomment the user and group settings so QEMU runs as root, then restart libvirtd.
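Concretely, the relevant lines look roughly like this (a sketch of the stock Ubuntu qemu.conf; exact comments vary by version):

```shell
# /etc/libvirt/qemu.conf -- uncomment and set the QEMU process identity.
# WARNING: running guests as root is a quick fix, not good practice.
user = "root"
group = "root"
```

Then restart the daemon, e.g. with `sudo systemctl restart libvirtd` (or `sudo service libvirt-bin restart` on older Ubuntu releases).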


Nokia’s study

Nokia is an interesting company to study, especially when you look at it from an engineering perspective. It had one of the best engineering teams, which led to Nokia's success in the 1990s, but it failed to adapt to the new era and declined.

I think the decline came about because the company could no longer blend tech and business well, given the explosive growth of the business.

(The following is a bit technical, but it supports my view of the conflict between tech and business.)

Nokia’s management invested heavily in R&D, and one of the key research areas was embedded Linux, a separate stream outside the Symbian OS. It had the concept of an app, but apps were hard to write: they were coded in C++ with Qt as the UI library. That was a natural choice, since that pair had been part of the Linux ecosystem from day one. The real hurdle was the exceptionally low computational power (<100 MHz). It is exactly like programmers today: learning Java is easy, learning GOOD Java is hard.

Furthermore, it seems Nokia's business side did not want to promote the embedded Linux concept. Everyone has heard of the common models (8810, 8850, N73, N95, 5800, etc.), but it seems almost no one has seen the Nokia Communicator or the E90.

Later on, the industry came up with J2ME, which partially solved the hardware management issue, but it was still very hard to code for.

Having the app concept without a complete end-to-end user journey means no one will use apps. Finding, buying, downloading, and installing an app took more than two hours, and you cannot expect your grandma to do it on her own. That is why I think the App Store killed Nokia.

It simply lost the first mover advantage.

After that, Nokia had two chances to turn things around, or at least capture a decent share of the market. The first was launching the Nokia 5800 XpressMusic in 2008, and the second was choosing Windows rather than Android.

The Nokia 5800 was considered a game changer: a touch screen, Symbian S60 V5, a 3G network, and a proposition driven by music, positioned against iTunes at the time. The key failure was using a resistive display (in contrast to the capacitive display of the first iPhone), which limited usage to a stylus and ruled out multi-touch gestures. The product superficially cloned the iPhone's features without understanding their key market value; it simply failed to blend tech and design and maximize their advantages. At the same time, competitors started adopting Android in their top-tier phones, leaving Nokia behind.

The second chance was when Nokia terminated Symbian OS and chose M$ Windows instead of Android. I am not referring to the Stephen Elop Trojan horse story (smile), just that management failed to do its SWOT analysis well. Android would have been the better choice given Nokia's strong engineering team in embedded systems, whose skills transfer directly to Android; Nokia had also used ARM chips since the 5800. Choosing Windows was essentially asking a Java developer to write a C# program from scratch: it threw away all the know-how.

All in all, Apple's success is not simply UI/UX, but the complete supply chain for apps. Nokia's failure was caused by a failure to understand the market.

PS: I have been looking at the Nokia 8 recently. Even though it is not the original Nokia, the minimally customized Android and the steel body may make it a good choice for me.


SQL and traditional RDBMSs are inevitably a key survival skill for every programmer. SQL is good because it is more or less a universal standard for interacting with any database. RDBMSs are good because they provide ACID guarantees, which drastically simplify a programmer's life, especially in a web environment.

However, RDBMSs suffer from scalability and concurrency problems when it comes to web scale. The common techniques of one master plus N slaves, or data sharding, only postpone the problem by a few multiples. They also impose limitations: the read-write ratio matters, and the partition key has to be carefully selected, which means the designers have to be aware of these constraints.
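To make the partition-key point concrete, here is a minimal hash-sharding sketch (my own illustration, not taken from any particular database): the shard for a row is derived from its partition key, so any query that lacks the key must fan out to every shard.

```shell
# Minimal hash-sharding sketch: map a partition key to one of N shards.
N_SHARDS=4

shard_for() {
    # cksum prints "<crc> <length>"; take the CRC reduced modulo the shard count
    local crc
    crc=$(printf '%s' "$1" | cksum | cut -d' ' -f1)
    echo $(( crc % N_SHARDS ))
}

shard_for "user:42"    # the same key always routes to the same shard
shard_for "user:43"    # a different key may land on a different shard
```

The flip side is exactly the limitation described above: once data is spread this way, queries without the partition key and cross-shard transactions become expensive, so the key must match the dominant access pattern.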

Some people have started moving on to NoSQL; whether that means "No SQL" or "Not Only SQL" is debatable. From an engineering point of view, however, NoSQL is a solution to specific problems, not a silver bullet.

In general, NoSQL can be further divided into the following categories.
– Graph Database – Neo4j
– Document-Oriented Database – MongoDB
– Key-Value Database – Redis / Memcached
– Column-Oriented Database – Cassandra
– Time-Series Database (extension) – Riak TS / OpenTSDB


Each of them solves a particular business model or use case. For example, graph databases handle parties and relationships very efficiently, and document-oriented databases handle hierarchical data better than an RDBMS. We usually gain scalability by giving up transaction capability.

The famous CAP theorem (Brewer's theorem) states that of Consistency, Availability, and Partition tolerance, a distributed system can only guarantee two out of three. Every NoSQL database and RDBMS follows this rule with NO EXCEPTION.

SQL is still widely used today, since much software is not really web scale, or management does not know about, or does not want to pay, the cost of web scale. So, can we have SQL and RDBMS capabilities together with NoSQL scalability? That is the goal of NewSQL. Google (the god, again) published a paper on the concept behind Spanner, which is now in production on Google Cloud Platform.

In short, it decouples the transaction manager from the underlying storage manager, so that for each query or update the transaction manager acquires only the relevant storage managers. Since storage is handled locally, the storage units are usually bigger (around 64 MB) and can be distributed; this fits perfectly with AWS S3 or Google Cloud Storage. However, as it involves network operations, the overall performance should be slower than a local database with a smaller dataset.

There are other implementations of the same idea, like VoltDB and CockroachDB.