Published: 12-11-2015 | Author: Jonathan Robe
This article was originally published in Linux Voice, issue 2, May 2014. This issue is now available under a Creative Commons BY-SA license. In a nutshell: you can modify and share all content from the magazine (apart from adverts), even for commercial purposes, providing you credit Linux Voice as the original source and retain the same license.
This remix is converted manually to Markdown and HTML for ease of archiving and copy-pasting.
Other converted Linux Voice articles can be found here.
Enterprise-grade virtualisation on a real kernel.
While Linux containers have been around for a while, they've recently been gaining more recognition as a lightweight alternative to traditional virtualisation products like KVM or VMware. With the arrival of LXC, Docker, and the next generation of distributions, we're all likely to see a lot more of them over the coming decade.
As with all virtualisation, the idea of containers is to make it easy to run multiple applications on a single host, all the while ensuring each remains separate. This enables the administrator to carefully manage the resources assigned to each application and to ensure that they can't interfere with each other.
What makes containers different to traditional products is that they don't do any hardware emulation. Instead, the applications in question all run directly on top of the host kernel, just like any other process. Separation between the running containers is achieved through the careful use of a number of Linux kernel features.
Control Groups (cgroups) are the first of these features, and are probably the best known. They provide a means for administrators to group processes, and all their future children, into hierarchical groups. Various subsystems can then be used to strictly manage the processes and the resources they interact with.
If you have systemd installed, you can quickly inspect which cgroup your processes are running in with the following command:

ps -aeo pid,cgroup,command
Running this, you should see that all processes are running in cgroups that exist in a hierarchy below the systemd cgroup. You could use systemd unit files to manage the resources assigned to a service (indeed, if you're using systemd, this is probably the best way to use cgroups), but you can also interact with cgroups directly.
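Even without systemd's tooling, the kernel exposes the same information to every process through procfs. A quick check of the current shell's cgroup membership (no extra tools assumed):

```shell
# Each line is hierarchy-id:controller-list:path; on a cgroup v2
# system you will typically see a single line starting with "0::".
cat /proc/self/cgroup
```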
There is also a collection of tools for working with cgroups directly, available in the libcgroup package: cgcreate, for example. You can use this tool to create a new cgroup as follows:
cgcreate -g memory,cpu:mysql
This will create a new cgroup called mysql which has been tied to the memory and cpu subsystems. You can then take advantage of a command such as cgset, or interact directly with the virtual filesystem exposed by cgroups, to manipulate the resource limits of this newly created group:
cgset -r memory.swappiness=xxx mysql
This command will set the swappiness parameter of all processes running in the mysql cgroup to xxx. To add a process to the cgroup, all you need to do is echo its PID to the tasks file in the cgroup's filesystem, or use the cgclassify tool.
Image 1: The highlighted area shows the cgroup in which the different processes are running.
Namespace isolation is the other key technology that makes containers possible on Linux. Each namespace wraps a particular system resource, and makes processes running inside that namespace believe they have their own instance of that resource. There are six namespaces in Linux:
- mount: Isolates the filesystems visible to a group of processes, similar to the chroot command.
- UTS: Isolates host and domain names so that each namespace can have its own. (UTS = Unix Time Sharing)
- IPC: Isolates System V and POSIX message queue interprocess communication channels. (IPC = InterProcess Communication)
- PID: Lets processes in different PID namespaces have the same PID. (This is useful in containers, as it lets each container have its own init (PID 1) and allows for easy migration between systems.) (PID = Process ID)
- network: Enables each network namespace to have its own view of the network stack, including network devices, IP addresses, routing tables etc.
- user: Allows a process to have a different UID and GID inside a namespace to what it has outside.
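You can see these namespaces from userspace without any special tools: the kernel exposes one handle per namespace under /proc/&lt;pid&gt;/ns, and two processes share a namespace exactly when the corresponding inode numbers match. A quick look (the inode numbers will differ on your system):

```shell
ls -l /proc/self/ns/        # one symlink per namespace type
readlink /proc/self/ns/uts  # e.g. uts:[4026531838]
```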
A quick way to experiment with namespaces yourself is to use the unshare command. This will run a particular program, removing its connection to a particular namespace of its parent:
sudo unshare -u /bin/bash
This will create a new bash process that doesn't share its parent's UTS namespace. If you now set the hostname to foo, you'll then be able to look, in another shell on the same system, and see that the hostname in the root (original) namespace hasn't changed.
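The whole experiment can be condensed into a short sketch (the hostname foo is just an example; the demo is guarded so it is a harmless no-op without root or without the unshare tool):

```shell
# Run the inner shell in its own UTS namespace: the hostname change is
# visible inside it but not outside.
if [ "$(id -u)" -eq 0 ] && command -v unshare >/dev/null 2>&1; then
    unshare -u sh -c 'hostname foo && hostname'  # the inner shell reports "foo"
    hostname                                     # the outer name is unchanged
fi
```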
Image 2: The output of this long listing in the /sys/fs/cgroup directory shows all the different subsystems that are available for managing processes with cgroups on a default Fedora 20 install.
Now that you have an idea of what the underlying technologies do, let's take a look at Linux Containers (LXC), a userspace interface that brings them together. To install the LXC userspace tools, you need to install the lxc package on Ubuntu and Fedora, but in the case of the latter, you should also install lxc-extras for a better experience.
Once that's done, creating a new container, depending on your requirements, can be simple. In the /usr/share/lxc/templates directory, you'll find a collection of scripts that will create some default containers, including Ubuntu system containers and Alpine application containers. To put one of these to use, all you need to do is run the following command:
lxc-create -n linux-voice -t /usr/share/lxc/templates/busybox --dir /home/jon/containers/linux-voice
- -n: sets the name of the container.
- -t: says which template you want to use.
- --dir: says where you want the rootfs for the new container to be created.
This command creates a directory in /var/lib/lxc with the name set by the -n flag. The contents of this directory are populated by the script specified with the -t flag.
If you look at, say, the BusyBox template, you'll see that this script sets up a filesystem hierarchy, copies appropriate binaries and installs important pieces of configuration with heredoc statements. Inside the created directory, you'll also find that a config file has been created. This defines which system resources are to be isolated and controlled by the container.
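The core pattern the templates follow is easy to reproduce by hand. This sketch (hypothetical paths, not a real LXC template) builds a tiny rootfs skeleton and writes an inittab with a heredoc, just as the BusyBox template does:

```shell
# Create a minimal rootfs skeleton in a temporary location.
rootfs="${TMPDIR:-/tmp}/demo-rootfs"
mkdir -p "$rootfs/bin" "$rootfs/etc/init.d"

# Install a minimal inittab via a heredoc, the same trick the real
# template scripts use for their configuration files.
cat <<EOF > "$rootfs/etc/inittab"
::sysinit:/etc/init.d/rcS
::askfirst:/bin/sh
EOF
```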
The man lxc.conf command goes into detail on what options can be put in this file, but a few key examples will be helpful:
- lxc.cgroup.cpu.shares = 1234: Sets the share of CPU that the container has.
- lxc.utsname = linux-voice: Sets the hostname of the container.
- lxc.mount.entry = /lib /home/jon/containers/busybox/lib: Specifies directories on the host filesystem that should be mounted in the container.
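Putting those options together, a container's config file might contain something like the following sketch (the values, and the full fstab-style fields on the mount entry, are illustrative):

```
lxc.utsname = linux-voice
lxc.cgroup.cpu.shares = 1234
lxc.mount.entry = /lib /home/jon/containers/busybox/lib none ro,bind 0 0
```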
This configuration file means you can apply the existing templates in quiteflexible ways, but if you really want to create a custom container, you're goingto have to set to work creating your own template script.
As the LXC man page says, creating a system container is paradoxically easier than creating an application container. In the latter case, you have to start by figuring out which resources you want to isolate from the rest of the system, and then figure out how to populate the appropriate parts of the filesystem. In the former case, you simply isolate everything, which is much simpler.
Once you've created your container with lxc-create and modified the config file as you see fit, you can start it with the lxc-start command, use lxc-console to get a console in it, and shut it down with lxc-stop.
While cgroups and namespaces have reached a degree of maturity in Linux, the user experience still has some room for improvement. If you found the lxc- commands tricky to use, you might want to install libvirt-sandbox, which will provide a set of scripts and extensions for using LXC through the familiar libvirt tools.