How to install and use LXC (LinuX Container)

I have already discussed Linux containers in my previous post, so let’s get started with one of the popular Linux container tools, “LXC”.
Installation is pretty straightforward. Just run:

[apt update]                # to update your Ubuntu package lists
[apt upgrade]               # to upgrade the installed packages
[apt install lxc lxc1]      # to install the LXC packages; this also installs all required dependencies and configures a bridged network for the containers

LXC can run containers in two modes, privileged and unprivileged. Unprivileged mode is more secure and is therefore recommended. Users in the lxd group can run lxc commands; to check your group membership, run [groups USER_NAME].
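For example, assuming your user is not yet in that group, something like this should do it (USER_NAME is a placeholder):

[usermod -aG lxd USER_NAME]    # add the user to the lxd group
[groups USER_NAME]             # verify; log out and back in for the change to take effect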

To check whether everything is configured to run containers:
root@ayush:~# lxc-checkconfig
— Namespaces —
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Warning: newuidmap is not setuid-root
Warning: newgidmap is not setuid-root
Network namespace: enabled

— Control groups —
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

— Misc —
Veth pair device: enabled
Macvlan: enabled
… (remaining output omitted)

Everything seems to be configured as expected.
The default configuration of LXC:

root@ayush:/etc/lxc# cat default.conf
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx

So by default lxc.network.link is set to “lxcbr0”, which is created at installation time. If you want to use another network, change the following in /etc/default/lxc-net:

USE_LXC_BRIDGE="false"            # to use an existing bridge instead of the default
or USE_LXC_BRIDGE="macvlan"       # to use your host’s NIC directly (macvlan)
lxc.network.link = lxcbr0         # specify here the network/bridge you have created

After changing the above, update /etc/lxc/default.conf accordingly.
Additional networks can be created and configured in the /etc/network/interfaces file and used by the containers. Rebooting the server, or simply restarting the system networking, will bring up the newly created network bridge.
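As a minimal sketch, assuming the host NIC is enp2s0, the new bridge is to be called br0 and the bridge-utils package is installed (all of these are assumptions, adjust them to your setup), the stanza in /etc/network/interfaces could look like:

auto br0
iface br0 inet dhcp
    bridge_ports enp2s0     # enslave the host NIC to the bridge
    bridge_stp off
    bridge_fd 0

Then point lxc.network.link to br0 in /etc/lxc/default.conf.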

By default LXC has some pre-configured templates, which can be used to create new containers.
root@ayush:/usr/share/lxc/templates# ls
lxc-archlinux    lxc-centos    lxc-debian      lxc-fedora    lxc-oracle    lxc-sshd    lxc-ubuntu-cloud
lxc-download     lxc-gentoo    lxc-opensuse    lxc-ubuntu

Creating a container using the available templates:
[lxc-create -t download -n container1 -- --dist ubuntu --release xenial --arch amd64]
This creates a container named container1 running Ubuntu Xenial (Ubuntu 16.04, 64-bit).
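If you prefer the classic LXC tools over the LXD client used in the rest of this post, the container created above can be started and entered roughly like this (a sketch, not the only way):

[lxc-start -n container1 -d]    # start the container in the background
[lxc-ls -f]                     # list containers with their state and IPs
[lxc-attach -n container1]      # get a root shell inside the container
[lxc-stop -n container1]        # stop it again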
Remote image servers can also be used to pull images and create containers.
To list all available image repositories:
root@ayush:~# lxc remote list    [output is suppressed]
| NAME            | URL                                       | PROTOCOL      |
| images          | https://images.linuxcontainers.org        | simplestreams |
| local (default) | unix://                                   | lxd           |
| ubuntu          | https://cloud-images.ubuntu.com/releases  | simplestreams |
| ubuntu-daily    | https://cloud-images.ubuntu.com/daily     | simplestreams |

By default the “local” remote is used to create containers; to check, run [lxc remote get-default]. For this, LXD must be running.
You can also add other remote servers, for example:
[lxc remote add custom_server server_addr]
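For example, to add a hypothetical LXD server reachable at 192.168.1.50 (both the name and the address are placeholders):

[lxc remote add myserver 192.168.1.50]    # you will be asked to accept the server certificate and enter its password
[lxc remote list]                         # the new remote should now appear here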

To list images from images repository:
[lxc image list images:]
To filter ubuntu images from images repo:
[lxc image list images: ubuntu]
Further filtering release and architecture:
[lxc image list images: ubuntu/yakkety/amd64]    # lists Ubuntu 16.10 64-bit images
To start a container named container1 using above image:
[lxc launch images:ubuntu/yakkety/amd64 container1]
To list available images locally:
[lxc image list]
To delete an image:
[lxc image delete image_name]
To give an image a friendlier name (an alias) using its fingerprint:
[lxc image alias list]
[lxc image alias create alias finger_print]
Further containers can be created from already downloaded images:
[lxc launch alias container_name]
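Putting the image commands together, a typical flow could look like this (the alias, FINGERPRINT and container name are placeholders for whatever [lxc image list] shows on your system):

root@ayush:~# lxc image list                                  # note the fingerprint of the downloaded image
root@ayush:~# lxc image alias create my-xenial FINGERPRINT
root@ayush:~# lxc launch my-xenial mycontainer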

To list available network interfaces for the containers:
root@ayush:~# lxc network list
| NAME | TYPE | MANAGED | USED BY |
———————————————————————
| docker0 | bridge | NO | 0 |
| enp2s0 | physical | NO | 1 |
| lxcbr0 | bridge | NO | 1 |
| wlp3s0 | physical | NO | 0 |

In my case container1 is using lxcbr0 and container2 is using enp2s0.
root@ayush:~# lxc list
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
| container1 | RUNNING |  |  | PERSISTENT | 0 |
| container2 | RUNNING |  |  | PERSISTENT | 0 |

The containers have not obtained IP addresses yet, but they can still be accessed through the local UNIX socket.
The containers are now ready to use. Any command can be run inside a container, and a container can also be used as a sandbox: you can test malicious scripts inside it without affecting the real host.
To execute commands inside container:
[lxc exec container_name -- command_to_execute]
Let’s configure the network of container1:
root@ayush:~# lxc exec container1 -- dhclient
root@ayush:~# lxc exec container1 -- ifconfig eth2 | grep -w inet
         inet addr:10.0.3.41  Bcast:10.0.3.255  Mask:255.255.255.0
container1 has now got an IP from the DHCP server.

Let’s configure a static IP for container2.
First bring up the eth0 interface, since its state was down:
root@ayush:~# lxc exec container2 -- ifconfig eth0 up
Assign a static IP:
root@ayush:~# lxc exec container2 -- ip addr add 10.10.30.249/24 dev eth0
Add a default route:
root@ayush:~# lxc exec container2 -- ip route add default via 10.10.30.1 dev eth0
Use Google's public DNS (wrapped in sh -c so the redirection happens inside the container, not on the host):
root@ayush:~# lxc exec container2 -- sh -c 'echo nameserver 8.8.8.8 >> /etc/resolv.conf'
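Note that these settings are lost when container2 restarts. To make them persistent, one option is to write them into the container's interface configuration; a sketch, assuming an ifupdown-based guest such as Ubuntu 16.04 (the file name and layout may differ on your image):

root@ayush:~# lxc exec container2 -- sh -c 'cat > /etc/network/interfaces.d/eth0.cfg <<EOF
auto eth0
iface eth0 inet static
    address 10.10.30.249
    netmask 255.255.255.0
    gateway 10.10.30.1
    dns-nameservers 8.8.8.8
EOF'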

To list all the containers:
root@ayush:~# lxc list
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
| container1 | RUNNING | 10.0.3.41 (eth2) | | PERSISTENT | 0 |
| container2 | RUNNING | 10.10.30.249 (eth0) | | PERSISTENT | 0 |

To get into container2:
root@ayush:~# lxc shell container2
root@container2:~# hostname
container2
root@container2:~# netstat -nltp | grep sshd
tcp    0  0  0.0.0.0:22  0.0.0.0:*   LISTEN  329/sshd
tcp6  0  0  :::22           :::*            LISTEN  329/sshd  

So now I can access this container using SSH as SSHD is listening on port 22.
To get information about a container:
[lxc info container_name]
To push a file from host machine to container:

root@ayush:/tmp# touch a
root@ayush:/tmp# lxc file push a container2/tmp/
root@ayush:/tmp# lxc exec container2 -- ls /tmp/a
/tmp/a

Similarly to pull a file from container to host:
[lxc file pull container2/tmp/a /tmp/]

Though that is the proper way of transferring files between the container and the host, it can also be done directly through the container's rootfs on the host:

root@ayush:/var/lib/lxd/containers/container2/rootfs/root# touch test
root@container2:~# ls
test

Containers can be cloned. To do so:
root@ayush:~# lxc stop container1                      # stop container1
root@ayush:~# lxc copy container1 container3           # create container3 as a copy of container1
root@ayush:~# lxc list                                 # list the available containers
root@ayush:~# lxc delete container3                    # delete container3

Snapshots are another feature of LXC:
[lxc snapshot container1 snap0]    # take a snapshot of container1 and name it snap0
And to restore from the taken snapshot:
[lxc restore container1 snap0]
Example:

root@ayush:~# lxc exec container2 -- ls /root
test
root@ayush:~# lxc snapshot container2 snap1
root@ayush:~# lxc exec container2 -- rm /root/test
root@ayush:~# lxc exec container2 -- ls /root
root@ayush:~# lxc restore container2 snap1
root@ayush:~# lxc exec container2 -- ls /root
test
So we restored container2 successfully using snapshot snap1.
You can find snapshots under /var/lib/lxd/snapshots/
root@ayush:/var/lib/lxd/snapshots# cd container2

root@ayush:/var/lib/lxd/snapshots/container2# ls
snap1

How to create new images from existing containers:
Suppose you have created a container running Ubuntu, installed a few applications in it and configured them. You can now create an image containing all those applications and their configuration. Before doing so, make sure you have cleared any credentials from the container.
[lxc publish CONTAINER --alias ALIAS]    # replace the names as per your needs
Now that image can be listed with [lxc image list] and used to create new containers.
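For example, to publish container2 as a new image and launch a fresh container from it (the alias and the new container name are placeholders):

root@ayush:~# lxc stop container2
root@ayush:~# lxc publish container2 --alias my-configured-ubuntu
root@ayush:~# lxc image list                                          # the new image shows up here
root@ayush:~# lxc launch my-configured-ubuntu container4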

LXC containers can be configured to start automatically when the LXC/LXD service starts, which is basically when the host boots up.

root@ayush:~# lxc config set container2 boot.autostart 1
If there are many containers, they can be scheduled to boot in a particular order.
root@ayush:~# lxc config set container1 boot.autostart.order 10
root@ayush:~# lxc config set container2 boot.autostart.order 8
This means container1 will boot first and then container2, since the higher number starts first.
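To verify what has been set, the container configuration can be inspected with the standard config queries:

root@ayush:~# lxc config show container2
root@ayush:~# lxc config get container2 boot.autostart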

Using LXC you can limit the CPU resources available to containers.
Suppose your system has 4 CPU cores. My laptop has 2 physical cores, which hyper-threading turns into 4 logical cores, so I effectively have 4 cores for my system processes.
To limit container2 to any 2 cores out of the 4:
root@ayush:~# lxc config set container2 limits.cpu 2
To pin container2 to only cores 2, 4, 7, 8 and 9 (out of, say, 16 cores):
root@ayush:~# lxc config set container2 limits.cpu 2,4,7-9
CPU allowance and CPU priority can also be set using LXC.
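As a rough sketch, allowance caps how much CPU time the container may use, while priority only matters when containers compete for the same cores (the values below are just examples):

root@ayush:~# lxc config set container2 limits.cpu.allowance 50%            # soft limit: half of the CPU time
root@ayush:~# lxc config set container2 limits.cpu.allowance 25ms/100ms     # alternative: a hard quota per period
root@ayush:~# lxc config set container2 limits.cpu.priority 5               # 0-10, used only under contention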

Try LXC, and do some research on your own to become productive with it.

Hope you guys liked it.
Thanks
Ayush
