
A thorough Introduction Guide To Docker Containers


Let me start with a big promise. You will absolutely LOVE this article today. It’s going to be long, detailed and highly useful. Think GRUB, GRUB2. The same thing here. Only we will tackle Docker, a nice distribution platform that wraps the Linux Containers (LXC) technology in a simple, convenient way.

I will show you how to get started, and then we will create our own containers with SSH and Apache, learn how to use Dockerfiles, expose service ports, and solve an immense number of little bugs and problems that normally never get addressed in public forums. Please, without further ado, follow me.


Table of Contents

  1. Introduction
  2. Docker implementation
  3. Getting started
  4. Docker commands
  5. Pull image
  6. Start a Docker container
  7. Install Apache & SSH
    1. Start service
    2. Apache service
    3. SSH service
  8. Check if Web server is up
    1. Expose incoming ports
    2. Check IP address
    3. Testing new configuration
  9. Check if SSH works
    1. Wait, what is the root password?
  10. Commit image
  11. Dockerfile
    1. Build image
    2. Test image
  12. Alternative build
    1. COPY instruction
  13. Advantages of containers
  14. Problems you may encounter & troubleshooting
  15. Additional commands
    1. Differences between exec and attach
    2. Differences between start and run
    3. Differences between build and create
  16. This is just a beginning …
  17. More reading
  18. Conclusion

Introduction

I have given a brief overview of the technology in a Gizmo’s Freeware article sometime last year. Now, we are going to get serious about using Docker. First, it is important to remember that this framework allows you to use LXC in a convenient manner, without having to worry about all the little details. It is the next step in this world, the same way OpenStack is the next evolutionary step in the virtualization world. Let me give you some history and analogies.

Virtualization began with software that lets you abstractize your hardware. Then, to make things speedier, virtualization programs began using hardware acceleration, and then you also got paravirtualization. In the end, hypervisors began popping up like mushrooms after rain, and it became somewhat difficult to provision and manage them all. This is the core reason for concepts like OpenStack, which hide different platforms under a unified API.

The containers began their way in a similar manner. First, we had the chroot, but processes running inside the jailed environment shared the same namespace and fought for the same resources. Then, we got the kexec system call, which let us boot into the context of another kernel without going through the BIOS. Then, control groups came about, allowing us to partition system resources like CPU, memory and others into subgroups, thus allowing better control, hence the name, of processes running on the system.

Later on, the Linux kernel began offering a full isolation of resources, using cgroups as the basic partitioning mechanism. Technically, this is a system-level virtualization technology, allowing you to run multiple instances of the running kernel on top of the control host inside self-contained environments, with the added bonus of very little performance penalty and overhead.

Several competing technologies tried to offer similar solutions, like OpenVZ, but the community eventually narrowed down its focus to the native enablement inside the mainline kernel, and this seems to be the future direction. Still, LXC remains somewhat difficult to use, as a fair amount of technical knowledge and scripting is required to get the containers running.

This is where Docker comes into place. It tries to take away the gritty pieces and offer a simple method of spawning new container instances without worrying about the infrastructure backend. Well, almost. But the level of difficulty is much less.

Another strong advantage of Docker is widespread community acceptance, as well as the emphasis on integration with cloud services. Here we go full buzzword, and this means naming some of the big players like AWS, Hadoop, Azure, Jenkins and others. Then we can also talk about Platform as a Service (PaaS), and you can imagine how much money and focus this is going to get in the coming years. The technological landscape is huge and confusing, and it’s definitely going to keep on changing and evolving, with more and more concepts and wrapper technologies coming to life and building on top of Docker.

But we want to focus on the technological side. Once we master the basics, we will slowly expand and begin utilizing the strong integration capabilities, the flexibility of the solution, and work on making our cloud ecosystem expertise varied, automated and just pure rad. That won’t happen right now, but I want to help you navigate the first few miles, or should we say kilometers, of the muddy startup waters, so you can begin using Docker in a sensible, efficient way. Since this is a young technology, it’s the Wild West out there, and most of the online documentation, tips, tutorials and whatnot are outdated copy & paste versions that do not help anyone, and are largely incomplete. I want to fix that today.

Docker implementation

A bit more boring stuff before we do some cool things. Anyhow, Docker is mostly about LXC, but not just. It’s been designed to be extensible, and it can also interface with libvirt and systemd. In a way, this makes it almost like a hyper-hypervisor, as there’s potential for future growth, and when additional modules are added, it could effectively replace classic hypervisors like Xen or KVM or anything using libvirt and friends.

Docker diagram

This be a public domain image, if you wondered.

Getting started

We will demonstrate using CentOS 7. Not Ubuntu. Most of the online stuff focuses on Ubuntu, but I want to show you how it’s done using an as-near-as-enterprise flavor of Linux as possible, because if you’re going to be using Docker, it’s probably going to be in a business-like environment. The first thing to do is install Docker:

yum install docker-io

Once the software is installed, you can start using it. However, you may encounter the following two issues the first time you attempt to run docker commands:

docker <any one command>
FATA[0000] Get http:///var/run/docker.sock/v1.18/images/json: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?

And the other error is:

docker <any one command>
FATA[0000] Get http:///var/run/docker.sock/v1.18/containers/json: dial unix /var/run/docker.sock: permission denied. Are you trying to connect to a TLS-enabled daemon without TLS?

The reason is, you need to start the Docker service first. Moreover, you must run the technology as root, because Docker needs access to some rather sensitive pieces of the system, and to interact with the kernel. That’s how it works.

systemctl start docker

Now we can go crazy and begin using Docker.

Docker commands

The basic thing is to run docker help to get the available list of commands. I will not go through all the options. We will learn more about them as we go along. In general, if you’re ever in doubt, you should consult the pretty decent online documentation. The complete CLI reference also kicks ass. And then, there’s also an excellent cheat sheet on GitHub. But our first mission will be to download a new Docker image and then run our first instance.

Pull image

There are many available images. We want to practice with CentOS. This is a good starting point. An official repository is available, and it lists all the supported images and tags. Indeed, at this point, we need to understand how Docker images are labeled.

The naming convention is repository:tag, for example centos:latest. In other words, we want the latest CentOS image. But the required image might just as well be centos:6.6. All right, let’s do it.
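For instance, a minimal example of pulling the image that the rest of this guide uses:

docker pull centos:centos7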

Pulling image

Now let’s list the images by running the docker images command:

Images
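The listing looks roughly like the following; the image IDs, dates and sizes here are illustrative only:

# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
centos              centos7             fd44297e2ddb        2 weeks ago         215.7 MB
centos              latest              fd44297e2ddb        2 weeks ago         215.7 MB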

Start a Docker container

As we’ve seen in my original tutorial, the simplest example is to run a shell:

docker run -ti centos:centos7 /bin/bash

So what do we have here? We are running a new container instance with its own TTY (-t) and STDIN (-i), from the CentOS 7 image, with a BASH shell. Within a few seconds, you will get a new shell inside the container. Now, it’s a very basic, very stripped-down operating system, but you can start building things inside it.

Run container

Container running top

Install Apache & SSH

Let’s set up a Web server, which will also have SSH access. To this end, we will need to do some rather basic installations. Grab Apache (httpd) and SSHD (openssh-server), and configure them. This has nothing to do with Docker, per se, but it’s a useful exercise.

Now, some of you may clamor: wait, you don’t need SSH inside a container, it’s a security risk and whatnot. Well, maybe, yes and no, depending on what you need and what you intend to use the container for. But let’s leave the security considerations aside. The purpose of the exercise is to learn how to set up and run ANY service.
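Inside the running container, the installation itself is a plain yum transaction; a minimal sketch, using the package names as they exist on CentOS 7:

yum install -y httpd openssh-server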

Start service

You might want to start Apache using an init script or a systemd command. This will not quite work. CentOS 7 uses systemd on the host, but more importantly, the container does not run its own instance of systemd. If you try, the commands will fail.

systemctl start httpd
Failed to get D-Bus connection: No connection to service manager.

There are hacks around this problem, and we will learn about some of these in a future tutorial. But in general, given the lightweight and simple nature of containers, you do not really need a fully fledged startup service to run your processes. This does add some complexity.

Apache service

To run Apache (HTTPD), just execute /usr/sbin/httpd – or an equivalent command in your distro. The service should start, most likely with a warning that you have not configured your ServerName directive in httpd.conf. We have learned how to do this in my rather extensive Apache guide.

/usr/sbin/httpd
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.4. Set the 'ServerName' directive globally to suppress this message

SSH service

With SSHD, run /usr/sbin/sshd.

/usr/sbin/sshd -f /etc/ssh/sshd_config
Could not load host key: /etc/ssh/ssh_host_rsa_key
Could not load host key: /etc/ssh/ssh_host_dsa_key
Could not load host key: /etc/ssh/ssh_host_ecdsa_key
Could not load host key: /etc/ssh/ssh_host_ed25519_key

You will also fail, because you won’t have all the keys. Normally, startup scripts take care of this, so you will need to run the ssh-keygen command once before the service starts correctly. Either one of the two commands will work:

/usr/bin/ssh-keygen -t rsa -f <path to file>

/usr/bin/ssh-keygen -A
ssh-keygen: generating new host keys: RSA1 RSA DSA ECDSA ED25519

Check if Web server is up

Now, inside the container, we can see that Apache is indeed running.

ps -ef|grep apache
apache      87    86  0 10:47 ?        00:00:00 /usr/sbin/httpd
apache      88    86  0 10:47 ?        00:00:00 /usr/sbin/httpd
apache      89    86  0 10:47 ?        00:00:00 /usr/sbin/httpd
apache      90    86  0 10:47 ?        00:00:00 /usr/sbin/httpd
apache      91    86  0 10:47 ?        00:00:00 /usr/sbin/httpd

But what if we want to check external connectivity? At this point, we have a couple of problems on our hands. One, we have not set up any open ports, so to speak. Two, we do not know what the IP address of our container is. Now, if you try to run ifconfig inside the BASH shell, you won’t get anywhere, because the necessary package containing the basic networking commands is not installed. Good, because it makes our container slim and secure.

Expose incoming ports

Like with any Web server, we will need to allow incoming connections. We will use the default port 80. This is no different than port forwarding in your router, allowing firewall policies and whatnot. With Docker, there are several ways you can achieve the desired result.

When starting a new container with the run command, you can use the -p option to specify which ports to open. You can choose a single port or a range of ports, and you can also map both the host port (hostPort) and container port (containerPort). For instance:

  • -p 80 will expose container port 80. It will be automatically mapped to a random port on the host. We will learn later on how to identify the correct port.
  • -p 80:80 will map the container port to the host port 80. This means you do not need to know the internal IP address of the container. There is an element of internal NAT involved, which goes through the Docker virtual interface. We will discuss this soon. Moreover, if you use this method, only a single container will be able to bind to port 80. If you want to use multiple Web servers with different IP addresses, you will have to set them up each on a different port.

docker run -ti -p 22:22 -p 80:80 image-1:latest
FATA[0000] Error response from daemon: Cannot start container 64bd520e2d95a699156f5d40331d1aba972039c3c201a97268d61c6ed17e1619: Bind for 0.0.0.0:80 failed: port is already allocated

There are many additional considerations. IP forwarding, bridged networks, public and private networks, subnet ranges, firewall rules, load balancing, and more. At the moment, we do not need to worry about these.

There is also an additional method for exposing ports, but we will discuss that later on, when we touch on the topic of Dockerfiles, which are templates for building new images. For now, we need to remember to run our images with the -p option.

Check IP address

If you want to leave your host ports free, then you can omit the hostPort piece. In that case, you can connect to the container directly, using its IP address and Web server port. To do that, we need to figure out the container details:

docker inspect <container name or ID>

This will give a very long list of details, much like the KVM XML config, except this one is written in JSON, which is another modern and ugly format for data. Readable but extremely ugly.

docker inspect distracted_euclid
[{
    "AppArmorProfile": "",
    "Args": [],
    "Config": {
        "AttachStderr": true,
        "AttachStdin": true,
        "AttachStdout": true,
        "Cmd": [
            "/bin/bash"
        ],
        "CpuShares": 0,
        "Cpuset": "",
        "Domainname": "",
        "Entrypoint": null,
        "Env": [

        "ExposedPorts": {
            "80/tcp": {}
        },
        "Hostname": "43b179c5aec7",
        "Image": "centos:centos7",
        "Labels": {},
        "MacAddress": "",

We can narrow it down to just the IP address.

docker inspect <container name or ID> | grep -i "ipaddr"
"IPAddress": "172.17.0.20",
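As a side note, docker inspect also accepts a Go-template --format option, which lets you pull out a single field without grep; a minimal sketch, using the field path that matches the Docker 1.x output shown above:

docker inspect --format '{{ .NetworkSettings.IPAddress }}' distracted_euclid
172.17.0.20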

Testing new configuration

Let’s start fresh. Launch a new instance, set up Apache, start it. Open a Web browser and test. If it works, then you have properly configured your Web server. Exactly what we wanted.

docker run -it -p 80:80 centos:centos7 /bin/bash

If we check the running container, we can see the port mapping – the output is split over multiple lines for brevity, so please excuse that. Normally, the all-uppercase titles will show as the row header, and then, you will get all the rest printed below, one container per line.

# docker ps
CONTAINER ID        IMAGE               COMMAND
43b179c5aec7        centos:centos7      “/bin/bash”

CREATED             STATUS              PORTS
2 hours ago         Up 2 hours          0.0.0.0:80->80/tcp

NAMES               distracted_euclid

And in the browser, we get:

Web server running

Optional: Now, the internal IP address range will only be accessible on the host. If you want to make it accessible from other machines, you will need your NAT and IP forwarding. And if you want to use names, then you will need to properly configure the /etc/hosts as well as DNS. For containers, this can be done using the --add-host="host:IP" option when running a new instance.
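For example, a minimal sketch of the --add-host option; the hostname and IP address here are purely illustrative:

docker run -ti --add-host="webhost:192.168.1.20" centos:centos7 /bin/bash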

Another note: Remember that Docker has its own internal networking, much like VirtualBox and KVM, as we’ve seen in my other tutorials. It’s a fairly extensive /16 network, so you have quite a lot of freedom. On the host:

# /sbin/ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 172.17.42.1  netmask 255.255.0.0  broadcast 0.0.0.0
inet6 fe80::5484:7aff:fefe:9799  prefixlen 64  scopeid 0x20<link>
ether 56:84:7a:fe:97:99  txqueuelen 0  (Ethernet)
RX packets 6199  bytes 333408 (325.5 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 11037  bytes 32736299 (31.2 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Check if SSH works

We need to do the same exercise with SSH. Again, this means exposing port 22, and we have several options available. To make it more interesting, let’s try with a random port assignment:

docker run -ti -p 22 -p 80 centos:centos7 /bin/bash

And if we check with docker ps, specifically for ports:

0.0.0.0:49176->22/tcp, 0.0.0.0:49177->80/tcp   boring_mcclintock

This means you can connect to the docker0 IP address, on the ports specified above in the docker ps command output, and this is equivalent to connecting to the container IP directly, on its service port. This can be useful, because you do not need to worry about the internal IP address that your container uses, and it can simplify forwarding. Now, let’s try to connect. We can use the host port, or we can use the container IP directly.

ssh 172.17.42.1 -p 49176

Either way, we will get what we need, for instance:

ssh 172.17.0.5
The authenticity of host '172.17.0.5 (172.17.0.5)' can't be established.
ECDSA key fingerprint is 00:4b:de:91:60:e5:22:cc:f7:89:01:19:3e:61:cb:ea.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.17.0.5' (ECDSA) to the list of known hosts.
root@172.17.0.5's password:

Wait, what is the root password?

We will fail because we do not have the root password. So what do we do now? Again, we have several options. First, try to change the root password inside the container using the passwd command. But this won’t work, because the passwd utility is not installed. We can then grab the necessary RPM and set it up inside the container. On the host, check the dependencies:

rpm -q --whatprovides /etc/passwd
setup-2.8.71-5.el7.noarch

But this is a security vulnerability. We want our containers to be lean. So we can just copy the password hash from /etc/shadow on the host into the container. Later, we will learn about a more streamlined way of doing it.
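A minimal sketch of one quick way to set the password, assuming the shadow-utils package (which provides chpasswd) is present in the base image; the password itself is just a placeholder. Run this inside the container, and the SSH login above should then accept that password:

echo "root:MySecretPass" | chpasswd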

Another thing that strikes quite clearly is that we are repeating all our actions. This is not efficient, and this is why we want to preserve changes we have done to our container. The next section handles that.

SSH success

Commit image

After you’ve made changes to the container, you may want to commit it. In other words, when starting a new container later on, you will not need to repeat all the steps from scratch, you will be able to reuse your existing work and save time and bandwidth. You can commit an image based on its ID or its alias:

docker commit <container name or ID> <new image>

For example, we get the following:

docker commit 43b179c5aec7 myapache3
1ee373ea750434354faeb1cb70b0177b463c51c96c9816dcdf5562b4730dac54

Commit image

Check the list of images again:

Image committed

Dockerfile

A more streamlined way of creating your images is to use Dockerfiles. In a way, it’s like using a Makefile for compilation, only in Docker format. Or an RPM spec file, if you will. Basically, in any one "build" directory, create a Dockerfile. We will learn what things we can put inside one, and why we want it for our Apache + SSH exercise. Then, we will build a new image from it. We can combine it with our committed images to preserve changes already done inside the container, like the installation of software, to make it faster and save network utilization.

Before we go any further, let’s take a look at a Dockerfile that we will be using for our exercise. At the moment, the commands may not make much sense, but they soon will.

FROM myhttptest2:latest

EXPOSE 22

CMD ["/usr/sbin/sshd", "-D"]

EXPOSE 80

RUN mkdir -p /run/httpd
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]

What do we have here?

  • The FROM directive tells us what repository:tag to use as the baseline. In our case, it’s one of the committed images that already contains the httpd and sshd binaries, SSH keys, and a bit more.
  • EXPOSE 22 – This line exposes port 22 inside the container. We can map it further using the -p option at runtime. The same is true for EXPOSE 80, which is relevant for the Web server.
  • CMD ["/usr/sbin/sshd", "-D"] – This instruction runs an executable, with optional arguments. It is as simple as that.
  • RUN mkdir -p /run/httpd – This instruction runs a command in a new layer on top of the base image – and COMMITS the results. This is very important to remember, as we will soon discuss what happens if you don’t use the RUN mkdir thingie with Apache.
  • CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"] – We run the server, in the foreground. The last bit is optional, but for the time being, you can start Apache this way. Good enough. Note that when a Dockerfile contains more than one CMD instruction, only the last one takes effect, so here it effectively overrides the sshd CMD above.

As you can see, Dockerfiles aren’t that complex or difficult to write, but they are highly useful. You can pretty much add anything you want. These templates form a basis for automation, and with conditional logic, you can create all sorts of scenarios and spawn containers that match your requirements.

Build image

Once you have a Dockerfile in place, it’s time to build a new image. Dockerfiles must follow a strict convention, just like Makefiles. It’s best to keep different image builds in separate sub-directories. For example:

docker build -t test5 .
Sending build context to Docker daemon 41.47 kB
Sending build context to Docker daemon
Step 0 : FROM myapache4:latest
 ---> 7505c70235e6
Step 1 : EXPOSE 22 80
 ---> Using cache
 ---> 58f11217c3e3
Step 2 : CMD /usr/sbin/sshd -D
 ---> Using cache
 ---> 628c3d6b5399
Step 3 : RUN mkdir -p /run/httpd
 ---> Using cache
 ---> 5fc118f61a4d
Step 4 : CMD /usr/sbin/httpd -D FOREGROUND
 ---> Using cache
 ---> d892acd86198
Successfully built d892acd86198

The command builds a new image named test5 (-t) from the Dockerfile stored in the current directory (.). That’s all. Very simple and elegant.

Test image

Run a new container from the created image. If everything went smoothly, you should have both SSH connectivity, as well as a running Web server in place. Again, all the usual network related rules apply.
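As a sketch, assuming the image built above is called test5 and host port 8080 is free (the port choice is arbitrary), you could test the Web server part like this:

docker run -d -p 8080:80 test5
curl http://localhost:8080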

Running successfully, built from Dockerfile

Alternative build

Once you know how to do it on your own, you can try one of the official Apache builds. Indeed, the Docker repository contains a lot of good stuff, so you should definitely invest time checking available templates. For Apache, you only need the following in your Dockerfile – the second line is optional.

FROM httpd:2.4
COPY ./public-html/ /usr/local/apache2/htdocs/

COPY instruction

What do we have above? Basically, in the Dockerfile, we declare which base image to use. Then, we have a COPY instruction, which will look for a public-html directory in the current folder and copy it into the container during the build. In the same manner, you can also copy your httpd.conf file. Depending on your distribution, the paths and filenames might differ. Finally, after building the image and running the container:

docker run -ti -p 22 -p 80 image-1:latest
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.17. Set the 'ServerName' directive globally to suppress this message
[Thu Apr 16 21:08:35.967670 2015] [mpm_event:notice] [pid 1:tid 140302870259584] AH00489: Apache/2.4.12 (Unix) configured -- resuming normal operations
[Thu Apr 16 21:08:35.976879 2015] [core:notice] [pid 1:tid 140302870259584] AH00094: Command line: 'httpd -D FOREGROUND'

Default HTTPD works
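If you also want to ship your own configuration, as mentioned above, a minimal sketch could look like this; my-httpd.conf is a hypothetical local file, and the destination path is the one used by the official httpd image:

FROM httpd:2.4
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf
COPY ./public-html/ /usr/local/apache2/htdocs/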

Advantages of containers

There are many good reasons why you want to use this technology. But let’s just briefly focus on what we gain by running these tiny, isolated instances. Sure, there’s a lot happening under the hood, in the kernel, but in general, the memory footprint of spawned containers is fairly small. In our case, the SSH + Apache containers use a tiny fraction of extra memory. Compare this to any virtualization technology.

Container memory

Low memory usage

Problems you may encounter & troubleshooting

Let’s go back to the Apache example, and now you will also learn why so many online tutorials sin the sin of copy & pasting information without checking, and why most of the advice out there is, unfortunately, not correct. It comes down to this question: what do you do if your Apache server seems to die within a second or two of launching the container? Indeed, if this happens, you want to step into the container and troubleshoot. To that end, you can use the docker exec command to attach a shell to the instance.

docker exec -ti boring_mcclintock /bin/bash

Then, it comes down to reading logs and trying to figure out what might have gone wrong. If your httpd.conf is configured correctly, you will have access and error logs under /var/log/httpd:

[auth_digest:error] [pid 25] (2)No such file or directory: AH01762: Failed to create shared memory segment on file /run/httpd/authdigest_shm.25

A typical problem is that you may be missing the /run/httpd directory. If it does not exist in your container, httpd will start and immediately die. Sounds simple, but few if any references mention this.
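The fix is therefore trivial; inside the container (or via a RUN instruction in your Dockerfile, as shown earlier), create the directory before starting Apache:

mkdir -p /run/httpd
/usr/sbin/httpd -D FOREGROUND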

While initially playing with containers, I did encounter this issue. Reading online, I found several suggestions, none of which really helped. But I do want to elaborate on them, and how you can make progress in your problem solving, even if intermediate steps aren’t really useful.

Suggestion 1: You must use -D FOREGROUND to run Apache, and you must also use ENTRYPOINT rather than CMD. The difference between the two instructions is very subtle. And it does not solve our problem in any way.

ENTRYPOINT ["/usr/sbin/httpd"]
CMD ["-D", "FOREGROUND"]

Suggestion 2: Use a separate startup script, which could work around any issues with the starting or restarting of the httpd service. In other words, the Dockerfile becomes something like this:


EXPOSE 80
COPY ./run-httpd.sh /run-httpd.sh
RUN chmod -v +x /run-httpd.sh
CMD ["/run-httpd.sh"]

And the contents of the run-httpd.sh script are along the lines of:

#!/bin/bash

rm -rf /run/httpd/*

exec /usr/sbin/apachectl -D FOREGROUND

Almost there. Remove any old leftover PID files, but these are normally not stored under /run/httpd. Instead, you will find them under /var/run/httpd. Moreover, we are not certain that this directory exists.

Finally, the idea is to work around any problems with the execution of a separate shell inside which the httpd process is spawned. While it does provide us with additional, useful lessons on how to manage the container, with COPY and RUN instructions, it’s not what we need to fix the issue.

Step 3 : EXPOSE 80
 ---> Using cache
 ---> 108785c8e507
Step 4 : COPY ./run-httpd.sh /run-httpd.sh
 ---> 582d795d59d4
Removing intermediate container 7ff5b58b40bf
Step 5 : RUN chmod -v +x /run-httpd.sh
 ---> Running in 56fadf4dd2d4
mode of '/run-httpd.sh' changed from 0644 (rw-r--r--) to 0755 (rwxr-xr-x)
 ---> 928640f680cf
Removing intermediate container 56fadf4dd2d4
Step 6 : CMD /run-httpd.sh
 ---> Running in f9c6b30795e2
 ---> b2dcc2818a27
Removing intermediate container f9c6b30795e2
Successfully built b2dcc2818a27

This won’t work, because passing arguments to httpd through apachectl is not supported in this build, plus we have already seen problems with startup scripts and utilities earlier, and we will work on fixing this in a separate tutorial.

docker run -ti -p 80 image-2:latest
Passing arguments to httpd using apachectl is no longer supported. You can only start/stop/restart httpd using this script. If you want to pass extra arguments to httpd, edit the /etc/sysconfig/httpd config file.

But it is useful to try these different things, to get the hang of it. Unfortunately, it also highlights the lack of maturity and the somewhat inadequate documentation for this technology out there.

Additional commands

There are many ways you can interact with your container. If you do not want to attach a new shell to a running instance, you can use a subset of docker commands directly against the container ID or name:

docker <command> <container name or ID>

For instance, to get the top output from the container:

docker top boring_stallman
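A few more examples in the same spirit; the container name is just the one from the example above:

docker logs boring_stallman    # print the container's console output
docker stop boring_stallman    # gracefully stop the container
docker rm boring_stallman      # remove the stopped container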

If you have too many images, some of which have just been used for testing, then you can remove them to free up some of your disk space. This can be done using the docker rmi command.

# docker rmi -f test7
Untagged: test7:latest
Deleted: d0505b88466a97b73d083434b2dd0e7b59b9a5e8d0438b1bf8c6c
Deleted: 5fc118f61bf856f6f3d90e0e71076b737fa7cc58cd56785ea7904
Deleted: 628c3d6b53992521c9c1fdda4148693347c3d10b1d130f7e091e7
Deleted: 58f11217c3e31206b4e41d07100a797cd4d17e4569b0fdb8b7a18
Deleted: 7505c70235e638c54028ea5b63eba2b691de6bee67c2cb5e2861a

Then, you can also run your containers in the background. Using the -d flag will do exactly that, and you will get the shell prompt back. This is also useful because, if you do not mask signals and you accidentally send a break in your shell, you might kill the container when it’s running in the foreground.

docker run -d -ti -p 80 image-3:latest

You can also check events, examine changes inside a container’s filesystem as well as check history, so you basically have a version control in place, export or import tarred images to and from remote locations, including over the Web, and more.
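A few hedged examples of these, using names from earlier in this guide; the output will of course depend on your own containers and images:

docker diff boring_stallman               # list files changed inside the container
docker history image-3:latest             # show the layers that make up an image
docker export boring_stallman > cont.tar  # export the container filesystem as a tarball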

Differences between exec and attach

If you read through the documentation, you will notice you can connect to a running container using either the exec or the attach command. So what’s the difference, you may ask? If we look at the official documentation, then:

The docker exec command runs a new command in a running container. The command started using docker exec only runs while the container’s primary process (PID 1) is running, and it is not restarted if the container is restarted.

On the other hand, attach gives you the following:

The docker attach command allows you to attach to a running container using the container’s ID or name, either to view its ongoing output or to control it interactively. You can attach to the same contained process multiple times simultaneously, screen sharing style, or quickly view the progress of your daemonized process. You can detach from the container (and leave it running) with CTRL-p CTRL-q (for a quiet exit) or CTRL-c which will send a SIGKILL to the container. When you are attached to a container, and exit its main process, the process’s exit code will be returned to the client.

In other words, with attach, you will get a shell, and be able to do whatever you need. With exec, you can issue commands that do not require any interaction, but if you use a shell in combination with exec, you will achieve the same result as if you had used attach.
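To make the difference concrete, a small sketch using the container from the earlier examples:

docker attach boring_mcclintock                         # join the container's primary process (PID 1)
docker exec -ti boring_mcclintock /bin/bash             # open an additional, independent shell
docker exec boring_mcclintock cat /etc/redhat-release   # run a one-off, non-interactive command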

Differences between start and run

Start is used to resume the execution of a stopped container. It is not used to start a fresh instance. For that, you have the run command. The choice of words could have been better.
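In other words, roughly like this; the container name is the one Docker assigned in the earlier examples:

docker stop distracted_euclid             # stop a running container
docker start distracted_euclid            # resume that same, existing container
docker run -ti centos:centos7 /bin/bash   # spawn a completely new container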

Differences between build and create

The first command is used to create a new image from a Dockerfile. On the other hand, the latter is used to create a new container using command line options and arguments. Create lets you specify container settings, too, like network configurations, resource limitations and other settings, which affect the container from the outside, whereas the changes implemented by the build command will be reflected inside it, once you start an instance. And by start, I mean run. Get it?
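A small sketch of the distinction; the image and container names here are just illustrative:

docker build -t test5 .                          # build an image from the Dockerfile in .
docker create -ti --name webtest -p 80:80 test5  # create a container from it, but do not start it
docker start -ai webtest                         # start the created container, attached and interactive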

This is just a beginning …

There are a million more things we can do: using systemd enabled containers, policies, security, resource constraints, proxying, signals, other networking and storage options including the super-critical question of how to mount data volumes inside containers so that data does not get destroyed when containers die, additional pure LXC commands, and more. We’ve barely scratched the surface. But now, we know what to do. And we’ll get there. Slowly but surely.

More reading

I recommend you allocate a few hours and then spend some honest time reading all of the below, in detail. Then practice. This is the only way you will really fully understand and embrace the concepts.

Dockerizing an SSH Daemon Service

Differences between save and export in Docker

Docker Explained: Using Dockerfiles to Automate Building of Images

Conclusion

We’re done with this tutorial for today. Hopefully, you’ve found it useful. In a nutshell, it does explain quite a few things, including how to get started with Docker, how to pull new images, run basic containers, add services like SSH and Apache, commit changes to an image, expose incoming ports, build new images with Dockerfiles, lots of troubleshooting of problems, additional commands, and more. Eventful and colorful, I’d dare say.

In the future, we will expand significantly on what we learned here, and focus on various helper technologies like supervisord for instance, we will learn how to mount filesystems, work on administration and orchestration, and many other cool things. Docker is a very nice concept, and if used correctly, it can make your virtual world easier and more elegant. The initial few steps are rough, but with some luck, this guide will have provided you with the right dose of karma to get happily and confidently underway. Ping me if you have any requests or desires. Technology related, of course. We’re done.

How To: Private Docker Registry

Docker is a great tool for deploying your servers. While docker.io lets you upload your Docker creations to their registry for free, anything you upload is also public. This probably isn’t what you want for a non-open-source project. This guide will show you how to set up and secure your own private Docker registry.

Docker Concepts

For an overview of how to use Docker, you can take a look at this excellent Docker Cheat Sheet.

Docker at its core is a way to separate an application and the dependencies needed to run it from the operating system itself. To make this possible Docker uses containers and images. A Docker image is basically a template for a filesystem. When you run a Docker image with the docker run command, an instance of this filesystem is made live, and runs on your system inside a Docker container. By default this container can’t touch the original image itself, or the filesystem of the host where Docker is running. It’s a self-contained environment.

Whatever changes you make in the container are preserved in that container itself, and don’t affect the original image. If you decide you want to keep those changes, then you can “commit” a container to a Docker image (via the docker commit command). This means you can then spawn new containers that start with the contents of your old container, without affecting the original container (or image). If you’re familiar with git then the workflow should seem quite similar: you can create new branches (images in Docker parlance) from any container. Running an image is a bit like doing a git checkout.

To continue the analogy, running a private Docker registry is like running a private Git repository for your Docker images.

Install Prerequisites

You should create a user with sudo access on the registry server (and on the clients when you get that far).

The Docker registry is a Python application, so to get it up and running we need to install the Python development utilities and a few libraries:

sudo apt-get -y install build-essential python-dev libevent-dev python-pip liblzma-dev

Install and Configure Docker Registry

To install the latest stable release of the Docker registry we’ll use Python’s package management utility pip:

sudo pip install docker-registry

Docker-registry requires a configuration file.

pip by default installs this config file in a rather obscure location, which can differ depending on how your system’s Python is installed. So, to find the path, we’ll attempt to run the registry and let it complain:

gunicorn --access-logfile - --debug -k gevent -b 0.0.0.0:5000 -w 1 docker_registry.wsgi:application   

Since the config file isn’t in the right place yet it will fail to start and spit out an error message that contains a FileNotFoundError that looks like this:

FileNotFoundError: Heads-up! File is missing: /usr/local/lib/python2.7/dist-packages/docker_registry/lib/../../config/config.yml

The registry includes a sample config file called config_sample.yml at the same path, so we can use the path it gave us to locate the sample file.

Copy the path from the error message (in this case /usr/local/lib/python2.7/dist-packages/docker_registry/lib/../../config/config.yml), and remove the config.yml portion so we can change to that directory:

cd /usr/local/lib/python2.7/dist-packages/docker_registry/lib/../../config/

Now copy the config_sample.yml file to config.yml:

sudo cp config_sample.yml config.yml

The default values in the sample config are fine, so no need to change anything there. Feel free to look through them. If you want to do something more complex like using external storage for your Docker data, this file is the place to set it up. That’s outside the scope of this tutorial though, so you’ll have to check the docker-registry documentation if you want to go that route.
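Purely as an illustration, a local storage setup in config.yml looks roughly like the snippet below; treat the exact keys as an assumption and verify them against your own config_sample.yml before relying on them:

dev:
    storage: local
    storage_path: /var/docker-registry-storage
    loglevel: debug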

Now that the config is in the right place let’s try to test the server again:

gunicorn --access-logfile - --debug -k gevent -b 0.0.0.0:5000 -w 1 docker_registry.wsgi:application   

You should see output that looks like this:

2014-09-27 11:44:15 [29344] [INFO] Starting gunicorn 18.0
2014-09-27 11:44:15 [29344] [INFO] Listening at: http://0.0.0.0:5000 (29344)
2014-09-27 11:44:15 [29344] [INFO] Using worker: gevent
2014-09-27 11:44:15 [29349] [INFO] Booting worker with pid: 29349
2014-09-27 11:44:15,807 DEBUG: Will return docker-registry.drivers.file.Storage

Great! Now we have a Docker registry running. Go ahead and kill it with Ctrl+C.

At this point the registry isn’t that useful yet — it won’t start unless you type in the above gunicorn command. Also, Docker registry doesn’t come with any built-in authentication mechanism, so it’s insecure and completely open to the public right now.

Start Docker Registry as a Service

Let’s set the registry to start on system startup by creating an Upstart script.

First let’s create a directory for the log files to live in:

sudo mkdir -p /var/log/docker-registry

Then use your favorite text editor to create an Upstart script:

sudo nano /etc/init/docker-registry.conf

Add the following contents to create the Upstart script:

description "Docker Registry"

start on runlevel [2345]
stop on runlevel [016]

respawn
respawn limit 10 5

script
exec gunicorn --access-logfile /var/log/docker-registry/access.log --error-logfile /var/log/docker-registry/server.log -k gevent --max-requests 100 --graceful-timeout 3600 -t 3600 -b localhost:5000 -w 8 docker_registry.wsgi:application
end script

If you run:

sudo service docker-registry start

You should see something like this:

docker-registry start/running, process 25303

You can verify that the server is running by taking a look at the server.log file like so:

tail /var/log/docker-registry/server.log

If all is well you’ll see text similar to the output from our previous gunicorn test above.

Now that the server’s running in the background, let’s move on to configuring Nginx so the registry is secure.

Secure Your Docker Registry with Nginx

The first step is to set up authentication so that not just anybody can log into our server.

Let’s install Nginx and the apache2-utils package (which allows us to easily create authentication files that Nginx can read).

sudo apt-get -y install nginx apache2-utils

Now it’s time to create our Docker users.

Create the first user as follows:

sudo htpasswd -c /etc/nginx/docker-registry.htpasswd USERNAME

Create a new password for this user when prompted.

If you want to add more users in the future, just re-run the above command without the -c option:

sudo htpasswd /etc/nginx/docker-registry.htpasswd USERNAME_2

At this point we have a docker-registry.htpasswd file with our users set up, and a Docker registry available. You can take a peek at the file at any point if you want to view your users (and remove users if you want to revoke access).
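For instance, to list the users currently in the file, or to revoke one (htpasswd's -D flag deletes an entry):

cat /etc/nginx/docker-registry.htpasswd
sudo htpasswd -D /etc/nginx/docker-registry.htpasswd USERNAME_2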

Next we need to tell Nginx to use that authentication file, and to forward requests to our Docker registry.

Let’s create an Nginx configuration file. Create a new docker-registry file, entering your sudo password if needed:

sudo nano /etc/nginx/sites-available/docker-registry

Add the following content. Comments are in-line.

# For versions of Nginx > 1.3.9 that include chunked transfer encoding support
# Replace with appropriate values where necessary

upstream docker-registry {
 server localhost:5000;
}

server {
 listen 8080;
 server_name my.docker.registry.com;

 # ssl on;
 # ssl_certificate /etc/ssl/certs/docker-registry;
 # ssl_certificate_key /etc/ssl/private/docker-registry;

 proxy_set_header Host       $http_host;   # required for Docker client sake
 proxy_set_header X-Real-IP  $remote_addr; # pass on real client IP

 client_max_body_size 0; # disable any limits to avoid HTTP 413 for large image uploads

 # required to avoid HTTP 411: see Issue #1486 (https://github.com/dotcloud/docker/issues/1486)
 chunked_transfer_encoding on;

 location / {
     # let Nginx know about our auth file
     auth_basic              "Restricted";
     auth_basic_user_file    docker-registry.htpasswd;

     proxy_pass http://docker-registry;
 }
 location /_ping {
     auth_basic off;
     proxy_pass http://docker-registry;
 }  
 location /v1/_ping {
     auth_basic off;
     proxy_pass http://docker-registry;
 }

}

And link it up so that Nginx can use it:

sudo ln -s /etc/nginx/sites-available/docker-registry /etc/nginx/sites-enabled/docker-registry

Then restart Nginx to activate the virtual host configuration:

sudo service nginx restart

Let’s make sure everything worked. Our Nginx server is listening on port 8080, while our original docker-registry server is listening on localhost port 5000.

We can use curl to see if everything is working:

curl localhost:5000

You should see something like the following:

"docker-registry server (dev) (v0.8.1)"

Great, so Docker is running. Now to check if Nginx worked:

curl localhost:8080

This time you’ll get back the HTML of an unauthorized message:

<html>
<head><title>401 Authorization Required</title></head>
<body bgcolor="white">
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx/1.4.6 (Ubuntu)</center>
</body>
</html>

It’s worthwhile to run these two test commands from a remote machine as well, using the server’s IP address instead of localhost, to verify that your ports are set up correctly.

In the Upstart config file we told docker-registry to listen only on localhost, which means it shouldn’t be accessible from the outside on port 5000. Nginx, on the other hand, is listening on port 8080 on all interfaces, and should be accessible from the outside. If it isn’t then you may need to adjust your firewall permissions.
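If you are using ufw on Ubuntu, for example, opening the Nginx port (but not the registry's own port 5000) would look roughly like this:

sudo ufw allow 8080/tcp
sudo ufw status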

Good, so authentication is up. Let’s try to log in now with one of the usernames you created earlier:

curl USERNAME:PASSWORD@localhost:8080

If it worked correctly you should now see:

"docker-registry server (dev) (v0.8.1)"

Set Up SSL

At this point we have the registry up and running behind Nginx with HTTP basic authentication working. However, the setup is still not very secure since the connections are unencrypted. You might have noticed the commented-out SSL lines in the Nginx config file we made earlier.

Let’s enable them. First, open the Nginx configuration file for editing:

sudo nano /etc/nginx/sites-available/docker-registry

Use the arrow keys to move around and look for these lines:

server {
      listen 8080;
      server_name my.docker.registry.com;

      # ssl on;
      # ssl_certificate /etc/ssl/certs/docker-registry;
      # ssl_certificate_key /etc/ssl/private/docker-registry;

Uncomment the SSL lines by removing the # symbols in front of them. If you have a domain name set up for your server, change the server_name to your domain name while you’re at it. When you’re done the file should look like this:

server {
      listen 8080;
      server_name yourdomain.com;

      ssl on;
      ssl_certificate /etc/ssl/certs/docker-registry;
      ssl_certificate_key /etc/ssl/private/docker-registry;

Save the file. Nginx is now configured to use SSL and will look for the SSL certificate and key files at /etc/ssl/certs/docker-registry and /etc/ssl/private/docker-registry respectively.

If you already have an SSL certificate set up or are planning to buy one, then you can just copy the certificate and key files to the paths listed above (ssl_certificate and ssl_certificate_key).

You could also get a free signed SSL certificate.

Or, use a self-signed SSL certificate. Since Docker currently doesn’t allow you to use self-signed SSL certificates this is a bit more complicated than usual, since we’ll also have to set up our system to act as our own certificate signing authority.

Signing Your Own Certificate

First let’s make a directory to store the new certificates and go there:

mkdir ~/certs
cd ~/certs

Generate a new root key:

openssl genrsa -out devdockerCA.key 2048

Generate a root certificate (enter whatever you’d like at the prompts):

openssl req -x509 -new -nodes -key devdockerCA.key -days 10000 -out devdockerCA.crt

Then generate a key for your server (this is the file we’ll later copy to /etc/ssl/private/docker-registry for Nginx to use):

openssl genrsa -out dev-docker-registry.com.key 2048

Now we have to make a certificate signing request.

After you type this command OpenSSL will prompt you to answer a few questions. Write whatever you’d like for the first few, but when OpenSSL prompts you to enter the “Common Name” make sure to type in the domain of your server.

openssl req -new -key dev-docker-registry.com.key -out dev-docker-registry.com.csr

For example, if your Docker registry is going to be running on the domain www.ilovedocker.com, then your input should look like this:

Country Name (2 letter code) [AU]:
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:www.ilovedocker.com
Email Address []:

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

Do not enter a challenge password. Then we need to sign the certificate request:

openssl x509 -req -in dev-docker-registry.com.csr -CA devdockerCA.crt -CAkey devdockerCA.key -CAcreateserial -out dev-docker-registry.com.crt -days 10000

Now that we’ve generated all the files we need for our certificate to work, we need to copy them to the correct places.

First copy the certificate and key to the paths where Nginx is expecting them to be:

sudo cp dev-docker-registry.com.crt /etc/ssl/certs/docker-registry
sudo cp dev-docker-registry.com.key /etc/ssl/private/docker-registry

Since the certificates we just generated aren’t verified by any known certificate authority (e.g., VeriSign), we need to tell any clients that are going to be using this Docker registry that this is a legitimate certificate. Let’s do this locally so that we can use Docker from the Docker registry server itself:

sudo mkdir /usr/local/share/ca-certificates/docker-dev-cert
sudo cp devdockerCA.crt /usr/local/share/ca-certificates/docker-dev-cert
sudo update-ca-certificates    

You’ll have to repeat this step for every machine that connects to this Docker registry! Otherwise you will get SSL errors and be unable to connect. These steps are shown in the client test section as well.

SSL Test

Let’s restart Nginx to reload the configuration and SSL keys:

sudo service nginx restart

Do another curl test (only this time using https) to verify that our SSL setup is working properly. Keep in mind that for SSL to work correctly you will have to use the same domain name you typed into the Common Name field earlier while you were creating your SSL certificate.

curl https://USERNAME:PASSWORD@YOUR-DOMAIN:8080

For example, if the user and password you set up were nik and test, and your SSL certificate is for www.ilovedocker.com, then you would type the following:

curl https://nik:test@www.ilovedocker.com:8080

If all went well, you should see the familiar:

"docker-registry server (dev) (v0.8.1)"

If not, recheck the SSL steps and your Nginx configuration file to make sure everything is correct.

Now we have a Docker registry running behind an Nginx server which is providing authentication and encryption via SSL.

Access Your Docker Registry from Another Machine

To access your Docker registry, first add the SSL certificate you created earlier to the new client machine. The file you want is located at ~/certs/devdockerCA.crt. You can copy it to the new machine directly or use the below instructions to copy and paste it:

On the registry server, view the certificate:

cat ~/certs/devdockerCA.crt

You’ll get output that looks something like this:

-----BEGIN CERTIFICATE-----
MIIDXTCCAkWgAwIBAgIJANiXy7fHSPrmMA0GCSqGSIb3DQEBCwUAMEUxCzAJBgNV
BAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEwHwYDVQQKDBhJbnRlcm5ldCBX
...
u5wYtI9YDMsxeVW6OP9ZfvpGZW/n/88MSFjMlBjFfFsorfRd6P5WADhdfA6CBECG
LP83r7/MhqO06EOpsv4n2CJ3yoyqIr1L1+6C7Erl2em/jfOb/24y63dj/ATytt2H
6g==
-----END CERTIFICATE-----

Copy that output to your clipboard and connect to your client machine.

On the client server, create the certificate directory:

sudo mkdir /usr/local/share/ca-certificates/docker-dev-cert

Open the certificate file for editing:

nano /usr/local/share/ca-certificates/docker-dev-cert/devdockerCA.crt

Paste the certificate contents.

Verify that the file saved to the client machine correctly by viewing the file:

cat /usr/local/share/ca-certificates/docker-dev-cert/devdockerCA.crt

If everything worked properly you’ll see the same text from earlier:

-----BEGIN CERTIFICATE-----
MIIDXTCCAkWgAwIBAgIJANiXy7fHSPrmMA0GCSqGSIb3DQEBCwUAMEUxCzAJBgNV
...
...
LP83r7/MhqO06EOpsv4n2CJ3yoyqIr1L1+6C7Erl2em/jfOb/24y63dj/ATytt2H
6g==
-----END CERTIFICATE-----

Now update the certificates:

sudo update-ca-certificates

You should get output that looks like the following (note the “1 added“)

Updating certificates in /etc/ssl/certs... 1 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d....done.

If you don’t have Docker installed on the client yet, do so now.

On most versions of Ubuntu you can quickly install a recent version of Docker by following the next few commands. If your client is on a different distro or you have issues then see Docker’s installation documentation for other ways to install Docker.

Add the repository key:

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9;

Create a file to list the Docker repository:

sudo nano /etc/apt/sources.list.d/docker.list

Add the following line to the file:

deb https://get.docker.io/ubuntu docker main

Update your package lists:

sudo apt-get update

Install Docker:

sudo apt-get install -y --force-yes lxc-docker

To make working with Docker a little easier, let’s add our current user to the Docker group and re-open a new shell:

sudo gpasswd -a ${USER} docker
sudo su -l $USER #(enter your password at the prompt if needed)

Restart Docker to make sure it reloads the system’s CA certificates.

sudo service docker restart

You should now be able to log in to your Docker registry from the client machine:

docker login https://YOUR-HOSTNAME:8080

Note that you’re using https:// and port 8080 here. Enter the username and password you set up earlier (enter whatever you’d like for email if prompted). You should see a Login Succeeded message.

At this point your Docker registry is up and running! Let’s make a test image to push to the registry.

Publish to Your Docker Registry

On the client server, create a small empty image to push to our new registry.

docker run -t -i ubuntu /bin/bash

After it finishes downloading you’ll be inside a Docker prompt. Let’s make a quick change to the filesystem:

touch /SUCCESS

Exit out of the Docker container:

exit

Commit the change:

docker commit $(docker ps -lq) test-image

If you run docker images now, you’ll see that you have a new test-image in the image list:

# docker images

REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
test-image          latest              1f3ce8008165        9 seconds ago       192.7 MB
ubuntu              trusty              ba5877dc9bec        11 days ago         192.7 MB

This image only exists locally right now, so let’s push it to the new registry we’ve created.

First, log in to the registry with Docker. Note that you want to use https:// and port 8080:

docker login https://<YOUR-DOMAIN>:8080

Enter the username and password you set up earlier:

Username: USERNAME
Password: PASSWORD
Email: 
Account created. Please see the documentation of the registry http://localhost:5000/v1/ for instructions how to activate it.

Docker has an unusual mechanism for specifying which registry to push to. You have to tag an image with the private registry’s location in order to push to it. Let’s tag our image to our private registry:

docker tag test-image YOUR-DOMAIN:8080/test-image

Note that you are using the local name of the image first, then the tag you want to add to it. The tag is not using https://, just the domain, port, and image name.

Now we can push that image to our registry. This time we’re using the tag name only:

docker push <YOUR-DOMAIN>:8080/test-image

This will take a moment to upload to the registry server. You should see output that includes Image successfully pushed.

Pull from Your Docker Registry

To make sure everything worked let’s go back to our original server (where you installed the Docker registry) and pull the image we just pushed from the client. You could also test this from a third server.

If Docker is not installed on your test pull server, go back and follow the installation instructions (and, if it’s a third server, the SSL certificate instructions) from earlier in this guide.

Log in with the username and password you set up previously.

docker login https://<YOUR-DOMAIN>:8080

And now pull the image. You want just the “tag” image name, which includes the domain name, port, and image name (but not https://):

docker pull <YOUR-DOMAIN>:8080/test-image

Docker will do some downloading and return you to the prompt. If you run the image on the new machine you’ll see that the SUCCESS file we created earlier is there:

docker run -t -i <YOUR-DOMAIN>:8080/test-image /bin/bash

List your files:

ls

You should see the SUCCESS file we created earlier:

SUCCESS  bin  boot  dev  etc  home  lib  lib64  media   mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

Congratulations! You’ve just used your own private Docker registry to push and pull your first Docker container! Happy Docker-ing!

Docker: Getting Started

Introduction

The provided use cases are limitless and the need has always been there. Docker is here to offer you an efficient, speedy way to port applications across systems and machines. It is light and lean, allowing you to quickly contain applications and run them within their own secure environments (via Linux Containers: LXC).

In this article, we aim to thoroughly introduce you to Docker: one of the most exciting and powerful open-source projects to come to life in recent years. Docker can help you with so much that it’s unfair to attempt to summarize its capabilities in one sentence.

Glossary

1. Docker

2. The Docker Project and its Main Parts

3. Docker Elements

  1. Docker Containers
  2. Docker Images
  3. Dockerfiles

4. How to Install Docker

5. How To Use Docker

  1. Beginning
  2. Working with Images
  3. Working with Containers

Docker

Whether it be from your development machine to a remote server for production, or packaging everything for use elsewhere, it is always a challenge when it comes to porting your application stack together with its dependencies and getting it to run without hiccups. In fact, the challenge is immense and solutions so far have not really proved successful for the masses.

In a nutshell, the docker project offers you a complete set of higher-level tools to carry everything that forms an application across systems and machines – virtual or physical – and brings many more benefits along with it.

Docker achieves its robust application (and therefore, process and resource) containment via Linux Containers (e.g. namespaces and other kernel features). Its further capabilities come from the project’s own parts and components, which abstract away all the complexity of working with the lower-level Linux tools/APIs used for system and application management, with regard to securely containing processes.

The Docker Project and its Main Parts

The Docker project (open-sourced by dotCloud in March ’13) consists of several main parts (applications) and elements (used by these parts), which are all [mostly] built on top of existing functionality, libraries and frameworks offered by the Linux kernel and third parties (e.g. LXC, device-mapper, aufs etc.).

Main Docker Parts

  1. docker daemon: used to manage docker (LXC) containers on the host it runs
  2. docker CLI: used to command and communicate with the docker daemon
  3. docker image index: a repository (public or private) for docker images

Main Docker Elements

  1. docker containers: directories containing everything your application needs to run
  2. docker images: snapshots of containers or base OS (e.g. Ubuntu) images
  3. Dockerfiles: scripts automating the building process of images

Docker Elements

The following elements are used by the applications forming the docker project.

Docker Containers

The entire procedure of porting applications using docker relies solely on the shipment of containers.

Docker containers are basically directories which can be packed (e.g. tar-archived) like any other, then shared and run across various different machines and platforms (hosts). The only dependency is having the hosts tuned to run the containers (i.e. have docker installed). Containment here is obtained via Linux Containers (LXC).
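As a quick, minimal sketch of that idea (the container ID c629b7d70666 and the image name my_imported_img below are just placeholders), a container’s file-system can be streamed to a tar archive with export and turned back into an image elsewhere with import:

# Pack the container's file-system into a tar archive
sudo docker export c629b7d70666 > my_container.tar

# On the same or another docker host, turn that archive back into an image
cat my_container.tar | sudo docker import - my_imported_img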

LXC (Linux Containers)

Linux Containers can be defined as a combination of various kernel-level features (i.e. things that the Linux kernel can do) which allow management of applications (and the resources they use) contained within their own environment. By making use of certain features (e.g. namespaces, chroots, cgroups and SELinux profiles), LXC contains application processes and helps with their management by limiting resources, preventing reach beyond their own file-system (i.e. access to the parent’s namespace), etc.

Docker makes use of LXC for its containers; however, it also brings along much more.

Docker Containers

Docker containers have several main features.

They allow:

  • Application portability
  • Isolating processes
  • Prevention from tampering with the outside
  • Managing resource consumption

and more, requiring far fewer resources than the traditional virtual machines used for isolated application deployments.

They do not allow:

  • Messing with other processes
  • Causing “dependency hell”
  • Not working on a different system
  • Being vulnerable to attacks that abuse all of the system’s resources

and (also) more.

Being based on and dependent on LXC, these containers are, from a technical point of view, like a directory (but a shaped and formatted one). This allows portability and gradual builds of containers.

Each container is layered like an onion, and each action taken within a container adds another block (which actually translates to a simple change within the file system) on top of the previous one. Various tools and configurations (e.g. the union file-system) make this set-up work together in a harmonious way.

What this way of building containers allows is the extreme benefit of easily launching and creating new containers and images, which are thus kept lightweight (thanks to the gradual and layered way they are built). Since everything is based on the file-system, taking snapshots and performing roll-backs in time is cheap (i.e. very easily done / not heavy on resources), much like in version control systems (VCS).
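A minimal sketch of this layering (assuming the ubuntu base image discussed further down is already present; the names layer_demo and demo_img are arbitrary):

# Create a container from the ubuntu base image and change one file (one layer of change)
sudo docker run -name layer_demo ubuntu touch /layer_demo_file

# Save that change as a new image layer on top of the base
sudo docker commit layer_demo demo_img

# Inspect the resulting stack of layers
sudo docker history demo_img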

Each docker container starts from a docker image which forms the base for other applications and layers to come.

Docker Images

Docker images constitute the base of docker containers from which everything starts to form. They are very similar to default operating-system disk images which are used to run applications on servers or desktop computers.

Having these images (e.g. an Ubuntu base) allows seamless portability across systems. They make a solid, consistent and dependable base with everything that is needed to run the applications. When everything is self-contained and the risk of system-level updates or modifications is eliminated, the container becomes immune to external exposures which could put it out of order – preventing the dependency hell.

As more layers (tools, applications etc.) are added on top of the base, new images can be formed by committing these changes. When a new container gets created from a saved (i.e. committed) image, things continue from where they left off. And the union file system brings all the layers together as a single entity when you work with a container.

These base images can be explicitly stated when working with the docker CLI to directly create a new container or they might be specified inside a Dockerfile for automated image building.
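For example, naming the base image explicitly on the CLI looks like the following (the Dockerfile variant is sketched in the next section); ubuntu is the base image and /bin/bash is the process the new container will run:

sudo docker run -i -t ubuntu /bin/bash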

Dockerfiles

Dockerfiles are scripts containing a successive series of instructions, directions, and commands which are to be executed to form a new docker image. Each command executed translates to a new layer of the onion, forming the end product. They basically replace the process of doing everything manually and repeatedly. When a Dockerfile has finished executing, you end up with an image, which you can then use to start (i.e. create) a new container.
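As a minimal, illustrative sketch (the image name my_dockerfile_img and the installed package are arbitrary), a Dockerfile and the build command that consumes it could look like this:

# Each instruction below becomes a new layer in the resulting image
cat > Dockerfile <<'EOF'
FROM ubuntu
RUN apt-get update
RUN apt-get install -y curl
EOF

# Build an image from the Dockerfile in the current directory
sudo docker build -t my_dockerfile_img .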

How To Install Docker

At first, docker was only available on Ubuntu. Nowadays, with its most recent release (0.7.1, dated 5 Dec.), it is possible to deploy docker on RHEL-based systems (e.g. CentOS) and others as well.

Let’s quickly go over the installation process for Ubuntu.

Installation Instructions for Ubuntu

The simplest way to get docker, other than using the pre-built application image, is to go with a 64-bit Ubuntu 13.04 VPS.

Update your VPS:

sudo aptitude update
sudo aptitude -y upgrade

Make sure aufs support is available:

sudo aptitude install linux-image-extra-`uname -r`

Add docker repository key to apt-key for package verification:

sudo sh -c "wget -qO- https://get.docker.io/gpg | apt-key add -"

Add the docker repository to aptitude sources:

sudo sh -c "echo deb http://get.docker.io/ubuntu docker main\
> /etc/apt/sources.list.d/docker.list"

Update the repository with the new addition:

sudo aptitude update

Finally, download and install docker:

sudo aptitude install lxc-docker

Ubuntu’s default firewall (UFW: Uncomplicated Firewall) denies all forwarding traffic by default, which is needed by docker.

Enable forwarding with UFW:

Edit UFW configuration using the nano text editor.

sudo nano /etc/default/ufw

Scroll down and find the line beginning with DEFAULT_FORWARD_POLICY.

Replace:

DEFAULT_FORWARD_POLICY="DROP"

With:

DEFAULT_FORWARD_POLICY="ACCEPT"

Press CTRL+X and approve with Y to save and close.

Finally, reload the UFW:

sudo ufw reload
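If you prefer not to open an editor, the same change can be made non-interactively with sed (assuming the file still contains the default DROP line), followed by the same reload:

sudo sed -i 's/DEFAULT_FORWARD_POLICY="DROP"/DEFAULT_FORWARD_POLICY="ACCEPT"/' /etc/default/ufw
sudo ufw reload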

For a full set of instructions, check out the official docker installation documentation.

How To Use Docker

Once you have docker installed, its intuitive usage experience makes it very easy to work with. By now, you should have the docker daemon running in the background. If not, use the following command to run the docker daemon.

To run the docker daemon:

sudo docker -d &

Usage Syntax:

Using docker (via CLI) consists of passing it a chain of options and commands followed by arguments. Please note that docker needs sudo privileges in order to work.

sudo docker [option] [command] [arguments]

Note: The instructions and explanations below are provided as a guide to give you an overall idea of using and working with docker. The best way to get familiar with it is to practice on a new VPS. Do not be afraid of breaking anything – in fact, do break things! With docker, you can save your progress and continue from there very easily.

Beginning

Let’s begin by seeing all the commands docker has available.

Ask docker for a list of all available commands:

sudo docker

All currently (as of 0.7.1) available commands:

attach    Attach to a running container
build     Build a container from a Dockerfile
commit    Create a new image from a container's changes
cp        Copy files/folders from the containers filesystem to the host path
diff      Inspect changes on a container's filesystem
events    Get real time events from the server
export    Stream the contents of a container as a tar archive
history   Show the history of an image
images    List images
import    Create a new filesystem image from the contents of a tarball
info      Display system-wide information
insert    Insert a file in an image
inspect   Return low-level information on a container
kill      Kill a running container
load      Load an image from a tar archive
login     Register or Login to the docker registry server
logs      Fetch the logs of a container
port      Lookup the public-facing port which is NAT-ed to PRIVATE_PORT
ps        List containers
pull      Pull an image or a repository from the docker registry server
push      Push an image or a repository to the docker registry server
restart   Restart a running container
rm        Remove one or more containers
rmi       Remove one or more images
run       Run a command in a new container
save      Save an image to a tar archive
search    Search for an image in the docker index
start     Start a stopped container
stop      Stop a running container
tag       Tag an image into a repository
top       Lookup the running processes of a container
version   Show the docker version information
wait      Block until a container stops, then print its exit code

Check out system-wide information and docker version:

# For system-wide information on docker:
sudo docker info

# For docker version:
sudo docker version

Working with Images

As we have discussed at length, the key to start working with any docker container is using images. There are many freely available images shared across the docker image index, and the CLI allows simple access to query the image repository and to download new ones.
When you are ready, you can also share your image there as well. See the section on “push” further down for details.

Searching for a docker image:

# Usage: sudo docker search [image name]
sudo docker search ubuntu

This will provide you with a very long list of all the available images matching the query: Ubuntu.

Downloading (PULLing) an image:

Either before or while building/creating a container, you will need to have an image present on the host machine where the containers will exist. In order to download images (perhaps following a “search”), you can execute pull to get one.

# Usage: sudo docker pull [image name]
sudo docker pull ubuntu

Listing images:

All the images on your system, including the ones you have created by committing (see below for details), can be listed using “images”. This provides a full list of all available ones.

# Example: sudo docker images
sudo docker images

REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
my_img              latest              72461793563e        36 seconds ago      128 MB
ubuntu              12.04               8dbd9e392a96        8 months ago        128 MB
ubuntu              latest              8dbd9e392a96        8 months ago        128 MB
ubuntu              precise             8dbd9e392a96        8 months ago        128 MB
ubuntu              12.10               b750fe79269d        8 months ago        175.3 MB
ubuntu              quantal             b750fe79269d        8 months ago        175.3 MB

Committing changes to an image:

As you work with a container and continue to perform actions on it (e.g. download and install software, configure files etc.), you need to “commit” to have it keep its state. Committing makes sure that everything continues from where you left off the next time you use it (i.e. the image).

# Usage: sudo docker commit [container ID] [image name]
sudo docker commit 8dbd9e392a96 my_img

Sharing (PUSHing) images:

Although it is a bit early at this point in our article, when you have created your own container which you would like to share with the rest of the world, you can use push to have your image listed in the index, where everybody can download and use it.
Please remember to “commit” all your changes first.

# Usage: sudo docker push [username/image name]  
sudo docker push my_username/my_first_image

Note: You need to sign up at index.docker.io to push images to the docker index.
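Once you have an account, you can authenticate from the CLI before pushing; you will be prompted for your credentials:

# Log in to the docker index (interactive prompt for username, password and email)
sudo docker login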

Working with Containers

When you “run” any process using an image, you get a container in return. When the process is not actively running, this container will be a non-running container. Nonetheless, all of these containers will reside on your system until you remove them via the rm command.

Listing all current containers:

By default, you can use the following to list all running containers:

sudo docker ps

To have a list of both running and non-running containers, use:

sudo docker ps -a

Creating a New Container

It is currently not possible to create a container without running anything (i.e. without a command). To create a new container, you need to use a base image and specify a command to run.

# Usage: sudo docker run [image name] [command to run]
sudo docker run my_img echo "hello"

# To name a container instead of having long IDs
# Usage: sudo docker run -name [name] [image name] [comm.]
sudo docker run -name my_cont_1 my_img echo "hello"

This will output “hello” and you will be right back where you were (i.e. at your host’s shell).

As you cannot change the command you run after a container has been created (hence the need to specify one during “creation”), it is common practice to use process managers and even custom launch scripts in order to be able to execute different commands.
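A minimal, illustrative sketch of that practice (my_img and my_cont_2 are placeholder names): a small inline launch loop keeps the container busy so that it stays running and can be stopped and started again later.

sudo docker run -name my_cont_2 my_img /bin/sh -c "while true; do echo hello; sleep 1; done"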
Running a container:

When you create a container and it stops (either due to its process ending or you stopping it explicitly), you can use “start” to get the container working again with the same command used to create it.

# Usage: sudo docker start [container ID]
sudo docker start c629b7d70666

Remember how to find the containers? See the section above on listing them.

Stopping a container:

To stop a container’s process from running:

# Usage: sudo docker stop [container ID]
sudo docker stop c629b7d70666

Saving (committing) a container:

If you would like to save the progress and changes you made with a container, you can use “commit” as explained above to save it as an image. This command turns your container into an image.
Remember that with docker, commits are cheap. Do not hesitate to use them to create images to save your progress with a container, or to roll back when you need to (e.g. like snapshots in time).

Removing / Deleting a container:

Using the ID of a container, you can delete one with rm.

# Usage: sudo docker rm [container ID]
sudo docker rm c629b7d70666

You can learn more about Docker by reading the official documentation.