Containers effectively implement a virtual operating system on top of a real one. Multiple containers can share an underlying operating system with far less overhead than running multiple virtual machines to do the same. In this lab, we will use Docker, the de facto standard for building and running containers, to containerize the guestbook application. This will generate a container "image", a blueprint that allows us to instantiate multiple containers with identical runtimes. Container images can be stored in registries such as Docker Hub (docker.io), much like source code can be stored in repositories such as GitHub.

Running Docker, however, requires administrator access. Unfortunately, this means you cannot run Docker containers on linuxlab machines. Within an Ubuntu 18.04 VM that you are running locally, clone the repository and change into the directory containing the code.

git clone https://github.com/wu4f/cs430-src
cd cs430-src/04_container_dockerhub

Docker container images are built from recipes contained in Dockerfiles. Images are created in layers, starting from a base image. Each command in the Dockerfile adds a new layer to the image being built. When many container images share the same layers, the system stores each layer only once, saving storage overhead.
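As a rough illustration of this layering (not part of the lab code), the Python sketch below estimates how many filesystem layers a Dockerfile produces. It uses a simplified rule, which is an assumption: only RUN, COPY, and ADD are counted as layer-creating instructions, while instructions such as FROM, WORKDIR, ENTRYPOINT, and CMD are treated as metadata-only.

```python
# Sketch: estimate how many filesystem layers a Dockerfile adds.
# Simplified assumption: only RUN, COPY, and ADD create new
# filesystem layers; other instructions add metadata only.
LAYER_INSTRUCTIONS = {"RUN", "COPY", "ADD"}

def count_layers(dockerfile_text):
    count = 0
    for line in dockerfile_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        instruction = line.split()[0].upper()
        if instruction in LAYER_INSTRUCTIONS:
            count += 1
    return count

example = """\
FROM ubuntu:18.04
RUN apt-get update -y
RUN apt-get install -y python-pip
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
ENTRYPOINT ["python"]
CMD ["app.py"]
"""
print(count_layers(example))  # 4 filesystem layers (3 RUN + 1 COPY)
```

Fewer layer-creating instructions (for example, chaining apt-get commands in a single RUN) generally means a smaller, faster image.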

The Dockerfile.ubuntu file is shown below. It starts from an Ubuntu 18.04 base image, installs pip, copies the application code into /app, installs the application's dependencies, and finally runs app.py with Python.

Dockerfile.ubuntu

FROM ubuntu:18.04 

MAINTAINER Your Name "yourname@pdx.edu" 

RUN apt-get update -y 
RUN apt-get install -y python-pip 

COPY . /app 

WORKDIR /app 

RUN pip install -r requirements.txt 

ENTRYPOINT ["python"]

CMD ["app.py"] 

You will make a single edit to this file before using it. In the MAINTAINER line, specify your name and PSU e-mail address for the container image that will be built.

We will now build our first container image. Because access to Docker requires elevated privileges, docker commands must typically be run via "sudo" (superuser do), which can be quite tedious. An alternative is to add yourself to the docker group, whose members are granted access to the Docker daemon without needing sudo. To do so, run the following on your Ubuntu VM.

sudo usermod -a -G docker $(whoami)

The command appends (-a) your username (returned via $(whoami)) to the group docker (-G docker). For the change to take effect, log out of your machine and log back in. Then, change back into the lab's directory and build the container image from the Dockerfile, using the current directory (.) as the build context and tagging the resulting image helloubuntu (-t).

docker build -f Dockerfile.ubuntu -t helloubuntu .

Show the image generated and its size in a screenshot for your lab notebook using the command:

docker images

Then, create a running instance of the helloubuntu image and name it hellou (--name). Have the container run in detached mode (-d) without an interactive shell and map the host's port 8000 to the container port 5000 (-p 8000:5000).

docker run -p 8000:5000 --name hellou -di helloubuntu

You might be wondering why the Python/Flask application comes up on port 5000. In this case, within app.py, we have not specified the port parameter as we did in prior labs. This causes the server to run on Flask's default port (5000).

app.py

# Run with default Flask port 5000
if __name__ == '__main__':
    app.run(host='0.0.0.0', debug=True)

Test the container by retrieving http://127.0.0.1:8000 using a browser on the VM, wget, or curl.
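If you prefer to check the mapping from a script rather than a browser, a small Python sketch like the following can test whether a TCP port is accepting connections. This is a hypothetical helper for convenience, not part of the lab code:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# With the container running, this should report the mapped
# host port (8000) as open:
# print(port_open("127.0.0.1", 8000))
```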

We will now practice running some common docker commands. To list all running (and stopped) containers, execute the command:

docker ps -a

Note under the NAMES column, the container that we named hellou.

To stop this container, perform the following:

docker stop hellou

See that it is no longer running via:

docker ps -a

Then, start the container via its name:

docker start hellou

This will start the container, but will not give you an interactive shell session on it.

To get an interactive shell on the container, perform the following:

docker exec -it hellou /bin/bash

Within the container, show the contents of the current directory via ls, the contents of the file specifying the Linux standard base being used (/etc/lsb-release), and the output of the process listing command (ps -ef). Exit out of the shell and container.

Stop the container again.

docker stop hellou

Then remove the container from the system.

docker rm hellou

Note that this command only removes the container instance. The container image it was derived from (helloubuntu) still remains on the system and can be used to launch subsequent container instances.

We will now publish the container image we created locally (helloubuntu) to the Docker Hub registry at docker.io. Note that Docker Hub is commonly used as a public registry. Companies often wish to store their container images privately in a location they control, and to place the registry near the machines that run instances of them. For this, cloud platforms implement private container registries on a per-project basis: Google Cloud does so via Container Registry (gcr.io) and AWS via Elastic Container Registry (ECR).

Using the credentials you set up at the beginning of the course, log in to Docker Hub.

docker login

The local container image has a tag of helloubuntu. To access it via Docker Hub, it needs a globally unique tag prefixed with your Docker Hub user ID. Run the following command, replacing <dockerhub_id> with your own (e.g. wuchangfeng):

docker tag helloubuntu <dockerhub_id>/helloubuntu
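For context, the tag you just created follows Docker's image-reference convention: an optional registry host, a repository (here, your user ID plus the image name), and an optional :tag that defaults to latest; the default registry is docker.io. A simplified Python sketch of that parsing (it ignores digests and registry ports, so it is an illustration rather than Docker's full grammar):

```python
def parse_image_reference(ref):
    """Split an image reference into (registry, repository, tag).

    Simplified: ignores digests (@sha256:...) and registry ports.
    Docker's defaults: registry docker.io, tag latest.
    """
    registry = "docker.io"
    parts = ref.split("/")
    # A leading component containing a dot (or "localhost") names a registry.
    if len(parts) > 1 and ("." in parts[0] or parts[0] == "localhost"):
        registry = parts[0]
        ref = "/".join(parts[1:])
    if ":" in ref:
        repository, tag = ref.rsplit(":", 1)
    else:
        repository, tag = ref, "latest"
    return registry, repository, tag

print(parse_image_reference("wuchangfeng/helloubuntu"))
# ('docker.io', 'wuchangfeng/helloubuntu', 'latest')
```

This is why the same syntax works later in the course for other registries, e.g. gcr.io/<project>/<image>.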

Similar to git, docker can upload the image to the registry using a push command:

docker push <dockerhub_id>/helloubuntu

We will now look to run the container image straight from Docker Hub. Examine the container images you've created and tagged so far:

docker images

Then delete both of them via their names with the remove image command:

docker rmi helloubuntu <dockerhub_id>/helloubuntu

Run the image directly from Docker Hub and show a screenshot of the output of the command in your lab notebook.

docker run -di -p 8000:5000 --name hellou <dockerhub_id>/helloubuntu


As before, the container is brought up and the local port of 8000 is mapped into the container port of 5000. Test the container by retrieving http://127.0.0.1:8000 using a browser on the VM, wget, or curl.

Stop and remove the container, then remove the container image.

docker stop hellou
docker rm hellou
docker rmi <dockerhub_id>/helloubuntu

Then, log into Docker Hub with a web browser, navigate to the container image, and take a screenshot of the container image and its size.

Finally, visit https://microbadger.com/ and show the container image metadata using MicroBadger that describes the individual layers of the container. Note that if this site takes too long to return a result, you may visit the same container image name under the wuchangfeng account.

Using Ubuntu 18.04 as a base image pulls in a large number of libraries and applications that are not needed for our simple Guestbook application. When building and running many container images and containers, one would like to reduce their sizes for startup speed and storage efficiency. One optimization we can make is to use a smaller base image such as Alpine, a minimal Linux distribution that strips out many components that are non-essential at runtime.

We will now build an Alpine-based container for our application. To begin with, change into the lab's directory.

cd cs430-src/04_container_dockerhub

View Dockerfile.alpine. Similar to Dockerfile.ubuntu, this file can be used to build a container image, but with an Alpine base image specifically made for running Python applications like ours. Note that because this base image already contains Python and pip, there is no need to install them as we did previously, making the Dockerfile simpler.

Dockerfile.alpine

FROM python:alpine

MAINTAINER Your Name "yourname@pdx.edu"

COPY . /app

WORKDIR /app

RUN pip install --no-cache -r requirements.txt

ENTRYPOINT ["python"]

CMD ["app.py"]

As before, make a single edit to this file before using it. In the MAINTAINER line, specify your name and PSU e-mail address for the container image that will be built.

We will now build the Alpine container image. Change back into the directory and build the container image from the Dockerfile with a local tag helloalpine.

docker build -f Dockerfile.alpine -t helloalpine .

Show the image generated and its size in a screenshot for your lab notebook. How much smaller is the image?

docker images
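To answer the size question for your notebook, a trivial sketch you can reuse (the numbers below are placeholders, not real measurements; substitute the sizes reported by docker images):

```python
def size_reduction(before_mb, after_mb):
    """Percentage reduction in image size."""
    return 100.0 * (before_mb - after_mb) / before_mb

# Placeholder values; replace with your actual helloubuntu and
# helloalpine image sizes from `docker images`.
print(round(size_reduction(500.0, 100.0), 1))  # 80.0
```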

Then, as before, create a running instance of the helloalpine image and name it helloa.

docker run -p 8000:5000 --name helloa -di helloalpine

Test the container by retrieving http://127.0.0.1:8000 using a browser on the VM, wget, or curl.

See that the container is running:

docker ps -a

Attempt to get an interactive shell on the container by performing the following:

docker exec -it helloa /bin/bash

Show the output of this command in a screenshot for your lab notebook. What might have happened?


Then, replace /bin/bash with /bin/sh and repeat the command. Within the container, show the contents of the file specifying the Alpine release being used (/etc/alpine-release) and the output of the process listing command (ps -ef). Exit out of the shell and container.

Stop the container and remove it from the system.

docker stop helloa
docker rm helloa

We will now publish the container image we created locally (helloalpine) to the Docker Hub registry at docker.io. Log in to Docker Hub again.

docker login

The local container image has a tag of helloalpine. To access it via Docker Hub, it needs a globally unique tag prefixed with your Docker Hub user ID. Run the following command, replacing <dockerhub_id> with your own (e.g. wuchangfeng):

docker tag helloalpine <dockerhub_id>/helloalpine

Then push the image to Docker Hub using the command:

docker push <dockerhub_id>/helloalpine

Examine the container images you've created and tagged so far:

docker images

Then delete them via their names:

docker rmi helloalpine <dockerhub_id>/helloalpine

Run the image directly from Docker Hub and show a screenshot of the output of the command in your lab notebook.

docker run -di -p 8000:5000 --name helloa <dockerhub_id>/helloalpine


As before, the container is brought up and the local port of 8000 is mapped into the container port of 5000. Test the container by retrieving http://127.0.0.1:8000 using a browser on the VM, wget, or curl.

Then, log into Docker Hub with a web browser, navigate to the container image, and take a screenshot of the container image and its size.

Finally, visit https://microbadger.com/ and show the container image metadata using MicroBadger that describes the individual layers of the container. Note that if this site takes too long to return a result, you may visit the same container image name under the wuchangfeng account.

The beauty of container registries is that they can be accessed from anywhere you wish to run containers. We will first demonstrate how this is done from a virtual machine in the cloud. As in the prior labs, create a VM on Google Cloud's Compute Engine with the following specifications:

ssh into the VM and install Docker on it.

sudo apt update -y
sudo apt install docker.io -y
sudo usermod -a -G docker $(whoami)

Log out and then log back in.

Then, run the helloalpine container image, but map the host's port 80 to the container's port 5000.

docker login
docker run -di -p 80:5000 --name helloa <dockerhub_id>/helloalpine

Go to a web browser and point it to the external IP address of the VM. Note that this can also be done by clicking on the IP address from the Compute Engine console.

Show in a screenshot that the site is running via the VM's external IP address with a guestbook entry with the message "Hello Compute Engine + Docker!"

Google runs billions of containers each week. In order to streamline their execution and management, a VM image optimized for containers, called Container-Optimized OS, is used. We can deploy our container directly on top of it.

One of the optimizations made in the Container OS VM is that container ports are directly mapped onto the host ports when an instance is started with a specified container image. Our container images run the Guestbook on Flask's default port (5000). As a result, in order to access the application from external sites, we need to add a firewall rule to allow incoming traffic to the port.

Begin by going to the web console on Google Cloud and locating "VPC network". Then, click on "Firewall rules".

Create a new rule called tcp-allow-5000.

Set its target tag to http-5000. This tag can now be applied to any VM that requires incoming traffic to port 5000. Then, specify the rule to apply on ingress traffic, for all source IP addresses connecting to port 5000.

Then, within Compute Engine, create a VM in us-west1-b on an n1-standard machine to deploy your container on. In the interface, specify the container image name to be your Docker Hub image name for the Alpine version of your web application.

The boot disk will be automatically set to Google's Container OS when a container image is specified. Then, Allow HTTP Traffic and click on the "Management, disks, networking, SSH keys" link to expand it. Select the "Networking" tab and specify the http-5000 target tag to allow access to port 5000.

Create the VM and wait for it to come up.

ssh into the instance. Perform a local request to the web app (e.g. wget http://localhost:5000) to ensure the container is running.

Finally, visit the site via the external IP address on port 5000 to show the site is running. Add a "Hello ContainerOS!" guestbook entry and take a screenshot for your lab notebook.

Go back to the web console of Google Cloud and then go to Compute Engine. Visit the VM instances page and delete the VMs you have created.