Welcome to part two of our Docker blog series. If you missed part one, or would like to refresh your memory about what Docker is and the opportunities it presents to its users, please go back to our blog from last year, Docker Part 1. An Introduction and Use for Deployments.
In part one of this blog series, we introduced Docker and the vital role it plays in application deployments across the world. Docker allows users to easily build environments and create applications that will run their code identically anywhere Docker is installed. Half of the battle of application deployment is setting up the environment correctly, and Docker helps us eliminate that problem while providing new opportunities to simplify deployments.
Developers are currently trending toward “dockerizing” (or containerizing) their applications in pursuit of this simplicity. In this portion of the blog series, we are going to learn how to “dockerize” an application built on one of the most commonly used web frameworks, Django. The end result of this dockerization will be a newly created Docker image that we can then share and execute across all of our development stages, as described in Part 1. We will assume you already have Docker installed and have created a local Django instance on your computer; if you would like to review that process, please see the official Django installation guide. I called my local Django instance django_app, and I used the following requirements:
File: django_app/requirements.txt

Django>=2.0,<3.0
psycopg2>=2.7,<3.0
Let's start by creating our first Dockerfile, which is essentially the blueprint for building our application. It is best practice to put the Dockerfile in the root directory of your application. Each Dockerfile is generally based on another image to help simplify the process of creating these environments. Docker Hub hosts a large selection of base images, which can help you create your own application-specific Dockerfile.
File: django_app/Dockerfile (no extension)

FROM python:3
RUN mkdir /django_app
WORKDIR /django_app
COPY . /django_app/
RUN pip install -r requirements.txt
Dockerfiles adhere to a strict format as outlined in Docker's best practices, but once created, they are a self-documenting reference for all the steps necessary to get our application running. In the first line, we issue a FROM command to select our base image, the official Python 3 Docker image. After that, we model our Docker image after our application structure by creating, and switching to, the appropriate working directory. Next, we issue the COPY command to copy our application directory into our Docker image. Finally, since we are using Python, we install the additional requirements through the RUN command followed by the standard pip install command.
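As a side note, the COPY step will pull in everything in the build context, so it can help to add a .dockerignore file next to the Dockerfile to keep local-only files out of the image. A minimal sketch (the exact entries are assumptions about a typical local checkout):

```
# Hypothetical .dockerignore — files the image does not need
.git
__pycache__
*.pyc
db.sqlite3
```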
Now if we open a Docker-enabled command line and change the working directory of the command line to our django_app, we can issue the following command:
docker build . --tag django_app
Great! We have built a Docker image with our Django application inside, but now the challenge is: how do we run this image? A Docker container is needed to run our Docker image. To create one, we will use Docker Compose, which reads its configuration from an additional file called docker-compose.yml (YAML stands for “YAML Ain't Markup Language”). For our purposes, we will create this file in the root directory of our application, alongside our django_app/Dockerfile.
File: docker-compose.yml

version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/django_app
    ports:
      - "8000:8000"
    depends_on:
      - db
Similar to Dockerfiles, the docker-compose files used to define our docker containers are largely self-documenting.
In this file, we define two services: our web service and our database service. For the database service, we specify that our database should be based on the official Postgres image from Docker Hub.
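One caveat: recent versions of the official Postgres image refuse to start unless a superuser password is supplied. If you hit that error, the db service can be extended along these lines (the password value here is a placeholder, not a recommendation):

```yaml
db:
  image: postgres
  environment:
    - POSTGRES_PASSWORD=postgres  # placeholder; use a real secret in practice
```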
Next, we define our web service, starting with a build entry that will search for a Dockerfile in the given directory and build a new image. Then, we define the command to run once the container starts via the command entry. Now, in order for us to maintain any data throughout the life of this container, we need to declare a volume that creates a link between our local computer and the docker container. In this example, we create a new volume link between our actual root directory and the /django_app directory created inside our image. Finally, we map port 8000 on our local machine to port 8000 inside the container, and declare that the web service depends on our database service.
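For the web container to actually reach the database, Django's settings must point at the db service by name, since Compose makes each service reachable at a hostname matching its service name. A minimal sketch of the relevant django_app/settings.py entry, assuming the official Postgres image's default user and database names:

```python
# Hypothetical excerpt from django_app/settings.py.
# "db" is the compose service name, which doubles as the hostname on
# Docker Compose's default network; NAME/USER/PASSWORD below are
# placeholders matching the official Postgres image defaults.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "postgres",
        "USER": "postgres",
        "PASSWORD": "postgres",
        "HOST": "db",    # compose service name, not localhost
        "PORT": 5432,
    }
}
```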
We can now use our Docker-enabled command line to issue another command, which runs our new docker containers using the settings in our docker-compose.yml file:

docker-compose up
Congratulations! That’s it. We have successfully created a “dockerized” version of our Python 3 Django application. At this point, our Django application is running inside our new docker container on port 8000, connected to a Postgres database. To verify that the docker image was created and that our docker container is successfully running, we can also issue the following commands:
docker images   # shows local docker images
docker ps       # shows running docker containers
Thank you for reading, and remember, this is just a brief glimpse into the world of Docker. There is always more to learn about Docker and its capabilities. The docker image and container created in this article are basic but fundamental examples of how to utilize Docker for a modern-day web service. Continue to check back on the Artemis blog for more Docker and DevOps related blogs.