Deploying a Django application using Docker

What is Docker?

Docker is an open-source tool that automates the deployment of an application inside a software container. Containers are like virtual machines, only more portable, more resource-friendly, and more dependent on the host operating system, since they share its kernel rather than running a full guest OS. For detailed information on the workings of Docker I'd recommend reading a longer introductory article, and for those not comfortable reading long posts, a tutorial series on YouTube was especially useful in introducing me to the concepts of Docker.

Installing Docker

In case you don't have Docker installed on your machine, follow the detailed installation steps for your operating system from the official Docker documentation.
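Once installed, a quick sanity check confirms the client and daemon are working (hello-world is Docker's own tiny test image):

```shell
# Print the installed docker version
docker --version

# Pull and run Docker's test image; it prints a confirmation message on success
docker run hello-world
```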

Getting started

To deploy a typical Django application you need the following services in order to get it running:

  1. Nginx, to serve static files and proxy requests to the application.
  2. Postgres/any database of your choice.
  3. Python with Gunicorn installed.

1. Python image

```dockerfile
FROM python:3.6

RUN mkdir /code
WORKDIR /code

COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
```

To build the image, run the following from the directory containing the Dockerfile:

```shell
docker build .
```
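The Dockerfile above assumes a requirements.txt sits next to it. A minimal one for this stack might look like the following (the exact packages and versions are your project's own; psycopg2-binary is assumed here as the Postgres driver):

```
Django
gunicorn
psycopg2-binary
```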

2. Nginx image

```dockerfile
FROM nginx

RUN rm /etc/nginx/conf.d/default.conf
COPY mysite.conf /etc/nginx/conf.d
```

The mysite.conf copied in above proxies application requests to the Django service and serves static and media files directly:

```nginx
upstream my_site {
    server web:8080;
}

server {
    listen 80;
    charset utf-8;
    server_name 127.0.0.1;

    client_max_body_size 4G;
    access_log /code/logs/nginx-access.log;
    error_log /code/logs/nginx-error.log;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
        if (!-f $request_filename) {
            proxy_pass http://my_site;
            break;
        }
    }

    location /static/ {
        autoindex on;
        alias /code/static_cdn/;
    }

    location /media/ {
        autoindex on;
        alias /code/media_cdn/;
    }
}
```

Here `web` is the name we'll give the Django service in the compose file, so nginx forwards application requests to Gunicorn on port 8080.
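For the /static/ and /media/ aliases above to actually have files to serve, Django's collectstatic must write into the same directories. A sketch of the matching settings.py fragment (the static_cdn/media_cdn names are taken from the nginx config and volume layout; BASE_DIR is assumed to resolve to /code inside the container):

```python
import os

# Project root; inside the container this resolves to /code
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

STATIC_URL = '/static/'
# collectstatic copies files here; nginx serves them via the /static/ alias
STATIC_ROOT = os.path.join(BASE_DIR, 'static_cdn')

MEDIA_URL = '/media/'
# uploaded files land here; nginx serves them via the /media/ alias
MEDIA_ROOT = os.path.join(BASE_DIR, 'media_cdn')
```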

3. Postgres

Lastly we get to the database; in this use case, I used Postgres.

```dockerfile
FROM postgres:latest

COPY ./init/01-db_setup.sh /docker-entrypoint-initdb.d/01-db-setup.sh
```

The build context is laid out as follows:

```
postgres
├── Dockerfile
└── init
    └── 01-db_setup.sh
```

Any script placed in /docker-entrypoint-initdb.d/ is run by the official postgres image the first time the container initialises its data directory. Ours creates the application's user and database:

```shell
#!/bin/sh

psql -U postgres -c "CREATE USER $POSTGRES_USER PASSWORD '$POSTGRES_PASSWORD'"
psql -U postgres -c "CREATE DATABASE $POSTGRES_DB OWNER $POSTGRES_USER"
psql -U postgres -c "GRANT ALL PRIVILEGES ON DATABASE $POSTGRES_DB TO $POSTGRES_USER"
```

Remember to make the script executable before building:

```shell
sudo chmod u+x init/01-db_setup.sh
```
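Once the stack is running you can confirm the script did its job by opening psql inside the container (the service name and credentials here match the ones used in the compose file shown later):

```shell
# List databases as the application user; mydb should appear in the output
docker-compose exec postgres psql -U myuser -d mydb -c '\l'
```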

4. Wrapping things up with docker-compose

At this point you've probably noticed that we have a lot of Dockerfiles. With docker-compose we can conveniently build all these images using the command:

```shell
docker-compose build
```

The whole stack is described in a single docker-compose.yml:
```yaml
version: '3'

services:

  web:
    build: .
    container_name: great
    volumes:
      - .:/code
      - static:/code/static_cdn
      - media:/code/media_cdn
    depends_on:
      - postgres
    expose:
      - 8080
    command: bash -c "python manage.py collectstatic --no-input && python manage.py makemigrations && python manage.py migrate && gunicorn --workers=3 projectname.wsgi -b 0.0.0.0:8080"

  postgres:
    build: ./postgres
    restart: unless-stopped
    expose:
      - "5432"
    environment:  # will be used by the init script
      LC_ALL: C.UTF-8
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypassword
      POSTGRES_DB: mydb
    volumes:
      - pgdata:/var/lib/postgresql/data/

  nginx:
    restart: always
    build: ./nginx/
    volumes:
      - ./nginx/:/etc/nginx/conf.d
      - ./logs/:/code/logs
      - static:/code/static_cdn
      - media:/code/media_cdn
    ports:
      - "1221:80"
    links:
      - web

volumes:
  pgdata:
  media:
  static:
```
  1. services — from this point on we declare the different services we'll be launching. As specified above, these will be nginx, python and postgres, and we can name them as we want; in my case I've named them nginx, web and postgres.
  2. build — remember all those Dockerfiles we spent time writing? Using the build key you can specify the location of each individual Dockerfile, and based on the instructions in each file an image will be built.
  3. container_name — gives the container the specified name once the containers are up and running.
  4. volumes — a way of sharing data between the containers and the host machine. Volumes also allow data to persist even after the containers are destroyed and recreated, which is something you'll find yourself doing often.
  5. ports — specifies which ports of the containers are mapped to the host machine. Taking the nginx service for example, the container's port 80 is mapped to the host machine's port 1221.
  6. expose — makes the port accessible to linked services, but not from the host machine.
  7. restart — specifies the behavior of the containers in case of an unforeseen shutdown.
  8. command — the command each container runs on start; in this case the chained commands in the web service collect static files, apply database migrations, and bind Gunicorn to port 8080.
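One refinement worth considering: docker-compose automatically reads a .env file from the directory it runs in, so the credentials don't have to be hardcoded in docker-compose.yml. A sketch, with placeholder values:

```
# .env file, picked up automatically by docker-compose
POSTGRES_USER=myuser
POSTGRES_PASSWORD=change-me
POSTGRES_DB=mydb
```

The compose file can then reference these as `${POSTGRES_USER}` and so on under the environment keys.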

5. Final steps

To build the images and start everything, it's now a matter of simply running:

```shell
docker-compose build
docker-compose up
```
Before the app can talk to the database, point Django's settings.py at the postgres service (the host name is the service name from docker-compose.yml):

```python
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'mydb',
        'USER': 'myuser',
        'PASSWORD': 'mypassword',
        'HOST': 'postgres',
        'PORT': 5432,
    }
}
```
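The same credentials now appear in both docker-compose.yml and settings.py; a common tidy-up is to read them from the environment instead. A sketch (the variable names mirror those the compose file already passes to the postgres service, with the hardcoded values above as defaults):

```python
import os

# Defaults mirror the hardcoded values in docker-compose.yml;
# override any of them via environment variables
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': os.environ.get('POSTGRES_DB', 'mydb'),
        'USER': os.environ.get('POSTGRES_USER', 'myuser'),
        'PASSWORD': os.environ.get('POSTGRES_PASSWORD', 'mypassword'),
        'HOST': os.environ.get('POSTGRES_HOST', 'postgres'),
        'PORT': int(os.environ.get('POSTGRES_PORT', '5432')),
    }
}
```

For this to work in the web container, the same variables would also need to be listed under the web service's environment key in docker-compose.yml.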
Once the containers are up, the site is reachable at localhost:1221, or virtual-box-machine-ip:1221 for those using docker-toolbox.
