Deploying Django application with Docker, Postgres, Gunicorn, NGINX (Part-2)

Sagar Budhathoki Magar
4 min read · Dec 25, 2021

Continued from part-1.

Gunicorn

Now, install Gunicorn. It's a production-grade WSGI HTTP server.
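Gunicorn needs to be in the project's requirements so the Docker build installs it; a minimal sketch (the exact version pin is an assumption):

```shell
# Append Gunicorn to requirements.txt (idempotent), so the image build installs it.
grep -qx 'gunicorn==20.1.0' requirements.txt || echo 'gunicorn==20.1.0' >> requirements.txt
```

The `grep -qx` guard keeps the line from being appended twice if you re-run the command.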

Since we want to use Gunicorn instead of Django's built-in development server, create a production compose file, docker-compose.prod.yml:

version: '3.5'

services:
  app:
    build:
      context: .
    command: gunicorn personal.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_data:/vol/static
    ports:
      - "8000:8000"
    restart: always
    env_file:
      - .env.prod
    depends_on:
      - app-db

  app-db:
    image: postgres:12-alpine
    ports:
      - "5432:5432"
    restart: always
    volumes:
      - postgres_data:/var/lib/postgresql/data:rw
    env_file:
      - .env.prod

volumes:
  static_data:
  postgres_data:

Here, we're using the gunicorn command instead of the Django development-server command, and we keep only the volumes that production actually needs. Now, let's create a .env.prod file for environment variables:

DEBUG=0
DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
DB_ENGINE=django.db.backends.postgresql_psycopg2
POSTGRES_HOST_AUTH_METHOD=trust
POSTGRES_USER=sagar
POSTGRES_PASSWORD=********
POSTGRES_DB=portfolio_db_prod
POSTGRES_HOST=app-db
POSTGRES_PORT=5432
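docker-compose env files are plain KEY=VALUE lines (no quoting, no export), so a quick format check can catch typos before they silently break a deploy; a sketch:

```shell
# Flag any line in .env.prod that is not blank, a comment, or KEY=VALUE.
grep -vE '^[[:space:]]*(#|$)' .env.prod | grep -vE '^[A-Za-z_][A-Za-z0-9_]*=' \
  && echo "malformed lines found" \
  || echo "env file looks OK"
```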

Add both files to the .gitignore file if you want to keep them out of version control. Now, bring down all containers with the -v flag (the -v flag also removes the associated volumes):

$ docker-compose down -v

Then, re-build images and run the containers:

$ docker-compose -f docker-compose.prod.yml up --build

Run with the -d flag if you want to run the services in the background. If anything goes wrong, inspect the logs:

$ docker-compose -f docker-compose.prod.yml logs -f

Now, let's create a production Dockerfile as Dockerfile.prod, along with a production entrypoint.prod.sh file inside the scripts directory at the project root. The entrypoint.prod.sh script:

#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."

    while ! nc -z "$POSTGRES_HOST" "$POSTGRES_PORT"; do
        sleep 0.1
    done

    echo "PostgreSQL started"
fi

exec "$@"
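One caveat: the nc loop above spins forever if the database never comes up. A hedged variant with a timeout could look like this (wait_for is a name introduced here, not part of the original script):

```shell
#!/bin/sh
# Wait until host:port accepts TCP connections, giving up after $3 seconds (default 30).
wait_for() {
    host="$1"; port="$2"; timeout="${3:-30}"
    elapsed=0
    while ! nc -z "$host" "$port" 2>/dev/null; do
        elapsed=$((elapsed + 1))
        if [ "$elapsed" -ge "$timeout" ]; then
            echo "Timed out waiting for $host:$port" >&2
            return 1
        fi
        sleep 1
    done
}
```

In the entrypoint you would then call `wait_for "$POSTGRES_HOST" "$POSTGRES_PORT" 30 || exit 1` so a dead database fails the container fast instead of hanging it.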

The Dockerfile.prod file, which also makes the scripts executable:

FROM python:3.8.9-alpine as builder

ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

RUN apk update
RUN apk add postgresql-dev gcc python3-dev musl-dev libc-dev linux-headers

RUN apk add jpeg-dev zlib-dev libjpeg

RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /wheels -r requirements.txt

#### FINAL ####

FROM python:3.8.9-alpine

RUN mkdir /app
COPY . /app
WORKDIR /app

RUN apk update && apk add libpq
COPY --from=builder ./wheels /wheels
COPY --from=builder ./requirements.txt .
RUN pip install --no-cache /wheels/*
#RUN pip install -r requirements.txt

COPY ./scripts /scripts
RUN chmod +x /scripts/*

RUN mkdir -p /vol/media
RUN mkdir -p /vol/static

#RUN adduser -S user

#RUN chown -R user /vol

RUN chmod -R 755 /vol
#RUN chown -R user /app
#RUN chmod -R 755 /app

#USER user

ENTRYPOINT ["/scripts/entrypoint.prod.sh"]
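The user-related lines are left commented out above; to actually run as non-root, they could be enabled like this (a sketch — the flags are for Alpine's busybox adduser, and these lines belong before the ENTRYPOINT instruction):

```dockerfile
# Create a system user without a password and give it ownership of the app dirs.
RUN adduser -S -D user
RUN chown -R user /vol /app

# Run everything from here on (including the entrypoint) as that user.
USER user
```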

Here we used a multi-stage build, which reduces the final image size: 'builder' is a temporary image used only to build the Python wheels and their dependencies, which are then copied into the final stage. We can also create a non-root user (the commented lines), which is a security best practice because it limits what an attacker can do inside the container. Now, update the production compose file to use the production Dockerfile:

version: '3.5'

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile.prod
    command: gunicorn personal.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_data:/vol/static
    ports:
      - "8000:8000"
    restart: always
    env_file:
      - .env.prod
    depends_on:
      - app-db

  app-db:
    image: postgres:12-alpine
    ports:
      - "5432:5432"
    restart: always
    volumes:
      - postgres_data:/var/lib/postgresql/data:rw
    env_file:
      - .env.prod

volumes:
  static_data:
  postgres_data:

Rebuild, and run:

$ docker-compose -f docker-compose.prod.yml down -v
$ docker-compose -f docker-compose.prod.yml up -d --build
$ docker-compose -f docker-compose.prod.yml exec app python manage.py migrate --noinput

Nginx

Nginx is a powerful, high-performance web server. Let's add it as a reverse proxy in front of Gunicorn, so it can also serve static and media files directly. Add the service to the production compose file:

version: '3.5'

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile.prod
    command: gunicorn personal.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_data:/vol/static
      - media_data:/vol/media
    ports:
      - "8000:8000"
    restart: always
    env_file:
      - .env.prod
    depends_on:
      - app-db

  app-db:
    image: postgres:12-alpine
    ports:
      - "5432:5432"
    restart: always
    volumes:
      - postgres_data:/var/lib/postgresql/data:rw
    env_file:
      - .env.prod

  proxy:
    build: ./proxy
    volumes:
      - static_data:/vol/static
      - media_data:/vol/media
    restart: always
    ports:
      - "8008:80"
    depends_on:
      - app

volumes:
  static_data:
  media_data:
  postgres_data:

Inside the root directory, create a proxy directory (name it whatever you want) and add a configuration file. In my case I created a default.conf file:

server {
    listen 80;

    location /static {
        alias /vol/static;
    }

    location /media {
        alias /vol/media;
    }

    location / {
        proxy_pass http://app:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Note that we use proxy_pass, not uwsgi_pass: Gunicorn speaks plain HTTP, whereas uwsgi_pass is only for application servers that speak the uwsgi binary protocol (such as uWSGI itself). The proxy_set_header lines forward the original host and client address so Django sees them instead of the proxy's.

Also, add a Dockerfile inside the proxy directory for the Nginx configuration:

FROM nginxinc/nginx-unprivileged:1-alpine

COPY ./default.conf /etc/nginx/conf.d/default.conf

You can now use expose instead of ports in the docker-compose.prod.yml file for the app service, since only the proxy needs to reach Gunicorn:

app:
  build:
    context: .
    dockerfile: Dockerfile.prod
  command: gunicorn personal.wsgi:application --bind 0.0.0.0:8000
  volumes:
    - static_data:/vol/static
    - media_data:/vol/media
  expose:
    - 8000
  restart: always
  env_file:
    - .env.prod
  depends_on:
    - app-db

Again, re-build, run, and try it out:

$ docker-compose -f docker-compose.prod.yml down -v
$ docker-compose -f docker-compose.prod.yml up -d --build
$ docker-compose -f docker-compose.prod.yml exec app python manage.py migrate --noinput
$ docker-compose -f docker-compose.prod.yml exec app python manage.py collectstatic --no-input --clear

Ensure the app is running at http://localhost:8008.

That’s it.

Thank You!

Previous: Part-1

Originally published at https://blog.budhathokisagar.com.np.
