In this tutorial, we'll deploy a Django application with Docker, Postgres, Gunicorn and Nginx configurations.
Prerequisites
First, ensure the following is installed on your machine:
- Python 3.7 or higher (I've used Python 3.8.9)
- Python pip
- Git and a GitHub account
- Docker and docker-compose
Let's jump directly into dockerizing the Django web application. I'll assume you already have a Django project set up on your system; if not, the sketch below creates a minimal one.
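A minimal way to scaffold a project from scratch (the project name "config" here is just a placeholder; use your own):

# Create and activate a virtual environment
python3 -m venv venv
source venv/bin/activate

# Install Django and start a bare project named "config" in the current directory
pip install django
django-admin startproject config .

# Freeze dependencies so the Dockerfile can install them later
pip freeze > requirements.txt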
Docker
After installing Docker, add a Dockerfile to the root directory of your project:
FROM python:3.8.9-alpine
WORKDIR /app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY . .
Here, we used an Alpine-based Docker image for Python 3.8.9. Then we set two environment variables:
- PYTHONDONTWRITEBYTECODE (which prevents writing pyc files)
- PYTHONUNBUFFERED (which prevents buffering stdout and stderr)
Then we upgraded pip, copied requirements.txt into the working directory, installed the requirements, and finally copied the project into the working directory (/app).
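Since COPY . . copies everything in the project root into the image, it's also common (though optional) to add a .dockerignore file next to the Dockerfile so local artifacts stay out of the image. A minimal sketch:

# .dockerignore (optional) - keep local artifacts out of the image
venv/
__pycache__/
*.pyc
.git/
db.sqlite3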
Now, create a docker-compose.yml file in the project root and add the services:
version: '3.5'

services:
  app:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - static_data:/vol/web
    ports:
      - "8000:8000"
    restart: always
    env_file:
      - ./.env

volumes:
  static_data:
Create a .env file at the root (the same directory that contains docker-compose.yml) and add:
DEBUG=1
SECRET_KEY=foo
DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
Update the DEBUG and ALLOWED_HOSTS variables in settings.py:
import os  # add this at the top of settings.py if it isn't imported already

DEBUG = int(os.environ.get("DEBUG", default=0))
ALLOWED_HOSTS = os.environ.get("DJANGO_ALLOWED_HOSTS").split(" ")
'DJANGO_ALLOWED_HOSTS' should be a single string of hosts with a space between each. For example: 'DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]'
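To see exactly what that produces, the split(" ") call in settings.py simply turns the single space-separated string into a Python list:

# Illustration of how DJANGO_ALLOWED_HOSTS is parsed in settings.py
hosts = "localhost 127.0.0.1 [::1]".split(" ")
print(hosts)  # ['localhost', '127.0.0.1', '[::1]']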
In the docker-compose file, build: . tells Compose to build the image from the Dockerfile we created in the project root.
Now, build the image:
$ docker-compose build
Use sudo if needed. Run the container once the image is built:
$ docker-compose up -d
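To verify the container is running, you can check its status and tail the development server logs with the standard Compose commands:

$ docker-compose ps
$ docker-compose logs -f app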
Postgres
Add a new service to the docker-compose.yml file and update the Django database settings to use Psycopg2. Let's name the new service app-db:
version: '3.5'

services:
  app:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - static_data:/vol/web
    ports:
      - "8000:8000"
    restart: always
    env_file:
      - ./.env
    depends_on:
      - app-db

  app-db:
    image: postgres:12-alpine
    ports:
      - "5432:5432"
    restart: always
    volumes:
      - postgres_data:/var/lib/postgresql/data:rw
    env_file:
      - .env
    # You can also set the environment variables directly as follows
    # (the POSTGRES_* variables must be named exactly as shown):
    # environment:
    #   - POSTGRES_HOST_AUTH_METHOD=trust
    #   - POSTGRES_USER=sagar
    #   - POSTGRES_PASSWORD=********
    #   - POSTGRES_DB=portfolio_db
    #   - TZ=Asia/Kathmandu

volumes:
  static_data:
  postgres_data:
We will just use the official Postgres Docker image; postgres_data is a named volume, so the database files persist across container restarts. Next, extend the .env file with the Postgres settings (the DB_ENGINE entry is what switches Django from SQLite to Postgres in settings.py below):
DEBUG=1
DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
DB_ENGINE=django.db.backends.postgresql
POSTGRES_HOST_AUTH_METHOD=trust
POSTGRES_USER=sagar
POSTGRES_PASSWORD=********
POSTGRES_DB=portfolio_db
POSTGRES_HOST=app-db
POSTGRES_PORT=5432
Update the DATABASES dict in settings.py:
DATABASES = {
    'default': {
        'ENGINE': os.environ.get("DB_ENGINE", "django.db.backends.sqlite3"),
        'NAME': os.environ.get("POSTGRES_DB", os.path.join(BASE_DIR, "db.sqlite3")),
        'USER': os.environ.get("POSTGRES_USER", "default_user"),
        'PASSWORD': os.environ.get("POSTGRES_PASSWORD", "default_password"),
        'HOST': os.environ.get("POSTGRES_HOST", "localhost"),
        'PORT': os.environ.get("POSTGRES_PORT", "5432"),
    }
}
Here, the database is configured from the environment variables we just defined. Take note of the default values: if the Postgres variables aren't set, Django falls back to SQLite. Update the Dockerfile to install the packages required to build Psycopg2:
FROM python:3.8.9-alpine
WORKDIR /app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
#psycopg2 dependencies installation
RUN apk update
RUN apk add postgresql-dev gcc python3-dev musl-dev libc-dev linux-headers
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY . .
Add psycopg2 to requirements.txt. Make sure that every time you install a package, it is added to the requirements.txt file (pip freeze > requirements.txt).
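For reference, a minimal requirements.txt for this setup might look like the following; the exact packages and version pins depend on your project, so treat these as placeholders:

# requirements.txt (example pins; adjust to your project)
Django>=3.0,<4.0
psycopg2>=2.8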
Build the new image with two services:
$ docker-compose up -d --build
Then run the migrations:
$ docker-compose exec app python manage.py migrate --noinput
Operations to perform:
Apply all migrations: admin, auth, blogs, contenttypes, django_summernote, portfolio, sessions, works
Running migrations:
Applying contenttypes.0001_initial... OK
Applying auth.0001_initial... OK
Applying admin.0001_initial... OK
Applying admin.0002_logentry_remove_auto_add... OK
Applying admin.0003_logentry_add_action_flag_choices... OK
Applying contenttypes.0002_remove_content_type_name... OK
Applying auth.0002_alter_permission_name_max_length... OK
Applying auth.0003_alter_user_email_max_length... OK
Applying auth.0004_alter_user_username_opts... OK
Applying auth.0005_alter_user_last_login_null... OK
Applying auth.0006_require_contenttypes_0002... OK
Applying auth.0007_alter_validators_add_error_messages... OK
Applying auth.0008_alter_user_username_max_length... OK
Applying auth.0009_alter_user_last_name_max_length... OK
Applying auth.0010_alter_group_name_max_length... OK
Applying auth.0011_update_proxy_permissions... OK
Applying blogs.0001_initial... OK
Applying django_summernote.0001_initial... OK
Applying django_summernote.0002_update-help_text... OK
Applying portfolio.0001_initial... OK
Applying sessions.0001_initial... OK
Applying works.0001_initial... OK
Applying works.0002_auto_20200325_1330... OK
Applying works.0003_auto_20200325_1411... OK
Applying works.0004_auto_20200325_1413... OK
Applying works.0005_auto_20200325_1417... OK
Applying works.0006_remove_work_image... OK
Applying works.0007_work_image... OK
If you hit any error, run docker-compose down -v to remove the volumes along with the containers, then re-build and run the migrations again.
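In full, the reset sequence looks like this:

$ docker-compose down -v
$ docker-compose up -d --build
$ docker-compose exec app python manage.py migrate --noinput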
Ensure the database tables were created by opening a psql shell inside the app-db container (prefix with sudo if your setup requires it):
$ docker-compose exec app-db psql --username=sagar --dbname=portfolio_db
psql (12.7)
Type "help" for help.
portfolio_db=# \c portfolio_db
You are now connected to database "portfolio_db" as user "sagar".
portfolio_db=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
--------------+-------+----------+------------+------------+-------------------
portfolio_db | sagar | UTF8 | en_US.utf8 | en_US.utf8 |
postgres | sagar | UTF8 | en_US.utf8 | en_US.utf8 |
template0 | sagar | UTF8 | en_US.utf8 | en_US.utf8 | =c/sagar +
| | | | | sagar=CTc/sagar
template1 | sagar | UTF8 | en_US.utf8 | en_US.utf8 | =c/sagar +
| | | | | sagar=CTc/sagar
(4 rows)
portfolio_db=# \dt
List of relations
Schema | Name | Type | Owner
--------+------------------------------+-------+-------
public | auth_group | table | sagar
public | auth_group_permissions | table | sagar
public | auth_permission | table | sagar
public | auth_user | table | sagar
public | auth_user_groups | table | sagar
public | auth_user_user_permissions | table | sagar
public | blogs_category_post | table | sagar
public | blogs_comment | table | sagar
public | blogs_post | table | sagar
public | blogs_post_categories | table | sagar
public | django_admin_log | table | sagar
public | django_content_type | table | sagar
public | django_migrations | table | sagar
public | django_session | table | sagar
public | django_summernote_attachment | table | sagar
public | portfolio_contact | table | sagar
public | works_category_work | table | sagar
public | works_work | table | sagar
public | works_work_categories | table | sagar
(19 rows)
portfolio_db=#
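Optionally, you can also create a Django superuser inside the running container to confirm that writes to the Postgres database work:

$ docker-compose exec app python manage.py createsuperuser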
Now, add an entrypoint.sh script inside a scripts directory:
#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z "$POSTGRES_HOST" "$POSTGRES_PORT"; do
        sleep 0.1
    done
    echo "PostgreSQL started"
fi

# It's okay to run the following flush and migrate commands in development
# (when debug mode is on), but they are not recommended for production:
# python manage.py flush --no-input
# python manage.py migrate

exec "$@"
Update the Dockerfile to copy the scripts directory and set file permissions, and also add a DATABASE variable to the .env file. (The nc command used in the script ships with BusyBox in the Alpine image, so no extra package is needed; exec "$@" at the end hands control over to whatever command docker-compose passes to the container.)
FROM python:3.8.9-alpine
WORKDIR /app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
#psycopg2 dependencies installation
RUN apk update
RUN apk add postgresql-dev gcc python3-dev musl-dev libc-dev linux-headers
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY . .
COPY ./scripts /scripts
RUN chmod +x /scripts/*
RUN mkdir -p /vol/web/media
RUN mkdir -p /vol/web/static
RUN chmod -R 755 /vol/web
ENTRYPOINT ["/scripts/entrypoint.sh"]
Edit the .env file:
DEBUG=1
DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
DB_ENGINE=django.db.backends.postgresql
POSTGRES_HOST_AUTH_METHOD=trust
POSTGRES_USER=sagar
POSTGRES_PASSWORD=********
POSTGRES_DB=portfolio_db
# app-db is the service name from docker-compose.yml
POSTGRES_HOST=app-db
POSTGRES_PORT=5432
DATABASE=postgres
Now, re-build the images, run the containers, and try localhost:8000 in your browser:
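$ docker-compose down
$ docker-compose up -d --build
$ curl http://localhost:8000   # or open localhost:8000 in a browser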
Next: Django, Postgres, Gunicorn, Nginx with Docker (Part 2)