NGINX as a Reverse Proxy With Docker

DevOps
Published May 7, 2022 · 4 min read

If you're following the microservices architecture, or you simply have several backend endpoints to maintain, you may want to use a reverse proxy, and NGINX has you covered. In the following tutorial, we will go over how to set it up alongside Docker and get it up and running.

Motivation

You may want to use a reverse proxy / API gateway in your project for the following reasons:

  • you don't want to expose your endpoints to the client. Instead, it is preferable to have a single entry point address that hides all the internal endpoints behind it.
  • changing your endpoints or moving them around won't be an issue for the client anymore, as it will all be the proxy's job to maintain those endpoints.

NGINX Docker Container

First, you need to create a container to play the role of the proxy. We do that by running NGINX's alpine image, but since we want to configure it, we will build our own image on top of it and pass in our configuration there.

FROM nginx:alpine

COPY ./nginx.conf /etc/nginx/nginx.conf
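
Once the nginx.conf file from the next section sits next to this Dockerfile, the image can be built on its own. A minimal sketch, assuming we tag it gateway-proxy (an arbitrary example name; later we will let Docker Compose build it for us):

# Build the custom proxy image from the directory containing the Dockerfile
docker build -t gateway-proxy .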

NGINX Configuration

To configure NGINX, we need to create a configuration file named `nginx.conf`. Its content will be:

worker_processes auto;
events { }

http {
    server {
        listen 80;
        server_name _;
        
        location / {
            proxy_pass http://ui;
        }

        location /service1/ {
            proxy_pass https://service1/;
        }
        
        location /service2/path/ {
            proxy_pass https://service2/;
        }   
        
    }
}

In the above configuration, we are:

  • listening on port 80
  • defining the default path (a single slash '/')
  • defining all the paths that we want to create proxies for
  • passing each call on to the corresponding service

Defining a proxy

To define a proxy:

  1. specify the path that the gateway will receive from the client
  2. pass requests matching that path to the address of the hidden service. If you are using Docker Compose, you can simply use the hostname of the service as defined in docker-compose.yml

The syntax goes as follows:

location /path_gateway_will_receive/ {
    proxy_pass https://name-of-service/;
}

Note: be very careful with the slashes. When both the location path and the proxy_pass URL end with a slash, NGINX strips the matched prefix before forwarding the request, so make sure to use the slashes exactly as in the example above.
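
As a concrete sketch, using the placeholder names from the snippet above and assuming the gateway is reachable on localhost:

# A request sent to the gateway...
curl -i http://localhost/path_gateway_will_receive/users

# ...is forwarded inside the Docker network as
#   https://name-of-service/users
# because the matched prefix /path_gateway_will_receive/ is stripped
# before the request is passed upstream.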

Using SSL/TLS Certificate

If you want your backend to support HTTPS, it is enough to provide an SSL certificate for the API gateway and terminate TLS there. Beyond that, all the calls between the gateway and the services stay on the internal Docker network.

Generating the certificate

To generate a certificate, run the following command in your UNIX shell:
 

docker run --rm -p 80:80 -p 443:443 \
    -v /root/nginx/letsencrypt:/etc/letsencrypt \
    certbot/certbot certonly -d {your_domain} \
    --standalone -m {your_email_address} --agree-tos

The above command runs a container of the certbot Docker image, which obtains the certificate from Let's Encrypt using the standalone mode. Make sure to change the values of the domain and the email to match yours, and note that ports 80 and 443 must be free on the host while the command runs.

After you run the command, you will find a folder /root/nginx/letsencrypt/live/{your_domain}/ on the host, containing the following files:

  • cert.pem
  • chain.pem
  • fullchain.pem
  • privkey.pem

Together, these files make up your certificate chain and private key.
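
If you want to double-check what was issued, you can inspect the certificate with openssl. A quick sketch, reusing the path from the certbot command above (replace {your_domain} with your actual domain):

# Print the subject and the validity window of the issued certificate
openssl x509 -in /root/nginx/letsencrypt/live/{your_domain}/cert.pem \
    -noout -subject -dates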

We also need to generate a Diffie-Hellman parameters file, which strengthens the key exchange used by TLS. To generate it, run the following command:

openssl dhparam -out /root/nginx/dhparam.pem 4096

Then you will need to re-configure NGINX to:

  • listen on port 443 with SSL enabled, since that is the default port for HTTPS calls
  • redirect the plain HTTP traffic arriving on port 80 to HTTPS
  • add the paths of the certificate files we generated

To do so, edit nginx.conf as follows (replacing mohammed.ezzedine.me with your own domain):

worker_processes auto;
events { }

http {
    server {
        listen 80;
        server_name _;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl;
        server_name _;

        ssl_certificate /etc/letsencrypt/live/mohammed.ezzedine.me/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/mohammed.ezzedine.me/privkey.pem;
        ssl_dhparam /etc/ssl/certs/dhparam.pem;
        add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";
        ssl_trusted_certificate /etc/letsencrypt/live/mohammed.ezzedine.me/fullchain.pem;

        location / {
            proxy_pass http://ui;
        }

        location /service1/ {
            proxy_pass https://service1/;
        }
        
        location /service2/path/ {
            proxy_pass https://service2/;
        }     
        
    }
}

Docker Compose

Docker Compose is a recommended tool to orchestrate your containers. The following docker-compose.yml file does the job for our purpose. Note that it mounts the certificate files from ./cert/nginx, so either copy the files we generated earlier under /root/nginx into that folder, or adjust the volume paths to point at /root/nginx directly:

version: '3.9'

services:
  service1:
    build: path/to/service1
    hostname: service1
    networks:
      - gateway-internal
        
  service2:
    build: path/to/service2
    hostname: service2
    networks:
      - gateway-internal
    
  ui:
    build: path/to/ui
    hostname: ui
    networks:
      - client-gateway
      
  proxy:
    build: path/to/proxy
    hostname: proxy
    networks:
      - client-gateway
      - gateway-internal
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./cert/nginx/letsencrypt:/etc/letsencrypt
      - ./cert/nginx/dhparam.pem:/etc/ssl/certs/dhparam.pem
        
networks:
  client-gateway:
    name: client-gateway-network
  gateway-internal:
    name: gateway-internal-network
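
With the files above in place, a minimal bring-up and smoke test could look like the following (service and path names match the compose file; use docker-compose instead of docker compose if you are on the standalone binary):

# Build the images and start the whole stack in the background
docker compose up -d --build

# Optional: check the NGINX configuration from inside the running proxy container
docker compose exec proxy nginx -t

# Smoke-test the gateway from the host
curl -I http://localhost/               # plain HTTP answers with a 301 redirect to HTTPS
curl -kI https://localhost/service1/    # reaches service1 through the gateway; -k because the
                                        # certificate is issued for your domain, not localhost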