Nginx: An Easy-to-Understand Overview

In this blog, we will understand the basic functionality of Nginx.

About Nginx

  • We can use Nginx as a web server and as a reverse proxy server.

  • Web Server:

    • Nginx can efficiently serve static content (HTML, CSS, JavaScript, images, etc.).

    • It is designed to handle many simultaneous connections with low resource usage, making it ideal for high-traffic websites.

  • Reverse Proxy Server:

    • Nginx can act as a gateway between clients and backend servers (such as application servers or databases).

    • It forwards client requests to backend servers while hiding their details from the client.

Features of Nginx

  1. Load balancing:
    Distributes incoming traffic across multiple backend servers to balance the load, improve performance, and provide redundancy.
    Nginx acts as a proxy that forwards each client request to one of the servers in the group (see the configuration sketch after this list).

    • Load Balancing methods:

      1. Round robin:
        This is the default load balancing method used by Nginx.
        Client requests are distributed sequentially, cycling through the servers in the group.

      2. Least connections:
        Routes traffic to the server with the fewest active connections.
        When a client request is received, Nginx checks the number of active connections for each backend server in the upstream group.

      3. IP hash:
        Nginx calculates a hash value from the client's IP address, which determines which backend server will handle the request.
        Requests from the same IP address will consistently go to the same backend server (as long as the server is available).
        It acts as a sticky session and ensures that a client is always directed to the same server.

      4. Weighted round robin:
        Each server in the upstream group is assigned a weight.
        Nginx still cycles through the servers in round-robin order but allocates requests proportionally to their weights.
        ex: A server with weight 2 gets twice as many requests as a server with weight 1.

      5. Weighted least connections:
        Combines the least connections algorithm with server weights.
        Nginx routes requests to the server with the fewest active connections while considering the server's weight, so servers with higher weights receive a proportionally larger share of connections.
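
    The minimal sketches below show how each method is selected in an upstream block (placed inside the http block); the backend hostnames and weights are placeholder values.

      # Default: round robin; requests cycle through the servers in order.
      upstream backend_rr {
          server app1.example.com;
          server app2.example.com;
      }

      # Least connections: pick the server with the fewest active connections.
      upstream backend_least {
          least_conn;
          server app1.example.com;
          server app2.example.com;
      }

      # IP hash: requests from the same client IP stick to the same server.
      upstream backend_hash {
          ip_hash;
          server app1.example.com;
          server app2.example.com;
      }

      # Weighted round robin: app1 receives roughly twice as many requests as app2.
      # Combined with least_conn, the same weights give weighted least connections.
      upstream backend_weighted {
          server app1.example.com weight=2;
          server app2.example.com weight=1;
      }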

  2. Caching:
    Caching is a core feature of Nginx.
    Nginx can cache responses from the backend servers for frequently accessed resources.
    The copies are stored temporarily and served directly to clients, which improves performance and reduces load on the backend (configuration example 5 below shows a basic setup).

  3. Security:
    Nginx provides multiple security capabilities, such as WAF integration, rate limiting, authentication mechanisms, CSP headers, reverse proxy security, SSL/TLS encryption, etc.
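
    As a small illustration, the sketch below enables basic rate limiting with the limit_req module; the zone name (per_ip) and the rate/burst values are just example values.

      http {
          # Track clients by IP address and allow about 10 requests per second per client.
          limit_req_zone $binary_remote_addr zone=per_ip:10m rate=10r/s;

          server {
              listen 80;
              location / {
                  # Allow short bursts of up to 20 extra requests; anything beyond that is rejected.
                  limit_req zone=per_ip burst=20 nodelay;
                  root /var/www/html;
                  index index.html;
              }
          }
      }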

  4. Compression & Segmentation:
    Nginx can compress responses to reduce bandwidth consumption for both the server and the client and to improve load times when serving large files such as images and videos.
    It also supports sending responses to the client in chunks instead of the entire file at once; this is referred to here as segmentation.
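
    A minimal sketch of enabling gzip compression, with illustrative values for the compression level and content types (already-compressed formats such as JPEG or MP4 gain little from gzip):

      http {
          gzip on;                     # enable gzip compression for responses
          gzip_comp_level 5;           # trade-off between CPU usage and compression ratio
          gzip_min_length 1024;        # skip compressing very small responses
          # compress common text-based content types (text/html is always compressed when gzip is on)
          gzip_types text/css application/javascript application/json image/svg+xml;

          server {
              listen 80;
              location / {
                  root /var/www/html;
                  index index.html;
              }
          }
      }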

Nginx configuration examples

  1. Using Nginx as a web server to serve static files.

     http {
         server {
             listen 80;
             # Specifies the domain name or hostname the server block will respond to.
             server_name example.com;

             location / {
                 # Specifies the directory on the server where the website's files are located.
                 root /var/www/html/example.com;
                 # Specifies the default files to serve when a directory is requested.
                 index index.html index.htm;
             }
         }
     }
    
  2. Using Nginx as a proxy server to forward traffic to other web servers or backend services.

     http {
         server {
             listen 80;
             server_name api.example.com;

             location / {
                 # nginx forwards incoming requests to the backend service
                 proxy_pass http://backend_service_ip;
                 # passes the original Host header (the original domain name, such as api.example.com) to the backend.
                 proxy_set_header Host $host;
                 # passes the original client's IP address to the backend.
                 proxy_set_header X-Real-IP $remote_addr;
                 # lets the backend determine the original request protocol (http or https).
                 proxy_set_header X-Forwarded-Proto $scheme;
             }
         }
     }
    
  3. Using Nginx as a web server to serve static files with SSL/TLS.

     http {
         server {
             listen 80;
             server_name example.com;
             # redirects all incoming HTTP requests to the equivalent HTTPS URL.
             return 301 https://$host$request_uri;
         }

         server {
             # nginx listens for HTTPS traffic on port 443, with SSL enabled.
             listen 443 ssl;
             server_name example.com;

             # specifies the path to the SSL certificate file (public key)
             ssl_certificate /path/to/ssl/certificate/nginx.crt;
             # specifies the path to the SSL certificate key file (private key)
             ssl_certificate_key /path/to/ssl/certificate/nginx.key;

             location / {
                 root /var/www/html/example.com;
                 index index.html index.htm;
             }
         }
     }
    
  4. Using Nginx as a proxy server with load balancing.

     http {
         upstream myapp1 {
             server svc1.example.com;
             server svc2.example.com;
             server svc3.example.com;
         }

         server {
             listen 80;
             server_name api.example.com;

             location / {
                 proxy_pass http://myapp1;
                 proxy_set_header Host $host;
                 proxy_set_header X-Real-IP $remote_addr;
                 proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                 proxy_set_header X-Forwarded-Proto $scheme;
             }
         }
     }
    
  5. Using Nginx as a proxy server with caching.

     http {
         # defines where cached responses are stored on disk and a shared memory zone (my_cache) for cache keys;
         # the path, zone size, and limits here are illustrative values.
         proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m max_size=1g inactive=60m;

         # defines a group of backend servers (for load balancing) that Nginx will proxy requests to.
         # by default, the round robin load balancing method is used.
         upstream myapp1 {
             # backend servers to which nginx will proxy requests.
             server svc1.example.com;
             server svc2.example.com;
             server svc3.example.com;
         }

         server {
             listen 80;
             server_name api.example.com;

             location / {
                 # forwards all incoming requests to the upstream group
                 proxy_pass http://myapp1;
                 # caches responses for this location using the zone defined above.
                 proxy_cache my_cache;
                 # caches successful responses for 10 minutes and 404s for 1 minute.
                 proxy_cache_valid 200 302 10m;
                 proxy_cache_valid 404 1m;
                 proxy_set_header Host $host;
                 proxy_set_header X-Real-IP $remote_addr;
                 proxy_set_header X-Forwarded-Proto $scheme;
             }
         }
     }
    

Nginx as a K8s ingress controller

  • What Nginx did for web servers, it is now doing for Kubernetes in the form of an Ingress controller.

  • An Ingress controller is a proxy with advanced load-balancing functionality for Kubernetes.

  • It acts as a proxy and load balancer that receives incoming traffic first and, based on the defined configuration, forwards it to the appropriate service inside the cluster.

  • The Nginx Ingress controller is one of the most popular options in Kubernetes.

In the next blog, we will see more about the Ingress controller…