Before we get into the thick of things, I want to clarify that “web server” and “web server software” aren’t the same thing. You can compare a web server to a kitchen and web server software to kitchen staff. When a customer orders a “meal” from the web server, the web server software prepares and serves the “dish.”
In other words, a web server includes the hardware necessary to host websites, while web server software processes requests to access web content.
NGINX is an open-source web server software. It can be used for caching, reverse proxies, and load balancing.
The term “web server” has become a standard shortcut for web server software, so don’t get confused. For clarity, I’ll also refer to web servers simply as servers. Now, let’s step back into the kitchen.
Unless you’re in a top-notch restaurant, the kitchen staff can’t prepare every cuisine you fancy — some chefs specialize in certain cuisines. You can compare the different types of web servers to these chefs. And let me tell you, NGINX is one hell of a cook!
Understanding NGINX Basics
When you purchase a web hosting plan, you have to “hire” a chef for your kitchen. Some chefs are versatile and can prepare multiple cuisines (like Apache HTTP Server), while others can handle more customers at a time and can also mop the floor if necessary (like NGINX)!
You’re probably salivating by now, so let’s drop the food-related comparisons. Your mastery of NGINX has just begun.
What Is a Web Server?
When you enter a URL like https://www.hostingadvice.com in your web browser (such as Microsoft Edge) and press the “Enter” key, your browser queries a Domain Name System (DNS) server to determine the IP address associated with HostingAdvice.
The browser then establishes a TCP connection to that IP address and sends an HTTP request over it. This is where web server software steps onto the football pitch (it’s my favorite sport, shush).
The web server will look up the requested resources and interact with databases if necessary. After processing the request, it generates an HTTP response, which includes the requested data.
These processes take place in a jiffy. After all, the HostingAdvice website should load on your screen within a matter of seconds.
Try to visualize these background processes — it’s satisfying! I know I’ve captured your attention. NGINX and Apache HTTP Server are the best web servers in the business, so let’s compare them in detail.
NGINX vs. Apache: A Quick Comparison
NGINX is a smart contender, as it can handle multiple requests simultaneously within a single thread. In contrast, Apache HTTP Server is more rigid, traditionally dedicating a thread or process to each request.
When you piece things together, NGINX’s event-driven approach makes it ideal for serving static files (such as product images) and for high-traffic situations.
On the other hand, Apache’s process-driven approach is better suited for applications that require flexibility (such as eCommerce websites) and dynamic content (such as product recommendations). It’s difficult to determine which is outright better, so I’ll let you decide for yourself:
Criteria | NGINX | Apache HTTP Server |
---|---|---|
Architecture | Event-driven | Process-driven |
Performance | Suitable for high-traffic scenarios | May struggle with many simultaneous connections |
Resource Usage | Low memory and CPU footprint | Higher memory usage |
Scalability | Excellent scalability | Scales well but may require more resources and tuning |
Configuration | Straightforward | Highly flexible with extensive modules |
Community Support | Supportive, growing community | Long-standing, influential community |
Depending on the scale of your business, you can use NGINX and Apache HTTP Server in a complementary manner on different servers. While both web servers can deliver static and dynamic content, they specialize in only one — more on this soon.
NGINX’s Event-Driven Architecture
To better understand the event-driven nature of NGINX’s request processing, I’m going to refer to the chef analogy again. Instead of sitting idle while waiting for pasta to boil, the chef can chop veggies for a salad or grill chicken, maximizing efficiency.
NGINX works like that skilled chef — it can handle numerous requests simultaneously without wasting time or resources.
Benefits:
- Lower memory usage: It can handle hundreds of thousands of concurrent requests (yes, that’s how efficient it is) with minimal memory overhead.
- Better concurrency: If one event takes too much time, it simply moves on to the next one (it’s asynchronous).
- Optimized resource utilization: Techniques like caching and load balancing let NGINX make the most of the available hardware.
You may be wondering how NGINX can handle so many concurrent requests. NGINX runs a small set of worker processes, each managing all of its incoming requests on a single-threaded event loop, unlike Apache HTTP Server, which traditionally dedicates a thread to every request. You can connect the dots (hint: think of its asynchronous behavior).
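For reference, here’s where that model is configured at the top of “nginx.conf” (a minimal sketch; the values are illustrative defaults, not tuned recommendations):

worker_processes auto;           # one single-threaded worker per CPU core

events {
    worker_connections 1024;     # each worker's event loop can juggle this many connections
}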
Key Features of NGINX
Earlier, I mentioned that, alongside being a Michelin-starred chef (it really is that good), NGINX can mop the kitchen floor if necessary. By that, I meant it isn’t limited to just being a chef; it can also act as a reverse proxy and load balancer.
If you want to enjoy the best of both worlds — NGINX and Apache HTTP Server — you’ll be particularly interested in its role as a reverse proxy.
HTTP Web Server
NGINX is specifically optimized for serving static content like CSS, JavaScript, images, and HTML, and in my opinion, it is the best web server for it.
You already know the mechanisms behind its workings, so I’m not going to reiterate them. Instead, I will talk about how NGINX manages keepalive connections.
As the name suggests, a “keepalive” connection allows your web browser to make multiple requests to a server over the same TCP connection.
Given how NGINX operates on a single-threaded event loop, it can efficiently handle these requests without compromising your user experience. Two birds, one stone.
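If you want to tune this behavior, the relevant directives live in the “http” block. Here’s a minimal sketch with illustrative values:

http {
    keepalive_timeout 65;        # keep an idle client connection open for 65 seconds
    keepalive_requests 1000;     # serve up to 1,000 requests over one connection
}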
Reverse Proxy
The best thing since sliced bread is proxy attendance. If you want to use NGINX and Apache HTTP Server on two different servers, consider setting up NGINX as a reverse proxy for dynamic content requests.
This basically means whenever your NGINX server receives a dynamic content request, it will forward the request to the Apache server, which will handle it. You can also proxy requests for dynamic content to application servers like PHP, Python, or Node.js.
Benefits:
- Seamless load distribution
- Improved security (masking backend servers)
- Better SSL termination
We’re probably going to kill a lot of birds at this rate!
Load Balancing
This brings me smoothly to NGINX’s role as a load balancer. NGINX supports numerous load balancing algorithms, which means you can use it to distribute incoming client requests across multiple backend servers.
Load Balancing Algorithms:
- Round-robin: Evenly distributes incoming requests across all available servers.
- Least connection: Sends incoming requests to the available server with the fewest active connections.
- IP hash: Uses your IP address to enable a sticky session.
Have you ever been unexpectedly logged out from a website or app? This might happen if sticky sessions aren’t enabled in a load-balanced environment. While the name seems funny (and awkward), sticky sessions, AKA session persistence, ensure your requests are routed to the same backend server.
Please enable sticky sessions in NGINX!
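Here’s a minimal sketch of sticky sessions via the IP hash method (the backend hostnames are placeholders):

upstream my_app {
    ip_hash;                          # requests from the same client IP go to the same server
    server backend1.example.com;
    server backend2.example.com;
}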
SSL Termination and HTTPS Support
I hope a few questions popped into your mind when I listed “better SSL termination” as a benefit of using NGINX as a reverse proxy.
What is SSL termination, and why is it important? When you send an HTTPS request to a server (through your browser), NGINX decrypts the encrypted data and forwards the plain request to the backend servers. This allows the servers to focus on doing what they do best — processing your requests in double-quick time.
Speaking of decryption, if you want your requests to be encrypted in the first place, always access websites with a valid SSL/TLS certificate (URLs that start with “https”).
If you’re a prospective website owner, I suggest using Let’s Encrypt for a free SSL certificate.
How to use Let’s Encrypt for free SSL certificates:
- Choose your domain name.
- Install Certbot (select the right web server and operating system combination!).
- Log in to your server via Secure Shell (SSH) to run Certbot.
- Run the command “sudo certbot --nginx” to request a certificate.
- Follow all the prompts with eagle-eyed vision.
- Let’s Encrypt will then run a background check to verify your ownership of the domain. Don’t worry; it’s routine.
- Once you’re verified, Certbot will obtain your SSL certificate and configure NGINX to use it.
Don’t forget to set up auto-renewal! Once everything is done, check your website to see if the certificate is working properly — your domain’s URL should start with “https.”
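Certbot’s packages typically set up a renewal cron job or systemd timer for you; you can confirm that renewal works with a dry run:

sudo certbot renew --dry-run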
HTTP/2 and gRPC Support
Last but not least, NGINX supports modern web protocols like HTTP/2 and gRPC. I’m going to compare a keepalive connection with multiplexing to explain how significant this is.
In a keepalive connection, the focus is on keeping the connection alive for (potentially) multiple requests from the same client (your web browser). In multiplexing, the goal is to allow multiple requests to be sent over the same connection simultaneously.
In other words, multiplexing is meaner and faster, and it is a key HTTP/2 feature. Keepalive connections, on the other hand, are often used in HTTP/1.1. Other benefits of HTTP/2 include header compression and overall faster loading times.
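Enabling HTTP/2 in NGINX is a one-word change to the “listen” directive on a TLS-enabled server block. Here’s a hedged sketch (newer NGINX releases also offer a standalone “http2 on;” directive):

server {
    listen 443 ssl http2;    # the "http2" flag turns on multiplexing for this listener
    server_name example.com;
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
}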
The Google Remote Procedure Call (gRPC) protocol is just as interesting. Many businesses are now building apps in a modular manner, with each module referred to as a microservice.
These microservices focus on specific business functions (like user authentication and payment processing) and can communicate with other services over a network. Now, the communication part is where the gRPC protocol comes into the picture. Luckily for you, NGINX provides strong support for it.
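Here’s a minimal sketch of proxying gRPC traffic with the “grpc_pass” directive (the port and the service behind it are hypothetical):

server {
    listen 80 http2;                       # gRPC rides on HTTP/2
    location / {
        grpc_pass grpc://localhost:50051;  # forward gRPC calls to a local service
    }
}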
Setting Up and Configuring NGINX
If you’re using a virtual private server (VPS) or dedicated server, you’ll have to install and configure NGINX yourself (unless you purchase a managed plan). In a shared hosting environment, this is usually managed by the web hosting provider. You can interact with it through the control panel to make specific changes, like adding a custom domain, setting up redirects, or managing SSL (SSH access is usually restricted).
This means in a shared hosting plan, you can install a free Let’s Encrypt SSL certificate by using your web host’s provided control panel (such as cPanel).
Installing NGINX
The easiest way to install NGINX on major operating systems like Ubuntu (a Linux distro), Windows, and macOS is through the command line interface (CLI), using a package manager where one is available, so let’s get into the thick of things.
Here are step-by-step guides to installing NGINX on these platforms:
Ubuntu | Windows | macOS |
---|---|---|
1. Update the package list: “sudo apt update” | 1. Download the latest NGINX for Windows from the official nginx.org download page | 1. Install Homebrew first: /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" |
2. Install NGINX: “sudo apt install nginx” | 2. Unzip the downloaded file to a directory of your choice. Let’s assume “C:\nginx” | 2. Install NGINX: “brew install nginx” |
3. Start NGINX: “sudo systemctl start nginx” | 3. Navigate to the directory via the CLI (“cd C:\nginx”) and run “start nginx” | 3. Start NGINX: “brew services start nginx” |
4. Enable NGINX to start at boot: “sudo systemctl enable nginx” | 4. Verify the installation by visiting “http://localhost” | 4. Verify the installation by visiting “http://localhost:8080” |
5. Verify the installation by visiting “http://localhost” in your web browser. You should see the NGINX welcome page | | |
You can also install NGINX by compiling from source, but I’ll leave that for another day!
Basic Configuration for NGINX
Now that you’ve installed NGINX on your server operating system, I’m going to walk you through how to set it up.
The NGINX configuration file, “nginx.conf,” controls the behavior of the NGINX web server and is divided into several blocks.
These blocks define how the web server interacts with incoming requests. You need to access the NGINX configuration file to configure these blocks.
In an Ubuntu environment, for example, the default location of this file is “/etc/nginx/nginx.conf”.
Breakdown of blocks:
- http: This is the top-level block in the hierarchy and encompasses all configurations related to HTTP requests.
- server: The “server” block is defined within the “http” block and specifies how to handle requests for a particular domain or set of domains.
- location: This block is defined within a “server” block and is used to configure how to handle specific Uniform Resource Identifiers (URIs) or paths.
To set up an NGINX web server, you’ll need to define your requirements in blocks of code called server blocks (within the “http” block).
Here’s an example configuration (with a single server block) for serving a static website called “example.com”:
http {
    server {
        listen 80;
        server_name example.com www.example.com;
        root /var/example.com/html;
        index index.html;

        location / {
            try_files $uri $uri/ =404;
        }
    }
}
In this example, the server block listens on port 80 and serves static content for example.com and www.example.com. In this context, “root” and “index” are key directives. You can think of directives as commands or settings you place within a block.
Note: If you’re working on a remote server, you’ll need SSH to access it securely. Once connected via SSH, you can use the CLI to manage it. To access SSH, enter the command “ssh username@hostname” in the CLI, where username is your remote server’s username and hostname is its IP address or domain name.
Reverse Proxy Configuration
If you want to set up NGINX as a reverse proxy that forwards requests to backend application servers (such as Node.js, Python, or Ruby apps), this basic reverse proxy configuration will help you understand the variables in play.
Basic reverse proxy configuration:
server {
    listen 80;
    server_name example.com www.example.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
Here’s how the directives are used in this context:
- proxy_pass: Specifies the backend server to which requests should be forwarded.
- proxy_set_header Host: Passes your client’s original host header to the backend server.
- proxy_set_header X-Real-IP: Forwards your client’s IP address to the backend server.
When proxying, I recommend specifying directives like “proxy_buffering on” (to enable response buffering), “proxy_cache” (to implement caching), and “proxy_read_timeout” and “proxy_connect_timeout” to adjust timeout settings.
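Here’s a hedged sketch of how those directives fit together (the cache path, zone name, and timeout values are illustrative, not recommendations):

# In the "http" block: define where cached responses live and name the zone
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m max_size=1g;

server {
    listen 80;
    server_name example.com www.example.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_buffering on;            # buffer backend responses before relaying them
        proxy_cache app_cache;         # serve repeat requests from the cache
        proxy_connect_timeout 5s;      # fail fast if the backend is unreachable
        proxy_read_timeout 60s;        # tolerate slow backend responses up to 60 seconds
    }
}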
Configuring SSL for Secure Connections
If you follow the steps I listed to obtain a free SSL certificate from Let’s Encrypt, it’s likely Certbot has automatically configured your NGINX instance to enable HTTPS. You can run the “sudo nginx -t” command to check that your configuration is valid. If there are issues with the SSL configuration in your NGINX setup (a wrong certificate path, for example), the command will report an error.
Here’s how you can enable HTTPS with a basic configuration for www.example.com:
server {
    listen 443 ssl;
    server_name example.com www.example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://localhost:3000;
    }
}
In this context, the directive “ssl_certificate” points to the full chain certificate (the SSL certificate and other related components), and “ssl_certificate_key” points to the certificate’s private key.
To redirect all HTTP traffic to HTTPS, you can set up a separate server block like this:
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$host$request_uri; # Redirect all HTTP traffic to HTTPS
}
Note: You can enhance SSL performance with optimizations like OCSP stapling (Google it; it’s interesting), HTTP/2, and TLS session caching (to speed up the SSL handshake for returning clients).
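Here’s a hedged sketch of those three optimizations in one server block (the resolver address is just an example; any reachable DNS resolver works):

server {
    listen 443 ssl http2;                  # HTTP/2 enabled alongside TLS
    server_name example.com www.example.com;
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    ssl_stapling on;                       # staple the OCSP response to the TLS handshake
    ssl_stapling_verify on;                # verify the stapled response
    resolver 8.8.8.8;                      # NGINX needs a resolver to fetch OCSP responses

    ssl_session_cache shared:SSL:10m;      # share TLS sessions across worker processes
    ssl_session_timeout 10m;               # reuse cached sessions for up to 10 minutes
}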
Advanced NGINX Features and Use Cases
Do you really think I was going to skip a masterclass on how to use NGINX as a load balancer? I’m a man of principle, after all.
I could have included this in the previous section, as load balancing builds on the reverse proxy functionality, but since it requires an understanding of different algorithms and session persistence, I decided to introduce it in a separate section.
Load Balancing With NGINX
Round-robin, least connections, and IP hash are some of the load balancing algorithms you should familiarize yourself with. Round-robin is the most popular of the three. Let’s explore how a basic round-robin load balancing strategy might look.
Example of a basic round-robin load-balancing setup:
http {
    upstream my_backend {
        server backend1.example.com;
        server backend2.example.com;
        server backend3.example.com;
    }

    server {
        listen 80;
        server_name loadbalancer.example.com;

        location / {
            proxy_pass http://my_backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
In this context, the “upstream” block defines a group of backend servers, “my_backend,” for load balancing. The “proxy_pass” directive sends all incoming requests to these backend servers in a round-robin fashion.
Note: I recommend using external monitoring tools (such as Nagios or Zabbix) or custom scripts with Cron to automatically monitor the health of your backend servers and remove unhealthy nodes. If you find it difficult to develop a custom script (which is the cost-effective option), consider choosing a robust monitoring tool instead.
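For what it’s worth, open-source NGINX also ships basic passive health checks you can layer underneath those tools. Here’s a minimal sketch (the parameter values are illustrative):

upstream my_backend {
    server backend1.example.com max_fails=3 fail_timeout=30s;  # sideline a server for 30s after 3 failed attempts
    server backend2.example.com max_fails=3 fail_timeout=30s;
    server backend3.example.com backup;                        # only receives traffic when the others are down
}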
Rate Limiting and Access Control
Load balancing isn’t the only superpower in NGINX’s arsenal. Rate limiting and access control are also key features.
While NGINX can handle hundreds of thousands of concurrent requests like a pro, you should set rate-limiting rules based on IP addresses to protect your servers from abuse. A large-scale DDoS attack, for example, can exceed millions of requests per second.
I bet you don’t like working overtime, so why should your servers?
Here’s how you can set rate-limiting rules based on IP:
limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;

server {
    location / {
        limit_req zone=mylimit burst=5;
        limit_req_status 503;
    }
}
This configuration effectively limits incoming requests per IP address and protects your backend server. You can slightly modify the server block to cover other servers as well. In the same vein, you can control which IP addresses (or IP ranges) are allowed to access your server, a feature you can use to block potentially malicious traffic.
Here’s an example of restricting access by IP ranges:
location / {
    allow 192.168.1.0/24;
    deny all;
}
The location block is self-explanatory, so I won’t insult your intelligence by breaking it down step by step.
Caching With NGINX
When you open a website, you expect it to load in the blink of an eye, right? Time is of the essence, after all.
Whether you implement keepalive connections or multiplexing, every request made to your application servers can be time-consuming and might drive away potential customers. This is especially true if your website is dynamic (like one built on PHP).
This is where FastCGI caching comes in. It’s NGINX’s way of speeding up dynamic content delivery by caching responses from application servers.
Example of caching PHP-generated pages with FastCGI:
# Defined once in the "http" block: where cached responses live and the "my_cache" zone
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_cache my_cache;
    fastcgi_cache_valid 200 1h;
    fastcgi_pass 127.0.0.1:9000;
}
In this block, “fastcgi_cache_valid 200 1h” tells NGINX to cache responses with a 200 (OK) status code in “my_cache” and serve the cached copies to anyone who requests the same page over the next hour.
Mind you, FastCGI caching only covers responses from FastCGI applications (it’s tailored for PHP). To cache static and dynamic content from proxied servers, you can implement proxy caching, so I recommend using both techniques in tandem.
Microservices Architecture With NGINX
NGINX serves as a reverse proxy for microservices architectures, acting like a knowledgeable guide that sits in front of your microservices. It automatically directs incoming requests to the right service when paired with service discovery tools like “Consul” or “etcd”. This functionality makes your system more efficient, dynamic, and flexible.
NGINX shines even brighter in containerized environments, such as those managed by Docker and Kubernetes. As you scale up, NGINX adapts seamlessly, directing traffic to the available instances without manual intervention.
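Here’s a hedged sketch of path-based routing to two microservices (the service names and ports are hypothetical; in Docker, the embedded DNS resolves the container names):

upstream user_service {
    server user-service:8080;        # hypothetical user-authentication container
}

upstream payment_service {
    server payment-service:8080;     # hypothetical payment-processing container
}

server {
    listen 80;

    location /users/ {
        proxy_pass http://user_service;   # route user requests to the user microservice
    }

    location /payments/ {
        proxy_pass http://payment_service;
    }
}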
Monitoring and Optimizing NGINX Performance
Setting up your NGINX server with all necessary basic and advanced configurations isn’t enough; that’s just the beginning of your journey.
You need to monitor your server in real time using built-in tools like access and error logs, and you can add third-party tools like Prometheus, Datadog, and Grafana to take monitoring to the next level.
Using these tools will help you keep a close eye on traffic patterns and performance metrics and assist you in troubleshooting potential bottlenecks before they become critical issues.
Monitoring NGINX With Built-in Tools
If you have a small setup and enjoy coding and command line work, you can use the built-in access logs and error logs NGINX offers to monitor traffic and troubleshoot issues. Here are some of the key metrics you should monitor and the command line tools you can use for them.
Key metrics to monitor:
- Response times: You can use command line tools like “awk” or “grep” to analyze response times in the logs.
- 4xx and 5xx errors: 4xx (client-side) and 5xx (server-side) errors indicate issues with requests made to a server. Status codes are recorded in the access log, so you can count them with these commands:
“grep ' 4[0-9][0-9] ' /var/log/nginx/access.log | wc -l”
“grep ' 5[0-9][0-9] ' /var/log/nginx/access.log | wc -l”
- Traffic rates: You can get the number of requests made to your NGINX server by counting the lines in the access log: “wc -l < /var/log/nginx/access.log”
- Connection status: To get real-time connection metrics, you need to enable the “stub_status” module in your NGINX configuration:

server {
    location /nginx_status {
        stub_status on;
        allow 127.0.0.1;
        deny all;
    }
}
You can then access these metrics through “http://your-server/nginx_status”. Remember to regularly review logs and update your security posture.
Integrating NGINX With Monitoring Tools
I recommend using built-in NGINX tools alongside third-party tools such as Prometheus, Datadog, and Grafana for comprehensive server monitoring.
You shouldn’t rely on a single monitoring solution. Having both provides a safety net in case one fails. Of course, if your budget doesn’t permit a paid tool like Datadog (Prometheus and Grafana are open source), feel free to stick with the built-in tools.
Using these third-party tools will significantly reduce the need to write custom scripts and offer benefits such as automated data collection, alerts and notifications, advanced analytics, and streamlined integration with systems like incident management and logging solutions.
Optimizing NGINX for Performance
The final step in setting up your server for success is optimizing it for peak efficiency. After all, your goal should be to deliver a fast and seamless experience to your users. Consider implementing the following strategies:
- Tuning worker processes: By setting the right number of worker processes and connections per worker, you can significantly improve your server’s efficiency.
- Buffering and gzip compression: Enable and optimize buffering and compression to reduce bandwidth and improve client-side performance.
- Caching static assets (CSS, JS) for better page load times: Caching these elements will dramatically improve page load times for your users, leading to happier customers and a more successful business (see the sketch below).
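Worker tuning happens through the “worker_processes” and “worker_connections” directives you saw earlier. Here’s a hedged sketch of the compression and static-asset caching pieces (the paths and values are illustrative):

http {
    gzip on;                                   # compress text-based responses
    gzip_types text/css application/javascript application/json;
    gzip_min_length 1024;                      # skip responses too small to benefit

    server {
        listen 80;
        root /var/www/html;

        location ~* \.(css|js|png|jpg|svg)$ {
            expires 30d;                       # let browsers cache static assets for 30 days
            add_header Cache-Control "public";
        }
    }
}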
With these strategies in place, you’re now ready to set up a top-notch NGINX server!
NGINX vs. Other Web Servers
NGINX isn’t the only fish in the web server sea, but it’s certainly the biggest — it has a market share of around 33.8%. One of the primary reasons for its popularity is its ability to efficiently handle high-traffic websites.
I’m pretty sure you’ve used TikTok at some point in your life. To demonstrate just how powerful NGINX is, consider that TikTok uses it to handle requests from its 50 million daily active users!
While other web servers like Apache HTTP Server and LiteSpeed are preferred for relatively lower-traffic applications, they’re no slouches.
NGINX vs. Apache
I’ve already discussed the difference between NGINX and Apache HTTP Server at length, so I won’t bore you by repeating myself. I did save one juicy detail for now: Apache offers extensive logging capabilities that you can customize to your liking.
This flexibility makes it easier for you to analyze traffic patterns, user behaviors, and errors. You can use this information to enhance your website’s performance and user experience.
In comparison, NGINX offers a simpler, less customizable logging configuration. Mind you, while NGINX’s logging is less flexible than Apache’s, you can still use it to derive meaningful insights for your website (as you already know). Apache is simply better in this respect.
NGINX vs. LiteSpeed
If NGINX is the OG for static content apps, LiteSpeed is the OG for dynamic content apps (only on paper); both web servers operate on an event-driven architecture and are exceptional in their own right.
You may be thinking: if LiteSpeed is that good at serving dynamic content, why is NGINX preferred over it? This table should help address your question:
Criteria | NGINX | LiteSpeed |
---|---|---|
Cost | Free | The free version of LiteSpeed, OpenLiteSpeed, is limited; its paid version, LiteSpeed Enterprise, offers better performance optimizations and crucial advanced features |
Community | Large community | Smaller community with fewer resources, tutorials, and support options |
Configurability | Highly flexible configuration file format | Supports fewer complex scenarios |
Hosting Support | Often the default web server option | Fewer web hosts offer it |
This is just the tip of the iceberg. If you’re looking for a more friendly user experience with decent performance and sufficient built-in customizations, LiteSpeed (OpenLiteSpeed) might be the right choice for you.
I don’t recommend spending money on LiteSpeed Enterprise when you can leverage the extensive capabilities of NGINX for free.
NGINX: A High-Traffic Web Server Solution
NGINX is quite easily the best solution for high-traffic websites. As a refresher, consider using NGINX and Apache HTTP Server together, particularly for dynamic content management. Here’s a simple NGINX server block that proxies requests to an Apache server:
server {
    listen 80;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
You should be able to decipher the NGINX syntax by now! You can also use NGINX to load balance requests across multiple Apache servers. Additionally, with the help of FastCGI, NGINX can effectively handle static and dynamic content.
Furthermore, you can pair NGINX as a reverse proxy with a dedicated application server. The ball is in your court. Whatever you choose, NGINX won’t let you down.