I'd say that protecting your webserver from abuse is pretty important. One way of protecting it is through limiting traffic, and Nginx happens to be very flexible and efficient at that!

There are many reasons for wanting to limit traffic. For my own personal use, the three most important factors are:

  • Limiting resource consumption (bandwidth, but also CPU and memory)
  • Protecting login pages from brute forcing
  • Learning how it works

Nginx allows for many strategies to limit traffic. I'll walk you through the way I've employed limit_rate and limit_req, starting with the latter.

Using limit_req to limit abuse based on IP

limit_req is great for limiting the number of requests hitting your site. It can be used in several contexts:

  • Use limit_req in the http context in nginx.conf to affect all traffic going through your Nginx server.
  • Use limit_req in the server context to limit the effect to a server block (aka "virtual host" from the Apache world).
  • Or use limit_req in the location context, which allows for granular request limiting for specific paths on different sites.

In order to use limit_req, you must first create at least one request limit zone with limit_req_zone in your http context. Think of a zone as a dictionary where you look up a key when a request hits your website. If you find the key, you check whether the key has reached its maximum allowance and either discard the request, queue it, or pass it on.

Here is a really basic example:

http {
  limit_req_zone $binary_remote_addr zone=credentials_ip:5m rate=2r/s;

  server {

    location /login {
      limit_req zone=credentials_ip burst=6 nodelay;
    }
  }
}

limit_req_zone defines a zone in which an IP address is only allowed to make two requests per second to resources within the zone. In the example, I've added one location of a specific server block to the zone with limit_req. As a consequence, any client making more than two requests per second to a URL that starts with /login in that server block will receive 503 errors for the requests that exceed the limit. However, I've also included "burst=6 nodelay". That means an IP address is actually allowed to burst 6 requests without any delay, at the cost of not being able to make further requests for the next couple of seconds, according to the zone limit of 2 requests per second. Neat!
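By the way, the default answer for rejected requests is a 503. If you'd rather reply with something more descriptive, the limit_req_status directive lets you pick another code - 429 "Too Many Requests" is a popular choice:

    location /login {
      limit_req zone=credentials_ip burst=6 nodelay;
      # Reply with 429 instead of the default 503 when the limit kicks in
      limit_req_status 429;
    }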

Additional notes to this simplified config:

  • $binary_remote_addr is the binary representation of an IP address. It takes up less space in memory than the string representation.
  • 5m is the memory the zone is allowed to consume. According to the Nginx docs, one megabyte can hold roughly 16,000 states, so 5m is more than plenty for a small site!

Using limit_req to limit abuse, but allow some IPs more headroom with whitelisting

An interesting feature of Nginx is the geo module. It gives you the ability to create variables whose values depend on the client IP address.

Another very useful Nginx feature is the map module. It gives you the ability to create variables whose values depend on another variable you specify.

Geo + Map together give you everything you need to allow whitelisted clients more headroom.

See this example:

http {
  geo $ip_limited {
    # By default, apply limit
    default 1;
    # Whitelisted IP ranges, do not apply limit
    # (placeholder ranges - substitute your own)
    192.0.2.0/24 0;
    198.51.100.0/24 0;
    203.0.113.0/24 0;
  }

  map $ip_limited $limit_by_ip {
    # Do not apply limit if $ip_limited=0
    0 "";
    # Apply limit if $ip_limited=1
    1 $binary_remote_addr;
  }

  limit_req_zone $limit_by_ip zone=credentials_ip:5m rate=2r/s;
  limit_req_zone $binary_remote_addr zone=whitelisted_ip:1m rate=500r/s;

  server {

    location /login {
      limit_req zone=credentials_ip burst=6 nodelay;
      limit_req zone=whitelisted_ip;
    }
  }
}

What happens here is as follows.

1) The client requests the login location.

2) Nginx checks the updated credentials_ip zone. The input key for the zone is now the variable $limit_by_ip instead of $binary_remote_addr from the previous example.

3) $limit_by_ip is a map that depends on the variable $ip_limited.

  • $ip_limited is a geo variable that depends on the client IP address. If the client IP is within the whitelisted IP ranges, $ip_limited will be 0. All other IPs will get the value 1.
  • The $limit_by_ip map states that if $ip_limited resolves to 0, its value should be "" (empty string). If $ip_limited resolves to 1, the value of $limit_by_ip should become the client's IP address.

4) If the client IP is whitelisted, the "credentials_ip" zone will be skipped, because the input variable to the zone, the key, is an empty string.

5) If the client IP isn't whitelisted, the "credentials_ip" zone is applied, because the input variable to the zone now becomes the IP address of the client. The IP now has a request limit of 2r/s towards the zone.

6) In any case, the next "whitelisted_ip" zone will also be applied, because it always gets a non-empty key as its input. This zone has a request limit of 500r/s.

7) The strictest set of limits from the various zones is applied in the end.

Another thing to remember is that limit_req is only inherited from a higher-level context if you don't specify limit_req in the lower context.
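To make that concrete, here's a sketch (the general_ip zone is hypothetical) where the location-level limit_req replaces, rather than adds to, the server-level one:

    server {
      # This zone applies to everything in the server block...
      limit_req zone=general_ip burst=10;

      location /login {
        # ...except here: specifying limit_req in this location means
        # nothing is inherited, so only credentials_ip applies to /login
        limit_req zone=credentials_ip burst=6 nodelay;
      }
    }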

Using limit_req to stop certain URLs from being abused

Request limit zones don't have to use IPs as the key to limit access to your server. They can use any variable, for instance URIs. When you use IPs to limit access, you say that one client can't spend too many resources within the contexts you choose with limit_req. When you use URIs as the key to limit access, you say that one resource/page on your site can't be requested too frequently.

A common use case is when a story goes viral. It doesn't help to limit client IPs from accessing the site with the viral story - there are simply too many different client IPs knocking on the door, so your limit won't be effective. Your server will die if it isn't prepared for the rush. However, if you have a limit in place that specifies how often your pages can be viewed per minute, you're a little more prepared for what is to come. This is how you do it:

Specify this in the http context:

limit_req_zone $request_uri zone=credentials_uri:10m rate=60r/m;

Note that the zone definition now is in requests per minute.

Specify this together with your ip limitations:

limit_req zone=credentials_uri burst=60 nodelay;

Note that when we use the URI as the key for a zone, we allow for much higher bursts, even on a login page. That's because it could be normal in a company for 10-20 people to be logging in at 8 a.m. You need to allow the URI to burst. And it makes sense in my head to do the maths per minute rather than per second, although for Nginx it doesn't matter.

Whenever you limit requests by IP, you should add a limit by URI too - unless you scale really, really well. If you combine both uri and ip rules, you end up with a pretty solid ruleset.
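Put together, a /login location protected by both rules could look something like this (assuming both zones are defined in the http context as shown earlier):

    location /login {
      # Per-client rule: 2r/s per IP, with a burst of 6
      limit_req zone=credentials_ip burst=6 nodelay;
      # Per-resource rule: 60r/m for the URI, with a burst of 60
      limit_req zone=credentials_uri burst=60 nodelay;
    }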

Using limit_rate to limit bandwidth usage based on IP and basic auth

The last topic in this super long post is limiting bandwidth usage. And here I'll throw in IP-based AND Basic Auth-based whitelisting too. The whitelisting is just an example; you can whitelist on virtually anything!

    # Limiting bandwidth based on users and IPs
    map "$remote_user:$ip_limited" $get_rate {
        # Default bandwidth
        default 3m;
        # If the IP is whitelisted ($ip_limited=0), open it up
        "~:0$" 20m;
        # If $remote_user is set with >=2 characters, open it up
        "~^.{2,}:" 20m;
    }

$get_rate is a map that returns 3m or 20m depending on the two variables $remote_user and $ip_limited. 3m is the default value.

  • If $ip_limited returns 0, that is whitelisted, you get 20m back.
  • If $remote_user is more than 1 character long, we know the user has completed a successful Basic Auth and $remote_user contains the username, so the map returns 20m.

The "m" stands for megabytes per second. You could use "k" for kilo or nothing for just bytes.
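If you don't need any map logic at all, a static limit works too:

    # Throttle every response in this context to 500 kilobytes per second
    limit_rate 500k;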

To apply this bandwidth limitation, simply place the following in the server or location context you'd like to limit:

set $limit_rate $get_rate;

Beautifully simple!
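For completeness, here's a sketch of how the pieces could fit together (the location and the htpasswd path are just examples). Any request that passes Basic Auth carries a non-empty $remote_user and therefore gets the 20m rate from the map:

    server {

      location /downloads {
        # Basic Auth: a successful login populates $remote_user
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;

        # Hand the mapped value (3m or 20m) to Nginx's bandwidth throttle
        set $limit_rate $get_rate;
      }
    }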
