Grinch bots penalized with enriched security data and our edge cloud platform

Brooks Cunningham

Senior Security Strategist, Fastly

In a previous post, I discussed how to enrich requests with information from our edge cloud platform before they reach your origin. Your origin or central cloud holds a wealth of capabilities and centralized information, and that was just one example of how to leverage it for better visibility. How awesome would it be to also share security decisions from the origin back to the Fastly edge?

In this post, I will go over how you can use information from an origin response to add an abusive IP address to our penalty box. We've been touting the promise of security at the edge, and this is just one example of what it can do. Let's jump in.

‘Tis the season for “grinch bots”: automated tools that check inventory and buy it up before consumers have a chance to purchase. Beyond frustrating the end user who finds that inventory now unavailable, grinch bot tooling can also hurt your company by sending a high volume of requests to constantly check inventory. These requests will often go to your origin, which keeps the centralized, long-lived record of available inventory.

There are several options for dealing with this type of origin traffic problem, including caching the inventory at the edge and purging the cache only when the inventory changes. Another option is to block this traffic outright. To do this, your origin simply needs to observe undesirable behavior from a given IP address, return a custom response code such as 406 (for example, using our next-gen WAF), and then let the Fastly edge block that IP address for a configurable duration.

Edge rate limiting and the penalty box

Edge rate limiting is currently in Limited Availability. Documentation covering the concepts that we will be using for the remainder of this article may be found on our Developer Hub.

Here’s how it works: when the edge receives a 406 or 206 response (this is customizable), client.ip is added to the penalty box. Subsequent requests from that IP received by edge nodes are then blocked at the edge with a 429 response. Expect a short delay between when client.ip is added to the penalty box and when edge nodes start taking action on requests. For more details on this behavior, please see our rate limiting documentation.

While we could use the native edge rate limiting functionality to count requests, there are many cases where we want a more exact count of client requests before taking an action such as blocking. This is easily accomplished with our next-gen WAF deployed at your origin, or you can use existing origin logic to do the counting and return the blocking status code.
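To make the origin-side counting concrete, here is a minimal sketch in Python. The class name, threshold, window, and status codes are all illustrative assumptions, not Fastly defaults; the idea is simply that the origin tracks recent requests per client IP and starts answering 406 once a limit is exceeded, which is the signal the edge snippet below reacts to.

```python
import time
from collections import defaultdict, deque

# Hypothetical origin-side counter: track recent request timestamps per
# client IP and return 406 once a threshold is exceeded within a window.
# The limit, window, and codes here are illustrative, not Fastly defaults.
class OriginRateCounter:
    def __init__(self, limit=100, window_seconds=60):
        self.limit = limit
        self.window = window_seconds
        self.hits = defaultdict(deque)  # client IP -> recent timestamps

    def status_for(self, client_ip, now=None):
        """Record a request and return 406 when the IP is over the limit, else 200."""
        now = time.monotonic() if now is None else now
        q = self.hits[client_ip]
        # Drop timestamps that have aged out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        q.append(now)
        return 406 if len(q) > self.limit else 200
```

Your real origin logic may already count requests for other reasons; all that matters to the edge is that the blocking status code comes back on the response.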

This works via two functions: ratelimit.penaltybox_add and ratelimit.penaltybox_has. The offending IP address is added to the penalty box with ratelimit.penaltybox_add, and subsequent requests are checked for a penalty box entry with ratelimit.penaltybox_has. When an edge node finds the penalty box entry, the request is blocked.
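The add/has contract can be modeled as a tiny time-to-live map. This Python sketch only illustrates the semantics described above; the real functions run on Fastly edge nodes, and the class and method names here are invented for illustration.

```python
import time

# Toy model of the penalty box semantics: add() records an entry with a
# time-to-live, and has() reports whether an unexpired entry exists.
# This is an illustration of the contract, not Fastly's implementation.
class PenaltyBox:
    def __init__(self):
        self.entries = {}  # identifier -> expiry timestamp

    def add(self, identifier, ttl_seconds, now=None):
        now = time.monotonic() if now is None else now
        self.entries[identifier] = now + ttl_seconds

    def has(self, identifier, now=None):
        now = time.monotonic() if now is None else now
        expiry = self.entries.get(identifier)
        return expiry is not None and now < expiry
```

In the VCL below, the entry is the client IP and the TTL is 10 minutes, but as noted later, any arbitrary identifier works.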

The full snippet is below. This snippet was placed in the init placement. Before implementing this snippet into production, you should test to ensure that your application will work as expected.

# Snippet rate-limiter-v1-origin_waf_response-init-init : 100
# Begin rate-limiter Fastly Edge Rate Limiting
penaltybox rl_origin_waf_response_pb {}
ratecounter rl_origin_waf_response_rc {}


table rl_origin_waf_response_methods {
  "GET": "true",
  "PUT": "true",
  "TRACE": "true",
  "POST": "true",
  "HEAD": "true",
  "DELETE": "true",
  "PATCH": "true",
  "OPTIONS": "true",
}


# Start rate-limiter Fastly Edge Rate Limiting
sub vcl_recv {
    # call rl_origin_waf_response_process;
      if (req.restarts == 0 && fastly.ff.visits_this_service == 0
      && table.contains(rl_origin_waf_response_methods, req.method)
      ) {
        if (ratelimit.penaltybox_has(rl_origin_waf_response_pb, client.ip)) {
            error 829 "Rate limiter: Too many requests for origin_waf_response";
        }
      }
}
# End rate-limiter Fastly Edge Rate Limiting


# Start check backend response status code
sub vcl_fetch {
    # perform check based on the origin response. 206 status makes for easier testing and reporting
    if (beresp.status == 406 || beresp.status == 206) {
        log "406 or 206 response";
        ratelimit.penaltybox_add(rl_origin_waf_response_pb, client.ip, 10m);
    }
}
# End check backend response status code


# Start useful troubleshooting based on the response
sub vcl_deliver {
  if (req.http.fastly-debug == "1") {
    set resp.http.X-ERL-PenaltyBox-has = ratelimit.penaltybox_has(rl_origin_waf_response_pb, client.ip);
  }
}
# End useful troubleshooting based on the response


sub vcl_error {
    # Snippet rate-limiter-v1-origin_waf_response-error-error : 100
    # Begin rate-limiter Fastly Edge Rate Limiting - default edge rate limiting error - origin_waf_response
  if (obj.status == 829 && obj.response == "Rate limiter: Too many requests for origin_waf_response") {
    set obj.status = 429;
    set obj.response = "Too Many Requests";
    set obj.http.Content-Type = "text/html";
    synthetic.base64 "PGh0bWw+Cgk8aGVhZD4KCQk8dGl0bGU+VG9vIE1hbnkgUmVxdWVzdHM8L3RpdGxlPgoJPC9oZWFkPgoJPGJvZHk+CgkJPHA+VG9vIE1hbnkgUmVxdWVzdHMgdG8gdGhlIHNpdGUgLSBGYXN0bHkgRWRnZSBSYXRlIExpbWl0aW5nPC9wPgoJPC9ib2R5Pgo8L2h0bWw+Cg==";
    return(deliver);
  }
    # End rate-limiter Fastly Edge Rate Limiting - default edge rate limiting error - origin_waf_response
}

How can I see a demo?

One of my favorite load generation tools is siege, which I’ve used in the following example. The command below triggers a 206 response back to the Fastly edge, which then adds my IP address to the penalty box once the functionality is implemented. You will need to update the command with your domain and have the edge rate limiting feature enabled for your account. As stated earlier, expect a number of requests to reach the origin while edge nodes start checking the penalty box for the entry.

! siege https://[yourdomain.foo.bar]/206 -t 15s
** SIEGE 4.1.1
** Preparing 25 concurrent users for battle.
The server is now under siege...
HTTP/1.1 206     0.17 secs:      19 bytes ==> GET  /206
HTTP/1.1 206     0.17 secs:      19 bytes ==> GET  /206
HTTP/1.1 206     0.17 secs:      19 bytes ==> GET  /206


#### Removed for brevity ####


HTTP/1.1 429     0.09 secs:     151 bytes ==> GET  /206
HTTP/1.1 429     0.08 secs:     151 bytes ==> GET  /206
HTTP/1.1 429     0.09 secs:     151 bytes ==> GET  /206
HTTP/1.1 429     0.08 secs:     151 bytes ==> GET  /206


Lifting the server siege...
Transactions:		        3797 hits
Availability:		      100.00 %
Elapsed time:		       14.58 secs
Data transferred:	        0.50 MB
Response time:		        0.09 secs
Transaction rate:	      260.43 trans/sec
Throughput:		        0.03 MB/sec
Concurrency:		       22.65
Successful transactions:         386
Failed transactions:	           0
Longest transaction:	        0.54
Shortest transaction:	        0.06

Give it a try

Grinch bots are just one use case, but the penalty box can hold any arbitrary identifier. That opens up a number of other applications for this approach, including:

  • A high volume of requests from suspicious, non-revenue-generating ASNs.

  • Large numbers of compromised credentials used against login endpoints.

  • A high volume of requests producing 400- or 500-level response codes.

This is just one example of what you can do with enriched requests and Fastly. Have other examples? Tweet at us! And stay tuned for more examples of the power of security at the edge.