Smart Rate Limiter in Go
This article builds on the theme of the “Smart” Reverse Proxy from a previous installment, Why every DevOps needs a “Smart” Reverse Proxy written in Go. We’ll explore the ins and outs of implementing a keyed rate limiter for our Go-based reverse proxy.
My hope here is that we’ll end up with a “blueprint” for a smart reverse proxy base engine that any smart reverse proxy can build upon.
Go’s extended standard library (the golang.org/x/time/rate package) provides rate-limiting support. It’s incredibly easy to use (says someone who once had to write one in Java!) and easy to tweak to an application’s unique needs.
Part of what makes the rate limiter “smart” is that it is context aware, meaning the keys will come from a database rather than a static list. This enables the limiter to be cognizant of business logic: user accounts, API keys, etc. This is especially useful for API users with different levels of access (free, professional, enterprise, etc.).
In our example here, we will use Redis as a backend database. We will purposely not rely on Redis’ blazing speed, instead putting a caching mechanism in place, so any backend, however slow or monolithic, can be used.
So let’s get coding! We’ll start with a working reverse proxy, main.go.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"

	"github.com/gorilla/mux"
)

func adminHelloHandler(w http.ResponseWriter, req *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(http.StatusOK)
	resp := map[string]interface{}{
		"status": "smart reverse proxy admin access - hello!",
	}
	out, err := json.Marshal(resp)
	if err != nil {
		log.Println(err)
	}
	fmt.Fprint(w, string(out))
}

func testHandler(w http.ResponseWriter, req *http.Request) {
	log.Print("testHandler")
	url, err := url.Parse("http://www.lightbase.io/freeforlife")
	if err != nil {
		log.Println(err)
	}
	log.Print(url)
	proxy := httputil.NewSingleHostReverseProxy(url)
	director := proxy.Director
	proxy.Director = func(req *http.Request) {
		director(req)
		// Note: in Go the Host header is promoted to req.Host,
		// so we read it from there rather than the header map.
		req.Header.Set("X-Forwarded-Host", req.Host)
		req.Host = req.URL.Host
		req.URL.Path = url.Path
	}
	proxy.ServeHTTP(w, req)
}

// Placeholder; replaced by the real implementation in ratelimiter.go.
func limitMiddleware(next http.Handler) http.Handler {
	return next
}

func main() {
	bind := ":3080"
	adminbind := "localhost:3081"
	pool = newPool()
	conn := pool.Get()
	defer conn.Close()
	gmux := mux.NewRouter()
	gmux.HandleFunc("/test", testHandler).Methods("GET")
	adminmux := mux.NewRouter()
	adminmux.HandleFunc("/test", adminHelloHandler).Methods("GET")
	go func() {
		if err := http.ListenAndServe(adminbind, adminmux); err != nil {
			log.Fatalf("unable to start server: %s", err.Error())
		}
	}()
	log.Printf("starting smart reverse proxy on [%s], administrative endpoint: [%s] with /test", bind, adminbind)
	if err := http.ListenAndServe(bind, limitMiddleware(gmux)); err != nil {
		log.Fatalf("unable to start server: %s", err.Error())
	}
}

We’re using two multiplexers to listen on two ports: one administrative and one for the reverse proxy service. One of the tenets of the smart reverse proxy is that it will provide an API for dynamically updating business / operational logic on the fly. We accomplish this by instantiating a second “mux” that listens only on localhost inside a goroutine.
You may notice the mux we’re using is gorilla/mux. That’s because gorilla/mux is a drop-in replacement for Go’s standard http.ServeMux and provides better route matching features.
We have a default handler for /test on the administrative multiplexer which returns a JSON payload:
$ go run .&
$ curl localhost:3081/test
{"status":"smart reverse proxy admin access - hello!"}

To test that the reverse proxy is working, we implement a /test handler for the reverse proxy service that proxies one of Lightbase’s community pages.
You might notice the limitMiddleware function is empty. We’ll be using this function to provide rate limiting logic for our proxy services. The ListenAndServe function calls limitMiddleware to obtain its handler. That will be our chance to intercept requests and do our rate limiting.
http.ListenAndServe(bind, limitMiddleware(gmux));
Before starting on our rate limiter code, we’ll focus on database logic using Redis: data.go.

package main

import (
	"log"

	"github.com/gomodule/redigo/redis"
)

var pool *redis.Pool

const REDISDB = 2

type ApiKey struct {
	id          int64
	Email       string
	ApiKey      string
	ApiRate     int
	ApiMinutes  int
	ApiDisabled bool
	First       string
	Last        string
	Company     string
	ContactID   string
}

func newPool() *redis.Pool {
	return &redis.Pool{
		MaxIdle:   80,
		MaxActive: 12000,
		Dial: func() (redis.Conn, error) {
			c, err := redis.Dial("tcp", "mission:6379")
			if err != nil {
				panic(err.Error())
			}
			return c, err
		},
	}
}

func getKey(key string) (ApiRate int, ApiMinutes int) {
	log.Printf("getKey(%s)", key)
	conn := pool.Get()
	defer conn.Close()
	conn.Do("SELECT", REDISDB)
	r_currkeydata, err := redis.Values(conn.Do("HGETALL", key))
	if err != nil {
		log.Printf("getting key=%s failed. %s", key, err)
	}
	log.Printf("current key data: %s\n", r_currkeydata)
	var currkeydata ApiKey
	err = redis.ScanStruct(r_currkeydata, &currkeydata)
	if err != nil {
		log.Printf("scanning key=%s failed. %s", key, err)
	}
	if !currkeydata.ApiDisabled {
		ApiRate = currkeydata.ApiRate
		ApiMinutes = currkeydata.ApiMinutes
	}
	return
}

The pool is declared as a global variable so it can be accessed elsewhere. The newPool function instantiates the pool variable; it is called at startup in the main function.
We have a custom struct for ApiKey. This structure is designed to capture the aspects of the user information needed to create an appropriate rate limiter, including an int rate field. With this, we can define that a certain user is allowed 10 requests per second before the rate limiter kicks in.
The getKey function gives us basic read access. It assumes that our Redis value is stored as a hash, includes the fields defined in the ApiKey struct and can be accessed via the HGETALL command.
The most important lines in the getKey function are:
r_currkeydata, err := redis.Values(conn.Do("HGETALL", key))
...
err = redis.ScanStruct(r_currkeydata, &currkeydata)
Now that we can access a database of keys and rates, we’ll start a new file to house our rate limiting logic: ratelimiter.go.
package main

import (
	"log"
	"net/http"
	"sync"
	"time"

	"golang.org/x/time/rate"
)

var limiter = NewKeyRateLimiter()

type KeyRateLimiter struct {
	keys map[string]*rate.Limiter
	mu   *sync.RWMutex
}

func NewKeyRateLimiter() *KeyRateLimiter {
	i := &KeyRateLimiter{
		keys: make(map[string]*rate.Limiter),
		mu:   &sync.RWMutex{},
	}
	return i
}

func (i *KeyRateLimiter) rmKey(key string) {
	i.mu.Lock()
	defer i.mu.Unlock()
	delete(i.keys, key)
	log.Printf("rmKey(%s)", key)
}

func (i *KeyRateLimiter) AddKey(key string) *rate.Limiter {
	i.mu.Lock()
	defer i.mu.Unlock()
	ApiRate, ApiMinutes := getKey(key)
	var every rate.Limit
	if ApiRate > 0 {
		every = rate.Every(time.Duration(ApiMinutes) * time.Minute / time.Duration(ApiRate))
	} else {
		// Disabled or unknown key: a zero limit with a zero burst
		// makes Allow always return false. (rate.Every(0) would
		// return rate.Inf and allow everything.)
		every = 0
	}
	burst := ApiRate / 2
	log.Printf("every=%v, burst=%d", every, burst)
	limiter := rate.NewLimiter(every, burst)
	i.keys[key] = limiter
	log.Printf("AddKey(%s)", key)
	return limiter
}

func (i *KeyRateLimiter) GetLimiter(key string) *rate.Limiter {
	i.mu.Lock()
	limiter, exists := i.keys[key]
	if !exists {
		i.mu.Unlock()
		return i.AddKey(key)
	}
	log.Printf("GetLimiter(%s)", key)
	i.mu.Unlock()
	return limiter
}

func limitMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		apikey := ""
		keyarray, exists := r.URL.Query()["key"]
		if exists && len(keyarray) > 0 {
			apikey = keyarray[0]
		}
		if len(apikey) != 48 {
			http.Error(w, http.StatusText(http.StatusUnauthorized), http.StatusUnauthorized)
			return
		}
		log.Printf("limitMiddleware key=%s", apikey)
		limiter := limiter.GetLimiter(apikey)
		allow := limiter.Allow()
		log.Printf("limiter.Allow? %t", allow)
		if !allow {
			http.Error(w, http.StatusText(http.StatusTooManyRequests), http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}

The golang.org/x/time/rate package provides the rate limiter itself. We have a struct, KeyRateLimiter, wrapping a hashmap in which we store per-key rate limiters. We’ll use this hashmap to cache data read from the database. Of course, for production use, this hashmap would need to be managed so that stale entries can expire (the rmKey method is a start) and it doesn’t grow uncontrolled. By using the KeyRateLimiter hashmap, our database does not need to be blazingly fast.
When adding a new ratelimiter to the KeyRateLimiter, we obtain the rate and minutes from the database (see getKey function in data.go).
ApiRate, ApiMinutes := getKey(key)
So if we wanted a limiter set up to allow, say, 60 requests per minute (ApiRate = 60, ApiMinutes = 1), we use the following to calculate a rate.Limit value:
every = rate.Every(time.Duration(ApiMinutes) * time.Minute / time.Duration(ApiRate))
burst := ApiRate / 2
limiter := rate.NewLimiter(every, burst)
i.keys[key] = limiter
The rate.Limit value every is used to create a new limiter via the rate.NewLimiter function. The burst is the number of requests that may momentarily exceed the steady rate before limiting kicks in. We arbitrarily set it to ApiRate / 2, which means the user is allowed to “burst” 30 requests ahead of the steady schedule before the rate limiter puts on the brakes.
The sync.RWMutex is how we make our KeyRateLimiter thread-safe: we lock the map before updating it and unlock afterwards.
i.mu.Lock()
defer i.mu.Unlock()
OK, now we’re all set up to pull everything together in the limitMiddleware function. We’ll assume that requests come in with an HTTP GET “key” query parameter:
http://localhost:3080/SOME_API_CMD?key=ABC123…

To capture the API key supplied by the user, we use Go’s built-in URL query parameter parser:
apikey = r.URL.Query()["key"][0]
We should do some basic validation on the key so we don’t waste time on invalid requests. As a good security practice, always sanitize anything user-supplied. In our case, we check that the length of the key is 48 characters.
To perform the actual rate limiting we have:
limiter := limiter.GetLimiter(apikey)
allow := limiter.Allow()
The GetLimiter function obtains the limiter from the in-memory hash if available and, if necessary, queries the underlying Redis store to build an appropriate limiter.
The Allow function uses the standard Go rate limiting logic to calculate whether this request should be throttled. If it is not allowed, we return an appropriate HTTP status code:
if !allow {
http.Error(w, http.StatusText(http.StatusTooManyRequests), http.StatusTooManyRequests)
return
}
And that’s it: a fully functional rate limiter that is user and key aware, capable of handling different rates based on the API key being used.
So what was the “admin” port business all about? We didn’t get to that, but it’s part of the concept of the smart reverse proxy. We’ll be building on this concept in future articles, one of which will cover the admin / management port. So stay tuned, and keep Git’n it Done!