ratelimit

package module
v1.4.0
Published: Oct 27, 2025 License: GPL-2.0, GPL-3.0 Imports: 11 Imported by: 0

README

Rate Limiter


A high-performance, non-blocking token bucket rate limiter designed for HTTP projects. Features accurate retry-after headers, multiple cache backends (in-memory and Redis), and flexible IP normalization to prevent circumvention via IPv6 source address spoofing.

Features

  • Token Bucket Algorithm: Smooth rate limiting with burst capacity
  • Non-blocking: Fast, lock-free operations
  • Accurate Retry-After: Precise timing for client backoff
  • Multiple Cache Backends: In-memory LRU and Redis support
  • IP Normalization: Configurable subnet masking for IPv4/IPv6
  • Flexible Keying: Support for source IP, destination, and custom identifiers

Installation

go get github.com/blacklanternsecurity/ratelimit

Quick Start

Basic Usage
package main

import (
    "fmt"
    "net/http"

    "github.com/blacklanternsecurity/ratelimit"
)

func main() {
    // Create a rate limiter with default settings
    limiter := ratelimit.NewDefault()
    
    http.HandleFunc("/api", func(w http.ResponseWriter, r *http.Request) {
        // Check rate limit for client IP
        retryAfter := limiter.IsAllowed(r.RemoteAddr, r.Host, "api")
        
        if retryAfter > 0 {
            w.Header().Set("Retry-After", fmt.Sprintf("%d", retryAfter))
            w.WriteHeader(http.StatusTooManyRequests)
            w.Write([]byte("Rate limit exceeded"))
            return
        }
        
        // Process request
        w.Write([]byte("OK"))
    })
    
    http.ListenAndServe(":8080", nil)
}
Custom Configuration
config := ratelimit.Config{
    Capacity:           1000,              // Cache capacity
    WindowSize:         60 * time.Second,  // Rate limit window
    MaxReqs:            100,               // Requests per window (this doubles as the max burst capacity)
    Expiration:         5 * time.Minute,   // Cache entry TTL
    IPv4SubnetMask:     24,                // /24 subnet for IPv4
    IPv6SubnetMask:     56,                // /56 subnet for IPv6
}

limiter := ratelimit.New(config)
Redis Backend
redisConfig := ratelimit.RedisConfig{
    Addr:      "localhost:6379",
    Password:  "",
    DB:        0,
    KeyPrefix: "ratelimit:",
    TTL:       5 * time.Minute,
}

limiter, err := ratelimit.NewWithRedis(config, redisConfig)
if err != nil {
    log.Fatal(err)
}
defer limiter.Close()

Configuration Options

Option           Type            Description                                                   Default
Capacity         int             Maximum number of rate limiters to cache                     100000
WindowSize       time.Duration   Time window for rate limiting                                1 * time.Second
MaxReqs          int             Maximum requests allowed per window (doubles as max burst)   10
Expiration       time.Duration   How long to keep rate limiters in cache                      1 * time.Hour
IPv4SubnetMask   int             IPv4 subnet mask for IP normalization                        32 (no squashing)
IPv6SubnetMask   int             IPv6 subnet mask for IP normalization                        56

API Reference

IsAllowed(ip, destination, identifier string, maxReqs ...int) int

Checks if a request should be allowed and returns the retry-after time in seconds. This method counts the request against the user's quota.

Parameters:

  • ip: Source IP address (optional, automatically normalized)
  • destination: Destination hostname/URL (optional, automatically normalized)
  • identifier: Custom identifier for the rate limit (e.g. API key, user ID, etc.)
  • maxReqs: Optional override for max requests (uses config default if not provided)

Returns:

  • 0: Request is allowed
  • >0: Request blocked, retry after N seconds
CheckAllowed(ip, destination, identifier string, maxReqs ...int) int

Same as IsAllowed except it doesn't count the request against the user's quota. Useful for health checks, monitoring, or pre-flight requests.

Clear()

Removes all rate limiting entries from the cache, effectively resetting all rate limits.

Examples:

// Clear all rate limits
limiter.Clear()

// After clearing, all clients start fresh
retryAfter := limiter.IsAllowed("192.168.1.1", "api.example.com", "user123")
// retryAfter will be 0 (allowed) since the rate limit was reset
Examples
// Basic rate limiting
retryAfter := limiter.IsAllowed("192.168.1.1", "api.example.com", "user123")

// Custom rate limit for specific endpoint
retryAfter := limiter.IsAllowed("192.168.1.1", "api.example.com", "login", 5) // 5 requests per window

// Global rate limiting (no destination)
retryAfter := limiter.IsAllowed("192.168.1.1", "", "global")

// Check quota without consuming it (useful for health checks)
retryAfter := limiter.CheckAllowed("192.168.1.1", "api.example.com", "health")

IP Normalization

The rate limiter supports configurable IP normalization to group similar IPs:

// Group all IPs in /24 subnet (192.168.1.0/24)
config.IPv4SubnetMask = 24

// Group all IPs in /56 subnet for IPv6
config.IPv6SubnetMask = 56
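
The effect of this masking can be sketched with the standard library alone: two addresses in the same configured subnet collapse to one key, so they share a single rate-limit bucket. The normalize helper below is hypothetical (the package's actual normalization logic is internal), but net/netip makes the idea concrete.

```go
package main

import (
	"fmt"
	"net/netip"
)

// normalize masks an IP to its configured subnet so nearby addresses
// share one rate-limit bucket. Illustrative only; not the package's API.
func normalize(ip string, v4Bits, v6Bits int) (string, error) {
	addr, err := netip.ParseAddr(ip)
	if err != nil {
		return "", err
	}
	bits := v6Bits
	if addr.Is4() {
		bits = v4Bits
	}
	// Prefix zeroes every bit beyond the first `bits`, giving the subnet base.
	prefix, err := addr.Prefix(bits)
	if err != nil {
		return "", err
	}
	return prefix.Addr().String(), nil
}

func main() {
	a, _ := normalize("192.168.1.57", 24, 56)
	b, _ := normalize("192.168.1.200", 24, 56)
	fmt.Println(a, b, a == b) // both collapse to the /24 base address
}
```

For IPv6 this matters because a single subscriber typically controls an entire /64 (or larger) prefix; masking to /56 prevents rotating through source addresses to dodge the limit.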

Cache Backends

In-Memory Cache
  • Uses LRU eviction with TTL
  • Fast, no external dependencies
  • Suitable for single-instance deployments
Redis Cache
  • Distributed rate limiting across multiple instances
  • Persistent across restarts
  • Requires Redis server
// Custom cache implementation (must satisfy the full Cache interface,
// including Clear)
type CustomCache struct {
    // Your implementation
}

func (c *CustomCache) Get(key uint64) *ratelimit.ClientLimiter { /* ... */ }
func (c *CustomCache) Set(key uint64, value *ratelimit.ClientLimiter) { /* ... */ }
func (c *CustomCache) Delete(key uint64) { /* ... */ }
func (c *CustomCache) Clear() { /* ... */ }
func (c *CustomCache) Close() error { /* ... */ }

limiter := ratelimit.NewWithCache(config, &CustomCache{})

HTTP Middleware Example

func RateLimitMiddleware(limiter *ratelimit.Limiter, maxReqs int) func(http.Handler) http.Handler {
    return func(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            retryAfter := limiter.IsAllowed(r.RemoteAddr, r.Host, "api")

            if retryAfter > 0 {
                w.Header().Set("Retry-After", fmt.Sprintf("%d", retryAfter))
                // maxReqs is passed in explicitly because the limiter's
                // internal limit is unexported
                w.Header().Set("X-RateLimit-Limit", fmt.Sprintf("%d", maxReqs))
                w.Header().Set("X-RateLimit-Remaining", "0")
                http.Error(w, "Rate limit exceeded", http.StatusTooManyRequests)
                return
            }

            next.ServeHTTP(w, r)
        })
    }
}

Development

Running Tests
# Run all tests
go test -v ./...

# Run with race detection
go test -race -v ./...

# Run specific test
go test -v -run TestBasicRateLimit
Linting

The project uses standard Go tools for code quality:

# Check for common issues
go vet ./...

# Check formatting
gofmt -s -l .

# Auto-format code
gofmt -s -w .

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Cache

type Cache interface {
	// Get retrieves a value from the cache
	// Returns the ClientLimiter if found, nil if not found
	Get(key uint64) *ClientLimiter

	// Set stores a value in the cache
	// The value will be stored with the specified key
	Set(key uint64, value *ClientLimiter)

	// Delete removes a value from the cache
	// If the key doesn't exist, this is a no-op
	Delete(key uint64)

	// Clear removes all entries from the cache
	Clear()

	// Close cleans up any resources used by the cache
	// Returns an error if cleanup fails
	Close() error
}

Cache represents a key-value cache interface for rate limiting

type ClientLimiter

type ClientLimiter struct {
	// contains filtered or unexported fields
}

type Config

type Config struct {
	Capacity       int
	WindowSize     time.Duration
	MaxReqs        int
	Expiration     time.Duration
	IPv4SubnetMask int
	IPv6SubnetMask int
}

type Limiter

type Limiter struct {
	// contains filtered or unexported fields
}

func New

func New(config Config) *Limiter

func NewDefault

func NewDefault() *Limiter

NewDefault creates a limiter with sensible defaults

func NewWithCache

func NewWithCache(config Config, cache Cache) *Limiter

NewWithCache creates a limiter with a custom cache implementation

func NewWithRedis

func NewWithRedis(config Config, redisConfig RedisConfig) (*Limiter, error)

NewWithRedis creates a limiter with Redis cache

func (*Limiter) CheckAllowed added in v1.4.0

func (l *Limiter) CheckAllowed(ip, destination, identifier string, maxReqs ...int) int

CheckAllowed performs the same rate limiting check as IsAllowed but doesn't count against the user's requests

func (*Limiter) Clear added in v1.3.0

func (l *Limiter) Clear()

Clear removes all entries from the rate limiter cache

func (*Limiter) Close added in v1.2.0

func (l *Limiter) Close() error

Close closes the underlying cache and cleans up resources

func (*Limiter) IsAllowed

func (l *Limiter) IsAllowed(ip, destination, identifier string, maxReqs ...int) int

IsAllowed performs the rate limiting check and decrements the user's requests

func (*Limiter) IsAllowedWithDecrement added in v1.4.0

func (l *Limiter) IsAllowedWithDecrement(ip, destination, identifier string, decrement bool, maxReqs ...int) int

type MemoryCache

type MemoryCache struct {
	// contains filtered or unexported fields
}

MemoryCache implements the Cache interface using an in-memory LRU cache

func NewMemoryCache

func NewMemoryCache(capacity int, expiration time.Duration) *MemoryCache

NewMemoryCache creates a new memory cache with the specified capacity and expiration

func (*MemoryCache) Clear added in v1.3.0

func (m *MemoryCache) Clear()

Clear removes all entries from the cache

func (*MemoryCache) Close

func (m *MemoryCache) Close() error

Close cleans up any resources used by the cache

func (*MemoryCache) Delete

func (m *MemoryCache) Delete(key uint64)

Delete removes a value from the cache

func (*MemoryCache) Get

func (m *MemoryCache) Get(key uint64) *ClientLimiter

Get retrieves a value from the cache

func (*MemoryCache) Set

func (m *MemoryCache) Set(key uint64, value *ClientLimiter)

Set stores a value in the cache

type RedisCache

type RedisCache struct {
	// contains filtered or unexported fields
}

RedisCache implements the Cache interface using Redis

func NewRedisCache

func NewRedisCache(config RedisConfig) (*RedisCache, error)

NewRedisCache creates a new Redis cache with the specified configuration

func (*RedisCache) Clear added in v1.3.0

func (r *RedisCache) Clear()

Clear removes all entries from the cache by deleting all keys with the prefix

func (*RedisCache) Close

func (r *RedisCache) Close() error

Close closes the Redis connection

func (*RedisCache) Delete

func (r *RedisCache) Delete(key uint64)

Delete removes a value from Redis

func (*RedisCache) Get

func (r *RedisCache) Get(key uint64) *ClientLimiter

Get retrieves a value from Redis

func (*RedisCache) Set

func (r *RedisCache) Set(key uint64, value *ClientLimiter)

Set stores a value in Redis

type RedisConfig

type RedisConfig struct {
	Addr      string        // Redis server address (e.g., "localhost:6379")
	Password  string        // Redis password (empty for no auth)
	DB        int           // Redis database number
	KeyPrefix string        // Prefix for all keys (e.g., "ratelimit:")
	TTL       time.Duration // Default TTL for keys
}

RedisConfig holds configuration for the Redis cache
