Missing or Misconfigured Rate Limiting

A rate limit defines the maximum number of requests a user or system can make to a service or endpoint within a specified timeframe. Rate limiting helps prevent misuse or overuse of resources such as API requests, database queries, or server processes.

ATTACK

When rate limiting is not set or is misconfigured, attackers can exploit the system with an unbounded stream of requests, potentially leading to a Denial of Service (DoS) attack.

IMPACT

When rate limits are either missing or misconfigured, attackers can exploit the system by flooding it with excessive requests. Here are some ways this vulnerability can be exploited:

Denial of Service (DoS) Attacks:

Without proper rate limiting, attackers can send an overwhelming number of requests to the server, causing it to slow down, become unresponsive or even crash. This prevents legitimate users from accessing the service, leading to downtime and a degraded user experience.

Brute Force Attacks:

Attackers can attempt to guess passwords, authentication tokens, or other sensitive data by making repeated requests. Without rate limits, they can automate these attacks, significantly increasing the chances of success.

Resource Exhaustion:

Excessive requests can exhaust server resources like CPU, memory or bandwidth. This can result in resource starvation, causing the application to become unavailable.

API Abuse:

If there are no rate limits on an API, malicious actors can abuse the system, extracting large amounts of data, causing congestion, or degrading the performance of the entire application.

SOLUTION

Below are detailed steps on how to implement and configure rate limits correctly to mitigate this vulnerability.

1. Identify Key Endpoints

First, identify the endpoints and resources in your application that are most likely to be targeted by abuse. This includes:

  • Login and authentication forms
  • Registration endpoints
  • Password reset functionalities
  • APIs with sensitive data access (e.g., user data, payment details)
  • Search functions or public-facing forms
These are prime targets for attacks such as brute force or resource exhaustion.

2. Implement Rate Limiting Mechanism

Once you've identified the critical endpoints, you can implement a rate limiting mechanism. There are various approaches to achieve this:

  • Fixed Window: Requests are counted within a fixed time window (e.g., 1 minute, 1 hour). Once the limit is exceeded, further requests are rejected until the window resets.
  • Sliding Window: Similar to the fixed window, but more granular: requests are counted over a rolling window that moves forward with time, which smooths out the burst of traffic that can slip through at a fixed window's boundary.
  • Token Bucket: Tokens are added to a "bucket" at a fixed rate, and each request consumes one token. If the bucket is empty, the request is rejected until more tokens accumulate (a minimal sketch of this approach follows the list).
  • Leaky Bucket: Similar to the token bucket, but designed to smooth bursts of traffic: excess requests are queued and processed at a fixed rate, so the system doesn't get overwhelmed.
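
To make the token bucket idea concrete, here is a minimal, single-process sketch in Python. The class name, refill rate, and capacity are illustrative; a production limiter would usually come from a library or a shared store rather than in-process state.

```python
import time


class TokenBucket:
    """Minimal in-memory token bucket: `rate` tokens are added per second,
    up to a maximum of `capacity` tokens."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        """Return True if a request may proceed, consuming one token."""
        now = time.monotonic()
        # Refill based on the time elapsed since the last check.
        elapsed = now - self.last_refill
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


# Example: allow roughly 5 requests per minute for a given client.
bucket = TokenBucket(rate=5 / 60, capacity=5)
if not bucket.allow():
    print("429 Too Many Requests")
```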

3. Set Appropriate Limits

Once the rate limiting mechanism is in place, it’s crucial to set the right thresholds. These limits should be based on the expected usage patterns of legitimate users.

For example:
  • API rate limits can be set to allow 100 requests per minute for a regular user.
  • Sensitive actions like login or password reset should have stricter limits, such as 5 requests per minute.
You should also account for bursts in traffic from legitimate users (e.g., during high-traffic events), so the limits shouldn’t be too restrictive.
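
To show how such thresholds might be wired up, here is a minimal sketch using Flask and the Flask-Limiter extension (the Flask-Limiter 3.x constructor signature is assumed; the route names and exact numbers are illustrative, not prescriptive):

```python
from flask import Flask
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address

app = Flask(__name__)

# Default limit for ordinary traffic, keyed by client IP address.
limiter = Limiter(
    get_remote_address,
    app=app,
    default_limits=["100 per minute"],
)


@app.route("/login", methods=["POST"])
@limiter.limit("5 per minute")  # stricter limit for a sensitive action
def login():
    return "login attempt processed"
```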

4. Implement CAPTCHA or Multi-Factor Authentication (MFA)

For critical endpoints such as login and password recovery, consider integrating CAPTCHA or Multi-Factor Authentication (MFA) alongside rate limiting. These mechanisms help thwart automated attacks, especially brute force attempts, by introducing additional layers of verification.
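
One possible shape for this is sketched below: the login flow is gated behind a CAPTCHA once a client accumulates a few failed attempts. The in-memory counter and threshold are illustrative, and the CAPTCHA verification itself would be handled by your provider's API.

```python
from collections import defaultdict

FAILED_ATTEMPTS = defaultdict(int)  # in-memory counter, keyed by client IP
CAPTCHA_THRESHOLD = 3               # illustrative threshold


def login_requires_captcha(client_ip: str) -> bool:
    """Require a CAPTCHA once this client has too many failed logins."""
    return FAILED_ATTEMPTS[client_ip] >= CAPTCHA_THRESHOLD


def record_login_result(client_ip: str, success: bool) -> None:
    if success:
        FAILED_ATTEMPTS.pop(client_ip, None)  # reset the counter on success
    else:
        FAILED_ATTEMPTS[client_ip] += 1
```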

5. Monitor and Alert for Abuse

Once rate limiting is in place, you should actively monitor the system for signs of abuse. Set up alerting mechanisms to notify administrators of unusual request patterns or when rate limits are being hit frequently.
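
A rough monitoring sketch is shown below: it counts rate-limit hits per client and raises an alert when a client trips the limiter unusually often. The `send_alert` function is a hypothetical hook for whatever alerting channel you use (email, Slack, PagerDuty, etc.), and the thresholds are illustrative.

```python
import time
from collections import defaultdict

HITS = defaultdict(list)  # client IP -> timestamps of rate-limited requests
ALERT_THRESHOLD = 20      # alert if a client is limited 20+ times...
ALERT_WINDOW = 300        # ...within 5 minutes


def send_alert(message: str) -> None:
    print(f"[ALERT] {message}")  # placeholder; wire this to real alerting


def record_rate_limit_hit(client_ip: str) -> None:
    now = time.time()
    # Keep only hits inside the alerting window, then record the new one.
    HITS[client_ip] = [t for t in HITS[client_ip] if now - t < ALERT_WINDOW]
    HITS[client_ip].append(now)
    if len(HITS[client_ip]) >= ALERT_THRESHOLD:
        send_alert(
            f"{client_ip} hit the rate limit {len(HITS[client_ip])} times "
            f"in the last {ALERT_WINDOW} seconds"
        )
```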

6. Consider Distributed Rate Limiting

For systems that are distributed across multiple servers or microservices, ensure that rate limiting is applied consistently across all components. You can use tools like Redis or specialized load balancers to synchronize rate limits across servers.
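
One common pattern, sketched here with the redis-py client, is a fixed-window counter keyed on the client and the current window, so every application server increments the same shared count. Connection details, key naming, and limits are illustrative.

```python
import time

import redis

# Shared counter so every app server enforces the same limit.
r = redis.Redis(host="localhost", port=6379)


def allow_request(client_ip: str, limit: int = 100, window_seconds: int = 60) -> bool:
    """Fixed-window counter shared across servers: True if the request may proceed."""
    window = int(time.time() // window_seconds)
    key = f"ratelimit:{client_ip}:{window}"
    count = r.incr(key)                # atomic increment across all servers
    if count == 1:
        r.expire(key, window_seconds)  # let the counter expire with its window
    return count <= limit
```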

Radhika Lad

Cyber Security Analyst

Location: Pune, India

Radhika is a web and network pentester and an enthusiast in the cyber security domain. Her primary focus is on vulnerability assessment and penetration testing of corporate networks, firewalls, web and cloud apps, and mobile apps. Coming from a finance and education background, she is passionate about moving into the world of IoT and OT cyber security. She is always learning and trying new things in the domains she likes.