Overview
Databuddy implements rate limiting to ensure fair usage and system stability. Rate limits vary based on your authentication method, endpoint, and subscription plan.
Rate Limit Types
Databuddy uses different rate limits for different types of operations:
- Public API: applied to public-facing endpoints that don't require authentication.
- Standard API: 1,000 requests per minute; applied to most authenticated endpoints, including /v1/query.
- Authentication Endpoints: applied to login and authentication endpoints to protect against brute-force attacks.
- Expensive Operations: applied to resource-intensive operations such as complex analytics queries, exports, and batch operations.
- Admin Operations: applied to admin-only administrative endpoints.
Rate Limit Headers
Every API response includes rate limit information in the following headers:
- X-RateLimit-Limit: the maximum number of requests allowed in the current window
- X-RateLimit-Remaining: the number of requests remaining in the current window
- X-RateLimit-Reset: Unix timestamp (in seconds) when the rate limit window resets
HTTP/1.1 200 OK
Content-Type: application/json
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 847
X-RateLimit-Reset: 1704153600
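Because X-RateLimit-Reset is a Unix timestamp in seconds, it can be converted into a wait time before the next request. A small sketch (the helper name is illustrative, not part of the API):

```typescript
// Convert the X-RateLimit-Reset header (Unix seconds) into seconds to wait.
// Clamps to 0 if the reset time has already passed.
function secondsUntilReset(resetHeader: string, nowMs: number = Date.now()): number {
  const resetSeconds = Number.parseInt(resetHeader, 10);
  return Math.max(0, resetSeconds - Math.floor(nowMs / 1000));
}
```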
Rate Limit Algorithm
Databuddy uses a sliding window rate limiting algorithm:
- Requests are tracked in Redis using sorted sets
- Old requests outside the time window are automatically removed
- Current request count is checked against the limit
- Each successful request adds a timestamped entry
Implementation: /packages/redis/rate-limit.ts:10-52
// Pseudocode for the sliding-window rate limiter
async function checkRateLimit(identifier, limit) {
  const key = `rl:${identifier}`;
  const now = Date.now();
  const windowMs = 60000; // 60-second window

  // Remove requests that fell outside the window
  await redis.zremrangebyscore(key, 0, now - windowMs);

  // Count requests still inside the window
  const count = await redis.zcard(key);
  if (count >= limit) {
    return { success: false, remaining: 0 };
  }

  // Record this request with a unique member to avoid collisions
  await redis.zadd(key, now, `${now}:${crypto.randomUUID()}`);
  return { success: true, remaining: limit - count - 1 };
}
Handling Rate Limits
Rate Limit Exceeded Response
When you exceed the rate limit, you’ll receive a 429 status code:
{
  "success": false,
  "error": "Rate limit exceeded. Please try again later.",
  "code": "RATE_LIMIT_EXCEEDED",
  "limit": 1000,
  "remaining": 0,
  "reset": "2024-01-01T12:00:00.000Z",
  "retryAfter": 45
}
- success: always false for rate limit errors
- error: human-readable error message
- code: always RATE_LIMIT_EXCEEDED
- limit: the rate limit that was exceeded
- remaining: remaining requests (always 0 when rate limited)
- reset: ISO 8601 timestamp when the limit resets
- retryAfter: seconds to wait before retrying
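For TypeScript clients, the error body above can be modeled as an interface, with a minimal runtime guard before narrowing (a sketch; the type and function names are illustrative):

```typescript
// Shape of the 429 response body, with field names taken from the example above.
interface RateLimitError {
  success: false;
  error: string;
  code: 'RATE_LIMIT_EXCEEDED';
  limit: number;
  remaining: number;
  reset: string;      // ISO 8601 timestamp
  retryAfter: number; // seconds
}

// Minimal runtime check before narrowing an unknown body to RateLimitError.
function isRateLimitError(body: unknown): body is RateLimitError {
  return (
    typeof body === 'object' &&
    body !== null &&
    (body as { code?: unknown }).code === 'RATE_LIMIT_EXCEEDED'
  );
}
```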
Rate limit responses include a Retry-After header:
HTTP/1.1 429 Too Many Requests
Content-Type: application/json
Retry-After: 45
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1704153600
Best Practices
1. Check Rate Limit Headers
Always inspect the rate limit headers before making additional requests:
const response = await fetch('https://api.databuddy.cc/v1/query', {
  method: 'POST',
  headers: {
    'x-api-key': API_KEY,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify(queryData),
});

// headers.get() may return null, so default to '0' before parsing
const remaining = Number.parseInt(response.headers.get('X-RateLimit-Remaining') ?? '0', 10);
const reset = Number.parseInt(response.headers.get('X-RateLimit-Reset') ?? '0', 10);

if (remaining < 10) {
  console.warn(`Only ${remaining} requests remaining`);
  console.warn(`Rate limit resets at ${new Date(reset * 1000)}`);
}
2. Implement Exponential Backoff
When rate limited, use exponential backoff:
async function fetchWithRetry(url, options, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    const response = await fetch(url, options);
    if (response.status !== 429) {
      return response;
    }

    // Honor Retry-After (falling back to 60s), doubling each attempt, capped at 5 minutes
    const retryAfter = Number.parseInt(response.headers.get('Retry-After') ?? '', 10) || 60;
    const backoff = Math.min(retryAfter * Math.pow(2, i), 300);
    console.log(`Rate limited. Retrying in ${backoff}s...`);
    await new Promise((resolve) => setTimeout(resolve, backoff * 1000));
  }
  throw new Error('Max retries exceeded');
}
3. Batch Requests
The Query API supports batch requests to reduce the number of API calls:
// Instead of multiple requests
const traffic = await queryAPI({ parameters: ['traffic'] });
const devices = await queryAPI({ parameters: ['devices'] });
const geo = await queryAPI({ parameters: ['geo'] });

// Batch them into one
const data = await queryAPI({
  parameters: ['traffic', 'devices', 'geo'],
});
4. Cache Responses
Cache API responses when appropriate:
const cache = new Map();
const CACHE_TTL = 60000; // 60 seconds

async function cachedQuery(params) {
  const key = JSON.stringify(params);
  const cached = cache.get(key);

  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.data;
  }

  const data = await queryAPI(params);
  cache.set(key, { data, timestamp: Date.now() });
  return data;
}
5. Monitor Usage
Track your rate limit usage over time:
const rateLimitMetrics = [];

function trackRateLimit(response) {
  rateLimitMetrics.push({
    timestamp: Date.now(),
    remaining: Number.parseInt(response.headers.get('X-RateLimit-Remaining') ?? '0', 10),
    limit: Number.parseInt(response.headers.get('X-RateLimit-Limit') ?? '0', 10),
  });

  // Alert if consistently low over the last 10 requests
  const recent = rateLimitMetrics.slice(-10);
  const avgRemaining = recent.reduce((sum, m) => sum + m.remaining, 0) / recent.length;

  if (avgRemaining < recent[0].limit * 0.1) {
    console.warn('Approaching rate limit consistently');
  }
}
Custom Rate Limits
For enterprise customers, custom rate limits can be configured:
- Higher request limits
- Burst allowances
- Dedicated rate limit pools
Contact support to discuss custom rate limits for your use case.
Rate Limit Exemptions
Certain endpoints are exempt from rate limiting:
- Health check endpoints (/health)
- Webhook endpoints (/webhooks/*)
- Internal RPC calls (when bypassed)
See /apps/api/src/middleware/rate-limit.ts:35-37 for implementation details.