## Rate Limits Overview
88Pay enforces rate limits to ensure fair usage and system stability.
| Endpoint | Per Minute | Per Hour |
|---|---|---|
| Token Generation | 10 requests/min | 100 requests/hour |
| Transactions | 60 requests/min | 1,000 requests/hour |
| Balance Check | 60 requests/min | 1,000 requests/hour |
Every API response includes rate limit information:
```http
HTTP/1.1 200 OK
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 45
X-RateLimit-Reset: 1737140460
```
| Header | Description |
|---|---|
| `X-RateLimit-Limit` | Total requests allowed per window |
| `X-RateLimit-Remaining` | Requests remaining in current window |
| `X-RateLimit-Reset` | Unix timestamp when the limit resets |
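You can read these headers from any response to decide whether to slow down before the window is spent. A minimal sketch using `fetch` (the base URL is illustrative; the `/api/balance` path comes from the Quick Reference table below):

```js
// Read rate limit state from a response and warn when the window is nearly exhausted
async function checkRateLimit(response) {
  const limit = Number(response.headers.get('X-RateLimit-Limit'));
  const remaining = Number(response.headers.get('X-RateLimit-Remaining'));
  const resetAt = new Date(Number(response.headers.get('X-RateLimit-Reset')) * 1000);

  if (remaining === 0) {
    console.warn(`Rate limit exhausted; window resets at ${resetAt.toISOString()}`);
  }
  return { limit, remaining, resetAt };
}

const response = await fetch('https://api.88pay.example/api/balance');
const { remaining } = await checkRateLimit(response);
```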
## When You Hit The Limit

Response:

```json
{
  "status": "Error",
  "code": 429,
  "message": "Rate limit exceeded. Please try again later.",
  "retry_after": 60
}
```

Headers:

```http
HTTP/1.1 429 Too Many Requests
Retry-After: 60
X-RateLimit-Remaining: 0
```
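On a 429, wait the number of seconds indicated by `Retry-After` before retrying. A minimal sketch, assuming a `createPayment` helper like the ones used elsewhere on this page, and that your HTTP client surfaces the header as `error.retryAfter` (an assumption, not a documented field):

```js
// Retry once after honoring the server's Retry-After header
async function callWithRetryAfter(fn) {
  try {
    return await fn();
  } catch (error) {
    if (error.status !== 429) throw error;
    const waitSeconds = error.retryAfter ?? 60; // fall back to the documented 60s window
    await new Promise(resolve => setTimeout(resolve, waitSeconds * 1000));
    return fn();
  }
}

const payment = await callWithRetryAfter(() => createPayment(data));
```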
## Optimization Strategies

### Token Caching

**Problem:** Generating a token on every request.

```js
// ❌ Bad: 100 requests = 100 tokens
for (let i = 0; i < 100; i++) {
  const token = await generateToken(); // DON'T!
  await createPayment(token);
}
```

**Solution:** Cache tokens for 50 seconds (see the `TokenCache` implementation under Quick Wins below).

```js
// ✅ Good: 100 requests = 2 tokens
const tokenManager = new TokenManager();
for (let i = 0; i < 100; i++) {
  const token = await tokenManager.getToken(); // Cached!
  await createPayment(token);
}
```

**Impact:** 98% reduction in token requests.

### Request Batching

**Problem:** Creating transactions one by one.

```js
// ❌ Bad: 100 sequential requests
for (const order of orders) {
  await createPayment(order); // Slow!
}
```

**Solution:** Batch with a concurrency limit.

```js
// ✅ Good: Concurrent batches of 5
import pLimit from 'p-limit';

const limit = pLimit(5); // Max 5 concurrent
await Promise.all(
  orders.map(order =>
    limit(() => createPayment(order))
  )
);
```

**Impact:** 80% faster, stays under the rate limit.

### Queue System

**Problem:** Burst traffic causes rate limit errors.

**Solution:** Queue requests with a rate limiter.

```js
import Bottleneck from 'bottleneck';

const limiter = new Bottleneck({
  maxConcurrent: 5, // 5 concurrent requests
  minTime: 1000     // 1 second between requests
});

// All requests go through the limiter
const payment = await limiter.schedule(() =>
  createPayment(data)
);
```

**Impact:** No rate limit errors, smooth traffic.
## Quick Wins

### 1. Cache Tokens (50s TTL)

```js
class TokenCache {
  constructor() {
    this.token = null;
    this.expiresAt = 0;
  }

  async get() {
    if (Date.now() < this.expiresAt) {
      return this.token;
    }
    this.token = await generateToken();
    this.expiresAt = Date.now() + 50000; // 50s
    return this.token;
  }
}
```
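Usage is a drop-in replacement for calling `generateToken()` directly (here `generateToken` is the same helper assumed throughout this page):

```js
// Reuse one cache instance across all requests
const tokenCache = new TokenCache();
const token = await tokenCache.get(); // fetches once, then serves from cache for 50s
```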
**Savings:** ~98% fewer token requests.

### 2. Implement Exponential Backoff

```js
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

async function withBackoff(fn, retries = 3) {
  let lastError;
  for (let i = 0; i < retries; i++) {
    try {
      return await fn();
    } catch (error) {
      if (error.status !== 429) throw error;
      lastError = error;
      const delay = Math.pow(2, i) * 1000; // 1s, 2s, 4s
      await sleep(delay);
    }
  }
  throw lastError; // retries exhausted
}
```
**Effect:** Automatic recovery from rate limits.

### 3. Monitor Rate Limit Headers

```js
class RateLimitMonitor {
  track(response) {
    // Headers arrive as strings; convert before doing math
    const remaining = Number(response.headers['x-ratelimit-remaining']);
    const limit = Number(response.headers['x-ratelimit-limit']);
    const usage = ((limit - remaining) / limit) * 100;

    if (usage > 80) {
      console.warn(`⚠️ Rate limit usage: ${usage.toFixed(0)}%`);
    }
  }
}
```
**Benefit:** Proactive alerts before hitting limits.
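The `response.headers` lookup above assumes an axios-style response object, where headers are exposed as a lower-cased plain object. Wiring the monitor into an axios response interceptor might look like this (the interceptor setup is an illustration, not part of any 88Pay SDK):

```js
import axios from 'axios';

const monitor = new RateLimitMonitor();

// Track every response automatically
axios.interceptors.response.use(response => {
  monitor.track(response);
  return response;
});
```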
## Rate Limit Calculator

Estimate your usage:

| Operation | Requests/min | With Caching |
|---|---|---|
| 100 payments | 100 token + 100 payment = 200/min ❌ | 2 token + 100 payment = 102/min ✅ |
| Balance checks every 5s | 12 balance = 12/min ✅ | Same (already optimal) |
| Webhook validation | 0 (server-to-server) | 0 |

Without token caching, the token endpoint becomes the bottleneck: each payment consumes one of the 10 token requests allowed per minute, capping you at 10 payments/min.
## Best Practices

### Do

- ✅ Cache tokens (50s TTL)
- ✅ Batch operations
- ✅ Use queues for bursts
- ✅ Monitor rate limit headers
- ✅ Implement exponential backoff

### Don't

- ❌ Generate a token per request
- ❌ Ignore rate limit headers
- ❌ Retry immediately on 429
- ❌ Run unbounded parallel requests
- ❌ Skip error handling
## Testing Rate Limits

Simulate rate limiting in development:

```js
// Mock rate limiter for testing
class MockRateLimiter {
  constructor(limit = 10) {
    this.limit = limit;
    this.requests = 0;
    this.resetAt = Date.now() + 60000;
  }

  async check() {
    if (Date.now() > this.resetAt) {
      this.requests = 0;
      this.resetAt = Date.now() + 60000;
    }
    if (this.requests >= this.limit) {
      const error = new Error('Rate limited');
      error.status = 429;
      throw error;
    }
    this.requests++;
  }
}

// Test your rate limit handling
const limiter = new MockRateLimiter(5);
for (let i = 0; i < 10; i++) {
  try {
    await limiter.check();
    console.log(`Request ${i + 1} succeeded`);
  } catch (error) {
    console.log(`Request ${i + 1} rate limited`);
  }
}
```
## Quick Reference

| Endpoint | Per Minute | Per Hour | Cache Strategy |
|---|---|---|---|
| `/api/auth/token` | 10 | 100 | 50s TTL |
| `/api/transactions/*` | 60 | 1,000 | No cache |
| `/api/balance` | 60 | 1,000 | 60s TTL recommended |
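The 60s TTL recommended for `/api/balance` can reuse the same pattern as `TokenCache`. A minimal sketch, assuming a `fetchBalance()` helper (named here for illustration, not part of the documented API):

```js
// Cache the balance for 60s so polling loops don't burn the 60/min limit
class BalanceCache {
  constructor(ttlMs = 60000) {
    this.ttlMs = ttlMs;
    this.value = null;
    this.expiresAt = 0;
  }

  async get() {
    if (Date.now() < this.expiresAt) {
      return this.value;
    }
    this.value = await fetchBalance(); // hypothetical helper
    this.expiresAt = Date.now() + this.ttlMs;
    return this.value;
  }
}
```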
## Next Steps