Rate Limits Overview

88Pay enforces rate limits to ensure fair usage and system stability.

Token Generation

10 requests/min, 100 requests/hour

Transactions

60 requests/min, 1,000 requests/hour

Balance Check

60 requests/min, 1,000 requests/hour

Rate Limit Headers

Every API response includes rate limit information:
HTTP/1.1 200 OK
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 45
X-RateLimit-Reset: 1737140460
Header                   Description
X-RateLimit-Limit        Total requests allowed per window
X-RateLimit-Remaining    Requests remaining in current window
X-RateLimit-Reset        Unix timestamp when limit resets
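As a sketch, these headers can be read into a small helper before deciding whether to throttle. The `parseRateLimit` function below is illustrative, not part of the 88Pay SDK; it assumes your HTTP client exposes headers as a plain object with lowercased names.

```javascript
// Parse rate limit headers into numbers (illustrative helper; assumes
// headers arrive as a plain object with lowercased keys).
function parseRateLimit(headers) {
  return {
    limit: Number(headers['x-ratelimit-limit']),
    remaining: Number(headers['x-ratelimit-remaining']),
    // X-RateLimit-Reset is a Unix timestamp in seconds
    resetAt: new Date(Number(headers['x-ratelimit-reset']) * 1000),
  };
}

const info = parseRateLimit({
  'x-ratelimit-limit': '60',
  'x-ratelimit-remaining': '45',
  'x-ratelimit-reset': '1737140460',
});
console.log(info.remaining); // 45
```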

When You Hit The Limit

Response:
{
  "status": "Error",
  "code": 429,
  "message": "Rate limit exceeded. Please try again later.",
  "retry_after": 60
}
Headers:
HTTP/1.1 429 Too Many Requests
Retry-After: 60
X-RateLimit-Remaining: 0
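Since the wait time appears both in the `Retry-After` header and in the body's `retry_after` field, a client can resolve the delay from either. This is a sketch, not an official client: it assumes lowercased header keys and falls back to 60 seconds when neither value is present.

```javascript
// Choose how long to wait after a 429: prefer the Retry-After header,
// then the retry_after field in the JSON body, then a 60s default.
// (Sketch; assumes lowercased header keys.)
function retryDelayMs(headers, body) {
  const seconds =
    Number(headers['retry-after']) || (body && body.retry_after) || 60;
  return seconds * 1000;
}

retryDelayMs({ 'retry-after': '60' }, null); // 60000
```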

Optimization Strategies

Problem: Generating a token on every request
    // ❌ Bad: 100 requests = 100 tokens
    for (let i = 0; i < 100; i++) {
      const token = await generateToken(); // DON'T!
      await createPayment(token);
    }
Solution: Cache tokens for 50 seconds
    // ✅ Good: 100 requests = 2 tokens
    const tokenManager = new TokenManager();
    
    for (let i = 0; i < 100; i++) {
      const token = await tokenManager.getToken(); // Cached!
      await createPayment(token);
    }
Impact: 98% reduction in token requests

Quick Wins

    class TokenCache {
      constructor() {
        this.token = null;
        this.expiresAt = 0;
      }
      
      async get() {
        if (Date.now() < this.expiresAt) {
          return this.token;
        }
        
        this.token = await generateToken();
        this.expiresAt = Date.now() + 50000; // 50s
        return this.token;
      }
    }
Savings: ~98% fewer token requests
    async function withBackoff(fn, retries = 3) {
      for (let i = 0; i < retries; i++) {
        try {
          return await fn();
        } catch (error) {
          if (error.status !== 429) throw error;
          if (i === retries - 1) throw error; // out of retries
          
          const delay = Math.pow(2, i) * 1000; // 1s, 2s, 4s
          await sleep(delay);
        }
      }
    }
Effect: Automatic recovery from rate limits
    class RateLimitMonitor {
      track(response) {
        // Headers arrive as strings; coerce before doing arithmetic
        const remaining = Number(response.headers['x-ratelimit-remaining']);
        const limit = Number(response.headers['x-ratelimit-limit']);
        
        const usage = ((limit - remaining) / limit) * 100;
        
        if (usage > 80) {
          console.warn(`⚠️ Rate limit usage: ${usage.toFixed(0)}%`);
        }
      }
    }
Benefit: Proactive alerts before hitting limits

Rate Limit Calculator

Estimate your usage:
Operation                  Requests/min                        With Caching
100 payments               100 token + 100 payment = 200/min   2 token + 100 payment = 102/min
Balance checks every 5s    12 balance = 12/min                 Same (already optimal)
Webhook validation         0 (server-to-server)                0
Without token caching, the 10/min token generation limit is the bottleneck: each payment consumes one token request, so you are capped at 10 payments/min even though the transaction endpoint allows 60/min.
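The arithmetic in the table above can be sketched as a small helper. `estimatePerMinute` is a hypothetical function, not part of any SDK; it assumes one token request per payment without caching, and roughly two token refreshes per minute with a 50s cache.

```javascript
// Estimate requests/min for N payments per minute, with and without
// a 50s token cache (hypothetical helper mirroring the table above).
function estimatePerMinute(payments) {
  const withoutCache = payments * 2; // 1 token + 1 payment each
  const withCache = payments + 2;    // ~2 token refreshes per minute
  return { withoutCache, withCache };
}

estimatePerMinute(100); // { withoutCache: 200, withCache: 102 }
```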

Best Practices

Do

✅ Cache tokens (50s TTL)
✅ Batch operations
✅ Use queues for bursts
✅ Monitor rate limit headers
✅ Implement exponential backoff

Don't

❌ Generate token per request
❌ Ignore rate limit headers
❌ Retry immediately on 429
❌ Run unbounded parallel requests
❌ Skip error handling
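"Use queues for bursts" can be as simple as chaining requests with a fixed gap. The `RequestQueue` below is a minimal sketch (not an 88Pay utility): it serializes tasks and spaces them by `60000 / limit` milliseconds so a burst drains at the allowed rate instead of tripping the limit.

```javascript
// Minimal request queue: runs tasks one at a time, spaced so a burst
// stays under a per-minute limit. Sketch only; a production queue
// would also bound its size and handle cancellation.
class RequestQueue {
  constructor(limitPerMinute) {
    this.intervalMs = 60000 / limitPerMinute;
    this.chain = Promise.resolve();
  }

  enqueue(fn) {
    const result = this.chain.then(fn);
    // After each task (success or failure), wait before the next one
    this.chain = result
      .catch(() => {})
      .then(() => new Promise((resolve) => setTimeout(resolve, this.intervalMs)));
    return result;
  }
}
```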

Testing Rate Limits

Simulate rate limiting in development:
// Mock rate limiter for testing
class MockRateLimiter {
  constructor(limit = 10) {
    this.limit = limit;
    this.requests = 0;
    this.resetAt = Date.now() + 60000;
  }
  
  async check() {
    if (Date.now() > this.resetAt) {
      this.requests = 0;
      this.resetAt = Date.now() + 60000;
    }
    
    if (this.requests >= this.limit) {
      const error = new Error('Rate limited');
      error.status = 429;
      throw error;
    }
    
    this.requests++;
  }
}

// Test your rate limit handling
const limiter = new MockRateLimiter(5);

for (let i = 0; i < 10; i++) {
  try {
    await limiter.check();
    console.log(`Request ${i + 1} succeeded`);
  } catch (error) {
    console.log(`Request ${i + 1} rate limited`);
  }
}

Quick Reference

Endpoint               Per Minute    Per Hour    Cache Strategy
/api/auth/token        10            100         50s TTL
/api/transactions/*    60            1,000       No cache
/api/balance           60            1,000       60s TTL recommended

Next Steps