Pagination
Every Plugipay list endpoint uses cursor-based pagination. This page covers how cursors work, the query parameters, and how to iterate large result sets safely.
Quick example
GET /v1/customers?limit=50
Response:
{
  "data": [
    { "id": "cus_01H...", ... },
    /* 49 more items */
  ],
  "meta": {
    "page": {
      "limit": 50,
      "hasMore": true,
      "nextCursor": "cur_01HXxxxxxxxxxxxxxxxxxxxxxx"
    }
  }
}
Pass the cursor to get the next page:
GET /v1/customers?limit=50&cursor=cur_01HXxxxxxxxxxxxxxxxxxxxxxx
When hasMore is false, you've reached the end.
Why cursors, not offsets
Cursor-based pagination is stable under inserts. If new resources are created between page requests, you don't see duplicates or skip items. With offset-based pagination, inserting at the top causes everyone to shift down and you re-see the item that used to be at position 50.
This matters for any job that needs a complete, duplicate-free sweep, such as analytics exports or data syncs. In a busy workspace, items are created constantly, so offset pages drift while you iterate.
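The difference is easy to demonstrate with an in-memory list. Everything below (the `pay_*` ids, the toy paging functions) is illustrative, not the real API:

```python
# Demonstrate why offsets duplicate items under inserts while cursors do not.
rows = [f"pay_{i}" for i in range(100, 0, -1)]  # newest first: pay_100 .. pay_1

def offset_page(data, offset, limit):
    return data[offset:offset + limit]

def cursor_page(data, after_id, limit):
    # A cursor means "everything older than the last item I saw".
    start = data.index(after_id) + 1 if after_id else 0
    return data[start:start + limit]

page1 = offset_page(rows, 0, 50)                # pay_100 .. pay_51
rows_after_insert = ["pay_101"] + rows          # a new payment arrives at the top

page2_offset = offset_page(rows_after_insert, 50, 50)
assert page1[-1] == page2_offset[0]             # duplicate: pay_51 appears twice

page2_cursor = cursor_page(rows_after_insert, page1[-1], 50)
assert page1[-1] not in page2_cursor            # cursor resumes cleanly at pay_50
```

The cursor walk is unaffected by the insert because it anchors on an item, not a position.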
Parameters
| Param | Default | Max | Notes |
|---|---|---|---|
| `limit` | 50 | 100 | Items per page |
| `cursor` | none | — | Opaque token from the previous response's `nextCursor` |
| `direction` | `desc` | — | `asc` (oldest first) or `desc` (newest first) |
limit is a request — we may return fewer items if a hard cap is hit, or if no more items exist.
Direction
By default, lists return newest items first. To paginate from oldest to newest:
GET /v1/customers?direction=asc&limit=50
This is mostly useful when you want to backfill chronologically (e.g., importing into a data warehouse).
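A backfill loop usually pairs `direction=asc` with a persisted checkpoint, so a restarted job can resume via `since` instead of re-walking everything. A sketch with an in-memory stand-in for the API (the `backfill` helper, `fake_fetch`, and the page size of 2 are all illustrative):

```python
def backfill(fetch, checkpoint=None):
    """Walk a list oldest-to-newest, returning items plus a resumable checkpoint."""
    params = {"direction": "asc", "limit": 100}
    if checkpoint:
        params["since"] = checkpoint  # skip everything already imported
    items, cursor = [], None
    while True:
        page = fetch({**params, "cursor": cursor} if cursor else dict(params))
        items.extend(page["data"])
        if not page["meta"]["page"]["hasMore"]:
            break
        cursor = page["meta"]["page"]["nextCursor"]
    new_checkpoint = items[-1]["createdAt"] if items else checkpoint
    return items, new_checkpoint

# Toy fetch over three in-memory rows (pages of 2 for brevity; ignores `limit`).
DATA = [{"id": f"cus_{i}", "createdAt": f"2026-05-0{i}T00:00:00Z"} for i in (1, 2, 3)]

def fake_fetch(params):
    since = params.get("since")
    rows = [r for r in DATA if since is None or r["createdAt"] > since]
    start = int(params.get("cursor") or 0)
    page, more = rows[start:start + 2], start + 2 < len(rows)
    return {"data": page,
            "meta": {"page": {"hasMore": more,
                              "nextCursor": str(start + 2) if more else None}}}

items, ckpt = backfill(fake_fetch)
print(len(items), ckpt)  # 3 2026-05-03T00:00:00Z
```

On the next run, passing the saved checkpoint back into `backfill` fetches only rows created after it.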
Filters
Most endpoints support filter parameters. Filters and pagination compose:
GET /v1/payments?status=succeeded&since=2026-05-01&limit=100&cursor=cur_xxx
The cursor encodes the filter set. Don't change filters between pages — mint a new request without a cursor instead.
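That rule is easy to enforce in client code by dropping the cursor whenever the filter set changes. A minimal sketch; the `next_request` helper is hypothetical, not part of any SDK:

```python
def next_request(filters, prev_filters=None, cursor=None):
    """Build query params for the next page, dropping the cursor if filters changed."""
    if prev_filters is not None and filters != prev_filters:
        cursor = None  # a cursor minted under different filters is invalid
    params = dict(filters)
    if cursor:
        params["cursor"] = cursor
    return params

# Same filters: the cursor is carried forward.
p = next_request({"status": "succeeded"}, {"status": "succeeded"}, "cur_abc")
assert p["cursor"] == "cur_abc"

# Changed filters: start a fresh iteration instead.
p = next_request({"status": "failed"}, {"status": "succeeded"}, "cur_abc")
assert "cursor" not in p
```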
Common filter params (per-resource specifics on each resource page):
| Param | Type | Example |
|---|---|---|
| `since` | ISO 8601 or epoch seconds | `since=2026-05-01` |
| `until` | ISO 8601 or epoch seconds | `until=2026-05-12T23:59:59Z` |
| `status` | string | `status=succeeded` |
| `customerId` | ID | `customerId=cus_xxx` |
| `metadata[<key>]` | string (exact match) | `metadata[campaign]=spring-2026` |
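When building requests by hand, note that the bracketed `metadata[<key>]` form is an ordinary query parameter and gets percent-encoded like any other. A quick sketch with Python's standard library (assuming, as is conventional, that the server decodes bracket keys transparently):

```python
from urllib.parse import urlencode

# Filters compose with pagination params in a single query string.
params = {
    "status": "succeeded",
    "since": "2026-05-01",
    "metadata[campaign]": "spring-2026",  # brackets are percent-encoded on the wire
    "limit": 100,
}
query = urlencode(params)
print(query)
# status=succeeded&since=2026-05-01&metadata%5Bcampaign%5D=spring-2026&limit=100
```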
Iterating a complete list
To consume an entire result set, loop until hasMore is false:
Node.js:
async function listAll(resource, params = {}) {
  const all = [];
  let cursor = undefined;
  do {
    const page = await client[resource].list({ ...params, limit: 100, cursor });
    all.push(...page.data);
    // nextCursor is absent on the last page, which ends the loop
    cursor = page.meta.page.nextCursor;
  } while (cursor);
  return all;
}
Python:
def list_all(client, resource, **params):
    all_items = []
    cursor = None
    while True:
        page = getattr(client, resource).list(limit=100, cursor=cursor, **params)
        all_items.extend(page['data'])
        if not page['meta']['page']['hasMore']:
            break
        cursor = page['meta']['page']['nextCursor']
    return all_items
Go:
func ListAll(ctx context.Context, c *plugipay.Client, params plugipay.ListParams) ([]plugipay.Customer, error) {
	var all []plugipay.Customer
	cursor := ""
	for {
		params.Limit = 100
		params.Cursor = cursor
		page, err := c.Customers.List(ctx, params)
		if err != nil {
			return nil, err
		}
		all = append(all, page.Data...)
		if !page.Meta.Page.HasMore {
			break
		}
		cursor = page.Meta.Page.NextCursor
	}
	return all, nil
}
Most SDKs have a higher-level helper (e.g., client.customers.listAll(...)) that does this for you. See the per-SDK docs.
Set `limit` to the max (100) when iterating a full list. Fewer round trips mean lower latency and less rate-limit pressure. Use smaller pages for interactive UI lists.
Streaming large lists
For very large result sets (10K+ items), iterating with pagination can be slow. Two alternatives:
- Use a more aggressive filter. Instead of `GET /v1/payments?limit=100` for everything, filter by date: `GET /v1/payments?since=2026-05-01&until=2026-05-08`. Multiple parallel filter buckets process faster than one sequential cursor walk.
- Use the CSV export endpoint. Several resources have an `/export` companion that streams CSV: `GET /v1/payments/export?since=2026-01-01`. The response is a CSV stream, not JSON. No pagination needed. Ideal for monthly reconciliation and reporting.
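The parallel-bucket idea can be sketched as splitting a date range into week-long `(since, until)` windows and fetching each in its own thread. `fetch_bucket` here is a stand-in that would, in real code, walk one filtered cursor to exhaustion:

```python
from concurrent.futures import ThreadPoolExecutor
from datetime import date, timedelta

def week_buckets(start, end):
    """Split [start, end) into 7-day (since, until) windows."""
    buckets, cur = [], start
    while cur < end:
        nxt = min(cur + timedelta(days=7), end)
        buckets.append((cur.isoformat(), nxt.isoformat()))
        cur = nxt
    return buckets

def fetch_bucket(window):
    since, until = window
    # Stand-in for walking GET /v1/payments?since=...&until=... to exhaustion.
    return [f"pay_from_{since}"]

buckets = week_buckets(date(2026, 5, 1), date(2026, 5, 22))
with ThreadPoolExecutor(max_workers=4) as pool:
    payments = [p for chunk in pool.map(fetch_bucket, buckets) for p in chunk]
print(buckets[0], len(payments))  # ('2026-05-01', '2026-05-08') 3
```

Keep the worker count modest so the parallel buckets don't trade cursor-walk latency for rate-limit errors.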
Stable ordering
Within a page, results are ordered by createdAt descending by default (newest first). Cursor pagination preserves this order across pages.
Some endpoints support custom ordering via an orderBy parameter:
GET /v1/payments?orderBy=amount&direction=desc
When ordering by something other than time, ties are broken by createdAt to keep ordering deterministic.
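The effective ordering behaves like a compound sort key. A sketch (assuming the `createdAt` tie-breaker follows the same descending direction; the sample payments are invented):

```python
payments = [
    {"id": "pay_a", "amount": 500, "createdAt": "2026-05-02T10:00:00Z"},
    {"id": "pay_b", "amount": 500, "createdAt": "2026-05-03T10:00:00Z"},
    {"id": "pay_c", "amount": 900, "createdAt": "2026-05-01T10:00:00Z"},
]

# orderBy=amount&direction=desc, with createdAt breaking the 500-amount tie.
ordered = sorted(payments, key=lambda p: (p["amount"], p["createdAt"]), reverse=True)
print([p["id"] for p in ordered])  # ['pay_c', 'pay_b', 'pay_a']
```

ISO 8601 timestamps sort correctly as plain strings, which is why the key needs no date parsing.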
Cursor lifetime
Cursors are opaque tokens that encode:
- The filter parameters of the originating request
- The position within the result set
They're valid for 24 hours after issuance. Trying to use a stale cursor returns 400 invalid_cursor — restart the iteration with a fresh request.
For long-running export jobs, save the data as you go and don't rely on cursor longevity.
Total counts
We don't return total counts in list responses (computing them is expensive on large tables and changes between pages anyway). To approximate a count:
- Use the `/export` endpoint for an exact count (CSV row count).
- Use Reports endpoints for aggregated counts: `GET /v1/reports/payments-summary?since=2026-05-01`.
- For very small counts, iterate the list with `limit=1` and check `hasMore`.
Common errors
400 invalid_cursor
The cursor is expired, malformed, or doesn't match the current filter set. Restart the iteration without a cursor.
400 invalid_limit
limit must be between 1 and 100. Out-of-range values are rejected (we don't silently clamp).
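A resilient iterator can recover from `invalid_cursor` by throwing away partial results and restarting from the first page. A sketch with an in-memory fake; the `InvalidCursorError` class, `list_with_restart` helper, and `fake` fetch are all illustrative:

```python
class InvalidCursorError(Exception):
    """Stand-in for the API's 400 invalid_cursor response."""

def list_with_restart(fetch, params, max_restarts=1):
    """Iterate all pages; on an invalid cursor, restart once from scratch."""
    items, cursor, restarts = [], None, 0
    while True:
        try:
            page = fetch({**params, "cursor": cursor} if cursor else dict(params))
        except InvalidCursorError:
            if restarts >= max_restarts:
                raise
            restarts += 1
            items, cursor = [], None  # drop partial results; restart without a cursor
            continue
        items.extend(page["data"])
        if not page["meta"]["page"]["hasMore"]:
            return items
        cursor = page["meta"]["page"]["nextCursor"]

# Fake endpoint: the first cursor it hands out has gone stale by the next call.
state = {"raised": False}

def fake(params):
    cur = params.get("cursor")
    if cur == "stale":
        state["raised"] = True
        raise InvalidCursorError()
    if cur is None:
        nxt = "good" if state["raised"] else "stale"
        return {"data": ["pay_1"], "meta": {"page": {"hasMore": True, "nextCursor": nxt}}}
    return {"data": ["pay_2"], "meta": {"page": {"hasMore": False, "nextCursor": None}}}

result = list_with_restart(fake, {"limit": 100})
print(result)  # ['pay_1', 'pay_2']
```

Restarting re-fetches pages you already saw, so long-running jobs should prefer checkpointing by `since` over relying on a single cursor walk.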
Next
- Idempotency
- Conventions — envelope and field rules.
- Resources — per-resource list endpoints and their filters.