Pagination

Plugipay paginates list endpoints with cursors, not page numbers. The Go SDK exposes results through a single generic type: plugipay.Page[T]. You ask for a page, look at HasMore, pass the Cursor back in to get the next one.

type Page[T any] struct {
    Data    []T     `json:"data"`
    Cursor  *string `json:"cursor"`
    HasMore bool    `json:"hasMore"`
}

Every list method returns (Page[Resource], error) — no Iterator interface, no channel by default, no surprises.

A first call

page, err := c.Customers.List(ctx, plugipay.CustomerListParams{
    Limit: ptr(50),
})
if err != nil {
    return err
}

for _, cust := range page.Data {
    fmt.Println(cust.ID, deref(cust.Email))
}

if page.HasMore {
    // there's at least one more page; page.Cursor points to it.
}
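The snippets on this page lean on two tiny generic helpers, ptr and deref, for the SDK's optional pointer fields. These are hypothetical stand-ins, not guaranteed SDK exports; check whether your SDK version already ships equivalents before defining your own:

```go
// ptr returns a pointer to v, for optional request params like Limit.
func ptr[T any](v T) *T { return &v }

// deref returns the value p points to, or T's zero value when p is nil,
// for optional response fields like Email.
func deref[T any](p *T) T {
    if p == nil {
        var zero T
        return zero
    }
    return *p
}
```

With these, ptr(50) builds the *int that CustomerListParams.Limit expects, and deref(cust.Email) prints "" instead of panicking when the field is absent.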

A few conventions:

  • Limit defaults to whatever the server picks (typically 50). Pass an explicit *int to override; max is 100 on most endpoints.
  • page.Cursor is *string because it's nil when the page is the last one. Don't dereference without a nil check.
  • page.HasMore is the authoritative "another page exists" flag. Check it, not page.Cursor != nil (they usually match, but HasMore is the contract).

Walking all pages

The standard loop:

func allCustomers(ctx context.Context, c *plugipay.Client) ([]plugipay.Customer, error) {
    var (
        all    []plugipay.Customer
        cursor *string
    )
    for {
        page, err := c.Customers.List(ctx, plugipay.CustomerListParams{
            Limit:  ptr(100),
            Cursor: cursor,
        })
        if err != nil {
            return nil, err
        }
        all = append(all, page.Data...)
        if !page.HasMore {
            return all, nil
        }
        cursor = page.Cursor
    }
}

This is fine for small-to-medium result sets. For million-row exports, see Streaming with a channel below — you usually want to process pages as they arrive rather than load all into memory.

Cursors are opaque. They're URL-safe strings that encode "where we left off" on the server side. Don't try to construct or parse them — the format is not part of the public contract.

A generic helper

If you'd rather not write the loop on every endpoint, define one helper. The SDK ships the building blocks; here's the pattern:

// ListAll calls listFn repeatedly with the previous cursor until
// HasMore is false. Returns every item concatenated.
func ListAll[T any, P any](
    ctx context.Context,
    listFn func(ctx context.Context, params P) (plugipay.Page[T], error),
    base P,
    setCursor func(*P, *string),
) ([]T, error) {
    var (
        out    []T
        cursor *string
    )
    for {
        params := base
        setCursor(&params, cursor)
        page, err := listFn(ctx, params)
        if err != nil {
            return nil, err
        }
        out = append(out, page.Data...)
        if !page.HasMore {
            return out, nil
        }
        cursor = page.Cursor
    }
}

// usage:
all, err := ListAll(
    ctx,
    c.Customers.List,
    plugipay.CustomerListParams{Limit: ptr(100)},
    func(p *plugipay.CustomerListParams, cur *string) { p.Cursor = cur },
)

It's verbose because Go generics can't (yet) reach into struct fields. Most teams skip the helper and write the loop inline — both are fine.
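To sanity-check the cursor-walk logic without touching the network, here is a self-contained sketch with a stand-in page type and an in-memory list function. Every name in it (page, walk, fakeList) is hypothetical and exists only for this demo, not in the SDK:

```go
import "strconv"

// page mirrors the shape of plugipay.Page[T].
type page[T any] struct {
    Data    []T
    Cursor  *string
    HasMore bool
}

// walk runs the standard loop against any cursor-based list function.
func walk[T any](listFn func(cursor *string) (page[T], error)) ([]T, error) {
    var (
        out    []T
        cursor *string
    )
    for {
        p, err := listFn(cursor)
        if err != nil {
            return nil, err
        }
        out = append(out, p.Data...)
        if !p.HasMore {
            return out, nil
        }
        cursor = p.Cursor
    }
}

// fakeList serves items in pages of size; its "cursor" is just a
// stringified offset, standing in for the server's opaque token.
func fakeList(items []int, size int) func(*string) (page[int], error) {
    return func(cursor *string) (page[int], error) {
        start := 0
        if cursor != nil {
            start, _ = strconv.Atoi(*cursor)
        }
        end := start + size
        if end > len(items) {
            end = len(items)
        }
        next := strconv.Itoa(end)
        return page[int]{Data: items[start:end], Cursor: &next, HasMore: end < len(items)}, nil
    }
}
```

walk(fakeList([]int{1, 2, 3, 4, 5, 6, 7}, 3)) makes three calls and returns all seven items in order, which is exactly the contract the real loop relies on.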

Streaming with a channel

For large result sets, stream pages over a channel and process them as they arrive:

func StreamCustomers(
    ctx context.Context,
    c *plugipay.Client,
    base plugipay.CustomerListParams,
) (<-chan plugipay.Customer, <-chan error) {
    out := make(chan plugipay.Customer)
    errs := make(chan error, 1)

    go func() {
        defer close(out)
        defer close(errs)

        params := base
        var cursor *string
        for {
            params.Cursor = cursor
            page, err := c.Customers.List(ctx, params)
            if err != nil {
                errs <- err
                return
            }
            for _, cust := range page.Data {
                select {
                case <-ctx.Done():
                    errs <- ctx.Err()
                    return
                case out <- cust:
                }
            }
            if !page.HasMore {
                return
            }
            cursor = page.Cursor
        }
    }()

    return out, errs
}

Caller:

ctx, cancel := context.WithCancel(context.Background())
defer cancel()

customers, errs := StreamCustomers(ctx, c, plugipay.CustomerListParams{Limit: ptr(100)})
for cust := range customers {
    if err := process(cust); err != nil {
        cancel()  // bail; stops the producer
        break
    }
}
// after an intentional cancel the producer reports context.Canceled;
// only treat other errors as fatal.
if err := <-errs; err != nil && !errors.Is(err, context.Canceled) {
    log.Fatal(err)
}

The ctx is wired through so cancellation propagates back to the producing goroutine — no leaks.

Endpoints that return slices (not Page[T])

A few endpoints return all results in one shot — small, bounded lists where pagination would be overkill:

  • c.Adapters.List(ctx) → []plugipay.AdapterConfig
  • c.ApiKeys.List(ctx) → []plugipay.ApiKey
  • c.Templates.List(ctx, params) → []plugipay.Template
  • c.Workspaces.List(ctx) → []plugipay.Workspace
  • c.WebhookEndpoints.List(ctx) → []plugipay.WebhookEndpoint
  • c.Account.ListSessions(ctx) → []plugipay.BrowserSession
  • c.Account.ListMembers(ctx) → []plugipay.WorkspaceMember
  • c.Account.ListLinked(ctx) → []plugipay.LinkedAccount
  • c.Billing.ListTiers(ctx) → []plugipay.BillingTier
  • c.Ledger.Balances(ctx) → []plugipay.LedgerBalance

For these, just range the slice. The Reference page marks each method with the right return type.

Filtering vs paginating

List params let you filter and paginate at the same time:

page, _ := c.Invoices.List(ctx, plugipay.InvoiceListParams{
    Limit:      ptr(100),
    Status:     ptr("paid"),
    CustomerID: ptr("cus_01HXX..."),
})

Filter server-side first, then iterate. Pulling pages with Limit: 100 and filtering client-side wastes bandwidth, and worse: you might stop iterating before you ever reach the matches that sit further back in the result set.

Order

Some list endpoints accept an Order param ("asc" or "desc"):

page, _ := c.Plans.List(ctx, plugipay.PlanListParams{
    Limit: ptr(50),
    Order: ptr("asc"),
})

Default is descending by createdAt. The cursor is order-aware, so don't change Order mid-iteration — the cursor becomes meaningless.

Picking a Limit

Rough guidance by use case:

  • Dashboard table: 50
  • Sync to your DB: 100 (the cap on most endpoints)
  • Realtime "did this just happen?" tail: 10-25

Larger pages mean fewer round trips but larger memory blips; smaller pages mean more round trips. For bulk sync, 100 and a streaming consumer is the sweet spot.
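The round-trip side of that tradeoff is easy to quantify: a full walk over n rows at page size limit costs ceil(n/limit) requests. A throwaway helper (not part of the SDK) makes the comparison concrete:

```go
// requestsNeeded is ceil(n / limit): how many list calls a full
// cursor walk over n rows takes at a given page size.
func requestsNeeded(n, limit int) int {
    return (n + limit - 1) / limit
}
```

A million-row sync at Limit: 100 is 10,000 requests; dropping to 50 doubles that to 20,000, so at a steady 10 requests/second the sync goes from roughly 17 minutes to roughly 33.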
