Shrink third-party API responses before they hit the browser

Use SaltingIO's ?select= parameter and GJSON paths to reshape OpenAI, Unsplash, and other upstream responses at the gateway — no BFF required.

SaltingIO Team · April 18, 2026 · 5 min read · Developer Tools

The OpenAI chat completion response is around 800 bytes of JSON for a one-sentence reply. About 40 of those bytes are the content you actually want to show the user. The rest is model metadata, finish reasons, usage counters, and an id you'll never log.

Multiply that by every suggestion, every autocomplete poke, every "explain this paragraph" button in your app, and you're shipping kilobytes of JSON that your UI throws away on arrival. On a flaky train wifi connection, that's the difference between an app that feels alive and one that spins.

This is the kind of problem a backend-for-frontend quietly solves. You proxy the call through a Node service, strip the response down to the two fields the UI renders, and pipe that back to the browser. Fifteen lines of Express, a JSON.parse, a few property accesses, a res.json. It works. It's also yet another service to deploy, monitor, and wake up at 3am when the container image bloats past some memory limit.

SaltingIO has a smaller answer: a query parameter.

The ?select= parameter

Every Bridge record on SaltingIO accepts a ?select= query param that reshapes the upstream response before it returns to your caller. The path syntax is GJSON — dotted paths, array indices, and a tiny object-mapping syntax for picking multiple fields.

Here's the situation. You've created a Bridge at https://api.salting.io/r/b9f1c4e0-... that proxies to https://api.openai.com/v1/chat/completions with your API key stored in the record's headers. A normal call looks like this:

const res = await fetch('https://api.salting.io/r/b9f1c4e0-...', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: 'Name three pastel shades.' }],
  }),
});
const full = await res.json();
// full.choices[0].message.content is what you want
// full.usage, full.id, full.system_fingerprint, full.model ... are not

Add ?select=choices.0.message.content and the response collapses to a single field:

const url = 'https://api.salting.io/r/b9f1c4e0-...?select=choices.0.message.content';
const res = await fetch(url, { method: 'POST', /* same body */ });
const { data } = await res.json();
// data === "Dusty mauve, powder blue, sage green."

The wrapper is { "data": "..." } because that's the shape SaltingIO uses for transformed responses. One allocation, one string, no recursive walk through usage and fingerprint metadata.

Object mapping for the fields you actually need

Single-field extraction is fine for a one-shot prompt. For a UI that also wants to render token usage, use the object-mapping form — comma-separated {key: path} pairs inside braces:

?select={answer: choices.0.message.content, tokens: usage.total_tokens, finish: choices.0.finish_reason}

The response:

{ "data": { "answer": "Dusty mauve, powder blue, sage green.", "tokens": 42, "finish": "stop" } }

That's the complete shape your UI component needs. No dead fields. No TypeScript types that drift as OpenAI adds new properties. No parsing every ephemeral metric you don't care about. You've also narrowed the surface area of what the browser sees — if OpenAI starts returning some new internal field, it won't leak into your frontend just because nobody updated the BFF mapper.
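In code, the multi-field call is the same fetch as before, with the mapping passed through URLSearchParams so the braces, colons, and spaces are encoded for you. The Bridge UUID is a placeholder:

```javascript
// Append a ?select= mapping to a Bridge URL, letting the URL API
// handle percent-encoding of the braces and spaces.
function withSelect(base, select) {
  const url = new URL(base);
  url.searchParams.set('select', select);
  return url.toString();
}

async function ask(prompt) {
  const res = await fetch(
    withSelect(
      'https://api.salting.io/r/b9f1c4e0-...',
      '{answer: choices.0.message.content, tokens: usage.total_tokens, finish: choices.0.finish_reason}',
    ),
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        model: 'gpt-4o-mini',
        messages: [{ role: 'user', content: prompt }],
      }),
    },
  );
  const { data } = await res.json();
  return data; // { answer, tokens, finish }
}
```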

It composes with template variables

The useful thing about doing reshaping at the gateway is that ?select= travels with every other query parameter. Template variables — SaltingIO's {{placeholder}} substitution — work alongside it.

Say your Bridge URL is configured as https://api.unsplash.com/search/photos?query={{q}}&per_page=6, with the Authorization header stored once at the record level. A search-as-you-type UI needs a thumbnail URL, alt text, and the photographer's name — three fields out of about twenty per result. Call it:

GET /r/<uuid>?q=mountains&select={photos: results.#.{url: urls.thumb, alt: alt_description, by: user.name}}

The # is GJSON's iterate-array operator — it applies the inner object mapping to every entry. The response is { "data": { "photos": [...] } }, each photo exactly three fields, ready to drop into a component with no intermediate mapping layer. The Unsplash access key never touches the browser. You wrote zero lines of backend code.

One round trip, three concerns handled in one record: hide the credential, shape the URL, trim the payload.
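The browser side of that search might look like this — a sketch, with the Bridge UUID as a placeholder and q and select built as ordinary query params (building with the URL API matters here, because a raw # in a URL string would be read as a fragment):

```javascript
// Build the Bridge URL for a photo search: q feeds the {{q}} template
// variable, select trims each result to three fields.
function buildSearchUrl(base, q) {
  const url = new URL(base);
  url.searchParams.set('q', q);
  url.searchParams.set(
    'select',
    '{photos: results.#.{url: urls.thumb, alt: alt_description, by: user.name}}',
  );
  return url.toString();
}

async function searchPhotos(q) {
  const res = await fetch(buildSearchUrl('https://api.salting.io/r/<uuid>', q));
  const { data } = await res.json();
  return data.photos; // [{ url, alt, by }, ...]
}
```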

Where this stops working

GJSON is a path language, not a scripting environment. If your transformation needs to merge data from two endpoints, apply a regex, or compute anything — totals, conditionals, date math — ?select= won't do it. You'd batch the calls with /batch and handle the merge on the client, or reach for an actual BFF.
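The client-side merge is ordinary Promise.all fan-out — a sketch, with both Bridge URLs as placeholders and a helper that keeps the # in each select out of the URL fragment:

```javascript
// Build a Bridge URL with query params, encoding # and braces safely.
function bridgeUrl(base, params) {
  const url = new URL(base);
  for (const [key, value] of Object.entries(params)) {
    url.searchParams.set(key, value);
  }
  return url.toString();
}

// Unwrap SaltingIO's { data: ... } envelope.
async function fetchData(url) {
  const res = await fetch(url);
  const { data } = await res.json();
  return data;
}

// ?select= can't join two upstreams, so fetch both Bridges in
// parallel and merge the results here on the client.
async function loadView() {
  const [headline, thumbs] = await Promise.all([
    fetchData(bridgeUrl('https://api.salting.io/r/<bridge-a>', {
      select: 'items.0.title',
    })),
    fetchData(bridgeUrl('https://api.salting.io/r/<bridge-b>', {
      q: 'mountains',
      select: 'results.#.urls.thumb',
    })),
  ]);
  return { headline, thumbs };
}
```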

There's also no escape hatch for binary or streaming responses. Server-sent events from OpenAI's streaming completions don't pass through the select stage cleanly; you'd call the streaming endpoint directly through the Bridge without ?select= and process chunks as usual. Response transformation is JSON-only, and it applies after the upstream completes. For latency-sensitive hot paths where every millisecond of gateway parse time matters, consider whether you actually need reshaping at all. Sometimes the fattest response is still the fastest to return.
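For streaming, that means calling the same Bridge with stream: true and no ?select=, then reading SSE lines off the response body yourself. A sketch — the delta parsing assumes OpenAI's data: {json} chunk format, and the Bridge UUID is a placeholder:

```javascript
// Pull the content delta out of one SSE line; returns '' for
// keep-alives, the [DONE] sentinel, and chunks with no content.
function parseChunk(line) {
  if (!line.startsWith('data: ') || line === 'data: [DONE]') return '';
  const payload = JSON.parse(line.slice(6));
  return payload.choices?.[0]?.delta?.content ?? '';
}

async function streamCompletion(prompt, onToken) {
  const res = await fetch('https://api.salting.io/r/b9f1c4e0-...', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'gpt-4o-mini',
      stream: true,
      messages: [{ role: 'user', content: prompt }],
    }),
  });

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop(); // keep the trailing partial line for next read
    for (const line of lines) {
      const token = parseChunk(line);
      if (token) onToken(token);
    }
  }
}
```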

Why a query parameter is the right shape for this

The argument for doing transformation at the gateway isn't really about bytes on the wire — the handful of kilobytes you save on an OpenAI call are rounding error compared to the model's response latency. It's about ownership. The request shape, the credential, the URL, and the response shape all live in the same record in the same dashboard.

When you rotate an OpenAI key, you don't also hunt through a BFF repo to check which routes consume it. When a product manager asks for one more field from the upstream, you edit the ?select= at the call site and ship, without a deploy of a separate service. When a new teammate joins, "where does the frontend get this field from" has a single answer instead of two.

For small teams and static frontends, that consolidation is often worth more than the flexibility a full backend gives you. When it isn't, you still have the backend option.

Read the docs for the full GJSON reference and more transformation recipes.