The Problem: OpenAI API Keys Leak in Seconds
Most developers call OpenAI directly from React, Next.js, or Vanilla JS. The result? Your sk-... secret key becomes visible in browser DevTools, source maps, or network logs. Bots scan GitHub and public apps 24/7 for exposed keys.
Environment variables do NOT protect frontend builds. Values from .env files get inlined into your client-side bundle at build time. A serverless proxy helps, but it still means setup, cold starts, CORS configuration, and rate limiting. You are paying the Backend Tax for a single API call.
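To see why env vars don't help, here is a toy model of what a bundler does with env references at build time. The variable name and key value are hypothetical, and the inlining function is a simplified stand-in for what tools like webpack's DefinePlugin actually do:

```javascript
// Sketch: bundlers statically replace env references with literal values.
// Simplified model of build-time inlining; the key below is fake.
function inlineEnv(source, env) {
  // Replace every `process.env.NAME` reference with the literal value.
  return source.replace(/process\.env\.([A-Z_]+)/g, (match, name) =>
    JSON.stringify(env[name])
  );
}

const appSource = `fetch("https://api.openai.com/v1/chat/completions", {
  headers: { Authorization: "Bearer " + process.env.NEXT_PUBLIC_OPENAI_KEY },
});`;

const bundled = inlineEnv(appSource, { NEXT_PUBLIC_OPENAI_KEY: "sk-example-123" });

// The "secret" is now a plain string in the shipped bundle,
// readable by anyone who opens DevTools or the source map.
console.log(bundled.includes("sk-example-123")); // true
```

The env reference is gone after the build; only the literal key remains in the JavaScript your users download.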
What Happens When Your Key Is Exposed?
- Runaway charges — Attackers spin up thousands of GPT-4o or o1 requests on your billing account
- Model abuse — Your key gets used for spam generation, policy-violating content, or prompt injection attacks
- Account suspension — OpenAI flags unusual activity and disables your account
- Production downtime — Your app stops working the moment OpenAI revokes the compromised key
- Data exposure — If your key has access to fine-tuned models or assistants, attackers can extract proprietary training data
Security is not optional once you go live.
The Solution: The Salting Layer
Instead of calling:
https://api.openai.com/v1/chat/completions
You call your private bridge URL:
https://api.salting.io/r/salting-io-bridge-uuid
Your real OpenAI sk-... key stays encrypted inside Salting's vault using AES-256-GCM encryption with a zero-knowledge architecture. The key is never exposed to the client, never logged, and never leaves the secure edge layer.
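In practice, the only frontend change is the URL. Here is a minimal sketch, assuming a standard chat-completions request body and a placeholder bridge ID; note that no Authorization header ever appears in the browser code:

```javascript
// Sketch: calling OpenAI through a bridge URL instead of api.openai.com.
// "your-unique-bridge-id" is a placeholder for the ID from your dashboard.
const BRIDGE_URL = "https://api.salting.io/r/your-unique-bridge-id";

// Build the request. No Authorization header here:
// the bridge injects your encrypted key server-side.
function buildChatRequest(messages) {
  return {
    url: BRIDGE_URL,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model: "gpt-4o", messages }),
    },
  };
}

async function chat(messages) {
  const { url, options } = buildChatRequest(messages);
  const res = await fetch(url, options);
  if (!res.ok) throw new Error(`Bridge request failed: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```

The request body is unchanged standard OpenAI JSON, so switching back (or to another upstream) is a one-line change.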
Salting handles:
- Secret key vaulting — Your API key is encrypted at rest with AES-256-GCM
- Request forwarding — Bridges proxy requests to OpenAI with your credentials injected server-side
- CORS enforcement — Restrict which domains can call your bridge via an allowlist
- Rate limiting — Built-in abuse protection without custom middleware
- Response transformation — Use GJSON-style select queries to return only the fields you need
- Failover URLs — Automatic fallback if the primary upstream is unavailable
You get backend-level security without running a backend.
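As an illustration of response transformation, here is a toy GJSON-style path selector. Salting's exact select syntax may differ; the path string and response shape below are just examples:

```javascript
// Sketch of a GJSON-style path select: walk dot-separated keys,
// treating numeric segments as array indices.
// This models the idea; it is not Salting's implementation.
function select(obj, path) {
  return path.split(".").reduce((node, key) => {
    if (node == null) return undefined;
    return Array.isArray(node) ? node[Number(key)] : node[key];
  }, obj);
}

const response = {
  id: "chatcmpl-abc123",
  choices: [{ message: { role: "assistant", content: "Hello!" } }],
  usage: { total_tokens: 12 },
};

// Return only the field the frontend needs instead of the full payload.
console.log(select(response, "choices.0.message.content")); // "Hello!"
```

Trimming the response at the edge keeps payloads small and avoids leaking fields (usage, system fingerprints) your UI never needs.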
How It Works
- Go to your Salting dashboard at salting.io/user/dashboard and navigate to the Secrets tab.
- Click "Add Secret" and select Bridge as the secret type. Paste the OpenAI endpoint (e.g. https://api.openai.com/v1/chat/completions) and add your Authorization: Bearer sk-... header as the credential. Salting encrypts everything before storage.
- Copy your Bridge URL — it will look like https://api.salting.io/r/your-unique-bridge-id. Replace the OpenAI base URL in your frontend code with this bridge URL.
- Test in the Playground — use the Playground tab in your dashboard to send a test request and verify the response before deploying. Then ship your app.
No Node.js proxy. No serverless config. No CORS debugging. Under 2 minutes from start to production.
Salting vs Traditional Backend Setup
Traditional Backend Proxy
- Requires a Node.js, Python, or Go server
- Manual CORS configuration and debugging
- Secret key stored in your own infrastructure
- Custom rate limiting implementation needed
- Ongoing server maintenance, patching, and monitoring
- Deployment pipeline complexity
- Monthly hosting costs
Serverless Function (Lambda / Vercel / Cloudflare Workers)
- Cold starts add latency to first requests
- CORS headers must be configured manually
- Separate deployment pipeline from your frontend
- Secret management via environment variables or vaults
- Usage spikes can increase costs unpredictably
- Still infrastructure you own and maintain
Salting Layer (Recommended)
- No server required — zero infrastructure
- Built-in CORS enforcement via origin allowlist
- AES-256-GCM encrypted secret key vault with zero-knowledge architecture
- Integrated rate limiting and abuse protection
- Zero maintenance — no patching, no monitoring
- Deploy in under 2 minutes
- Template variables and response transformation built in
- Playground for real-time testing before deployment
What Can You Build?
- AI chat interfaces — Real-time conversational UIs powered by GPT-4o
- Streaming assistants — Token-by-token streaming responses with low latency
- Content generators — Blog posts, product descriptions, email drafts
- Code assistants — In-app coding helpers and code review tools
- AI dashboards — Internal tools for summarization, classification, and extraction
- SaaS AI features — Embed AI capabilities into your product without backend overhead
- Educational tools — Tutoring apps, quiz generators, and study aids
Perfect for React, Next.js, Vue, Svelte, static sites, and any frontend framework.
Frequently Asked Questions
Is Salting a proxy?
Technically yes — Salting bridges act as secure edge proxies. But unlike a traditional proxy, Salting encrypts your credentials with AES-256-GCM, enforces CORS, adds rate limiting, and requires zero infrastructure from you.
Does this replace my backend?
If your backend exists solely to hide API keys and forward requests, yes. Salting eliminates that entire layer. If your backend does business logic, authentication, or database operations, you still need it — but Salting handles the API key security piece.
Can I stream OpenAI responses?
Yes. Streaming works normally through your Salting Bridge. Set stream: true in your request body and handle the response as a readable stream.
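Here is a minimal sketch, assuming the bridge passes OpenAI's server-sent-events stream format through unchanged (lines of `data: {json}` ending with `data: [DONE]`). The bridge ID is a placeholder, and the parser is simplified; production code should buffer partial lines that get split across chunks:

```javascript
// Extract token deltas from a chunk of OpenAI-style SSE text.
function parseSSEChunk(chunk) {
  const tokens = [];
  for (const line of chunk.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data: ")) continue;
    const payload = trimmed.slice("data: ".length);
    if (payload === "[DONE]") break;
    const delta = JSON.parse(payload).choices?.[0]?.delta?.content;
    if (delta) tokens.push(delta);
  }
  return tokens;
}

// Stream a chat completion through the bridge, token by token.
// "your-unique-bridge-id" is a placeholder.
async function streamChat(messages, onToken) {
  const res = await fetch("https://api.salting.io/r/your-unique-bridge-id", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "gpt-4o", messages, stream: true }),
  });
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    for (const token of parseSSEChunk(decoder.decode(value, { stream: true }))) {
      onToken(token); // e.g. append each token to the UI as it arrives
    }
  }
}
```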
Does Salting add latency?
Minimal edge overhead, typically sub-30ms. Salting's edge infrastructure is optimized for low-latency forwarding.
Can I use GPT-4o, o1, and future models?
Yes. Salting forwards requests transparently to the upstream URL you configure. Any model available via the OpenAI API works through your bridge.
Can I restrict which domains can use my bridge?
Yes. Use the CORS allowlist in your bridge configuration to restrict access to specific origins like https://myapp.com.
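Conceptually, the allowlist check works like this sketch. This is not Salting's actual implementation (the real check runs on their edge, driven by your dashboard config), and the origins are examples:

```javascript
// Sketch of an origin allowlist check.
// Exact match on scheme + host (+ port): "https://myapp.com" does not
// authorize "https://evil.com" or "http://myapp.com".
function isAllowedOrigin(origin, allowlist) {
  return allowlist.includes(origin);
}

const allowlist = ["https://myapp.com", "https://staging.myapp.com"];

console.log(isAllowedOrigin("https://myapp.com", allowlist)); // true
console.log(isAllowedOrigin("https://evil.com", allowlist));  // false
```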
Can I test my bridge before deploying?
Yes. The Playground tab in your Salting dashboard lets you send test requests to your bridge and see the response from OpenAI in real time. You can also generate batch requests to test multiple scenarios at once.
Stop Exposing Your OpenAI Key
Every public frontend app with an embedded OpenAI key is a liability. One leaked key can cost thousands in API charges and get your account suspended. Secure it in minutes with the Salting Layer and ship with confidence.