Security Guide

# Every AI Coding Agent Is a Secret-Spilling Machine
Cursor, Claude Code, GitHub Copilot, Windsurf — they read your entire codebase to write code. That includes every .env file, every hardcoded API key, and every config.ts with a database URL pasted inline. When the agent scaffolds a new service, it copies patterns it's seen — including the insecure ones.
This isn't theoretical. It's happening right now, every day, at every company using AI-assisted development.
## The Attack Surface Just Got Bigger
AI coding tools have changed the threat model for secrets in three specific ways:
| Before AI Agents | After AI Agents |
|---|---|
| Developers manually type secrets into config | Agents scaffold entire projects with placeholder secrets that get replaced with real ones |
| One .env file per project | Agents create .env, .env.local, .env.development, .env.production — often without .gitignore entries |
| Secrets stay in the developer's terminal | Agents read secrets from context, copy patterns across files, and log commands to session history |
| Security review catches mistakes | Agents commit faster than review can keep up |
The core problem: AI agents are context maximizers. They work better with more context. .env files sitting in the project root are context. The agent reads them, learns the pattern, and reproduces it — sometimes in files that get committed, sometimes in prompts that get logged, sometimes in generated code that inlines values instead of reading from process.env.
## Real Patterns We've Seen
These aren't hypothetical. These are patterns that show up in repos built with AI assistance:
### 1. Inlined secrets in generated code

The agent sees `OPENAI_API_KEY=sk-proj-abc123` in `.env` and generates:
```ts
// AI agents sometimes inline values they've seen in context
const openai = new OpenAI({
  apiKey: "sk-proj-abc123", // ← hardcoded, not process.env
});
```

This passes linting. It passes type checking. It works locally. And it ships to production with a real API key baked into the JavaScript bundle.
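The same integration, written the way you want agents to write it: the variable name appears in source, the value only at runtime. A minimal sketch; the `{ apiKey }` options shape assumes the official OpenAI SDK's constructor, and the helper name is ours:

```typescript
// Build client options from the environment; fail fast if the key is
// missing instead of inlining a literal value an agent could copy.
function buildOpenAIConfig(
  env: Record<string, string | undefined> = process.env,
): { apiKey: string } {
  const apiKey = env.OPENAI_API_KEY;
  if (!apiKey) {
    throw new Error("OPENAI_API_KEY is not set; run under `secr run`");
  }
  return { apiKey }; // passed to `new OpenAI(...)`; the value never appears in source
}
```

Code generated against this pattern contains only the name `OPENAI_API_KEY`, so there is nothing for a bundler, a commit, or an agent's context window to leak.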
### 2. Multiple .env files created without gitignore
An agent scaffolding a new service creates:
```
my-new-service/
├── .env              ← created by agent
├── .env.local        ← created by agent
├── .env.production   ← created by agent
├── src/
└── package.json      ← no .gitignore generated
```

Three files with credentials. No .gitignore. The developer runs `git add .` and pushes. GitHub's secret scanning might catch it — hours or days later. By then the key has been in a public repo, indexed by bots that scrape for credentials in real time.
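A toy pre-push check makes the failure mode concrete: given the files a scaffold created and the contents of `.gitignore`, flag any env file that is not covered. Both the matching logic and the single `.env*` pattern handled here are illustrative, not a full gitignore implementation:

```typescript
// Toy check: which .env-style files are NOT covered by .gitignore?
// Only recognizes a literal ".env*" line, as an illustration.
function uncoveredEnvFiles(files: string[], gitignore: string): string[] {
  const lines = gitignore.split("\n").map((l) => l.trim());
  if (lines.includes(".env*")) return [];
  // Match .env, .env.local, .env.production, etc., at any depth.
  return files.filter((f) => /(^|\/)\.env(\..+)?$/.test(f));
}
```

A real guard would parse the full gitignore syntax; the point is that the check is cheap enough to run on every push.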
### 3. Secrets copied across project boundaries
The agent is working on Service A, reads its `.env`, then gets asked to scaffold Service B. It reuses the same database URL, the same API key, and the same JWT secret across both services — because that's what was in context. Now two services share credentials that should be isolated.
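One mitigation is to namespace each service's variables so a value copied verbatim from Service A's environment cannot silently satisfy Service B's config. A sketch; the prefix convention and helper are assumptions of ours, not a secr feature:

```typescript
// Read a service's config from namespaced variables, e.g. BILLING_DATABASE_URL.
// A credential pasted under another service's prefix simply won't be found.
function readNamespacedEnv(
  service: string,
  keys: string[],
  env: Record<string, string | undefined> = process.env,
): Record<string, string> {
  const config: Record<string, string> = {};
  for (const key of keys) {
    const name = `${service}_${key}`;
    const value = env[name];
    if (!value) throw new Error(`Missing ${name}`);
    config[key] = value;
  }
  return config;
}
```

Scaffolded services then fail loudly at startup instead of quietly sharing credentials.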
## How secr Fixes This
The fix isn't telling developers to be more careful. The fix is removing secrets from the places where agents (and humans) can leak them.
### Remove .env files from disk entirely
With secr run, secrets are injected as environment variables at process start. No .env file exists on disk for agents to read, copy, or commit:
```sh
# Before: agent sees .env, copies patterns
cat .env
DATABASE_URL=postgresql://admin:password@db.example.com:5432/prod
STRIPE_SECRET_KEY=sk_live_abc123

# After: no file to read, no file to leak
secr run -- npm start
# ✔ 13 secrets loaded → server running on :3000
```

The agent can still see `process.env.DATABASE_URL` in your code — that's fine. It's a reference, not a value. It can't accidentally inline a credential it never saw.
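The injection model itself is simple to sketch: merge the secrets into the child process's environment at spawn time, so values live only in memory and never on disk. A minimal Node illustration of the idea, not secr's implementation:

```typescript
import { spawnSync } from "node:child_process";

// Run a command with extra variables merged into its environment.
// The secrets exist only for the lifetime of the child process.
function runWithSecrets(
  cmd: string,
  args: string[],
  secrets: Record<string, string>,
): number | null {
  const result = spawnSync(cmd, args, {
    env: { ...process.env, ...secrets },
    stdio: "inherit",
  });
  return result.status;
}
```

Because the variables are attached at spawn time, there is no file for an agent to read, copy, or commit.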
### Block secrets at commit time
Even with .env files removed, agents occasionally hardcode values they've seen in terminal output or conversation context. secr guard catches this before it reaches the repo:
```sh
$ git commit -m "add openai integration"

✗ Blocked: 1 potential secret found

  [HIGH] OpenAI API Key    src/ai/client.ts:4

Commit aborted. Remove the secret and try again.
```

### Scan for existing leaks
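For intuition, the checks behind a guard or scanner can be sketched as pattern matching over file contents. A toy TypeScript illustration; the two patterns below are examples we chose, not secr's actual rule set:

```typescript
// Toy secret detector covering two well-known key formats.
// Real scanners use much larger rule sets plus entropy heuristics.
const SECRET_PATTERNS: Array<{ id: string; re: RegExp }> = [
  { id: "OpenAI API Key", re: /\bsk-proj-[A-Za-z0-9_-]{6,}\b/ },
  { id: "Stripe Live Secret Key", re: /\bsk_live_[A-Za-z0-9]{6,}\b/ },
];

// Return the IDs of every pattern that matches the given source text.
function findSecrets(source: string): string[] {
  return SECRET_PATTERNS.filter((p) => p.re.test(source)).map((p) => p.id);
}
```

Run over staged files in a pre-commit hook, a matcher like this is what turns "be more careful" into an enforced invariant.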
Before you start using AI agents on a codebase, audit what's already there:
```sh
$ secr scan

✗ Found 5 potential secret(s)

  .env.backup                ← forgot this existed
    [HIGH] AWS Access Key ID           L2:1
    [HIGH] AWS Secret Access Key       L3:1

  src/config/database.ts     ← hardcoded by a previous agent session
    [HIGH] Database URL with Password  L12:18

  scripts/deploy.sh          ← credentials in a shell script
    [HIGH] Stripe Live Secret Key      L8:12
    [MED]  Generic API Token           L15:8

4 high, 1 medium, 0 low | 287 files scanned in 124ms
```

### Isolate secrets per environment
When agents scaffold new services, they should get development credentials — never production. secr's environment model ensures this:
```sh
# Developer's local setup — only sees dev secrets
secr run --env development -- npm start

# CI pipeline — staging secrets
secr run --env staging -- npm test

# Production — only accessible to deploy tokens, not developer machines
secr run --env production -- npm start
```

Even if an agent reads `DATABASE_URL` from the dev environment, it's a disposable local database — not production.
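You can also enforce this boundary defensively in your own code: refuse to boot a non-production process that has been handed something that looks like a live credential. A sketch under the assumption that live Stripe keys start with `sk_live_`; the helper is ours, not part of secr:

```typescript
// Fail fast if a live-looking credential shows up outside production.
function assertNoLiveSecretsInDev(env: Record<string, string | undefined>): void {
  if (env.NODE_ENV === "production") return;
  for (const [name, value] of Object.entries(env)) {
    if (value && value.startsWith("sk_live_")) {
      throw new Error(`${name} looks like a live secret in a non-production environment`);
    }
  }
}
```

Called once at startup, this turns "an agent pasted the production key into dev" from a silent leak into an immediate crash.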
## The CLAUDE.md Defense
If you're using Claude Code, add these lines to your CLAUDE.md to explicitly prevent the agent from handling secrets incorrectly:
```md
## Secrets

- NEVER hardcode secret values in source files. Always use `process.env.SECRET_NAME`.
- NEVER create or modify .env files. Secrets are managed by secr.
- NEVER log, print, or include secret values in comments or documentation.
- For local dev, use `secr run -- <command>` to inject secrets at runtime.
```

This won't catch everything, but it reduces the surface area significantly. Claude Code reads CLAUDE.md at the start of every session and treats it as authoritative.
## Workflow for AI-Assisted Teams
Here's the setup we recommend for teams using AI coding agents:
### 1. Remove all .env files from disk

```sh
secr set --from-env .env --env development
rm .env .env.local .env.production
echo ".env*" >> .gitignore
```

### 2. Install the pre-commit guard

```sh
secr guard install
```

### 3. Update dev scripts to use secr
```json
{
  "scripts": {
    "dev": "secr run -- tsx watch src/server.ts",
    "test": "secr run --env staging -- vitest"
  }
}
```

### 4. Add rules to your AI agent config
Whether it's CLAUDE.md, .cursorrules, or Copilot instructions — tell the agent that secrets come from the environment, not from files.
### 5. Scan weekly

```sh
# Add to your CI pipeline
secr scan --fail-on high
```

## The Speed/Security Tradeoff Is a Myth
AI agents make teams faster. secr makes them safer. The two compound — faster scaffolding with fewer security reviews needed, because the most common class of vulnerability (leaked credentials) is structurally prevented.
The teams shipping fastest in 2026 aren't choosing between AI speed and security hygiene. They're using both.
## Start protecting your AI-assisted workflow

```sh
npm i -g @secr/cli
secr scan
secr guard install
secr run -- npm start
```