Securing OpenClaw Agents — What NHI Means in Practice
OpenClaw exposed the gap between AI agent autonomy and credential governance. Here's what changed, what didn't, and how to actually secure an OpenClaw deployment without slowing it down.
OpenClaw is the framework most teams reach for when they want an AI agent to do things rather than just answer questions. It plans, takes action, and works across systems — and that is exactly why it has become a security problem.
The pattern looks familiar to anyone who has watched a new tooling category mature too fast: capability outran governance. Agents got permission to call live APIs, mutate cloud resources, and send messages on behalf of humans before anyone had agreed on what authentication, scoping, or audit looked like for an autonomous actor.
This post is about the practical version of that problem — what an OpenClaw agent actually needs in production, and how Non-Human Identity (NHI) thinking maps to it.
The original sin: credentials in the config file
The default OpenClaw setup asks you to paste API keys, OAuth tokens, and cloud credentials into a config file or environment variable. For a single developer running a single agent on their laptop, that worked.
In 2026, security researchers documented over 40,000 publicly exposed OpenClaw deployments with credentials sitting in plaintext on disk. Some were misconfigured Docker images, some were committed config files, some were forgotten dev instances. The common thread: each one had a credential broad enough to do real damage, with no identity attached.
A leaked credential is bad. A leaked credential that nobody knows belongs to anything is worse. You can't rotate what you can't trace.
The credential question, restated
Forget the framework name for a moment. The real questions every OpenClaw deployment needs to answer:
- Who is this credential? Not "what API does it call" — what identity is acting? Can you point to a row in a database that is this agent?
- What is it allowed to read? Not at the API level — at the secret level. If your agent only needs SLACK_TOKEN, why does it have access to STRIPE_API_KEY too?
- What is it allowed to do? A "can call any tool" agent is indistinguishable from a compromised agent.
- Who approved this action? When the agent calls delete_repo, was there a human in the loop, or did the agent decide on its own authority?
- What did it actually do? Six weeks from now, can you answer "show me every action this agent took on production between Tuesday 14:00 and 15:00"?
If you can't answer these for an OpenClaw agent in your org today, you have what NHI practitioners call a "shadow identity" — an actor with privileges and no accountability.
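To make those questions concrete, here is a minimal sketch of an identity record that could answer them. The class and field names are illustrative, not the secr schema:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AgentIdentity:
    token_prefix: str            # e.g. "secr_agent_..." -- typed, greppable
    project: str                 # e.g. "acme/customer-ops"
    environment: str             # e.g. "production"
    secret_allowlist: frozenset  # the only secret names this agent may read
    owner: str                   # a human accountable for this credential
    expires_at: datetime         # no immortal credentials

    def can_read(self, secret_name: str, now: datetime) -> bool:
        """A read succeeds only if the token is unexpired and the name is allowlisted."""
        return now < self.expires_at and secret_name in self.secret_allowlist
```

An agent carrying a record like this is a row you can point to: it has an owner, a scope, and a read surface that can be checked on every request.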
What "treating an OpenClaw agent as an identity" actually looks like
The shorthand answer is: stop thinking of the agent as a script that needs an API key, and start thinking of it as a service account that needs a scoped, expiring, owned, audited credential.
In practice that means a few non-negotiable properties:
A typed token, not a generic API key
The credential should signal in its very prefix what it is. If your logs show secr_agent_a1b2c3... you immediately know this is an agent identity. You don't have to cross-reference five tables to find out who it belongs to.
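What a typed prefix buys you is a one-line classification anywhere a token shows up in logs. A sketch, with invented prefixes beyond the secr_agent_ one the post mentions:

```python
def classify_token(token: str) -> str:
    """Map a typed token prefix to the identity class it represents.
    The service/human prefixes here are assumptions for illustration."""
    prefixes = {
        "secr_agent_": "agent",    # autonomous agent identity
        "secr_svc_": "service",    # conventional service account (assumed)
        "secr_user_": "human",     # personal CLI token (assumed)
    }
    for prefix, kind in prefixes.items():
        if token.startswith(prefix):
            return kind
    return "unknown"  # untyped credential: a shadow identity
```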
Project and environment scope
An OpenClaw agent should be told, at creation time, which project and which environment it works in. Not "all secrets in the org" — support-bot in the production environment of acme/customer-ops. Anything outside that scope returns 403, regardless of what API call the agent makes.
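Server-side, that scope check is a single comparison. A sketch of the 403-on-mismatch behaviour described above (function name invented):

```python
def authorize_scope(agent_project: str, agent_env: str,
                    requested_project: str, requested_env: str) -> int:
    """Deny anything outside the agent's pinned project/environment,
    regardless of which API call the agent makes."""
    if (requested_project, requested_env) != (agent_project, agent_env):
        return 403
    return 200
```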
A secret allowlist
Most agents only need a handful of named credentials. Pin those at creation time. The slackbot agent gets SLACK_BOT_TOKEN and nothing else. It doesn't get to enumerate what other secrets exist; it doesn't get to pull STRIPE_API_KEY "just in case." Server-enforced, on every read path.
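Enforced on the read path, the allowlist looks roughly like this sketch (the SecretStore class and its method are invented for illustration, not the secr API):

```python
class SecretStore:
    def __init__(self, secrets: dict):
        self._secrets = dict(secrets)

    def read(self, agent_allowlist: set, name: str) -> str:
        """Enforced server-side on every read; not a client-side hint.
        The agent never gets to enumerate what else exists."""
        if name not in agent_allowlist:
            raise PermissionError(f"403: {name} is not on this agent's allowlist")
        return self._secrets[name]
```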
An owner
Every machine credential needs a human accountable for it. When the owner leaves the organisation, the credential goes into a 48-hour brownout — soft-disabled, returning 403 — before being permanently revoked. This catches the "we don't actually use this anymore" credentials without the 2am incident.
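The offboarding lifecycle can be modelled as a small state function. A sketch using the 48-hour window from above:

```python
from datetime import datetime, timedelta

def credential_status(owner_active: bool, offboarded_at, now: datetime) -> str:
    """48-hour brownout after the owner leaves, then permanent revocation."""
    if owner_active:
        return "active"
    if now < offboarded_at + timedelta(hours=48):
        return "brownout"  # soft-disabled: reads return 403, easy to restore
    return "revoked"
```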
An expiry
Credentials that live forever are credentials that get forgotten. Every agent token should have an expiry: 30 days for high-privilege agents, 90 days for low-privilege ones. Renewal is a deliberate action, not a default.
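A sketch of that expiry policy, using the 30/90-day figures above (helper names invented):

```python
from datetime import datetime, timedelta

HIGH_PRIVILEGE_TTL = timedelta(days=30)
LOW_PRIVILEGE_TTL = timedelta(days=90)

def expiry_for(issued_at: datetime, high_privilege: bool) -> datetime:
    """Shorter leash for more dangerous credentials."""
    return issued_at + (HIGH_PRIVILEGE_TTL if high_privilege else LOW_PRIVILEGE_TTL)

def is_expired(issued_at: datetime, high_privilege: bool, now: datetime) -> bool:
    return now >= expiry_for(issued_at, high_privilege)
```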
Tool-level governance
Reading a secret is one privilege. Calling github.delete_repo is another. Reading a secret can be allowed by default; calling a destructive tool should require either an explicit allowlist or a one-time human approval. The framework most teams use for this is the MCP Gateway: per-tool rules, rate limits, and approval queues.
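A minimal sketch of a per-tool gate, assuming invented tool names and policy sets rather than the MCP Gateway's actual configuration:

```python
DEFAULT_ALLOW = {"github.read_issues", "slack.post_message"}    # illustrative
APPROVAL_REQUIRED = {"github.delete_repo", "aws.terminate_instances"}

def gate_tool_call(tool: str, approved_by=None) -> str:
    """Destructive tools need a named human approver; unknown tools fail closed."""
    if tool in APPROVAL_REQUIRED:
        return "allow" if approved_by else "queued_for_approval"
    if tool in DEFAULT_ALLOW:
        return "allow"
    return "deny"
```

The important property is the middle branch: a destructive call without an approver does not error out, it parks in a queue where a human can inspect it.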
A short-lived session, not a long-lived bearer
When the agent presents its credential, the broker exchanges it for a 15–30 minute session. Even if the session token is intercepted in transit, it expires before it can be useful. The long-lived agent token only ever leaves your secret store on first use.
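The exchange flow can be sketched in a few lines. This models the shape only; a real broker verifies the agent token cryptographically and keeps session state server-side:

```python
import secrets
import time

SESSIONS = {}  # session_token -> (agent_token, expires_at); broker-side state

def exchange(agent_token: str, ttl_seconds: int = 900) -> str:
    """Trade the long-lived agent token for a 15-minute session token."""
    session = "sess_" + secrets.token_hex(16)
    SESSIONS[session] = (agent_token, time.time() + ttl_seconds)
    return session

def session_valid(session: str) -> bool:
    entry = SESSIONS.get(session)
    return entry is not None and time.time() < entry[1]
```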
Conditional access
Stolen credentials still happen. Pin agents to IP ranges, business hours, and required user-agent patterns where the deployment supports it. A token used at 03:00 from a country your team doesn't operate in should fail closed.
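A fail-closed conditional-access check might look like this sketch, with invented network and business-hours pins:

```python
import ipaddress
from datetime import datetime

ALLOWED_NETWORKS = [ipaddress.ip_network("10.0.0.0/8")]  # illustrative pin
BUSINESS_HOURS = range(7, 20)                            # 07:00-19:59

def conditional_access(source_ip: str, when: datetime) -> bool:
    """Fail closed unless the request matches every pinned condition."""
    ip = ipaddress.ip_address(source_ip)
    in_network = any(ip in net for net in ALLOWED_NETWORKS)
    in_hours = when.hour in BUSINESS_HOURS
    return in_network and in_hours
```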
Audit, not metrics
Every credential read, every tool invocation, every approval decision — all of it goes into an immutable log. This isn't for dashboards; it's for the moment six weeks from now when you need to reconstruct exactly what happened.
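An append-only log is easy to make tamper-evident by chaining entry hashes. A sketch of the idea, not any product's storage format:

```python
import hashlib
import json

class AuditLog:
    """Append-only log; each entry hashes the previous one, so edits are detectable."""
    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.entries.append({"event": event, "prev": prev, "hash": digest})

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            payload = json.dumps(e["event"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```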
The minimum viable OpenClaw security posture
If you do nothing else, do these four things:
- Replace the plaintext credential file with a broker. The agent never holds a long-lived API key; it holds an agent token that exchanges into a short-lived session at startup.
- Allowlist the secrets each agent can read. If the slackbot agent reads STRIPE_API_KEY, that should be a hard error, not a warning.
- Mark dangerous tools as approval-required. Anything that mutates production should pause for a human, every time, with a clear audit record of who approved.
- Detect shadow agents. If someone runs OpenClaw with their personal CLI token and bypasses the broker, you want to know about it within minutes, not when the next audit lands.
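Detection of that last case can start as something as blunt as a log scan for credential-shaped strings that did not come from the broker. A sketch with invented token patterns:

```python
import re

# Illustrative patterns: anything matching these looks like a personal or raw
# API key rather than a brokered secr_agent_ token.
PERSONAL_TOKEN = re.compile(r"\b(?:sk|ghp|secr_user)_[A-Za-z0-9_\-]+")

def shadow_tokens(log_lines) -> list:
    """Flag credential-shaped strings that bypass the agent broker."""
    flagged = []
    for line in log_lines:
        flagged.extend(PERSONAL_TOKEN.findall(line))
    return flagged
```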
Each of those is its own post:
- Getting started with OpenClaw and secr — the practical setup
- OpenClaw secret allowlists — limiting credential blast radius
- OpenClaw MCP approval queues — human-in-the-loop for dangerous tools
- Detecting shadow OpenClaw agents — finding unmanaged deployments
- The OpenClaw NHI posture checklist — the full checklist with remediations
The closing point
OpenClaw isn't going to slow down. Autonomous agents are useful enough that the question is no longer whether to deploy them — it's whether to deploy them with or without the credential layer that should have been there from the start.
The good news: NHI tooling for OpenClaw has caught up. The framework hasn't changed; the way credentials reach it has. If your OpenClaw deployment still pastes API keys into a config file, it's a one-line refactor away from being something you can defend in an audit.
The OpenClaw plugin (@secr/openclaw-plugin) is on ClawHub and free for 1 agent.
Ready to get started?
Stop sharing secrets over Slack. Get set up in under two minutes.
Create your account