Plugin v1.3.45

ClawScan security

OpenClaw Plugin Repo · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Suspicious · Apr 29, 2026, 2:37 AM
Verdict
suspicious
Confidence
medium
Model
gpt-5-mini
Summary
The plugin's code matches its stated purpose (delivering @mention notifications and injecting API guidance), but it injects configured agent secrets and live credentials into the agent's system context (prompts), which can leak sensitive tokens to LLMs or external providers. This is disproportionate to the stated feature and should be reviewed before install.
Guidance
This plugin appears to be what it says (real-time @mention notifications plus API guidance), but it injects configured agent secrets and full API guidance directly into the agent's system context (prompts). If your installation uses external LLM providers or any service that might log or forward prompts, those bearer tokens and base URLs could be exposed. Before installing:
1. Confirm how your OpenClaw deployment treats system prompts: are prompts sent to external models or logged?
2. Ask the plugin author to avoid placing raw secrets into prompts; use placeholders or short-lived tokens, or provide guidance without literal secrets.
3. Prefer anonymous mode if you don't need push notifications, or configure per-account tokens with the minimum scope and rotate them regularly.
4. If you must install, audit network endpoints and limit which accounts have secrets configured.
If needed, a safer guidance text for the plugin (e.g., one that omits secrets from the system context) can be drafted and requested from the author.
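The "placeholders instead of raw secrets" recommendation above can be sketched as follows. This is a hypothetical illustration, not the plugin's actual code: the function name buildSafeGuidance and the account fields are assumptions. The idea is that prompt-bound guidance references the token indirectly, so the literal secret never enters the system context.

```javascript
// Illustrative sketch (hypothetical names): build API guidance for the
// system context WITHOUT inlining the configured agent secret.
function buildSafeGuidance(account) {
  return [
    `Comment.io API base URL: ${account.baseUrl}`,
    // A placeholder stands in for the token; the real value is read
    // from OpenClaw config or the environment at request time.
    'Authenticate with: Authorization: Bearer <COMMENT_IO_AGENT_SECRET>',
    'The token itself is never included in this guidance text.',
  ].join('\n');
}

const guidance = buildSafeGuidance({
  baseUrl: 'https://comment.io/api/v1',
  agentSecret: 'sk-live-do-not-leak', // present in config, absent from guidance
});

console.log(guidance.includes('sk-live-do-not-leak')); // false
```

Guidance built this way stays useful to the model (it still names the header and placeholder) while keeping the bearer token out of anything that might be logged or forwarded.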
Findings
[system-prompt-override] expected: The plugin intentionally appends system context (API guidance) at prompt-build time via api.on('before_prompt_build'). That matches its stated 'API guidance' feature, but the implementation includes secrets in the appended text, which creates an exfiltration risk.
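The flagged hook pattern can be mocked in a few lines. The api.on shape below is an assumption based on the call named in the finding; the real OpenClaw hook signature may differ. The point is mechanical: anything a before_prompt_build handler appends becomes part of every outgoing prompt.

```javascript
// Minimal mock of the flagged pattern (hypothetical API shapes).
const hooks = {};
const api = {
  on(event, fn) { (hooks[event] ||= []).push(fn); },
};

// The plugin registers guidance at prompt-build time. In the flagged
// implementation, this appended string contains literal agent secrets,
// so every model or log that sees the prompt sees them too.
api.on('before_prompt_build', (ctx) => {
  ctx.systemContext.push('Comment.io API guidance: authenticate with Bearer <token>');
});

// Simulate one prompt build: every registered hook runs.
const promptCtx = { systemContext: ['base system prompt'] };
for (const fn of hooks['before_prompt_build']) fn(promptCtx);

console.log(promptCtx.systemContext.length); // 2
```

Because the hook fires on every prompt build, a secret embedded here is not leaked once but on every model call for as long as the plugin is installed.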

Review Dimensions

Purpose & Capability
note: The name/description (Comment.io plugin for OpenClaw) aligns with the code: WebSocket monitor, REST send/verify, setup surface, and fetching /llms.txt. The plugin's behaviors (monitoring notifications, posting comments, injecting API guidance) are consistent with its stated purpose.
Instruction Scope
concern: The plugin registers a before_prompt_build hook that appends detailed API guidance to the system context, including literal agent secrets for configured accounts (buildCommentDocsGuidance). Embedding secrets into system prompts is outside a minimal need to provide 'guidance' and can expose credentials to any LLM that consumes prompt content. The SKILL.md explicitly instructs fetching live /llms.txt and storing tokens via openclaw channels add; the combination of prompt injection and secrets is the main scope concern.
Install Mechanism
ok: No external download/install spec is present. The package is a normal OpenClaw plugin bundle with no unusual install URLs or extract actions. The only dependency is 'ws' (WebSocket), which is appropriate for a real-time monitor.
Credentials
concern: The plugin does not declare required env vars in registry metadata, yet its setup surface mentions a preferred env var (COMMENT_IO_AGENT_SECRET), and the plugin will read agent secrets from OpenClaw config and insert them into system prompts. Injecting full bearer tokens into prompts is disproportionate to the stated feature 'API guidance' and increases the risk of credential leakage. The plugin will also include full agentSecret and baseUrl strings in the appended guidance text.
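The preferred env-var flow mentioned above can be sketched like this. The helper name authHeaders is illustrative (not from the plugin); only the variable name COMMENT_IO_AGENT_SECRET comes from the report. The secret is resolved at request time and attached only to the outgoing HTTP header, never to prompt text.

```javascript
// Illustrative sketch: resolve the secret from the environment when a
// request is made, rather than baking it into prompt guidance.
function authHeaders() {
  const secret = process.env.COMMENT_IO_AGENT_SECRET;
  if (!secret) {
    throw new Error('COMMENT_IO_AGENT_SECRET is not set');
  }
  // The token lives only in this header object, scoped to one request.
  return { Authorization: `Bearer ${secret}` };
}

process.env.COMMENT_IO_AGENT_SECRET = 'example-token';
console.log(authHeaders().Authorization); // Bearer example-token
```

Combined with minimum-scope, regularly rotated per-account tokens, this keeps credentials out of anything the model, or a prompt log, ever sees.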
Persistence & Privilege
note: always is false. However, the plugin registers a persistent hook (before_prompt_build) that runs whenever prompts are built and can automatically inject system context and credentials. This persistent prompt-modification ability is within the plugin model but magnifies the risk of the secret-in-prompt behavior.