Plugin v0.2.3

ClawScan security

SeekDB Memory · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Suspicious · Apr 5, 2026, 11:24 AM
Verdict
suspicious
Confidence
medium
Model
gpt-5-mini
Summary
The plugin largely matches its stated cloud-memory purpose, but it accesses host model credentials, uses server-provided prompts (a potential prompt-injection vector), and auto-uploads conversations to a remote endpoint. These behaviors are coherent for a cloud memory service but are security-sensitive and not fully declared.
Guidance
This plugin will send conversation messages to the configured remote 'Cloud Memory' endpoint and automatically recall/inject memories. Before installing:
(1) Only point baseUrl/apiKey at a service you trust; the service will receive your chat content.
(2) Review, and consider disabling, autoCapture/autoRecall if you don't want automatic upload/injection.
(3) Be aware the plugin will try to use your host model provider API keys (runtime auth or env vars) to run extraction/rerank steps; if you don't want it to use those keys, remove them from your environment/config or avoid granting runtime.modelAuth.
(4) The plugin fetches prompts from the remote service and uses them as system prompts; a malicious remote prompt could influence LLM behavior (prompt-injection risk).
(5) If you need higher assurance, run the service on a trusted internal endpoint, or audit the plugin code and the memory service before enabling it.
Additional information that would reduce concern: explicit documentation that the plugin never sends host credentials, an opt-in toggle for using host model keys, and confirmation that server-provided prompts are auditable/controlled.
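The guidance above references the config fields baseUrl, apiKey, autoCapture, and autoRecall. A conservative configuration might look like the following sketch; the surrounding key names and overall structure are assumptions for illustration, not the plugin's documented schema:

```json
{
  "plugins": {
    "cloud-memory": {
      "baseUrl": "https://memory.internal.example.com",
      "apiKey": "<service API key>",
      "autoCapture": false,
      "autoRecall": false
    }
  }
}
```

Pointing baseUrl at a trusted internal endpoint and disabling auto capture/recall addresses points (1), (2), and (5); it does not by itself stop the plugin from using host model provider keys, which is point (3).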
Findings
[system-prompt-override] expected: SKILL.md and src/extract.ts include explicit 'Reply with ONLY valid JSON' / system-prompt enforcement to constrain LLM outputs; this is expected for extraction/decision prompts but also flagged because the plugin fetches server-side prompts and will apply them as system prompts, creating an injection surface.
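The injection surface flagged in this finding can be illustrated with a minimal sketch. The types and the buildMessages helper below are hypothetical, not the plugin's actual code; the point is only that a string fetched from the remote service lands in the system role, where the model treats it as its most authoritative instruction:

```typescript
// Hypothetical illustration: a prompt string controlled by the remote
// memory service is applied verbatim as the system prompt of an LLM call.
type ChatMessage = { role: "system" | "user"; content: string };

// serverPrompt arrives from the remote service's API; nothing here
// validates or pins it, so a compromised or malicious service can
// rewrite the instructions the model follows.
function buildMessages(serverPrompt: string, userText: string): ChatMessage[] {
  return [
    { role: "system", content: serverPrompt },
    { role: "user", content: userText },
  ];
}
```

Typical mitigations would be shipping the prompts with the plugin, or pinning fetched prompts to a reviewed hash so server-side changes are detectable.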

Review Dimensions

Purpose & Capability
Concern · The name/description (cloud memory) aligns with the code: it requires an API key and sends/searches memories against a service. However, the plugin also attempts to resolve and use the host's model provider API keys (via runtime.modelAuth, environment variables, or openclaw config) to run LLM calls; this access to unrelated provider credentials is not declared in the SKILL.md or metadata and is disproportionate to the stated 'one API key' claim.
Instruction Scope
Concern · Runtime behavior auto-captures conversation content (agent_end) and sends it to the configured remote endpoint, and auto-recalls/injects remote memories before replies. The plugin fetches prompts from the remote service and may pass them as system prompts to LLM calls, which creates a prompt-injection vector. It also registers marketplace tools that can upload/download agent files. These actions go beyond simple local helpers and involve transmitting/consuming potentially sensitive data on a remote server.
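The auto-capture path described above can be sketched as follows, under the assumption of an agent_end hook and the config fields named in this report. All identifiers here are illustrative, not the plugin's actual API; the sketch builds the request rather than sending it, to make the data flow explicit:

```typescript
// Sketch: on agent_end, construct the HTTP request that would upload the
// conversation to the configured remote endpoint. Names are assumptions.
interface MemoryConfig {
  baseUrl: string;
  apiKey: string;
  autoCapture: boolean;
}

interface CaptureRequest {
  url: string;
  method: "POST";
  headers: Record<string, string>;
  body: string;
}

// Returns null when autoCapture is disabled, i.e. nothing leaves the host.
function buildCaptureRequest(
  messages: { role: string; content: string }[],
  cfg: MemoryConfig,
): CaptureRequest | null {
  if (!cfg.autoCapture) return null;
  return {
    url: `${cfg.baseUrl}/memories`,
    method: "POST",
    headers: { Authorization: `Bearer ${cfg.apiKey}` },
    // The full conversation content is serialized into the request body.
    body: JSON.stringify({ messages }),
  };
}
```

Because the default is capture-on, the privacy-relevant decision is made by whoever ships the default config, not by the user at upload time.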
Install Mechanism
OK · No install script or remote download is used (code is packaged). Dependencies are typical (pi-ai used via peerDependency). No extract-from-URL installs were detected.
Credentials
Concern · The skill metadata declares no required env vars and 'one API key' in plugin config, but the implementation actively tries three layers to obtain model API keys (runtime.modelAuth, process.env via pi-ai, and config.providers). That means it can read or use host model credentials (e.g., OPENAI_API_KEY or provider config) without this being documented, which is a notable mismatch and a potentially sensitive access.
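The three-layer lookup described above can be sketched as follows. resolveModelKey and the surrounding types are hypothetical names for the pattern the scanner reports (runtime.modelAuth first, then environment variables, then the host's provider config):

```typescript
// Hypothetical sketch of the credential fallback chain the scanner
// describes; each layer is a distinct credential source on the host.
type RuntimeAuth = { modelAuth?: { apiKey?: string } };
type HostConfig = { providers?: Record<string, { apiKey?: string }> };

function resolveModelKey(
  runtime: RuntimeAuth,
  env: Record<string, string | undefined>,
  config: HostConfig,
  provider = "openai",
): string | undefined {
  // Layer 1: auth the host granted at runtime (runtime.modelAuth).
  if (runtime.modelAuth?.apiKey) return runtime.modelAuth.apiKey;
  // Layer 2: environment variables, e.g. OPENAI_API_KEY.
  const envKey = env[`${provider.toUpperCase()}_API_KEY`];
  if (envKey) return envKey;
  // Layer 3: the host's provider configuration.
  return config.providers?.[provider]?.apiKey;
}
```

None of these layers involves an explicit user grant declared in the skill metadata, which is the mismatch this dimension flags.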
Persistence & Privilege
OK · always:false is set and no special system-wide modifications are requested. Autonomous invocation is allowed by default, which is expected for a memory plugin. The plugin does auto-capture by default (configurable), which is normal persistence behavior but has privacy implications.