Plugin v0.5.0
ClawScan security
KongBrain · ClawHub's context-aware review of the artifact, metadata, and declared behavior.
Scanner verdict
Suspicious · Apr 25, 2026, 8:25 PM
- Verdict
- suspicious
- Confidence
- medium
- Model
- gpt-5-mini
- Summary
- The plugin's code and README mostly match its stated purpose (a SurrealDB-backed persistent memory engine), but there are important metadata inconsistencies and runtime behaviors (system-prompt/directive injection, auto-downloaded local model, background daemons, and console logging of connection info) that warrant caution before installing.
- Guidance
- Key things to consider before installing:
  - Metadata mismatch: the registry metadata shown to the evaluator omitted the 'surreal' binary and SURREAL_* env vars even though SKILL.md and openclaw.plugin.json require them — verify that the plugin manifest in the registry and the SKILL.md are consistent before automatic installation.
  - SurrealDB access: you will need a running SurrealDB instance and credentials. Follow the SKILL.md advice: bind to 127.0.0.1 and do not expose SurrealDB to the network unless you intentionally want remote access. Use strong, non-default credentials.
  - Local model download & native deps: the default local provider auto-downloads a ~420MB BGE‑M3 GGUF from Hugging Face and depends on node-llama-cpp (native bindings). If you want to avoid large downloads or native modules, switch to an 'openai-compat' provider before first run.
  - Sensitive keys & re-embedding: switching to openai-compat requires an API key (env var). The re-embed CLI and backfill tools will send text to whatever embedding endpoint you configure — make sure that endpoint is trusted and that you are comfortable with that dataset being sent there. The migration CLI logs SurrealDB and provider info to stdout; it avoids putting API keys into configs by naming the env var, but you are responsible for how you set OPENAI_BASE_URL/OPENAI_API_KEY.
  - Prompt/directive injection: KongBrain intentionally injects directives/'cognitive checks' into system prompts and structured outputs. This is part of its design, but it expands the plugin's ability to steer LLM behavior. If you rely on higher-level system policies, review the implementation and test in an isolated environment first.
  - Least privilege and isolation: run the plugin in an environment you trust (e.g., a container or VM) until you have audited the code and configuration. Review and pin the npm package version, audit native dependencies (node-llama-cpp), and consider running SurrealDB locally bound to loopback or in a container with limited network access.
  - Privacy: review code paths that write under your home directory (~/.kongbrain) and the SurrealDB contents if you have privacy concerns (the engine stores persistent memories of conversations). If you need guarantees, consider an ephemeral or containerized deployment and regular audits/backups.
- If you want, I can: (1) list the exact files that read/write the home directory and the network endpoints contacted, (2) point out where in the code prompt/directive injection happens, or (3) extract the exact env variables and CLI commands the plugin will run at runtime.
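The loopback-binding advice above can be sketched concretely. This is a hedged example, not part of the plugin's documentation: the `surreal start` flags shown exist in recent SurrealDB CLI releases (verify with `surreal start --help`), the storage path and namespace/database values are illustrative, and the SURREAL_* variable names are the ones this review says SKILL.md requires.

```shell
# Start SurrealDB bound to loopback only, with non-default credentials
# (flags from recent SurrealDB CLI releases; confirm against your version).
surreal start --bind 127.0.0.1:8000 \
  --user kongbrain_admin --pass 'use-a-strong-generated-password' \
  file:/var/lib/surrealdb/kongbrain.db

# In the shell that will run the plugin, point it at the loopback instance.
# Variable names per SKILL.md; namespace/db values are illustrative.
export SURREAL_URL="ws://127.0.0.1:8000"
export SURREAL_USER="kongbrain_admin"
export SURREAL_PASS="use-a-strong-generated-password"
export SURREAL_NS="kongbrain"
export SURREAL_DB="memory"
```

Keeping the bind address on 127.0.0.1 means nothing off-host can reach the database even if the credentials leak via the CLI's stdout logging noted below.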
- Findings
- [system-prompt-override] (expected): The SKILL.md and code explicitly reference 'cognitive checks' and 'directive injection', and the token-flow docs describe system-prompt sections. The plugin intentionally manipulates LLM prompts/structured output as part of retrieval and quality checks; this makes the finding expected, but it also increases risk because it grants the plugin the ability to inject directives into model prompts.
Review Dimensions
- Purpose & Capability
- concern: The SKILL.md, openclaw.plugin.json, package.json, and ~40 source files all implement a SurrealDB-backed context engine with pluggable embedding providers (local BGE‑M3 via node-llama-cpp or 'openai-compat' HTTP). That aligns with the declared purpose. However, the top-level registry metadata provided to the evaluator (saying 'no required binaries' and 'no required env vars' / 'No install spec — instruction-only') contradicts the SKILL.md and openclaw.plugin.json, which require the 'surreal' binary and SURREAL_* env vars; this metadata mismatch is significant and could mislead an automated installer or reviewer.
- Instruction Scope
- concern: Runtime instructions and code perform persistent storage (SurrealDB), background extraction daemons, local model downloads, file I/O under ~/.kongbrain, and tooling that can rewrite embedding provider tags and re-embed rows. The docs and code also describe 'cognitive checks' and 'directive injection' (system-prompt manipulation). The pre-scan flagged a 'system-prompt-override' pattern; in context this appears intentional (the plugin injects directives into prompts as part of its retrieval/quality pipeline), but that capability broadens scope and can be used to alter runtime behavior beyond simple data storage. The skill also logs connection/operation info to stdout (e.g., SurrealDB URL, provider id) during CLI tools.
- Install Mechanism
- note: Installation is via npm (package: kongbrain) / the openclaw plugin mechanism (openclaw plugins install). package.json depends on node-llama-cpp and surrealdb. The plugin will (by default) auto-download a ~420MB BGE‑M3 GGUF from Hugging Face on first startup if using the local provider — Hugging Face is a known host, but this is a large runtime download and writes to disk. No arbitrary or opaque remote download hosts were observed; the install method is standard (npm + model fetch).
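The guidance above recommends pinning and auditing before first run. As a sketch using standard npm commands (the 0.5.0 version is illustrative, taken from the page header, and should be checked against the registry):

```shell
# Pin an exact version instead of a semver range (version shown is illustrative).
npm install --save-exact kongbrain@0.5.0

# Audit the dependency tree, including the native and DB dependencies
# this review identifies (node-llama-cpp, surrealdb).
npm audit
npm ls node-llama-cpp surrealdb
```

Pinning matters here because a floating range would let a future release silently change the model-download or prompt-injection behavior described in this report.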
- Credentials
- note: Required credentials (SURREAL_URL, SURREAL_USER, SURREAL_PASS, SURREAL_NS, SURREAL_DB) are appropriate for a DB-backed plugin. Optional/alternate credentials (OPENAI_API_KEY / OPENAI_BASE_URL, or a custom env var named in embedding.openaiCompat.apiKeyEnv) are reasonable when switching to remote embedding providers. The earlier metadata that claimed 'no required env vars' is inconsistent with these requirements and should be corrected. The plugin writes model weights to ~/.kongbrain and persists learned weights; that is expected but notable.
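If you switch to the openai-compat provider, supplying the key via an environment variable (as the apiKeyEnv mechanism intends) keeps it out of config files. A minimal sketch with illustrative values — the endpoint URL and key placeholder are assumptions, not values from the plugin:

```shell
# Point the plugin at a trusted embedding endpoint (URL is illustrative).
export OPENAI_BASE_URL="https://embeddings.internal.example.com/v1"
# In practice, load the key from a secret store; placeholder shown here.
export OPENAI_API_KEY="example-key-load-from-secret-store"

# Confirm the endpoint (never echo the key) before running the re-embed CLI,
# since re-embedding will send your stored memories to this endpoint.
echo "Embedding endpoint: $OPENAI_BASE_URL"
```

Verifying the endpoint first matters because, as noted above, the re-embed and backfill tools transmit your dataset to whatever URL is configured.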
- Persistence & Privilege
- ok: The plugin persists state (SurrealDB tables, ~/.kongbrain weights, and local data directories) and runs background daemon workers — this is consistent with a persistent memory engine. It does not request 'always: true'. It does not appear to modify other plugins' configs. Persistent files are stored under the user's home directory (e.g., ~/.kongbrain), which is proportional to its function.
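One way to get the container-with-limited-network isolation suggested in the guidance is to publish SurrealDB's port only on the host's loopback interface. The `docker run` flags are standard; treat the image tag and the server arguments after `start` as assumptions to verify against SurrealDB's own documentation:

```shell
# Run SurrealDB in a container, reachable only from this host's loopback.
# -p 127.0.0.1:8000:8000 publishes the port to localhost only, not the LAN.
docker run --rm -d --name kongbrain-surreal \
  -p 127.0.0.1:8000:8000 \
  -v surreal-data:/data \
  surrealdb/surrealdb:latest \
  start --user kongbrain_admin --pass 'use-a-strong-generated-password' \
  file:/data/kongbrain.db
```

An ephemeral variant (drop the `-v` volume) gives the "no persistent memories survive" guarantee mentioned above, at the cost of losing the engine's stored context on restart.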
