Plugin v0.2.3
ClawScan security
Plugin · ClawHub's context-aware review of the artifact, metadata, and declared behavior.
Scanner verdict
Suspicious · Mar 24, 2026, 6:48 PM
- Verdict: suspicious
- Confidence: medium
- Model: gpt-5-mini
- Summary: The package claims to be a memory plugin, but its runtime code is missing and its configuration exposes fields (an LLM API key, auto-capture behaviors) that are not reflected in its declared requirements; this mismatch warrants caution.
- Guidance: This package is inconsistent: it declares a memory plugin that expects a runtime file (./dist/index.mjs) and an LLM API key, but neither the runtime code nor any declared environment requirements are included. Before installing or enabling this skill:
  1. Ask the author for the missing dist/index.mjs (or a rebuild) and for a clear explanation of how and where memories are stored (files, remote service, database).
  2. Confirm how the llmApiKey is provided and whether any credentials will be sent to third-party endpoints.
  3. Note that the commercial license forbids redistribution and reverse-engineering, which reduces transparency.
  4. Do not enable autoCapture/autoRecall or provide secret API keys until you have verified the runtime code and storage behavior, and consider testing in a sandbox environment first.
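The pre-install checks above can be sketched mechanically. The following is a minimal, hedged sketch, not part of this plugin's published tooling: the manifest filename, the fallback entry path, and the credential-hint list are assumptions. It verifies that the declared entry file actually ships in an extracted bundle and flags credential-looking config fields:

```python
import json
from pathlib import Path

# Substrings that suggest secret material (assumption: extend as needed).
SENSITIVE_HINTS = ("apikey", "api_key", "token", "secret", "password")

def preinstall_check(bundle_dir: str) -> list:
    """Return a list of warnings for an extracted plugin bundle."""
    root = Path(bundle_dir)
    warnings = []

    pkg_path = root / "package.json"
    if not pkg_path.is_file():
        return ["no package.json found"]
    pkg = json.loads(pkg_path.read_text())

    # 1) The declared entry point must actually ship in the bundle
    #    (hypothetical fallback path mirrors the one this report mentions).
    entry = pkg.get("main") or "./dist/index.mjs"
    if isinstance(entry, str) and not (root / entry).is_file():
        warnings.append("declared entry %r is missing from the bundle" % entry)

    # 2) Flag config fields that look like credentials.
    config = pkg.get("config", {}) or {}
    for field in config:
        if any(h in field.lower() for h in SENSITIVE_HINTS):
            warnings.append("config field %r looks credential-like; "
                            "verify where it is sent before enabling" % field)
    return warnings
```

Run against a bundle like the one reviewed here, both checks would fire: the entry file is absent and llmApiKey matches a credential hint.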
Review Dimensions
- Purpose & Capability
- Concern: The bundle identifies itself as a 'memory' plugin with many runtime configuration options (autoCapture, autoRecall, llmEndpoint, llmApiKey, qmdCollection, probe queries, etc.), but no runtime code (dist/index.mjs) is included in the published files. The declared capability (a memory extension that injects and consumes conversation context) is therefore not actually present in the package as published. The package.json/openclaw.plugin.json also expect an LLM API key and filesystem access (auto-detecting .openclaw-memory.json), yet the skill declares no required environment variables or config paths; these omissions are inconsistent with the stated purpose.
- Instruction Scope
- Concern: There is no SKILL.md with runtime instructions; the provided files are package metadata only. The plugin metadata hints at behaviors that could read and write memory files and contact an LLM endpoint (e.g., default llmEndpoint and llmApiKey config fields, the autoCapture option, and the mention of auto-detecting .openclaw-memory.json). Because no runtime code is included, what the plugin would actually do at runtime cannot be verified; that lack of clarity is a scope concern.
- Install Mechanism
- OK: No install spec or code is included in the published bundle, so nothing will be written to disk or downloaded during install. From an install-mechanism perspective this is low risk. However, the absence of the referenced runtime file (dist/index.mjs) makes the package non-functional as published.
- Credentials
- Concern: The plugin schema exposes an llmApiKey and llmEndpoint (defaulting to api.openai.com) and numerous config flags that imply it will store and recall conversation content. Yet the skill metadata declares no required environment variables or a primary credential. This mismatch is concerning: the plugin appears to need a secret credential (an LLM API key) to operate but does not declare how it expects to receive it, and the schema contains sensitive-sounding fields without justification in the manifest.
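The credentials mismatch described here is checkable in code. Below is a small illustrative sketch, under the assumption that the config schema is a dict of field names and that declared environment requirements arrive as a set of names (neither shape is specified by this plugin's manifest): it lists schema fields that look like secrets but map to no declared env var.

```python
def undeclared_credentials(config_schema: dict, declared_env: set) -> list:
    """Config-schema fields that look like secrets but have no declared env var."""
    # Hint substrings are an assumption; tune for the registry's conventions.
    hints = ("apikey", "api_key", "token", "secret", "password")
    declared = {name.upper() for name in declared_env}
    return [field for field in config_schema
            if any(h in field.lower() for h in hints)
            and field.upper() not in declared]
```

For a schema like this plugin's, with llmApiKey and llmEndpoint present but nothing declared, the function would flag llmApiKey alone, which is exactly the inconsistency noted above.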
- Persistence & Privilege
- Note: The plugin is not marked 'always' and is user-invocable (normal). The config explicitly supports autoCapture and autoRecall, which imply persistent storage of conversation context; the published files include no storage implementation details. There is no evidence it attempts to modify other skills or system-wide configuration, but enabling capture would persist conversation data if the runtime were present, so treat persistence as data-sensitive.
