Plugin v0.1.2
ClawScan security
Mobile GUI Bundle · ClawHub's context-aware review of the artifact, metadata, and declared behavior.
Scanner verdict
Suspicious · Apr 29, 2026, 7:25 AM
- Verdict: suspicious
- Confidence: medium
- Model: gpt-5-mini
- Summary: The bundle largely matches its stated purpose (Android GUI automation) but contains elements that increase risk and show small inconsistencies (embedded LLM system prompts, a screenshot/LLM exfiltration surface, and a pre-scan injection signal); review it before installing, and run it only in a trusted, isolated environment.
- Guidance: This bundle appears to implement what it claims (an OpenClaw Android GUI automation bridge), but it has real risk vectors you should accept only after inspection and isolation. Before installing or running:
  - Review the shipped code locally (especially dist/bundle.js, adapter/*, adapter/yadb, and scripts/start_bridge.sh). Search for network endpoints, unexpected HTTP calls, or hidden credentials.
  - Treat the configured LLM endpoint as sensitive: the bridge will send device screenshots and task text to whatever llm.api_base and llm.api_key you set. Only point the bundle at LLM services you trust, and avoid supplying production API keys for general-purpose cloud accounts.
  - If you must run it, execute it in an isolated environment (VM or container) and on devices that do not contain sensitive data, or use a disposable Android device.
  - Confirm the high-risk-action confirmation flow works as documented (the skill should prompt for explicit user confirmation before messaging, payments, or deletion) and test that the implementation enforces it before granting the skill broad autonomy.
  - Because the adapter includes system-level prompts and asks the model for chain-of-thought (<THINK>), ensure logs and outputs do not leak sensitive information. Consider removing or sanitizing CoT output in production.
  - The registry metadata does not list the LLM API key as a required env var; ensure you understand where secrets go (config.yaml vs. env) and secure that file (permissions, a vault, or env-based secret injection).

  Given these factors (prompt strings, screenshot-exfiltration potential, and the need to run local subprocesses), classify this bundle as suspicious until you or your security team reviews the code and restricts runtime privileges. If you cannot audit or isolate it, avoid installing it on production systems or on devices holding sensitive data.
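The local code review recommended above can be sketched as a small scan. The file extensions and regex patterns below are illustrative assumptions, meant as a starting point for a manual audit rather than a complete check:

```python
import re
from pathlib import Path

# Patterns that often indicate embedded endpoints or hardcoded secrets.
URL_RE = re.compile(r"https?://[^\s\"']+")
KEY_RE = re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*[\"'][^\"']+[\"']")

def audit_file(path: Path) -> list[str]:
    """Return lines in one file that contain URLs or key-like assignments."""
    hits = []
    try:
        text = path.read_text(errors="replace")
    except OSError:
        return hits
    for lineno, line in enumerate(text.splitlines(), 1):
        if URL_RE.search(line) or KEY_RE.search(line):
            hits.append(f"{path}:{lineno}: {line.strip()}")
    return hits

def audit_bundle(root: str) -> list[str]:
    """Walk the bundle directory and flag lines worth a manual look."""
    findings = []
    for path in sorted(Path(root).rglob("*")):
        if path.suffix in {".js", ".py", ".sh", ".yaml", ".json", ".md"}:
            findings.extend(audit_file(path))
    return findings
```

A flagged line is not proof of malice; it is simply where to start reading.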
- Findings
[system-prompt-override] expected: The adapter contains explicit SYSTEM_PROMPT and APP_DETECTION_PROMPT strings (action_parser.py). This is expected for an LLM-driven automation skill, since the code needs to instruct the LLM how to behave, but such prompts are also a prompt-injection vector: if the adapter is pointed at an untrusted LLM, or if model responses are consumed without strict parsing and safety checks, the model could be steered into performing undesired actions.
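A strict-parsing guard of the kind this finding calls for might look like the following sketch. The action names, parameter sets, and confirmation flag are hypothetical illustrations, not taken from action_parser.py:

```python
# Hypothetical action schema; real bundles should derive this from their
# documented tool surface, not from what the model happens to emit.
ALLOWED_ACTIONS = {
    "tap": {"x", "y"},
    "swipe": {"x1", "y1", "x2", "y2"},
    "type_text": {"text"},
}
HIGH_RISK = {"type_text"}  # e.g. could send a message; require confirmation

def validate_action(action: dict) -> dict:
    """Reject anything outside the declared schema before it reaches ADB."""
    name = action.get("name")
    if name not in ALLOWED_ACTIONS:
        raise ValueError(f"action {name!r} not in allowlist")
    params = action.get("params", {})
    extra = set(params) - ALLOWED_ACTIONS[name]
    if extra:
        raise ValueError(f"unexpected parameters: {sorted(extra)}")
    if name in HIGH_RISK and not action.get("user_confirmed"):
        raise PermissionError(f"{name!r} requires explicit user confirmation")
    return action
```

The point is that model output is treated as untrusted input: unknown actions and unknown parameters fail closed instead of being forwarded to the device.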
Review Dimensions
- Purpose & Capability
- ok: The name and description (mobile GUI automation) align with the shipped files: an MCP server (dist/bundle.js), a Python adapter, an ADB bridge, and skills/metadata. The files and scripts (adapter/, scripts/start_bridge.sh, .mcp.json) are coherent for controlling Android devices over ADB and calling an LLM for action planning.
- Instruction Scope
- concern: SKILL.md and the adapter instruct the agent to capture device screenshots and send them to a configured LLM (llm.api_base + llm.api_key), and to spawn local Python subprocesses. The adapter code contains explicit system-style prompts and asks the LLM to output chain-of-thought (<THINK>) and structured actions. These behaviors are expected for an autonomous GUI agent, but they expand the scope to include potentially sensitive data (full device screenshots, task details) being transmitted to whichever LLM endpoint is configured. SKILL.md also instructs using a dangerous install flag and treating the bundle as operator-managed, acknowledging that the runtime performs actions that trigger security scanners.
- Install Mechanism
- ok: No install spec is declared in the registry metadata (instruction-only bundle), but the bundle ships runtime artifacts (dist/bundle.js, adapter/*). There are no remote download or install steps in the bundle itself. The README warns that the bundle triggers dangerous-code scanners because Node spawns a Python subprocess; the lack of an automatic remote installer reduces supply-chain risk, but you must still inspect the included JS and Python files locally before running.
- Credentials
- concern: Metadata declares no required env vars, but runtime configuration (config.example.yaml / SKILL.md) requires supplying llm.api_base and llm.api_key (sensitive). .mcp.json also sets MOBILE_GUI_PYTHON in the MCP environment. Requesting an LLM API key and sending screenshots to that endpoint is coherent for an LLM-driven automation skill, but it is a high-privilege capability (it can exfiltrate screenshots and device contents). The skill does not request unrelated cloud credentials, which is good, but the expectation that secret API keys live in config.yaml (file-based) rather than in declared env vars is an inconsistency worth noting.
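One way to work around the config-vs-env inconsistency is to prefer an environment variable and fall back to config.yaml. MOBILE_GUI_LLM_API_KEY is a made-up name here, since the bundle's metadata declares no env vars; pick and document your own:

```python
import os

def resolve_api_key(config: dict) -> str:
    """Prefer an env var over a key stored in config.yaml on disk.

    MOBILE_GUI_LLM_API_KEY is a hypothetical variable name; the registry
    metadata declares none, so the operator must choose and secure one.
    """
    key = os.environ.get("MOBILE_GUI_LLM_API_KEY") or config.get("llm", {}).get("api_key")
    if not key:
        raise RuntimeError("no LLM API key configured (env var or config.yaml)")
    return key
```

Env-based injection keeps the secret out of a world-readable file and out of backups of the bundle directory.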
- Persistence & Privilege
- note: always:false (normal). The skill can be invoked autonomously (the platform default) and will run a local MCP/adapter process that can access ADB. Autonomous invocation plus the ability to access devices and call external LLM endpoints increases the blast radius if the bundle or its LLM config is untrusted. The bundle does not appear to modify other skills or global config, and it documents requiring operator review before install.
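The CoT-leakage concern noted above (and the guidance to sanitize <THINK> output before it reaches logs) could be addressed with a log-side filter along these lines. The tag format is assumed from the report; the bundle's actual output framing may differ:

```python
import re

# Assumes the adapter asks the model for chain-of-thought inside
# <THINK>...</THINK> tags, as the scan report describes.
THINK_RE = re.compile(r"<THINK>.*?</THINK>", re.DOTALL)

def sanitize_for_logging(model_output: str) -> str:
    """Drop chain-of-thought spans before the output touches any log sink."""
    return THINK_RE.sub("[CoT redacted]", model_output).strip()
```

Applying this at the single choke point where model responses are logged is more reliable than trying to scrub each individual log call.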
