Plugin v10.9.0
ClawScan security
Executive OS · ClawHub's context-aware review of the artifact, metadata, and declared behavior.
Scanner verdict
Suspicious · Apr 26, 2026, 10:38 AM
- Verdict: suspicious
- Confidence: medium
- Model: gpt-5-mini
- Summary: The package contains a large, powerful Executive OS codebase that can read and write config under your home directory, call external LLM endpoints, and run daemon/automation components. However, the SKILL metadata and SKILL.md do not declare required credentials, endpoints, or runtime behaviors, creating important mismatches you should review before installing.
- Guidance
- This skill is large and powerful, but the public instructions omit key runtime behaviors and credentials. Before installing or enabling it:
  - Assume the code may write under your home directory (~/.openclaw/) and may launch background processes or perform network calls. Inspect the files referenced (CONFIG_PATHs, heartbeat scripts, daemon_manager, auto_git, auto_backup_uploader, connectors, alerting channels) to confirm what will run.
  - The package reads LLM-related environment variables (LLM_API_KEY, LLM_BASE_URL, LLM_MODEL) and a config file under ~/.openclaw; treat those as required even though they are not declared. Do not place sensitive credentials in your environment if you are unsure.
  - If you want to test safely, run in an isolated environment (container or VM) and deny persistent mounts and network access until you have audited the code paths you care about (e.g., skill acquisition, auto-fetch, webhook channels).
  - Ask the author for explicit documentation: which modules run on invocation versus which spawn daemons; what external endpoints are contacted; what credentials are needed; and a minimal manifest of which files will be created under your home directory.
  - Reason for 'suspicious': the skill's code broadly matches an executive OS, but the metadata and runtime instructions fail to disclose external/network/daemon behaviors and required credentials, a mismatch that requires manual review before trusting or enabling autonomous operation.
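The "inspect the files referenced" step above can be sketched as a static pre-install scan. This is illustrative only, not ClawScan's implementation: the `audit` helper name and the risk patterns are assumptions, seeded from the behaviors this report flags (credential reads, ~/.openclaw I/O, process spawning, network calls).

```python
"""Minimal static audit sketch for an undocumented skill bundle."""
import re
from pathlib import Path

# Patterns drawn from the behaviors flagged in this report; extend as needed.
RISK_PATTERNS = {
    "credential read": re.compile(r"LLM_API_KEY|LLM_BASE_URL|LLM_MODEL"),
    "home config I/O": re.compile(r"\.openclaw"),
    "process spawn":   re.compile(r"subprocess|os\.fork|Popen|daemon"),
    "network call":    re.compile(r"requests\.|urllib|https?://"),
}

def audit(skill_dir: str) -> list[tuple[str, int, str]]:
    """Return (file, line number, risk label) for every flagged line."""
    hits = []
    for path in Path(skill_dir).rglob("*"):
        if not path.is_file() or path.suffix not in {".py", ".sh"}:
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for label, pattern in RISK_PATTERNS.items():
                if pattern.search(line):
                    hits.append((str(path), lineno, label))
    return hits
```

Running `audit` over the unpacked bundle and reading every flagged line is a cheap first pass before any code executes; it will not catch dynamically constructed paths or obfuscated calls, so it complements rather than replaces a sandboxed run.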
Review Dimensions
- Purpose & Capability
- Concern: The skill's name and README claim an 'Executive Personal OS', which justifies many internal modules (governance, execution, connectors). However, the bundle includes modules that imply network access, automatic Git sync, daemon management, auto backup/upload, connector factories, and skill acquisition (auto-fetching capabilities). Those capabilities reasonably belong to an 'executive OS', but the public metadata declares no required credentials or config, and the SKILL.md does not disclose external endpoints or what will run: an inconsistency between the claimed purpose and the information provided to an end user.
- Instruction Scope
- Concern: SKILL.md is a high-level description with example commands for running tests, but it does not document runtime behaviors such as reading and writing ~/.openclaw files, contacting LLM endpoints, launching daemons, performing Git syncs, or calling webhook channels. The included code (e.g., core/llm/llm_client.py, LLMEngine, infrastructure/daemon_manager.py, auto_git, auto_backup_uploader, connectors, alerting channels) references config files and network calls and will perform filesystem and network I/O if executed; the runtime instructions omit these actions and any consent/approval flow for background or external operations.
- Install Mechanism
- Note: No install spec (instruction-only) reduces the formal installation risk surface, but the package contains 500+ files and scripts (including heartbeat daemons and shell scripts). Although nothing is auto-downloaded during an 'install' step here, the code itself will create directories and caches under the user's home (e.g., ~/.openclaw/.cache, ~/.openclaw/workspace/...), and it includes code paths for acquiring and enabling extensions which could fetch remote code at runtime. The absence of an install step is not sufficient reassurance given the size and capabilities of the codebase.
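One way to contain those home-directory writes while auditing is to point HOME at a throwaway directory before importing any of the skill's modules. This sketch assumes a POSIX system where the code resolves paths via $HOME; the helper name is illustrative.

```python
# Redirect HOME so any ~/.openclaw/... writes land in a disposable tree.
# POSIX only: Windows code typically resolves the home dir via USERPROFILE.
import os
import tempfile

def sandboxed_home() -> str:
    """Create a throwaway directory and make it the process's HOME."""
    fake_home = tempfile.mkdtemp(prefix="skill-audit-")
    os.environ["HOME"] = fake_home
    return fake_home
```

After calling `sandboxed_home()`, `Path.home()` and `os.path.expanduser("~")` in subsequently imported code resolve to the disposable tree, so you can inspect exactly which files the bundle creates and then delete the whole directory. Note this does not block network access or absolute-path writes; a container or VM remains the stronger boundary.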
- Credentials
- Concern: The metadata declares no required environment variables or credentials, yet multiple files read environment or config values (e.g., core/llm/llm_client.py reads CONFIG_PATH or the environment variables LLM_API_KEY, LLM_BASE_URL, LLM_MODEL; core/llm/llm.py writes a cache to the user's home and sends x-api-key). Alerting/connector modules suggest other external credentials (webhooks, Git/GitHub) will be used if configured. This mismatch (no declared env vars, but code that expects keys and config) is an incoherence: the skill could attempt to use any credentials present in your environment or config files without that being declared up front.
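The mismatch can be pictured with a minimal reconstruction of the lookup pattern described above. This is hypothetical code, not the bundle's actual llm_client.py; the config filename and merge order are assumptions, and only the variable names (LLM_API_KEY, LLM_BASE_URL, LLM_MODEL) and the ~/.openclaw location come from the report.

```python
# Hypothetical reconstruction of an undeclared credential lookup:
# settings come from the environment or a home-directory config file,
# with nothing in the skill metadata telling the user either is required.
import json
import os
from pathlib import Path

CONFIG_PATH = Path.home() / ".openclaw" / "config.json"  # assumed location

def load_llm_settings() -> dict:
    """Merge the undeclared config file with env vars; env vars win."""
    settings: dict = {}
    if CONFIG_PATH.exists():
        settings.update(json.loads(CONFIG_PATH.read_text()))
    for key in ("LLM_API_KEY", "LLM_BASE_URL", "LLM_MODEL"):
        if key in os.environ:
            settings[key] = os.environ[key]
    return settings
```

The hazard is the silent fallback: whatever keys happen to be in your environment are picked up and sent to whatever base URL the config names, with no declaration in the metadata that this will happen.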
- Persistence & Privilege
- Concern: The package contains daemon scripts (infrastructure/heartbeat_control.sh, heartbeat_daemon.sh), a daemon_manager, auto-sync/auto_git, and auto_backup_uploader components, i.e., code intended to run continuously or as background processes. The skill metadata does not request 'always: true', but the codebase appears designed to persist state and launch processes. That increases the potential blast radius if the agent runs components that spawn background tasks or modify user files. This behavior is not documented in SKILL.md.
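For reference while auditing, background persistence in Python code usually looks like a detached child process that outlives its parent. A generic sketch, not taken from the bundle, showing the pattern to search for:

```python
# Generic detached-process pattern (what background persistence looks
# like in Python); shown so an auditor knows what to grep for.
import subprocess

def spawn_detached(cmd: list[str]) -> int:
    """Launch cmd in its own session so it survives the parent's exit."""
    proc = subprocess.Popen(
        cmd,
        start_new_session=True,        # new session: no controlling terminal
        stdout=subprocess.DEVNULL,     # discard output instead of inheriting it
        stderr=subprocess.DEVNULL,
    )
    return proc.pid
```

Calls like this, or shell equivalents such as `nohup`/`setsid`/`&` in the bundle's heartbeat scripts, are the code paths that turn a one-shot skill invocation into a long-lived process, which is why undisclosed daemon components warrant review before enabling.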
