
EdgeClaw

EdgeClaw Box: Edge-Cloud Collaborative AI Agent for enhanced privacy, cost-efficiency, and security.

Introduction

EdgeClaw is an Edge-Cloud Collaborative AI Agent jointly developed by Tsinghua University, Renmin University of China, AI9Stars, ModelBest, and OpenBMB. Built on the OpenClaw framework, it addresses a key limitation of current AI agent architectures: heavy reliance on cloud processing, which raises privacy concerns and wastes resources. EdgeClaw brings edge computing back into the loop with a three-tier security system (S1 Passthrough, S2 Desensitization, S3 Local) and a dual-engine detection mechanism on the edge: a rule-based detector for near-instant identification of known sensitive data patterns (such as API keys or private keys), and a local LLM semantic detector that understands the context and complexity of each request.

By classifying requests in real time, EdgeClaw routes them to the most appropriate processing path, prioritizing privacy and cost-effectiveness. Sensitive data is desensitized on-device before being sent to the cloud, while truly private data is processed entirely locally, with the cloud receiving only placeholders to maintain contextual continuity. Because this edge-cloud forwarding happens transparently, developers gain seamless privacy protection without altering existing business logic, making EdgeClaw a drop-in replacement for OpenClaw.

Key Highlights:

  • 🤝 Edge-Cloud Division of Labor: The edge handles data attribute perception (sensitivity, complexity), while the cloud manages reasoning and generation. This synergy leverages the edge's privacy capabilities and the cloud's processing power for complex tasks.
  • 🔒 Three-Tier Security Collaboration: EdgeClaw offers granular control over data handling:
    • S1 (Safe): Data is sent directly to the cloud model.
    • S2 (Sensitive): Data is desensitized on-device before being forwarded to the cloud.
    • S3 (Private): Data is processed entirely locally, with the cloud only receiving placeholders.
  • 💰 Cost-Aware Collaboration: A local LLM acts as a "Judge" to assess task complexity, routing simple requests to cheaper models and complex ones to more powerful, expensive models. This can lead to significant cost savings, with 60-80% of requests potentially being handled by low-cost models.
  • 🚀 Plug-and-Play, Zero Code Changes: EdgeClaw integrates seamlessly via its Hook mechanism, intercepting and routing requests without requiring modifications to the core business logic.

Three-Tier Security Collaboration Details:

EdgeClaw employs a robust three-level sensitivity classification system:

  • S1 (Safe): For data deemed safe for cloud processing, such as general queries or creative writing prompts like "Write a poem about spring."
  • S2 (Sensitive): For data containing Personally Identifiable Information (PII) or other sensitive details, like addresses or phone numbers. EdgeClaw's local LLM detects and extracts this PII, which is then programmatically replaced with markers like [REDACTED:PHONE] before being forwarded to the cloud via a privacy proxy. The cloud model receives a desensitized version.
  • S3 (Private): For highly sensitive data such as passwords, SSH keys, or pay slips. This data is processed entirely locally, ensuring it never leaves the user's device. The cloud-side history only receives a placeholder indicating local processing.
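
The S2 path can be sketched as a simple replace-with-marker step. In EdgeClaw the PII spans are extracted by the local LLM; the regex patterns and function names below are illustrative assumptions, not the project's actual detection rules.

```typescript
// Hypothetical sketch of S2 desensitization: detected PII spans are
// replaced with typed markers (e.g. [REDACTED:PHONE]) before the
// request leaves the device for the cloud model.
type PiiPattern = { type: string; re: RegExp };

const PII_PATTERNS: PiiPattern[] = [
  { type: "PHONE", re: /\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b/g },
  { type: "EMAIL", re: /\b[\w.+-]+@[\w-]+\.[\w.]+\b/g },
];

function desensitize(prompt: string): string {
  let out = prompt;
  for (const { type, re } of PII_PATTERNS) {
    // Replace every match with a typed marker the cloud model can
    // reason about without ever seeing the raw value.
    out = out.replace(re, `[REDACTED:${type}]`);
  }
  return out;
}
```

The cloud model still sees the *shape* of the request ("call [REDACTED:PHONE] tomorrow"), which is usually enough for it to answer, while the raw value stays on-device.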

Dual Detection Engines:

EdgeClaw utilizes two primary detection engines for comprehensive analysis:

  • Rule Detector: Operates at near-zero latency (~0ms) using keywords and regular expressions to identify known sensitive patterns like API keys, database connection strings, and PEM key headers.
  • Local LLM Detector: Employs a local small LLM (e.g., MiniCPM-4.1, Qwen3.5) for semantic understanding, allowing it to identify sensitive data based on context. This engine has a latency of approximately 1-2 seconds and can handle multilingual data and complex contextual reasoning, such as identifying pay slips or addresses in various formats.

These engines can be stacked and combined, with their execution order and weighting configurable via the checkpoints setting in the privacy configuration.
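
A minimal sketch of the rule detector's role, assuming a flat list of pattern/level pairs (the pattern set and names below are assumptions for illustration, not EdgeClaw's actual rules):

```typescript
// Illustrative rule detector: known-sensitive patterns are matched with
// plain regexes at near-zero latency; any hit immediately assigns a
// sensitivity level, and only misses fall through to the LLM detector.
const RULES: { name: string; re: RegExp; level: "S2" | "S3" }[] = [
  { name: "pem_private_key", re: /-----BEGIN [A-Z ]*PRIVATE KEY-----/, level: "S3" },
  { name: "api_key_prefix", re: /\bsk-[A-Za-z0-9]{20,}\b/, level: "S3" },
  { name: "db_conn_string", re: /\b(postgres|mysql|mongodb):\/\/\S+:\S+@/, level: "S2" },
];

function ruleDetect(prompt: string): { rule: string; level: string } | null {
  for (const { name, re, level } of RULES) {
    if (re.test(prompt)) return { rule: name, level };
  }
  return null; // no known pattern; defer to the slower LLM detector
}
```

This ordering is what makes the stacking worthwhile: the cheap engine resolves the obvious cases instantly, so the 1-2 second LLM pass only runs on ambiguous input.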

Composable Router Pipeline:

Security and cost-awareness run in the same pipeline. The RouterPipeline uses a two-phase strategy:

  • Phase 1 (Fast Routers): Routers with higher weights (≥ 50), like the security router, run in parallel. If sensitive data is detected, the pipeline short-circuits, preventing further processing by slower routers.
  • Phase 2 (Slow Routers): Routers with lower weights (< 50), such as the cost-aware router, run on demand. The cost-aware router uses the LLM Judge to classify task complexity, routing requests to appropriate cloud models based on cost and capability.

This design prioritizes security by running the security check first, ensuring sensitive data is handled appropriately before any cost optimization logic is applied.
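
The two-phase strategy can be sketched as follows. This is a simplified sequential sketch under assumed names (the real pipeline runs phase-1 routers in parallel, and `RouterPipeline`'s actual interface may differ):

```typescript
// Sketch of the two-phase RouterPipeline strategy:
// phase 1 runs fast routers (weight >= 50) and short-circuits on a hit;
// phase 2 runs slow routers only if phase 1 produced no decision.
type Decision = { route: string } | null;
type Router = { name: string; weight: number; run: (prompt: string) => Decision };

function runPipeline(routers: Router[], prompt: string): string {
  const fast = routers.filter(r => r.weight >= 50);
  const slow = routers.filter(r => r.weight < 50);
  for (const r of fast) {
    const d = r.run(prompt);
    if (d) return d.route; // short-circuit: slower routers never see the prompt
  }
  for (const r of slow) {
    const d = r.run(prompt);
    if (d) return d.route;
  }
  return "cloud:default"; // fallback when no router claims the request
}
```

Because the security router sits in the high-weight phase, a sensitive prompt is diverted before the cost-aware LLM Judge ever runs on it.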

Smart Caching:

To further optimize performance, EdgeClaw implements prompt hash caching (SHA-256 with a 5-minute TTL). Identical requests are not re-evaluated by the detection engines, reducing latency and computational overhead.
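
The caching scheme described above can be sketched in a few lines. The interface is an assumption; only the SHA-256 keying and 5-minute TTL come from the description:

```typescript
import { createHash } from "node:crypto";

// Minimal sketch of prompt-hash caching: identical prompts within the
// TTL reuse the cached classification instead of re-running detectors.
const TTL_MS = 5 * 60 * 1000; // 5-minute TTL, as described above
const cache = new Map<string, { result: string; expires: number }>();

function classifyCached(prompt: string, classify: (p: string) => string): string {
  const key = createHash("sha256").update(prompt).digest("hex");
  const hit = cache.get(key);
  if (hit && hit.expires > Date.now()) return hit.result; // cache hit: skip detectors
  const result = classify(prompt); // cache miss: run the detection engines
  cache.set(key, { result, expires: Date.now() + TTL_MS });
  return result;
}
```

Hashing the prompt rather than storing it verbatim also keeps raw (possibly sensitive) text out of the cache keys.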

Installation and Configuration:

EdgeClaw can be installed from source or integrated into an existing local LLM environment. The recommended installation involves cloning the repository, installing dependencies via pnpm, building the project, and then running the openclaw onboard --install-daemon command. For local LLM integration, Ollama is recommended, with support for other OpenAI-compatible APIs like vLLM, LMStudio, and SGLang. Configuration is managed through openclaw.json and customizable Markdown files in extensions/guardclaw/prompts/ for detection rules, Guard Agent behavior, and cost-aware routing logic.
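
A configuration along these lines might tie the pieces together. Note that every field name below is a hypothetical sketch (only the `checkpoints` setting is documented above); consult the project's own `openclaw.json` reference for the actual schema:

```json
{
  "privacy": {
    "checkpoints": ["rule-detector", "llm-detector"],
    "localModel": "ollama/minicpm"
  },
  "routing": {
    "security": { "weight": 90 },
    "costAware": { "weight": 10 }
  }
}
```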

Key Features:

  • Edge-Cloud Collaboration: Seamlessly integrates edge and cloud resources for AI tasks.
  • Three-Tier Security: S1 (Passthrough), S2 (Desensitization), S3 (Local) for robust data privacy.
  • Dual Detection Engines: Rule-based and LLM-based detection for comprehensive sensitivity analysis.
  • Cost-Aware Routing: Optimizes cloud model usage based on task complexity.
  • Composable Pipeline: Extensible router system with customizable hooks.
  • Prompt Caching: Reduces latency for repeated requests.
  • Zero Code Changes: Acts as a drop-in replacement for OpenClaw, requiring no modifications to existing business logic.
  • Extensible Configuration: Allows customization of detection rules, agent behavior, and routing logic through JSON and Markdown files.

EdgeClaw is designed for developers and organizations seeking to leverage the power of AI agents while maintaining strict data privacy and optimizing operational costs.
