Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled
A 27B powerhouse distilled from Claude 4.6 Opus. Built for deep analytical reasoning and stable agentic performance.
A highly efficient 4B parameter model from NVIDIA, optimized for low-latency on-device tasks and high-quality text generation.
A 2026-native reasoning model distilled from R1. Specialized for agentic "Chain of Thought" logic on local hardware.
The mid-2026 flagship using Engram memory architecture, specializing in 1M+ token code generation and autonomous refactoring.
Task-specialized 4B model for natural-language-to-SQL conversion. Distilled from DeepSeek-V3. Quantized GGUF for local database agents.
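A minimal sketch of how a local NL-to-SQL model like this might be wired into a database agent. The `generate_sql` stub below stands in for the actual quantized GGUF model call (e.g. via a local inference runtime); the table, query, and guardrail are illustrative assumptions, not part of the model card.

```python
import sqlite3

def generate_sql(question: str) -> str:
    # Hypothetical stand-in for the local GGUF model call;
    # here it returns a canned query for demonstration.
    return "SELECT name, price FROM products WHERE price < 500 ORDER BY price;"

def run_query(conn: sqlite3.Connection, question: str) -> list:
    sql = generate_sql(question)
    # A guardrail typical of local database agents:
    # execute read-only SELECT statements only.
    if not sql.lstrip().upper().startswith("SELECT"):
        raise ValueError("model produced a non-SELECT statement")
    return conn.execute(sql).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, price REAL)")
conn.executemany("INSERT INTO products VALUES (?, ?)",
                 [("Widget", 199.0), ("Gadget", 450.0), ("Gizmo", 899.0)])
rows = run_query(conn, "Which products cost under 500 dollars?")
```

In a real agent, `generate_sql` would prompt the model with the schema plus the user's question and return its generated SQL.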
OpenClaw-recommended general-purpose model. 7B params, 128K context, MIT license. Balanced speed/quality for daily assistant tasks, research, and multi-step reasoning.
OpenClaw-compatible open-weight 20B model. 64K+ context, Apache 2.0. Balanced performance for tool-use, memory persistence, and multi-channel agentic workflows.
Liquid AI hybrid architecture via Unsloth. 700M params, 32K context, CPU-optimized. Built for narrow-scope agentic tasks: data extraction, RAG, multi-turn workflows.
A compact chat-tuned model, distilled from LLaMA-2 for quick interactions.
Mistral AI edge-optimized 3.4B+0.4B vision model. Native function calling, JSON outputs, 256K context. Built for tool-using agentic pipelines.
Unified 3B open-source model from Boss Zhipin. 128K context, Apache 2.0. Specialized for agentic workflows, code generation, and multi-step reasoning with tool-calling support.
Microsoft Phi-4-mini distilled for edge reasoning. 3.8B params, 128K context, MIT license. Optimized for agentic tool-calling and multilingual tasks.