deepseek/r1-distill-qwen-7b
A 2026-native reasoning model distilled from R1. Specialized for agentic "Chain of Thought" logic on local hardware.
reasoning · agentic · distilled
A 2026-native 3B reasoning model from Hugging Face. Dual-mode `/think` and `/no_think` for agentic workflows with 64K-128K context. Fully open recipe.
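For dual-mode models like this, the reasoning mode is typically toggled by a flag in the prompt. A minimal sketch, assuming the chat template honors `/think` and `/no_think` in the system message (the repo id is a placeholder and the exact flag handling varies by model):

```python
from transformers import AutoTokenizer

# Placeholder repo id for illustration only; substitute the actual model.
tokenizer = AutoTokenizer.from_pretrained("your-org/your-3b-reasoning-model")

messages = [
    # Assumption: the chat template reads the /no_think flag from the system
    # prompt to disable the extended "Chain of Thought" trace.
    {"role": "system", "content": "/no_think"},
    {"role": "user", "content": "Summarize the deployment plan in two sentences."},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
```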
Jackrong/Qwen3.5-2B-Claude-4.6-Opus-Reasoning-Distilled-GGUF
A 2B dense architecture model fine-tuned with structured step-by-step reasoning trajectories distilled from Claude 4.6 Opus.
To get started, install the `transformers` library:
```bash
pip install transformers
```

Then, use the following snippet to load the model:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Jackrong/Qwen3.5-2B-Claude-4.6-Opus-Reasoning-Distilled-GGUF"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Your inference code here...
```

| Tag / Variant | Size | Format | Download |
|---|---|---|---|
| Jackrong/Qwen3.5-2B-Claude-4.6-Opus-Reasoning-Distilled-GGUF:Q4_K_M | 1.4GB | GGUF | Link |
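Once the model and tokenizer are loaded, a minimal generation call might look like the sketch below; the prompt text and generation settings are illustrative assumptions, not values from the model card.

```python
# Minimal inference sketch, reusing the `tokenizer` and `model` loaded above.
# Prompt and generation parameters are illustrative assumptions.
messages = [{"role": "user", "content": "Explain step by step: what is 17 * 24?"}]

# Build a chat-formatted prompt with the tokenizer's chat template.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
)

output_ids = model.generate(input_ids, max_new_tokens=512)
# Strip the prompt tokens and decode only the newly generated text.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```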
| Field | Value |
|---|---|
| Teacher / Base Model | Claude-4.6-Opus / Qwen3.5-27B |
| Distillation Method | Knowledge Distillation (Logits) |
| Training Data | Flickr30k (Conceptual) |
| Task | Multimodal Generation |
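Logit-level knowledge distillation trains the student to match the teacher's softened output distribution while still fitting the ground-truth labels. A minimal sketch of that objective is below; the temperature, weighting, and loss form are common defaults assumed for illustration, not details taken from this card.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend of KL divergence against the teacher's softened logits and
    standard cross-entropy on the hard labels. Hyperparameters here are
    illustrative assumptions; the card does not publish the training recipe."""
    # Soft targets from the (frozen) teacher, softened by the temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature**2

    # Hard-label cross-entropy on the ground-truth next tokens.
    ce = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)), labels.view(-1)
    )
    return alpha * kd + (1 - alpha) * ce
```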
| Metric | Student Model | Teacher Model |
|---|---|---|
| Model Size | 1.6GB | 8.5GB |
| BLEU Score | 28.5 | 30.1 |