Phi-4-mini-Reasoning (Agentic Logic)

unsloth/Phi-4-mini-reasoning-GGUF

Microsoft's Phi-4-mini distilled for step-by-step reasoning: 3.8B parameters, 128K context, MIT license. This is Unsloth's bug-fixed GGUF build for reliable agentic tool-calling.

How to Use

To get started, install the `transformers` library along with the `gguf` package, which `transformers` requires to load GGUF checkpoints:

pip install transformers gguf

Then load a specific GGUF file from the repo. Because this repository contains quantized GGUF files rather than standard safetensors, you must pass `gguf_file` so `transformers` can dequantize the checkpoint:

from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "unsloth/Phi-4-mini-reasoning-GGUF"
# Adjust to the exact .gguf filename listed in the repo's Files tab.
gguf_file = "Phi-4-mini-reasoning-Q4_K_M.gguf"

tokenizer = AutoTokenizer.from_pretrained(model_id, gguf_file=gguf_file)
model = AutoModelForCausalLM.from_pretrained(model_id, gguf_file=gguf_file)

# Your inference code here...
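For inference, the model expects its chat markup around the prompt. In real code you should rely on `tokenizer.apply_chat_template`, but as an illustration, here is a minimal prompt-building sketch; the `<|system|>`/`<|user|>`/`<|assistant|>`/`<|end|>` tags are assumptions based on the Phi-4-mini family, not taken from this repo:

```python
# Sketch: hand-build a Phi-4-mini-style chat prompt.
# The control tags are assumptions; prefer tokenizer.apply_chat_template.
def build_prompt(system: str, user: str) -> str:
    return (
        f"<|system|>{system}<|end|>"
        f"<|user|>{user}<|end|>"
        f"<|assistant|>"
    )

prompt = build_prompt(
    "You are a careful step-by-step reasoner.",
    "What is 17 * 24? Show your work.",
)
```

The prompt ends at the `<|assistant|>` tag so that generation continues with the model's reasoning turn.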

Available Versions

| Tag / Variant | Size | Format | Download |
|---|---|---|---|
| unsloth/Phi-4-mini-reasoning-GGUF:Q4_K_M | 2.49 GB | GGUF | Link |
| unsloth/Phi-4-mini-reasoning-GGUF:Q5_K_M | 2.85 GB | GGUF | Link |
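As a sanity check when choosing a quant, you can estimate the average bits per weight from the file size and the 3.8B parameter count stated above. This is a rough estimate that ignores GGUF metadata and mixed-precision layers:

```python
def bits_per_weight(file_size_gb: float, n_params_billion: float) -> float:
    # file size in GB (10^9 bytes) -> bits, divided by parameter count
    return file_size_gb * 1e9 * 8 / (n_params_billion * 1e9)

q4 = bits_per_weight(2.49, 3.8)  # Q4_K_M: ~5.2 bits/weight
q5 = bits_per_weight(2.85, 3.8)  # Q5_K_M: 6.0 bits/weight
```

Both come out somewhat above the nominal 4- and 5-bit levels, which is expected: K-quants keep some tensors at higher precision.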

Model Details

Teacher Model

Phi-4-14B-Reasoning

Distillation Method

Knowledge Distillation (Logits)
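Logit-based (response) distillation trains the student to match the teacher's temperature-softened output distribution, typically via a KL-divergence loss scaled by T². A minimal single-position sketch in pure Python (names and the T² scaling follow the standard Hinton-style formulation, not this model's actual training code):

```python
import math

def softmax(logits, temperature=1.0):
    # Numerically stable softmax with temperature scaling.
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL(teacher || student) on softened distributions, scaled by T^2
    # so gradients keep a consistent magnitude across temperatures.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl
```

When the student exactly reproduces the teacher's logits the loss is zero; any divergence yields a positive penalty.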

Training Dataset

Synthetic chain-of-thought math reasoning data

Primary Task

Step-by-step reasoning (text generation)

Performance Metrics (Example)

| Metric | Student Model | Teacher Model |
|---|---|---|
| Model Size | 2.49 GB | 8.5 GB |
| BLEU Score | 28.5 | 30.1 |
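The trade-off in the example table can be summarized as a compression ratio and a metric-retention percentage; this is straightforward arithmetic on the illustrative numbers above:

```python
teacher_gb, student_gb = 8.5, 2.49
teacher_bleu, student_bleu = 30.1, 28.5

# Size reduction and fraction of teacher quality retained.
compression = teacher_gb / student_gb           # ~3.4x smaller
retention = 100 * student_bleu / teacher_bleu   # ~94.7% of teacher BLEU
```

In other words, the distilled student is roughly 3.4x smaller while retaining about 95% of the teacher's score on this example metric.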