unsloth/SmolLM2-1.7B-Instruct-GGUF
Hugging Face SmolLM2 fine-tuned via Unsloth. 1.7B params, Apache 2.0. Optimized for instruction-following agents in data labeling, product cataloging, and editorial generation.
Alibaba Qwen3.5 sub-1B via Unsloth Dynamic 2.0. 256K context, Apache 2.0. Optimized for lightweight function-calling agents and document parsing workflows.
Alibaba Qwen3 updated 4B instruct model. 256K native context, Apache 2.0. Optimized for instruction-following, tool-calling, and agentic workflows without CoT overhead.
HuggingFaceTB/SmolLM2-135M-Instruct-GGUF
Ultra-lightweight 135M instruct model from Hugging Face. Apache 2.0. Optimized for browser/mobile edge deployment, classification, and low-latency fallback tasks.
To get started, install the `transformers` library:
```bash
pip install transformers
```

Then, use the following snippet to load the model:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "HuggingFaceTB/SmolLM2-135M-Instruct-GGUF"
# GGUF repos load via the gguf_file argument (transformers >= 4.41); the
# filename below is an assumption -- check the repo's file list for the
# exact quantization you want.
gguf_file = "smollm2-135m-instruct-q8_0.gguf"

tokenizer = AutoTokenizer.from_pretrained(model_id, gguf_file=gguf_file)
model = AutoModelForCausalLM.from_pretrained(model_id, gguf_file=gguf_file)

# Your inference code here...
```
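Once loaded, a quick smoke test might look like the sketch below. The prompt is illustrative, and it assumes the tokenizer converted from the GGUF checkpoint carries over SmolLM2's chat template:

```python
messages = [{"role": "user", "content": "What is the capital of France?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```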
- Base model: SmolLM2-Base
- Method: Knowledge Distillation (Logits)
- Dataset: Flickr30k (Conceptual)
- Task: Multimodal Generation
| Metric | Student Model | Teacher Model |
|---|---|---|
| Model Size | 145MB | 8.5GB |
| BLEU Score | 28.5 | 30.1 |
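For context, logit-based knowledge distillation trains the small student to match the teacher's softened output distribution while still fitting the ground-truth labels. A minimal PyTorch sketch follows; the temperature `T` and mixing weight `alpha` are illustrative assumptions, not values reported for this run:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soften both distributions with temperature T; the T*T factor keeps
    # soft-target gradients on the same scale as the hard-label term.
    soft_student = F.log_softmax(student_logits / T, dim=-1)
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * (T * T)
    # Standard cross-entropy against the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    # Blend the two objectives; alpha balances imitation vs. supervision.
    return alpha * kd + (1 - alpha) * ce
```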