Sweelol-ai/kd-gemma3-pruned-dolly
A highly optimized model, first pruned for size and then knowledge-distilled from a larger teacher on the Dolly-15k dataset.
gemma3 · knowledge-distillation · pruned
A Gemma-3 270M model fully fine-tuned on the Dolly-15k dataset, intended to be used as a "teacher" for knowledge distillation.
The base version of Gemma-3 270M with weights pruned for efficiency. This is the starting point for fine-tuning.
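As a rough illustration of what "pruned for efficiency" can mean in practice, the sketch below applies magnitude pruning with PyTorch's built-in utilities; the base checkpoint name and the 30% sparsity ratio are assumptions, not the recipe used to produce these weights:

```python
# Hypothetical magnitude-pruning sketch; not the exact recipe behind these checkpoints.
import torch.nn as nn
import torch.nn.utils.prune as prune
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("google/gemma-3-270m")  # assumed base model id

for module in model.modules():
    if isinstance(module, nn.Linear):
        # Zero out the 30% smallest-magnitude weights in each linear layer (placeholder ratio).
        prune.l1_unstructured(module, name="weight", amount=0.3)
        # Bake the mask into the weights and drop the pruning reparameterization.
        prune.remove(module, "weight")
```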
sweelol/distilled-gemma-v1
A fast and efficient distilled version of Gemma, great for general tasks.
To get started, install the `transformers` library:
```
pip install transformers
```

Then, use the following snippet to load the model:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "sweelol/distilled-gemma-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Your inference code here...
```
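Continuing from the snippet above, a minimal generation call might look like the following; the prompt and `max_new_tokens` value are illustrative placeholders, not recommended settings:

```python
# Minimal generation example (prompt and decoding settings are placeholders).
inputs = tokenizer("Explain knowledge distillation in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```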
Distillation setup:

- Teacher model: google/gemma3-14b-it
- Method: Knowledge Distillation (Logits)
- Dataset: Flickr30k (Conceptual)
- Task: Multimodal Generation
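Logits-based distillation trains the student to match the teacher's output distribution. The function below is a generic sketch of that objective (softened-softmax KL divergence blended with the usual cross-entropy); the temperature, mixing weight, and helper name are placeholders, not code from this repository:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """Hypothetical logits-distillation loss: KL between softened teacher and student
    distributions, blended with the standard cross-entropy against the labels."""
    # Soften both distributions with the temperature.
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)

    # KL divergence, scaled by T^2 as in Hinton et al. (2015).
    kd = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * temperature**2

    # Ordinary cross-entropy on the ground-truth labels.
    ce = F.cross_entropy(student_logits.view(-1, student_logits.size(-1)), labels.view(-1))

    return alpha * kd + (1 - alpha) * ce
```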
| Metric | Student Model | Teacher Model |
|---|---|---|
| Model Size | 3.5GB | 8.5GB |
| BLEU Score | 28.5 | 30.1 |
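The BLEU comparison above could be reproduced with a standard corpus-level scorer once both models' outputs are collected; the snippet below uses sacreBLEU as an assumed choice, with placeholder strings in place of real generations and references:

```python
import sacrebleu

# Hypothetical evaluation sketch; the actual harness behind the table is not specified.
predictions = ["a dog runs across the grass"]            # placeholder model outputs
references = [["a dog is running through the grass"]]    # one reference stream, parallel to predictions

score = sacrebleu.corpus_bleu(predictions, references).score  # 0-100 scale, as in the table
print(f"BLEU: {score:.1f}")
```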