Similar Models

Distilled Gemma v1

sweelol/distilled-gemma-v1

A fast and efficient distilled version of Gemma, suited to general multimodal generation tasks.

How to Use

To get started, install the `transformers` library:

pip install transformers

Then, use the following snippet to load the model:

from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "sweelol/distilled-gemma-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Run a simple generation
inputs = tokenizer("Explain knowledge distillation in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Available Versions

Tag / Variant | Size | Format | Download
sweelol/distilled-gemma-v1:latest (FP16) | 3.5GB | Safetensors | Link
sweelol/distilled-gemma-v1:Q4_K_M | 2.1GB | GGUF | Link
sweelol/distilled-gemma-v1:Q5_K_M | 2.4GB | GGUF | Link

Model Details

Teacher Model

google/gemma3-14b-it

Distillation Method

Knowledge Distillation (Logits)
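Logits-based knowledge distillation trains the student to match the teacher's output distribution rather than just hard labels. A minimal sketch of the core loss, written in plain Python for clarity (a real training loop would use a framework such as PyTorch; the exact recipe used for this model is not documented here):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax: higher T flattens the distribution."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions.

    The T^2 factor keeps gradient magnitudes comparable across
    temperatures, as in Hinton et al.'s original formulation.
    """
    p = softmax(teacher_logits, temperature)  # soft targets from the teacher
    q = softmax(student_logits, temperature)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return (temperature ** 2) * kl

# Identical logits -> zero loss; diverging logits -> positive loss.
print(kd_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0]))      # 0.0
print(kd_loss([0.1, 0.2, 0.3], [2.0, 0.5, -1.0]) > 0)   # True
```

In practice this distillation term is usually combined with the standard cross-entropy loss on ground-truth tokens, weighted by a mixing coefficient.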

Training Dataset

Flickr30k (Conceptual)

Primary Task

Multimodal Generation

Performance Metrics (Example)

Metric | Student Model | Teacher Model
Model Size | 3.5GB | 8.5GB
BLEU Score | 28.5 | 30.1
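Taken at face value, these example numbers mean the student keeps most of the teacher's quality at a fraction of its size. A quick check of the ratios (values copied from the table above):

```python
# Values from the example metrics table above.
student_size, teacher_size = 3.5, 8.5   # GB
student_bleu, teacher_bleu = 28.5, 30.1

size_ratio = student_size / teacher_size        # fraction of teacher size
bleu_retention = student_bleu / teacher_bleu    # fraction of teacher BLEU

print(f"Size: {size_ratio:.0%} of the teacher")   # ~41%
print(f"BLEU retained: {bleu_retention:.1%}")     # ~94.7%
```

In other words, the student is roughly 2.4x smaller while retaining about 95% of the teacher's BLEU score on these example metrics.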