LoRA Gemma-3 270M

Sweelol-ai/lora-gemma3-270m-dolly

A Gemma-3 270M model fine-tuned on the Databricks Dolly-15k instruction dataset using Low-Rank Adaptation (LoRA), which trains only a small set of adapter weights instead of the full model for parameter-efficient fine-tuning.
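As background, LoRA freezes the pretrained weight matrix W and learns only a low-rank update ΔW = BA, so the number of trainable parameters scales with the rank rather than the layer size. A minimal NumPy sketch (the dimensions and rank below are illustrative, not the actual Gemma-3 shapes):

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, rank = 640, 640, 8           # illustrative sizes, not Gemma-3's
W = rng.normal(size=(d_out, d_in))        # frozen pretrained weight
A = rng.normal(size=(rank, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, rank))               # zero-initialised, so delta W starts at 0

def lora_forward(x):
    # Adapted layer output: W x + B (A x), i.e. (W + BA) x
    return W @ x + B @ (A @ x)

x = rng.normal(size=d_in)
# Before any training, B == 0, so the adapter changes nothing:
assert np.allclose(lora_forward(x), W @ x)

full_params = W.size            # 409,600 weights in the dense layer
lora_params = A.size + B.size   # 10,240 trainable adapter weights (~2.5%)
print(f"trainable: {lora_params} vs full fine-tune: {full_params}")
```

This parameter ratio is why the resulting adapter is only a few megabytes even though the base model is far larger.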

How to Use

To get started, install the `transformers` library:

pip install transformers

Then, use the following snippet to load the model:

from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Sweelol-ai/lora-gemma3-270m-dolly"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a response to an instruction-style prompt
prompt = "Explain what LoRA fine-tuning is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Available Versions

No specific variants (tags, sizes, or formats) are listed for this model.

Model Details

| Detail | Value |
| --- | --- |
| Teacher Model | google/gemma-3-270m |
| Distillation Method | Knowledge Distillation (Logits) |
| Training Dataset | Flickr30k (Conceptual) |
| Primary Task | Multimodal Generation |
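The logit-based distillation listed above typically minimises the KL divergence between temperature-softened teacher and student distributions. A minimal NumPy sketch of that objective (toy logits and temperature chosen purely for illustration):

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in the standard Hinton-style formulation."""
    p = softmax(teacher_logits / T)  # soft teacher targets
    q = softmax(student_logits / T)  # soft student predictions
    return T**2 * np.sum(p * (np.log(p) - np.log(q)))

teacher = np.array([4.0, 1.0, -2.0])
matched = teacher.copy()                 # student agrees with teacher
off = np.array([1.0, 4.0, -2.0])         # student disagrees

print(distillation_loss(matched, teacher))  # ~0: identical logits incur no loss
print(distillation_loss(off, teacher))      # positive: disagreement is penalised
```

In practice this term is combined with the ordinary cross-entropy loss on the ground-truth labels, weighted by a mixing coefficient.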

Performance Metrics (Example)

| Metric | Student Model | Teacher Model |
| --- | --- | --- |
| Model Size | ~5 MB (adapter) | 8.5 GB |
| BLEU Score | 28.5 | 30.1 |
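For reference, BLEU scores like those above are the geometric mean of modified n-gram precisions multiplied by a brevity penalty. A minimal sentence-level BLEU-4 sketch (no smoothing, so it is only meaningful when every n-gram order has at least one match; production scoring should use an established implementation such as sacreBLEU):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of clipped n-gram
    precisions times a brevity penalty. No smoothing."""
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum((cand & ref).values())  # clipped n-gram matches
        total = max(sum(cand.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # undefined geometric mean without smoothing
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    bp = 1.0 if len(candidate) >= len(reference) else \
        math.exp(1 - len(reference) / len(candidate))
    return bp * geo_mean

ref = "the cat sat on the mat".split()
print(bleu(ref, ref))  # 1.0: a perfect match scores the maximum
```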