Sweelol-ai/pt-gemma3-270m-dolly
A Gemma-3 270M model adapted to the Dolly-15k dataset using Prompt Tuning, the most memory-efficient PEFT method.
gemma3 · prompt-tuning · peft
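For orientation, here is a minimal sketch of how a Prompt Tuning adapter for this base model is typically set up with the `peft` library; the number of virtual tokens and the init text are illustrative assumptions, not the exact recipe used for this checkpoint.

```python
# Illustrative only: typical Prompt Tuning setup for gemma-3-270m with peft.
# num_virtual_tokens and the init text are assumed values, not this checkpoint's exact recipe.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

base_id = "google/gemma-3-270m"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)

peft_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    prompt_tuning_init=PromptTuningInit.TEXT,
    prompt_tuning_init_text="Answer the instruction:",  # hypothetical init text
    num_virtual_tokens=16,                              # hypothetical value
    tokenizer_name_or_path=base_id,
)
model = get_peft_model(base_model, peft_config)
model.print_trainable_parameters()  # only the soft-prompt embeddings are trainable
```

Because only the soft-prompt embeddings are updated, the trainable parameter count stays in the tens of thousands, which is what makes Prompt Tuning so memory-efficient.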
A Gemma-3 270M model, pruned for efficiency and then fully fine-tuned on the Dolly-15k instruction dataset.
A Gemma-3 270M model fully fine-tuned on the Dolly-15k dataset, intended to be used as a "teacher" for knowledge distillation.
Sweelol-ai/lora-gemma3-270m-dolly
A Gemma-3 270M model fine-tuned on the Dolly-15k dataset using Low-Rank Adaptation (LoRA) for maximum efficiency.
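For comparison, a minimal sketch of a typical LoRA setup with `peft`; the rank, alpha, and target modules shown are assumed values rather than this checkpoint's exact configuration.

```python
# Illustrative only: typical LoRA setup for gemma-3-270m with peft.
# r, lora_alpha, and target_modules are assumed values, not the checkpoint's exact configuration.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("google/gemma-3-270m")

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # hypothetical rank
    lora_alpha=16,                        # hypothetical scaling
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the low-rank adapter matrices are trainable
```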
To get started, install the `transformers` library:
```
pip install transformers
```

Then, use the following snippet to load the model:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "Sweelol-ai/lora-gemma3-270m-dolly"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Your inference code here...
```
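One possible way to fill in the inference step, continuing directly from the snippet above; the prompt text and generation settings are placeholders.

```python
# Illustrative inference example; prompt and generation settings are placeholders.
prompt = "Explain what instruction tuning is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```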
| Tag / Variant | Size | Format | Download |
|---|---|---|---|
| No specific variants listed for this model. | | | |
Base model: Google/gemma-3-270m
Method: Knowledge Distillation (Logits); a loss sketch follows the results table below
Dataset: Flickr30k (Conceptual)
Task: Multimodal Generation
| Metric | Student Model | Teacher Model |
|---|---|---|
| Model Size | ~5MB (Adapter) | 8.5GB |
| BLEU Score | 28.5 | 30.1 |
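For context on the logit-based distillation referenced above, here is a minimal sketch of the standard soft-target/hard-target loss; the temperature and weighting are illustrative assumptions, and the teacher/student forward passes are omitted.

```python
# Illustrative logit-based knowledge distillation loss (standard KL formulation).
# temperature and alpha are assumed values; teacher/student wiring is schematic.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    # Soft targets: push the student's distribution toward the teacher's softened distribution.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: ordinary cross-entropy against the ground-truth tokens.
    hard_loss = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,
    )
    return alpha * soft_loss + (1 - alpha) * hard_loss
```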