Gemma 4 E4B-it (Multimodal Edge)

google/gemma-4-E4B-it

A Google DeepMind multimodal instruction-tuned model with 4.5B effective parameters and a 128K-token context window, accepting text, image, and audio inputs. It supports native function calling and configurable thinking modes, and is released under the Apache 2.0 license.
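Native function calling means the model can emit structured tool calls against a declared schema. A minimal sketch of such a declaration, using a generic JSON-schema style (`get_weather` and its parameters are illustrative placeholders, not part of this model's API):

```python
# Hypothetical tool declaration; the exact format a given runtime expects
# may differ -- this only illustrates the general shape of a tool schema.
get_weather_tool = {
    "name": "get_weather",  # illustrative function name
    "description": "Look up current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}

# A tool-calling runtime passes this schema alongside the prompt and parses
# a structured call such as {"name": "get_weather", "arguments": {...}}
# out of the model's response.
print(get_weather_tool["name"])
```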

How to Use

To get started, install the `transformers` library:

pip install transformers

Then, use the following snippet to load the model:

from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "google/gemma-4-E4B-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Run a short text-only generation as a smoke test
inputs = tokenizer("Explain knowledge distillation in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
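Because the model accepts images and audio alongside text, inputs are usually supplied as chat-style messages with typed content parts. A sketch of that structure (the exact part types and keys this model's processor accepts are an assumption here, and `photo.jpg` is a placeholder path):

```python
# Chat-style multimodal message list; the "type" values and file path are
# illustrative assumptions about the processor's expected input format.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "path": "photo.jpg"},  # hypothetical local image
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }
]

# With a real processor you would then render this into model inputs, e.g.:
# inputs = processor.apply_chat_template(messages, return_tensors="pt")
print(messages[0]["role"])
```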

Available Versions

| Tag / Variant | Size | Format | Download |
|---|---|---|---|
| google/gemma-4-E4B-it:Q4_K_M | 6.1GB | GGUF | Link |
| google/gemma-4-E4B-it:Q5_K_M | 6.9GB | GGUF | Link |
| google/gemma-4-E4B-it:Q8_0 | 8.8GB | GGUF | Link |
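As a rough sanity check on the sizes above, a GGUF file's size is approximately the total parameter count times the quantization's bits per weight, plus overhead for embeddings and metadata. A sketch (the bits-per-weight figures are approximate llama.cpp values, and the total parameter count is an assumption, not taken from this card):

```python
def approx_gguf_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough GGUF size estimate in GB, ignoring metadata overhead."""
    return n_params * bits_per_weight / 8 / 1e9

# Approximate bits per weight for common llama.cpp quantization types.
BPW = {"Q4_K_M": 4.85, "Q5_K_M": 5.7, "Q8_0": 8.5}

n_params = 8e9  # assumed total parameter count (not the 4.5B *effective* figure)
for tag, bpw in BPW.items():
    print(f"{tag}: ~{approx_gguf_gb(n_params, bpw):.1f} GB before overhead")
```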

Model Details

Teacher Model: google/gemma-4-E4B-base
Distillation Method: Knowledge Distillation (Logits)
Training Dataset: Flickr30k (Conceptual)
Primary Task: Multimodal Generation
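Logit-based knowledge distillation, as listed under Model Details, trains the student to match the teacher's temperature-softened output distribution, typically via a KL-divergence term. A minimal pure-Python sketch of that loss:

```python
import math

def softmax(logits, temperature=1.0):
    z = [v / temperature for v in logits]
    m = max(z)  # subtract max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in the standard distillation formulation."""
    p = softmax(teacher_logits, temperature)  # teacher target distribution
    q = softmax(student_logits, temperature)  # student distribution
    return temperature ** 2 * sum(
        pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q)
    )

# Toy logits over a 3-token vocabulary
print(kd_loss([1.5, 0.7, -0.5], [2.0, 0.5, -1.0]))
```

The loss is zero when student and teacher distributions coincide and positive otherwise; in training it is usually mixed with the ordinary cross-entropy loss on ground-truth labels.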

Performance Metrics (Example)

| Metric | Student Model | Teacher Model |
|---|---|---|
| Model Size | 6.1GB | 8.5GB |
| BLEU Score | 28.5 | 30.1 |
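The example figures above imply the student keeps most of the teacher's quality at a fraction of its size; computing the ratios directly from those numbers:

```python
student = {"size_gb": 6.1, "bleu": 28.5}
teacher = {"size_gb": 8.5, "bleu": 30.1}

size_ratio = student["size_gb"] / teacher["size_gb"]  # fraction of teacher size
bleu_retention = student["bleu"] / teacher["bleu"]    # fraction of teacher BLEU

print(f"Student is {size_ratio:.0%} of the teacher's size")
print(f"and retains {bleu_retention:.1%} of the teacher's BLEU")
```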