
Hub for Distilled Open Source LLMs.

Library of Expert Models.

Harness the power of distilled open-source LLMs to build, train, and deploy cutting-edge AI, all within this hub.


The future of AI is personal, and its foundation is open source. At Swee.LOL, we are on a mission to democratize state-of-the-art AI by taking it out of the cloud and putting it onto the devices you use every day. We are an open-source research hub dedicated to creating "Personal LLMs": a library of compact, powerful, and specialized models, distilled and fine-tuned for maximum efficiency in edge and other resource-constrained environments.
• Core advantages: open-source models offer transparency, customizability, and freedom from vendor lock-in, and running them on-device keeps your data under your control.
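The distillation behind these compact models can be illustrated with the classic soft-target loss: a small student model is trained to match a teacher's temperature-softened output distribution. The sketch below is purely illustrative (the temperature, logits, and loss form are generic examples, not Swee.LOL's actual training recipe):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities, optionally softened by a temperature."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student distributions."""
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student whose logits track the teacher's incurs a smaller loss.
teacher = [4.0, 1.0, 0.5]
close_student = [3.9, 1.1, 0.4]
far_student = [0.5, 4.0, 1.0]
assert distillation_loss(teacher, close_student) < distillation_loss(teacher, far_student)
```

Minimizing this divergence (usually blended with the ordinary training loss) is what lets a much smaller model inherit most of the teacher's behavior.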

Hub

Home of Machine Learning

The easiest way to build, share, and discover distilled machine learning models.
Join the community and bring your ML projects to life.

Tasks

Problem solvers

Thousands of creators, one community—building the future of Audio, Vision, and Language with AI.

Stay tuned.

Open Source

Hugging Face transformers

Hugging Face Transformers is our natural language processing library, and our hub is now open to all ML models, with support from libraries like Flair, Asteroid, ESPnet, Pyannote, and more to come.

jane@sweelol:~
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained(
    "sweelol-kd-gemma3"
)
model = AutoModelForMaskedLM.from_pretrained(
    "sweelol-kd-gemma3"
)

On demand

Inference API

We are working on an API interface that simplifies access to the power of Google Cloud TPU accelerators, allowing you to efficiently run large-scale AI inference, fine-tune foundation models, and manage AI training workloads of any size, from small-scale experiments to massive distributed runs.
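As a rough illustration of what a call to such an API might look like, the sketch below assembles a JSON request body for a fill-mask query. The field names, parameters, and endpoint conventions are hypothetical assumptions, since the interface is still being designed:

```python
import json

def build_inference_request(model_id, inputs, parameters=None):
    """Assemble a JSON body for a hypothetical fill-mask inference call.

    The "model"/"inputs"/"parameters" field names are illustrative only;
    the real API schema has not been published yet.
    """
    payload = {"model": model_id, "inputs": inputs}
    if parameters:
        payload["parameters"] = parameters
    return json.dumps(payload)

body = build_inference_request(
    "sweelol-kd-gemma3",
    "The goal of life is [MASK].",
    parameters={"top_k": 5},
)
print(body)
```

The resulting JSON string would then be POSTed to the inference endpoint once the service is live.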

Fill-Mask
Examples
Mask token: [MASK]
Computation time on Intel Xeon 3rd Gen Scalable CPU: cached
happiness   0.036
survival    0.031
salvation   0.017
freedom     0.017
unity       0.015

Token Classification
Computation time on Intel Xeon 3rd Gen Scalable CPU: cached
My name is Clara [PER] and I live in Berkeley [LOC], California [LOC]. I work at this cool company called Swee.LOL [ORG].
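The completions listed in the Fill-Mask widget are simply the top-k predictions ranked by score. A minimal sketch, reusing the scores from the demo above (the extra low-scoring candidate is an illustrative addition):

```python
def top_k(predictions, k=5):
    """Return the k highest-scoring (token, score) pairs, best first."""
    return sorted(predictions, key=lambda p: p[1], reverse=True)[:k]

# Scores as shown in the widget, plus one made-up low-probability candidate.
scores = [
    ("happiness", 0.036), ("survival", 0.031), ("salvation", 0.017),
    ("freedom", 0.017), ("unity", 0.015), ("money", 0.004),
]
print(top_k(scores))  # the lowest-scoring candidate is dropped
```

In a real pipeline these scores come from a softmax over the model's vocabulary logits at the masked position.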
Science

Our Research contributions

We’re on a journey to advance and democratize NLP for everyone. Along the way, we contribute to the development of technology for the better.

🌸

T0

Multitask Prompted Training Enables Zero-Shot Task Generalization

Open-source state-of-the-art zero-shot language model out of BigScience.


🐎

Gemma3

A distilled version of Gemma3: smaller, faster, cheaper, and lighter

A smaller, faster, lighter, cheaper version of Gemma3 obtained via model distillation.

Read more

📚

HMTL

Hierarchical Multi-Task Learning

Learning embeddings from semantic tasks for multi-task learning. We have open-sourced code and a demo.

Read more

🐸

Dynamical Language Models

Meta-learning for language modeling

A meta learner is trained via gradient descent to continuously and dynamically update language model weights.

Read more
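The idea above can be sketched with a toy scalar weight and a quadratic loss standing in for a language model: the weights are updated online by gradient descent as new data arrives. Everything here (the loss, the learning rate, the scalar weight) is an illustrative stand-in, not the actual meta-learning setup:

```python
def sgd_updates(w, grads, lr):
    """Apply a sequence of gradient updates with step size lr."""
    for g in grads:
        w = w - lr * g
    return w

def loss(w, target=3.0):
    """Toy quadratic loss standing in for a language-model objective."""
    return (w - target) ** 2

def grad(w, target=3.0):
    """Gradient of the toy loss with respect to w."""
    return 2 * (w - target)

# Continuously updating the weight as "data" streams in drives the loss down.
w = 0.0
for _ in range(20):
    w = sgd_updates(w, [grad(w)], lr=0.1)
assert loss(w) < loss(0.0)
```

In the meta-learning setting, the update rule itself (not just the weights) is what gets trained, so the model keeps adapting its language-model weights dynamically at inference time.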

🤖

State of the art

Neuralcoref

Our open-source library for coreference resolution. You can train it on your own dataset and language.

Read more

🦄

Auto-complete your thoughts

Write with Transformers

This web app is the official demo of the Transformers repository's text generation capabilities.

Start writing