Our Models
State-of-the-art language models fine-tuned for African languages, optimized for efficiency and cultural relevance.
Swahili Gemma 1B
Fine-tuned Gemma 3 1B Instruction Model
Overview
A fine-tuned Gemma 3 1B instruction model specialized for English-to-Swahili translation and Swahili conversational AI. It accepts input in both English and Swahili but responds exclusively in Swahili.
Performance Metrics
Capabilities
• English-to-Swahili translation
• Swahili conversational AI
• Text summarization in Swahili
• Question answering in Swahili
• Creative writing in Swahili
Key Achievements
✓ Highest BLEU-to-parameter ratio among comparable models
✓ Outperforms Gemma 3 4B (4x larger) by 153% on BLEU score
✓ Achieves 94% of Gemma 3 27B performance with 27x fewer parameters
Usage Example
from transformers import pipeline

# Load the model
pipe = pipeline("text-generation", model="CraneAILabs/swahili-gemma-1b")

# Translate English to Swahili
result = pipe("Translate to Swahili: Hello, how are you?")
print(result[0]["generated_text"])
Ganda Gemma 1B
Fine-tuned Gemma 3 1B Instruction Model
Overview
A fine-tuned Gemma 3 1B instruction model specialized for English-to-Luganda translation and Luganda conversational AI. It accepts input in both English and Luganda but responds exclusively in Luganda.
Performance Metrics
Capabilities
• English-to-Luganda translation
• Luganda conversational AI
• Text summarization in Luganda
• Question answering in Luganda
• Luganda language research
Key Achievements
✓ Highest efficiency ratio among compared models
✓ Outperforms Gemma 3 4B (4x larger) by 535% on BLEU score
✓ Outperforms GPT-4 Mini by 36% on BLEU score
Usage Example
from transformers import pipeline

# Load the model
pipe = pipeline("text-generation", model="CraneAILabs/ganda-gemma-1b")

# Translate English to Luganda
result = pipe("Translate to Luganda: Hello, how are you?")
print(result[0]["generated_text"])
Model Variants
GGUF Format
Optimized for CPU inference with llama.cpp
LiteRT Format
Optimized for on-device mobile inference
MLX Format
Optimized for Apple Silicon (M-series chips)
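As a brief sketch of the GGUF path, either model can be run on CPU with llama.cpp's llama-cli. The filename below is a placeholder for whichever GGUF quantization you download; it is not a confirmed release artifact name.

```shell
# Run a local GGUF build on CPU with llama.cpp
# (swahili-gemma-1b.Q4_K_M.gguf is a placeholder for your downloaded file)
llama-cli -m swahili-gemma-1b.Q4_K_M.gguf \
  -p "Translate to Swahili: Hello, how are you?" \
  -n 64
```

The `-n` flag caps the number of generated tokens, which keeps short translation prompts from running on into extra text.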