Gemma 3 1B vs Llama 3.2 1B: which runs better on your device? A side-by-side comparison of quantization sizes, throughput, and device compatibility.
| Quant | Gemma 3 1B | Llama 3.2 1B | Smaller model |
|---|---|---|---|
| FP16 | 2.2 GB | 2.7 GB | Gemma 3 1B smaller |
| Q8 | 1.3 GB | 1.6 GB | Gemma 3 1B smaller |
| Q6 | 0.9 GB | 1.1 GB | Gemma 3 1B smaller |
| Q5 | 0.8 GB | 1.0 GB | Gemma 3 1B smaller |
| Q4 | 0.7 GB | 0.8 GB | Gemma 3 1B smaller |
| Q3 | 0.5 GB | 0.6 GB | Gemma 3 1B smaller |
| Q2 | 0.4 GB | 0.5 GB | Gemma 3 1B smaller |
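The sizes above follow a simple rule of thumb: on-disk size ≈ parameter count × bits per weight ÷ 8, plus some overhead for embeddings and metadata. A minimal sketch of that arithmetic (the ~1.0B and ~1.24B parameter counts are approximations I'm assuming, not figures from this page):

```python
def quant_size_gb(params: float, bits_per_weight: float, overhead: float = 1.1) -> float:
    """Rough on-disk size in GB for a quantized model.

    params: number of weights; bits_per_weight: e.g. 16 for FP16, 4 for Q4.
    overhead approximates metadata and tensors stored at higher precision.
    """
    bytes_total = params * bits_per_weight / 8 * overhead
    return round(bytes_total / 1e9, 1)

# Assumed approximate parameter counts (not from the table above).
GEMMA_3_1B = 1.0e9
LLAMA_32_1B = 1.24e9

print(quant_size_gb(GEMMA_3_1B, 16))   # FP16 estimate for Gemma 3 1B
print(quant_size_gb(LLAMA_32_1B, 16))  # FP16 estimate for Llama 3.2 1B
```

At FP16 this lines up with the table (2.2 GB and 2.7 GB); at lower quants real files come out somewhat larger than the naive formula because embeddings and some layers typically stay at higher precision.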
| Device | Gemma 3 1B | Llama 3.2 1B |
|---|---|---|
| 💻 MacBook Air M4 macOS | Runs great FP16 · ~55 tok/s | Runs great FP16 · ~44 tok/s |
| 💻 MacBook Air M3 macOS | Runs great FP16 · ~45 tok/s | Runs great FP16 · ~37 tok/s |
| 💻 MacBook Air M2 macOS | Runs great FP16 · ~45 tok/s | Runs great FP16 · ~37 tok/s |
| 💻 MacBook Pro M4 Pro macOS | Runs great FP16 · ~124 tok/s | Runs great FP16 · ~101 tok/s |
| 💻 MacBook Air M1 macOS | Runs great FP16 · ~31 tok/s | Runs great FP16 · ~25 tok/s |
| 💻 MacBook Pro M1 macOS | Runs great FP16 · ~31 tok/s | Runs great FP16 · ~25 tok/s |
| 💻 MacBook Pro M1 Pro macOS | Runs great FP16 · ~91 tok/s | Runs great FP16 · ~74 tok/s |
| 💻 MacBook Pro M1 Max macOS | Runs great FP16 · ~182 tok/s | Runs great FP16 · ~148 tok/s |
| 💻 MacBook Pro M2 Pro macOS | Runs great FP16 · ~91 tok/s | Runs great FP16 · ~74 tok/s |
| 💻 MacBook Pro M2 Max macOS | Runs great FP16 · ~182 tok/s | Runs great FP16 · ~148 tok/s |
| 💻 MacBook Pro M3 Pro macOS | Runs great FP16 · ~68 tok/s | Runs great FP16 · ~56 tok/s |
| 💻 MacBook Pro M3 Max macOS | Runs great FP16 · ~182 tok/s | Runs great FP16 · ~148 tok/s |
| 📱 iPhone 16 Pro iOS | Runs well FP16 · ~22 tok/s | Runs well FP16 · ~18 tok/s |
| 📱 iPhone 15 iOS | Tight fit FP16 · ~13 tok/s | Tight fit FP16 · ~11 tok/s |
| 📱 Galaxy S25 Ultra Android | Runs great FP16 · ~24 tok/s | Runs great FP16 · ~20 tok/s |
| 📱 Galaxy S24 Android | Runs well FP16 · ~19 tok/s | Tight fit FP16 · ~16 tok/s |
| 📱 Pixel 9 Pro Android | Runs great FP16 · ~22 tok/s | Runs great FP16 · ~18 tok/s |
| 🎮 Steam Deck OLED Linux | Runs great FP16 · ~40 tok/s | Runs great FP16 · ~33 tok/s |
| 🖥️ Gaming PC (RTX 4070) Windows | Runs great FP16 · ~200 tok/s | Runs great FP16 · ~187 tok/s |
| 🖥️ Gaming PC (RTX 3060) Windows | Runs great FP16 · ~164 tok/s | Runs great FP16 · ~133 tok/s |
| 🖥️ Gaming PC (RTX 4080) Windows | Runs great FP16 · ~200 tok/s | Runs great FP16 · ~200 tok/s |
| 🖥️ Gaming PC (RTX 4090) Windows | Runs great FP16 · ~200 tok/s | Runs great FP16 · ~200 tok/s |
| 🤖 Atom 1 Linux | Runs great FP16 · ~93 tok/s | Runs great FP16 · ~76 tok/s |
| 📱 iPad Pro M4 iOS | Runs great FP16 · ~38 tok/s | Runs great FP16 · ~31 tok/s |
| 🖥️ Mac Mini M1 macOS | Runs great FP16 · ~31 tok/s | Runs great FP16 · ~25 tok/s |
| 🖥️ Mac Mini M2 macOS | Runs great FP16 · ~45 tok/s | Runs great FP16 · ~37 tok/s |
| 🖥️ Mac Mini M2 Pro macOS | Runs great FP16 · ~91 tok/s | Runs great FP16 · ~74 tok/s |
| 🖥️ Mac Mini M4 macOS | Runs great FP16 · ~55 tok/s | Runs great FP16 · ~44 tok/s |
| 🖥️ Mac Mini M4 Pro macOS | Runs great FP16 · ~124 tok/s | Runs great FP16 · ~101 tok/s |
| 🖥️ Mac Studio M4 Max macOS | Runs great FP16 · ~200 tok/s | Runs great FP16 · ~200 tok/s |
| 🖥️ Mac Pro M2 Ultra macOS | Runs great FP16 · ~200 tok/s | Runs great FP16 · ~200 tok/s |
| 💻 Snapdragon X Elite Laptop Windows | Runs great FP16 · ~62 tok/s | Runs great FP16 · ~50 tok/s |
| 📱 OnePlus 13 Android | Runs great FP16 · ~24 tok/s | Runs great FP16 · ~20 tok/s |
| 🍓 Raspberry Pi 5 Linux | Runs great FP16 · ~15 tok/s | Runs great FP16 · ~12 tok/s |
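To translate the tok/s figures above into felt latency, divide the expected response length by the decode rate. A quick sketch (the 55 tok/s value is taken from the MacBook Air M4 row; the 300-token response length is an illustrative assumption):

```python
def seconds_for_response(tokens: int, toks_per_sec: float) -> float:
    """Wall-clock seconds to generate `tokens` at a steady decode rate.

    Ignores prompt-processing (prefill) time, which adds an extra delay
    before the first token appears on longer prompts.
    """
    return round(tokens / toks_per_sec, 1)

# A ~300-token answer on a MacBook Air M4 running Gemma 3 1B (~55 tok/s):
print(seconds_for_response(300, 55))  # about 5.5 seconds
```

This is why the difference between, say, 55 and 44 tok/s matters less in practice than the gap between 55 and 13: both ends of a chat turn still land within a few seconds on any "Runs great" device.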
Both models run on every device tested. Llama 3.2 1B offers a larger context window (128K vs 32K tokens) and slightly more parameters, which may yield better output quality, while Gemma 3 1B is lighter on resources: at its lowest quant it needs only 0.4 GB versus 0.5 GB for Llama 3.2 1B, which matters on memory-constrained devices.