# Machine Specs
Docket analyzes your computer's hardware to help you choose the right models.
## AI Capability Score

Click the score badge in the upper right corner of the Models page to see your computer's AI capability rating. This score helps you understand what size models your system can handle.
## Understanding Your Score

The Machine Specs modal shows:
- AI Capability Score — A rating from 0-100 with labels like "Excellent", "Good", or "Limited"
- Max recommended — The largest model size your system can comfortably run
- Hardware tags — Features like Metal (Apple GPU acceleration) or Unified Memory
- Recommended Model Sizes — Specific size tiers with suggested quantizations and memory estimates
## Score Ratings
| Score | Rating | Max Model Size |
|---|---|---|
| 80-100 | Excellent | 14B+ parameters |
| 60-79 | Good | 7-13B parameters |
| 40-59 | Moderate | 3-7B parameters |
| 20-39 | Limited | 1-3B parameters |
| 0-19 | Minimal | < 1B parameters |
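Docket's actual scoring logic is internal, but the table above can be read as a simple threshold lookup. The sketch below is illustrative only; the thresholds and labels are taken directly from the documented table, not from Docket's source.

```python
# Illustrative sketch: map a 0-100 AI Capability Score to its rating tier.
# Thresholds mirror the documented score table; Docket's real scoring is internal.

def rating_for_score(score: int) -> tuple[str, str]:
    """Return (rating, max recommended model size) for a 0-100 score."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    tiers = [
        (80, "Excellent", "14B+ parameters"),
        (60, "Good", "7-13B parameters"),
        (40, "Moderate", "3-7B parameters"),
        (20, "Limited", "1-3B parameters"),
        (0, "Minimal", "< 1B parameters"),
    ]
    for threshold, rating, max_size in tiers:
        if score >= threshold:
            return rating, max_size

print(rating_for_score(85))  # ('Excellent', '14B+ parameters')
print(rating_for_score(45))  # ('Moderate', '3-7B parameters')
```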
## Recommended Model Sizes
The modal lists model sizes your computer can run, showing:
- Size tier — 14B+, 13B, 7B, 3B, etc.
- Best tag — Indicates the optimal size for your hardware
- Quantizations — Recommended quantization levels (Q4_K_M, Q5_K_M, Q8_0)
- Memory estimate — Approximate RAM required
On systems with unified memory (like Apple Silicon Macs), larger models run more efficiently because the CPU and GPU share a single memory pool, so model weights don't have to be copied between separate CPU and GPU memory.
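The memory estimates in the modal can be approximated from a model's parameter count and quantization level. The sketch below uses approximate bits-per-weight figures that are common community conventions for these quantization formats; they are assumptions for illustration, not values published by Docket, and real usage also depends on context length and runtime overhead.

```python
# Rough back-of-the-envelope RAM estimate for a quantized model.
# Bits-per-weight values are approximate community figures for these
# quantization formats (assumptions, not Docket's published numbers).

BITS_PER_WEIGHT = {
    "Q4_K_M": 4.8,
    "Q5_K_M": 5.7,
    "Q8_0": 8.5,
}

def estimated_ram_gb(params_billions: float, quant: str,
                     overhead_gb: float = 1.0) -> float:
    """Approximate RAM: quantized weights plus a flat runtime/KV-cache overhead."""
    bytes_per_weight = BITS_PER_WEIGHT[quant] / 8
    weights_gb = params_billions * bytes_per_weight  # billions of weights -> GB
    return round(weights_gb + overhead_gb, 1)

print(estimated_ram_gb(7, "Q4_K_M"))  # 5.2
print(estimated_ram_gb(13, "Q8_0"))   # 14.8
```

This is why the modal pairs each size tier with suggested quantizations: a 13B model at Q4_K_M can fit where the same model at Q8_0 would not.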
## Exporting Specs
Click the Export button at the bottom of the Machine Specs modal to save a detailed breakdown of your system's specifications to your Files. This is useful for:
- Troubleshooting performance issues
- Sharing specs when asking for help
- Comparing capabilities across different machines
## Improving Performance
If your score is lower than expected:
- Close other applications — Free up RAM for model loading
- Use smaller quantizations — Q4_K_M instead of Q8_0
- Try smaller models — A fast 7B model often beats a slow 14B model
- Load models from your SSD — Models stored in Local Models load faster than those on external or network drives