
Machine Specs

Docket analyzes your computer's hardware to help you choose the right models.

AI Capability Score

Click the score badge in the upper right corner of the Models page to see your computer's AI capability rating. This score helps you understand what size models your system can handle.

Understanding Your Score

The Machine Specs modal shows:

  • AI Capability Score — A rating from 0-100 with labels like "Excellent", "Good", or "Limited"
  • Max recommended — The largest model size your system can comfortably run
  • Hardware tags — Features like Metal (Apple GPU acceleration) or Unified Memory
  • Recommended Model Sizes — Specific size tiers with suggested quantizations and memory estimates

Score Ratings

| Score  | Rating    | Max Model Size   |
|--------|-----------|------------------|
| 80-100 | Excellent | 14B+ parameters  |
| 60-79  | Good      | 7-13B parameters |
| 40-59  | Moderate  | 3-7B parameters  |
| 20-39  | Limited   | 1-3B parameters  |
| 0-19   | Minimal   | < 1B parameters  |
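The score thresholds above can be sketched as a small lookup. This is an illustrative Python snippet (the function name and return shape are my own; only the tier boundaries come from the table):

```python
# Sketch: map an AI Capability Score (0-100) to its rating tier,
# using the thresholds from the table above.
def rating_for_score(score: int) -> tuple[str, str]:
    """Return (rating, max model size) for a capability score."""
    tiers = [
        (80, "Excellent", "14B+ parameters"),
        (60, "Good", "7-13B parameters"),
        (40, "Moderate", "3-7B parameters"),
        (20, "Limited", "1-3B parameters"),
        (0, "Minimal", "< 1B parameters"),
    ]
    for threshold, rating, max_size in tiers:
        if score >= threshold:
            return rating, max_size
    raise ValueError("score must be between 0 and 100")

print(rating_for_score(72))  # ('Good', '7-13B parameters')
```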

Choosing the Right Model

RAM Requirements

As a rule of thumb, you need the model's size in RAM plus 2-4 GB for the operating system and Docket.

| Your RAM | Recommended Model Size                    |
|----------|-------------------------------------------|
| 4 GB     | 1-3B models only                          |
| 8 GB     | Up to 7B (Q4 quantization)                |
| 16 GB    | Up to 14B comfortably                     |
| 32 GB+   | Larger models, or multiple models at once |
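The rule of thumb above amounts to simple subtraction. A minimal sketch, assuming a fixed 3 GB midpoint for the 2-4 GB overhead (the function name is hypothetical, not part of Docket):

```python
# Sketch of the rule of thumb above: subtract 2-4 GB of OS/app
# overhead from total RAM to estimate headroom for a model.
def model_headroom_gb(total_ram_gb: float, overhead_gb: float = 3.0) -> float:
    """Estimate RAM left for a model after OS and Docket overhead."""
    return max(total_ram_gb - overhead_gb, 0.0)

print(model_headroom_gb(8))   # 5.0 -> fits a ~4.5 GB 7B Q4 model
print(model_headroom_gb(16))  # 13.0 -> room for a quantized 14B model
```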

By Use Case

| Task                    | Recommended                                  |
|-------------------------|----------------------------------------------|
| Quick Q&A, simple tasks | 3B-7B models                                 |
| Coding assistance       | 7B Coder models (Qwen Coder, DeepSeek Coder) |
| Complex reasoning       | 13B-14B models                               |
| Creative writing        | 7B-14B models                                |
| Image analysis          | Vision models (Qwen VL, LLaVA)               |

Quantization Guide

When downloading models, you'll choose a quantization level. Here's what to pick:

| Quantization | Quality    | Size (7B) | Choose If...                                 |
|--------------|------------|-----------|----------------------------------------------|
| Q8_0         | Excellent  | ~8 GB     | You have 16GB+ RAM and want the best quality |
| Q5_K_M       | Very Good  | ~5.5 GB   | You have 12GB+ RAM and want better quality   |
| Q4_K_M       | Good       | ~4.5 GB   | Best for most users (Docket default)         |
| Q3_K_M       | Acceptable | ~3.5 GB   | Limited RAM (4-6 GB available)               |
| Q2_K         | Reduced    | ~2.5 GB   | Very limited hardware only                   |
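Since quantized model size scales roughly linearly with parameter count, the 7B sizes in the table can be scaled to other tiers. A rough sketch (the helper and its lookup dictionary are illustrative, not part of Docket):

```python
# Sketch: scale the approximate 7B sizes from the table above to
# other model sizes (quantized size grows roughly linearly with
# parameter count).
SIZE_7B_GB = {
    "Q8_0": 8.0,
    "Q5_K_M": 5.5,
    "Q4_K_M": 4.5,
    "Q3_K_M": 3.5,
    "Q2_K": 2.5,
}

def estimated_size_gb(params_billion: float, quant: str) -> float:
    """Rough download/RAM footprint for a model at a given quantization."""
    return round(SIZE_7B_GB[quant] * params_billion / 7, 1)

print(estimated_size_gb(14, "Q4_K_M"))  # ~9.0 GB
print(estimated_size_gb(3, "Q4_K_M"))   # ~1.9 GB
```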
Tip: Q4_K_M is the sweet spot for most users. It offers an excellent quality-to-size ratio and is what Docket's pre-installed models use.

Speed vs Quality

  • Smaller quantizations (Q3, Q4) run faster but may have slightly lower quality
  • Larger quantizations (Q5, Q8) have better quality but use more RAM and run slower
  • For coding and factual tasks, lower quantization is usually fine
  • For creative writing, you might prefer higher quality

Recommended Model Sizes

The Machine Specs modal lists the model sizes your computer can run, showing:

  • Size tier — 14B+, 13B, 7B, 3B, etc.
  • Best tag — Indicates the optimal size for your hardware
  • Quantizations — Recommended quantization levels
  • Memory estimate — Approximate RAM required

Unified Memory Advantage

On systems with unified memory (like Apple Silicon Macs), larger models can run more efficiently because the CPU and GPU share the same memory pool.

Exporting Specs

Click the Export button at the bottom of the Machine Specs modal to save a detailed breakdown of your system's specifications to your Files. This is useful for:

  • Troubleshooting performance issues
  • Sharing specs when asking for help
  • Comparing capabilities across different machines

Improving Performance

If your score is lower than expected:

  • Close other applications — Free up RAM for model loading
  • Use smaller quantizations — Q4_K_M instead of Q8_0
  • Try smaller models — A fast 7B model often beats a slow 14B model
  • Use your computer's SSD — Load models from Local Models for faster loading
