Mixtral 8x22B (Mistral): Sparse mixture-of-experts (MoE) open model delivering top cost/performance among community LLMs; widely fine-tuned and quantized. Tags: Open-source Models, Apache-2.0, Mistral, Mixtral 8x22B
DeepSeek-R1 (and distilled checkpoints): Open, MIT-licensed reasoning model with distilled 1.5B–70B checkpoints that run locally; known for chain-of-thought quality. Tags: Open-source Models, DeepSeek-R1, distilled, local
RWKV-7 (family): Attention-free, RNN-style open models with transformer-level performance and constant-memory inference. Tags: Open-source Models, linear time, local, open source
Llama 3.2 (Meta): Family of open-weight models (1B–90B, some with vision) designed for local and edge deployment, with strong instruction following. Tags: Open-source Models, edge AI, Llama 3.2, local inference
QwQ-32B (Qwen reasoning): Open-weight 32B RL-trained reasoning model reported to rival larger systems while remaining locally runnable. Tags: Open-source Models, open weights, QwQ-32B, reasoning
DBRX (Databricks): Openly licensed MoE model (Base and Instruct variants) with strong general performance and active community tooling. Tags: Open-source Models, Databricks, DBRX, instruct
Qwen2.5 (Alibaba Qwen): Latest Qwen series spanning 0.5B–72B; dense decoder models with strong general, coding, and math variants for local use (see the local-inference sketch after this list). Tags: Open-source Models, 7B, 32B, 72B
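Several of the entries above emphasize local inference with open weights. As a minimal sketch of what that looks like in practice, the snippet below loads one of the listed models through Hugging Face transformers; the specific repo id (a DeepSeek-R1 distilled 1.5B checkpoint) is an assumption for illustration, and any other open-weight model above could be substituted.

```python
# Minimal local-inference sketch using Hugging Face transformers.
# Assumptions: the repo id below is one possible hosted checkpoint,
# and `accelerate` is installed so device_map="auto" can place weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to fit consumer GPUs
    device_map="auto",           # spread layers over available GPU/CPU
)

prompt = "Explain why sparse mixture-of-experts models are cheap to serve."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pattern applies to the other models listed; quantized community builds (for example GGUF conversions) typically run through separate runtimes rather than this transformers path.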