IBM Granite 3.1: Apache-licensed 2B–8B family with long-context options, optimized for enterprise and local deployments. Tags: Open-source Models, 8B, Apache-2.0, Granite 3.1
RWKV-7 (family): Attention-free, RNN-style open models with transformer-level performance and constant-memory inference (see the sketch below). Tags: Open-source Models, linear time, local, open source
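Why "constant-memory" matters: a transformer's KV cache grows with sequence length, while an RNN-style model carries a fixed-size state. The following is a minimal conceptual sketch of that idea only; it is not RWKV-7's actual time-mix/channel-mix kernels, and all weights and sizes are toy placeholders.

```python
# Conceptual sketch of constant-memory recurrent inference (NOT RWKV-7's
# real architecture): a fixed-size state replaces a growing KV cache,
# so memory stays O(1) in sequence length.
import numpy as np

d = 8                                    # toy hidden size
W_state = np.random.randn(d, d) * 0.1    # hypothetical recurrence weights
W_in = np.random.randn(d, d) * 0.1       # hypothetical input projection

state = np.zeros(d)                      # fixed-size state, never grows
for t in range(1000):                    # sequence length is unbounded
    x_t = np.random.randn(d)             # stand-in for a token embedding
    state = np.tanh(W_state @ state + W_in @ x_t)   # O(1)-memory update
# 'state' now summarizes the whole sequence in a constant-size vector.
```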
OLMo 2 (AI2): Fully open training data, code, and checkpoints; a transparent 7B/13B/32B family for reproducible research and applications. Tags: Open-source Models, AI2, fully open, local
Apple OpenELM: Open Efficient Language Models (270M–3B) with code, weights, and training recipes for reproducible local use. Tags: Open-source Models, Apple, open, OpenELM
Phi-4 Reasoning: Microsoft's 14B open-weight reasoning model delivering strong complex-task performance on modest hardware. Tags: Open-source Models, 14B, local, open weights
MPT-7B: MosaicML's Apache-licensed 7B family (base, instruct, and long-context variants), widely used as a fine-tuning base. Tags: Open-source Models, Apache-2.0, instruct, long context
QwQ-32B (Qwen Reasoning): Open-weight 32B RL-trained reasoning model reported to rival larger systems while remaining locally runnable (see the generation sketch below). Tags: Open-source Models, open weights, QwQ-32B, reasoning
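A minimal local-generation sketch using Hugging Face transformers and its chat-template API. It assumes the repo id "Qwen/QwQ-32B" (verify on the Hub) and hardware able to hold the weights; any smaller open-weight chat model can be swapped in for a quick test.

```python
# Local chat-style generation with transformers; repo id is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/QwQ-32B"  # assumed repo id; check the Hub before use
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "How many primes are below 30?"}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True,
                                 return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```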
DBRX Instruct (HF): Hugging Face repository hosting DBRX Instruct checkpoints under an open license for local inference and fine-tuning. Tags: Open-source Models, DBRX Instruct, Hugging Face, local
Llama 3.2 (Meta): Family of open-weight models (1B–90B, some with vision) designed for local and edge deployment, with strong instruction following; see the pipeline sketch below. Tags: Open-source Models, edge AI, Llama 3.2, local inference
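For edge-scale models, the transformers pipeline API is the quickest local test. This sketch assumes the gated repo id "meta-llama/Llama-3.2-1B-Instruct" (accept the license and authenticate on the Hub first) and a recent transformers version that accepts chat-message input in text-generation pipelines.

```python
# Quick local smoke test of a small open-weight model via the pipeline API.
from transformers import pipeline

pipe = pipeline("text-generation",
                model="meta-llama/Llama-3.2-1B-Instruct",  # gated repo id
                device_map="auto")
out = pipe([{"role": "user",
             "content": "Summarize what an LLM is in one line."}],
           max_new_tokens=64)
# Chat input returns the full conversation; take the assistant's reply.
print(out[0]["generated_text"][-1]["content"])
```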
Yi-1.5: 01.AI's upgraded Yi family (6B/34B) with improved instruction following and coding, widely used in local stacks. Tags: Open-source Models, 01.AI, 34B, instruction
Qwen2.5 (Alibaba Qwen): Latest Qwen series spanning 0.5B–72B; dense decoder models with strong general, coding, and math variants for local use. Tags: Open-source Models, 32B, 72B, 7B
LLaVA-OneVision 1.5: Fully open multimodal (image/video + text) models and training stack with strong results and reproducible recipes; see the inference sketch below. Tags: Open-source Models, LLaVA, local, multimodal
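A sketch of the image+text inference flow such models use. It is based on the earlier LLaVA-OneVision checkpoints published under the llava-hf org (the repo id below is an assumption, not the 1.5 release) and requires a transformers version that ships LLaVA-OneVision support.

```python
# Image+text inference sketch for an open multimodal model via transformers.
from transformers import AutoProcessor, LlavaOnevisionForConditionalGeneration
from PIL import Image

model_id = "llava-hf/llava-onevision-qwen2-0.5b-ov-hf"  # assumed repo id
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaOnevisionForConditionalGeneration.from_pretrained(
    model_id, device_map="auto")

image = Image.open("photo.jpg")  # any local image file
messages = [{"role": "user",
             "content": [{"type": "image"},
                         {"type": "text", "text": "Describe this image."}]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(images=image, text=prompt,
                   return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(out[0], skip_special_tokens=True))
```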
Gemma 2: Google's open-weight 9B/27B models, practical to run locally and widely adopted for fine-tuning and apps (a quantized-load sketch follows). Tags: Open-source Models, 27B, 9B, Gemma 2
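"Practical to run locally" often means quantizing: a 4-bit load lets a 9B-class model fit on a single consumer GPU. This sketch uses the transformers/bitsandbytes path; "google/gemma-2-9b-it" is the assumed license-gated repo id.

```python
# 4-bit quantized load via bitsandbytes so a 9B model fits in ~6 GB VRAM.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig)

quant = BitsAndBytesConfig(load_in_4bit=True,
                           bnb_4bit_compute_dtype=torch.bfloat16)
tok = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2-9b-it",
                                             quantization_config=quant,
                                             device_map="auto")
```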
InternVL 2.5: Open multimodal family (1B–78B); the 78B variant surpasses 70% on MMMU and offers broad image/video understanding. Tags: Open-source Models, InternVL, MMMU, multimodal
Mixtral 8x22B (Mistral): Sparse MoE open model delivering leading cost/performance among community LLMs; widely fine-tuned and quantized (see the GGUF sketch below). Tags: Open-source Models, Apache-2.0, Mistral, Mixtral 8x22B
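The common way to run such quantized builds locally is a GGUF file through llama.cpp. A minimal sketch with llama-cpp-python follows; the model path is a placeholder, so download a community GGUF quantization of Mixtral 8x22B (or a smaller model) first.

```python
# Local inference on a quantized GGUF build with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="mixtral-8x22b-instruct.Q4_K_M.gguf",  # placeholder
            n_ctx=8192,        # context window size
            n_gpu_layers=-1)   # offload all layers to GPU if available
out = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "Give one use case for MoE models."}],
    max_tokens=128)
print(out["choices"][0]["message"]["content"])
```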