DBRX Instruct (HF): Hugging Face repo for DBRX Instruct checkpoints under an open license for local inference and fine-tuning. (Open-source Models; tags: DBRX Instruct, Hugging Face, local)
MPT-7B: MosaicML's Apache-licensed 7B family (base/instruct/long-context), widely used as a fine-tuning base. (Open-source Models; tags: Apache-2.0, instruct, long context)
Apple OpenELM: Open Efficient Language Models (270M–3B) with code, weights, and training recipes for reproducible local use. (Open-source Models; tags: Apple, open, OpenELM)
RWKV-7 (family): Attention-free, RNN-style open models with transformer-level performance and constant-memory inference. (Open-source Models; tags: linear time, local, open source)
TinyLlama 1.1B: Compact 1.1B Llama-compatible model; popular GGUF quantizations make it fast on local CPUs and GPUs. (Open-source Models; tags: 1.1B, GGUF, Llama-compatible)
SmolLM2: Ultra-small open models (135M/360M/1.7B) tailored for on-device and constrained local deployments. (Open-source Models; tags: on-device, open weights, small LLM)
Falcon 2 11B: TII's newer open series (text and VLM variants) optimized for efficient inference; released under an Apache-style license. (Open-source Models; tags: 11B, Falcon 2, local)
MiniCPM-V 2.6: Edge-friendly multimodal 8B model (images/video) with quantized variants for low-VRAM local inference. (Open-source Models; tags: edge, int4, local)
InternVL 2.5: Open multimodal family (1B–78B) with broad image/video understanding; the 78B model surpasses 70% on MMMU. (Open-source Models; tags: InternVL, MMMU, multimodal)
LLaVA-OneVision 1.5: Fully open multimodal (images/video + text) models and training stack; strong results and reproducible recipes. (Open-source Models; tags: LLaVA, local, multimodal)
Yi-1.5: 01.AI's upgraded Yi family (6B/34B) with improved instruction following and coding; widely used in local stacks. (Open-source Models; tags: 01.AI, 34B, instruction)
QwQ-32B (Qwen Reasoning): Open-weight 32B RL-trained reasoning model reported to rival larger systems while remaining locally runnable. (Open-source Models; tags: open weights, QwQ-32B, reasoning)
Phi-4 Reasoning: Microsoft's 14B open-weight reasoning model delivering strong complex-task performance on modest hardware. (Open-source Models; tags: 14B, local, open weights)
OLMo 2 (AI2): Fully open training data, code, and checkpoints; a transparent 7B/13B/32B family for reproducible research and applications. (Open-source Models; tags: AI2, fully open, local)
IBM Granite 3.1: Apache-licensed 2B–8B family with long-context options, optimized for enterprise and local deployments. (Open-source Models; tags: 8B, Apache-2.0, Granite 3.1)