AI Hub

Top AI Models

Current best-in-class AI models by category

Google DeepMind
Closed

Gemini 3.1 Pro

Google's latest flagship model. State-of-the-art reasoning, multimodal, and agentic capabilities. Released Feb 19, 2026.

Feb 2026

Anthropic
Closed

Claude Sonnet 4.6

Anthropic's default free/Pro model. A full upgrade in coding, agentic tasks, and long-context work, with a 1M-token context window (beta).

Feb 2026

Alibaba
Open Source · 397B (MoE)

Qwen3.5

Alibaba's open-weight MoE flagship. 397B total / 17B active params, 512 experts, 256K context, 201 languages, native multimodal.

Feb 2026
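The "17B active / 397B total" figure is the key trait of a mixture-of-experts (MoE) model: only a small subset of experts runs per token, so per-token compute scales with the active parameters while memory footprint scales with the total. A minimal sketch of that ratio, using only the numbers stated on the cards here (the helper name is illustrative, not any vendor's API):

```python
def active_fraction(active_b: float, total_b: float) -> float:
    """Fraction of an MoE model's weights used in one forward pass per token."""
    return active_b / total_b

# Figures from the cards above:
qwen_share = active_fraction(17, 397)     # Qwen3.5
mistral_share = active_fraction(41, 675)  # Mistral Large 3

print(f"Qwen3.5 uses {qwen_share:.1%} of its weights per token")        # → 4.3%
print(f"Mistral Large 3 uses {mistral_share:.1%} of its weights per token")  # → 6.1%
```

This is why an MoE model with hundreds of billions of total parameters can be served at a per-token cost closer to a mid-sized dense model.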

xAI
Closed

Grok 4.20

xAI's multi-agent system built on a four-agent collaboration framework. Exclusive to SuperGrok subscribers.

Feb 2026

Anthropic
Closed

Claude Opus 4.6

Anthropic's most capable model. Outperforms GPT-5.2 on GDPval-AA by ~144 Elo, with a 1M-token context window (beta).

Feb 2026
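An Elo gap can be translated into an expected head-to-head win rate with the standard Elo logistic formula. A small sketch, assuming GDPval-AA uses the usual 400-point Elo scale (an assumption; the benchmark's exact scaling is not stated here):

```python
def elo_win_prob(elo_gap: float) -> float:
    """Expected score for the higher-rated side under the
    standard 400-point Elo logistic scale (assumed here)."""
    return 1.0 / (1.0 + 10 ** (-elo_gap / 400))

# A ~144-point gap implies roughly a 70% expected win rate head-to-head.
print(f"{elo_win_prob(144):.1%}")  # → 69.6%
```

In other words, a 144-Elo lead is a substantial but not overwhelming edge: the lower-rated model would still be preferred about 30% of the time.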

OpenAI
Closed

GPT-5.3 Codex

OpenAI's best-in-class agentic coding model, combining the Codex and GPT-5 stacks. ~25% faster inference, top SWE-bench scores.

Jan 2026

OpenAI
Closed

GPT-5.2

First model to cross 90% on ARC-AGI-1. GPT-5.2 Thinking scores 52.9% on ARC-AGI-2. Advanced reasoning flagship.

Dec 2025

OpenAI
Closed

GPT-5

OpenAI's flagship multimodal model and the default in ChatGPT. Replaces GPT-4o and adds automatic, integrated reasoning.

Jul 2025

DeepSeek
Open Source · 671B (MoE)

DeepSeek V3

Open-source frontier model. Context window expanded to 1M+ tokens (Feb 2026). Strong reasoning at very low inference cost.

Dec 2024

Google DeepMind
Closed

Gemini 3 Flash

Google's fast frontier model, default in Gemini app. PhD-level reasoning at speed. Replaced Gemini 2.5 Flash.

Jan 2026

Mistral AI
Open Source · 675B (MoE)

Mistral Large 3

Mistral's open-source flagship. 41B active / 675B total MoE, 256K context, text + image. Apache 2.0.

Dec 2025

xAI
Closed

Grok 4

xAI's flagship model with real-time X/web access, a 130K+ token context, and native tool use. Billed by xAI as its most intelligent model.

Jul 2025

OpenAI
Closed

o3 Pro

OpenAI's extended-thinking reasoning model for complex math, science, and coding tasks.

Apr 2025

Meta
Open Source · 128 experts

Llama 4 Maverick

Meta's open-weight MoE model. 17B active params across 128 experts, natively multimodal (text + image). Released under the Llama 4 Community License.

Apr 2025