Liquid AI
LFM2-24B-A2B
Overall 60 · Open weight
Liquid AI's flagship MoE hybrid model with 24B total and 2.3B active parameters. 112 tok/s on AMD CPU, 293 tok/s on H100. Fits in 32 GB RAM. Trained on 17T tokens with 30 convolution and 10 attention layers. Supports 9 languages.
Capability profile
Radar view of the model's practical strengths. This chart is backed by textual summaries below for crawlability.
Benchmark summary
Best-in-class efficiency: 24B MoE with 2B active params for on-device deployment.
No benchmark series is attached to this model yet. Source links and product metadata are available below.
Strengths
- 112 tok/s on CPU
- 293 tok/s on H100
- Fits in 32 GB RAM
- Native tool calling
- 9 languages
- Open license
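The "fits in 32 GB RAM" claim follows from simple arithmetic on the 24B total parameter count. A minimal sketch, assuming common weight precisions and ignoring runtime overhead for activations and KV cache (actual loaders need headroom beyond the raw weights):

```python
# Rough weight-memory footprint for a 24B-parameter model at common
# precisions. Illustrative arithmetic only; real deployments add
# overhead for activations, KV cache, and runtime buffers.

TOTAL_PARAMS = 24e9  # 24B total parameters (2.3B active per token)

BYTES_PER_PARAM = {
    "fp16/bf16": 2.0,
    "int8": 1.0,
    "int4": 0.5,
}

def weight_gib(params: float, bytes_per_param: float) -> float:
    """Weight memory in GiB, ignoring runtime overhead."""
    return params * bytes_per_param / 2**30

for precision, bpp in BYTES_PER_PARAM.items():
    print(f"{precision:>10}: {weight_gib(TOTAL_PARAMS, bpp):.1f} GiB")
```

At full bf16 precision the weights alone (~45 GiB) exceed 32 GB, so the stated RAM fit implies 8-bit or lower quantization; the 2.3B active parameters per token are what keep CPU throughput high.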
Trade-offs
- 32K context limit
- Not optimized for coding
- Lower quality than frontier models
Crawlable benchmark analysis
LFM2-24B-A2B is positioned as an edge-optimized MoE model; its published scores emphasize practical deployment fit for buyers evaluating this entry.
Published scores highlight reasoning 62/100, coding 48/100, enterprise readiness 68/100, vision 15/100, speed 94/100, and safety 65/100.
Pricing is not applicable for this self-hosted, open-weight entry. Its 32,768-token context window handles moderate-length documents and retrieval workflows, though it is far smaller than the million-token windows of the frontier models listed below.
Benchmark coverage is still limited for this entry, so this section focuses on published metadata and deployment fit.
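To gauge what fits in the 32,768-token window, a minimal sketch using the common ~4 characters/token heuristic (an assumption; real tokenizer counts vary by language and content):

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Crude token estimate via the ~4 chars/token rule of thumb."""
    return int(len(text) / chars_per_token)

def fits_context(text: str, context_window: int = 32_768,
                 reserved_for_output: int = 2_048) -> bool:
    """Check whether a prompt leaves room for the model's reply."""
    return estimate_tokens(text) + reserved_for_output <= context_window

# A ~100K-character document (~25K estimated tokens) still fits;
# a ~150K-character one does not.
print(fits_context("word " * 20_000))  # True
print(fits_context("word " * 30_000))  # False
```

The `reserved_for_output` margin matters in practice: a prompt that exactly fills the window leaves no room for generation.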
Related models
OpenAI
GPT-5.4
OpenAI's GPT-5.4, the most capable and efficient frontier model for professional work. First general-purpose model with native computer-use capabilities. Combines industry-leading coding from GPT-5.3-Codex with improved agentic workflows.
- Context: 1,000,000 tokens
- Input: $0.005/1K tok
- Output: $0.02/1K tok
- Coverage: Full profile
Anthropic
Claude Sonnet 4.6
Anthropic's current Sonnet tier for fast frontier reasoning, coding, and long-context agent work.
- Context: 1,000,000 tokens
- Input: $0.003/1K tok
- Output: $0.02/1K tok
- Coverage: Full profile
Anthropic
Claude Opus 4.6
Anthropic's most intelligent Claude model for complex agents, coding, and deep reasoning, with 1M token context and 128K output.
- Context: 1,000,000 tokens
- Input: $0.005/1K tok
- Output: $0.03/1K tok
- Coverage: Full profile
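For comparing the hosted alternatives above against a self-hosted deployment, per-request cost follows directly from the listed per-1K-token rates. A minimal sketch using the rates as published on this page (verify current pricing before relying on these numbers):

```python
# Per-request cost from the per-1K-token rates listed above.
RATES = {  # model: (input $/1K tok, output $/1K tok)
    "GPT-5.4":           (0.005, 0.02),
    "Claude Sonnet 4.6": (0.003, 0.02),
    "Claude Opus 4.6":   (0.005, 0.03),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """USD cost of one request at the listed rates."""
    in_rate, out_rate = RATES[model]
    return input_tokens / 1000 * in_rate + output_tokens / 1000 * out_rate

# Example: a 10K-token prompt with a 1K-token reply.
for model in RATES:
    print(f"{model}: ${request_cost(model, 10_000, 1_000):.3f}")
```

At this workload the three models land at $0.07, $0.05, and $0.08 per request respectively; an open-weight model like LFM2-24B-A2B trades those marginal costs for hardware and operations overhead.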