LLM Atlas

Liquid AI

LFM2-24B-A2B

Overall score: 60 · Open weight

Liquid AI's flagship MoE hybrid model, with 24B total and 2.3B active parameters. Runs at 112 tok/s on an AMD CPU and 293 tok/s on an H100, and fits in 32 GB of RAM. Trained on 17T tokens with a hybrid stack of 30 convolution and 10 attention layers. Supports 9 languages.
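The efficiency claims above can be sanity-checked with simple arithmetic. A minimal sketch, assuming 8-bit quantized weights at 1 byte per parameter (the card does not state a quantization format, so the byte counts are illustrative, not published specs):

```python
# Back-of-the-envelope check of the "fits in 32 GB RAM" claim.
# Assumption (not from the model card): 8-bit quantized weights,
# i.e. 1 byte per parameter; 4-bit would halve this.

def weight_memory_gb(total_params: float, bytes_per_param: float) -> float:
    """Memory needed to hold the model weights alone, in GB."""
    return total_params * bytes_per_param / 1e9

weights_8bit = weight_memory_gb(24e9, 1.0)  # 24B params at 8-bit
weights_4bit = weight_memory_gb(24e9, 0.5)  # 24B params at 4-bit

print(f"8-bit weights: {weights_8bit:.0f} GB")  # 24 GB
print(f"4-bit weights: {weights_4bit:.0f} GB")  # 12 GB

# At 8-bit the weights alone take 24 of the 32 GB, leaving roughly
# 8 GB for the KV cache and runtime; a 4-bit quant leaves far more headroom.
assert weights_8bit < 32
```

Because only 2.3B of the 24B parameters are active per token, compute per token is small even though the full weight set must still reside in memory, which is what the 32 GB figure reflects.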

Last verified: 2026-03-29 · Confidence: High · Sources: 4
Tags: text, tool-use, open-source
Input price: Not applicable
Output price: Not applicable
Context window: 32,768 tokens
Max output: 16,384 tokens
Release date: 2026-02-24
Access: open-weight, self-hosted
License: LFM Open License v1.0
Last verified: 2026-03-29

Capability profile

Radar view of the model's practical strengths. This chart is backed by textual summaries below for crawlability.

Benchmark summary

Best-in-class efficiency: a 24B MoE with 2.3B active params for on-device deployment.

No benchmark series is attached to this model yet. Source links and product metadata are available below.

Strengths

  • 112 tok/s on CPU
  • 293 tok/s on H100
  • Fits 32GB RAM
  • Native tool calling
  • 9 languages
  • Open license
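Since native tool calling is listed as a strength, a self-hosted deployment would typically expose it through an OpenAI-compatible chat endpoint. A minimal sketch of the request payload only, assuming such an endpoint; the model id `lfm2-24b-a2b` and the `get_weather` tool are hypothetical, not from Liquid AI's documentation:

```python
import json

# Hypothetical tool definition in the widely used OpenAI-style
# function-calling schema; nothing here is Liquid AI-specific.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

request = {
    "model": "lfm2-24b-a2b",  # assumed id for a local deployment
    "messages": [{"role": "user", "content": "Weather in Oslo?"}],
    "tools": [weather_tool],
    "tool_choice": "auto",
}

# POST this JSON body to the server's chat-completions route.
print(json.dumps(request, indent=2))
```

If the model decides to call the tool, the response would carry a structured tool call rather than plain text; the client executes it and sends the result back as a `tool` role message.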

Trade-offs

  • 32K context limit
  • Not optimized for coding
  • Lower quality than frontier models

Crawlable benchmark analysis

LFM2-24B-A2B is positioned as an edge-optimized MoE model; its published scores emphasize practical fit for buyers evaluating on-device and self-hosted deployment.

Published scores highlight reasoning 62/100, coding 48/100, enterprise readiness 68/100, vision 15/100, speed 94/100, and safety 65/100.

Pricing is not applicable for this self-hosted, open-weight entry. Its 32,768-token context window covers moderate documents and retrieval workflows, though it is small relative to the 1M-token windows of current frontier models.

Benchmark coverage is still limited for this entry, so this section focuses on published metadata and deployment fit.
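For deployment fit, the published throughput figures translate directly into wall-clock estimates. A quick sketch; the tok/s numbers come from the card above, the response length is an illustrative assumption:

```python
# Estimate generation time from the card's published decode throughput:
# 112 tok/s on an AMD CPU, 293 tok/s on an H100.

def gen_seconds(tokens: int, toks_per_sec: float) -> float:
    """Seconds to decode `tokens` output tokens at a steady rate."""
    return tokens / toks_per_sec

for label, tps in [("AMD CPU", 112.0), ("H100", 293.0)]:
    t = gen_seconds(500, tps)  # an assumed 500-token answer
    print(f"{label}: {t:.1f} s for 500 tokens")

# AMD CPU: ~4.5 s; H100: ~1.7 s -- interactive on both, which is
# the point of the 2.3B-active MoE design.
```

The same arithmetic puts a full 16,384-token maximum output at roughly two and a half minutes on CPU, a useful bound when sizing batch or agent workloads.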

Sources

Provider and distribution links used to verify this model record.

Last verified: 2026-03-29

Related models

OpenAI

GPT-5.4


OpenAI's GPT-5.4, the most capable and efficient frontier model for professional work. First general-purpose model with native computer-use capabilities. Combines industry-leading coding from GPT-5.3-Codex with improved agentic workflows.

Score 93 · 3 sources
Tags: text, reasoning, tool-use, vision, api, hosted
Context: 1,000,000 tokens
Input: $0.005/1K tok
Output: $0.02/1K tok
Coverage: Full profile

Anthropic

Claude Sonnet 4.6

Alias: Claude 4.6

Anthropic's current Sonnet tier for fast frontier reasoning, coding, and long-context agent work.

Score 92 · 3 sources
Tags: text, vision, reasoning, code, tool-use, api, hosted
Context: 1,000,000 tokens
Input: $0.003/1K tok
Output: $0.02/1K tok
Coverage: Full profile

Anthropic

Claude Opus 4.6

Alias: Claude 1M

Anthropic's most intelligent Claude model for complex agents, coding, and deep reasoning, with 1M token context and 128K output.

Score 91 · 3 sources
Tags: text, vision, reasoning, api, hosted
Context: 1,000,000 tokens
Input: $0.005/1K tok
Output: $0.03/1K tok
Coverage: Full profile