Provider analysis
Liquid AI
Publisher of the LFM2 family of hybrid models designed for on-device deployment, spanning roughly 350M to 24B parameters (including MoE variants) with edge-optimized inference.
This provider page blends full-profile entries with broader verified listings. Use it to distinguish deeply evaluated flagship models from source-backed records tracked mainly for market visibility, access data, and freshness.
- Tracked models available through provider-managed APIs.
- Models with downloadable weights or self-hosted distribution paths.
- Total source references attached across this provider catalog.
LFM2-24B-A2B
LFM2
Liquid AI's flagship MoE hybrid model with 24B total / 2.3B active parameters. 112 tok/s on an AMD CPU, 293 tok/s on an H100. Fits in 32GB of RAM. Trained on 17T tokens; 30 convolution + 10 attention layers. Supports 9 languages.
- Context: 32,768
- Input: Not applicable
- Output: Not applicable
- Coverage: Full profile
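A rough sanity check on the "fits in 32GB RAM" claim above, as a sketch only: assuming 8-bit weight quantization (the precision is not stated in this listing), a 24B-parameter model needs about 24 GB for weights, while only the 2.3B active parameters are exercised per token.

```python
# Back-of-envelope memory and active-compute estimate for a MoE model.
# Assumes 1 byte per weight (8-bit quantization); actual precision may differ.

def weight_memory_gb(params: float, bytes_per_param: float = 1.0) -> float:
    """Approximate weight memory in GB (1 GB = 1e9 bytes)."""
    return params * bytes_per_param / 1e9

total_params = 24e9    # LFM2-24B-A2B total parameters
active_params = 2.3e9  # parameters active per token

mem_gb = weight_memory_gb(total_params)          # ~24 GB at 8-bit
active_fraction = active_params / total_params   # fraction of weights used per token

print(f"weights: ~{mem_gb:.0f} GB, active per token: {active_fraction:.1%}")
```

At 8-bit precision the weights alone land just under the 32GB envelope, and the ~10% active fraction is what makes CPU-class throughput plausible despite the large total parameter count.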
LFM2-8B-A1B
LFM2
Liquid AI's MoE hybrid model with 8.3B total / 1.5B active parameters, trained on 12T tokens. 24 layers (18 convolution + 6 attention). 47.9K downloads on HuggingFace.
- Context: 32,768
- Input: Not applicable
- Output: Not applicable
- Coverage: Full profile
LFM2-2.6B
LFM2
Liquid AI's 2.6B-parameter LFM2 model for lightweight text generation and tool use on edge devices.
- Context: 32,768
- Input: Not applicable
- Output: Not applicable
- Coverage: Full profile
LFM2.5-1.2B-Instruct
LFM2.5
Liquid AI's 1.2B-parameter LFM2.5 instruct model with an extended 128K context for edge reasoning and instruction following. 262K downloads on HuggingFace.
- Context: 131,072
- Input: Not applicable
- Output: Not applicable
- Coverage: Full profile
LFM2.5-1.2B-Thinking
LFM2.5
Liquid AI's 1.2B-parameter LFM2.5-Thinking model with chain-of-thought reasoning for edge devices. 30K downloads on HuggingFace.
- Context: 131,072
- Input: Not applicable
- Output: Not applicable
- Coverage: Full profile