LLM Atlas

Morph

flash-compact

Overall 68 · commercial

Morph's Flash Compact performs near-lossless context compaction at 33,000+ tok/sec, reducing context by 50-70% in under 2 seconds and adding +0.6% on SWE-Bench Pro.

Last verified: 2026-03-29 · Confidence: High · Sources: 3
text · code · tool-use
Input price
$0.0003/1K tok
Output price
$0.001/1K tok
Context window
200,000 tokens
Max output
65,536 tokens
Release date
2026-03-07
Access
api, hosted
License
Proprietary / not disclosed
Last verified
2026-03-29

Capability profile

Radar view of the model's practical strengths. This chart is backed by textual summaries below for crawlability.

Benchmark summary

Flash Compact achieves 33,000 tok/sec context compaction with +0.6% improvement on SWE-Bench Pro.

No benchmark series is attached to this model yet. Source links and product metadata are available below.

Strengths

  • 33,000 tok/s compaction
  • 50-70% context reduction
  • Near-lossless
  • Under 2 seconds

Trade-offs

  • Compaction only
  • May lose nuance in complex contexts
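As a rough illustration of the throughput and reduction figures above, here is a back-of-envelope sketch using the page's published numbers (the helper function itself is hypothetical, not a vendor API):

```python
# Back-of-envelope estimate from the published figures:
# 33,000 tok/sec compaction throughput, 50-70% context reduction.
THROUGHPUT_TOK_PER_S = 33_000
REDUCTION_RANGE = (0.50, 0.70)

def compaction_estimate(context_tokens: int):
    """Estimate compaction time and the token range left after reduction."""
    seconds = context_tokens / THROUGHPUT_TOK_PER_S
    remaining = tuple(int(context_tokens * (1 - r)) for r in REDUCTION_RANGE)
    return seconds, remaining

# A 60K-token context compacts in well under 2 seconds.
secs, (hi_remaining, lo_remaining) = compaction_estimate(60_000)
print(f"{secs:.2f}s, leaves {lo_remaining}-{hi_remaining} tokens")
```

At these rates a full 200,000-token window would take roughly 6 seconds, so the "under 2 seconds" figure presumably describes typical contexts of around 66K tokens or fewer.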

Crawlable benchmark analysis

flash-compact is positioned as a context-compaction model; its published scores indicate its practical deployment fit for buyers evaluating this entry.

Published scores highlight reasoning 50/100, coding 85/100, enterprise readiness 82/100, vision 5/100, speed 99/100, and safety 72/100.

Pricing starts at $0.0003 per 1K input tokens and $0.001 per 1K output tokens. With a context window of 200,000 tokens, it supports large-document analysis and retrieval workflows.
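To make the per-token rates concrete, a minimal cost sketch at the published prices (illustrative only; the token counts are example values, not measured usage):

```python
# Published rates for flash-compact, in dollars per 1K tokens.
INPUT_PER_1K = 0.0003
OUTPUT_PER_1K = 0.001

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request at the listed rates."""
    return (input_tokens / 1000) * INPUT_PER_1K + (output_tokens / 1000) * OUTPUT_PER_1K

# Worst case: full 200K-token context in, maximum 65,536 tokens out.
print(f"${request_cost(200_000, 65_536):.4f}")  # → $0.1255
```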

Benchmark coverage is still limited for this entry, so this section focuses on published metadata and deployment fit.

Sources

Provider and distribution links used to verify this model record.

Last verified: 2026-03-29

Related models

OpenAI

GPT-5.4

OpenAI

OpenAI's GPT-5.4, the most capable and efficient frontier model for professional work. First general-purpose model with native computer-use capabilities. Combines industry-leading coding from GPT-5.3-Codex with improved agentic workflows.

Score 93 · 3 sources
text · reasoning · tool-use · vision · api · hosted
Context
1,000,000 tokens
Input
$0.005/1K tok
Output
$0.02/1K tok
Coverage
Full profile

Anthropic

Claude Sonnet 4.6

Claude 4.6

Anthropic's current Sonnet tier for fast frontier reasoning, coding, and long-context agent work.

Score 92 · 3 sources
text · vision · reasoning · code · tool-use · api · hosted
Context
1,000,000 tokens
Input
$0.003/1K tok
Output
$0.02/1K tok
Coverage
Full profile

Anthropic

Claude Opus 4.6

Claude 1M

Anthropic's most intelligent Claude model for complex agents, coding, and deep reasoning, with 1M token context and 128K output.

Score 91 · 3 sources
text · vision · reasoning · api · hosted
Context
1,000,000 tokens
Input
$0.005/1K tok
Output
$0.03/1K tok
Coverage
Full profile