LLM Atlas

Provider analysis

Microsoft

Publisher of the Phi family of compact language, reasoning, and multimodal models.

Last verified: 2026-03-29 · Confidence: High · Primary sources: 2

This provider page blends full-profile entries with broader verified listings. Use it to distinguish deeply evaluated flagship models from source-backed records tracked primarily for market visibility, access data, and freshness coverage.

  • Headquarters: Redmond, WA
  • Founded: 1975
  • Models tracked: 17
  • Full-profile models: 17
  • Catalog last verified: 2026-03-29
  • Latest model verification: 2026-03-29
  • Newest release tracked: 2025-04-30
  • Confidence: High
  • Access mix: open-weight, self-hosted, hosted
  • API models: 0 (tracked models available through provider-managed APIs)
  • Open-weight models: 17 (models with downloadable weights or self-hosted distribution paths)
  • Primary source links: 36 (total source references attached across this provider catalog)
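Catalog records like the ones on this page lend themselves to simple programmatic filtering, for example by context window when choosing a model for long-document work. The sketch below is illustrative only: the record layout and helper function are hypothetical, not an LLM Atlas API, though the model names and context values are transcribed from the cards below.

```python
# Hypothetical representation of a few cards from this catalog.
# "context" values are token counts as listed on each model card.
PHI_CATALOG = [
    {"model": "Phi-4-multimodal-instruct", "context": 131_072},
    {"model": "Phi-3-vision-128k-instruct", "context": 128_000},
    {"model": "Phi-4", "context": 16_384},
    {"model": "phi-2", "context": 4_096},
]

def models_with_context_at_least(catalog, min_tokens):
    """Return names of models whose context window meets the threshold,
    preserving catalog order."""
    return [rec["model"] for rec in catalog if rec["context"] >= min_tokens]

# Only the two 128K-class multimodal models clear a 100K-token bar.
long_context = models_with_context_at_least(PHI_CATALOG, 100_000)
```

The same pattern extends to filtering on access tags (open-weight vs. hosted) or verification dates if those fields are added to each record.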

Provider sources

Official links used to verify the provider profile and platform coverage.

Last verified: 2026-03-29
  • Microsoft Research website (official-website)
  • Microsoft Research (official-docs)


Phi-4-multimodal-instruct

Phi multimodal

Microsoft's 5.6B Phi-4 multimodal model with vision, audio, and text input for lightweight assistant features.

Score: 71 · Sources: 3
Tags: text · vision · audio · open-source · open-weight · self-hosted · hosted
Context: 131,072 · Input: Not applicable · Output: Not applicable · Coverage: Full profile


Phi-4-reasoning-vision-15B

Phi multimodal

Microsoft's Phi-4 reasoning vision model (15B) combining visual understanding with chain-of-thought reasoning.

Score: 71 · Sources: 2
Tags: text · vision · audio · open-source · open-weight · self-hosted · hosted
Context: 131,072 · Input: Not applicable · Output: Not applicable · Coverage: Full profile


Phi-3.5-vision-instruct

Phi multimodal

Microsoft's Phi-3.5 vision model with 128K context for image understanding and multimodal chat.

Score: 71 · Sources: 2
Tags: text · vision · audio · open-source · open-weight · self-hosted · hosted
Context: 131,072 · Input: Not applicable · Output: Not applicable · Coverage: Full profile


Phi-3-vision-128k-instruct

Phi multimodal

Microsoft's Phi-3 vision model with 128K context for lightweight image understanding on edge devices.

Score: 71 · Sources: 2
Tags: text · vision · audio · open-source · open-weight · self-hosted · hosted
Context: 128,000 · Input: Not applicable · Output: Not applicable · Coverage: Full profile


Phi-4

Phi

Microsoft's 14B parameter Phi-4, a state-of-the-art small model trained on 9.8T tokens with strong reasoning on MMLU (84.8) and GPQA (56.1).

Score: 70 · Sources: 3
Tags: text · reasoning · code · open-source · open-weight · self-hosted · hosted
Context: 16,384 · Input: Not applicable · Output: Not applicable · Coverage: Full profile


Phi-4-mini-instruct

Phi

Microsoft's compact Phi-4-mini instruct model with 128K context for efficient deployment on constrained infrastructure.

Score: 71 · Sources: 2
Tags: text · reasoning · code · open-source · open-weight · self-hosted · hosted
Context: 131,072 · Input: Not applicable · Output: Not applicable · Coverage: Full profile


Phi-4-reasoning

Phi

Microsoft's Phi-4 reasoning variants with 128K context for compact, efficient reasoning on constrained infrastructure.

Score: 71 · Sources: 2
Tags: text · reasoning · code · open-source · open-weight · self-hosted · hosted
Context: 131,072 · Input: Not applicable · Output: Not applicable · Coverage: Full profile


Phi-4-reasoning-plus

Phi

Microsoft's Phi-4 reasoning variants with 128K context for compact, efficient reasoning on constrained infrastructure.

Score: 71 · Sources: 2
Tags: text · reasoning · code · open-source · open-weight · self-hosted · hosted
Context: 131,072 · Input: Not applicable · Output: Not applicable · Coverage: Full profile


Phi-4-mini-flash-reasoning

Phi

Microsoft's Phi-4 reasoning variants with 128K context for compact, efficient reasoning on constrained infrastructure.

Score: 71 · Sources: 2
Tags: text · reasoning · code · open-source · open-weight · self-hosted · hosted
Context: 131,072 · Input: Not applicable · Output: Not applicable · Coverage: Full profile


Phi-3.5-mini-instruct

Phi

Microsoft's Phi-3.5 models with 128K context, including a MoE variant for improved efficiency.

Score: 71 · Sources: 2
Tags: text · reasoning · code · open-source · open-weight · self-hosted · hosted
Context: 131,072 · Input: Not applicable · Output: Not applicable · Coverage: Full profile


Phi-3.5-MoE-instruct

Phi

Microsoft's Phi-3.5 models with 128K context, including a MoE variant for improved efficiency.

Score: 71 · Sources: 2
Tags: text · reasoning · code · open-source · open-weight · self-hosted · hosted
Context: 131,072 · Input: Not applicable · Output: Not applicable · Coverage: Full profile


Phi-3-mini-4k-instruct

Phi

Microsoft's earlier Phi models with shorter context windows for edge and local deployment.

Score: 69 · Sources: 2
Tags: text · reasoning · code · open-source · open-weight · self-hosted · hosted
Context: 4,096 · Input: Not applicable · Output: Not applicable · Coverage: Full profile


Phi-3-medium-4k-instruct

Phi

Microsoft's earlier Phi models with shorter context windows for edge and local deployment.

Score: 69 · Sources: 2
Tags: text · reasoning · code · open-source · open-weight · self-hosted · hosted
Context: 4,096 · Input: Not applicable · Output: Not applicable · Coverage: Full profile


phi-2

Phi

Microsoft's earlier Phi models with shorter context windows for edge and local deployment.

Score: 69 · Sources: 2
Tags: text · reasoning · code · open-source · open-weight · self-hosted · hosted
Context: 4,096 · Input: Not applicable · Output: Not applicable · Coverage: Full profile


phi-1_5

Phi

Microsoft's earlier Phi models with shorter context windows for edge and local deployment.

Score: 69 · Sources: 2
Tags: text · reasoning · code · open-source · open-weight · self-hosted · hosted
Context: 4,096 · Input: Not applicable · Output: Not applicable · Coverage: Full profile


phi-1

Phi

Microsoft's earlier Phi models with shorter context windows for edge and local deployment.

Score: 69 · Sources: 2
Tags: text · reasoning · code · open-source · open-weight · self-hosted · hosted
Context: 4,096 · Input: Not applicable · Output: Not applicable · Coverage: Full profile


Phi-tiny-MoE-instruct

Phi

Microsoft's earlier Phi models with shorter context windows for edge and local deployment.

Score: 69 · Sources: 2
Tags: text · reasoning · code · open-source · open-weight · self-hosted · hosted
Context: 4,096 · Input: Not applicable · Output: Not applicable · Coverage: Full profile