Microsoft's Phi-4 series delivers multimodal AI in a compact package designed for local deployment, giving developers a way to build multimodal applications for lightweight computing devices. The new models deliver breakthrough performance in a small footprint, processing text, images and speech simultaneously while requiring less computing power than competing models.
Microsoft has introduced Phi-4, the latest addition to its Phi family of small language models (SLMs). With 14 billion parameters, Phi-4 stands out for its ability to tackle complex reasoning tasks. In the last week of February, the company followed the open-source Phi-4 release with two more SLMs, Phi-4-mini and Phi-4-multimodal.
Of the two, Phi-4-multimodal is an upgraded version of Phi-4-mini with 5.6 billion parameters. It can process not only text but also images, audio and video, and it improves speech recognition and translation. Both models are launching on Azure AI Foundry and Hugging Face.
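Because the models are distributed through Hugging Face, a developer can try Phi-4-mini with the standard transformers text-generation workflow. The sketch below is a minimal example, not an official recipe: the repository ID microsoft/Phi-4-mini-instruct, the chat-template usage and the generation settings are assumptions taken from typical Hugging Face practice, so check the model card for the exact identifiers and recommended parameters.

```python
# Minimal sketch: running Phi-4-mini locally with Hugging Face transformers.
# Assumption: the repo ID "microsoft/Phi-4-mini-instruct" and its chat template
# follow the public model card; adjust to match the actual release.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-4-mini-instruct"  # assumed repository ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # small enough for a single consumer GPU
    device_map="auto",
    # Depending on the transformers version, trust_remote_code=True may be needed.
)

messages = [
    {"role": "user",
     "content": "Summarize what a small language model is in two sentences."}
]

# Build the prompt with the model's chat template, then generate a reply.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Phi-4-multimodal follows the same pattern but accepts image and audio inputs alongside text; the exact preprocessing calls depend on the processor published with that model, so the model card is the authoritative reference there as well.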