
Intel Optimizes Microsoft’s Phi-3 AI Models for Its AI Platforms

Intel has announced that it has validated and optimized several models from Microsoft’s Phi-3 family of open models for Intel Gaudi AI accelerators and Intel Xeon processors in the data center, as well as Intel Core Ultra processors and Intel Arc™ graphics on client systems. This broad support lets developers build applications that run locally on a wide range of systems and devices, from the data center to the edge.

Pallavi Mahajan, Intel corporate vice president and general manager of Data Center and AI Software, emphasized the importance of the collaboration between Intel and Microsoft to optimize support for these models. Mahajan stated that Intel aims to “provide customers and developers with powerful AI solutions that leverage the latest AI models and software,” adding that active collaboration with AI software ecosystem partners like Microsoft “is key to bringing AI everywhere.”

One of the most important results from this collaboration is the co-design of the accelerator abstraction in DeepSpeed, a deep learning optimization software suite that simplifies the development and deployment of AI models. Additionally, Intel has extended the automatic tensor parallelism support for Phi-3 and other models on Hugging Face, a popular platform for AI model development and deployment.
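The following Python sketch illustrates what that workflow can look like, assuming a Hugging Face Phi-3 checkpoint; the model ID, parallelism degree, and prompt are illustrative assumptions rather than details from the announcement. The model is loaded with Transformers and handed to DeepSpeed inference, which shards supported layers across the available accelerators when tensor parallelism is enabled.

```python
# Illustrative sketch only: model ID, tp_size, and prompt are assumptions,
# not details from the announcement.
import deepspeed
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"  # assumed Hugging Face checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# DeepSpeed inference shards supported layers across devices when tp_size > 1,
# using its automatic tensor-parallelism path for models it recognizes.
engine = deepspeed.init_inference(
    model,
    tensor_parallel={"tp_size": 2},  # assumed: two accelerator cards
    dtype=torch.bfloat16,
)

prompt = "Explain tensor parallelism in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(engine.module.device)
outputs = engine.module.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```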

The compact size of Phi-3 models makes them well suited to on-device inference and enables lightweight model development, such as fine-tuning or customization, on AI PCs and edge devices. Phi-3 workloads on Intel client hardware are further accelerated by a comprehensive set of software frameworks and tools, including PyTorch and Intel® Extension for PyTorch for local research and development, and the OpenVINO™ Toolkit for model deployment and inference, as sketched below.
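As a rough illustration of that client-side path, the sketch below exports a Phi-3 checkpoint to OpenVINO IR with Optimum Intel and runs generation locally. The model ID and prompt are assumptions, and the conversion steps shown may differ from Intel’s own deployment recipes.

```python
# Illustrative sketch only: model ID and prompt are assumptions.
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"  # assumed Hugging Face checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)

# export=True converts the PyTorch checkpoint to OpenVINO IR on the fly;
# the model runs on CPU by default and can be moved to an Intel GPU
# (e.g. Arc or integrated graphics) with model.to("GPU").
model = OVModelForCausalLM.from_pretrained(model_id, export=True)

prompt = "Summarize why small models suit on-device inference."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```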

Looking ahead, Intel will continue to support and optimize software for Phi-3 and other leading language models as a critical part of its long-term strategy to make the development and deployment of AI more accessible and efficient for developers and enterprises.

For performance and technical details, check out the Intel Developer Blog.
