Nvidia at Microsoft Build

Nvidia optimizes AI capabilities of RTX AI PCs at Microsoft Build

Even though Qualcomm was the belle of the Microsoft Copilot+ PC ball, Nvidia was keen to show the strength of its relationship with the Redmond-based software giant by unveiling new AI performance optimizations and integrations for Windows at the Build event. These include its new R555 Game Ready Driver, which enables large language models (LLMs) to run up to three times faster with ONNX Runtime (ORT) and DirectML.
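
From a developer's perspective, tapping that acceleration amounts to running an ONNX model with the DirectML execution provider selected in ONNX Runtime. The sketch below illustrates the idea only; the model path, dummy input, and CPU fallback are placeholders rather than anything taken from Nvidia's announcement.

```python
import numpy as np
import onnxruntime as ort

# Prefer the DirectML execution provider (GPU-accelerated on Windows) and
# fall back to CPU if it is unavailable. "model.onnx" is a placeholder for
# any exported ONNX model.
session = ort.InferenceSession(
    "model.onnx",
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)

# Build a dummy input from the model's first input description; real shapes,
# dtypes, and tensor names depend on the exported model.
meta = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in meta.shape]
dummy = np.zeros(shape, dtype=np.float32)

outputs = session.run(None, {meta.name: dummy})
```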

In a separate blog post, the company announced that all of Microsoft’s growing family of Phi-3 open small language models have been GPU-optimized with Nvidia TensorRT-LLM, while Nvidia cuOpt, a GPU-accelerated AI microservice for route optimization, is now available in Azure Marketplace.
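
For those curious what that looks like in practice, the sketch below assumes TensorRT-LLM's high-level LLM API as found in recent releases; the exact class names, the Phi-3 model identifier, and the sampling settings are assumptions and may differ by version, so treat it as an outline rather than a definitive recipe.

```python
from tensorrt_llm import LLM, SamplingParams

# Load a Phi-3 checkpoint through TensorRT-LLM's high-level LLM API
# (the model identifier below is an assumption, not from the announcement).
llm = LLM(model="microsoft/Phi-3-mini-4k-instruct")

prompts = ["Explain what Microsoft Build is in one sentence."]
params = SamplingParams(temperature=0.8, top_p=0.95)

# generate() returns one result per prompt; each result holds the generated text.
for result in llm.generate(prompts, params):
    print(result.outputs[0].text)
```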

Nvidia also took the opportunity to issue a timely reminder that it was the first company to introduce GPUs with dedicated AI acceleration, back in 2018 with the GeForce RTX 20 Series and its Tensor Cores, along with Nvidia DLSS, the first widely adopted AI model to run on Windows. Since that initial launch, more than 100 million Nvidia RTX AI PCs and workstations have shipped, running more than 500 accelerated applications and games.

For all the ballyhoo surrounding the launch of Copilot+ PCs this week, this new wave of devices presents little threat to Nvidia’s leadership in the premium AI PC segment. With the Hexagon NPU in the Qualcomm Snapdragon X Elite processor delivering only 45 TOPS (trillion operations per second), compared with up to 1,300 TOPS offered by Nvidia’s latest GPUs, Copilot+ PCs have nowhere near enough horsepower to run the hugely demanding local workloads required by the gamers, creators, and developers that Nvidia caters to with its RTX platform.