Dell Enterprise AI

Dell’s vision for the future of enterprise AI

At Dell Technologies World, the company’s senior leadership presented its vision for the future of AI in the enterprise, focusing on hybrid AI solutions and the concept of AI factories. As enterprises increasingly develop and deploy generative AI models and applications, Dell aims to address the challenge of optimizing these technologies within their organizational infrastructures.

Hybrid AI, a central theme of Dell’s strategy, involves performing some AI tasks in the cloud while handling others on-premises within a company’s data centers. This approach leverages the fact that a significant portion of business data remains stored in internal or co-located IT facilities. By bringing AI processing to the data, rather than transferring vast datasets to the cloud, Dell offers a more efficient and secure method for enterprises to utilize AI.

The concept of an AI factory is pivotal to Dell’s vision. An AI factory is a generative AI-enabled platform that supports the optimization of foundation models and the inferencing workloads they run, facilitating the development of custom applications built on those models.

Historically, running foundation models within enterprise data centers was impractical because tools like OpenAI’s GPT-3 and GPT-4 were only accessible via the cloud. However, the rise of open-source foundation models, such as Meta’s Llama 3, has changed the landscape: these models can be deployed within corporate data centers, allowing companies to fine-tune them with proprietary data and develop bespoke applications.
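As a rough illustration of what deploying an open model inside the data center looks like in practice, the minimal sketch below loads and queries Meta’s Llama 3 8B Instruct model on local GPU hardware using the Hugging Face transformers library. The model ID, dtype, and prompt are illustrative assumptions, not part of Dell’s announcements, and access to the Llama 3 weights requires accepting Meta’s license on Hugging Face.

```python
# Minimal sketch (not Dell-specific): load an openly available foundation model
# on local GPU hardware with the Hugging Face transformers library.
# Assumes torch and transformers are installed and the Llama 3 license has been
# accepted; fine-tuning on proprietary data would typically layer an approach
# such as PEFT/LoRA on top of this same loading step.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # illustrative model choice

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to fit on a single GPU
    device_map="auto",           # spread layers across available local GPUs
)

# Example prompt standing in for an internal, proprietary query.
prompt = "Summarize the key themes in our customer support tickets."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```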

To support this transition, Dell introduced a suite of products and services at the event, including the PowerEdge XE9680L server equipped with liquid cooling and eight Nvidia Blackwell Tensor Core GPUs, and the PowerScale F910, an all-flash storage array optimized for generative AI workloads. Dell also previewed Project Lightning, a parallel file system designed for PowerScale, and the PowerSwitch Z9864F-ON, a network switch with enhanced throughput speeds for generative AI tasks. New Broadcom-powered 400G PCIe Gen 5.0 Ethernet adapters for PowerEdge XE9680 servers were announced as well.

Dell also launched a new Dell Enterprise Hub on Hugging Face aimed at simplifying the selection of large language models (LLMs) and other tools for custom generative AI applications. Collaborations with Meta on Llama 3 models and with Microsoft on Azure AI Services further expand Dell’s offerings. Complementing these developments is a comprehensive set of new service offerings aimed at guiding organizations through their GenAI journeys.

Dell’s commitment to on-premises AI capabilities extends to both workstations and PCs. The company unveiled five new Copilot+ PCs, marking a significant investment in Qualcomm Snapdragon X Elite and X Plus processors. These devices promise AI acceleration of up to 45 TOPS from the integrated NPU and extended battery life from their power-efficient design, potentially appealing to customers frustrated with the limitations of current notebook options featuring Intel and AMD processors.

The company is also working to simplify the adaptation of open-source models to proprietary data, offering its Accelerator Services for RAG (Retrieval-Augmented Generation) on its Precision AI workstations. Given the growing enterprise interest in RAG, this is another promising development.
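The internals of those services are not described here, but the underlying RAG pattern is straightforward. The sketch below, with illustrative documents and the widely used all-MiniLM-L6-v2 embedder from the sentence-transformers library (both assumptions, not details from Dell), shows the core loop: embed proprietary documents, retrieve the passages closest to a query, and prepend them to the prompt handed to a locally hosted model.

```python
# Minimal RAG sketch: embed documents, retrieve the closest passages for a
# query, and build a context-augmented prompt for a locally hosted LLM.
# The documents and embedding model are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Warranty claims must be filed within 30 days of purchase.",
    "Remote employees connect through the corporate VPN gateway.",
    "Quarterly expense reports are due on the fifth business day.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    query_vec = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ query_vec
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

def build_prompt(query: str) -> str:
    """Augment the user question with retrieved context before generation."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How long do I have to file a warranty claim?"))
```

In production the in-memory array would typically be replaced by a vector database and the prompt passed to the locally deployed model, but the retrieval-then-generate flow is the same.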

By embracing hybrid AI and on-premises solutions, Dell is positioning itself as a leader in the emerging enterprise AI landscape. The company’s announcements at the event chart a clear course for future growth, and it is moving aggressively to provide its customers with a rich portfolio of flexible, end-to-end solutions that can be deployed at scale across even the most complex operating environments.