
Disrupting Forward: How Enterprises will Scale AI in 2024

Patrick Lastennet, Director, Business Development Enterprise

From chatbots to autonomous systems, fraud detection programs to supply chain route optimisation, artificial intelligence (AI) is powering real-world applications and fuelling enterprise revenue.

An MIT Technology Review Insights poll of 1,000 executives found that nearly all companies surveyed consider generative AI a significant change in the technology landscape, with only 4% saying it will not affect their company.1

While some companies dabbled in AI in 2023, to truly scale AI deployments in 2024, enterprises will have to accommodate the technology's unique needs. What's needed to scale AI and do it right?

To scale, enterprise leaders will have to consider:

  1. AI’s data processing requirements
  2. Ongoing optimisation and control efforts
  3. How to tap into partnerships for future-proofed strategies

In this article, we'll take a step back to look at what's led us to this point and how AI has become more accessible than ever. You'll also learn about the unique infrastructure, interconnection, and ecosystem considerations enterprise IT leaders will need to address as AI workloads scale in 2024.

Why now? The democratisation of AI

It's undeniable that AI is having its moment in the spotlight, but what makes it a lasting, disruptive technology rather than a flash in the pan? Let’s look back at the evolution of AI technology and the infrastructure that supports it.

High-performance computing (HPC) hardware is the computing infrastructure necessary to run AI programs. HPC has enabled simulation and modelling environments with business applications such as digital twins, which assess risk and support decision making.

Now, AI applications are revving up processing demands on HPC hardware. Complex AI algorithms like large language models (LLMs) and recommender engines require enormous compute power and density. These highly dense computing systems are now delivered in a fraction of the footprint of legacy hardware.

So why is AI experiencing this explosion now? First, advancements in computing hardware enable the highly dense environments that accelerate advanced AI workflows. Building on these advancements, the accessibility of AI development frameworks, cloud-based services, and pre-trained models has lowered the barrier to entry for enterprises, making pragmatic AI development accessible to a broader audience.

Because of these advancements that support AI at speed and scale, enterprises are assessing AI strategies to enable new products, services, efficiencies, and cost savings.

To scale AI in 2024, enterprise leaders will accelerate adoption of hybrid multi-cloud infrastructure

According to 451 Research, 53% of enterprises expect a high impact from generative AI in the next three years, and 49% indicate a high intent to invest in AI in the same period.2 This suggests that enterprises will evolve their adoption of AI into a more prominent piece of their core business operations or offerings.

Some enterprises may be using cloud-as-a-service or AI-as-a-service options to create minimum viable products (MVPs) for their AI projects. Enterprises that have launched high-performance computing projects are likely to already use a data centre model, either as colocation through a global data centre platform like PlatformDIGITAL® or self-managed.

Often it can be advantageous to periodically burst into the cloud to access the compute necessary to train models, which enables enterprises to maintain a more cost-efficient operation while taking advantage of the scalability of the cloud.
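The burst approach can be sketched as a simple placement policy: fill dedicated colocation capacity first, then overflow the remainder into cloud capacity. This is a minimal illustration only; the function name, rates, and thresholds are assumptions for the example, not a real scheduler or API.

```python
# Illustrative sketch of a "burst into the cloud" placement policy.
# All names and rates below are hypothetical assumptions for this example.

def plan_placement(job_gpu_hours, free_colo_gpu_hours,
                   colo_rate=1.0, cloud_rate=2.5):
    """Fill colocation capacity first, then burst the remainder
    into cloud capacity. Returns (colo_hours, cloud_hours, cost)."""
    colo_hours = min(job_gpu_hours, free_colo_gpu_hours)
    cloud_hours = job_gpu_hours - colo_hours
    cost = colo_hours * colo_rate + cloud_hours * cloud_rate
    return colo_hours, cloud_hours, cost

# A 1,000 GPU-hour training run against 600 free colocation GPU-hours
# bursts the remaining 400 GPU-hours into the cloud:
print(plan_placement(1000, 600))  # → (600, 400, 1600.0)
```

Steady-state work stays on the cheaper dedicated footprint, and only the training peaks pay the cloud premium, which is what makes the hybrid model cost-efficient.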

Enterprises turn to Digital Realty as a gateway to access both public and private clouds. ServiceFabric™ Connect enables enterprises to seamlessly connect to their Hybrid IT ecosystem. As an open platform, enterprises can quickly orchestrate connectivity between Digital Realty colocation facilities and third-party data centres, clouds, and service providers.

These different modular blocks create a hybrid multi-cloud infrastructure, including public and private cloud, which enables agility, scale, and the ability to work with best-in-breed partners.

As enterprise leaders take on the adoption of AI and the data-intensive training and inference phases, they will be evaluating where to deploy AI models to balance their Hybrid IT portfolio.

When scaling AI deployments, enterprise IT leaders should create a strategy for how their overall hybrid multi-cloud infrastructure will accommodate regulatory and cybersecurity considerations as well, including:

  1. Data privacy and security: Companies concerned about ransomware and other security challenges may move to private cloud environments, where control over the infrastructure can reduce risk by enabling greater oversight.
  2. Data sovereignty: Government regulations on where and how data can be processed are leading enterprises to demand more control over the geography and location of data processing. Processing on-premises, close to the source of the data, gives companies more flexibility in meeting the geographical requirements for storing and processing data.
  3. Proprietary information: AI models and related workflows are often proprietary assets, and private cloud and secure connectivity can help mitigate the risk from malicious actors.
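The three considerations above can be expressed as a simple routing rule that maps a workload's attributes to a deployment target. This is a hedged sketch under assumed attribute and target names; real placement decisions involve far more inputs.

```python
# Illustrative sketch: route a workload to a deployment target based on
# sovereignty, sensitivity, and proprietary-model concerns. The attribute
# keys and target labels are hypothetical assumptions for this example.

def choose_target(workload: dict) -> str:
    # Data sovereignty: keep processing in-region, close to the data source.
    if workload.get("sovereignty_region"):
        return f"private-colo:{workload['sovereignty_region']}"
    # Proprietary models and sensitive data favour private infrastructure.
    if workload.get("proprietary_model") or workload.get("sensitive_data"):
        return "private-cloud"
    # Everything else can take advantage of public cloud scalability.
    return "public-cloud"

print(choose_target({"sovereignty_region": "eu-de"}))  # private-colo:eu-de
print(choose_target({"proprietary_model": True}))      # private-cloud
print(choose_target({}))                               # public-cloud
```

Encoding the policy explicitly, rather than deciding placement ad hoc per project, is what lets a hybrid multi-cloud portfolio stay consistent as AI workloads multiply.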

The future of scaled AI workflows opens up an ecosystem of partnerships and solutions

According to the MIT Technology Review Insights poll referenced earlier, most executives polled (75%) say they plan to partner with providers to deploy generative AI.3 While executives may feel pressure from their boards to deploy AI strategies, a tempered approach that leans on partners with expertise may be the way forward to build a sound AI strategy.

The sheer number of factors that shape AI infrastructure can overwhelm companies, and they may understandably worry about potential missteps. Managed service solutions help ease these concerns and are a good option in such cases.

Andreas Thomasch, Director HPC & AI DACH at Lenovo, offers this piece of advice: “Don’t be afraid of it, but rather pick the right partners who have the experience from the past to help you accelerate by scaling out of what you have today, or to get started at scale if you’re not there today.”

Collaboration partners with deep expertise in running large data centres and prior HPC knowledge can be valuable in helping configure the right AI infrastructure.

“These are the kind of partners who know how to run a large complex HPC or AI system, who know how to deploy AI, and who work closely with the ones who own the infrastructure from a data centre perspective, like Digital Realty,” Thomasch says.

Don’t go it alone. Working with partners that understand scaled deployments will be paramount to AI strategy success, and as AI use cases grow across industries, the expanding ecosystem of providers will be a layer of added strength for the enterprises that lead the way.

For more on AI and the age of AI-ready infrastructure, watch our webinar: The New AI Footprint, Colocation in the Age of Massive Parallel Processing.

To learn how to leverage our expertise to lay a strong foundation for your AI capabilities, contact our experts today.

1 MIT Technology Review Insights, Generative AI deployment: Strategy for Smooth Scaling, October 2023.
2 451 Research, Voice of the Enterprise: Digital Pulse, Emerging Technologies, 2023.
3 MIT Technology Review Insights, Generative AI deployment: Strategy for Smooth Scaling, October 2023.
