
How Nvidia and HPE plan to simplify deploying enterprise AI

by Zeus Kerravala

Generative artificial intelligence (genAI) continues to be the year's biggest enterprise technology story. For evidence, look no further than this week's HPE Discover 2024 event, where HPE and Nvidia unveiled their joint genAI offering. Nvidia AI Computing by HPE is a portfolio of co-developed AI solutions backed by the companies' joint go-to-market integration. The duo also announced another co-developed offering, HPE Private Cloud AI. The solutions combine HPE's AI-optimized servers with Nvidia's latest GPUs and superchips.

So, what does this all mean for the two companies and, more importantly, for the enterprises these offerings target?

Neil MacDonald, HPE's EVP and GM of Compute, High Performance Computing (HPC) and AI, said, "Nvidia AI Computing by HPE is all about co-developed solutions to simplify enterprise AI. Enterprises face challenges that span from people to technology to economics, and we are accelerating their journey to the productivity benefits promised by generative AI."

He said HPE delivers a turnkey private cloud, associated AI training and services, and the operational systems to run that infrastructure and enterprise-grade genAI. "It embraces virtual assistant use cases, process optimization use cases, and content creation use cases," he explained. "It's all about accelerating the enterprise journey to deploying generative AI, co-developed hand in glove with our great partners at Nvidia." The companies' joint go-to-market activities include channel training, certifications, and global systems integrators.


Putting the pieces together for enterprises

"When we embarked on this co-development with Nvidia, we didn't want to build a solution where the customer had to have a lot of services to put together different Lego parts and build a bunch of software and a bunch of hardware to get started on finding use cases for their generative AI solutions," explained Fidelma Russo, HPE's EVP and GM of Hybrid Cloud and chief technology officer. "We took the burden of co-developing with Nvidia, looking at the different pieces of hardware and software we needed to allow a customer to get started in just three clicks: to go from 'I want to have a use case' to ready to go on my generative AI use case in three clicks."

Russo added that co-development with Nvidia means enterprises can "have your IT operations, your developers, and your data scientists already up and running. And you get up to 90% productivity savings across the board when you deploy Private Cloud AI. The point is that a customer doesn't have to figure out the configurations; we have done all this work for you. And as Nvidia moves and changes with their configurations, we will also figure that out for you and make it very simple for a customer to get started and continue," she said.

The prominence of Nvidia in the AI revolution

Looking back over the past year, there’s been a lot of chatter about AI. This year, it seems like the focus has shifted more specifically to Nvidia-based private AI stacks. So, what defines the HPE flavor of AI, and what differentiates it from all the others?

"HPE has engineered a system with a private cloud control plane that brings up the compute, networking, and storage and installs it," said Russo. "It boots and brings up the runtime. It launches the models and brings up the library, and you're ready to run in three clicks. Your use cases and tools are all available; your pipelines are all available. And there are no professional services required. That is completely different from anything else on the market. You can have professional services for different things. You can add them to your business processes, but you don't need to."

"An enterprise that's embarking on a genAI journey needs a lot of different things," added MacDonald. "You need accelerators, you need compute, you need connectivity, you need storage, you need data fabric and data management across all of that. We believe it's critical to accelerate enterprise adoption of generative AI and get enterprises to the productivity gains it offers, which practically every enterprise is trying to figure out how to do right now. This isn't about some accelerators on a server. This is about an integrated offer that spans all of that and is tightly integrated."

Bob Pette, VP and GM of Enterprise Platforms for Nvidia, said the key to these joint solutions is the power of integration, and he praised the completeness of HPE's capabilities. "The level of integration is the thing that impresses me the most: the integration of NeMo Retriever with the file system to ensure those guardrails are met, data leakage doesn't happen, and regulatory standards are met. HPE's liquid cooling expertise is outstanding; we will soon need the full 100% liquid cooling that HPE is already providing. And the willingness to truly partner, not just take a component but brainstorm with a partner about how the software and hardware come together."

"If you're building a virtual assistant for a call center," Pette said, "you want that call center assistant to answer that question within a second, not 30 seconds. Thirty seconds probably results in an upset customer who hangs up. Latency matters. So, integrating all those components is happening at a level I've never seen before."

Differentiation comes from ease of use

The comments from Nvidia's Pette on "integration at a level I've never seen before" are what I found intriguing. In recent months, several infrastructure vendors have come to market with integrated Nvidia solutions. After Pette commented on HPE's tighter integration, I asked CEO Antonio Neri about his thoughts on long-term differentiation in a world where the solutions may look the same at a high level. He pointed to several specific areas where HPE is ahead of the pack. The first is ease of use: the solution is designed to be rolled out and turned up with just three clicks of a mouse, after which the system self-provisions the rest and is ready to go.

Also, HPE has a massive services organization that can help customers tweak the AI stack and optimize it for their requirements, plus a partner community that can take those services capabilities and help scale the solution globally. While services and the partner ecosystem aren't "tech" per se, they're critical to ensuring customers' success.

The AI era is here, and customers want to quickly gain value from their investments. Integrated solutions, like Nvidia AI Computing by HPE, enable customers to jump into AI with both feet. They also eliminate the typical months of tweaking and tuning required to get the solution running optimally. With AI, delays mean lost dollars, so an integrated stack should have strong appeal.

Author

  • Zeus Kerravala

    Zeus Kerravala is the founder and Principal Analyst with ZK Research. Kerravala provides a mix of tactical advice to help his clients in the current business climate and long-term strategic advice. Kerravala provides research and advice to the following constituents: end-user IT and network managers; vendors of IT hardware, software, and services; and the financial community looking to invest in the companies that he covers. Kerravala does research through a mix of end-user and channel interviews, surveys of IT buyers, investor interviews, and briefings from the IT vendor community. This gives Kerravala a 360-degree view of the technologies he covers from buyers of technology, investors, resellers, and manufacturers. Kerravala uses traditional online and email distribution channels for the research but heavily augments opinion and insight through social media, including LinkedIn, Facebook, Twitter, and blogs. Kerravala is also heavily quoted in the business and technology press and is a regular speaker at events such as Interop and Enterprise Connect. Prior to ZK Research, Zeus Kerravala spent 10 years as an analyst at Yankee Group. He joined Yankee Group in March of 2001 as a Director and left Yankee Group as a Senior Vice President and Distinguished Research Fellow, the firm's most senior research analyst. Before Yankee Group, Kerravala held a number of technical roles, including a senior technical position at Greenwich Technology Partners (GTP), where he worked with Johna Till Johnson, the founder of Nemertes Research. Prior to GTP, Kerravala had numerous internal IT positions, including VP of IT and Deputy CIO of Ferris, Baker Watts, and Senior Project Manager at Alex. Brown and Sons, Incorporated. Kerravala holds a Bachelor of Science in Physics and Mathematics from the University of Victoria in British Columbia, Canada. Kerravala currently resides in Acton, Massachusetts.
