Servers & Systems: The Right Compute

Enable your AI journey, from getting started to AI-at-scale, with solutions from HPE and NVIDIA®

Organizations of all sizes are in a race to identify how they can best leverage artificial intelligence (AI), but what does that really mean? What is the optimal technology foundation to handle the capacity and performance demands of AI modeling, training, and inferencing?

Successful AI/ML model implementation takes an integrated approach that encompasses best-in-class compute, software tools, and flexible delivery models. You need a technology infrastructure flexible enough to meet you wherever you are on your AI journey. Whether you're just getting started with on-premises deployments powered by clusters of HPE ProLiant servers, working with larger and more complex data sets that require HPE Cray supercomputing technology on premises, or prefer to have your infrastructure delivered as a service, Hewlett Packard Enterprise can provide the solutions, support, and consulting for a hybrid environment built to support the scale you need.

AI adoption in global businesses grew nearly 2.5 times between 2017 and 2022.[i] Nearly every technology-driven organization, large or small, is looking to take advantage of AI to drive better business results. And with the right solution stack, an organization of any size, regardless of its experience with artificial intelligence, can develop advanced AI solutions that leverage deep learning training and inference to dramatically improve mission-critical processes.

The same technology that enables the development of large language models (LLMs) and natural language processing (NLP) can be used by companies that are new to AI to create efficient and powerful solutions. You can build and train AI models faster and more cost-effectively with a purpose-built solution stack that combines HPE ProLiant Gen11 compute and NVIDIA® GPUs or run model training at scale with HPE supercomputing technology, all leveraging HPE GreenLake.
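
To make that concrete, here is a minimal sketch, assuming a Linux server with one or more NVIDIA GPUs and a standard PyTorch installation, of the kind of sanity check and toy training step you might run when standing up a new AI stack. Nothing in it is specific to HPE hardware; it simply confirms the GPUs are visible and usable end to end.

```python
# Minimal sanity check for a GPU-accelerated training stack (PyTorch assumed).
# Hardware-agnostic: it only verifies that CUDA devices are visible and usable.
import torch

def check_gpus() -> None:
    if not torch.cuda.is_available():
        raise RuntimeError("No CUDA-capable GPU detected; check drivers and CUDA toolkit.")
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 1e9:.1f} GB")

def quick_training_step() -> float:
    # One forward/backward pass of a toy model to confirm the stack works end to end.
    device = torch.device("cuda:0")
    model = torch.nn.Linear(1024, 10).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    x = torch.randn(256, 1024, device=device)
    y = torch.randint(0, 10, (256,), device=device)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    check_gpus()
    print(f"Toy training step completed, loss = {quick_training_step():.3f}")
```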

Getting started with AI

Artificial intelligence promises the innovations businesses most want to implement: automated processes, increased efficiency, and better customer experiences. But it can be difficult to get started. Companies looking to begin their AI journey need robust technology that doesn't break the bank, including accelerated GPUs, secure and stable servers, AI/ML software, and a private or hybrid cloud environment.

With these foundational elements in place, any business can start building the solutions that will help it achieve or sustain industry leadership. In the financial sector, for instance, banks use AI to detect fraud. Artificial intelligence also streamlines customer onboarding, loan processing, and account opening, especially during periods of peak demand. In the medical field, healthcare professionals use AI-enabled solutions to monitor the condition of patients from anywhere, and even perform diagnoses outside of the hospital. AI also improves the drug discovery pipeline by identifying how candidate compounds interact with cells and proposing potential treatments. Meanwhile, manufacturers leverage AI-based video analytics at the edge to improve quality, safety, efficiency, and productivity on the assembly line, leading to cost savings and improved business performance.
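
As a toy illustration of the fraud-detection use case, the sketch below fits an unsupervised anomaly detector to synthetic transaction features using scikit-learn. The feature choices and thresholds are purely hypothetical; a production system would use labeled historical data, richer features, and GPU-accelerated training at far larger scale.

```python
# Toy fraud-detection sketch: unsupervised anomaly detection on synthetic
# transaction features. Illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Synthetic features: [amount, seconds since last transaction, merchant risk score]
normal = rng.normal(loc=[50.0, 3600.0, 0.1], scale=[20.0, 1200.0, 0.05], size=(5000, 3))
fraud = rng.normal(loc=[900.0, 30.0, 0.8], scale=[300.0, 15.0, 0.1], size=(25, 3))
transactions = np.vstack([normal, fraud])

model = IsolationForest(contamination=0.005, random_state=0).fit(transactions)
flags = model.predict(transactions)  # -1 = flagged as anomalous, 1 = normal
print(f"Flagged {np.sum(flags == -1)} of {len(transactions)} transactions for review")
```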


Taking AI to the next level

Some of the most exciting innovations enabled by AI include LLMs and NLP. From search engines to advanced customer support chatbots, these technologies will define the future of business as we know it. "These models broaden AI's reach across industries and enterprises and are expected to enable a new wave of research, creativity and productivity, as they can help to generate complex solutions for the world's toughest problems," notes a recent NVIDIA blog.[ii]

To create, optimize, and run these models cost-effectively, enterprises that are more mature in their AI practice need technology that enables predictive maintenance, energy-efficient computing, and data center optimization.

More accurate cancer screening, faster drug discovery against new pandemics, self-driving cars, automated face and speech recognition, traffic management, and car safety are some of the advances that AI-at-scale makes possible. Whether the goal is a large language model that delivers accuracy across many languages or a platform that industrializes drug discovery, AI-at-scale solutions from HPE let ML practitioners focus on developing the AI/ML models that make critical breakthroughs. AI-at-scale capabilities also advance popular use cases in natural language processing, computer vision, and video and image processing that are growing across industries such as transportation, life sciences, defense, financial services, and manufacturing.

For both entry-level and mature enterprises, HPE has been a trusted partner for delivering the processing power, security, and innovation required to execute AI workloads. With a foundation of HPE ProLiant Gen11 compute and NVIDIA GPUs, delivered through HPE GreenLake, businesses can rapidly bring AI models into production, train machine learning algorithms, and make tangible operational improvements.

HPE AI Solutions at NVIDIA GTC

At this year's NVIDIA GTC, a global AI conference running online March 20-23, you can learn all about the launch of the new HPE ProLiant DL380a Gen11 server, which offers an intuitive cloud operating experience, trusted security by design, and optimized performance for accelerated workloads. For more detail on how this server is built to accelerate AI workloads, you can read our recently published blog.

"This server, with a new front-end GPU cage that hosts up to four double-wide GPUs in a 2U industry-standard server, is another great example of HPE's continued commitment to innovation," says Joseph George, HPE Vice President of Compute, Industry & Strategic Alliance Marketing. "It also addresses the growing need for graphics-intensive workload solutions." GPU-to-GPU communication, addressed by technologies like NVIDIA NVLink, increases throughput and enables shared GPU memory, which in turn improves performance and enables efficient resource utilization.
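
For readers curious what GPU-to-GPU communication looks like from software, here is a small sketch, assuming PyTorch on a multi-GPU server, that reports whether each pair of visible GPUs can access the other's memory directly (the peer-to-peer path that NVLink accelerates). Collective-communication libraries such as NCCL select these fast paths automatically; this check is purely informational.

```python
# Check direct GPU-to-GPU (peer-to-peer) access between device pairs.
# On NVLink-connected GPUs this path avoids staging transfers through host memory;
# distributed training libraries (e.g., NCCL) use it automatically when available.
import torch

def report_peer_access() -> None:
    n = torch.cuda.device_count()
    if n < 2:
        print("Fewer than two GPUs visible; nothing to check.")
        return
    for src in range(n):
        for dst in range(n):
            if src != dst:
                ok = torch.cuda.can_device_access_peer(src, dst)
                status = "available" if ok else "not available"
                print(f"GPU {src} -> GPU {dst}: peer access {status}")

if __name__ == "__main__":
    report_peer_access()
```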

And our solution doesn’t stop at the server. NVIDIA has created a next-generation GPU that perfectly aligns with the HPE AI solution stack. The NVIDIA L4 Tensor Core GPU, powered by the NVIDIA Ada Lovelace architecture, is a universal, energy-efficient accelerator designed to meet AI needs — and more — with video, visual computing, graphics, and virtualization for workloads including cloud gaming, simulation, and data science. It’s a true universal GPU in a low-profile form factor that delivers a cost-effective, energy-efficient solution for high throughput and low latency in every server, from the edge to the data center to the cloud.
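
The following sketch shows the kind of reduced-precision inference pattern that a throughput- and latency-oriented accelerator such as the L4 is positioned for, assuming PyTorch and any CUDA-capable GPU. The model and batch shapes are placeholders, not a benchmark.

```python
# Reduced-precision inference sketch. Model and input shapes are placeholders.
import torch

device = torch.device("cuda:0")
model = torch.nn.Sequential(
    torch.nn.Linear(2048, 2048),
    torch.nn.ReLU(),
    torch.nn.Linear(2048, 1000),
).to(device).eval()

batch = torch.randn(64, 2048, device=device)

with torch.inference_mode(), torch.autocast(device_type="cuda", dtype=torch.float16):
    logits = model(batch)      # matmuls run in FP16 on Tensor Cores where supported
    top1 = logits.argmax(dim=-1)

print(f"Predicted classes for {top1.shape[0]} inputs")
```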

HPE understands AI-at-scale like no other technology company. We're invested in the frontiers of AI/ML model data management, training, and inference, and we're equally committed to helping whole industries apply AI ethically. We've built the machines that handle the most complex AI/ML/DL problems in the world, and we make them accessible through the HPE Machine Learning Development Environment, machine learning software that enables users to rapidly develop, iterate, and scale high-quality models from proof of concept to production. The HPE Machine Learning Development Environment runs on anything from a laptop to systems with thousands of GPUs, and it seamlessly scales model training across multiple systems and GPUs without requiring you to rewrite infrastructure code.
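
The HPE Machine Learning Development Environment builds on the open-source Determined AI training platform, where the degree of distribution is expressed as configuration rather than code. The sketch below mirrors that idea as a Python dictionary; the field names follow the open-source project's experiment config as we understand it and may differ by version, and the entrypoint name is hypothetical.

```python
# Sketch of an experiment configuration in the style of the open-source
# Determined AI platform that HPE MLDE builds on. Field names may differ by
# version; "model_def:MyTrial" is a hypothetical entrypoint in the model code.
experiment_config = {
    "name": "image-classifier-demo",
    "entrypoint": "model_def:MyTrial",
    "hyperparameters": {
        "global_batch_size": 512,
        "learning_rate": 0.1,
    },
    "resources": {
        # The key scaling knob: the same training code runs on 1 GPU or many
        # GPUs across nodes; the platform schedules the workers and handles
        # the distributed communication.
        "slots_per_trial": 8,
    },
    # Other required sections (e.g., the hyperparameter searcher) are omitted
    # from this sketch for brevity.
}
# In practice this is written as YAML and submitted through the platform,
# e.g. `det experiment create config.yaml .` in open-source Determined.
```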

In addition, with the recent acquisition of Pachyderm, HPE expands its AI-at-scale portfolio with reproducible AI capabilities. Reproducibility is critical to AI-at-scale initiatives: running the same pipeline on the same dataset should produce the same results every time, which increases transparency, trustworthiness, and accuracy in predictions while saving time and resources. With Pachyderm technology, HPE delivers an end-to-end machine learning software platform for mission-critical AI-at-scale solutions. By integrating Pachyderm, HPE helps customers deliver more accurate and performant AI-at-scale applications with the benefits of data lineage, data versioning, and efficient incremental data processing. Learn more about how HPE is expanding AI-at-scale with Pachyderm.
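
To illustrate the pipeline idea, here is a sketch of a Pachyderm-style pipeline specification, written as a Python dictionary for readability (the real spec is JSON or YAML submitted with the pachctl CLI). The repository, container image, and script names are hypothetical; the point is that inputs are versioned and only changed datums need to be reprocessed.

```python
# Sketch of a Pachyderm-style pipeline spec (normally JSON/YAML submitted with
# `pachctl create pipeline -f spec.json`). Names below are hypothetical.
pipeline_spec = {
    "pipeline": {"name": "featurize"},
    "input": {
        "pfs": {
            "repo": "raw-transactions",  # versioned input data repository
            "glob": "/*",                # each top-level file/dir is an independent datum
        }
    },
    "transform": {
        "image": "example.registry/featurize:1.0",   # hypothetical container image
        "cmd": ["python3", "/code/featurize.py",
                "/pfs/raw-transactions", "/pfs/out"],  # inputs and outputs mounted under /pfs
    },
}
```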

To get the most out of your servers and GPUs, we recommend running AI workloads on HPE GreenLake, the edge-to-cloud platform that powers data-first modernization. With HPE GreenLake for AI, ML, and analytics, delivered as a service, you can control and harness data from the edge in a secure private or hybrid cloud environment to achieve peak efficiency, cut costs, and protect your valuable data. Large-scale model training is well suited to this cloud model because prototypes can be tested quickly and deployments scaled up without standing up infrastructure first, and accelerators can be selected to best meet the needs of different models.

Discover the future of AI

Whether your business is dipping its toes into artificial intelligence and machine learning, or you're looking to deploy large natural language models at scale on cloud supercomputing delivered as infrastructure and platform as a service, you can accelerate your journey with HPE.

Join HPE, along with other AI developers and innovators, at NVIDIA GTC from March 20-23, 2023. Register free for GTC today!

Check out all of HPE's sessions at GTC.

For more information about HPE and NVIDIA solutions for AI and our collaboration, visit HPE and NVIDIA® accelerate your AI solutions from Edge to Cloud.

Meet the authors

Piyush Shukla is the Director of Artificial Intelligence and Machine Learning Product Marketing at HPE, responsible for driving critical and challenging AI and ML initiatives that support HPE's industry leadership in this space. During his 20-year career in high tech, he has been a frequent expert speaker at industry events, presenting on topics such as analytics, hybrid cloud, container-based solutions, and VDI. Recognized as a visionary marketing leader, he has a solid reputation for delivering innovative products to customers. Piyush has previously held senior roles at both Dell and GE. He earned a Master's degree in Marketing and Business Administration from Youngstown State University.

Sonja Hickey is a product marketing manager at Hewlett Packard Enterprise, focusing on HPE ProLiant servers. Working in the IT industry since 1997, Sonja has extensive marketing and product management experience with enterprise software and technology companies, including HPE, Dell, Sun Microsystems, and Zebra Technologies. In 2011, Sonja co-authored the book IT Operations Management, which discusses best practices for IT infrastructure management, especially as they relate to cloud and virtualized environments. Sonja's education includes an MBA from the University of Chicago's Booth School of Business as well as an MS and BS in Engineering from the University of Illinois at Urbana-Champaign.

[i] Tristan Taylor, "20 Artificial Intelligence Statistics that Marketers Need to Know in 2023," March 2023

[ii] Angie Lee, "What Are Large Language Models Used For?," NVIDIA blog, January 2023


Compute Experts
Hewlett Packard Enterprise

twitter.com/hpe_compute
linkedin.com/showcase/hpe-servers-and-systems/
hpe.com/servers

About the Author

ComputeExperts

Our team of Hewlett Packard Enterprise server experts helps you to dive deep into relevant infrastructure topics.