New Cloud Service Provides LLMs for Large-Scale AI Applications
Empowering enterprises with on-demand access to LLMs for large-scale AI solutions
Hewlett Packard Enterprise (HPE) has entered the AI cloud market with the expansion of its HPE GreenLake portfolio to include large language models (LLMs) for enterprises of all sizes. The move allows companies ranging from startups to the Fortune 500 to access on-demand, large-scale AI capabilities through a multi-tenant supercomputing cloud service.
The newly introduced HPE GreenLake for LLMs empowers enterprises to privately train, tune, and deploy large-scale AI utilizing a sustainable supercomputing platform that combines HPE’s AI software and supercomputers. In collaboration with German AI startup Aleph Alpha, the company says it will deliver a field-proven and ready-to-use LLM to cater to various use cases requiring text and image processing and analysis.
HPE GreenLake for LLMs is the first in a series of AI applications the company plans to introduce. It notes that future large-scale AI applications will cover climate modeling, healthcare and life sciences, financial services, manufacturing, and transportation.
“We have reached a generational market shift in AI that will be as transformational as the web, mobile, and cloud,” said Antonio Neri, President and CEO of HPE. “HPE is making AI, once the domain of well-funded government labs and the global cloud giants, accessible to all by delivering a range of AI applications, starting with large language models, that run on HPE’s proven, sustainable supercomputers. Now, organizations can embrace AI to drive innovation, disrupt markets, and achieve breakthroughs with an on-demand cloud service that trains, tunes, and deploys models, at scale and responsibly.”
HPE GreenLake for LLMs operates on an AI-native architecture designed to run a single large-scale AI training or simulation workload at full computing capacity. Unlike general-purpose cloud services, the offering can run AI and high-performance computing (HPC) tasks across hundreds or thousands of CPUs or GPUs simultaneously, leading to faster problem-solving and more accurate models.
The solution offers access to Luminous, a pre-trained large language model developed by Aleph Alpha. Luminous supports multiple languages, including English, Spanish, Italian, German, and French, enabling customers to harness their data, train and fine-tune customized models, and gain real-time insights based on proprietary knowledge.
Through this service, enterprises gain the ability to build and integrate various large-scale AI applications into their workflows, unlocking value in business and research-driven initiatives.
“By using HPE’s supercomputers and AI software, we efficiently and quickly trained Luminous, a large language model for critical businesses such as banks, hospitals, and law firms to use as a digital assistant to speed up decision-making and save time and resources,” said Jonas Andrulis, Founder and CEO of Aleph Alpha. “We are proud to be a launch partner on HPE GreenLake for Large Language Models, and we look forward to expanding our collaboration with HPE to extend Luminous to the cloud and offer it as a service to our end customers to fuel new applications for business and research initiatives.”
HPE GreenLake for LLMs eliminates the need for customers to purchase and manage their own supercomputers, which are typically expensive and complex to operate and require specialized expertise. Leveraging HPE Cray XD supercomputers and the HPE Cray Programming Environment, the offering is optimized for HPC and large-scale AI applications, providing developers with a complete set of tools for developing, debugging, and tuning code.
Additionally, the supercomputing platform supports HPE’s AI/ML software. This includes the HPE Machine Learning Development Environment, for rapid training of large-scale models, and the HPE Machine Learning Data Management Software, which integrates, tracks, and audits data with reproducible AI capabilities to generate reliable and accurate models.
According to the company, HPE GreenLake for LLMs will initially run on supercomputers hosted at a colocation facility operated by QScale in Quebec, which draws 99.5% of its power from renewable sources.
HPE has started accepting orders for HPE GreenLake for LLMs and expects availability by the end of 2023, beginning in North America, with Europe to follow in early 2024.
About Utilities Analytics Institute (UAI)
UAI is a utility-led membership organization that supports the industry by advancing the analytics profession, serving utility organizations of all types and maturity levels as well as analytics professionals at every phase of their careers. Positioned at the cusp of digital transformation, UAI aims to lead the industry into the future.