Get started building and deploying an agile, self-service private cloud wherever you need it with the HPE GreenLake Private Cloud Business Edition APIs. Use the HPE GreenLake API to speed up Day 0 operations with bulk user onboarding. If you’re using HPE GreenLake for Compute Ops Management, learn how webhooks integration can turbocharge your operational processes.
If you didn’t have a chance to attend HPE Discover 2024, you can read about it here and even learn how Determined AI and Pachyderm helped power the AI avatar of HPE CEO Antonio Neri. And if you are interested in hearing more from Determined, check out the post on activation memory and how to manage your GPU memory budget.
Python users will be happy to know that the HPE iLOrest tool has been repackaged and is now available on PyPI. And if you’re looking to run Python computations faster or handle larger data sets, join our Chapel team for a pair-programming session. Enjoy!
Use these APIs to get started building and deploying an agile, self-service private cloud wherever you need it, simplifying VM management across on-prem and public clouds.
Expanded platform capabilities of HPE GreenLake cloud
Learn about the expanded capabilities of HPE GreenLake cloud that were recently announced at HPE Discover, including enhancements for MSPs, addressing highly regulated environments, new APIs, GenAI platform search, and more!
Automating IT Operations with Compute Ops Management Webhooks
There’s a secret weapon that can turbocharge your operational processes when integrated with HPE GreenLake for Compute Ops Management. Learn more about it by reading this article on webhooks.
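To give a feel for the integration pattern, here’s a minimal sketch of a webhook receiver written in Python with Flask. The route, event type, and payload fields are hypothetical placeholders, not the actual Compute Ops Management schema; see the article for the real event format and verification handshake.

```python
# Minimal webhook receiver sketch (Flask). The route, event types, and
# payload fields below are illustrative placeholders, NOT the actual
# Compute Ops Management schema.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhooks/compute-ops", methods=["POST"])
def handle_event():
    event = request.get_json(force=True, silent=True) or {}
    # Dispatch on a hypothetical event type field.
    event_type = event.get("type", "unknown")
    if event_type == "server.health.changed":
        print(f"Health change for {event.get('resourceUri', '?')}")
    else:
        print(f"Unhandled event type: {event_type}")
    # Acknowledge quickly; in production, hand heavy work off to a queue.
    return jsonify({"status": "received"}), 200

if __name__ == "__main__":
    app.run(port=8080)
```

The key design point is that the receiver returns a 2xx response immediately and processes the event asynchronously, so the sender never times out waiting on your automation.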
HPE Discover announcements accelerate AI adoption and offer optionality
At the event, HPE unveiled purpose-built AI solutions co-developed with NVIDIA and delivered through HPE GreenLake, along with other significant announcements.
Looking to use the HPE iLOrest tool on Python 3-based systems? There’s good news! Both source and binary distributions are now available on PyPI. This post covers installation procedures.
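As a rough sketch, installing from PyPI should come down to a single pip command; the package and CLI entry-point names below are assumptions based on the tool’s name, so check the post for the authoritative instructions.

```shell
# Install the iLOrest tool from PyPI into a fresh Python 3 virtual environment.
# Package and entry-point names are assumed from the tool's name.
python3 -m venv ilorest-env
source ilorest-env/bin/activate
pip install ilorest
ilorest --help   # verify the CLI installed correctly
```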
Doing science in Python? Wishing for more speed or scalability?
Are you currently using Python for your computations? Looking to make your computations run faster or handle larger data sets? Join our free pair-programming session to collaborate and see how Chapel might be just what you’re looking for.
The Sleuth and the Storyteller: The Dynamic Duo behind RAG
Large Language Models (LLMs) are amazing at coming up with creative ideas but can struggle to give you an accurate weather report. Retrieval Augmented Generation (RAG) works with LLMs to improve accuracy by acting as a sleuth to find the most relevant data. Learn more about the LLM/RAG symbiotic relationship in this Determined blog post.
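To make that division of labor concrete, here’s a minimal, self-contained RAG sketch in plain Python: a toy retriever scores documents by word overlap (a stand-in for a real embedding index), and the top hit is stitched into the prompt that would be sent to an LLM. The `generate` function is a hypothetical placeholder for whatever model API you use.

```python
# Toy RAG pipeline: retrieve the most relevant document, then ground the
# LLM prompt in it. The retriever is a bag-of-words overlap score -- a
# stand-in for a real vector store -- and generate() is a placeholder
# for an actual LLM call.

DOCS = [
    "The forecast for Houston on Friday is 95F and sunny.",
    "Determined AI streamlines distributed deep learning training.",
    "Retrieval Augmented Generation grounds LLM answers in external data.",
]

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document sharing the most words with the query (the 'sleuth')."""
    q_words = set(query.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(query: str, context: str) -> str:
    """Stitch retrieved context into the prompt so the LLM (the 'storyteller')
    answers from supplied facts rather than from memory alone."""
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

def generate(prompt: str) -> str:
    # Placeholder: swap in a real LLM client call here.
    return f"[LLM would answer based on: {prompt!r}]"

query = "What is the weather in Houston on Friday?"
print(generate(build_prompt(query, retrieve(query, DOCS))))
```

In practice the overlap retriever would be replaced by an embedding search over a vector database, but the prompt-grounding step stays the same: the sleuth finds the evidence, the storyteller writes the answer.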
Enhancing NLP with Retrieval-Augmented Generation: A Practical Demonstration
July 17, 2024, 5pm CEST / 8am PT
This presentation delves into the fundamentals and applications of RAG, providing a comprehensive overview of how it integrates retrieval mechanisms with generative capabilities to produce more accurate and contextually aware responses. Following the theoretical overview, we will transition into a hands-on live demonstration that walks through augmenting a generative model with external knowledge sources, showing how RAG improves the relevance and quality of generated outputs in real-time applications.
Writing programs on modern computers requires parallelism to achieve maximum performance. This is complicated by GPUs, which provide great parallel performance at the price of more complex programming. Chapel, an open-source parallel programming language, supports portable, performant software on CPUs and GPUs using a single unified set of language features. Learn more in this meetup.
Hewlett Packard Enterprise leads with enterprise-grade and open source solutions, backed by the expertise to help developers and customers innovate and solve problems.
We’re all developing something. Come join us in making the future.