AI has the power to enhance our lives in many ways. It also raises questions and concerns as people begin to incorporate it into their business models. In this edition, we look at how AI-infused, full-stack observability can help DevOps and IT admins address issues before they impact users. We also cover how Private AI can alleviate concerns around data control, regulatory compliance, and model performance in one article, and show you how it can be used to tackle specific use-case challenges, like fraud detection, in an upcoming webinar. We also include posts on LLMs, the gdb4hpc debugger, and Chapel. There’s a whole lot here to enjoy, so read on!
Featured
Transform IT Ops with AI-Infused, Full-Stack Observability
OpsRamp AI-infused, full-stack observability uses AI to enhance traditional observability practices, providing deeper insight into and control over your IT landscape. Learn about the role of AI in observability and how it can help you address potential issues before they impact users.
Why Private AI?
Everyone is getting into AI these days. But how do you control access to your data, manage model performance, and keep costs in check? This post explores how running models in a private, protected environment can provide privacy and security guarantees, ensure compliance, improve model performance and quality, and address concerns around cost.
Unlocking Private AI Power: Insurance Fraud Detection and Beyond
February 19, 2025 5pm CET / 8am PT
Explore the potential of HPE Private Cloud AI in tackling real-world challenges as we demo a specific use case and show you how to train and deploy AI models to detect fraud in auto insurance claims. In this session, you’ll learn how HPE Private Cloud AI can empower developers to build, deploy, and manage AI applications with ease, ensuring security, compliance, and scalability.
Introduction to HPE GreenLake cloud webhooks
February 26, 2025 5pm CET / 8am PT
Discover new webhooks functionality within HPE GreenLake cloud that enables platform services and applications to seamlessly publish events to external customer HTTP endpoints. In our presentation, we’ll cover the architecture, features, and benefits of this new functionality and its potential to enhance platform extensibility and integration.
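Because webhook events arrive at an endpoint you expose publicly, receivers typically verify that each delivery really came from the publishing platform. As an illustrative sketch (not specific to HPE GreenLake — the header name and signing scheme here are hypothetical), a receiver might validate an HMAC-SHA256 signature computed over the request body with a shared secret:

```python
import hashlib
import hmac

def verify_webhook(secret: bytes, body: bytes, signature: str) -> bool:
    """Return True if `signature` is the hex HMAC-SHA256 of `body` under `secret`.

    `signature` would come from a delivery header (e.g. a hypothetical
    "X-Signature"); compare_digest avoids timing side channels.
    """
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

A receiver would reject any delivery whose signature check fails before acting on the event payload.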
Community
How to Pick a Large Language Model for Private AI
Explore key considerations for selecting an LLM, including understanding different classes of models, evaluating performance, and planning for hardware requirements in this blog post.
LLM Agentic Tool Mesh: Harnessing agent services and multi-agent AI for next-level Gen AI
Continuing our series on LLM Agentic Tool Mesh, this post delves into another core feature – the Agent Service. Learn what agents are and how, through high-level abstractions, this service can enable developers and users to unlock the transformative potential of Gen AI.
Using the gdb4hpc debugger at scale with a CUDA/MPI HPC application
A typical HPC application runs tens of thousands of processes on thousands of systems at once. Classic debuggers like gdb weren’t designed to handle that. Learn how gdb4hpc, part of the HPE Cray Programming Environment, works to address this issue.
SC24 from the Chapel Language Perspective
Held annually for more than three decades, Supercomputing is a prestigious event for the HPC community. At SC24, for the first time in Chapel’s history, a Chapel/Arkouda demo was featured. Learn more about the event in this post.
7 Questions for David Bader: Graph Analytics at Scale with Arkouda and Chapel
A Distinguished Professor of Data Science reveals how tools like Arkouda and Arachne are accelerating data science to address large-scale data challenges like cybersecurity and bioinformatics in this next installment.
Sign Up and Skill Up
Register for our upcoming technology talks or check out our on-demand training
HPE Developer YouTube channel
Explore the collection of HPE Developer Community videos on our YouTube playlist. Check out our newest here!
Watch now
Be an HPE Developer blogger!
Contribute to our blog. Learn how it’s done here.
Share your knowledge
Be an open source contributor!
Start contributing to open source projects.
Get started
HPE Developer newsletter archive
Catch up on what you might have missed.
Browse the archive
Events
AI Inferencing at the Edge: Use cases from Earth to Space
February 12, 2025 5pm CET / 8am PT
High-stakes operating environments pose additional challenges when it comes to implementing AI technologies. Whether your use case is focused on speeding up time-to-target while maintaining data security or on handling rugged and remote computing environments outside your data center, there are important considerations to take into account. Join us in our next session as we explore public sector and space-deployed AI use cases, spanning the use of RAG (retrieval-augmented generation) for knowledge retrieval to the use of computer vision.
Engage
Hewlett Packard Enterprise leads with enterprise and open source solutions, bringing the expertise to help developers and customers innovate and solve problems.
We’re all developing something.
Come join us in making the future.