We start off the new year with a bang, offering you a wide variety of topics to delve into. Interested in learning how to reduce the complexity involved with observing cloud-native applications? We’ve got you covered. Hungering for more on how to accelerate your use of generative AI? We have webinars and blog posts to set you on the right track.
If you’ve lost a server in a network move, we have tips and tricks to help you find it. And for those who are eager to continue exploring Navier-Stokes in Chapel, the next tutorial is here! Explore this edition of the HPE Developer newsletter to gain a better understanding of all these topics and more!
Featured
Learn more about Morpheus Terraform Profiles
The Morpheus platform supports the execution of HashiCorp Terraform Infrastructure as Code (IaC) to provision and manage cloud resources. Learn how Terraform profiles enhance the platform’s Terraform functionality by enabling administrators to assign cloud-specific data, such as account numbers, credentials, and other Terraform variables, to a Morpheus cloud.
HPE OpsRamp Continues to Push Autonomous IT Operations Forward
Discover new, enhanced capabilities found in OpsRamp to empower and support ITOps and DevOps teams in managing complex, hybrid IT environments. From observability for disconnected and sovereign clouds to network observability with full-stack insights, new features continue to bring us closer to the vision of autonomous IT operations.
From log files to AI insights: The 60-year evolution of observability and AIOps
January 15, 2025 5pm CET / 8am PT
The ability to monitor, understand, and optimize your IT landscape has evolved dramatically. Join us on a 60-year journey through the evolution of observability and the rise of Artificial Intelligence for IT Operations (AIOps). We’ll explore the origins of systems monitoring, the emergence of distributed tracing, the use of real-time analytics, and the new role of AI in predicting and resolving issues before they impact users.
IT New Year’s resolution: Build an ethical and trustworthy AI system
January 22, 2025 5pm CET / 8am PT
The “era of AI” provides new technological capabilities accompanied by new responsibilities. Learn how we developed AI principles to guide our organization’s development decisions and how to put these principles into action throughout your AI system’s lifespan and follow-on projects.
Community
The Schrödinger’s Cat Challenge of Observing Cloud-Native Applications
Observing cloud-native applications, with their complex, ephemeral, and distributed nature, poses some unique issues. Find out how OpsRamp addresses these critical challenges and why OpsRamp’s Kubernetes 2.0 integration is a game-changer.
LLM Agentic Tool Mesh: Exploring Chat Service and Factory Design Pattern
Dive deeper into one of LLM Agentic Tool Mesh’s core features, the Chat Service, which provides a robust foundation for creating chat applications using Large Language Models (LLMs).
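As a quick illustration of the factory idea behind a chat service (a hedged sketch only; the class and method names below are hypothetical, not the LLM Agentic Tool Mesh API), a factory lets callers ask for “a chat model” through configuration while the concrete LLM backends stay hidden behind a common interface:

```python
from abc import ABC, abstractmethod


class ChatModel(ABC):
    """Common interface every chat backend implements."""

    @abstractmethod
    def invoke(self, prompt: str) -> str:
        ...


class EchoModel(ChatModel):
    """Stand-in backend that simply echoes the prompt."""

    def invoke(self, prompt: str) -> str:
        return f"echo: {prompt}"


class ChatModelFactory:
    """Factory that maps a config value to a concrete backend."""

    _registry = {"echo": EchoModel}

    @classmethod
    def create(cls, config: dict) -> ChatModel:
        model_type = config.get("type", "echo")
        try:
            return cls._registry[model_type]()
        except KeyError:
            raise ValueError(f"Unknown chat model type: {model_type}")


# Callers depend only on the ChatModel interface, not on a specific LLM library.
model = ChatModelFactory.create({"type": "echo"})
print(model.invoke("Hello, world"))
```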
VLAN versus VXLAN
While VLANs and VXLANs appear very similar at a high level, understanding the nuances of these two popular technologies is essential to structuring your network so you can reap the most benefit from their capabilities.
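The post digs into the details, but one difference worth keeping in mind is the ID space: an 802.1Q VLAN tag carries a 12-bit VLAN ID (roughly 4,094 usable segments), while a VXLAN header carries a 24-bit VNI (roughly 16 million). Here is a minimal sketch that packs both headers by hand, purely for illustration:

```python
import struct

# 802.1Q tag: 16-bit TPID (0x8100) + 3-bit PCP + 1-bit DEI + 12-bit VLAN ID.
def dot1q_tag(vlan_id: int, pcp: int = 0, dei: int = 0) -> bytes:
    assert 0 < vlan_id < 4095, "VLAN IDs are limited to 12 bits (1-4094)"
    tci = (pcp << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", 0x8100, tci)

# VXLAN header (RFC 7348): 8 flag bits, 24 reserved bits, 24-bit VNI, 8 reserved bits.
def vxlan_header(vni: int) -> bytes:
    assert 0 <= vni < 2**24, "VNIs are 24 bits (~16 million segments)"
    return struct.pack("!II", 0x08 << 24, vni << 8)

print(dot1q_tag(100).hex())      # '81000064'
print(vxlan_header(5000).hex())  # '0800000000138800'
```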
Use Redfish and IPv6 to find lost servers
If you’ve ever happened to “lose” a server during a move from one network to another, you know how painful that can be. In this post, we share tips on how to use Redfish and IPv6 to overcome this issue.
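As a hedged, illustrative sketch (not necessarily the exact approach the post takes), a “lost” BMC can often be reached by probing IPv6 link-local addresses on the local segment at the standard Redfish service root, /redfish/v1/. The address and interface below are placeholders, and zone-ID handling in URLs can vary across urllib3 versions:

```python
import requests
import urllib3

# BMCs on a fresh network usually present self-signed certificates, so skip
# verification for this discovery step only (illustration; re-enable in production).
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

# Hypothetical link-local address and interface; replace with values found on
# your own segment (for example, via an IPv6 neighbor discovery scan).
BMC_LINK_LOCAL = "fe80::1234:5678:9abc:def0"
INTERFACE = "eth0"

# In a URL, the zone-ID separator "%" must be percent-encoded as "%25".
url = f"https://[{BMC_LINK_LOCAL}%25{INTERFACE}]/redfish/v1/"

response = requests.get(url, verify=False, timeout=10)
response.raise_for_status()

# The Redfish service root identifies the service and its Redfish version.
service_root = response.json()
print(service_root.get("Name"), service_root.get("RedfishVersion"))
```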
Announcing Chapel 2.3!
The newest release of Chapel is here! Here’s a summary of its major highlights, including calling Python from Chapel, computing with sparse arrays, and advances in the dyno resolver.
Navier-Stokes in Chapel continued: Distributed Cavity-Flow Solver
Learn how you can use distributed-programming features to port one of the full Navier-Stokes simulation codes to Chapel, allowing you to run simulations on any hardware with great performance and, arguably, more readable code.
Read the next post in the series
New! Python 201 – Dive into advanced concepts of the Python programming language
Interested in learning more about Python? This Workshop-on-Demand helps you explore more of the advanced concepts of the Python programming language. In it, you’ll learn about functions and object-oriented programming, as well as conditionals, loops, and iterators.
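As a small taste of the kind of concept covered (an example of our own, not taken from the workshop material), here is a class that implements the iterator protocol so it can drive a plain for loop:

```python
class Countdown:
    """A tiny iterator class: counts down from start to 1."""

    def __init__(self, start: int):
        self.current = start

    def __iter__(self):
        return self

    def __next__(self):
        if self.current <= 0:
            raise StopIteration
        value = self.current
        self.current -= 1
        return value


# The iterator protocol lets our class plug directly into a for loop.
for number in Countdown(3):
    print(number)  # 3, 2, 1
```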
Sign Up and Skill Up
Register for our upcoming technology talks or check out our on-demand training
- Munch & Learn calendar
- Meetups calendar
- Get Real with AI Jam calendar
- Workshops-on-Demand catalog
- HPE Learn On-Demand catalog
HPE Developer YouTube channel
Explore the collection of HPE Developer Community videos on this YouTube channel. Check out our newest here!
Be an HPE Developer blogger!
Contribute to our blog. Learn how it’s done here.
Be an open source contributor!
Start contributing to open source projects.
HPE Developer newsletter archive
Catch up on what you might have missed.
Events
Democratizing Gen AI with LL-Mesh
December 18, 2024 5pm CET / 8am PT
In this session, we will discuss a pioneering initiative by HPE called LL-Mesh and how it holds promise in democratizing generative AI. LL-Mesh empowers users to create tools and web applications using Gen AI with low or no coding. It simplifies the integration process by abstracting complex, low-level libraries into easy-to-understand services that are accessible even to non-developers. The platform then allows for the creation of a “mesh” of Gen AI tools, providing orchestration capabilities through an agentic reasoning engine based on Large Language Models (LLMs).
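To make the “mesh of tools” idea concrete, here is a heavily simplified, hypothetical sketch (not LL-Mesh’s actual API): each Gen AI capability is wrapped as a small, self-describing service, and an orchestrator routes requests to a matching tool. A real agentic reasoning engine would use an LLM for that routing step; simple keyword matching stands in for it here:

```python
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Tool:
    """A Gen AI capability exposed as a simple, self-describing service."""
    name: str
    description: str
    run: Callable[[str], str]


class ToolMesh:
    """Minimal orchestrator that picks a tool whose description matches the request."""

    def __init__(self):
        self.tools: Dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        self.tools[tool.name] = tool

    def handle(self, request: str) -> str:
        # Stand-in for LLM-based routing: match on words from the tool description.
        for tool in self.tools.values():
            if any(word in request.lower() for word in tool.description.lower().split()):
                return tool.run(request)
        return "No suitable tool found."


mesh = ToolMesh()
mesh.register(Tool("summarizer", "summarize text", lambda req: f"[summary of] {req}"))
print(mesh.handle("Please summarize this incident report"))
```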
Engage
Hewlett Packard Enterprise leads with enterprise and open source solutions, backed by the expertise to help developers and customers innovate and solve problems.
We’re all developing something.
Come join us in making the future.
HPE Developer and its accompanying resources are part of Hewlett Packard Enterprise, LP.
This article was originally published on December 2, 2024, at https://developer.hpe.com/newsletter/dec-2024/