In this month’s edition, we begin by highlighting our many Skill Up opportunities. We’ve just scheduled two more sessions in our newest AI Jam webinar series, to be held on November 6 and December 11. Our November Munch & Learn will feature Ted Dunning, who will show us how to fix a common security hole, and our Meetups will reconvene in December as we introduce an HPE-pioneered technology called LL-Mesh.
Also in this issue, we spotlight the newest web page addition to our Community portal, which provides details on Morpheus, and another interesting blog post from Hybrid Cloud Chief Technologist Brian Gruttadauria on Small Language Models.
Finally, we invite you to explore a new blog series where users explain how Chapel helps them in their specific use cases. These posts reveal some of the remarkable ways Chapel is being used to solve very complex, real-world problems. Enjoy!
Featured
How to fix your biggest security hole
November 20, 2024 5pm CET / 8am PT
While they may not mean to be, your users tend to be your biggest security risk. That’s only because today’s access control systems are simply too complex for users to understand. As a result, they don’t implement policies correctly, leading to mistakes that can be catastrophic. In this session, you’ll learn about an access control system that’s easy to understand, works on premises and in multiple clouds, integrates with existing systems, and gives you what you need to fix this security hole.
New web page for Morpheus!
Familiarize yourself with Morpheus, the newest member of the Hewlett Packard Enterprise (HPE) family. This page provides details on the product, API documentation, access to the Morpheus developer portal, Morpheus integrations, and much, much more! You can even find out how to get hands-on experience with the community edition!
Hybrid Classical-Quantum workflows on HPE Supercomputers
October 16, 2024 5pm CET / 8am PT
While quantum devices are expected to perform well for certain problems involving a small amount of data but high complexity, classical supercomputers continue to better serve researchers in areas with data-intensive workloads. In this session, learn how tightly integrating Noisy Intermediate-Scale Quantum devices allows classical supercomputers to be valuable candidates for the execution of hybrid workflows.
Implementing your AI breakthroughs effectively
November 6, 2024 5pm CET / 8am PT
Grab a coffee with your IT Ops Manager as we discuss different AI infrastructure environment options to suit your use case. Consider what processes are necessary, who will be involved, and what resources you already have to build a toolbox that seamlessly integrates all the AI components. You don’t have to be an AI infrastructure expert to get started. Come explore how to get your AI proof of concept implemented.
Community
Implementing your AI breakthroughs effectively – The Infrastructure for your AI
Like many technologies that seem to quietly run our digitalized world, AI requires a fully configured infrastructure with specialized hardware, software, and networking to make it a reality. Learn more about how we aim to make this real during our upcoming AI Jam webinar in this blog post.
Introducing the Ampere® Performance Toolkit
Optimizing software requires practical tools that evaluate performance in consistent, predictable ways across various platform configurations. Ampere’s open-source Ampere Performance Toolkit (APT) enables customers and developers to take a systematic approach to performance analysis.
Announcing Chapel 2.2
Learn about key highlights found in the newest release of Chapel, such as improvements to Chapel libraries, optimizations for array computing, and improved GPU support.
Distributed tuning in Chapel with a hyperparameter optimization example
Tuning a computation is a common challenge that involves calling the same program with many different arguments and analyzing the results to determine the best combination. This post explains how to perform distributed, multicore parallel tuning using Chapel.
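If you’d like a flavor of the idea before reading the post, here is a minimal, hypothetical Chapel sketch (not code from the blog post; the score procedure, trial count, and 0.1 parameter spacing are illustrative assumptions). It spreads trial evaluations across every locale and its cores, then reduces to find the best-scoring argument.

```chapel
// Hypothetical sketch of a distributed parameter sweep in Chapel.
config const nTrials = 64;           // number of candidate arguments to try

// Placeholder objective; a real tuner would launch the program under test
// with the candidate argument and return its measured result.
proc score(x: real): real {
  return -((x - 3.0) ** 2);          // peaks near x = 3.0
}

var results: [0..#nTrials] real;

// One task per locale; the inner forall uses that locale's cores, and each
// locale evaluates only the trial indices assigned to it.
coforall loc in Locales do on loc {
  forall i in 0..#nTrials by numLocales align loc.id {
    results[i] = score(i * 0.1);
  }
}

// Reduce to find the winning trial and its index.
var (best, bestIdx) = maxloc reduce zip(results, results.domain);
writeln("best argument: ", bestIdx * 0.1, "  score: ", best);
```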
Seven questions for Chapel users
In this new blog series, investigate what real-world problems users are trying to solve and how they are implementing Chapel to assist them in their specific use cases.
- Eric Laurendeau on Aircraft Aerodynamics
- Scott Bachman on Analyzing Coral Reefs
- Nelson Luis Dias on Atmospheric Turbulence
Sign Up and Skill Up
Register for our upcoming technology talks or check out our on-demand training
- Munch & Learn calendar
- Meetups calendar
- Workshops-on-Demand catalog
- HPE Innovation workshops
- HPE Learn On-Demand catalog
HPE Developer YouTube channel
Explore the collection of HPE Developer Community videos on this YouTube channel. Make sure to check out our newest videos and those focused on the HPE Machine Learning Development Environment.
Be an HPE Developer blogger!
Contribute to our blog. Learn how it’s done here.
Share your knowledge
Be an open source contributor!
Start contributing to open source projects.
Get started
HPE Developer newsletter archive
Catch up on what you might have missed.
Events
Picking the right software for your AI use cases
December 11, 2024 5pm CET / 8am PT
Identifying and integrating the right software into your AI infrastructure to manage data pipelines, develop, and deploy AI/ML and GenAI models is no easy task. We’ll explore AI software solutions that can give your teams a competitive advantage, from prototype to secure deployment of your AI models, across your evolving AI use cases.
Democratizing Gen AI with LL-Mesh
December 18, 2024 5pm CET / 8am PT
In this session, we will discuss a pioneering initiative by HPE called LL-Mesh and how it holds promise in democratizing generative AI. LL-Mesh empowers users to create tools and web applications using Gen AI with Low or No Coding. It simplifies the integration process by abstracting complex, low-level libraries into easy-to-understand services that are accessible even to non-developers. The platform then allows for the creation of a “mesh” of Gen AI tools, providing orchestration capabilities through an agentic reasoning engine based on Large Language Models (LLMs).
Engage
Hewlett Packard Enterprise leads with enterprise and open source solutions, backed by the expertise to help developers and customers innovate and solve problems.
We’re all developing something.
Come join us in making the future.
HPE Developer and its accompanying resources are part of Hewlett Packard Enterprise, LP.
This newsletter originally appeared at https://developer.hpe.com/newsletter/oct-2024/ on October 1, 2024. The HPE Developer Community monthly newsletter is reproduced here with the express permission of Hewlett Packard Enterprise.