Migrating to cloud operations can be daunting, but there are so many advantages to be had when you use HPE GreenLake cloud. Let us help. Check out our articles on mastering cloud migration, converting specs, syncing users, and enhancing your environment for sustainability.
If it’s parallel compute programming you’re into, you’re sure to enjoy our articles on Chapel’s recent 2.1 release, its high-level support for CPU-GPU data transfers, and features that let you define functions that compute on types at compile time. We also continue our Determined AI series on activation memory in this month’s edition. Enjoy!
Featured
Mastering cloud migration with the 6Rs approach
Looking to achieve the agility and efficiency afforded by cloud, but having trouble migrating? Learn how to address the hurdles, like legacy systems, data security, and uninterrupted operations, with HPE GreenLake for Private Cloud Enterprise.
Programmatically monitor energy consumption
Learn how to programmatically retrieve carbon emissions, energy consumption, and infrastructure cost data from HPE GreenLake managed environments with the HPE Sustainability Insight Center API.
Community
Converting HPE GreenLake API specifications in OAS 3.1 using OpenAPI tools
Learn how to convert the HPE GreenLake API for Data Services into a PowerShell client library using open source tools, then see how to put it to use through worked examples.
HPE GreenLake Flex Solutions SCIM API integration with Okta SCIM adapter
Walk through the process of configuring the Okta SCIM adapter to sync users and user groups from Okta to HPE GreenLake Flex Solutions in this post.
Announcing Chapel 2.1!
Building off this past March’s milestone 2.0 release, 2.1 significantly expands Chapel’s installation options and improves support for AWS. There’s more, too! Get the highlights here.
Chapel’s high-level support for CPU-GPU data transfer and multi-GPU programming
Explore how Chapel’s parallelism and locality features can enable using multiple GPUs and how its high-level array operations can be used to move data between GPUs and CPUs.
Generic linear multistep method evaluator using Chapel
In this detailed tutorial, see how to implement an evaluator for a whole family of numerical methods, letting you describe a method in a single line of code and execute it as fast as a hand-written implementation.
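The article implements its evaluator in Chapel; as a rough illustration of the idea, here is a hypothetical Python sketch. An explicit k-step linear multistep method is fully described by two coefficient tuples, so a whole method really does fit in "one line of code" (the function name, coefficients, and bootstrap choice below are illustrative assumptions, not the article's code):

```python
import math

def lmm_solve(f, t0, y0, h, steps, a, b):
    """Integrate y' = f(t, y) with an explicit k-step method:
        y[n+k] = sum_j a[j]*y[n+j] + h * sum_j b[j]*f(t[n+j], y[n+j])
    """
    k = len(a)
    ts, ys = [t0], [y0]
    # Bootstrap the first k-1 points with forward Euler.
    for _ in range(k - 1):
        ys.append(ys[-1] + h * f(ts[-1], ys[-1]))
        ts.append(ts[-1] + h)
    fs = [f(t, y) for t, y in zip(ts, ys)]
    for _ in range(steps - (k - 1)):
        # Combine the last k states and slopes per the method's coefficients.
        y_next = sum(aj * yj for aj, yj in zip(a, ys[-k:])) \
               + h * sum(bj * fj for bj, fj in zip(b, fs[-k:]))
        ts.append(ts[-1] + h)
        ys.append(y_next)
        fs.append(f(ts[-1], ys[-1]))
    return ts, ys

# Two-step Adams-Bashforth, described in a single line of coefficients:
AB2 = ((0.0, 1.0), (-0.5, 1.5))

# Solve y' = -y on [0, 1]; the result should be close to exp(-1).
ts, ys = lmm_solve(lambda t, y: -y, 0.0, 1.0, 0.001, 1000, *AB2)
print(abs(ys[-1] - math.exp(-1.0)))  # small global error
```

Swapping in a different method (say, three-step Adams-Bashforth) only means changing the coefficient tuples, which is the generic-evaluator payoff the tutorial explores.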
Reflections on ChapelCon ’24: A community growing together
Building off of its predecessor, CHIUW, ChapelCon is a decidedly more community-oriented event. Get the big-picture highlights and ideas for next year’s event here.
Activation Memory: A deep dive using PyTorch
Building off of the first post in this series, this article discusses where activation memory comes from, how to measure it in PyTorch, and why changing the activation function can significantly reduce memory costs.
Sign Up and Skill Up
Register for our upcoming technology talks or check out our on-demand training
HPE Developer YouTube channel
Explore the collection of HPE Developer Community videos on this YouTube channel. Make sure to check out our newest videos, including those focused on the HPE Machine Learning Development Environment.
Be an HPE Developer blogger!
Contribute to our blog. Learn how it’s done here.
Share your knowledge
Be an open source contributor!
Start contributing to open source projects.
Get started
HPE Developer newsletter archive
Catch up on what you might have missed.
Events
HPE AI Foundations
Choose your time & date starting August 13, 2024
Amid all the focus on AI, there is one aspect that often gets overlooked — the IT operating model. During this roundtable session, we aim to listen to and understand your AI challenges and goals. You will also hear how organizations are rethinking their operating models based on customer and business demands and learn about the HPE approach. Please join us to gain insights, get inspired, and share your thoughts on evolving your operations to support AI-driven transformation.
LLM finetuning for mere mortals
August 28, 2024, 5pm CET / 8am PT
With applications ranging from content creation to automated software development, large language models (LLMs) have the potential to transform nearly every industry. In this session, learn how you can make the most of this technology when applying it to your own use cases, what challenges are involved in finetuning models, and how we mere mortals can tackle them using HPE’s new software that leverages the open-source machine learning (ML) ecosystem.
Engage
Hewlett Packard Enterprise leads with enterprise and open source solutions, bringing the expertise to help developers and customers innovate and solve problems.
We’re all developing something.
Come join us in making the future.
HPE Developer and its accompanying resources are part of Hewlett Packard Enterprise Development LP.
This newsletter originally appeared at https://developer.hpe.com/newsletter/aug-2024/ on August 8, 2024. The HPE Developer Community monthly newsletter is reproduced here with the express permission of Hewlett Packard Enterprise.