Jeff Kyle, Vice President and General Manager, HPE Mission Critical Solutions

Jeff Kyle is a 25-year technology industry veteran with experience in hardware and software engineering, customer sales and support, business planning, product management, and marketing. Jeff leads the product management, planning, and engineering teams for the HPE Data Center Infrastructure Group, focused on delivering data management and data analytics solutions in the Mission Critical Systems portfolio. He is based in Palo Alto, California.

Diana Cortes, Marketing Manager, HPE

Diana Cortes is Marketing Manager for Mission Critical Solutions at Hewlett Packard Enterprise. She has spent the past 20 years working with the technology that powers the world's most demanding and critical environments, including HPE Integrity NonStop, HPE Integrity with HP-UX, and the HPE Integrity Superdome X and HPE Integrity MC990 X platforms.

As HPE advances its transformation journey, the company is implementing a recent strategic pivot toward Value Compute. HPE Mission Critical Solutions are at the center of the Value portfolio, and their continuous innovation focuses on meeting the evolving requirements of customers that need solutions for continuous business. I recently sat down with Jeff Kyle, Vice President and General Manager for HPE Mission Critical Solutions, to get an update on market trends, strategic initiatives, new offerings, and a look at the future of this strategic area.

DC Let’s start with a key HPE initiative we’ve heard about for a while. Can you give us an update on the strategy around Memory-Driven Computing? Why is HPE investing in this paradigm shift—from a processor-driven architecture to a memory-driven approach?

JK This initiative, which we’ve been working on for a number of years, results from the inability of conventional compute to keep pace with data growth. The world’s data doubles roughly every two years. But how do you turn that data into action? We have this great opportunity, but existing technologies can’t get there. Conventional architecture has always been limited by the practical tradeoffs of memory speed, cost, and capacity. So, at HPE, we concluded the compute paradigm must change—with memory at the center, not the processor.

Our recent milestones include a Memory-Driven Computing prototype with 160TB of memory—the world’s biggest single-memory computer. Also very exciting, we launched a “sandbox” for our customers. This is a set of large-memory HPE Superdome Flex systems you can access remotely to try out Memory-Driven Computing programming. The sandbox gives customers a chance to see, without having a Superdome Flex system in-house, how they can get a 10x, 100x, even 1000x speedup on the core workloads that drive their enterprises.

One final point. Although currently leading the industry, HPE is no longer alone in this strategy. We are part of the Gen-Z Consortium, a group of leading computer industry companies dedicated to creating and commercializing a new data access technology. All 54 member companies—from connector makers, to microprocessor vendors, to systems companies like HPE, to service providers—see the same future path we do.

DC You’ve just mentioned the HPE Superdome Flex, another key milestone in HPE’s journey toward Memory-Driven Computing. It launched exactly one year ago—can you share with us how the market is responding?

JK We are just ending a fantastic year for Mission Critical Solutions, and market adoption of HPE Superdome Flex has spearheaded this growth. Let’s recap what this highly modular, scalable platform does for our customers. At just 4 sockets and less than 1TB of memory, it allows customers to start with small environments and grow incrementally as their requirements evolve, scaling up to 32 sockets in a single system with 48TB of shared memory. This is Memory-Driven Computing in action, empowering customers to analyze and process enormous amounts of data much faster than they could before.

Customers are deploying highly critical workloads on this platform, whether at 4 sockets or much larger. Because of that, we designed it with advanced Reliability, Availability and Serviceability (RAS) capabilities not found in other standard platforms. Its unique RAS features span the full stack, and we work very closely with our software partners such as Microsoft, Red Hat, SAP, SUSE, VMware and others so that their software will not only take advantage of the RAS capabilities of the system, but also perform well on it—especially in those large configurations that, frankly, many of these software packages haven’t run on before. We have seen strong adoption of the platform around the globe and across all industries, from manufacturing, to financial services, to telecommunications, public sector, travel and many others, including high performance computing. And we see new use cases develop continuously thanks to the flexibility of the platform.

DC Can you share with us some of the common use cases for the Superdome Flex platform? Why are customers choosing it and how are they using it?

JK HPE Superdome Flex is most commonly used as a database server, whether for conventional or in-memory databases.

We see many customers migrating their Oracle databases from either Unix environments or scale-out x86 deployments—including Oracle Exadata—to Superdome Flex. The motivations are twofold: reducing licensing costs and reducing complexity. First, Oracle license costs depend on the number of processor cores, and on most Unix systems Oracle licensing costs twice as much per core as on x86 servers. Migration promises large savings, but customers are often concerned about availability on x86. With Superdome Flex, they feel confident it can deliver the uptime they need, at a much lower cost and with room to grow. Second, customers want to reduce the complexity of their Oracle environment. By moving from x86 clusters to a scale-up environment such as Superdome Flex, they can greatly reduce complexity—and avoid the costs of cluster licensing.
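To make the per-core licensing math concrete, here is a minimal sketch. The core factors and list price below are illustrative assumptions, not actual Oracle pricing; they simply reflect the point above that licensing the same core count on many Unix platforms costs roughly twice what it does on x86:

```python
def oracle_license_cost(cores, core_factor, price_per_processor):
    """Processor licenses are counted as cores x core factor."""
    return cores * core_factor * price_per_processor

PRICE = 47_500  # hypothetical per-processor list price (USD)

# Hypothetical comparison at 32 cores: a Unix server with a
# core factor of 1.0 vs an x86 server with a core factor of 0.5.
unix_cost = oracle_license_cost(32, 1.0, PRICE)
x86_cost = oracle_license_cost(32, 0.5, PRICE)

print(f"Unix: ${unix_cost:,.0f}  x86: ${x86_cost:,.0f}")
```

With these assumed factors, the identical core count costs twice as much to license on the Unix platform, which is the savings lever customers cite when migrating.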

The second primary use case we see is SAP HANA environments. SAP’s clear strategy is to stop supporting third-party databases by 2025; the hundreds of thousands of customers running SAP for critical applications will need to move to SAP HANA. Customers therefore look for a partner that can give them confidence and peace of mind as they embark on a transformational HANA journey. As the clear leader in the SAP HANA infrastructure market, HPE can offer them that.

Beyond this, we also offer the broadest HANA portfolio in the industry, led by HPE Superdome Flex. It’s a perfect fit for HANA because of its modularity, memory capacity and performance—we’ve set a number of world records in SAP HANA benchmarks. Often, customers start by moving one or two SAP applications to SAP HANA, and then grow their HANA environment. So the ability of Superdome Flex to scale incrementally is very important.

Customers are also deploying SQL Server. It has evolved from a departmental database into an enterprise one, including support for Linux, and SQL Server customers now need more scalability and availability than they can get with other x86 platforms. That’s where Superdome Flex comes into play. More and more, customers are deploying critical enterprise workloads on SQL Server, and to do that they need a highly available environment. The differentiated RAS features of Superdome Flex bring the extra layer of protection they seek.

DC How about high performance computing? You mentioned that as a use case for HPE Superdome Flex.

JK High performance computing is a relatively new area for us in HPE Mission Critical Solutions. When we acquired SGI a couple of years ago, we not only gained the world-class scalable technology that became the foundation for Superdome Flex; we also gained HPC expertise. While many HPC applications use a scale-out cluster approach, certain data-intensive HPC workloads are challenging to distribute across multiple nodes in an HPC cluster. They are best tackled holistically, using a single “fat” node—one node with a large number of processors and shared memory. That’s exactly the Superdome Flex architecture. Use cases include genomic research, computer aided engineering, cyber security, financial risk management and large data visualization, among others. In fact, in some cases the entire workflow can take place on a single node, keeping all the data in memory and removing key I/O bottlenecks.

There are two main areas in which we see more and more HPC customers turning to Superdome Flex. First, workloads dominated by access patterns to large volumes of data. Deployed on traditional clusters, the communication across cluster nodes creates a great deal of waiting time versus productive data processing time; this can be solved by keeping all the data in a single, easily accessible memory, as on Superdome Flex. That translates into less time and effort to get results. Second, a particular HPC job may be too big for any one node. If memory is exhausted, the job fails and the time spent running it is wasted. With the large shared memory capacity of Superdome Flex, the risk of memory exhaustion, and therefore of failed jobs, decreases. A good example is the research into the origin of the universe at the Centre for Theoretical Cosmology at the University of Cambridge. They are using Superdome Flex as the platform for their COSMOS system, which is also leveraged extensively by the Faculty of Mathematics at Cambridge to solve problems ranging from environmental issues to medical imaging.

DC Let me shift gears. A top-of-mind issue for customers is their cloud strategy—you mentioned Unix to Linux migrations in the Superdome Flex use cases, especially for mission-critical workloads. How are you seeing mission-critical customers address modernization of their environments in light of the multitude of cloud options they have today?

JK Cloud is being discussed at the vast majority of enterprises today, no matter their size. One thing we know about cloud is that one size doesn’t fit all; each customer’s strategy looks different. Specific workload requirements should drive the choice of consumption model. In the Mission Critical Solutions space, we see customers being very careful when selecting the deployment model for workloads that are so vital to the functioning of their enterprises.

I’ve just returned from our annual Discover conference in Madrid, and my conversations with customers there bear this out. Many continue to operate on-premises in traditional data centers, and some deploy via private clouds. Their main concerns around public cloud relate to security, control and regulations. While efficient and appropriate for a number of workloads, public cloud is often not considered a viable model for many highly critical workloads. IDC recently published some very interesting research around cloud, and specifically cloud repatriation, where customers move some workloads back from public cloud environments to either hosted private clouds or on-premises infrastructure. There are a few reasons why this is happening. One is that customers aren’t gaining the economic benefits from public cloud they thought they would. Another is that they can’t comply with government or industry regulations. A third is costly and damaging security breaches. And finally, sometimes they are just not getting the performance they need from the public cloud. So, instead, they move back and increase investment in private cloud, both on- and off-premises, to address security, control, performance and cost issues. As I said, every environment is different, and a hybrid, multi-cloud model is now the norm for companies.

DC Interesting take with workload requirements driving consumption models. Continuing with the cloud topic, what are some innovations HPE Mission Critical Solutions is driving that can enable a confident move to cloud, while mindful of customer and application requirements?

JK Mission critical continues to evolve to address customer requirements, and as part of that we are cloud-enabling our solutions. One example is HPE GreenLake, a flexible, pay-per-use consumption model. Cost is on par with public cloud, and we are now able to offer it for SAP HANA deployments together with our HPE Superdome Flex platform.

Looking at other areas within Mission Critical, we continue to evolve our HPE NonStop platform to ensure it can be consumed in a variety of different ways; for example, with Virtualized NonStop, customers can get all the unique benefits of the NonStop software ecosystem using standard virtualization packages—including VMware, as recently announced. We also offer NonStop dynamic capacity for Virtualized NonStop, as well as a new offering, the HPE Virtualized Converged NonStop, a virtualized, turnkey entry-class system preconfigured by HPE Manufacturing for simplified deployment.

Within our Integrity with HP-UX ecosystem, we recently announced OpenStack support for HP-UX. You can manage and use Integrity servers with HP-UX in private cloud environments, and, as a future possibility, deploy HP-UX in Linux containers. So we are very focused on ensuring our solutions are cloud-enabled, and that customers can deploy their critical workloads using various consumption models while preserving the high levels of reliability and uptime that are paramount for that set of workloads and business requirements.

DC You mentioned HP-UX with OpenStack, and we have been hearing a lot about HPE’s vision for HP-UX as a container solution. What has happened with the Integrity with HP-UX family of products in the last year since HPE released the Integrity i6 servers? And how is the HP-UX vision coming along?

JK We continue to advance this important set of solutions for our current customers and innovate in the areas that matter most to them. We have announced support for Intel 3D XPoint with Integrity i6 servers, which will mean significant performance gains for our HP-UX customers, at a lower cost. In fact, some of our estimates predict up to 140% higher performance and 55% lower TCO compared to Integrity i4 servers with HDD. In addition, the HP-UX 2018 Update release offers a variety of enhancements, including integration with storage—both 3PAR and MSA flash—and improvements in our HPE Serviceguard high availability and disaster recovery solution.

As for running the HP-UX environment in Linux containers, our engineering teams continue to make good progress in terms of features, integration and capabilities. The program continues to advance with high customer interest and we are also open for customer trials.

DC Exciting innovations all around your portfolio. How about the future, what’s in store for your customers and for the market overall when it comes to their mission critical workloads?

JK As with everything in technology, what I can say with confidence is that Mission Critical Solutions will continue to evolve to meet the changing requirements of our customers. We will see more varied consumption models and customers adopting private cloud and multi-cloud environments, even for more traditional environments. I also believe the cloud repatriation trend will continue; this is a learning process for enterprises. The market is realizing that not everything can be moved to the cloud and customers are adjusting consumption models accordingly.

Moreover, the data explosion will continue, and we will continue to advance our Memory-Driven Computing strategy. We will see many advancements in the AI and ML space, with commercial users increasingly adopting these technologies previously tied to the research and scientific communities. I’m looking forward to yet another strong year in the HPE Mission Critical Solutions space.