 

Technology + Community

EDITOR'S LETTER

CHAPTER NEWS

ADVOCACY
Connect Tech Forums at Discover Madrid

HPE STORAGE NEWS
Top 10 Storage Trends of 2018 and Why You Should Care

>>TOP THINKING
Putting Infrastructure Management in the Driver’s Seat: Self-driving Cars Are Here, Where’s the Self-driving Data Center?

DON'T FEAR A LAPTOP BAN:
6 Steps to Turn Your Smartphone into an In-Flight Computer

EDUCATION CORNER
The New Digital Learning Era Has Arrived
Here At HPE We Are Prepared With
Our Digital Learner Framework

Inside Story on HPC’s Role in Bridges Strategic Reasoning Research Project at CMU

Edge Compute Made Easier with HPE GreenLake

XYPRO and SailPoint Partner to Provide Identity Management for HPE NonStop

Why Renew Is Better Than New For You  

 
 

 

Connect Converge Staff

Chief Executive Officer
Kristi Elizondo
Editor-In-Chief
Stacie Neall

Click here to view Connect Board of Directors


Event Marketing Manager
Kelly Luna
Art Director
John Clark
Partner Relations
info@connect-community.org

 


Editor's Letter


“Stop chasing what’s now and create what’s next.”
 - Unknown


The New Normal is Hybrid IT

Welcome to the Spring edition of Connect Converge.

You’ve no doubt heard the term digital transformation by now; it has become part of our everyday vernacular in technology, and it is reshaping the modern enterprise. We know that getting to hybrid is about managing both sides of the coin: taking the best of the old, adding the best of the new, and strategically integrating the two. In our feature article, “Simplifying Hybrid IT for a Successful Digital Transformation,” Gary Thome explains how a hybrid IT strategy is revolutionizing the customer journey into the digital transformation age. He says the “picture is not always rosy.” Forrester Consulting tells us that two-thirds of firms end up with hybrid by accident, not by design; only 33 percent design a comprehensive hybrid IT strategy from the ground up. Enter HPE OneSphere. To innovate on behalf of customers, experts like Optio Data, a data strategy company, are all in on helping customers navigate the challenges.

As we count down to HPE Discover Las Vegas 2018, Connect looks forward to seeing you on the show floor. Don’t forget to register early for a Connect Tech Forum to hear how businesses solve their IT challenges with solutions powered by HPE. Tech Forums provide discussion and technical knowledge from customers and HPE subject matter experts on a variety of HPE topics and technologies, including storage, networking, security, hyperconverged infrastructure, mission critical, HP-UX, HPE Education and more. Our Tech Forums are a powerful and personal opportunity to make lasting connections that count in the Transformation Zone at Discover.

If continuing education is on your HPE Discover radar, don’t miss out on two FREE full-day training classes taking place on Sunday, June 17 and Monday, June 18. HPE Education Services will be running hands-on technical sessions on HPE Nimble as well as its one-day, hands-on Test Drive Composability class. These sessions are limited in number and will fill up quickly; registration is opening soon.

Whether you are a newbie or a veteran, if you have questions about the ins and outs of HPE Discover, check out this A-to-Z FAQ guide to get started.

Read on for more technology news, insights and how-to content. As always, if you have a hands-on technical article or an inspiring customer success story to share, please send it our way.

Thank you for sharing your technical expertise and your time with your HPE user community. 

See you in Las Vegas!

 

 
 


The MN Connect Chapter/MUG (MN user group), in partnership with HPE, held its Tech Day on Tuesday, February 13 at the TIES Conference Center in St. Paul, Minnesota. This full-day tech event allowed user group members to attend numerous breakout sessions (Aruba, SimpliVity, Synergy, Nimble, 3PAR, servers, best practices, tricks of the trade, management, configuration/setup, monitoring, and more). The training agenda consisted of nine sessions in the morning and twelve in the afternoon, with three sessions running simultaneously throughout the day. The technical team (Dan Baar, Lance Beckenstadt, Curt Benson, Chris Burg, Bryan Lechner, Todd Thill, Gary Tierney, Matt Vogt, Marc Westberg, Tom Wiewiora) shared their knowledge and expertise with the group and were on-site to show, discuss, and answer questions.

Jim Schneibel, HPE Northern Plains Sales Director, and John Jankowski, VP of Aruba, kicked off the general session, which had approximately 70 HPE customers in attendance, including Connect Worldwide CEO Kristi Elizondo from Austin, Texas. The MN user group continues to grow and provides a learning environment that brings many resources (e.g., educational offerings, networking, up-to-date information, and training) to the group. HPE Connect/MUG members will be attending the Spring Update Event in Minneapolis on April 24.

 
 
 

ADVOCACY

Steve Davidek

Steve Davidek started working in the IT world in December 1981. In August 1984 he began working as a computer operator for the City of Sparks, Nevada, on an HP 3000 Series III minicomputer. Over the last 34 years, Steve has been instrumental in building the City’s 21st-century IT data center with HPE products including blades, 3PAR storage, and Aruba networking. Steve has been involved in the HP/HPE user community since 1985, starting as secretary for the local High Sierra User Group. He was on the Board of Directors of Connect at our 2008 merger and was President of Connect in 2012 and 2013. He is currently Connect’s Discussion Forum Host at the North American and EMEA Discover events. Steve has been the IT Manager for the City of Sparks since March 2014. He is married with two adult children.

 

Connect Kicks off Tech Forums at HPE Discover 2017

Keeping things fresh: that was the goal a couple of years ago when we rebranded Connect’s in-booth Discover sessions from “SIG Meetings” to “Tech Forums.” The idea is to spend a half hour on a customer or product story and then hold an open forum, with HPE support, where attendees can ask the hard questions or ask for better ways of doing things.

Discover 2017 Madrid continued to prove that this strategy is working. Our attendance is growing and the technology topics just keep getting better. Every location for Discover offers Connect a challenge on how our booth can best present our sessions. In Madrid, we had a great location in the middle of the Mission Critical Systems area in the Transformation Zone with lots of room for standing, listening and asking questions. Our lineup of presentations was as varied as our members.

We kicked off our Discover Tech Forums at 10am on Tuesday with our first customer presentation. I put on my City of Sparks "hat" and gave a talk about "How we work with Aruba Networking... a real-world example." This session had an official headcount of 34 and was standing room only! I spoke about how the City of Sparks uses Aruba networking products from the data center to the edge, including our Wi-Fi "mobile workforce" enabled City Hall and Police Departments. I also spoke about how we are using Aruba ClearPass for our Wi-Fi security, and how we are moving toward having AD security attached to every port on every switch. Finally, I touched on how Aruba Beacons are making it easier for Sparks residents to get around City Hall with "Blue Dot" mapping of city offices, and our future plans to integrate beacons at Golden Regional Sports Complex and Park to help people find better parking options closer to their destination (in this case, a softball field). We closed out the hour with about 15 minutes of questions and answers from attendees. We discussed what they are doing in their organizations and the questions they had about how to yield better outcomes. We all seemed to walk away with answers and ideas to bring back to the office.

Calvin Zito Explains HPE's Roadmap for Storage

The next hour and a half focused on Storage. During the first customer-led session, Dr. Heinz-Hermann Adam from the University of Münster discussed how the University uses HPE 3PAR Peer Persistence Software with a multi-tier storage strategy to deliver file services to virtualized environments from VMware and Citrix. Dr. Adam enjoyed participation from a large crowd of over 75 attendees, most of whom stayed for our next session, "HPE Storage Update with HPE's Storage Guy, Calvin Zito." Calvin discussed the Nimble purchase, and HPE's strategy going forward. Calvin also presented an HPE Roadmap Update for the next year or so, and fielded customer questions from a large crowd of HPE Storage users.

With an active and thoroughly engaged crowd of storage users, we rolled directly into our next Storage session from Connect Board Secretary, Trevor Jackson. Trevor gave a great talk on the journey that the Society of Composers, Authors and Music Publishers of Canada (SOCAN) took to the All-Flash Data Center. More than 35 storage users heard about the past and future of music licensing in Canada, and how SOCAN went from paper to disk, and now to flash storage for all of their records and content storage.

 

 

After a quick break from Tech Forums, we welcomed Ray Turner and Ken Surplice from HPE EMEA to talk about HPE planning for OpenVMS on x86. Surplice and Turner gave an update on the direction of support for OpenVMS on Itanium, and described the move to x86, whether on bare metal or virtual, on-premises or in the cloud. They included information on how companies are preparing for the new relevance of OpenVMS, with a long-term roadmap and strategy that companies can use to plan for their future.

Connect CEO Kristi Elizondo Shows How to Leverage Your Connect Membership to Meet Organizational Goals


After a rousing General Session from outgoing HPE CEO Meg Whitman and her successor Antonio Neri (among other special guests), we returned to the Connect User Community booth for another round of Tech Forums. Connect CEO Kristi Elizondo led a discussion on "Why you need to partner with Connect." Kristi discussed how Connect's regional chapters, publications, advocacy efforts, and training opportunities help HPE partners and Connect members leverage their user community to be more successful in their IT endeavors. We closed out the evening with a Welcome Reception in the Transformation Zone, where we mingled with Discover attendees and greeted Connect members.

HPE Synergy Takes Center Stage

We started the day Wednesday with a great discussion on HPE Synergy. As the IT Manager for the City of Sparks, I’m always looking at what direction we need to go with our entire IT infrastructure. When I first started hearing about HPE Synergy, I didn’t see the relevance or the need to get more information. At Discover in Las Vegas last year, I saw a trend and a direction that made me think I needed to know more. We had 16 people show up for this discussion on what HPE Synergy could do for our companies. I was pleased to see that I’m not the only one looking at this next generation of blade technology. At least three others in attendance asked the same questions I had, and one of them even had the same setup as the City of Sparks: three c7000 blade chassis with Gen8 and Gen9 blades, 3PAR for storage, and an end of life for our c7000s just around the corner. We all agreed that HPE Synergy is the direction we need to lean toward as we replace infrastructure in the next couple of years.

Our next Tech Forum focused on Software-Defined and Cloud Infrastructure. HPE's Chris Purcell hosted a roundtable discussion about converged management,  hyperconverged appliances and Composable Infrastructure. He discussed how the integrated management experience with OneView is spreading through the data center and also how it is helping deliver a better cloud experience.

Mission Critical Solutions For Your Business

Next up was a great hour on HP-UX. We had an informative session with HPE’s Michal Supak and Ken Surplice showcasing the HP-UX roadmap and answering questions from the audience. What was really great was seeing many of the same faces from London the year before, and even from Las Vegas. The HP-UX community is still thriving, but not as vocal as it used to be. When I first started working with HP-UX at Interex (when they merged with Interworks), HP-UX users always complained about patching. That is not the case now. Their biggest concern is the end of PA-RISC and moving to x86. Patch complaints are gone. We’ve come a long way.

Carrying forward the momentum of the previous HP-UX session of 30+ attendees, the crowd grew larger for our next Tech Forum on moving from large-scale UNIX to Linux on the x86 platform. This Tech Forum, led by HPE's Michal Supak and Ken Surplice, demonstrated how a large-scale bank was able to get the best of both worlds: industry-standard servers and robust software! The discussion focused on how the bank was able to get its desired outcome: UNIX-like availability and scale at a lower cost. Ultimately, the bank moved from PA-RISC servers to Superdome X running Linux; Surplice and Supak described the benefits the bank is experiencing as a result.

Another well-attended Tech Forum focused on how customers are optimizing and securing Linux and SAP HANA workloads with new automated tools from HPE. HPE's Han Pilmeyer revealed details on the new Application Tuner Express, and on how the latest Serviceguard for Linux capabilities for SAP HANA can accelerate workloads on NUMA servers in a completely new and automated way while securing your system against threats and meeting security compliance requirements. Attendees were pleased to discover how HPE customers are getting value from all of this new software innovation.

HPE Education Launches Digital Learning Framework

HPE Education services presented the next Tech Forum, which focused on one of the most important topics for the user community: How do we get the best training to do our jobs?  Hans Bestebreurtje from HPE Education Services EMEA officially launched the NEW Digital Learning Framework at Discover 2017 Madrid in the Connect User Community. This new Content As A Service subscription program increases access to meaningful learning, when and where you need it, in the modular form you want it, and provides incentives for both students and the organization. This new "digital" format for training will help any organization better spend and allocate their training funds for their HPE technology investments.

When Connect Worldwide's Past President, Rob Lesan, talks about security, you should listen! For our next Tech Forum, Rob led a lively roundtable discussion on Enterprise Security. The conversation touched on everything from data breaches to brainstorming ideas on how to make our enterprise more secure. Those who attended the Tech Forum offered their thoughts, asked engaging questions, and exchanged  ideas on how to yield better outcomes in their own organizations.

Connect Community Matters

When Connect Worldwide was formed ten years ago, those of us involved realized that we were much more than a "user group." We knew that we were part of a "community." During the next session, newly elected 2018 Connect President Navid Khodayari spoke about the importance of leveraging our existing technology-focused chapters and communities to build more focused groups that serve individual needs and goals. Navid's energy really helps younger members realize that we are not just an "old guys club," and that the Connect community is strong because of member engagement and participation from up-and-comers in IT fields.


We capped off two days of amazing presentations and discussions with a Tech Talk on something that many people even five years ago would not have thought possible: Virtualized NonStop. This new milestone in the HPE NonStop journey showed how far we’ve come and what is coming next amid the fast-paced changes taking place in the enterprise today. This session was led by HPE’s David McCloud for a small group of NonStop devotees who were excited to see NonStop moving into the future.

Connect members should be excited about the questions and answers shared by our members and future members at all of our Talks and Tech Forums in Madrid. We demonstrated that actual users helping other users does make a difference. HPE provided resources and subject experts to share background and provide answers to questions that matter to the user community. These HPE experts enjoyed the feedback, and took down contact information so that they could assist people in finding the right resource at HPE to help solve their challenges.

I would challenge everyone who is thinking of attending Discover Las Vegas this June to think about giving a customer talk on a project they have done or are working on. From personal experience, I can say that it is a very rewarding thing to do. A user community that works together can easily find the best solutions and answers in their industry.

 

HPE STORAGE NEWS

Top 10 Storage Trends of 2018 and Why You Should Care

Calvin Zito

HPE Blogger
& Storage Evangelist

Calvin Zito is a 35-year veteran of the IT industry and has worked in storage for 27 years. He’s an 8-time VMware vExpert. An early adopter of social media and active in communities, he has blogged for 10 years.

You can find his blog at
hpe.com/storage/blog

He started his “social persona” as HPStorageGuy, and since the HP separation he has managed an active community of storage fans on Twitter as @CalvinZito.

You can also contact him via email at calvin.zito@hpe.com.

In this quarter’s storage column, I want to share an article that was originally posted on my Around the Storage Block blog by Vish Mulchand. Vish is a Senior Director of Product Management and has broad industry experience.

What’s on the horizon for data storage? With 2018 comes a new opportunity to make changes such as replacing aging data center infrastructure, incorporating AI into your storage stack, and revisiting your data protection strategy.

I’ve had a new crop of prediction blogs filling my inbox. If last year taught us anything, it’s that our world has become increasingly unpredictable. However, staying ahead of key IT trends can help you build infrastructure that’s predictive, timeless, and cloud-ready so you can anticipate and prevent issues across the infrastructure stack, support data growth and mobility, and ensure future flexibility.

With 2018 comes the opportunity to consider core business investments such as replacing aging data center infrastructure, incorporating AI into your storage stack, or revisiting your data protection strategy. To help guide your investment decisions, in consultation with my storage colleagues I decided to pen my own list of top data storage industry trends for the year and why you should care.

1. Flash storage adoption will become flashier.

Flash storage is one of the most transformative technologies to hit the storage industry, and the ripple effects of its adoption will continue this year. Not only will organizations of all sizes continue to replace spinning disk with SSDs for greater performance, energy savings, space efficiency, and reduced management, but we will also continue to see an exciting crop of new technologies, such as integrated data protection, storage federation/automation, policy-based provisioning and public cloud integration, building on this flash foundation as they disrupt the market.

Why you should care:
Flash storage is the new standard, and its benefits are well established. But the massive growth in flash popularity produced supply chain shortages in 2017 that impacted many buyers. With growing demand and dropping prices, it’s likely that similar shortages could occur this year, so you would be wise to begin evaluating platforms and planning your flash storage purchases sooner rather than later.

2. Artificial intelligence will gain significant traction in the data center.

Vendors who harness the power of big data analytics will continue to differentiate their products and deliver measurable business impact for customers.

Why you should care: 
AI will result in huge opportunities to radically simplify operations and automate complex manual tasks. Consider the incorporation of AI as a storage purchasing decision criterion this year.

3. Integrated data protection will do more than protect your data.

Flash storage arrays have revolutionized storage performance, efficiency, and administration. But the flash storage revolution has also cast new light on the need for greater efficiency in protecting data from threats, including various types of attack, application corruption, system failures, disasters, and even human errors. This year, more organizations will wake up to the fact that they cannot afford the toll of meeting their data protection requirements with disjointed and overlapping technologies, and will instead opt to invest in integrated approaches to data protection.

Why you should care: 
Threats to data security and integrity are a major headache for any organization. Using multiple tools and products not only adds complexity, but can degrade performance and result in multiple copies of data—adding further cost and management overhead. That makes 2018 a good year to consider a well-designed, integrated data protection solution that not only simplifies management but allows you to leverage efficient data copies for activities such as development and testing or other internal processes.

4. Multi-cloud will become a reality.

Remember when people used to talk about bursting data from on-premises to the cloud and back? In 2018 we might still be waiting for the laws of physics to change, but we will start to see more meaningful data mobility between multiple clouds, including on-prem and public clouds from multiple vendors. The point is not whether the data can move from one place to another quickly; it's getting the data located where it can be accessed by multiple, application-optimized compute environments, including multiple public clouds.

Why you should care: 
As IT organizations start to understand the strengths and weaknesses of the cloud service providers they are working with, they are targeting certain types of applications for different cloud services running on different clouds. Maximizing value from data will require new perspectives and skills that include both on-premises systems administration as well as cloud-services brokering. Consequently, it’s important to ensure that your on-prem storage is cloud-ready.

5. The NVMe fan club will grow.

Flash will continue to be a major force in 2018, but the real excitement will come from innovations that have been born out of the flash revolution. Non-volatile memory express (NVMe) is one such innovation. NVMe made waves last year, particularly as an alternative to SCSI-based interfaces, taking advantage not only of flash media but also of next-generation solid state technologies like Storage Class Memory (SCM). This year the use of NVMe in place of SAS- and SATA-based SSDs will continue to grow, but more importantly we will see NVMe make the leap from the back end to the front end. In particular, we will see initial adoption of NVMe over RDMA-enabled Ethernet networks, also known as NVMe over Fabrics.

Why you should care: 
NVMe allows you to take your flash storage to the next level, taking advantage of the massive parallelization of SSDs and next-generation SCM technologies while doing away with SCSI overheads. Although across-the-stack NVMe standardization is still some ways away, the use of NVMe over Fabrics lets you extend the benefits of NVMe between host and array, preserving high bandwidth and throughput and delivering the ability to handle massive numbers of queues and commands.

6. Ethernet storage fabric will come into its own.

The uptick in NVMe over Fabrics goes hand in hand with another trend we will see throughout 2018: investment in low-latency, high-bandwidth IP storage networks. Ethernet networking has made grand leaps in performance, pricing, and sophistication over the years, and because it supports any flavor of storage, any storage tier, and even compute, it's a natural choice for companies looking to modernize their storage network infrastructure, not just those looking at NVMe over Fabrics.

Why you should care: 
Fast storage requires fast networking. The latest storage-optimized Ethernet switches are not only designed to deliver consistent performance and ultra-low latency, but they also boast zero packet loss. Consider the ability of Ethernet storage fabrics to support converged, hyperconverged, software-defined, scale-out, and distributed storage as well as NVMe over Fabrics, and storage network modernization becomes even more attractive. Still not convinced? Switches are now available that support seamless upgrades to 100 Gb/s per port, meaning your investment will be future-proof.

7. Storage will continue to go software-defined and hyperconverged.

The coupling and decoupling of compute, storage, and networking resources has a long and intricate history. In 2018, however, we're likely to see the growing taste for bare metal drive a surge in software-defined storage, while a yen for simplicity drives hyperconvergence, currently the fastest growing segment of the storage market.

 

Why you should care: 
Even though software-defined and hyperconverged models are nearly opposed, increased popularity of both alternative architectures promises you greater choice and greater storage agility in the year to come.

8. Disaggregation of storage from compute will deliver HPC-like performance.

Disaggregation of storage from compute is a new architectural approach that has emerged to address the needs of data-intensive workloads, made possible by low-latency networking and NVMe over Fabrics. As media continues to become more and more durable, the disaggregation of storage controllers from storage enclosures and sharing of data across multiple compute nodes for storage processing has become possible. This architecture takes advantage of NVMe to accelerate workloads at the compute layer.

Why you should care: 
Unlike server-attached flash storage, a disaggregated model allows you greater scalability when it comes to capacity via the use of ultra-dense flash. It also supports high availability features and the ability to manage storage centrally while serving dozens of compute nodes. In sum, disaggregation promises to bring HPC-like performance to the mainstream enterprise, thanks in part to the use of NVMe over Fabrics.

9. Storage class memory will influence buying decisions.

Storage class memory (SCM) is already poised to give the disruptiveness of flash a run for its money. Not exactly memory, not exactly storage, SCM will start to make big waves in 2018. With Intel Optane SSDs now available, the line between RAM and solid state storage has been blurred, and there's no going back.

Why you should care: 
The potential disruptiveness of SCM is undeniable, and you should plan accordingly. In order to leverage the benefits of SCM media, architectural shifts must take place. Although the cost of SCM media will pose a barrier to immediate mass adoption, the pattern here is a familiar one. For this reason, you should strongly consider architectural flexibility to incorporate SCM technologies as a purchasing criterion for any storage array investments you make this year. Likewise, be sure to ask current and prospective vendors about their plans with respect to SCM.

10. Use of automation frameworks will become the next big skill set.

Infrastructure automation is how modern data centers are managed at cloud scale with greater efficiency and fewer errors. In 2018, automated virtual infrastructures will continue to be strong, but they will face rising competition from containerization and orchestration based on technologies such as Docker and Kubernetes. As a result, in the storage world, automated frameworks for provisioning persistent storage will become the next big skill set for full-stack administrators. The growing number of readily available plug-ins and SDKs will continue to give data center leaders flexible options for completely new platform integrations in 2018.

Why you should care: 
Containerization can be a key ingredient to building a data center infrastructure capable of keeping up with data and application growth. Automation frameworks reduce operating costs and administration errors by increasing application and workload centricity and also increase staff productivity in containerized environments. With pressure on IT to be more agile and responsive, automation frameworks can be a critical tool in your arsenal.
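Trend 10 points to plug-ins and SDKs for provisioning persistent storage in containerized environments. As a rough, hypothetical illustration only (not tied to any specific HPE product or plug-in), here is a minimal sketch that uses the official Kubernetes Python client to request a volume through whatever storage class your array's CSI driver or volume plug-in exposes; the "fast-flash" class name is an assumption for the example.

```python
# Rough sketch only: requesting persistent storage for a container workload
# with the official Kubernetes Python client. The "fast-flash" StorageClass
# is an assumed name; a storage vendor's CSI driver or volume plug-in would
# back it and carve the volume out of the array.
from kubernetes import client, config

def request_volume(name: str, size_gi: int, namespace: str = "default"):
    config.load_kube_config()  # uses your local kubeconfig credentials
    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name="fast-flash",  # assumption: class provided by the plug-in
            resources=client.V1ResourceRequirements(
                requests={"storage": f"{size_gi}Gi"}
            ),
        ),
    )
    # The framework (Kubernetes) and the plug-in do the actual provisioning.
    return client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace=namespace, body=pvc
    )

if __name__ == "__main__":
    request_volume("app-data", 20)
```

The design point is the one the trend describes: the administrator declares the storage a workload needs, and the automation framework plus the vendor plug-in carry out the provisioning.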

What do you see on the storage horizon?

There you have the Top 10 trends that HPE sees on the horizon for 2018. To be sure, there are many more than these, but the 10 covered here are the ones we see as most important to our customers. Beg to differ? Visit the original article on Around the Storage Block and take a moment to weigh in on the trends called out here by leaving a comment. What did we miss and what did we get right? What storage market trends do you see taking shape on the near horizon? What new products, features, or technologies do you think HPE should invest in this year? We want to hear from you, our customers!

Aligned with the trends:

Discover the complete HPE enterprise data storage portfolio.

Again, thanks to Vish for giving me this quarter off from writing an article – I think he’s got a great list here!  

 

 

TOP THINKING

Putting Infrastructure Management in the Driver’s Seat: Self-driving Cars Are Here, Where’s the Self-driving Data Center?

Chris Purcell

Chris Purcell has 29+ years of experience working with technology within the data center. He is currently focused on integrated systems (server, storage, networking, and cloud, which come wrapped with a complete set of integration consulting and integration services).

You can find Chris on Twitter as @Chrispman01 and @HPE_ConvergedDI, and read his contributions to the HPE CI blog at www.hpe.com/info/ciblog.

What makes the future exciting? Simplicity. In all of our wildest imaginings in stories and movies about what the future will be like, a simpler life is almost always a central theme, driven by amazing technologies. The world today is getting closer and closer to these imaginings. The smartphone gives us one place to do pretty much anything, anywhere. You don’t necessarily have to get out of bed to turn the lights off if you’ve equipped your home with “smart lights,” and you can use your phone to activate your robot vacuum while you do more valuable things. The coolest up-and-coming tech, self-driving cars, is already on the roads, and companies such as Uber are equipping themselves with an army of them to take us all on driverless adventures as soon as the technology that supports them is perfected.

The key to simplicity is getting something done with as little effort as possible so that you can use the saved time and energy to do something more valuable. One of the places that needs simplicity the most is the data center where a mix of infrastructure often becomes difficult to manage and maintain. Infrastructure management that leverages software-defined intelligence should be at the core of driving simplicity for the data center, minimizing time spent on manual, repetitive tasks, and reducing human error. Infrastructure management should also do a better job of self-monitoring for issues that, with today’s technologies, can be resolved with little-to-no human intervention.

Finding an infrastructure management solution that brings that kind of software-defined intelligence to the data center doesn’t have to be a dream of the future. When you’re searching for something to make life easier, you should look to make sure it meets a few basic requirements.

Template-based provisioning and updates

Provisioning servers is often a task that involves manual labor and, because of that, introduces a lot of room for error. Template-based provisioning cuts down on the time it takes to provision and significantly reduces errors. Think about a server template as a fixed-course menu. This menu is crafted by the expert chefs on your staff, or in this case, subject matter experts for servers, storage and networking, all working together to define the very best ingredients (settings) for your meal. This template is then deployed to every server you choose, and repeated exactly the same way every time you need it. You can queue up the provisioning of hundreds of servers and go work on something else while your infrastructure management software does the work for you. You can create as many unique templates as you need to cover all of your workloads and applications.

Template-based provisioning is practically the very definition of self-driving. You press go, sit back, and enjoy the extra time you have to work on things that create more value for the business.
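To make the template idea concrete, here is a minimal, hypothetical sketch of template-based provisioning expressed as code. The template structure and the apply_template() helper are illustrative stand-ins for whatever your infrastructure management software (an HPE OneView-style server profile template, for example) actually provides; they are not a real product API.

```python
# Hypothetical sketch of template-based provisioning. The template structure
# and apply_template() are illustrative stand-ins, not a real product API.
import copy

SERVER_TEMPLATE = {
    "firmware_baseline": "2018.03",
    "bios": {"power_profile": "max_performance", "hyperthreading": True},
    "boot_order": ["PXE", "HardDisk"],
    "networks": [{"name": "prod-vlan-120", "vlan": 120}],
    "local_storage": {"raid": "RAID1", "drive_count": 2},
}

def apply_template(template: dict, server_id: str) -> dict:
    """Render the shared template into a per-server profile.
    In a real tool this is an API call the management software executes."""
    profile = copy.deepcopy(template)   # every server gets identical settings
    profile["server_id"] = server_id    # only identity varies per server
    profile["hostname"] = f"host-{server_id}"
    return profile

# Queue up hundreds of servers and let the software do the repetitive work.
profiles = [apply_template(SERVER_TEMPLATE, f"bay{n:03d}") for n in range(1, 201)]
print(f"{len(profiles)} identical profiles generated")
```

The settings are defined once by your subject matter experts and then stamped out identically for however many servers you queue up, which is where the time savings and error reduction come from.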

A consolidated view of all of your infrastructure

You probably have more than one type of infrastructure, and you may even have infrastructure spread across multiple data centers in multiple locations worldwide. Most infrastructure management software is going to require a separate instance once you reach a scale limit, which leaves you with multiple management instances to keep track of.

Just imagine a company with data centers in multiple cities, across several countries, involving a mix of platforms. How do you quickly and easily make sure it’s all running smoothly? To make handling these multiple instances doable, you need a single, consolidated view to see everything at once. Being able to see the health and status of your entire infrastructure from a single dashboard allows you to quickly respond to issues on any platform at any data center. It’s like having a smartphone app for your infrastructure.

Integrated remote support

Last but certainly not least, what if you could have an extra set of helping hands around your data center, checking on mundane but important items: tasks like maintaining warranties for your equipment, ordering replacement parts, or handling your service tickets? And what if this helper came included with your management software? Integrated remote support should be part of your infrastructure management, saving you time on tasks that a computer can easily handle.

Imagine the time savings if you had integrated support that could open a service ticket for you and send you an email to let you know when it is opened and when it is resolved. If a part breaks, your infrastructure management should be able to identify it, automatically order the part for you, and have it delivered straight to the data center. Systems built with software-defined intelligence will know what is broken without your help. Keeping up with small but very important maintenance tasks like these, without having dedicated staff, can really make a difference in keeping things running smoothly.

The self-driving data center is here today

You don’t have to look far to find these software-defined features. HPE OneView manages all of your HPE servers, storage, and networking at scale, and the HPE OneView Global Dashboard allows you to keep tabs on your entire infrastructure. Integrated remote support is included with every HPE OneView license, for free.
These capabilities are just the beginning of how HPE OneView can help you simplify life; you don’t have to wait for the future to have a self-driving data center today. To learn more, take a look at the popular HPE OneView For Dummies Guide, or the IDC Business Value of HPE OneView.

 

Don’t Fear a Laptop Ban:

6 Steps to Turn Your Smartphone into an In-Flight Computer

by Beth Ziesenis, Your Nerdy Best Friend


What’s worse than a long flight for a frequent flyer? A long flight without a laptop… courtesy of a ban on electronics on planes.

Security officials have banned laptops and tablets on certain international flights, and more travelers could be impacted if the ban extends. But with a little planning, you can transform your smartphone into a powerful productivity tool that can (almost) replace your laptop.

Step One: Get a Larger Phone
If you’re still carrying around a tiny phone, it’s time to upgrade. You want a larger screen, a faster processor and plenty of storage.

Step Two: Get a Bluetooth Keyboard
A portable keyboard may be the most important element for working comfortably on long flights. You have plenty of options – most of them less than $50. You want something that folds up nicely for storage.

I recommend ordering a few styles to find your most comfortable position. Here are a few varieties:

Mini keyboards that include a stand for your phone – make sure your phone case isn’t too big for the lip and that your hands are comfortable on the tiny keys.

Silicone roll-up keyboards – some of these are waterproof. 

 

Ergonomic keyboards – nice option for the angle of your hands on the tray table.

Full-sized keyboards that are the same size as your laptop’s – I chose one of these that has a backlight for the keys and instant Bluetooth connections.

Step Three: Get a Phone Stand
You're going to want to prop up your phone at a nice angle so you don't have to strain to see the screen. Look for a stand with a little grip to it so it doesn't slide off the tray. Another thing to think about is how much space it needs on your tray; you want as small a footprint as possible so everything will fit.

Here are some stand options:
Case with a built-in stand – great for portability, but your phone may end up thicker than you want for daily use.
Simple tripod stand – make sure you get one that doesn’t tip over.

PopSocket stand – this was my ultimate choice… it’s a collapsible stand that sticks to the back of my case and serves as a stand and holder wherever I go.

Step Four: Buy a Strong Power Supply
Don’t skimp on your juice: Get a high-power external battery (10,000 mAh or more) with two or more charging ports so you can simultaneously charge your keyboard and phone. If you're lucky enough to have a plug at your seat, you can plug the battery into the outlet so it continues to charge even as it charges your other devices.

Step Five: Plan Ahead for Your In-Flight Projects
This may be a challenge for some of you who are used to just opening your laptop and having all your files and programs with you. Even when you connect your phone to Wi-Fi on the plane, you may have trouble pulling existing projects down from the cloud. So it’s best to think about what kinds of projects you can make progress on during your flight, such as a presentation, spreadsheet, document or other files, and then download them to your device before you leave. Avoid working on extra-large files on your phone. My 300 MB PowerPoints are just too dang big to download.

Microsoft Office, Google Docs and Apple’s iWork apps all have options to work offline on copies on your phone. Microsoft’s Office mobile apps are insanely good on this front – and they’re free. The Microsoft Word, Excel and PowerPoint apps have almost every single feature that the desktop programs do. You can format, add pictures, apply themes, add headers/footers, create tables and much more. It takes a little more patience because you have to choose options from drop-down menus and scroll more than you do in the full versions.

You also might be able to tackle email offline or online during the flight. It’s wonderful to disembark with an empty inbox.

Step Six: Plan for When You Get Off the Plane
If you checked your laptop in your luggage (try a hard case like Pelican for a shockproof laptop protector), all you have to do to keep working is connect to the cloud and wait for your phone and laptop to synchronize.

But if you've ditched your laptop for the trip because working with your phone is all you need, you can use some inexpensive dongles and cables to expand the functionality of your phone even further.

Your phone can connect to monitors and projectors with the help of HDMI and VGA dongles, or even remotely with tools such as Google Chromecast or Apple TV. If you’re giving a presentation, you can even use the live mode in PowerPoint or Apple’s Keynote and have your participants follow along on their own devices.

Beth Ziesenis is Your Nerdy Best Friend. She is the author of several books on technology including The Big Book of Apps (Summer 2017). Beth travels the country talking to organizations about free and bargain technology. Find her at www.yournerdybestfriend.com.

 


 
EDUCATION CORNER

Kelly Baig

Badging Program Manager
HPE Education Services

The New Digital Learning Era Has Arrived
Here At HPE We Are Prepared With Our Digital Learner Framework

In the Spring of 2017, Deloitte published their annual Human Capital Trends survey. In it, they reported that:

  • Skills development is the #2 topic on the minds of CEOs and HR leaders
  • 83% of organizations rate this topic as  important
  • 54% of organizations rate this topic as urgent

In contrast to this need, however, many organizations report challenges in obtaining all of the technology training that their IT teams and technical professionals require. Some of the primary inhibitors to sufficient training include the fast pace of change in technology and organizations' limited ability to tolerate time out of the office.

In addition, the workforce demographic has changed; millennials and digital natives learn differently and with different motivation than other generations. They value flexibility, collaboration and community. They expect learning to deliver high quality, engaging experiences, and to utilize technology and social platforms as part of the experience.

So what is needed? A combination of the new blended with the traditional: new content that is modular and can be served on demand, at the point of need, alongside traditional learning methods; digital communities of learners who can interact with each other and collaborate to accelerate and reinforce the learning experience; and digital badging credentials that enable professionals with earned skills to show their capabilities and find each other online for assistance and coaching.

Announcing: HPE Digital Learner

HPE has introduced a modern digital learning platform to provide our customers with deeper and broader access to all types of technical training. This platform provides:

  • An online Digital Learner portal, enabling people to increase their technical skills with hands-on virtual learning experiences presented in tailored learning paths, with supporting materials, videos, labs, and interactive communities of professionals all in virtual digital teams
  • Content Packs, covering the core HPE technology areas like servers, storage, networks, cloud and security along with other adjacent technology
  • Premium Seats, with access to traditional classroom training for licensed seat-holders, enabling progression of training without having to budget for each course separately
  • Discussion Forum communities, for interactive collaboration, mentoring and information sharing as part of the training experience
  • Digital Badging credentials, awarded to individual students who complete training journeys and upskill missions
  • Metrics and reporting, to measure team and individual learning progress
  • Education Consulting experts, to help monitor and guide learning journeys, suggesting content and appropriate next steps according to progress and reporting

Learn more

 

Inside Story on HPC’s Role in Bridges Strategic Reasoning Research Project at CMU

Dana Gardner

Analyst Dana Gardner hosts conversations with the doers and innovators—data scientists, developers, IT operations managers, chief information security officers, and startup founders—who use technology to improve the way we live, work, and play. View an archive of his regular podcasts.

Transcript of a discussion on how Carnegie Mellon University researchers are advancing strategic reasoning and machine learning capabilities using the latest in high performance computing. 

Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect Voice of the Customer podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on digital transformation success stories. Stay with us now to learn how agile businesses are fending off disruption -- in favor of innovation.

Our next high performance computing (HPC) success interview examines how strategic reasoning is becoming more common and capable -- even using imperfect information. We’ll now learn how Carnegie Mellon University and a team of researchers there are producing amazing results with strategic reasoning thanks in part to powerful new memory-intensive system architectures.

To learn more about strategic reasoning advances, please join me in welcoming Tuomas Sandholm, Professor and Director of the Electronic Marketplaces Lab at Carnegie Mellon University in Pittsburgh.


Tuomas Sandholm: Thank you very much.

Gardner: Tell us about strategic reasoning and why imperfect information is often the reality that these systems face?

Sandholm: In strategic reasoning we take the word “strategic” very seriously. It means game theoretic, so in multi-agent settings where you have more than one player, you can't just optimize as if you were the only actor -- because the other players are going to act strategically. What you do affects how they should play, and what they do affects how you should play.

That's what game theory is about. In artificial intelligence (AI), there has been a long history of strategic reasoning. Most AI reasoning -- not all of it, but most of it until about 12 years ago -- was really about perfect information games like Othello, Checkers, Chess and Go.

And there has been tremendous progress. But these complete information, or perfect information, games don't really model real business situations very well. Most business situations are of imperfect information.

Know what you don’t know

So you don't know the other guy's resources, their goals and so on. You then need totally different algorithms for solving these games, or game-theoretic solutions that define what rational play is, or opponent exploitation techniques where you try to find out the opponent's mistakes and learn to exploit them.

So totally different techniques are needed, and this has way more applications in reality than perfect information games have.

Gardner: In business, you don't always know the rules. All the variables are dynamic, and we don't know the rationale or the reasoning behind competitors’ actions. People sometimes are playing offense, defense, or a little of both.

Before we dig into how this is being applied in business circumstances, explain your proof of concept involving poker. Is it Five-Card Draw?

Sandholm: No, we’re working on a much harder poker game called Heads-Up No-Limit Texas Hold'em as the benchmark. This has become the leading benchmark in the AI community for testing these application-independent algorithms for reasoning under imperfect information.


The algorithms have really nothing to do with poker, but we needed a common benchmark, much like the chipmakers have their benchmarks. We compare progress year-to-year and compare progress across the different research groups around the world. Heads-Up No-Limit Texas Hold'em turned out to be a great benchmark because it is a huge game of imperfect information.

It has 10 to the 161 different situations that a player can face. That is one followed by 161 zeros. And if you think about that, it’s not only more than the number of atoms in the universe, but even if, for every atom in the universe, you have a whole other universe and count all those atoms in those universes -- it will still be more than that.

Gardner: This is as close to infinity as you can probably get, right?

Sandholm: Ha-ha, basically yes.

Gardner: Okay, so you have this massively complex potential data set. How do you winnow that down, and how rapidly does the algorithmic process and platform learn? I imagine that being reactive, creating a pattern that creates better learning is an important part of it. So tell me about the learning part.

Three part harmony

Sandholm: The learning part always interests people, but it's not really the only part here -- or not even the main part. We basically have three main modules in our architecture. One computes approximations of Nash equilibrium strategies using only the rules of the game as input. In other words, game-theoretic strategies.

That doesn’t take any data as input, just the rules of the game. The second part is during play, refining that strategy. We call that subgame solving.

Then the third part is the learning part, or the self-improvement part. And there, traditionally people have done what’s called opponent modeling and opponent exploitation, where you try to model the opponent or opponents and adjust your strategies so as to take advantage of their weaknesses. However, when we go against these absolute best human strategies, the best human players in the world, I felt that they don't have that many holes to exploit and they are experts at counter-exploiting.


When you start to exploit opponents, you typically open yourself up for exploitation, and we didn't want to take that risk. In the learning part, the third part, we took a totally different approach than traditionally is taken in AI.

We said, “Okay, we are going to play according to our approximate game-theoretic strategies. However, if we see that the opponents have been able to find some mistakes in our strategy, then we will actually fill those mistakes and compute an even closer approximation to game-theoretic play in those spots.”



One way to think about that is that we are letting the opponents tell us where the holes are in our strategy. Then, in the background, using supercomputing, we are fixing those holes.

All three of these modules run on the Bridges supercomputer at the Pittsburgh Supercomputing Center (PSC), for which the hardware was built by Hewlett Packard Enterprise (HPE).
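(A brief aside for readers who want the math behind “approximations of Nash equilibrium strategies”: in standard game-theory notation, which is not taken from the interview itself, a strategy profile is a Nash equilibrium when no player can gain by deviating alone, and the “holes” Sandholm describes correspond to exploitability, the amount a best-responding opponent could gain. A minimal sketch:)

```latex
% Nash equilibrium: no player i can improve by unilaterally deviating.
% u_i = player i's expected utility, \sigma = strategy profile,
% \Sigma_i = player i's set of strategies.
\sigma^{*} \text{ is a Nash equilibrium} \iff
  u_i\!\left(\sigma_i^{*}, \sigma_{-i}^{*}\right) \;\ge\; u_i\!\left(\sigma_i, \sigma_{-i}^{*}\right)
  \quad \forall i,\ \forall \sigma_i \in \Sigma_i

% Exploitability of player i's strategy in a two-player zero-sum game,
% where v_i^{*} is the game value for player i:
\mathrm{expl}(\sigma_i) \;=\; v_i^{*} \;-\; \min_{\sigma_{-i}} u_i\!\left(\sigma_i, \sigma_{-i}\right)
```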

Gardner: Is this being used in any business settings? It certainly seems like there's potential there for a lot of use cases. Business competition and circumstances seem to have an affinity for what you're describing in the poker use case. Where are you taking this next?

Sandholm: So far this, to my knowledge, has not been used in business. One of the reasons is that we have just reached the superhuman level in January 2017. And, of course, if you think about your strategic reasoning problems, many of them are very important, and you don't want to delegate them to AI just to save time or something like that.

Now that the AI is better at strategic reasoning than humans, that completely shifts things. I believe that in the next few years it will be a necessity to have what I call strategic augmentation. So you can't have just people doing business strategy, negotiation, strategic pricing, and product portfolio optimization.

You are going to have to have better strategic reasoning to support you, and so it becomes a kind of competition. So if your competitors have it, or even if they don't, you better have it because it’s a competitive advantage.

Gardner: So a lot of what we're seeing in AI and machine learning is to find the things that the machines do better and allow the humans to do what they can do even better than machines. Now that you have this new capability with strategic reasoning, where does that demarcation come in a business setting? Where do you think that humans will be still paramount, and where will the machines be a very powerful tool for them?

Human modeling, AI solving

Sandholm: At least in the foreseeable future, I see the demarcation as being modeling versus solving. I think that humans will continue to play a very important role in modeling their strategic situations, just to know everything that is pertinent and deciding what’s not pertinent in the model, and so forth. Then the AI is best at solving the model.

That's the demarcation, at least for the foreseeable future. In the very long run, maybe the AI itself actually can start to do the modeling part as well as it builds a better understanding of the world -- but that is far in the future.

Gardner: Looking back at what is enabling this, clearly the software, the algorithms, and finding the right benchmark, in this case the poker game, are essential. But with that large a potential data set -- the probability set you mentioned -- the underlying computer systems must need to keep up. Where are you in terms of the threshold that holds you back? Is it a price issue? Is it a performance limit, the amount of time required? What are the limits, the governors to continuing?

Sandholm: This amount is necessary to conduct serious absolute superhuman research in this field -- but it is something very hard for a professor to obtain. We were very fortunate to have that computing at our disposal.

Gardner: Let's examine the commercialization potential of this. You're not only a professor at Carnegie Mellon, you’re a founder and CEO of a few companies. Tell us about your companies and how the research is leading to business benefits.

Superhuman business strategies

Sandholm: Let’s start with Strategic Machine, a brand-new start-up company, all of two months old. It’s already profitable, and we are applying the strategic reasoning technology, which again is application independent, along with the Libratus technology, the Lengpudashi technology, and a host of other technologies that we have exclusively licensed to Strategic Machine. We are doing research and development at Strategic Machine as well, and we are taking these to any application that wants us.


Such applications include business strategy optimization, automated negotiation, and strategic pricing. Typically when people do pricing optimization algorithmically, they assume that either their company is a monopolist or the competitors’ prices are fixed, but obviously neither is typically true.

We are looking at how you price strategically, taking into account the opponent’s strategic response in advance. So you price into the future, instead of just pricing reactively. The same can be done for product portfolio optimization along with pricing.

Let's say you're a car manufacturer and you decide what product portfolio you will offer and at what prices. Well, what you should do depends on what your competitors do and vice versa, but you don’t know that in advance. So again, it’s an imperfect-information game.

Gardner: And these are some of the most difficult problems that businesses face. They have huge billion-dollar investments that they need to line up behind for these types of decisions. Because of that pipeline, by the time they get to a dynamic environment where they can assess -- it's often too late. So having the best strategic reasoning as far in advance as possible is a huge benefit.

Sandholm: Exactly! If you think about machine learning traditionally, it's about learning from the past. But strategic reasoning is all about figuring out what's going to happen in the future. And you can marry these up, of course, where the machine learning gives the strategic reasoning technology prior beliefs, and other information to put into the model.

There are also other applications. Cyber security, for example, offers several, such as finding zero-day vulnerabilities. You can run custom and standard algorithms to find them, and which algorithms you should run depends on what the opposing governments run -- so it is a game.

Similarly, once you find them, how do you play them? Do you report your vulnerabilities to Microsoft? Do you attack with them, or do you stockpile them? Again, your best strategy depends on what all the opponents do, and that's also a very strategic application.

And in upstairs block trading, in finance, it’s the same thing: a few players, very big, very strategic.

Gaming your own immune system

The most radical application is something we are currently working on in the lab: medical treatment planning using these types of sequential planning techniques. We're actually testing how well one can steer a patient's T-cell population to fight cancers, autoimmune diseases, and infections better -- not just with one short treatment plan, but with sophisticated conditional treatment plans where the adversary is actually your own immune system.

Gardner: Or cancer is your opponent, and you need to beat it?

Sandholm: Yes, that’s right. There are actually two different ways to think about that, and they lead to different algorithms. We have looked at it where the actual disease is the opponent -- but here we are actually looking at how do you steer your own T-cell population.

Gardner: Going back to the technology, we've heard quite a bit from HPE about more memory-driven and edge-driven computing, where the analysis can happen closer to where the data is gathered. Are these advances of any use to you in better strategic reasoning algorithmic processing?

Algorithms at the edge

Sandholm: Yes, absolutely! We actually started running at the PSC on an earlier supercomputer, maybe 10 years ago, which was a shared-memory architecture. And then with Bridges, which is mostly a distributed system, we used distributed algorithms. As we go into the future with shared memory, we could get a lot of speedups.

We have both types of algorithms, so we know that we can run on both architectures. But obviously, the shared-memory, if it can fit our models and the dynamic state of the algorithms, is much faster.

Gardner: So the HPE Machine must be of interest to you: HPE’s advanced concept demonstration model, with a memory-driven architecture, photonics for internal communications, and so forth. Is that a technology you're keeping a keen eye on?

Sandholm: Yes. That would definitely be a desirable thing for us, but what we really focus on is the algorithms and the AI research. We have been very fortunate in that the PSC and HPE have been able to take care of the hardware side.

We really don’t get involved in the hardware side that much, and I'm looking at it from the outside. I'm trusting that they will continue to build the best hardware and maintain it in the best way -- so that we can focus on the AI research.

Gardner: Of course, you could help supplement the cost of the hardware by playing superhuman poker in places like Las Vegas, and perhaps doing quite well.

Sandholm: Ha-ha. Actually here in the live game in Las Vegas they don't allow that type of computational support. On the Internet, AI has become a big problem on gaming sites, and it will become an increasing problem. We don't put our AI in there; it’s against their site rules. Also, I think it's unethical to pretend to be a human when you are not. The business opportunities, the monetary opportunities in the business applications, are much bigger than what you could hope to make in poker anyway.

Quote4.png

Gardner: I’m afraid we’ll have to leave it there. We have been learning how Carnegie Mellon University researchers are using strategic reasoning advances and applying them to poker as a benchmark -- but clearly with a lot more runway in terms of other business and strategic reasoning benefits.

So a big thank you to our guest, Tuomas Sandholm, Professor at Carnegie Mellon University as well as Director of the Electronic Marketplace Lab there.

Sandholm: Thank you, my pleasure.

Gardner: And a big thank you to our audience as well for joining this BriefingsDirect Voice of the Customer digital transformation success story discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-sponsored interviews.

Thanks again for listening. Please pass this along to your IT community, and do come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on how Carnegie Mellon University researchers are advancing strategic reasoning and machine learning capabilities using high performance computing. Copyright Interarbor Solutions, LLC, 2005-2017. All rights reserved.



FocusedOn.png
 
GravicTitle2.png
Keith.png

Keith B. Evans

Keith B. Evans works in Shadowbase Product Management. Mr. Evans earned a BSc (Honors) in Combined Sciences from DeMontfort University, England. He began his professional life as a software engineer at IBM UK Laboratories, developing the CICS application server. He then moved to Digital Equipment Corporation as a pre-sales specialist. In 1988, he emigrated to the U.S. and took a position at Amdahl in Silicon Valley as a software architect, working on transaction processing middleware. In 1992, Mr. Evans joined Tandem and was the lead architect for its open TP application server program (NonStop Tuxedo). After the Tandem mergers, he became a Distinguished Technologist with HP NonStop Enterprise Division (NED) and was involved with the continuing development of middleware application infrastructures. In 2006, he moved into a Product Manager position at NED, responsible for middleware and business continuity software. Mr. Evans joined the Shadowbase Products Group in 2012, working to develop the HPE and Gravic partnership, internal processes, marketing communications, and the Shadowbase product roadmap (in response to business and customer requirements). A particular area of focus is the patented and newly released Shadowbase synchronous replication technology for zero data loss (ZDL) and data collision avoidance in active/active architectures.

PJHcircle.png

Paul J. Holenstein

Executive Vice President, Gravic, Inc.

Paul J. Holenstein is Executive Vice President of Gravic, Inc. He is responsible for the HPE Shadowbase suite of products. The HPE Shadowbase replication engine is a high-speed, uni-directional and bi-directional, homogeneous and heterogeneous data replication engine that provides advanced business continuity solutions as well as moves data updates between enterprise systems in fractions of a second. It also provides capabilities to integrate disparate operational application information into real-time business intelligence systems. Shadowbase Total Replication Solutions® provides products to leverage this technology with proven implementations. HPE Shadowbase software is built by Gravic, and globally sold and supported by HPE. Please contact your local HPE account team for more information, or visit
https://www.ShadowbaseSoftware.com. To contact the authors, please email: SBProductManagement@gravic.com.



Introduction

A perennial problem in the IT world is how to handle the ebb and flow of user demand for IT resources. What is adequate to handle the demand at 3am on Sunday is not going to be sufficient to cope with the demand at noon on Black Friday or Cyber Monday. One way or another, IT resources must be sufficiently elastic to handle this range of user demand – they must be able to scale. If a company’s IT resources are not able to sufficiently scale to keep up with demand at any given time, a service outage will most likely result, with major consequential impact to the business. As a result, the matter of IT resource scaling is of great concern for IT departments.

What is All this Scale-up/out/sideways/down Business Anyway?

How are IT processing resources scaled? The most obvious answer is to simply add more hardware to an existing system – more processors, more memory, more disk storage, more networking ports, etc. Alternatively, simply replace a system with one that is bigger and more powerful. This approach is known as scaling-UP, or vertical scaling.

The biggest problem with this approach is that of diminishing returns. Most scale-up systems use a hardware architecture known as symmetric multiprocessing (SMP). Simply put, in an SMP architecture, multiple processors share a single block of physical RAM. As more processors are added, contention for this shared memory and other shared resources becomes a significant bottleneck so that less and less actual performance benefit is realized for each processor added. Each additional processor yields less than 1x the power of that processor; the more processors, the more contention and the less incremental benefit. As more and more resources are added, eventually the system is simply unable to scale any further to meet user demand. The same restriction applies when a system is replaced; eventually there will not be a single SMP system powerful enough to meet peak capacity demands.
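
A rough back-of-the-envelope sketch illustrates these diminishing returns. It assumes a simple Amdahl's-law model in which a hypothetical 10 percent of the work serializes on shared resources; real contention varies by workload and hardware.

# Back-of-the-envelope illustration of scale-up diminishing returns.
# Assumes a simple Amdahl's-law model where a fixed fraction of the work
# serializes on shared resources (memory bus, locks); the 10% figure is
# purely hypothetical -- real contention varies by workload and hardware.
def smp_speedup(processors, serial_fraction=0.10):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

def mpp_speedup(processors):
    # Shared-nothing: each added processor contributes (nearly) its full capacity.
    return float(processors)

for n in (2, 4, 8, 16, 32, 64):
    print(f"{n:3d} processors: scale-up ~{smp_speedup(n):5.1f}x   scale-out ~{mpp_speedup(n):5.1f}x")
# With a 10% serialized fraction, the scale-up curve flattens near 10x no matter
# how many processors are added, while the shared-nothing curve keeps climbing.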

Besides the scalability limits, there are other issues with the scale-up approach:

High cost

  • More and more hardware must be added to the system in order to meet user demand, because of the inefficiencies of scale (diminishing returns of additional processors, etc.). Or the system must be replaced with a larger, faster one. In either case, the hardware costs are significant.
  • The system must be sized to serve the highest projected demand, which is a waste of expensive resources at times of lesser demand (probably the majority of the time).

Large failure domain or poor fault-tolerance

  • Loss of a single system will result in a service outage.
  • Greater risk of outage after local incident, since systems are monolithic and cannot be geographically dispersed.
  • Migration to a larger system is usually performed via the “big-bang” technique,¹ which has a high degree of risk of failure, is disruptive, and requires an outage. Such “rip and replace” migrations are difficult to test, as is fallback to an existing system.

Hardware vendor lock-in

  • Additional hardware generally must come from the same vendor as existing components.
  • It is costly to change vendors as the entire hardware and software stacks must be replaced, IT staff must be re-trained, etc.

Scale-out: the Elastic Solution

If vertical scaling has such significant issues, what are the alternatives? First, use a server with a different hardware architecture, a massively parallel processor (MPP). In an MPP architecture, each processor acts like a separate system, with its own memory, disks, and other hardware resources (“shared nothing”). Workload is distributed by the operating system and other software components across the processors, which communicate with each other via a high-speed message bus. Compared with an SMP, an MPP has no contention for shared resources (RAM, etc.); therefore, each added processor delivers nearly 100% of its capacity, and MPP performance scales nearly linearly. The most well-known and successful MPP in the industry is the HPE NonStop server, which can scale linearly from 2-16 CPUs per system. However, what happens when a single system is not sufficient to handle the workload, or better availability is required?

Enter scale-OUT, or horizontal scaling. With scale-out, additional compute resources are provided by simply adding more servers, with the workload distributed between them (Figure 1). A scale-out architecture has the same characteristics and benefits as an MPP. In fact, an HPE NonStop server can be considered a scale-out system in a box. However, a scale-out architecture is unconstrained in terms of how many additional servers can be added, or the type of processors employed within each server (SMP or MPP). For example, a single HPE NonStop server can first scale-out by adding more CPUs, then by adding more NonStop servers to the network (up to a total of 255 servers). A scale-out architecture is able to meet much higher user demand levels than a scale-up architecture, because there essentially is no limit to the number of servers that can be incorporated.

Besides unlimited scalability, there are other benefits of the scale-out architecture over the scale-up architecture:

  Figure 1: Scale-up vs Scale-out

Figure 1: Scale-up vs Scale-out

Better capacity utilization

  • A scale-out configuration does not suffer the resource contention issues of an SMP; each additional processor delivers its full capacity. Hence, fewer system resources are required for a given workload than for a scale-up SMP system. A few smaller and cheaper servers can handle the same workload.
  • It is easier and more cost effective to add (and remove) additional systems as load increases (or decreases). With a scale-up architecture, the extra capacity is wasted when not in use.
  • Less overall capacity is lost when a failure occurs, because servers are smaller.

Lower cost

  • It is incrementally much cheaper to add additional server capacity than to replace a single server with a larger, faster one; the existing hardware investment is preserved.
  • Additional compute resources can be added via cloud service providers on demand as required, and released when no longer needed.

Excellent availability characteristics

  • Since multiple systems are employed, the failure of any one does not result in a total service outage.
  • Multiple servers can be geographically dispersed, which reduces outage risk from a localized incident.
  • Zero downtime migration – hardware and software can be upgraded without service interruption and at a much lower risk. If necessary, upgrades can be incrementally performed while existing servers are maintained and leveraged as a fallback.

No hardware or software vendor lock-in

  • Additional servers can be added to an existing scale-out architecture regardless of vendor.

But What About the Application?

Good question. Scale-up and scale-out architectures are very different from an application point-of-view. With a single large system (a vertical scaling model), like an SMP architecture, all applications run on that system and can access the same shared memory. This architecture tends to lead to monolithic application processes, which run multiple parallel threads and use shared memory as the primary method for sharing data and context/state between the threads/processes.

This type of application model is adequate as long as the SMP system can still handle user demand. But at some point the system's limits will be reached, and it will be necessary to move to a scale-out solution, migrate the application (with much difficulty), and take advantage of the unlimited scalability it provides. Therefore, from the outset, it is a best practice to write applications for a scale-out architecture and be prepared to scale (up or down) when needed.

Applications written for scale-out are easily spread across multiple systems, and workload can be distributed across any instance of the application process and on any system. As demand increases, it is simple to first instantiate more application processes across existing systems, and then meet demand across additional systems, if necessary.

This application scalability is primarily achieved by avoiding shared resources (memory, etc.) and by not maintaining state internally (stateless servers). Ignoring either of these techniques forces requests to be serviced by particular application instances or systems, which prevents workload from being distributed equally and thereby limits scalability. Rather than offering all user services in a single monolithic application, scale-out applications provide them via many smaller process instances. These instances interoperate via inter-process communication (IPC), each offering a subset of the whole and grouping “like” services together (e.g., separating long-running requests from short-running requests), which optimizes workload distribution and improves both average response times and scaling ability. Small-footprint processes are also quick to spin up and down as user demand rises and falls, maintaining the desired throughput and application response times; starting additional large processes to add capacity is not ideal if a system is already under heavy load.
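
The following minimal Python sketch illustrates the stateless pattern just described: state lives in an external store rather than in process memory, so any instance on any system can serve any request. The class names and round-robin dispatcher are illustrative only, not part of any particular product.

# Minimal sketch of the stateless pattern: the handler keeps no session state in
# process memory, so any instance on any system can serve any request.
from dataclasses import dataclass
import itertools

@dataclass
class Request:
    session_id: str
    payload: str

class SessionStore:
    """Stand-in for externalized state (in practice, a shared/replicated database)."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key, {})
    def put(self, key, value):
        self._data[key] = value

def make_handler(instance_name, store):
    def handle(req: Request) -> str:
        state = store.get(req.session_id)     # fetch state; never cache it locally
        state["last_payload"] = req.payload
        store.put(req.session_id, state)      # write state back to shared storage
        return f"{instance_name} handled {req.payload} for session {req.session_id}"
    return handle

store = SessionStore()
instances = [make_handler(f"node-{i}", store) for i in range(3)]
dispatcher = itertools.cycle(instances)       # trivial round-robin "route anywhere"

for n in range(5):
    req = Request(session_id="abc", payload=f"msg-{n}")
    print(next(dispatcher)(req))              # any instance can pick up the session

Of course, the shared store itself then becomes the critical piece, which is exactly the point of the next section.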

The Elephant in the Room

Therefore, applications can be designed for scale-out, but there is an elephant in the room. As previously discussed, applications should be stateless and should not use shared memory, but at some point they have to access shared data. It does not significantly improve scalability or availability if workload can be distributed across multiple application server instances, yet each instance is still forced to access a single database residing on a single server. Similarly, partitioning the data across multiple systems provides only partial relief. To maximize scalability and availability in a scale-out architecture, shared data must be locally available to all systems participating in the application, and each copy of the data must be kept consistent with all other copies as the data is updated, regardless of which system the updates are executed on. Enter real-time data replication.

With transactional real-time data replication implemented between all systems participating in the application, a copy of the database can be distributed to each system, and all copies are kept consistent as data is changed on any system. This distribution optimizes scalability by (a) allowing user requests to be routed to any system based on load (the so-called “route anywhere” model), and (b) scaling the database as well as the application (i.e., removing the database as a source of contention and hence a bottleneck). If any system fails, the other systems have up-to-date copies of the database on which processing can continue, thereby maximizing application availability. This applies not only to unplanned outages, but also to planned system maintenance, which can be performed serially across systems so that no application outages ever need to occur. It even applies to system and software upgrades, allowing for zero downtime migrations (ZDM).

The highest levels of scalability (capacity utilization) and availability are obtained by using an active/active application architecture as described above, where user requests are distributed and executed on any system. The scale-out principle also may be applied to active/passive and sizzling-hot-takeover (SZT) configurations. In these configurations, all update transactions are executed on a single active system, but scalability can still be achieved via the use of data replication from the active system to multiple passive systems, which are then used for read-only or query-type applications. A good example of such an architecture is a so-called “look-to-book” application. Multiple read-only nodes are used to look up information (e.g., airline/hotel seat/room availability, or stock prices), while the active system is only used when an actual transaction is executed (e.g., an airline/hotel reservation, or a stock trade). It thereby offloads the active system and scales out the workload across multiple systems without requiring the application to run fully active/active.²
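
Here is an illustrative Python sketch of the look-to-book routing just described: reads fan out across replicated query nodes, updates go only to the active node, and a replication step keeps every copy consistent. The replication here is a trivial in-process stand-in for a real engine such as HPE Shadowbase.

# Illustrative look-to-book routing: "looks" (reads) fan out across replicated
# query nodes, "books" (updates) go to the active node, and a replication step
# pushes each change to every replica. The replication is a toy stand-in.
import random

class Node:
    def __init__(self, name):
        self.name = name
        self.db = {}

active = Node("active")
replicas = [Node(f"query-{i}") for i in range(3)]

def replicate(key, value):
    for r in replicas:                       # keep every copy consistent after an update
        r.db[key] = value

def book(key, value):                        # update path: active node only
    active.db[key] = value
    replicate(key, value)
    return f"booked {key} on {active.name}"

def look(key):                               # read path: any query node, chosen by load/locality
    node = random.choice(replicas)
    return f"{node.name} -> {key} = {node.db.get(key)}"

print(book("room-101", "available"))
print(look("room-101"))
print(book("room-101", "reserved"))
print(look("room-101"))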

Scale-out Example: Telco Phone Billing and Provisioning System

  Figure 2: Telco HLR Scale-out Architecture

Figure 2: Telco HLR Scale-out Architecture

An example of a scale-out architecture is shown in Figure 2, demonstrating the use of both active/active systems and multiple read-only nodes to achieve continuous availability and horizontal scaling. A major international telco realized that its Home Location Register (HLR) application could no longer support the requirements of provisioning and managing smart phones, since the management of smart phone features is far more complex than for older, simpler cell phones. The company therefore implemented a new distributed active/active HPE NonStop server system to provision smart phones and to manage its more complex billing and service requirements. To handle the ever-increasing load, multiple scale-out read-only query (“subordinate”) nodes were also implemented alongside the active/active pair that serves as the continuously available “master” system; from these nodes the HLRs obtain the smart phone provisioning information required to establish calls and to verify and bill for services.

HPE Shadowbase technology provides the data replication infrastructure between these multiple nodes to support both the continuous availability of the active/active pair and to keep the data on the query nodes synchronized with the database of record. Both active NonStop nodes and all of the query nodes share exactly the same data. Though the master system load is relatively small, the query load is intensive as there must be an HLR query for each call being established. Since the master database is replicated to the query nodes, the master system is not burdened with query processing, and the architecture can easily scale to handle any load as the number of smart phones increases. The query nodes are distributed near population centers to improve query performance, which shortens call establishment time. In the initial deployment, the telco is using six query nodes. As activity increases, more query nodes can be easily added to scale-out the application without any interruption to existing service, which would not be possible with a scale-up architecture.

Summary

Keeping up with user demand is a significant challenge for IT departments. The traditional scale-up approach suffers from significant limitations and cost issues that prevent it from satisfying the ever-increasing workloads of a 24x7 online society. The use of MPP and scale-out architectures is the solution, since they can readily and non-disruptively apply additional compute resources to meet any demand, and at a much lower cost. The use of a data replication engine to share and maintain consistent data between multiple systems enables scale-out application and workload distributions across multiple compute nodes, which provides the necessary scalability and availability to meet the highest levels of user demand now and into the future.

¹The “big-bang” technique of migration refers to the classic (and outdated) approach of requiring an outage of the primary environment in order to load, start, and cutover to the new environment. There are now newer techniques available that reduce the inherent risk of the big-bang approach, by allowing the new environment to be built, tested, validated, loaded and then synchronized before the cutover occurs. These techniques eliminate or at least dramatically reduce application outage time for the migration at substantially reduced risk. For more information, see Using HPE Shadowbase Software to Eliminate Planned Downtime via Zero Downtime Migration.

²For additional information on active/active, sizzling-hot-takeover, and active/passive business continuity architectures, see the white paper, Choosing a Business Continuity Solution to Match Your Business Availability Requirements.


CoverStory.png
 
GaryHeader2.png
GaryThome_circle2.png

Gary Thome

Vice President and Chief Technologist for the Software-Defined and Cloud Group at Hewlett Packard Enterprise

One size fits all was never a good idea for consumers buying clothing, and it certainly doesn’t work well for purchasing IT. That’s because every business is different. Each enterprise has unique business processes, security concerns, customer demands, and financial limitations. Because of these differences, enterprises worldwide are turning to a better IT strategy--hybrid IT.

Hybrid IT complexity
slows digital transformation

Hybrid IT gives enterprises more options, letting them choose public cloud, private cloud, on-premises solutions – or a combination of all three. It also lets them blend CapEx, OpEx, as-a-service, and pay-per-use consumption models.

Sound complicated? Well, it can be. That’s part of the reason why businesses are accelerating their need to develop and implement a successful digital transformation strategy. Hybrid IT is helping enterprises pursue digital transformation, but as you would expect, the picture is not completely rosy.

In 2017 IDC conducted in-depth interviews at Fortune 1000 enterprises that had been disrupted by digital transformation. According to the report, The Future of Hybrid IT Made Simple, ninety percent of those interviewed said that their firms have deployed a hybrid IT strategy consisting of on-premises infrastructure and multiple cloud providers. IDC concluded that this multi-cloud strategy will eventually lead to unmanageable asset sprawl--unless a comprehensive hybrid IT strategy is implemented.

Enterprises are focused on two things: keeping their business running and supporting digital transformation initiatives that will keep them competitive in the market. Hybrid IT allows them to do that—but at a cost. For IT operations staff, hybrid IT is more complex in terms of deployment and ongoing management. Initial setup takes more time, and training people to use different portals is time-consuming and costly. Application developers struggle with inconsistencies in services across private and public clouds. And line-of-business (LOB) executives are concerned about fragmented business controls and governance standards, as well as escalating costs.

Although a hybrid IT strategy increases options, it also increases complexity. An enterprise must decide the best place to run workloads and then efficiently manage and track hidden expenses for a variety of workload deployments. Because speed and agility are vital for today’s digital business, hybrid IT’s complexity problem must be solved.

Photo1.png

Innovating on behalf of customers

To speed digital transformation while simplifying the many IT options available, enterprises are turning to industry experts. One such expert is Optio Data, a Data Strategy company.

Optio Data helps customers do business faster and more efficiently by delivering state-of-the-art IT solutions. Achieving this goal requires an innovation mindset, so Optio Data is always exploring new ways to solve its customers’ unique challenges.

In the fall of 2017, Optio Data agreed to beta test HPE OneSphere, an as-a-service hybrid-cloud management platform that simplifies management of cloud environments and on-premises infrastructure. They were looking for a simple-to-deploy and easy-to-use solution that would help a wide array of customers move toward digital transformation, while evolving legacy environments.

Digital transformation for
legacy infrastructures

Optio Data wanted a solution that addressed the needs of customers who required not only the fast and flexible development options that a cloud-based solution can provide, but also the security and control of keeping their solution on premises. HPE OneSphere provided the answer.

HPE OneSphere’s open environment enables the use of public, private, and third-party service catalogs, providing customers with multiple options to both fulfill their user needs and meet their SLA and business obligations.

Simplifying IT management

Optio Data also needed to address the complexity of managing enterprise IT environments, especially those that incorporate hybrid IT capabilities. “A lot of our customers are surrounded by competing obligations. They’ve got so many things going on that the technology typically isn’t the issue,” said an Optio Data engineer. “They’ve got their own internal pressure from management. They just need to have something that works.”

HPE OneSphere simplifies IT management because it brings multiple IT environments into a single view. The Optio Data engineer went on to explain, “HPE OneSphere definitely makes internal processes simpler. You no longer have five requests to get a new server stood up. You have one request to get access to a pool of resources and bang, you’re done.”

Consolidated resources save time, money

A unified view in HPE OneSphere means that an IT manager can set up secure environments that give different teams more self-sufficiency to use their IT resources as needed, without additional requests to IT. This functionality not only reduces requests, but also cuts management overhead, allowing people to work faster.

For example, HPE OneSphere enables the DevOps capabilities needed to support fast application delivery, including detailed usage and cost metrics, as well as self-service access to users across lines of business, development, and IT.

One feature in particular that differentiated HPE OneSphere for Optio Data was its fast setup and ease of use. From onboarding the environment to actually using the application, HPE OneSphere was up and running in a single day.

Photo2.png

 “My first impression of HPE OneSphere was that it’s very quick and easy to get up and running,” concluded the Optio Data engineer. “I’m traditionally used to this type of solution taking days, weeks, or even months to set up and be ready to use. HPE OneSphere was ready to use after roughly a day.”

Controlling the complexity of hybrid IT

Digital transformation is key to helping businesses succeed in this rapidly changing world. And a comprehensive hybrid IT strategy is supporting that digital transformation – as long as complexity is controlled. HPE is excited to partner with Optio Data, providing the expertise, software, and hardware that businesses need to simplify their hybrid IT strategy.

To listen to Optio Data talk about their HPE OneSphere beta test experience, watch this video:

Learn about HPE’s approach to controlling hybrid IT complexity by checking out the HPE website, HPE OneSphere.

 


 
GREENLAKE.png
 
HPEGreenLake.jpg
KittyAuthor.jpg

Edge Compute Made Easier with HPE GreenLake

by Kitty Chow
HPE Pointnext

Compute power is moving closer to where the action is. After years of becoming increasingly concentrated at centralized locations – in data centers or in the cloud – it’s moving to the geographically dispersed locations where data is generated. It’s moving to factories, branch offices, oil rigs, cargo bays, maintenance facilities … anywhere there’s a need to rapidly convert data into insight and action. In short, it’s moving to the Edge.

Everyone Is Crazy About the Edge

KITTYIMAGE1.jpg

Edge computing is helping businesses accelerate cycle times between harvesting data and acting on it. If you’re running safety-related telemetry on equipment on an oil rig, for example, you can’t afford the latency involved in relaying that data to the cloud or a datacenter. You want actionable information in real-time or near-real-time.

Edge infrastructure is also helping companies to cut costs by reducing bandwidth needs. Let’s say you need to analyze massive amounts of data from a video surveillance system. It makes sense to do that at a point as close as possible to the data source, bypassing the potentially prohibitive costs of connectivity to a central processing facility, and minimizing the attack surface for security threats at the same time.
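
As a minimal illustration of the bandwidth argument, the Python sketch below aggregates raw telemetry at the edge and forwards only a compact summary and alert count upstream. The field names and threshold are hypothetical.

# Minimal sketch of the bandwidth argument: instead of streaming every raw
# reading to a central site, an edge node summarizes locally and forwards only
# compact aggregates and alerts. Field names and the threshold are hypothetical.
import random, statistics

def read_sensor_batch(n=1000):
    # Stand-in for raw telemetry from edge equipment (e.g., vibration readings).
    return [random.gauss(mu=5.0, sigma=1.0) for _ in range(n)]

def summarize(readings, alert_threshold=8.0):
    return {
        "count": len(readings),
        "mean": round(statistics.mean(readings), 3),
        "max": round(max(readings), 3),
        "alerts": sum(1 for r in readings if r > alert_threshold),
    }

raw = read_sensor_batch()
summary = summarize(raw)
print(f"raw readings: {len(raw)} values; forwarded upstream: {summary}")
# One small dictionary travels over the WAN instead of a thousand samples,
# and anything requiring immediate action is flagged right at the edge.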

Designing and standing up the infrastructure for an Edge solution requires an understanding of the highly complex Internet of Things product environment in order to capitalize on IoT-enabled benefits. By its nature, the Edge pushes compute infrastructure into new environments where expertise may not be available to monitor and manage it, or to intervene if something goes wrong.

HPE GreenLake Edge Compute enables you to break through these barriers. It’s a proven, comprehensive solution that includes hardware, operating system and expertise in a per-device, pay-as-you-go model. HPE GreenLake Edge Compute enables companies to power initiatives that require edge compute with an end-to-end solution monitored, operated and managed by HPE. We can help you validate, design and implement your Edge solution, based on your business requirements. We take care of a host of management details – from monitoring bandwidth utilization, to troubleshooting folder access rights, to installing backups and antivirus updates – so you can concentrate on maximizing your business outcomes. With HPE GreenLake Edge Compute, you get all the benefits of HPE’s ability to run and operate enterprise-critical environments where data availability, security and up-time are important.

With HPE GreenLake Edge Compute, you can:

  • Sidestep large upfront investments by leveraging pay-as-you-go pricing. You pay one monthly fee, based on the number of devices you install, to cover hardware, operating system and services. And you can expand as you grow.
  • Accelerate your journey to the Edge. As a true end-to-end solution, HPE GreenLake Edge Compute quickly adapts to your implementation requirements. We can take you all the way from proof of concept, to pilot, to full rollout so you can minimize time-to-insight and get a fast start on reaping the benefits of the Edge.
  • Focus on your core business rather than your infrastructure. HPE Pointnext provides the expertise you need, so you don’t have to find new resources or train up your staff. HPE GreenLake Edge Compute takes on the nuts-and-bolts operational tasks so you can concentrate on getting the most business value from your solution.

If you’re looking for an easier, faster route to edge computing, you’ll want to investigate HPE GreenLake Edge Compute. Watch this video for a quick introduction: 

You can also get an overview from my interview with blogger Jake Ludington at HPE Discover 2017:

Additional Resources: 

Video: For a deeper dive on this topic, see this Coffee Talk presentation to an audience of industry influencers and bloggers. 

More on HPE GreenLake: Visit "HPE GreenLake: On-premises, pay-per-use solutions for your top workloads that enable consumption-based IT."

Article: For more from Kitty, see "3 Key Milestones to Building a Smarter, Digital Workplace."

 

Kitty Chow

Worldwide Director for Intelligent Edge solutions. Kitty oversees the global solutions strategy and management within HPE’s Pointnext business for the Intelligent Edge.

 

XIC-Markeing-DiagramTitle.png

XYPRO is thrilled to announce our Identity+ Alliance partnership with SailPoint, a proven leader in enterprise Identity Management. 

Working closely with our customers and SailPoint, we are also excited to announce the general availability release of XYPRO’s newest solution, NonStop Identity Connector for SailPoint (XIC). 

callout.png

Using XIC, HPE NonStop customers can now integrate their NonStop servers with their SailPoint IdentityIQ, enabling seamless participation within the enterprise.

Controlling access to a company’s servers and applications is critical to security. Without centralized identity management, onboarding and off-boarding become manual processes, which is not only time-consuming but also introduces security risks and compliance concerns. NonStop Identity Connector gives you complete control, from a single enterprise location, over who has access to your NonStop servers.

Whether you need to provision users on one or multiple HPE NonStop servers, XIC elegantly integrates your NonStop servers with your SailPoint Identity & Access Management (IAM) enterprise solution. Achieve complete user governance, provisioning, and reconciliation of HPE NonStop user accounts directly from SailPoint.

SailPoint’s industry-leading access certifications, governance controls, and logical workflows allow NonStop customers to take full advantage of capabilities that have long been available for other platforms.

XYPRO’s XIC solution simplifies requirements and compliance activities. When an identity is disabled through SailPoint IdentityIQ, the corresponding account is immediately disabled on all NonStop servers on which the identity was provisioned. When that identity is removed using IdentityIQ, the account is immediately removed from all NonStop servers, ensuring the removal of stale accounts, improving your relationship with your auditors, and strengthening your security procedures at the same time.

XIC comes packaged as a lightweight, easy-to-deploy executable built on a microservice framework. Simply configure the service XML with the specific HPE NonStop server properties and run the deployer. XYPRO’s NonStop Identity Connector deploys quickly in a Java Virtual Machine (JVM) on OSS. No other software is required. Installation is simple, quick, and secure.

We are excited to bring this new partnership and solution to you.  We welcome your feedback and as always, thank you for your business.

To learn more about XIC, please contact your XYPRO Account Executive or visit www.xypro.com/identity.

About XYPRO Technology Corporation

XYPRO offers 35 years of knowledge, experience and success in providing HPE NonStop information systems tools and services. Businesses  that manage and transport business-critical data on a large scale like payments processors, financial institutions, retailers and telcos turn to XYPRO for the very best solutions in Security, Risk Management, and Compliance. At XYPRO we believe that no data is more important than your data and we protect your data like it’s our own, because it is.

Steve Tcherchian, CISSP

CISO and Director
of Product Management
XYPRO Technology
www.xypro.com
@SteveTcherchian
@XYPROTechnology

 
 

DIDYOUKNOW.png
 
SynergyBanner2.png

There are always budgetary pressures in the IT business, and customers who want certain kinds of equipment quickly often can't get the servers, storage, and switching that they need when they need it. Those shopping for gear made by Hewlett Packard Enterprise may not know it, but they can work with their HPE resellers and an upstream distributor called Synergy Associates to get equipment fast – and at a good price, too.

Gary Dean, who is president at Synergy, and Rodger Swanson, the company's executive vice president, started the company back in 1998 after spending many years at GE Capital IT Solutions. The dot-com boom was just getting going back then, and Compaq was the dominant supplier of X86 servers, followed by IBM, so Synergy initially started out as a traditional reseller offering products to end user customers with these two product lines. All server makers have products that come off the factory line and end up being used for development and testing internally, as well as for proofs of concept among customers and trade show demonstrations among partners. There is another pool of machinery that comes back because deals change after machines are manufactured and shipped, and sometimes excess inventory comes back from the field untouched, too.

Rather than inserting these products into the secondary market, where their full value – both technically and economically – could not be realized, Compaq cleverly started an in-factory remanufacturing program. They took the gear back, disassembled it down to the components, reassembled it in more generic configurations, and put it through the same manufacturing and test procedures as new equipment: the same factories, people, and processes. Hence the name, Renew.

While this was a smart approach to getting the most value out of this gear, the company struggled to find the best route to market for its new Renew program. It took a while, but after Hewlett Packard bought Compaq in 2001 and the economy went through two more recessions, Hewlett Packard Enterprise, as it is called now, got it right.

"Initially, the program went only to the major distributors," recalls Swanson. "While intuitively sound, the problem with this approach was adding to their already vast product offerings. Renew products never received the mind-share required and as a result, the product languished on their shelves. So, this became a problem not solved."

"Compaq was all over the map with Renew, trying to sell to end users, trying to sell to partners," Dean chimes in. "Over time, they decided to streamline their distribution model to emulate their new product offerings. With Synergy's successes with their customer base, and in the channel, Compaq offered us the opportunity to evolve into a distributorship. We grabbed it and ran with it."

Synergy saw a big opening here to carve out its own niche, so it transitioned in 2008 from a VAR selling new HPE and IBM gear to selling Renew to HPE VARs (partners). This direction placed Synergy on top of the distribution channel as a Tier One distributor of HPE Renew with its own unique supply chain of machines. The 2008 recession put pressure on technology budgets bigtime back when Synergy was making this transition, so resellers had to get clever about doing deals that saved customers money. Gradually, they found Synergy's Renew inventory at substantially lower costs, which helped customers cut their budgets while at the same time giving Synergy and those downstream HPE reseller partners ways to conduct business in a downturned economy. 

Based about 15 miles west of Minneapolis in the Minnesota town of Medina, Synergy has grown to 30 employees and has an asset inventory of products that range from $5.5 million to $7 million at any given time. Depending on the state of the economy and the volume of gear that is pushed through proofs of concept, tradeshows, and test/dev environments and how much excess inventory comes in from deals that change, Andrew Wiese, senior enterprise account manager at Synergy, estimates that somewhere between $100 million and $150 million worth of servers, storage, and switches goes through the HPE Renew process each year.

Synergy is centrally located in the United States, which is convenient when it comes to serving the 26,000 and growing authorized HPE partners in the United States. The products from Synergy that pass through the Renew program compete head-to-head with the new products from the major distributors – Arrow, Tech Data, Avnet, Synnex, and Ingram – who push billions of dollars in HPE products each year. To be sure, these companies have a lot of downstream partners. But they can't always move fast, and they don't always have the ability to configure exactly what customers want.

"We are at the top as a tier one distributor for HPE Renew, but we are considerably smaller compared to the major distributors – which means we have to be versatile and fleet-of-foot," says Swanson. "We like to send resellers their quotes in less than an hour, and it is usually less than that. If we get an order before 4:30 PM, we ship that night. And we would ship later than that if the carriers didn't have a cutoff time."

That speed often matters more than anything else. Hurricane Harvey provides an example.

"We all know about the damage that Hurricane Harvey wreaked, and we are still feeling the effects," Swanson says. "We got calls from a number of resellers, one had a customer in Houston that had an entire datacenter underwater, and they, in turn, had hundreds of clients that were down as a result. The reseller was a long-term customer of ours, and they were hoping that we could pull a rabbit out of a hat to replace the datacenter, which had dozens and dozens of fully configured servers. And, they wanted to know the chances of getting the machines the next day or the day after. By the second day after that call, they had the machines, fully configured, and were installing them, and we followed up with a second shipment that they needed for redundancy. All of this happened within a five-day period, during this same time period the ecstatic Reseller commented to us that they were just starting to receive quotes back from the major distributors they reached out to as well.”

This fast response means that HPE resellers can depend on Synergy in a pinch. "Not everybody is getting the same attention in the channel," says Dean, "and we try to offer our customers opportunities to win deals in a competitive market. It is really a niche business with a tremendous value prop. Our challenge is when people haven't heard of this 20 plus year old program."

While it has been around for two decades, the Renew program is something that a lot of HPE customers – and their resellers, too – have not heard of. Once HPE ships out boxes for test/dev, proofs of concept, or trade show use, or gets some machinery back from customers who have changed their orders or from resellers who want to change their stock, it cannot distribute those boxes as new anymore. By breaking down the machinery to its component parts and putting it through the same 23-step manufacturing and testing processes as are used for new equipment, it can be badged as Renew – not really used, and not quite new, but very close. And because the Renew remanufacturing process is run by HPE itself, and the resulting machinery is given a new serial number and is eligible for the same HPE Support Services, or what used to be Care Packs, it can be consumed as if it were new even though it costs nowhere near as much as new gear. Equally importantly, resellers do not have to go through the deal registration process with HPE. They can just kick out a quote that is ready for a purchase order and push the inventory.

While time is money when it comes to getting infrastructure, money is also money, which is why HPE shops should insist that their reseller partner look upstream to the Renew program instead of just relying on new equipment that comes down from on high from the major distributors.

Here is an example. "One of our partners had a large end user out in California, and they were really tight on budget and they wanted to save 10 percent off of a $400,000 c-Class blade server deal," Wiese recalls of a recent deal. "We got the price down to $356,000, and HPE new could not really do it."

The c-Class blades are still popular with HPE customers who are not yet sure they are ready for the Synergy environment. (Meaning the HPE Synergy composable infrastructure line, not Synergy the company fronting the HPE Renew program.) The c-Class illustrates how this works, and the savings are substantial, according to Wiese. "With the interconnect on the back end of the c-Class machines, our pricing is 65 percent lower than list. With this example, we can offer significant savings at the channel level. Dealers going to HPE and doing deal registration can on occasion see better prices from Synergy, depending on the configuration. HPE customers then benefit from the VAR savings, and with full HPE warranty, everybody wins."

The wonder is why everyone hasn't figured out this Renew gear exists and why Synergy is not out of stock all the time. (The more people know about Synergy and the HPE Renew program, the closer that day will come.)

HPE Renew products are not used, and they are not old inventory, either, explains Wiese. "With HPE Renew – and this is why it can compete against new equipment – it is all new-generation equipment. It follows behind HPE's new equipment by about three to six months. We currently carry the latest Gen10 and Gen9 products and are no longer stocking anything older. We are able to provide supplies of equipment for about a year and a half after the end of life of a previous generation, but after that, whenever HPE runs out of that stock, we can't sell it any longer, because they are our source."


About 90 percent of the business done by Synergy is for servers and their support contracts and, thanks to a deal with another distributor, Microsoft systems software licenses; the remainder is split across storage and networking, including Aruba. The ProLiant DL360 and DL380 servers, the workhorses of the modern datacenter, are common in deals, but for every ten of these rack-mounted machines, Synergy sells a ProLiant tower server of some fashion to an SMB shop or to a large enterprise looking for remote office gear. With an average deal at Synergy of one, two, or three servers, the savings can be as high as 20 percent to 40 percent for end users – with margin still left over for the reseller.

Synergy sells a lot of MSA 1040 and MSA 2040 arrays, resells the full line of HPE Ethernet switches, and is starting to get into Aruba edge networking, too. The company can also sell component parts, from memory sticks, disk and flash drives, and network interface cards to power supplies and cables.

"The trick is getting people to understand what Renew is and how it is different from traditional refurbished equipment," says Wiese. "People get confused. For instance, in the United States, refurbished HPE equipment comes from HPE Financial Services, and they may even have ProLiant Gen9 equipment, but it is off failed lease or has been used and it does not have the remanufactured process standing behind it. There may be somebody else's warranty behind it or Support Services attached to it. It has been in production, whereas the Renew product hasn't been. This is not like a recertified car. Renew equipment is very lightly used, and it is rebuilt to be as good as new."

Given all this, it is worth telling your HPE partner to give Synergy a call the next time you need hardware. The worst thing that can happen is you get the equipment you want quicker and for less money.