
Editor’s Letter

 “It’s harder to stay on top than it is to make the climb. Continue to seek new goals.”
- Pat Summitt

 

Welcome to the Fall issue of Connect Converge

Recently, many of us took the journey to HPE Discover 2018. This year’s event was, without doubt, a hub of innovation, with HPE President and CEO Antonio Neri delivering a message that redefines how HPE will innovate in the years to come. Neri’s boots-on-the-ground approach to the future is an affirmation to partners, customers, and HPE employees.

Once again, I believe I can speak on behalf of the Connect Community and say that our Connect Tech Forums held in the Connect booth were a great success. The user group forums are an integral piece of community collaboration, allowing influencers, experts, and practitioners to address current technology trends and disruptions. A big shout-out to all who shared their technical prowess and to all the longtime and new members who stopped by the booth. It was fun, and sometimes heart-wrenching, as we took a break between sessions to watch the FIFA World Cup! We are ramping up for Discover Madrid, and if you have a forum topic you are interested in submitting, please contact us at info@connect-community.org.

Speaking of influencers, read Calvin Zito’s (aka the HPE Storage Godfather) feature article, “HPE Nimble Storage – A year later.” Hear what customers are saying and how HPE Storage is delivering winning outcomes to valued customers. Goal!!

Read on for more technology news, insights and how-to content.

See you in Madrid!

Stacie Neall
Managing Editor
@sjneall


President’s Letter

Greetings HPE Community,

I hope everyone thoroughly enjoyed their summertime and is refreshed going into the Fall!

We at Connect started our summer in hot Las Vegas for the HPE Discover show. As usual, HPE presentations were in full force, and we had our first official Vegas keynote from Antonio Neri as CEO. He wooed the crowd with customer stories and announced HPE’s $4 billion investment in the intelligent edge. Over on the show floor, HPE booths were as popular as ever, with a good number of attendees making their rounds.

Over at the Connect booth there were multiple presentations being given, covering all parts of HPE. During the in-between times (OK, during the presentation times as well) we had a big screen in our Connect Lounge playing all the World Cup matches of the day. It proved to be quite the popular hangout spot, and we got to know quite a few attendees that way. Overall, HPE Discover Vegas was a great success for both Connect and HPE!

Up next on the Connect calendar is the second-ever Blockchain Community Forum. After a successful kickoff in London in June, we have decided to expand to New York City with an excellent agenda of presentations by HPE and its partners. It will be held on September 27th at the HPE Chelsea office. Please visit the events page of the Connect website for more information and registration.

Please enjoy this issue of C2 and we look forward to running into you at an event soon!

Navid Khodayari
Idelji
Connect Worldwide President

 

 

Chris Purcell

Chris Purcell has 29+ years of experience working with technology within the data center. He is currently focused on integrated systems (server, storage, networking, and cloud), which come wrapped with a complete set of integration consulting and integration services.

You can find Chris on Twitter as @Chrispman01 and @HPE_ConvergedDI, and you can find his contributions to the HPE CI blog at www.hpe.com/info/ciblog.

Hurricanes, floods, cyberattacks, and simple human error — I’m sure you’ve heard your share of these types of data center disaster stories. Some disasters are predictable and give you plenty of time to prepare. Businesses in the “tornado belt,” for example, experience a much higher probability of weather-related outages in certain months and can plan ahead. Other businesses, unfortunately, get completely blindsided. For the unprepared, recovery often proves to be impossible.

So why doesn’t every IT department have a bullet-proof disaster recovery (DR) plan in place? Typically, organizations have a myriad of perfectly good reasons: other projects take priority, current backups seem to be good enough, or staff is unavailable to work on a DR strategy. Even businesses with a DR plan are at risk if the plan is untested or complicated to operate, or if recovery doesn’t happen fast enough to mitigate damages.

 

Keep it simple

The reality is that preparing for disaster recovery can be daunting, but you can create a solid plan in a few short steps if you invest some time up front. To begin with, select a solution that is easy to deploy and can provide a simple recovery process. You’ll benefit most from a flexible infrastructure that is easy to maintain when future adjustments are needed. Work backwards from there to build out a recovery plan to protect your data. Once your plan is in place, test it regularly so it can be executed by almost anyone — if you’ve chosen a simple solution, testing should require very little time and effort. 
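
To make “test it regularly” concrete, here is a minimal sketch of how a plan’s recovery targets can be codified and checked automatically. It is illustrative only: the workload names, thresholds, and backup timestamps are invented, and a real script would pull them from your backup tool’s API rather than hard-coding them.

```python
from datetime import datetime, timedelta, timezone

RPO = timedelta(hours=1)  # assumed maximum tolerable data loss

# In a real script, these timestamps would come from the backup system's API.
last_backup_times = {
    "erp-db": datetime.now(timezone.utc) - timedelta(minutes=20),
    "file-server": datetime.now(timezone.utc) - timedelta(hours=3),
}

def check_rpo(backups, rpo):
    """Return the workloads whose newest backup is older than the RPO."""
    now = datetime.now(timezone.utc)
    return [name for name, taken in backups.items() if now - taken > rpo]

stale = check_rpo(last_backup_times, RPO)
if stale:
    print("RPO violated for: " + ", ".join(stale))
else:
    print("All workloads within RPO; proceed with the scheduled failover test.")
```

Run on a schedule, a check like this turns the DR plan from a binder on a shelf into something that raises its hand before a disaster does.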

Below I summarize how four different businesses implemented simple, yet effective, DR plans using hyperconverged infrastructure. They represent different industries and range in size from small local businesses to midsize enterprises with multiple remote office sites around the globe. Yet, they all have one thing in common: When disaster struck, their data centers were back up and running quickly with minimal or no data loss.

 

Weathering a Florida hurricane

Florida is no stranger to natural disasters. Schroeder-Manatee Ranch, a land management and agri-business in Manatee and Sarasota counties, set up a resilient hyperconverged infrastructure in its data center shortly before it got hit by Hurricane Irma. Aaron Brosseau, system administrator at the ranch, was relieved to have his data protected by the new solution. With a DR plan in place, they were able to bring their data center back up quickly after the storm. “The current DR setup is a relief,” he said, “because I never have an issue with the backups now. I simply check them every week just to make sure everything still stays protected.” He went on to say, “I see no real downsides to hyperconvergence. It makes so much more sense than any other option we considered.” Read Brosseau’s story.
 

Surviving disaster thanks to ‘proof-of-concept’ system

When McCullough Robertson refreshed its storage devices, the Australian law firm re-examined its entire IT stack. The company decided to temporarily deploy a hyperconverged system to test the new technology in its production environment. Two weeks later, the company had a system power outage in Brisbane. According to IT Systems Engineer Brodon Hirst, “The entire building was turned off, and we had to fail over to our Sydney data center. We were still only in the ‘proof-of-concept’ stage for our DR failover, which made it all the more nerve-wracking.” The hyperconverged system was instrumental in bringing their data back online fast. Hirst brought 50 mission-critical VMs up “late on a Friday night, whereas our previous DR exercises took an entire day…. These systems were then failed back at the end of the weekend… with the only outage to the business taking place when the connection cutover occurred.”  Read the full story.
 

Recovering from a cyberattack

Worth & Co., full-service mechanical contractors in the eastern US, wanted to modernize its infrastructure, simplify management, and cut costs. CIO Woody Muth was not disappointed with the solution they chose. The firm replaced three full equipment racks of legacy gear with three 2U hyperconverged nodes in its data center, and three additional nodes in another location. The geographically distributed configuration helps to ensure continuous availability in the event of hardware failures or application mishaps. “The product’s built-in data protection capabilities were a major differentiator for us…. When hit with the CryptoWall virus, we were able to restore all of our critical applications to a known working state within a matter of hours.”  Read the whole story.
 

Running DR tests, failing back in seconds

Brigham Young University College of Life Sciences had a long list of requirements for its new infrastructure: minimize downtime and OPEX; reduce management hours; and provide single-vendor accountability, easy implementation, and offsite disaster recovery. Its hyperconverged solution delivered on all of that, within budget and fully guaranteed. The college tested the failover capabilities and watched the system fail back in seconds. In system administrator Danny Yeo’s words, “I was blown away.” Watch the video clip.

All of these customers have one thing in common: HPE SimpliVity. The award-winning hyperconverged solution provides three powerful capabilities that help businesses prepare for disasters: a resilient hyperconverged infrastructure, built-in data protection features, and an extremely simple data recovery process. HPE SimpliVity also offers HPE RapidDR, an optional software program that guides you through disaster recovery planning and provides a 1-click failback feature for recovery. You can learn more about the feature and how to combat cyberattacks in this whitepaper on mitigating ransomware risks.

Comprehensive backup and recovery plans are an essential part of business. HPE SimpliVity hyperconverged solutions are inherently simple, and built-in data protection can help reduce the risk of data loss through natural and human-caused disasters.  If a disaster does hit your data center, the benefits of a resilient DR solution extend far beyond simple peace of mind.

To learn more about how hyperconvergence can help your IT, download the free e-book: Hyperconverged Infrastructure for Dummies.

Around the Storage Block
Calvin Zito
HPE Blogger
& Storage Evangelist

Calvin Zito is a 35-year veteran of the IT industry and has worked in storage for 27 years. He is an 8-time VMware vExpert. An early adopter of social media and active in communities, he has blogged for 10 years.

You can find his blog at
hpe.com/storage/blog

He started his “social persona” as HPStorageGuy and, since the HP separation, manages an active community of storage fans on Twitter as @CalvinZito

You can also contact him via email at calvin.zito@hpe.com

Most of my articles here pick a topic that I then write about. For this quarter’s article, I want to highlight a couple of blog articles by other experts on Around the Storage Block that have attracted a lot of attention – in other words, a lot of views. I’ll pull a few of the highlights from them and give you a link where you can read the details.

 

First article: Update on new VMware plug-ins for HPE Storage

Eric Siebert is our VMware for HPE Storage Solutions Manager, and he’s been doing that for about 7 years. You can find him on Twitter as @ericsiebert. Before HPE, he worked as a VMware administrator and was one of the top contributors to the VMware community. You can find his article about the new VMware plug-ins for HPE Storage on ATSB, but here are a few highlights:

Recently we launched new versions of our VMware plug-ins. This release brings some new features and a licensing change that I would like to highlight. I would also like to take this opportunity to give you a brief overview of our plug-in portfolio that helps provide simplified management for VMware admins.

One important thing I would like to feature right off the bat is that our entire storage plug-in portfolio for VMware is now completely free and fully functional. Prior to this release, most of our plug-ins were freely available, with the exception of the plug-in for vRealize Operations Manager, which required a paid license. Now you can use and deploy any of our plug-ins as much as you want to gain the best possible management experience for VMware.

HPE OneView for VMware vCenter (OV4VC)

HPE OneView for VMware vCenter is a free plug-in for VMware’s vCenter management console that enables vSphere administrators to quickly obtain context-aware information about HPE servers and HPE storage in their VMware vSphere environment directly from within vCenter. The OV4VC storage plug-in supports 3PAR, MSA, and StoreVirtual arrays; Nimble arrays come with their own built-in plug-in for vCenter.

Eric writes about what’s new with OV4VC and continues by looking at:

  • HPE 3PAR Plug-in for VMware vRealize Orchestrator (vRO)

  • HPE Storage Plug-in for vRealize Operations Manager (vROPS) and Log Insight (vLI)

  • HPE 3PAR Storage Replication Adapter (SRA) Software for VMware Site Recovery Manager (SRM)

If you’re using 3PAR and VMware, you definitely need to read the blog article.

Next: The perfect union - NVM Express, Storage Class Memory and HPE Nimble Storage

We announced the next generation of HPE Nimble Storage on May 7. I talked about this at Discover in a Connect discussion forum, and I also have a blog article highlighting what we announced – one of the most-read posts of the last 3 months – HPE Nimble Storage news: Bigger, faster, better. What I want to highlight now is an element of that announcement: the new HPE Nimble Storage platform is NVMe and Storage Class Memory ready.
To talk about that I have Jeff Kimmel, the CTO for HPE Nimble Storage. I highly recommend you read the entire article, The perfect union – NVM Express, Storage Class Memory and HPE Nimble Storage, on ATSB, but here are a few highlights from it:

Which technologies will power the next wave in storage? And how can you prepare today for what is coming tomorrow? See what answers HPE Nimble Storage has in store.

First, SCM and NVMe explained

Storage Class Memory (SCM) and NVM Express (NVMe) are key technologies powering the next round of improvements in storage array performance. Of the two, the emergence of new SCM technology is perhaps more significant, with 3D XPoint and Z-NAND memory leading the way.

These SCM media enable faster solid-state drive (SSD) access times versus those possible with standard NAND flash memory. NVMe in turn provides a needed improvement in the protocols used to access storage devices, substantially reducing the overhead of accessing high-performance SSDs.

The promise of NVMe

The NVMe storage protocol offers benefits over SAS and SATA. NVMe runs directly over PCIe, eliminating conversion costs and queuing points inherent in bridging to legacy storage interconnects. Direct PCIe access also avoids serialization bottlenecks that limit interconnect utilization with SAS and SATA. The NVMe host interface is designed to maximize CPU efficiency in performing I/O. Avoiding translation steps improves device latency. Increasing efficiency and concurrency enables higher IOPS.
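
To put rough numbers on that concurrency point, here is a back-of-the-envelope sketch. The queue figures come from the public AHCI/SATA and NVMe specifications rather than from Jeff’s article:

```python
# AHCI (the host interface behind SATA) exposes 1 command queue with a depth
# of 32; the NVMe spec allows up to 65,535 I/O queues, each up to 65,536
# commands deep. That gap is one reason NVMe can keep many CPU cores and a
# fast SSD busy at the same time.
ahci_outstanding = 1 * 32
nvme_outstanding = 65_535 * 65_536

print(f"AHCI/SATA max outstanding commands: {ahci_outstanding}")
print(f"NVMe max outstanding commands:      {nvme_outstanding:,}")
print(f"Concurrency headroom: roughly {nvme_outstanding // ahci_outstanding:,}x")
```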

Although NVMe SSDs have been available for a few years, they’ve not yet been deployed in the most cost-effective storage arrays. That’s because high availability RAID groups of SAS-connected SATA SSDs deliver plentiful throughput with superior economics and scalability versus high availability solutions using NVMe SSDs. As NVMe economics and scalability catch up, storage arrays will shift to take advantage of its relative strengths.

How does SCM fit in?

The term Storage Class Memory encompasses a number of solid-state storage media types researched starting more than a decade ago. These media vary widely in their fundamental physics, structure, and properties, but all seek to improve in one or more characteristics on legacy flash memory and DRAM. A few SCM types have been commercialized. Two stand out due to superior density and cost versus DRAM, combined with faster access versus legacy NAND flash: 3D XPoint memory and Z-NAND flash memory.

3D XPoint memory, developed by Micron and Intel, is a novel medium available in NVMe SSDs today and, in the future, in Persistent Memory attached to CPU memory buses. Samsung’s Z-NAND flash memory is derived from its legacy 3D NAND flash media but delivers SCM-class performance and endurance, and is available in NVMe SSDs.

We at HPE believe that future storage arrays will support SCM and NVMe to make the most of storage technology evolution. Relative to a typical flash SSD, reads from an SCM SSD will complete roughly 10x faster at low to moderate utilization and tolerate 10x more write cycles before wear out, but may be 10x more expensive per gigabyte. Given the large difference in price and performance, it is more beneficial in early generations to combine flash and SCM SSDs such that flash is used for durable storage and SCM is used to cache frequently accessed data and metadata. This reduces read access times for hybrid flash and SCM systems significantly versus pure flash systems. As the cost of SCM media falls, we expect the industry will increasingly use it for durable storage as well.
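
As an illustration of the flash-plus-SCM layering Jeff describes, here is a toy read-cache model. The latency figures simply restate the article’s rough 10x assumption; nothing here reflects HPE Nimble Storage’s actual implementation.

```python
FLASH_READ_US = 100  # assumed flash SSD read latency, microseconds
SCM_READ_US = 10     # assumed SCM read latency (the ~10x figure above)

scm_cache = {}  # hot data and metadata cached on SCM
flash_store = {f"blk{i}": f"data{i}" for i in range(1000)}  # durable flash tier

def read(block):
    """Serve hot reads from SCM; fall back to flash and promote the block."""
    if block in scm_cache:
        return scm_cache[block], SCM_READ_US
    value = flash_store[block]
    scm_cache[block] = value  # promote frequently accessed data to SCM
    return value, FLASH_READ_US

_, first = read("blk42")   # cold read pays the flash latency
_, second = read("blk42")  # repeat read is served from SCM
print(f"cold read: {first} us, cached read: {second} us")
```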

Jeff concludes his blog article by talking about the marriage of NVMe and SCM and what it means for HPE Nimble Storage. Let’s not forget 3PAR in this conversation – we were the first vendor to demonstrate SCM in an array; what we’re doing across our two flagship arrays for NVMe and SCM is very similar, but stay tuned to Around the Storage Block to read the latest!
Tom Bradicich

Dr. Tom Bradicich is VP and GM for Servers and IoT Systems at Hewlett Packard Enterprise. He was named in Computer Reseller News’ (CRN) Top 25 Disruptors of 2016 and the Top 100 IT Executives of 2016. Tom is known for managing the introduction of innovative products and businesses, recently creating a new product category “Converged IoT Systems”, with HPE Edgeline Systems, expressly designed for the IoT edge. Tom's data center server products have received an InfoWorld 2015 Technology of the Year Award, the 2015 ARM TechCon Best of Show Award, a CRN 2015 Product of the Year Award, and swept all six categories of the 2016 IT Brand Pulse Leader Award.

This month marks the one-year anniversary of the grand opening of HPE’s Americas IoT Innovation Lab in Houston, Texas. This center, its sister APJ facility in Singapore, and two new labs coming online in Geneva and Bangalore in the coming months serve as collaborative environments where customers, HPE, and partners can develop, test, and assist in the deployment of advanced IoT and edge solutions.

Collaborative innovation is at the heart of what takes place at these IoT Innovation Labs, with the ultimate goal of invention and innovative products and solutions that deliver material business outcomes. In the post below, I am going to give an overview of how the collaborative process works and describe some of the unique features of the IoT Innovation Labs.

The role of Edge Experience Zones in ideation 

The edge, by definition, is anything that’s not a data center or cloud. The edge could be an assembly line, a wind farm, a submarine, a smart car, a smart city, a power plant, a smart grid, an industrial oil refinery, a home, or a hospital.

These are often harsh environments that require specialized equipment. Operational technologies (OT), such as control systems, data acquisition systems, and industrial networks, reside at the edge. And more and more enterprise-class IT systems (compute, storage, systems management) are moving out to the edge. These two worlds, OT and IT, are commingling and converging at the edge, and this has prompted the creation of a new class of systems and solutions to efficiently command the edge and IoT. Specifically, we’re bridging this IT-OT divide with physical OT-IT convergence, combining both in a single system box we call Edgeline Converged Edge Systems.

Visitors to one of HPE’s IoT Innovation Labs can see various technology solutions, from the edge to the cloud. But to get a better understanding of how things actually work on the edge, engaging with our Edge Experience Zones (EEZ) is instructive and enlightening. 


The EEZs emulate the reality of being out on the edge, whether it’s a factory floor, city street, or hospital room. For instance, the manufacturing floor EEZ is a physical location within the Lab, where there are demonstrations of how the industrial IoT advances manufacturing processes. The zone for smart cities has smart parking lots, and a system that closes down roads automatically if a flood takes place. And the smart hospital experience zone includes a hospital room with healthcare equipment used to monitor patients and optimize health outcomes.  

What's profound about the Edge Experience Zones and the other parts of HPE’s IoT Innovation Labs is that they are not only places to demonstrate or test solutions. Rather, they are places to get people thinking about possibilities, and come together to ideate on the next generation of IoT and Intelligent Edge solutions. For HPE and its IoT ecosystem of partners (ABB, PTC, National Instruments, Schneider Electric, GE Digital, Deloitte, Intel, SparkCognition, Microsoft and many others), it’s an opportunity to listen to what customers have to say. But it’s also a chance to ask, "If you had this new invention (Edgeline systems) what would you do? And what would you want?" 

I call them “inspiration customers” – these colleagues get in early on a new technology or trend, share what their vision is, and articulate what they want to achieve. And, these customers really help us refine our own thinking and offerings. Such forward-thinking firms not only understand the importance of innovation to their own operations, they also have the ability to drive new trends and first-of-a-kind products that can transform entire industries.  


Prototyping and testing


After ideation and conceptualization, we get down to the work of building out a solution, right in the IoT Innovation Lab. It may start with experimentation or a proof of concept to validate the use case, the things, and data at hand, all working together with customers and partners. Or, we can build the prototypes and assess different configurations either physically in the lab, or using secure VPN connections if software needs to be tested remotely. 

Prototypes and applications can leverage our HPE Edgeline systems, Aruba networking equipment, as well as OT equipment from partners such as National Instruments and Schneider Electric, and software and cloud platforms provided by OSIsoft, SparkCognition, Microsoft, and others. If a customer needs to perform tests with specialized controllers, sensors, or cloud services hosted by other parties, they are welcome to bring them in.

For fully developed solutions that are intended for production environments out on the edge, there’s a process for certification. This allows us to get a jump on actual deployment at customer sites. By the time the product, system, or service gets to the edges of an oil rig, chemical plant, or manufacturing floor, it’s been rigorously tested, affording great confidence. Deployment times, and hence time-to-value, are reduced. Customers have access to our Lab’s facilities and engineering services, which means they aren’t iterating on their own time out on their own edges and facilities.

 

Customers implementing real IoT solutions

There are real success stories coming out of HPE’s IoT Innovation Labs. You may have heard of the Refinery of the Future, which is the result of a partnership that was incubated in our Houston lab. We are also working with Murphy Oil, and with Hirotec, a global auto parts manufacturer that has made HPE Edgeline the standard on its production lines. As an outcome of a collaboration started in the lab, Houston-based electricity supplier CenterPoint Energy was able to increase efficiency and customer satisfaction.

 

You can also visit HPE IoT Labs 

Our IoT Innovation Labs are open to customers eager to learn more about IoT, converged OT, edge computing, and our ecosystem of IoT and edge partners. Visit this page to learn more and arrange your visit.


Kelly Baig
Badging Program Manager
HPE Education Services
 

HPE is on a journey with our customers, to introduce digital learning and blended learning experiences. The responses have been overwhelmingly positive – and surprisingly varied. In this article, I will share some of these responses as examples of how Education Services are changing shape as our customers change their training requirements.

eLearning is not new to the world of technical training; it has been offered in some form since the CBTs of the ’80s. However, a combination of factors makes the current blended learning approach more effective and acceptable for IT training. In fact, the shift we are experiencing as we respond to customer needs is that on-demand online training is fast becoming the preferred training mode. Some of the factors behind this trend include:

  • Social media and the presence of digital natives within the workforce – who grew up with this technology and who are highly adept at using it for learning

  • The growth and wide acceptance of virtual training delivery – complete with virtual lab access to ensure hands-on skills development which is nearly the same as being in the classroom physically

  • The need for more efficiency in learning – with students less able to sit through a review of content that they already know, to get to the new information

As the lead for developing the Digital Learner program at HPE, I have had the opportunity to speak with many hundreds of customers and partners around the globe about our service and the challenges that it is intended to address for our customers. I have heard similar challenges and reactions from them in the process.

 

“My customer wants custom training and modular, searchable content”

A commonly stated challenge that I hear from our reseller partners who work closely with HPE customers is that they need training that is custom. When I probe into this statement, what I have understood is that customers need specific training, on the new technology component that they want to use, at the time and place that they are trying to use it.

Thinking about traditional technology training like instructor-led classes, you can understand the problem for the IT professionals trying to focus on their new technology components. First, this is going to be different – specific – for each customer. Second, the customer doesn’t necessarily have the ability to find the training that they want at the time that they receive the new technology; maybe it is 6 months or more before they can attend a training class.

This is one of the problems that HPE Digital Learner is designed to address: it provides access to the exact training module, at the exact time and place that an IT professional needs to receive it. The content is indexed, searchable and book-markable – enabling you to find exactly what you need, and to revisit it as many times as is required.

This is a better type of training experience for those that need a “custom” or personalized set of content, at point-of-need and at time-of-need.

 

“My organization can only afford to send one person to training at a time”

Typically, when an organization sends a person to training, that person is out of office – or at least not able to do normal work – for the entire period of the training. This puts more pressure on others in the team who have to pick up that workload. When the individual returns, the expectation is that he or she will do their best to share what they learned in the training for the benefit of the entire team.

This is not an ideal situation for anyone involved. In fact, the entire team needs training – IDC tells us that the productivity gains expected from training do not start to show up until a team reaches the tipping point of roughly 50% of its people being trained. However, organizations cannot afford it – either in terms of the time and disruption or the actual cost of the training – for all of the people in the team who need it.


HPE Digital Learner is designed to enable teams of people and to put training within their reach. Because the content is modular, people can more easily access the training when and how they need it to support their skills development – but without the challenges normally associated with technical training.

Using this type of an approach, the entire team has access to the training instead of just one individual person. Also, the business is more likely to see the expected benefits of the training – such as productivity gains, higher profitability and innovation.

 

“I need to know where to start my training –
I am overwhelmed with too many options”

“Where do I start when I need to become a cloud administrator?”, and other similar questions, are commonly voiced by our customers. Sorting through the many potential topics and ways of attending training often overwhelms individuals and leads to inaction. There are so many options out there that finding the one that fits best and reaches the needed outcome most efficiently is too hard.

Figure 1: Cloud learning path examples organized by role

HPE Digital Learner organizes content into proven learning paths (see Figure 1), which are prescribed according to role and outcome. The training content provided is guided – and leverages the expertise of HPE technologists and learning specialists – to ensure that people are led efficiently through the process of developing new skills and shifting into new roles according to their preferences and the requirements of their organization.

 

“I cannot be sure that the person I sent to training really got anything out of it”

Another advantage of the digital approach to skills development provided by HPE Digital Learner is that the progress students make in their learning journeys and course usage can be measured. If it is happening digitally, it can be collected in metrics. It can be reported on, it can be analyzed, it can be proven to deliver value, and it can be remediated when there are challenges.


This is all a new world of digital delivery, and one that we see offering better access to more efficient hands-on learning for customer teams, with better ability to measure usage – and to know which content works better or worse for the people receiving the training.

Our Education Services team is proud to offer this type of solution to our customers as we stand on the brink of the switch-over to digital delivery of technical training. We look forward to learning more as we continue to engage with our customers on their own journeys to this type of technical training.

Want to learn more?

Watch this video |  Read the brochure | Visit: hpe.com/ww/digitallearner


 

Jonathan Deveaux

Jonathan has been at comforte Inc for 3 years but has been associated with NonStop systems since the mid-’90s. He has worked on the customer side of NonStop systems at Bank of America and First Data Resources, and on the vendor side at IR and comforte. Jonathan has held various positions in Sales, Management, Marketing, and Product Management.

Our Story

From connecting systems that never stop, to data protection

comforte was founded in 1998 by the creators of a connectivity solution for mission-critical systems.

Soon after becoming the most widely used terminal emulation solution for HPE NonStop systems, comforte realized the next logical step was to make sure that connections between systems and applications were secure as well.

Solving the need for simple-to-implement encryption of data moving between systems and applications proved to be extremely necessary. In 2010, HPE recognized the strength of the solution and worked with comforte to include our data encryption solution in every HPE NonStop operating system shipped.

Recognizing the need to secure data at rest as well, comforte decided to develop a solution providing rock-solid data protection. The first active customer went live in 2014 and a patent for the tokenization algorithm was received in 2015. At the time of this article’s publication, over 40 organizations worldwide have successfully implemented data-centric security with comforte in their production environments.

With more than 20 years of experience in unlocking more value from systems that never stop, comforte has evolved into a market leader for data protection on mission-critical systems. Today, comforte proudly serves more than 500 businesses in every vertical around the globe.

As our experience with data protection increases, all indications continue to show that the enterprise market as a whole needs to address data protection. This need is validated on multiple fronts:

  • Our own customers tell us they need data protection on enterprise systems as well

  • Data breaches and security incidents are happening more and more each year

  • Regulations continue to be released requiring companies to do more to protect sensitive data and maintain data privacy

As comforte has developed long-term commitments to many customers and partners throughout the years, a thoughtful approach is required to address the enterprise data protection market while maintaining customer loyalty and brand recognition in the mission-critical market.  


Setting a new strategy in an already established market is risky

After co-founder and CEO Dr. Michael Rossbach retired in 2016, the timing was right to bring in a new CEO who would understand the history, success, and customer relationships of the existing market and who could set a strategy to address a new market. Michael Deissner was selected by the board in July 2016.

About Michael Deissner

Mr. Deissner has a long history of successfully growing organizations in demanding environments. Before joining comforte, he was Managing Director at Cytonet for fifteen years. He started his career as a managing director of a medium-sized services company and worked at SAP managing internal and external projects.

Along with looking at digital payments and securing comforte’s solutions on mission-critical systems, Mr. Deissner has committed to providing enterprises with data protection as a key strategic driver for comforte.

 

Where the Market is Going and Our Mission

High-profile data breaches have become a recurring theme in the news. In 2017, Verizon, Equifax, Uber, Deloitte, and Alteryx, among many others, all lost billions of sensitive data elements. In most cases, individuals whose data was lost did not consent to or know that their personally identifiable information (PII) was stored by these organizations. Upon closer examination of the Equifax and Alteryx breaches, it is possible that nearly every adult living in the United States has been impacted. Equifax lost 140 million records and Alteryx lost 123 million records, each of which contained personally identifiable information about U.S. citizens. Compared to the U.S. Census Bureau’s estimate of approximately 248 million adults living in the United States, more than half were affected by the Equifax breach alone.

 

Taking a closer look at compliance challenges that organizations are facing

To help companies deal with these breaches, numerous standards have evolved over the last few years that describe how data should be protected. Legislators and industry leaders are constantly updating their standards and regulations as new threats and new countermeasures emerge.

On May 25, 2018, a new set of rules took effect in the European Union that can carry significant financial consequences if organizations suffer a data breach without having taken the necessary preventative measures. These rules, called the General Data Protection Regulation (GDPR), define and strengthen the rights that EU residents have when they are impacted by a data breach. Most corporations limit the data fields they consider sensitive to elements such as name, address, date of birth, Social Security number, and driver’s license number. The GDPR includes any data element that can be traced to a specific person, including GPS data, genetic and biometric data, browser cookies, mobile device identifiers (UDID and IMEI), IP addresses, MAC addresses, application user IDs, and many others.

The Payment Card Industry Data Security Standard (PCI DSS) is a standard for organizations that process, store, or transmit payment card data. The PCI standard is mandated by the card brands and administered by the Payment Card Industry Security Standards Council. The standard was created to increase controls around cardholder data to reduce credit card fraud. Requirements 3.3 and 3.4 are of particular interest as they directly discuss how payment card numbers, referred to as Primary Account Numbers (PAN), can be used.

The United States Health Insurance Portability and Accountability Act of 1996 (HIPAA) established standards to protect individuals’ medical and personal health information. It applies to health plans, healthcare clearinghouses, and healthcare providers that conduct transactions electronically. HIPAA requires organizations that deal with personal health information to fully protect those records from unauthorized access both at rest and in motion.

Once compliance is addressed in your organization, is your business safe from data breaches?  Not necessarily – in the cybersecurity space, it is commonly said that “compliance does not equal security”.  Compliance does help identify potential security gaps and weaknesses and may reduce the risk of data breaches, but there’s still more that can be done.

 

Looking Beyond Compliance –
Why the Rate of Data Breaches is Increasing Globally


In most cases, data breaches are not a result of neglect on the part of the affected organization. Malicious actors are constantly devising new methods to gain unauthorized access to sensitive data and it is extremely difficult for risk analysts to detect every possible vulnerability and foresee which will be exploited and how.

Why do data breaches happen in spite of all the technology we have access to? Here are three reasons:

  1. Ubiquitous connectivity as a result of digital business, Internet of Things, and commercial micro-ecosystems. Since everything is connected, attackers only need one successful entry point to penetrate further than ever before.

  2. Digital Workplace initiatives are becoming more common: employees get access to company data from any device, from any location, and at any time. This poses a serious challenge to security professionals as the traditional means of perimeter security become less effective.

  3. Complex IT and application infrastructures with modular architecture and many different devices and sensors create new attack vectors for hackers.

Protecting data against these challenges may seem like a daunting task, but there is a way to improve your data security strategy right now…

The Right Way to Secure your Data

Data-centric security is an approach that emphasizes the security of the data itself, rather than the security of the networks, servers, or applications where the data lives. There are two common methods used to protect data: tokenization and encryption. Tokenization replaces sensitive data with tokens that carry no exploitable meaning on their own. Encryption renders the data useless without the key that was used to encrypt it. The best approach is a layered defense with tokenization and encryption at its core.
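
To make the distinction concrete, here is a minimal, vault-style tokenization sketch in Python. It is purely illustrative: comforte’s patented tokenization algorithm is not reproduced here, and a production system would pair tokenization with encryption from a vetted cryptographic library and a hardened token store.

```python
import secrets

token_vault = {}  # token -> original value (real vaults are hardened stores)

def tokenize(pan: str) -> str:
    """Replace a card number with a random token of the same length and format."""
    token = "".join(secrets.choice("0123456789") for _ in pan)
    while token in token_vault:  # keep tokens unique
        token = "".join(secrets.choice("0123456789") for _ in pan)
    token_vault[token] = pan
    return token

def detokenize(token: str) -> str:
    """Recover the original value; possible only where the vault is reachable."""
    return token_vault[token]

t = tokenize("4111111111111111")
print(t)              # meaningless to anyone who steals it
print(detokenize(t))  # the sensitive value stays inside the trust boundary
```

Because the token preserves the length and format of the original value, downstream applications can keep working on tokens without ever handling the real card number.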

The vast and highly complex IT infrastructure of enterprises requires an extremely flexible deployment model. comforte’s data protection suite is a scalable and fault-tolerant enterprise tokenization and encryption solution enabling robust protection of sensitive data with minimal effort and with little to no impact on existing applications. Different elements of the solution can run fully distributed across your enterprise including on-premises, in the cloud, or in a hybrid fashion. This results in the perfect combination of the benefits of cloud deployment or “as a service” usage with the security and performance of on-premises deployment. Yes, tokenization as a Service (TaaS) has become a very feasible option.

Message from Michael Deissner

comforte has emerged at the forefront as having a best-in-class solution for data-centric security. It allows organizations to achieve end-to-end data protection, lower compliance costs, and significantly reduce the impact and liability of data breaches.

 

Leveraging our Special Sauce to satisfy multiple needs

Deploying data-centric security not only exceeds data protection requirements, it also benefits the enterprise as a whole.

IT security & operations teams benefit from the ease of implementation of comforte’s data protection suite. Passive integration capabilities eliminate the need to make code changes to existing business applications. Built-in capabilities for elasticity and a self-healing architecture help these teams to spend less time on managing and operating the system

For IT security & operations teams

  • Passive integration capabilities minimize implementation efforts and costs

  • Minimal impact to the applications that get protected means that they can just keep running – implement data-centric security without downtime

  • Elasticity & self-healing as fundamental architecture principles reduce the time needed for management and operations

For line of business and risk & compliance teams

  • Data-centric security significantly reduces the impact of data breaches

  • Compliance can be ensured and maintained without being dependent on compensating controls

  • Data protection is a competitive differentiator and can also be positioned as a value-added service to drive additional revenue

For your customers – it is all about trust

In an age when choice has never been greater and it has never been easier to simply switch to a different product or service, customers are looking for business partners they can trust. Data protection is the foundation for demonstrating to your customers that you care about their data and their privacy.

 

Conclusion

The increasing importance of compliance, the shift in technology, and the ever-increasing number of data breaches clearly show that companies need to look beyond traditional means of securing their data. Data-centric security has become a best practice and should be top-of-mind for risk, security, and compliance professionals. Don’t be one of those organizations that waits too long to take the right measures to protect its data and suddenly finds itself in the headlines as yet another company that has been breached.


Dana Gardner

Analyst Dana Gardner hosts conversations with the doers and innovators—data scientists, developers, IT operations managers, chief information security officers, and startup founders—who use technology to improve the way we live, work, and play.

View an archive of his regular podcasts.

IT leaders face complex choices when it comes to their cloud options. Hundreds of thousands of services are available — and each business must decide which choice is best in terms of performance and price. For the enterprise, a hybrid cloud environment is now the norm, which increases complexity even more. Given all of this complexity, many businesses are finding that their multi-cloud deployments are out of control and waste is rampant.

In a recent BriefingsDirect podcast, Dana Gardner, Principal Analyst at Interarbor Solutions, interviews William Fellows, Founder and Research Vice President at 451 Research. They discuss how new tools, processes, and methods are helping organizations save money by gaining control over hybrid IT sprawl. 
 

The average organization is wasting 30% of total public cloud costs

Gardner begins the interview by asking Fellows how much money a typical business is wasting by not optimizing their use of public cloud services. “Well, a lot,” says Fellows, “And it’s growing daily.”

Fellows explains that buyers are spending thousands, if not tens of thousands – and some even spend millions per month on cloud services. And it’s pretty much accepted across the industry that about 30% of that is waste. Of course, if a business is only spending 100 dollars a month, it’s not a big deal. But if a business is spending a million dollars a month, 30% is huge.

What exactly is causing this cloud waste, and what can be done to rein it in? Fellows explains that it’s primarily two things: decentralization and complexity.

“At a high level, there is massive organizational dysfunction around cloud and IT,” says Fellows. “This is driven primarily because cloud is usually decentralized purchases at large organizations. A variety of different groups and departments are using it—with no single, central, and logical way of controlling cost.”

He continues, “Secondly, there is the sheer number of available cloud services, and the resulting complexity of trying to deal with all of the different nuances with regard to different image sizes, keeping tabs on who is doing what, and so on—which also underpins this resource wastage.”

Hyperscale cloud providers are actually trying to provide better tools for cost reporting on their services. Of course, they are only interested in managing the cost of their own services and not third-party services. As businesses transition to a hybrid world, a more comprehensive solution is needed that will manage the entire multi-cloud environment.

 

New approaches: cloud management platforms and services

New cloud management services and cloud management platforms are starting to appear. The industry seems to have settled on the term cloud governance to describe these types of tools/services. Cloud governance software and services not only help organizations optimize resources, infrastructure, and workloads in terms of economics and cost, but they also bring in security and compliance.

Cloud governance solutions do more than just review your monthly bill; they look at how services are performing in real time and then recommend actions that will optimize use from an economic point of view. Additionally, some tools are beginning to employ automation based on machine learning, so the tools themselves can learn what’s going on, and automatically make better decisions for the business.
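
As a concrete illustration of the kind of rule such a tool applies, here is a small rightsizing sketch. The thresholds, instance names, and prices are invented for the example; real governance products draw on live telemetry and actual price books.

```python
# Invented inventory; a real tool would pull utilization from cloud APIs.
instances = [
    {"name": "web-1", "size": "xlarge", "avg_cpu_pct": 9, "monthly_usd": 560},
    {"name": "etl-1", "size": "large", "avg_cpu_pct": 71, "monthly_usd": 280},
    {"name": "dev-db", "size": "xlarge", "avg_cpu_pct": 4, "monthly_usd": 560},
]

DOWNSIZE_BELOW_CPU_PCT = 15  # assumed policy threshold
SAVINGS_RATIO = 0.5          # assume the next size down costs about half

for inst in instances:
    if inst["avg_cpu_pct"] < DOWNSIZE_BELOW_CPU_PCT:
        savings = inst["monthly_usd"] * SAVINGS_RATIO
        print(f"{inst['name']}: consider downsizing, est. ${savings:,.0f}/month saved")
```

Multiply a rule like this across thousands of instances and the 30% waste figure Fellows cites stops looking abstract.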

 

In the wild, wild west of cloud governance, who will take the lead?

According to Fellows, no single company or one approach has established a leadership position in the industry yet, which makes this point in time a bit risky for end users. “That is why we counsel enterprises to work with vendors who can offer a rich set of services. The more things that you have, the more you are going to be able to undertake and navigate this journey to the cloud — and then support the digital transformation.”

Fellows advises companies to work with vendors that have loosely coupled approaches because it allows businesses to take advantage of a core set of native services — but also gives them the flexibility to use their own tools or third-party services via application programming interfaces (APIs). He likes the direction Hewlett Packard Enterprise (HPE) is going in terms of bringing together all of the pieces that allow enterprises to operate across their entire hybrid IT environment.

He mentions how HPE OneView is providing a software-defined way of provisioning infrastructure. Also, HPE OneSphere offers API-driven management for applications, services, workloads, and the whole workspace and developer piece as well. “So one is coming top-down and the other one bottom-up,” summarizes Fellows. “Once those things become integrated, they will offer a pretty rich way for organizations to manage their hybrid IT environments.”

Additionally, “HPE has a leading position in this new kind of hardware consumption model — for using new hardware services payment models — via its HPE GreenLake Hybrid Cloud offering.” Fellows concludes, “The HPE offering looks like it's coming together pretty well.”

To listen to the complete podcast, click here. To read a full transcript, click here. To learn more from HPE about optimizing your hybrid IT environment, follow this link.

Kent Purdy

Kent has 25 years of experience with data center products and technologies, including more than 15 working with Identity and Access Management solutions. Throughout much of his career, he has maintained a keen focus on IAM trends relevant to application and service delivery and security.

Micro Focus’s Advanced Authentication (AA) is an open framework that allows just about any authentication type to be plugged into it. With its open architecture, AA lets organizations future-proof their environment so they always have the freedom to adopt the latest technology without fear of vendor lock-in.

Troy Drewry, product manager for Advanced Authentication, and I presented a BrightTALK webcast titled “The Five Most Creative Ways that Organizations are using Advanced Authentication,” which shares the transformations we’re seeing in authentication among our customers, both in our install base and in new implementations.

It has been interesting to see the adoption of advanced authentication technologies continue to accelerate as the dynamics behind that growth expand to new business models. Overall, we’re seeing an evolution in the approach to user verification across a broadening set of technologies and user assumptions. While core security use cases like remote access continue to be a strong driver of authentication infrastructure investments, user convenience is becoming table stakes. The struggle is to make authentication strong without creating an experience so cumbersome that it blocks consumer interaction. In fact, if done right, strong authentication makes interactions and transactions more convenient than ever before.
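
As one small, concrete example of a strong-yet-convenient factor, here is a time-based one-time password (TOTP, RFC 6238) generator built only from the Python standard library. It is illustrative, and just one of the many methods an open framework like AA can plug in; the secret shown is a placeholder.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """RFC 6238: HMAC the current 30-second counter, then dynamically truncate."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# Same shared secret + same clock => the same short-lived code on both sides.
print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret, base32-encoded
```

The user experience is a six-digit code that changes every 30 seconds, yet the secret itself never crosses the wire.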

 

Here are the first two creative ways


Keeping the fridge stocked… case in point: the smart refrigerators that let you make purchases using a touchscreen embedded in the door. With these new refrigerators, you can place an order when you notice that someone has cleaned out all the yogurt, or as you consume the last of the mayonnaise. These refrigerators can also be configured with a built-in internal camera that spots items that need to be replenished and triggers purchases. For either of these scenarios to be viable in the market, consumer trust is essential. Think how essential security becomes, not only to make transactions secure but also to keep consumer account and other personal information safe from exploitation or misuse. To ease potential customer apprehension about unwanted orders (say, 100 chocolate bars placed by a teen’s friends), these auto-shopping appliances come equipped with workflows that allow the buyer, i.e., the person on the hook to pay the bill, to approve orders for purchase via an app on their smartphone.

Making your car smarter… consider the evolution of the automotive industry. Jeep, Nissan, and Tesla, among others, have all learned lessons on the perils of not securing communications to their customers’ connected autos. It’s another reminder that hackers, criminals, and other outsiders are always on the lookout to exploit the unprotected. Strong authentication to connected cars is paramount because the number of electronic components in vehicles will nearly double in the next five years. Not only is the use of electronic control units, telematics control units, and entertainment systems quickly expanding; these systems are increasingly connected and electronically secured. Drivers are now able to unlock their cars, access music remotely, and get messages, all from their smartphone apps. Vendors can get car usage and health information remotely. Each year, the level of connectivity and control evolves in sophistication, raising the stakes of vulnerability. We currently have a major auto manufacturer implementing AA to secure its facilities as well as the connected automobiles it sells and services.

For the full list of creative uses of AA, catch the replay of the webcast, Creative Ways that Organizations are using Advanced Authentication. Troy goes into the ways AA is being used in healthcare, banking/retail, and ID systems. With all of the advancements we’re seeing in strong authentication, we should probably make this webcast an annual update.

Ryan Swango
Hybrid IT Specialist – HPE, Tech Data

 

With impressive cost savings and workload flexibility, public cloud offerings have become an alluring approach for many enterprises. But is it the best solution for your customers? Business and technical considerations, including compliance, security, cost, and application and data analytics performance issues, have caused many organizations to reevaluate public cloud usage.

That is why we have seen the explosion in popularity of hybrid cloud solutions that include both public and private (on-prem) cloud. It is all about finding the proper balance for your customers. Hybrid cloud improves efficiency, accelerates delivery of apps and services, and enables the flexibility to combine preferred cloud workloads with refreshed on-premises infrastructure in the ratio best suited to your customer’s organization.

 

A truly hybrid experience – public and private

Since 2008, Microsoft Azure public cloud services have enabled businesses to move faster and achieve more. Last year, Microsoft introduced Microsoft Azure Stack to provide the benefits of Azure public cloud – agility and scalability – with the control, performance, and security of the on-premises data center. However, perhaps the greatest benefit of the introduction was that it made hybrid even more attractive by allowing enterprises to use the same Azure platform for both public and private cloud.  

As an element of Microsoft Azure, Microsoft Azure Stack offers a consistent, flexible, and truly hybrid cloud environment. The MS Azure platform allows developers to leverage the same tools and processes to build apps and services, and then deploy these to either Azure public cloud or Azure Stack on-prem, selecting the optimal target platform that best meets workload requirements. Since Microsoft owns the hypervisor, the operating system, and other key elements of the solution, Azure Stack delivers a true Platform as a Service (PaaS) experience.
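
A minimal sketch of that “same tools, two targets” idea: an identical Azure Resource Manager (ARM) deployment request differs only in which endpoint it is aimed at. The public-cloud URL is real; the Azure Stack URL shown is the documented development-kit default, the subscription and resource-group values are placeholders, and authentication plus the actual REST call are omitted.

```python
# ARM endpoints: public Azure vs. an Azure Stack instance (ASDK default shown).
ARM_ENDPOINTS = {
    "azure-public": "https://management.azure.com",
    "azure-stack": "https://management.local.azurestack.external",
}

def deployment_url(target: str, subscription: str, resource_group: str, name: str) -> str:
    """Build the ARM deployment URL; only the base endpoint changes per target."""
    base = ARM_ENDPOINTS[target]
    return (f"{base}/subscriptions/{subscription}/resourcegroups/{resource_group}"
            f"/providers/Microsoft.Resources/deployments/{name}?api-version=2018-05-01")

for target in ARM_ENDPOINTS:
    print(deployment_url(target, "<subscription-id>", "demo-rg", "web-app"))
```

The same template body, tooling, and pipeline can then be pointed at either cloud, which is the consistency argument in a nutshell.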

 

Maximizing the power of Azure Stack

HPE ProLiant for Microsoft Azure Stack provides a fast and straightforward way to get the most from Microsoft Azure Stack. It delivers an integrated Microsoft Azure hybrid cloud that incorporates compute, storage, and networking, enabling Azure services to run on-premises, for your clients or yourself.

 With HPE ProLiant for Microsoft Azure Stack, you can:  

  • Run Azure consistent services in the data center to meet security, data sovereignty, compliance, performance, and cost requirements (MS value-add)

  • Deliver on-premises services with enterprise-class reliability, scaling, and performance (MS value-add)

  • Have a consistent development environment for applications deployed to either on-premises Azure Stack or Azure public cloud (MS value-add)

  • Enjoy the most configurable solution available, with solution sizing from 4 to 16 nodes and your choice of processor, memory, and storage options (HPE value-add)

  • Rely on a portfolio of professional services for planning and implementing Azure hybrid cloud projects including security, workload migration, identity management, backup and site recovery, and networking (HPE value-add)

  • Ensure ongoing top performance and availability with global, enterprise-class support, remote monitoring and management, and a single point of support (HPE value-add)

 

Now available with HPE – ProLiant Gen10

HPE ProLiant for Microsoft Azure Stack is now available with HPE ProLiant DL380 Gen10 servers, bringing impressive configuration flexibility and unique new capabilities into the cloud world, including:

  • Higher storage capacity, supporting up to 120TB raw capacity per node

  • Higher cache capacity, supporting up to 19.2TB of cache per node

  • Scalable solution sizing, configurable anywhere between 4 and 16 nodes

  • Higher workload performance, thanks to a 66% boost in memory bandwidth and double the memory capacity, plus up to 28 cores and higher clock rates

  • Higher networking bandwidth, with up to 25GbE, a 150% increase over solutions based on 10GbE (see the quick arithmetic check after this list)

  • Reduced costs, thanks to HPE’s storage architecture, which delivers a balanced 2:1 ratio of capacity devices to SSD flash devices, optimally designed to support Microsoft software-defined storage technology

  • Increased security, checked and guaranteed on three levels: protect against, detect, and recover from attacks. Only HPE offers industry-standard servers with major firmware anchored directly into the silicon (HPE Silicon Root of Trust) and managed by HPE iLO 5
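
A quick sanity check of the networking claim above, since percentage-increase phrasing is easy to misread (our arithmetic, not HPE’s):

    # 25GbE vs. 10GbE: the increase relative to the 10GbE baseline.
    old_gbe, new_gbe = 10, 25
    increase_pct = 100 * (new_gbe - old_gbe) / old_gbe
    print(f"{increase_pct:.0f}% more networking bandwidth")   # -> 150%

That is, 25GbE is 2.5 times the bandwidth of 10GbE, which is a 150% increase over the baseline, not a 250% one.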

If you are a Tech Data HPE Channel Partner, you have access to MAX, our HPE Partner Enablement platform. Check out this brief for more information on why HPE ProLiant for MS Azure Stack beats the competition: HPE ProLiant for Microsoft Azure Stack (Gen10) vs. Dell EMC Cloud for Azure Stack.

In addition, HPE GreenLake is a suite of consumption-based services that deliver IT outcomes in a pay-per-use model in your customer’s own environment or in the cloud. If you are a Tech Data HPE Channel Partner, you may log in to MAX to check out the HPE GreenLake infographic and the HPE GreenLake – IT as a Service August 22, 2018 webinar slide deck for more information.

For complete information, go to the HPE ProLiant for Microsoft Azure Stack site.

COend.png

Title.jpg
TravisGreene.jpg
 
Travis Greene

Travis Greene, Identity Solutions Strategist at Micro Focus, possesses a blend of IT operations and security experience, process design, organizational leadership and technical skills. After a 10-year career as a US Naval Officer, he started in IT as a Data Center Manager for a hosting company. In early 2002, Travis joined a Managed Service Provider as the leader of the service level and continuous improvement team. Today, Travis conducts research with NetIQ customers, industry analysts, and partners to understand current Identity and Access Management challenges, with a focus on provisioning, governance and user activity monitoring solutions. Travis is Expert Certified in ITIL and holds a BS in Computer Science from the US Naval Academy.

The term “security operations” is often interpreted as synonymous with a security operations center (SOC). In fact, a web search on security operations returns mostly links to SOC content. But that’s a narrow view. How you view security operations will make a difference in how fast your organization can deliver software and mitigate breach damage. A bigger-picture view that includes IT operations is necessary to address the agile threat environment that exists today.

 

The divide between security and operations

Let’s begin with a simple acknowledgement that conflict exists between most IT security and operations teams. Whether it simmers beneath the surface of thinly veiled polite tolerance, erupts into the flipping of furniture, or lands somewhere in between, the difference in priorities for each team is bound to create tension. While IT security is ultimately concerned with the confidentiality, integrity, and availability of IT services and information, IT operations focuses more on performance, efficiency, and availability.

Security_Opertations-DevSecOps.jpg

We might be tempted to find common ground over availability, but even this identical term is viewed through different lenses. The security perspective focuses on countering intentional sabotage, while operations seeks to mitigate accidental service disruption. The result of this divide is overlapping teams and tools in many organizations, with conflict arising over the boundaries between them.

 

The three approaches to security operations

While the divide is almost universal, security must have an avenue to effect change in the organization’s infrastructure and applications in order to remediate vulnerabilities and respond to attacks. The challenge is a blurring of lines between the authority, responsibility, and accountability for implementing change.

The approaches taken by most organizations can be broadly grouped into three categories.

1. Security administration – The everyday activities performed by IT security in support of its responsibilities. These can include the implementation and maintenance of policies and controls, threat analysis, compliance assessments, and security monitoring and incident investigation from a SOC or similar structure. These operational activities are clearly in the security domain, and while they will intersect with operations (for example, enabling log collection on a server) there is usually less grey area on who is the authority.

2. Secure ops frenemies – The necessary collaboration that must occur between IT security and operations. Every organization handles this a little differently, and ideally there is documentation that clearly defines, for example, who manages credentials and access, who changes rules on the firewalls, and who patches servers to eliminate a vulnerability. Things get contentious when timeframes and priorities differ. If a security organization detects the exfiltration of data from a database, it often must rely on operations to shut the database down. Operations may be reluctant to do so if that database supports a mission-critical service for the business.

3. DevSecOps – As more enterprises adopt DevOps practices, there is a greater integration of developers and operations teams in planning, building, testing, deploying and maintaining code in production to accelerate release velocity. As bottlenecks or “constraints” are removed, security is gaining the spotlight, and often not in a good way. Security testing, when performed at the end of a development cycle, can identify insecure code, but at a point where it is costly to change. So there is a movement to “shift left” security testing by including it earlier in the cycle, which is helpful for developers, but operations/security integration continues to be unaddressed. It remains to be seen if DevOps, which is developer-focused, shifts its center of gravity more towards operations, and in doing so, helps to bridge security and operations.

 

Which approach is correct?

The correct answer, of course, is the one that supports the business need for speed of software delivery, and the confidentiality, integrity and availability of services and data. That means that all three approaches must be covered, but they need improvement. The greatest potential for improvement comes from the interaction between security and operations teams.

One of the keys to the success of DevOps is the automation of handoffs between steps in the toolchain that allows for the continuous delivery of code. That kind of orchestration is sorely needed to bridge the divide between security and operations tools. The political and budgetary walls that exist between these organizations are unlikely to be dissolved, and there is no good reason to force full integration or cross-use of all tools. But connections and automation made for specific activities can address the most pressing concerns.

For example, your SIEM platform may be able to initiate tickets in a service desk tool. Automated processes in the service desk can then be triggered to perform a remediation action that IT operations has approved. This reduces the workload on both the security and operations teams and can enable a feedback loop for continuous improvement that will also support mutual trust.
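
As a rough sketch of what that SIEM-to-service-desk handoff could look like, here is some illustrative Python. Every endpoint, field, and runbook name below is hypothetical; real SIEM and service desk products each have their own APIs, and this only shows the shape of the integration.

    import requests

    SERVICE_DESK = "https://servicedesk.example.com/api"   # hypothetical ticketing API

    # Remediations that IT operations has pre-approved, keyed by alert type.
    APPROVED_RUNBOOKS = {
        "brute_force_login": "disable-account",
        "malware_beacon":    "isolate-host",
    }

    def handle_siem_alert(alert):
        """File a ticket for a SIEM alert and, when operations has
        pre-approved a runbook for this alert type, trigger it."""
        ticket = requests.post(f"{SERVICE_DESK}/tickets", json={
            "title": f"SIEM alert: {alert['type']} on {alert['host']}",
            "severity": alert["severity"],
            "source": "siem",
        }).json()

        runbook = APPROVED_RUNBOOKS.get(alert["type"])
        if runbook:
            # Automation fires only for actions operations signed off on;
            # anything else waits for a human in the ticket queue.
            requests.post(f"{SERVICE_DESK}/tickets/{ticket['id']}/runbooks",
                          json={"runbook": runbook, "host": alert["host"]})

The design point is the approved-runbooks table: security gets speed for the remediations operations has already blessed, operations keeps authority over everything else, and the ticket trail feeds the continuous-improvement loop.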

That trust, leading to cooperation, is sorely needed in a time when security threats are innovating faster than the enterprise can keep pace. The greater the partnership between security and operations, the better the chance your organization can deliver software faster and minimize breach damage.

COend.png

Dana2_Title.png
DanaGarderProfilePic.png
Dana Gardner

Analyst Dana Gardner hosts conversations with the doers and innovators—data scientists, developers, IT operations managers, chief information security officers, and startup founders—who use technology to improve the way we live, work, and play.

View an archive of his regular podcasts.
 

Businesses are embracing hybrid cloud in record numbers because it lets them choose a mix of applications, services and platforms -- all tailored to their needs. Yet, many struggle with the complexity of operating different private and public clouds in conjunction with traditional infrastructure. Often, they don’t have the right skills to oversee and manage their cloud implementations and that can lead to unrestrained cost and risk.

Earlier this year, almost a thousand professionals were asked about their adoption of cloud computing, and the results were compiled in the 2018 State of Cloud Survey. The survey reveals that cloud adoption continues to grow and 81% of respondents have a multi-cloud strategy. And the top cloud challenges these users face? Spend and security.
 
According to the survey, cloud users are aware that they are wasting money in the cloud – they estimate 30% waste. To combat this issue, 58% of cloud users rate cloud optimization efforts as their top initiative for the coming year. And according to Gartner, by 2020, organizations that lack cost optimization processes will average 40% overspend in public cloud.

Additionally, security continues to weigh on users’ minds. A whopping 77% of respondents in the survey see security as a challenge, while 29% see it as a significant challenge.

Many businesses that struggle with spend and security issues wonder if these problems can be solved. The answer is yes – with the right tools and expertise.

Microsoft Azure Stack: Minimizing security and regulatory concerns

Enter Microsoft Azure. Customers all over the world are choosing Microsoft Azure for their public cloud needs, making it one of the fastest-growing cloud platforms available today. And TheStreet.com reports that its growth shows no signs of slowing down, as “…72% of Azure customers see themselves deploying workloads in Azure Stack over the next three years.” So why is there so much interest in Azure Stack, and how can it help businesses conquer security concerns?

Microsoft Azure Stack is an extension of Azure that lets businesses build and deploy hybrid applications anywhere. It lets DevOps teams leverage the same tools and processes they are familiar with in Microsoft Azure to build either private or public cloud instances of Azure, and then deploy them to the cloud that best meets their business, regulatory, and technical needs. Microsoft Azure Stack also allows businesses to speed development by using pre-built solutions from the Azure Marketplace, including many open-source tools and technologies.

In terms of meeting security needs, Azure Stack enables businesses to deliver Azure-consistent services within their own data center. That capability gives them the power and flexibility of Azure public cloud services — completely under their own governance.

Consumption-based pricing:
A better way to implement and consume hybrid cloud resources

Another concern businesses have when using the cloud is overspending. One of the main reasons enterprises overspend is that they lack automation and simple tools that enhance the agility of the cloud to continuously monitor compliance and cost. And most businesses overprovision their on-premises infrastructure to be ready to handle unpredictable growth, which further adds to overspending.

Managed consumption for hybrid cloud is an operating model that lets businesses consume the exact cloud resources they need, wherever their workloads live -- while also driving improved performance, cost, security and compliance. Some of these models also eliminate the need for staff to manage the hybrid environment day-to-day, which helps reduce human error and enables staff to focus on innovation.

If deployed correctly, this type of model lets businesses see who is using their cloud, what the costs are, and whether policies are followed. And with the right partner and tools to show usage, track cost, and monitor compliance and security, the business can be confident that they’re getting the most from their Azure hybrid cloud.

What’s the best way to implement a Microsoft Azure hybrid cloud environment?

A new service offered by Hewlett Packard Enterprise (HPE) meets these needs, letting the enterprise better manage both spend and security concerns of hybrid clouds on and off premises. Using services from Cloud Technology Partners (CTP, a Hewlett Packard Enterprise company), processes that manage cloud resources are set up in a customer’s environment of choice. After that, CTP services establish specific cost, security, and compliance controls. Coming soon, HPE GreenLake Hybrid Cloud will manage those resources on behalf of the customer. And unlike a traditional managed service, HPE GreenLake Hybrid Cloud will offer an automated, cloud-native model that is designed to eliminate the need for organizations to hire or train new staff to oversee and manage cloud implementations.

For Microsoft Azure Stack on-premises, HPE offers HPE ProLiant for Microsoft Azure Stack using HPE GreenLake Flex Capacity. This deployment model lets customers gain a pay-per-use experience, not only for the Azure Stack services, but for the underlying infrastructure. And by only paying for the capacity used, businesses can save more on IT cost – up to 30% of the infrastructure cost. 
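
As a back-of-the-envelope illustration of where a savings figure like that can come from (our numbers, purely illustrative; actual HPE GreenLake Flex Capacity metering and buffer terms differ):

    # Fixed provisioning vs. pay-per-use, with costs normalized.
    provisioned_units = 100     # capacity bought up front to absorb peaks
    avg_used_units    = 70      # what the workload actually consumes
    unit_cost         = 1.0     # normalized cost per unit of capacity

    fixed_cost  = provisioned_units * unit_cost   # pay for all of it
    pay_per_use = avg_used_units * unit_cost      # pay only for usage
    savings_pct = 100 * (fixed_cost - pay_per_use) / fixed_cost
    print(f"savings: {savings_pct:.0f}%")         # -> savings: 30%

If a business typically overprovisions by around 30% to cover unpredictable growth, paying only for consumed capacity recovers roughly that amount.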

To learn more about HPE GreenLake Hybrid Cloud for Microsoft Azure Stack, watch this short video. For more information about HPE ProLiant for Microsoft Azure Stack, watch this on-demand video.

COend.png
HPEEnergizeIT.png
GerryNolan.png
Gerry Nolan

Gerry Nolan is a Worldwide Senior Director for HPE Pointnext Support Services. In this role, his goal is to shape HPE’s customer support experience, which in turn drives customers’ business outcomes and enables their digital transformation journeys. Gerry brings to his position a well-established background in information technology and professional services, where he has worked for over 30 years. Prior to taking on his current role, he held other positions in HPE, including leadership for HPE’s Hyperscale Support Services, HPE’s Mission Critical Services business, and HPE’s Customer Technical Training business.

With the economy firing on all cylinders and innovative business models proliferating, companies are taking aim at lucrative new market opportunities. To succeed, they need to unlock a crucial resource – the ingenuity of their in-house IT talent. Many of the business leaders I talk to would love to provide more ways for IT staffers to mobilize their skills and develop new ones, especially since companies need every ounce of that expertise to capitalize on hybrid IT, next-gen apps, AI, big data, and a host of other high-impact initiatives. But the reality is, IT’s hands are still tied by routine IT management and maintenance tasks. A quick glance at the chart below, from the IDC white paper The Business Value of HPE Datacenter Care, shows the extent of the problem.

scottsportsblog.png
action-adventure-cold-298008 smaller.jpg

Nearly 14 percent of IT staff time, on average, spent just on managing SLAs … and almost as much on monitoring and troubleshooting! That’s a real eye-opener.

Been there, done that, got the results

Surveys like this are helpful for understanding the big picture, and the white paper does a great job of explaining how companies are tackling this challenge. Still, nothing beats hearing it first-hand from IT leaders who have been there, done that, and achieved major successes.

So I was thrilled to find myself onstage at HPE Discover in June with one such leader: José Rodriguez, Information System Manager with Switzerland-based SCOTT Sports SA. We were there to present a session called “How HPE Pointnext helps you deliver the experience your users need and simplify IT operations with HPE Datacenter Care.”

SCOTT Sports, by the way, is an international sports brand with a global approach, developing and distributing bike, winter, running, and motosport products. R&D, Sales, Marketing and IT are managed from the SCOTT headquarters in Givisiez, Switzerland. SCOTT employs more than 1,000 people worldwide and sells products in 80 countries. “Innovation – Technology – Design” is their mission statement, expressing exactly what SCOTT Sports’ products stand for. They’re a longtime HPE customer (you can read here how HPE Pointnext helped SCOTT Sports migrate from an Oracle data warehouse to an SAP HANA system). José and I focused on the company’s recent decision to invest in HPE Datacenter Care Service in order to:

  • Deliver the experience users need. SCOTT Sports has an uncompromising commitment to keeping its infrastructure available 24/7. Every minute that its systems are down, José pointed out, the company can’t take new orders, or process, deliver, and ship its current ones. With the ongoing growth of the business, including its web-based arm, SCOTT Sports’ internal IT team was starting to feel the pinch in providing round-the-clock support. Hiring and training new staff wasn’t an attractive option; they needed expert help, and they needed it right away.

  • Simplify IT operations. This is one of the major benefits the company gained by implementing HPE Datacenter Care. HPE staff were assigned to SCOTT Sports: an Assigned Account Support Team, including an Account Support Manager and Assigned Engineers. These assigned resources were especially important and helpful, José reported – the engineers had in-depth knowledge of the infrastructure because they installed it. The service also included proactive monitoring and proactive support; SAP HANA expertise and support; and multi-vendor support, including multiple OSs such as HP-UX and SUSE Linux.

 

Saving time, saving money

HPE Datacenter Care has now been in place for about eighteen months. José was able to report that the service had delivered significant cost savings for his company and freed up time for its IT team to focus on key projects and innovation. Here are the points that he emphasized. HPE Datacenter Care:

 

Delivered proactive monitoring and support

In other words, it enabled IT to get ahead of potential issues. SCOTT Sports’ team held regular meetings with the HPE Pointnext Account Support Manager and Assigned Engineers to go through the reports and insights they provided – proactive monitoring reports, network updates, compliance health check tables, and more – and decide what actions to take. The HPE resources became a trusted partner augmenting SCOTT Sports’ team.

 

Improved the overall operational efficiency of the IT department

HPE Datacenter Care saved time by providing simplified updates and patches. It also reduced the cost and loss of revenue associated with unplanned downtime and outages. Before applying a BIOS, driver, or firmware update, the HPE team would analyze the consequences for the rest of the infrastructure. For example, if a software update was needed for the Fibre Channel switches, how would that impact the HPE 3PAR StoreServ Storage arrays? If needed, the HPE team would prepare an installation plan to ensure effective execution of the work.

 

Provided access to HPE Pointnext’s IT best practices

HPE Datacenter Care delivered OS and SAP HANA support in addition to hardware support. For example, if an OS update was needed, the HPE team provided the expertise to ensure that both the hardware and the SAP HANA application would continue to run effectively.

Those are the kinds of results that can turn IT into a real engine of growth in any company. As I wrote in my blog Rx for Business Vitality: Choosing the Right IT Support, CIOs these days are looking to do “a lot more than routine patching, fixing problems and keeping the lights on. They want to free up resources to energize the digital transformation and accelerate the initiatives that drive the success of the business at large.” HPE Datacenter Care is a great way to do that.

For more on HPE Datacenter Care, see Kelly Haviland’s post Rethinking Support for Hybrid IT: An Inclusive, Relationship-Based, Tailored Approach.

COend.png