
Editor's Letter

 “It’s harder to stay on top than it is to make the climb. Continue to seek new goals.”
- Pat Summitt

 

Welcome to the Summer issue of Connect Converge

Recently, many of us took the journey to HPE Discover 2018. This year’s event was, without doubt, a hub of innovation, with HPE President and CEO Antonio Neri delivering a message that redefines how HPE will innovate in the years to come. Neri’s boots-on-the-ground approach to the future is an affirmation to partners, customers, and HPE employees.

Once again, I believe I can speak on behalf of the Connect Community and say that our Connect Tech Forums held in the Connect booth were a great success. The user group forums are an integral piece of community collaboration, allowing influencers, experts, and practitioners to address current technology trends and disruptions. A big shout-out to all who shared their technical prowess and to all the old and new members who stopped by the booth. It was fun, and sometimes heart-wrenching, as we took breaks between sessions to watch the FIFA World Cup! We are ramping up for Discover Madrid, and if you have a forum topic you are interested in submitting, please contact us at info@connect-community.org.

Speaking of influencers, read Calvin Zito’s (aka the HPE Storage Godfather) feature article, “HPE Nimble Storage – A year later.” Hear what customers are saying and how HPE Storage is delivering winning outcomes to valued customers. Goal!!

Read on for more technology news, insights and how-to content.

See you in Madrid!

Stacie Neall
Managing Editor
@sjneall



President's Letter

Greetings HPE Community,

This is the first letter I’ve written for Connect Converge Magazine, so I wanted to introduce myself and tell you about the mission of Connect Worldwide.

I’ve been involved with HP/HPE since 2001, when the Compaq merger took place, working (to this day) with the Mission Critical Server division via an ISV. Throughout this time I have been active in the user group community at a local level (heading the Southern California NonStop User Group), a social level (founding and presiding over NonStop Under 40 to help new talent in the community get acquainted), and now a global level (as Board President of Connect Worldwide). To say it’s been a wild ride would be quite the understatement.

Watching a massive corporation such as HP split in two, and more recently witnessing the company’s mergers and selloffs, has made for quite a story. However, the user group community is still as active and relevant as ever. We at Connect are here to provide the community with the resources it needs to find the information our members are looking for within the still-massive HPE spectrum and, more importantly, to bring all of that content under one umbrella, within easy reach. Through events and publications, we have brought the community together to share information, find solutions, and at times just shoot the breeze over a beer or two.

The Connect team and I are excited for the future of our user group, and we will always be there to make sure it stays at the leading edge of what HPE has to offer. Most recently, we held a Blockchain Forum in London with the goal of bringing together customers and partners from different parts of the HPE spectrum to learn and share information on this new and fascinating technology. We are excited to repeat this forum in New York in September, and we are even more excited to find new technologies and groups within HPE to host future events for. Events like these bring out the best in the community.

I hope this gives everyone a quick intro to myself and the mission of Connect Worldwide. If you see me or any other Connect team members out and about, please say hi and let us know how we can be an even better user group for you going forward. We’re all about Community!

Navid Khodayari
Idelji
Connect Worldwide President


 

Advocacy
Sheltered Harbor Provides Protection for Financial Institutions’ Data


Dr. Bill Highleyman  
Managing Editor
Availability Digest

Dr. Bill Highleyman brings years of experience to the design and implementation of mission-critical computer systems. As Chairman of Sombers Associates, he has been responsible for implementing dozens of real-time, mission-critical systems - Amtrak, Dow Jones, Federal Express, and others.  He also serves as the Managing Editor of The Availability Digest (availabilitydigest.com). Dr. Highleyman is the holder of numerous U.S. patents and has published extensively on a variety of technical topics. He also consults and teaches a variety of onsite and online seminars.  Find his books on Amazon.  Contact him at billh@sombers.com.


Two years ago, dozens of U.S. banks, including Citigroup, JPMorgan Chase, and Bank of America, began working on a secret, ultrasecure data bunker called Sheltered Harbor. The data bunker holds a copy of all bank transaction data to protect it from a devastating cyberattack.

What is Sheltered Harbor?
Sheltered Harbor is an initiative undertaken by the financial services sector to provide an extra layer of protection against cyber risks. It is designed to safeguard the customer accounts and data of financial institutions; its goal is to store account data securely and to recover it even if a bank or brokerage loses its operational capability.

Multiple industry associations collaborated to develop and deliver Sheltered Harbor. They include:

American Bankers Association

Credit Union National Association

Independent Community Bankers of America

Financial Services Forum

Financial Services Information Sharing and Analysis Center (FS-ISAC)

Financial Services Roundtable

National Association of Federal Credit Unions

Security Industry and Financial Markets Association

The Clearing House

These financial services industry trade groups have established new resiliency capabilities to ensure that consumers will be able to access their financial accounts even if their banks or brokerages go out of business.

Large banks pay $50,000 to become members of Sheltered Harbor. Smaller banks pay less. Members receive access to the full set of Sheltered Harbor specifications to ensure secure storage and recovery of their account data.

Sheltered Harbor Provides Data Security
Sheltered Harbor provides data security through multiple mechanisms:

  • It is physically isolated from unsecured networks. It has no connection to the Internet (it is air-gapped).
  • It is redundant and decentralized.
  • It can survive any attack or disaster because the vaults that store the banking transactions are distributed geographically. Any disaster will leave at least one vault operational.
  • It prevents data stored in its vaults from being changed by hackers or other unauthorized personnel.
  • It is owned by each participant.

Customer data stored in a Sheltered Harbor data vault is encrypted and kept private by the institution owning that data. Extracted data is decrypted, validated, formatted, and re-encrypted before it is transmitted to the requesting party via industry-established file formats.

Sheltered Harbor establishes standards to increase the resiliency of participating institutions so that they can reliably access their data. It promotes the adoption of these standards and monitors the adherence of financial institutions to these standards so that consumers benefit from the added protections.

A Backup Buddy System
Sheltered Harbor provides a backup buddy system. Banks choose ‘restoration’ partners that store a vault of one another’s core data, which is updated each night. If one bank goes down, the other can restore accounts from its buddy vault and make customers whole. Thus, redundant backup vaults eliminate the risk of a single point of failure.

Each day, participating banks and brokerage houses convert customer data into a standardized format, encrypt it, save it in air-gapped storage, and put it in the air-gapped storage medium of their restoration partners. Thus, the data is archived in secure vaults that are protected from alteration or deletion.
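Sheltered Harbor’s actual file formats, key management, and vaulting procedures are available only to members, but the nightly flow described above (standardize, encrypt, and archive to your own vault and your restoration partner’s) can be sketched in a few lines of Python. Everything below, from the field names to the paths to the use of Fernet symmetric encryption, is illustrative rather than the Sheltered Harbor specification.

```python
# Minimal sketch of a nightly "data vaulting" job in the spirit of Sheltered Harbor.
# The real specification, file formats, and key handling are member-only;
# the field names, paths, and Fernet encryption here are illustrative assumptions.
import json
from datetime import date, datetime, timezone
from pathlib import Path

from cryptography.fernet import Fernet  # pip install cryptography

def standardize(account: dict) -> dict:
    """Convert an institution-specific record into a common, portable layout."""
    return {
        "account_id": account["id"],
        "customer_name": account["name"],
        "balance_cents": int(round(account["balance"] * 100)),
        "as_of": datetime.now(timezone.utc).isoformat(),
    }

def nightly_vault_export(accounts: list[dict], key: bytes,
                         own_vault: Path, partner_vault: Path) -> None:
    """Standardize, encrypt, and write the archive to both vaults (air-gapped in practice)."""
    payload = json.dumps([standardize(a) for a in accounts]).encode("utf-8")
    ciphertext = Fernet(key).encrypt(payload)          # data is never vaulted in the clear
    archive_name = f"vault-{date.today().isoformat()}.bin"
    for vault in (own_vault, partner_vault):           # restoration partner gets a copy too
        vault.mkdir(parents=True, exist_ok=True)
        (vault / archive_name).write_bytes(ciphertext)

if __name__ == "__main__":
    key = Fernet.generate_key()                        # in reality, managed by the institution
    sample = [{"id": "123", "name": "A. Customer", "balance": 1500.25}]
    nightly_vault_export(sample, key, Path("own_vault"), Path("partner_vault"))
```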

Sheltered Harbor is Complementary to FS-ISAC
FS-ISAC (Financial Services – Information Sharing and Analysis Center) is a U.S. industry trade group representing securities firms, banks, and asset management companies. It is the global financial industry’s resource for cyber and physical threat intelligence analysis and sharing.

FS-ISAC is a member-owned, non-profit organization. It was created by and for the financial services industry to help assure the resilience and continuity of the global financial services infrastructure against acts that could significantly impact the sector’s ability to provide services critical to the orderly function of the global financial system and economy. Founded in 1999, FS-ISAC has over 7,000 members worldwide.

FS-ISAC enables financial institutions to securely store and rapidly reconstitute account information should the data become lost or corrupted. FS-ISAC makes account information available to customers in the event that an institution appears unable to recover from a cyber incident. In this respect, FS-ISAC performs functions similar to those of Sheltered Harbor and adds to Sheltered Harbor’s capabilities.

Summary
Sheltered Harbor was created to provide secure and resilient storage for the financial transactions of banks and brokerages. It is unique in that it is owned by the participating financial institutions.
Will Sheltered Harbor ever use blockchain technology to increase its security and resilience? A blockchain model based on Ethereum has been created; however, it has yet to gain approval from the participating financial institutions.

Acknowledgements
Information for this article was taken from the following sources:
FS-ISAC and Sheltered Harbor; November 23, 2016.
Banks’ underground data vault is evolving – will it use blockchain next?, American Banker; February 16, 2018.
www.shelteredharbor.org

 


 

Top Thinking
 

Teach your children well. Hyperconverged can help.



Chris Purcell

Chris Purcell has 29+ years of experience working with technology within the datacenter. He is currently focused on integrated systems (server, storage, networking, and cloud), which come wrapped with a complete set of integration consulting and integration services.

You can find Chris on Twitter as @Chrispman01 and @HPE_ConvergedDI, and read his contributions to the HPE CI blog at www.hpe.com/info/ciblog.



“Teach your children well. Their IT purgatory will surely go by…”  I’ve stretched the lyrics a bit, but not the meaning. Crosby, Stills & Nash were making a good point in that classic song: In order to prepare our children to be the innovators of the future, it’s really important how we teach them, not just what we teach them. High tech education requires infrastructure that can keep up with the latest technology advances.

The reality is, any datacenter can experience an outage. When servers are down in a college or school district, the applications that students and teachers rely on are down. In legacy IT environments, downed servers can take hours or days to recover. High performance hyperconverged solutions, on the other hand, can get apps back online in seconds. That kind of resiliency and responsiveness in the datacenter helps teachers perform at their best in the classroom, which helps students learn faster.

Technology can be a bottleneck for learning
I have great respect for teachers. They need to simplify the world for their students, keeping up with the latest developments in their specific field and presenting information in simple, clear terms. In a way, they are always on stage, always needing to perform at the highest level in the classroom.

The technology that school districts rely on should be no different: simple, highly available, and continuously performing at the highest levels. In many cases, that’s easier said than done. The latest high tech devices are not always an option for educational facilities with limited resources and tight budgets. School districts that boast cutting-edge education for the students often have to settle for older technology in the IT environment:

  • Siloed IT devices and apps that are complex to manage and difficult to scale
  • Outdated equipment that has all but outgrown the allocated space in the IT closet
  • Slow data recovery capabilities and poor performance that cannot accommodate innovative teaching methods and programs

Hyper simple, hyper cost-effective. Hyperconverged.
For IT administrators, hyperconverged infrastructure (HCI) offers a relatively easy way out of old-school technology for three reasons. The first is that HCI is simple: storage, compute, networking and data services all come together in a single compact building block that is easy to deploy, manage and scale. Second, HCI offers high performance through all-flash technology and built-in dedupe and compression. Third, HCI is cost-effective. In a 2017 study, Forrester Consulting found that HCI reduces total cost of ownership (TCO) by 69% on average compared to traditional IT.

Hyperconverged Infrastructure in today’s top schools
An increasing number of customers have turned aging infrastructures around with HCI. For many of them, their only regret is that they didn’t adopt hyperconverged technology sooner. For example, the Janesville school district budget was extremely limited, but the systems administrator could no longer avoid the IT system’s poor performance issues. She admits their IT department worked in a reactive state, simply putting out fires rather than focusing on more important projects. If a problem arose, the amount of time for a server to reboot, come back up, and get services back online was unacceptable. They chose HPE SimpliVity powered by Intel® to move off of their old SAN and server infrastructure, and to simplify management. The new HCI platform lets them back up and restore 500 GB VMs within seconds, and demonstrates cost savings of over 3x compared to their legacy solution. “That’s a vast difference from the way we used to operate. We’ve gone from sitting around biting our nails, waiting for something to reboot, to an instant, stress-free fix.”

In a recent article, Luther College attests to the performance and simplicity of HPE SimpliVity hyperconverged infrastructure. Students there explore topics like artificial intelligence, virtual reality, and augmented reality. Last year, students sent their experiments to the International Space Station. Their chief technology officer knows: Traditional approaches to learning can stifle education. When it was time to update an aging IT system, they turned to HPE SimpliVity to transform operations. The new HCI was simple to deploy and manage, and it helped them cut costs and save datacenter space. Thanks to hyperconvergence, they’re seeing “incredible performance for our end users. We have countless people bringing in new requirements, which is why we need to be flexible. We must view these new challenges as a chance to help students learn better… The only way we can welcome these demands is because we know we can rely on HPE SimpliVity.”

Tulare school district’s fractured IT environment was becoming increasingly risky and costly to operate and scale. The IT organization relied on an outdated tape-based backup solution that was slow and inefficient. Equipment failures or disasters had the potential to disrupt IT services for hours while servers were rebuilt and data restored. The district deployed HPE SimpliVity infrastructure and reduced backup times from 60 hours to about a minute. HPE SimpliVity’s data efficiencies also help conserve storage capacity and save money. “HPE SimpliVity improved our performance, reliability and data protection,” says the director of technology. “It is probably the best purchase I’ve made.”

To learn more about how hyperconvergence can drive simplicity and efficiency in your datacenter, download the free e-book, Hyperconverged Infrastructure for Dummies.

 


 


Connect Future Leaders in Technology 2017

The Connect Future Leaders in Technology Scholarship is dedicated to the memory of Savannah Buik.

 

Savannah Buik
2018 GRADUATE
DEPAUL UNIVERSITY
BS MATHEMATICS

"Live your truth."
- Savannah Buik

Congratulations to 2017’s Recipients


Caroline Anderson

Morgantown, West Virginia
Penn State University, Computer Science

 

Caroline Anderson is a first-year computer science major at Pennsylvania State University’s Schreyer Honors College. Caroline’s interest in computer science was sparked as a sophomore in high school while enrolled in an AP Computer Science class. Since then, her love for computer science has only continued to grow.

Caroline completed two summer internships at NASA’s Independent Verification and Validation facility, creating a database to categorize potential hazards for critical NASA missions. During her junior and senior years of high school, she was a researcher at West Virginia University’s Human Computer Lab under Dr. Saiph Savage. In this lab, she volunteered for e-NABLE, a nonprofit organization that builds 3D-printed prosthetic hands for children in need, where she guided high school campers in building 20 prosthetic devices. She also created the West Virginia University Statler College Twitter bot @WVUStatlerBot, which answers commonly asked questions for engineering students at West Virginia University.

In high school, Caroline served as the treasurer of both the National Honor Society and Interact Club, and has logged over 200 hours of community service. This summer (2018), she plans to research at Carnegie Mellon University's Human Computer Interaction Institute. Caroline is very excited to learn more about computer science and what area will allow her to best serve others. You can learn even more about Caroline at her website: carolineganderson.com.

 

Afe Addeh

Greenbelt, Maryland
University of Maryland College Park
Computer Science

Afe Addeh is a first-generation, African American student who has achieved many goals and accomplishments since she began her exploration of the field of computer science. In high school, she led her school's CyberPatriot team to the “Top Three” in the state competition two years in a row. Afe has also participated in the Congressional App Challenge since 2014, winning 3rd place for her app, Career Finder, in 2017.

During high school, Afe sought to inspire other female minority students to excel in STEM courses and activities. In 2016, Afe launched the first girl-prioritized club and the first computer programming club in her school: Girls Who Code. Afe will begin her first year in college at the University of Maryland, College Park, where she will major in computer science with a minor in computer graphics/animation.

 



 
Education Corner
 

Kelly Baig

Badging Program Manager
HPE Education Services


In the Spring of 2017, Deloitte published their annual Human Capital Trends survey. In it, they reported that:

  • Skills development is the #2 topic on the minds of CEOs and HR leaders
  • 83% of organizations rate this topic as  important
  • 54% of organizations rate this topic as urgent

In contrast to this need, however, many organizations report challenges in obtaining all of the technology training that their IT teams and technical professionals require. Some of the primary inhibitors to sufficient training are the fast pace of change in technology and organizations’ limited ability to tolerate time out of the office.

In addition, the workforce demographic has changed; millennials and digital natives learn differently and with different motivation than other generations. They value flexibility, collaboration and community. They expect learning to deliver high quality, engaging experiences, and to utilize technology and social platforms as part of the experience.

So what is needed? A blend of the new and the traditional: new content that is modular and can be served on demand, at the point of need, alongside traditional learning methods; digital communities of learners who can interact with each other and collaborate to accelerate and reinforce the learning experience; and digital badging credentials that enable professionals with earned skills to show their capabilities and find each other online for assistance and coaching.

Announcing: HPE Digital Learner

HPE has introduced a modern digital learning platform to provide our customers with deeper and broader access to all types of technical training. This platform provides:

  • An online Digital Learner portal, enabling people to increase their technical skills with hands-on virtual learning experiences presented in tailored learning paths, with supporting materials, videos, labs, and interactive communities of professionals all in virtual digital teams
  • Content Packs, covering the core HPE technology areas like servers, storage, networks, cloud and security along with other adjacent technology
  • Premium Seats, with access to traditional classroom training for licensed seat-holders enabling progression of training without having to budget for each course separately
  • Discussion Forum communities, for interactive collaboration, mentoring and information sharing as part of the training experience
  • Digital Badging credentials, awarded to individual students who complete training journeys and upskill missions
  • Metrics and reporting, to measure team and individual learning progress
  • Education Consulting experts, to help monitor and guide learning journeys, suggesting content and appropriate next steps according to progress and reporting

Learn more



 


Dana Gardner

Analyst Dana Gardner hosts conversations with the doers and innovators—data scientists, developers, IT operations managers, chief information security officers, and startup founders—who use technology to improve the way we live, work, and play.
View an archive of his regular podcasts.


How an insurance innovator built a modern hyperconverged infrastructure environment that rapidly replicates databases to accelerate developer agility.

Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect Voice of the Customer podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on digital transformation success stories. Stay with us now to learn how agile businesses are fending off disruption -- in favor of innovation.

Our next developer productivity insights interview explores how a South African insurance innovator has built a modern hyperconverged infrastructure (HCI) IT environment that replicates databases so fast that developers can test and re-test to their hearts’ content.

We’ll now learn how King Price in Pretoria also gained data efficiencies and heightened disaster recovery benefits from their HCI-enabled architecture. Here to help us explore the myriad benefits of a data transfer intensive environment is Jacobus Steyn, Operations Manager at King Price in Pretoria, South Africa. Welcome.

Jacobus Steyn: Thank you so much for having me.

Gardner: What have been the top trends driving your interest in modernizing your data replication capabilities?

Steyn: One of the challenges we had was the business was really flying blind. We had to create a platform and the ability to get data out of the production environment as quickly as possible to allow the business to make informed decisions -- literally in almost real-time.

Gardner: What were some of the impediments to moving data and creating these new environments for your developers and your operators?

Steyn: We literally had to copy databases across the network and onto new environments, and that was very time consuming. It literally took us two to three days to get a new environment up and running for the developers. You would think that this would be easy -- like replication. It proved to be quite a challenge for us because there are vast amounts of data. But the whole HCI approach just eliminated all of those challenges.

Gardner: One of the benefits of going at the infrastructure level for such a solution is not only do you solve one problem but you probably solve multiple ones, things like replication and deduplication become integrated into the environment. What were some of the extended benefits you got when you went to a hyperconverged environment?

 

Time, Storage Savings

Steyn: Deduplication was definitely one of our bigger gains. We have had six to eight development teams, and I literally had an identical copy of our production environment for each of them that they used for testing, user acceptance testing (UAT), and things like that.

At any point in time, we had at least 10 copies of our production environment all over the place. And if you don’t dedupe at that level, you need vast amounts of storage. So that really was a concern for us in terms of storage.

Gardner: Of course, business agility often hinges on your developers’ productivity. When you can tell your developers, “Go ahead, spin up; do what you want,” that can be a great productivity benefit.


How to solve key challenges With HPE SimpliVity HCI


Steyn: We literally had daily fights between the IT operations and infrastructure guys and the developers because they needed resources and we just couldn’t provide them with those resources. And it was not because we didn’t have resources at hand, but it was just the time to spin it up, to get the guys to configure their environments, and things like that.

It was literally a three- to four-day exercise to get an environment up and running. For those guys who are trying to push the agile development methodology, in a two-week sprint, you can’t afford to lose two or three days.

Gardner: You don’t want to be in a scrum where they are saying, “You have to wait three or four days.” It doesn’t work.

Steyn: No, it doesn’t, definitely not.

Gardner: Tell us about King Price. What is your organization like for those who are not familiar with it?

Steyn: King Price initially started off as a short-term insurance company about five years ago in Pretoria. We have a unique, one-of-a-kind business model. The short of it is that as your vehicle’s value depreciates, so does your monthly insurance premium. That has been our biggest selling point.

We see ourselves as disruptive. But there are also a lot of other things disrupting the short-term insurance industry in South Africa -- things like Uber and self-driving cars. These are definitely a threat in the long term for us.

It’s also a very competitive industry in South Africa. So we have been rapidly launching new businesses. We launched commercial insurance recently. We launched cyber insurance. So we are really adopting new business ventures.

Gardner: And, of course, in any competitive business environment, your margins are thin; you have to do things efficiently. Were there any other economic benefits to adopting a hyperconverged environment, other than developer productivity?


Steyn: On the data center itself, the amount of floor space that you need, the footprint, is much less with hyperconverged. It eliminates a lot of requirements in terms of networking, switching, and storage. The ease of deployment in and of itself makes it a lot simpler.

On the business side, we gained the ability to have more data at-hand for the guys in the analytics environment and the ratings environment. They can make much more informed decisions, literally on the fly, if they need to gear-up for a call center, or to take on a new marketing strategy, or something like that.

Gardner: It’s not difficult to rationalize the investment to go to hyperconverged.

 

Worth the HCI Investment

Steyn: No, it was actually quite easy. I can’t imagine life or IT without the investment that we’ve made. I can’t see how we could have moved forward without it.

Gardner: Give our audience a sense of the scale of your development organization. How many developers do you have? How many teams? What numbers of builds do you have going on at any given time?

Steyn: It’s about 50 developers, or six to eight teams, depending on the scale of the projects they are working on. Each development team is focused on a specific unit within the business. They do two-week sprints, and some of the releases are quite big.

It means getting the product out to the market as quickly as possible, to bring new functionality to the business. We can’t afford to have a piece of product stuck in a development hold for six to eight weeks because, by that time, you are too late.

Gardner: Let’s drill down into the actual hyperconverged infrastructure you have in place. What did you look at? How did you make a decision? What did you end up doing?

Steyn: We had initially invested in Hewlett Packard Enterprise (HPE) SimpliVity 3400 cubes for our development space, and we thought that would pretty much meet our needs. Prior to that, we had invested in traditional blades and storage infrastructure. We were thinking that we would stay with that for the production environment, and the SimpliVity systems would be used for just the development environments.

But the gains we saw in the development environment were just so big that we very quickly made a decision to get additional cubes and deploy them as the production environment, too. And it just grew from there. So we now have the entire environment running on SimpliVity cubes.

We still have some traditional storage that we use for archiving purposes, but other than that, it’s 100 percent HPE SimpliVity.

Gardner: What storage environment do you associate with that to get the best benefits?

 

Keep Storage Simple

Steyn: We are currently using the HPE 3PAR storage, and it’s working quite well. We have some production environments running there; a lot of archiving uses for that. It’s still very complementary to our environment.


Gardner: A lot of organizations will start with HCI in something like development, move it toward production, but then they also extend it into things like data warehouses, supporting their data infrastructure and analytics infrastructure. Has that been the case at King Price?

Steyn: Yes, definitely. We initially began with the development environment, and we thought that’s going to be it. We very soon adopted HCI into the production environments. And it was at that point where we literally had an entire cube dedicated to the enterprise data warehouse guys. Those are the teams running all of the modeling, pricing structures, and things like that. HCI is proving to be very helpful for them as well, because those guys, they demand extreme data performance, it’s scary.

Gardner: I have also seen organizations on a slippery slope, that once they have a certain critical mass of HCI, they begin thinking about an entire software-defined data center (SDDC). They gain the opportunity to entirely mirror data centers for disaster recovery, and for fast backup and recovery security and risk avoidance benefits. Are you moving along that path as well?

Steyn: That’s a project that we launched just a few months ago. We are redesigning our entire infrastructure. We are going to build in the ease of failover, the WAN optimization, and the compression. It just makes a lot more sense to just build a second active data center. So that’s what we are busy doing now, and we are going to deploy the next-generation technology in that data center.

Gardner: Is there any point in time where you are going to be experimenting more with cloud, multi-cloud, and then dealing with a hybrid IT environment where you are going to want to manage all of that? We’ve recently heard news from HPE about OneSphere. Any thoughts about how that might relate to your organization?

 

Cloud Common Sense

Steyn: Yes, in our engagement with Microsoft, for example, in terms of licensing of products, this is definitely something we have been talking about. Solutions like HPE OneSphere are definitely going to make a lot of sense in our environment.

There are a lot of workloads that we can just pass onto the cloud that we don’t need to have on-premises, at least on a permanent basis. Even the guys from our enterprise data warehouse, there are a lot of jobs that every now and then they can just pass off to the cloud. Something like HPE OneSphere is definitely going to make that a lot easier for us.

Gardner: I’m afraid we’ll have to leave it there. We have been exploring how an insurance innovator has built a modern hyperconverged infrastructure that replicates databases very fast for their developers and has also led to a wholesale modernization of their IT environment. We have learned how King Price has also gained data efficiencies and heightened disaster-recovery benefits as a result of their HCI-enabled architecture.

So please join me in thanking our guest, Jacobus Steyn, Operations Manager at King Price in Pretoria, South Africa. Thank you so much.

Steyn: Thank you for having me.

Gardner: And a big thank you to our audience as well for joining us for this BriefingsDirect Voice of the Customer digital transformation success story. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-sponsored interviews.

Thanks again for listening. Please pass this content along to your IT community, and do come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on how an insurance innovator built a modern hyperconverged infrastructure environment that replicates databases to accelerate developer agility. Copyright Interarbor Solutions, LLC, 2005-2018. All rights reserved.




Spotlight
 

“You don’t know what you don’t know; you don’t always know what you’re missing.”

For Janice Zdankus, diversifying the Science, Technology, Engineering and Math (STEM) fields is essential to our world’s growth. Without diverse voices, industry and research lack the insight and perspective needed to solve many of the world’s problems. The issue is becoming increasingly pressing as technology employers experience a shortage of qualified employees; many see helping underprivileged students enter STEM professions as a way to combat this problem.

Zdankus knows what she’s talking about. A thirty-two-year industry veteran, she is the vice president of Quality on Hewlett Packard Enterprise’s (HPE) Customer Experience and Quality team. After graduating with Bachelor of Science degrees in Computer Science and Industrial Management from Purdue University, and later earning an MBA from Santa Clara University, she has developed a keen skill set in several fields, including software engineering and development, product marketing, strategic planning and customer support.

Advocating for a Growing Field

Ever since she began her path in computing, Zdankus has been passionate about encouraging others to pursue careers in computer science and computer engineering. For women in particular, she sees herself as a good example of how they can succeed in a field where the number of women is slowly shrinking.

“The most effective ambassador for a role in engineering and computing can be a woman or minority who is in that role today, because it’s our job to be effective role models who are there for people looking for those who ‘look like them,’ to be able to validate that the field has successful options for them,” she is quick to explain.

Since graduating from Purdue University and joining Hewlett-Packard (now split into two separate companies, Hewlett Packard Enterprise and HP Inc.), Zdankus has been encouraging men and women in underrepresented groups to pursue STEM professions. In 2015, she became a board member of the National Center for Women and Information Technology (NCWIT), a nonprofit that supports women in succeeding in computing-related careers. She also served for over nine years as HPE’s board liaison to the National Action Council for Minorities in Engineering (NACME), another nonprofit, which awards scholarships to highly qualified, underrepresented minority engineering students. That led to her involvement with ABET years later.


Her relationship with NACME is special, as one of Hewlett-Packard’s founders, David Packard, was a board member of NACME at the beginning of the organization’s development. Zdankus feels proud of her company’s efforts to combat inequalities in engineering fields. “It feels good to know we are a pioneer in the industry that placed a priority on inclusion and diversity from the get-go,” she ponders.

Zdankus’ work with NACME paved the way for her to found a new nonprofit, Curated Pathways™, which is housed under a subsidiary of the YWCA Silicon Valley and is in partnership with her alma maters Santa Clara University and Purdue University. Through the program, students use an application to “curate” their interests, ultimately guiding them to a career they may pursue. Using this technology lessens the chance for human bias and focuses their attention on activities and programs that truly produce outcomes, allowing students to find suitable career choices regardless of race, class or gender norms.


Zdankus is currently kicking off Curated Pathways™ at a middle school in the San Francisco Bay Area, where pressure to hire diverse employees is especially high. At the small school of just under 600 students, 74 percent are Latino and 84 percent qualify for free or reduced-cost lunch, an indicator of low socio-economic status. She is enthusiastic about the potential to change these students’ lives, and, if the program is successful, she hopes other schools will eventually implement it nationwide.

 

Defining Support

In addition to working with these organizations, Zdankus also coaches young students. She believes careers in STEM fields are how millennials can make the most significant impact, and she lets her mentees know the value of these professions.

“I remember mentoring a young lady who said, ‘I want to be a doctor, I want to go to medical school, because I can save a life,’” she pauses. “And the kind of feedback I gave her was you can save a life at a time, but we’re talking about saving thousands of lives at a time. You can save thousands of lives at a time with a technology change. Not everybody quite internalizes that.”


Although she may not be on the front line saving lives, Zdankus’ role in support technology saves HPE’s customers many headaches. As the vice president of Quality on HPE’s Customer Experience and Quality team, Zdankus and her team ensure optimal customer experiences with HPE’s products, designing their information and support technology so customers have a better chance of quickly solving a problem on their own.

She made her way to this position after several roles within the company, including her start as a software development engineer and then making her way as a product marketing manager, support manager and director of engineering, among other positions. She says her roles as support manager and director of engineering were positions that prepared her most for her current role, as she was able to hear directly from HPE’s customers and understand what features they wanted in their products.

With an impressive track record, she became known in the company as a change agent, someone who has strategic thinking but who also has the ability to drive an organization forward to adopt and accept new technologies and new ways of doing business.

Her belief in using technology to enhance business, life and educational experiences reflects her ambition to create a more sustainable, efficient and advanced society. To Zdankus, it also shows how the quality of ABET-accredited STEM programs is increasingly more important, as graduating students will be using scientific advancements to change tomorrow’s world.

Janice Zdankus serves on the Industry Advisory Council (IAC) and sits on the Board of Directors.


About ABET

ABET is a forward-thinking, purpose-driven organization recognized by the Council for Higher Education Accreditation. All over the world, ABET accredits college and university technical programs committed to the quality of the education they provide their students.

Based in Baltimore, we are a global organization, with over 3,700 accredited programs in 30 countries in the areas of applied science, computing, engineering and engineering technology at the associate, bachelor and master degree levels.

This article first appeared on the Accreditation Board for Engineering and Technology (ABET) website.

 




Steve Tcherchian, CISSP
Chief Information Security Officer, XYPRO Technology
@SteveTcherchian @XYPROTechnology


Steve Tcherchian, CISSP, PCI-ISA, PCIP is the Chief Information Security Officer and the Director of Product Management for XYPRO Technology. Steve is on the ISSA CISO Advisory Board, the NonStop Under 40 executive board and the ANSI X9 Security Standards Committee. A dynamic tech visionary with over 15 years in the cyber security field, Steve is responsible for strategy and innovation of XYPRO’s security product line, as well as overseeing XYPRO’s risk, compliance and security to ensure the best experience for customers in the mission-critical computing marketplace.


There is quite a large disconnect in the way breaches are evolving versus how security solutions are keeping up to address them. Virtualization adds an entire new layer of complexity to the puzzle. As a security strategist, I’m constantly evaluating what is possible to help identify gaps and opportunities. The one thing I have learned over the course of my career:

The only thing constant in cyber security is that attackers’ methods will continue to evolve.  They get smarter, more resourceful and are impressively ever patient.

The HPE Integrity NonStop server is not only a foundation of the HPE Server business, it is also central to countless mission-critical environments globally.  For the longest time, security of these powerful systems and the “Mission Critical” applications they run remained mostly static and under the radar while high profile attacks on other platforms have taken the spotlight.  That hasn’t lessened the risk to the NonStop server. It’s actually created a gap. With globalization, virtualization and introduction of new technologies like IoT, this security gap will only increase if not addressed.

Interestingly enough, the NonStop server isn’t the only mission-critical enterprise solution in this situation. There are some colorful parallels that can be drawn between applications running on the NonStop server and those running in SAP environments. Both sit in highly mission-critical environments and are vital to an organization’s revenue generation, and they frequently run payments applications like ACI’s BASE24 and other homegrown applications. This creates some interesting security challenges. In a recent article in The Connection magazine, Jason Kazarian, Senior Architect at HPE, described legacy systems as “complex information systems initially developed well in the past that remain critical to the business in spite of being more difficult or expensive to maintain than modern systems.” His article went on to point out the security challenges of legacy applications. In summary, these types of applications tend to be unsupported; security patches aren’t readily available, and when they are, they aren’t applied in a timely fashion for fear of disruption; and they lack many of the security features modern applications would have. This makes detecting and addressing security risks and anomalies an even greater challenge than it already is.

 

Mind The Gap
How can this problem be addressed? Protect what you can. As a first step, be it for a system, application or data, push the risk down the stack to an area that is more controllable by security controls. For example, tokenizing the data used by a legacy application forces an attacker to search for that data through alternate methods, preferably ones better suited for detection.
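As a concrete (and deliberately simplified) illustration of that first step, the toy token vault below shows the general shape of tokenization: the legacy application only ever stores and passes around random tokens, while the real values live in a separately secured store. This is a conceptual Python sketch, not XYPRO’s implementation or a production design.

```python
# Toy token vault illustrating tokenization: the legacy application only ever sees
# random tokens, while the real values live in a separately secured store.
# Conceptual sketch only; not any vendor's product.
import secrets

class TokenVault:
    def __init__(self) -> None:
        self._token_to_value: dict[str, str] = {}   # in practice: an encrypted, access-controlled store
        self._value_to_token: dict[str, str] = {}

    def tokenize(self, value: str) -> str:
        """Return an existing token for the value, or mint a new random one."""
        if value in self._value_to_token:
            return self._value_to_token[value]
        token = "tok_" + secrets.token_hex(16)       # no mathematical relationship to the value
        self._token_to_value[token] = value
        self._value_to_token[value] = token
        return token

    def detokenize(self, token: str) -> str:
        """Only callers authorized to reach the vault can recover the original value."""
        return self._token_to_value[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")           # what the legacy app stores and passes around
assert vault.detokenize(token) == "4111111111111111"
```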

Have a risk-based, layered approach. This will swing the odds in your favor. Perhaps not entirely in your favor, but this approach will provide you with an arsenal you previously did not have: it will create those choke points, provide the visibility needed and help reduce mean time to detection and response.

With the way threats are evolving, those of us responsible for security need to constantly evaluate and assess our capabilities. Let’s take a dive into each layer to explore the benefits they provide in an overall security strategy.


Protection/prevention is the first and most critical layer of any security framework. Without a proper protection layer in place, none of the other layers can be relied upon. Think of the protection layer as the traditional defensive strategy: “the wall built around assets.” This includes defining and implementing a security policy as well as hardening the network, the system and applications. The protection layer is also where users, roles, access control and audits are set up. Key fundamentals to consider as part of the protection layer include:

  • Authentication – Allows a system to verify that someone is who they claim to be. In a HPE NonStop server environment, this can be done using Safeguard, XYGATE User Authentication, or through application authentication.
  • Authorization – Determines what a user can and cannot do on a system. Authorization defines roles and access to resources.  
  • Access Control – Enforces the required security for a resource or object.
  • Logging and Auditing – Ensures that all security events are captured for analysis, reporting and forensics.
  • Encryption and Tokenization – Secures communication and data both in flight and at rest. Examples of products which protect data include VLE, TLS, SSH, Tokenization and more.

  • Vulnerability and Patch Management – Ensure timely installation of all RVUs, SPRs and application updates. Prioritize and take recommended action on HPE Hotstuff notices.

These types of preventative controls are necessary and intended to prevent unauthorized access to resources and data, but they cannot solely be relied on as a sustainable security strategy. Attackers’ motivations and sophistication are changing, therefore when prevention fails, detection should kick in while there is still time to respond and prevent damage.

Detect
In testimony given before the Senate Subcommittee on Science, Technology and Space, famed cryptographer and cyber security specialist Bruce Schneier said:

“Prevention systems are never perfect. No bank ever says: ‘Our safe is so good, we don’t need an alarm system.’ No museum ever says: ‘Our door and window locks are so good, we don’t need night watchmen.’ Detection and response are how we get security in the real world…”

Schneier gave this testimony back in July of 2001, yet in 2018, when organizations are getting hit by incidents they can’t detect, this premise is still valid and critical. In the previous section we discussed hardening systems and building a wall around assets as the first layer of a security strategy. I’m surprised by the number of conversations I have with IT and security folks who still carry the mindset that this degree of protection and compliance is good enough. No matter what level of protection a system has, given enough time, an attacker will find a way through. The faster you can detect, the faster you can respond, preventing or limiting the amount of damage a security breach can cause.

Detection is not a simple task. The traditional method of detection is setting up distinct rules or thresholds. For example, if a user fails 3 logons in a span of 5 minutes, detect it and send an alert. In most cases that rule is explicit: if the failed logon events spanned 20 minutes, or worse yet, 10 days, it would not be detected. The limitation of relying on rules for detection is that they will not alert on what they don’t know about. Those low-and-slow incidents and unknown unknowns – activity that is not normal on a given system – will fly under the radar, and no one will be the wiser until you get a call from the FBI.
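To make the limitation concrete, here is that example rule, 3 failed logons within 5 minutes, written as a simple sliding-window check in Python. Spread the same three failures over 20 minutes or 10 days and the rule never fires, which is exactly the low-and-slow blind spot just described. The class name and thresholds are illustrative only.

```python
# The "3 failed logons within 5 minutes" rule as a sliding window.
# The same failures spread over a longer period never trigger it. Illustrative only.
from collections import deque

class FailedLogonRule:
    def __init__(self, threshold: int = 3, window_seconds: int = 300):
        self.threshold = threshold
        self.window = window_seconds
        self.events: dict[str, deque] = {}            # user -> timestamps of recent failures

    def record_failure(self, user: str, ts: float) -> bool:
        """Return True (alert) if the user hit the threshold inside the window."""
        q = self.events.setdefault(user, deque())
        q.append(ts)
        while q and ts - q[0] > self.window:          # drop failures older than the window
            q.popleft()
        return len(q) >= self.threshold

rule = FailedLogonRule()
print(rule.record_failure("ops1", 0))      # False
print(rule.record_failure("ops1", 60))     # False
print(rule.record_failure("ops1", 120))    # True  -- 3 failures in 2 minutes
slow = FailedLogonRule()
print(any(slow.record_failure("ops2", t) for t in (0, 600, 1200)))  # False -- same failures, spread out
```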
The other challenge is correlating events from multiple data sources. Let’s look at the incident diagram below.

[Incident diagram: correlated events from EMS, Safeguard and XYGATE]

In this incident pattern, we have events from EMS, Safeguard and XYGATE. The NonStop server could send each individual data source to a Security Information and Event Management (SIEM) solution, but the SIEM would not have any context to detect the incident pattern as suspicious behavior. A security analyst could create rules to detect the incident pattern, but that’s just one use case. The traditional method is to scour through event audit records, try to put the pieces together and then create a rule to detect that pattern in the future. The weakness in that thinking is that the incident has already occurred. You’re putting a rule together on the off chance it will happen again. However, it’s not reasonable or possible to anticipate and define every possible incident pattern before it happens.

A third area of concern is profiling a system and its behavior to understand what is normal for users, applications and the system itself, in order to recognize when activity is not normal. This can be accomplished by evaluating the system and its configuration, profiling the system over a period of time, profiling user behavior, highlighting risk and using a variety of other intelligence methods. This is where machine learning has a significant advantage. No human could possibly evaluate the volume of data needed to make these types of determinations at the speed required by today’s standards. Machine learning is a type of artificial intelligence that enables the system to teach itself. Explicit rules are no longer the lone method of detection. Machine learning can profile a system or network over a given amount of time to determine what is normal and isolate what is not. Inserting machine learning into the solution process significantly increases the ability to stay on top of what is going on with a given system, user, network or enterprise.
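A minimal sketch of what such profiling looks like in practice: baseline a per-user metric (say, logons per day) over a training period, then flag observations that deviate sharply from that user’s own normal. Real machine-learning profiling uses far richer features and models than this simple statistical baseline; the Python below only shows the shape of the idea.

```python
# A minimal behavioral baseline: profile a per-user metric over a training period,
# then flag values that deviate sharply from that user's own "normal."
from statistics import mean, stdev

class BehaviorBaseline:
    def __init__(self, history: list[float], sigmas: float = 3.0):
        self.mu = mean(history)
        self.sigma = stdev(history) or 1e-9           # guard against a zero-variance history
        self.sigmas = sigmas

    def is_anomalous(self, observed: float) -> bool:
        """Flag values more than `sigmas` standard deviations from the profiled mean."""
        return abs(observed - self.mu) > self.sigmas * self.sigma

# Thirty days of "normal" activity for one user, then two new observations.
history = [42, 38, 45, 40, 41, 39, 44, 43, 40, 37] * 3
baseline = BehaviorBaseline(history)
print(baseline.is_anomalous(41))    # False -- within the user's usual range
print(baseline.is_anomalous(400))   # True  -- an order of magnitude above normal
```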


Alert
The third layer relies on alerting. The challenge most environments face as they grow, and as their infrastructure becomes more chaotic with more tools, more users, more data and more events, is that they alert too much or too little. How does one know what to act on and what is just noise? There are solutions that position themselves as being able to do data analytics, but that ends up generating more data from existing data. Someone still needs to determine whether a newly formed alert is actionable or just noise.

Going back to our previous failed logon example, if we were to receive 15 different alerts for the same rule, how can one know which alert to pay attention to and which to safely ignore? If you’ve ever been responsible for responding to security alerts, you know this creates alert fatigue. Back in my early days, mass deleting emails of similar types of alerts was one of my favorite things to do.

Contextualization allows the system itself to determine what is actionable and what is just noise. A solution like XYGATE SecurityOne can evaluate each potential alert and, based on activity that happened previously for that user, IP, system and so on, determine whether the reported activity is business as usual or a serious issue that needs attention. Creating new data and new alerts from existing data doesn’t solve the problem. Applying context to the new incidents generated helps focus efforts on those incidents that truly need attention. Once an account changes hands, it will behave slightly differently.
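The sketch below illustrates the general idea of contextualization, not SecurityOne’s actual logic: before surfacing a raw alert, compare it against what has previously been seen for that user and source IP, and only escalate what falls outside that history. The fields and the triage rule are hypothetical.

```python
# Sketch of adding context to a raw alert before surfacing it: compare the activity
# against what has previously been seen for that user, and only escalate the unusual.
# The fields and scoring here are hypothetical, not XYGATE SecurityOne's actual logic.
from dataclasses import dataclass, field

@dataclass
class UserContext:
    known_ips: set[str] = field(default_factory=set)
    usual_hours: set[int] = field(default_factory=set)   # hours of day seen during profiling

def triage(alert: dict, context: dict[str, UserContext]) -> str:
    """Return 'actionable' or 'noise' based on prior activity for the alerting user."""
    profile = context.get(alert["user"])
    if profile is None:                                   # never-seen user: always worth a look
        return "actionable"
    unusual_ip = alert["source_ip"] not in profile.known_ips
    unusual_hour = alert["hour"] not in profile.usual_hours
    return "actionable" if (unusual_ip or unusual_hour) else "noise"

context = {"ops1": UserContext(known_ips={"10.0.0.5"}, usual_hours=set(range(8, 18)))}
print(triage({"user": "ops1", "source_ip": "10.0.0.5", "hour": 10}, context))   # noise
print(triage({"user": "ops1", "source_ip": "203.0.113.9", "hour": 3}, context)) # actionable
```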

Context is Key.


Respond: Deploy your army
For any of the first three layers to produce value, there needs to be a proper incident response plan. Responding allows you to deploy countermeasures, cut off access, send the attacker to a mousetrap or take other actions that help minimize the impact of a breach and speed recovery from it.

Containing the breach and quickly recovering from it are the most important steps of this layer. Response and containment comprise a number of simultaneous activities that help minimize the impact of a breach. These may include, but are not limited to:

  • Disabling accounts
  • Blocking IPs and Ports
  • Stopping applications or services
  • Changing administrator credentials
  • Additional firewalling or null routing
  • Isolating systems

This is necessary to slow down or stop an attack, as well as to preserve evidence. Evidence of the attack is generally gathered from audit logs, but when that is coupled with detection and analytics tools, the information can be accessed in a much quicker and more granular fashion. Being able to preserve evidence is key to forensic investigation of the breach and is also important for prosecution.

Once all the pieces fall into place and there is an incident alert that requires response, how will your organization deal with the issue? Breach incidents are hardly ever the same. There needs to be a level of categorization and prioritization for how to deal with specific incidents. In some cases, you may want to slowly stalk your attacker, whereas in others the sledgehammer approach may be the only thing that can preserve data. Does everyone understand their assigned roles and responsibilities? Is there someone in charge? Is there a documented plan? All of these are considerations that need to be accounted for as part of response. This can be summarized in two words – BE PREPARED.

Resources
On the HPE NonStop server, the protection layer can be addressed by properly configuring Safeguard, implementing protection of data in flight and data at rest, and deploying the third-party security tools available for the system. For alerting and detection, XYGATE Merged Audit with HPE ArcSight can provide the tripwires and alarms necessary for proper detection. For further detail on how to properly protect a NonStop server, HPE has published the HPE NonStop Security Hardening Guide. XYPRO has also published a 10-part blog series on how to properly protect a NonStop server.

For the next generation of detection and alerting, XYPRO’s newest offering, XYGATE SecurityOne (XS1), brings risk management and visibility into real time. XS1’s patented technology correlates data from multiple HPE Integrity NonStop server sources and detects anomalies, using intelligence and analytics algorithms to recognize event patterns that are out of the ordinary and suspicious for users, the system and the environment. Coupled with SIEM solutions, XS1 can provide a constant, real-time and intelligent view of actionable data in a way that has never been seen before.

Strong technology and process is important, but people are paramount to any successful security strategy. Regular security training and development on industry best practices, security trends and attack evolution should be factored into any security program. Without ongoing training and reinforcement of people, the gap only has an opportunity to widen. An organization’s  most valuable resource is the people hired to provide security and close the gap. Use them wisely and ensure they have the tools and training to provide the layers of defense required.

Cyber criminals don’t sit around waiting for solutions to catch up. Security complacency ends up being the Achilles’ heel of most organizations. Because of its unique attributes, security on the NonStop server needs to be addressed in a layered approach, and risk management is a big part of the process. Putting the layers in place so that risk can be highlighted and addressed as early as possible is key to dealing with upcoming challenges. This will hopefully help bridge the gap between attacks and security.

We need to recognize the paradigm shift in how we approach security, especially in a virtual world, and understand that an attacker's ability to stay one step ahead of most defenses is central to their strategy. As the NonStop platform evolves and becomes more interconnected, what was put in place previously to address security will not be sustainable going forward. No matter how vendors position their solutions, security is hard and doing the right thing is hard, but that doesn't mean security professionals need to work harder.

From a security professional's perspective, cyber criminals will always be viewed as waging war: relentlessly driving to break into systems, get to data, wreak havoc and cause disruption to fulfill their malicious objectives. Meanwhile, cyber security staff need to act more cautiously and deliberately to avoid being seen while following the enemy. With the proper security layers in place, the enemy will be thwarted by deliberate masking, redirection and detection that hides where the data really is and alerts when the enemy is near. We continue to get smarter by blocking, hiding and redirecting things away in response to attacks. We just have to keep it up and evolve with the technology around us.


Learn more about XYGATE Security

 
COend.png

 

bannerFeature.jpg
 

calvinzito.jpg

Calvin HPE Blogger & Storage Evangelist

Calvin Zito is a 35-year veteran of the IT industry and has worked in storage for 27 years. He's an 8-time VMware vExpert. An early adopter of social media and active in communities, he has blogged for 10 years.
You can find his blog at hpe.com/storage/blog

He started his "social persona" as HPStorageGuy and, since the HP separation, manages an active community of storage fans on Twitter as @CalvinZito.

You can also contact him via email at calvin.zito@hpe.com


 
 
P1060627.JPG

It’s been just over a year since we acquired Nimble Storage. I was thinking about it recently as we had an HPE Nimble Storage related announcement at the beginning of May. I’ll talk more about that in a bit but I wanted to share some thoughts about the last year.

No one was more surprised than I was when I got an email from my manager the night before we were going to announce the acquisition. I had guessed we were going to acquire someone, because he was asking me questions about how long it would take to post an article on Around the Storage Block, but Nimble Storage wasn't who I would have guessed.

In hindsight, I really didn’t understand Nimble Storage. I had heard people talk about InfoSight and Predictive Analytics but I did not take time to understand the value.

HPE InfoSight by the numbers

It’s pretty easy to understand the value of InfoSight. A few numbers tell the story:

  • 86%: That's the percentage of customer cases that are opened, resolved, and closed before the customer is even notified. 86%! That's huge.
  • 79%: 79% lower storage operation costs. Enterprise Strategy Group quantified the impact HPE InfoSight has on IT costs and time spent managing storage. Research is based on an unbiased study of HPE and non-HPE customers.
  • 45 minutes: When there is a problem that needs attention from HPE Support, those issues go to a Level 3 (L3) engineer and that expert solves the issue on average in 45 minutes.

We recently announced the availability of InfoSight for 3PAR. And with many 3PAR systems taking advantage of InfoSight, we are seeing results there too:

  • A customer was about to spend nearly a million dollars on additional 3PAR and compute capacity because of performance issues in a VDI environment with over 1,500 VMs. After they started using InfoSight on 3PAR, they found a few VMs that were hammering the 3PAR with IOs after users logged off, due to a virus scan issue. With InfoSight cross-stack analytics, they discovered they didn't need more storage and compute, and they averted a large infrastructure spend.
  • The 3PAR InfoSight team has found a number of signatures that let them proactively address problems. With this information, 85% of over 1,000 cases have been automatically opened and resolved.
  • A customer with multiple 3PAR arrays now using InfoSight improved their performance by over 10X. InfoSight provided performance insights that allowed the customer to rebalance their workload and see the performance dramatically improve.

The foundation has been established to enable predictive support for 3PAR and I expect we’ll have more to say about our vision of autonomous infrastructure at Discover in Las Vegas.

WATCH! Bill Philbin, HPE CTO, discussing future of HPE InfoSight 

Customers speak out about InfoSight

IFDS-logo-Portrait-Corporate-Blue-1.png

International Financial Data Services (IFDS) is a financial services company in Toronto, Canada. They have both HPE Nimble Storage and 3PAR in their environment. Kent Pollard is a Sr. IT Architect at IFDS. We asked him about his experience using InfoSight. He said, “At International Financial Data System, we rely on HPE InfoSight to manage and monitor all of our HPE Nimble Storage and 3PAR arrays. We are really pleased to see that HPE is doing what they said they would by bringing the power of InfoSight to 3PAR”.

I wanted to see if Kent had any tips for 3PAR customers thinking about using HPE InfoSight. Kent said, “Don't be afraid to use InfoSight. No company data is collected by InfoSight, so you do not need to worry about your data being stored offsite. It has a lot of good information and as new HPE hardware is added to it to collect data, you will have more information you will easily access about your infrastructure.”

3tyuRlW__400x400.jpg

Wade Lahr, System Administrator from Children's Hospital Association said, "We have greater insight into our usage and which servers and applications impact SAN performance. Having encryption on the (HPE Nimble Storage) array has been cost-saving by reducing the number of software encryption licenses needed on our SQL servers."  Wade concluded by saying, "I would recommend HPE Nimble AF (all flash) arrays solely based on performance, ease of administration, and superior support model." 2

pratt-regional-medical-center-logo.jpg

Dustin Newby, IT Director at Pratt Regional Medical Center also had very good things to say about how InfoSight has improved their infrastructure. “By simply moving our server and VDI workloads to our HPE Nimble All Flash Arrays, we were able to reduce help desk incidents by 67% including after-hours callbacks due to performance related issues. The array has paid for itself in reduced overtime as well as user and support staff satisfaction. It’s quite simply the most satisfying purchase I have ever made.”  

When Dustin was asked if he would recommend HPE Nimble Storage, he said, “Yes, it is the only array I have ever purchased that has lived up to what was sold. I was completely blown away at how easy it was to install and we saw immediate performance improvements. InfoSight offers a much better window into our environment than I have experienced from other vendors. …. I can’t say enough about support either. It is so nice talking with a knowledgeable person … that actually fixes your problem.” 2

para.jpg

Brock Griffin is the IT Director at Parametric Portfolio Associates, and when asked whether he would recommend HPE Nimble Storage, his response was, "Absolutely. The ease of use, interoperability, and the InfoSight data makes this array a no-brainer for any company looking for an all-flash solution." 2

ECS.jpg

Joyce Lim, Senior Manager at ECS Pericomp sums it up nicely. “Zero management effort needed with the help of InfoSight.” 2

 

 

The HPE Nimble Storage news: Bigger, faster, better

I mentioned we had an announcement in early May. I’ll briefly summarize it and point you to Around the Storage Block where you can read all the details. There were four parts to the announcement:

  • HPE Store More Guarantee: Lots of all-flash vendors quote their deduplication and compression ratios, and honestly, everyone is going to have pretty similar numbers. What is key to getting the most capacity out of an all-flash array is minimizing overhead such as the array's OS, RAID, sparing, garbage collection, and other things that take up flash capacity before you start to load your data. With HPE Nimble Storage, we're so confident that you'll have more effective capacity than our competition that we guarantee it. Check out the post on ATSB that dives deeper into the HPE Store More Guarantee.
  • Storage Class Memory (or SCM) and NVMe ready: SCM and NVMe are both important technologies to the future of all-flash. And with our HPE Timeless program, the new Nimble Storage arrays can take advantage of both these technologies. For more, read the post on ATSB from the HPE Nimble Storage CTO.
  • New Nimble Storage Hardware:  There’s too much here to cover it briefly but we announced new Adaptive Flash (hybrid arrays) and All Flash hardware. I talk about the new models and some of the highlights in my latest What’s new with Nimble Storage ChalkTalk.
  • Inline always-on deduplication: The last bit of news here that I’ll share is that we now have inline always-on deduplication across the Adaptive Flash Array family, except the HF20C which is cost optimized for compressible data and doesn't support deduplication.

Get more info

Here are some links that you can check out:

You can see all of our blogs related to the news on ATSB under the label Storage News. We use this label on all of our storage news blog posts, and the latest will be at the top of that link, so check it out.

2 Source: TechValidate research findings from surveys of HPE customers and users.

 

^
TABLE OF CONTENTS

 
COend.png


 

Marty.jpg

Marty Edelman Creative System Software, Inc. - CTO

Since leaving The Home Depot, Marty Edelman has provided strategic guidance to organizations wishing to modernize their IT infrastructures. While at Home Depot, he was responsible for the interconnected payments team, which was responsible for all payment processing.

Edelman has been involved in the IT field for more than 30 years. As an independent consultant, he founded a small consultancy firm that specialized in developing high-volume mission-critical solutions for Fortune 500 companies. He and his team helped to build the UPS Tracking System, the NYSE Consolidated Trade and Quote systems, and the S.W.I.F.T. next-generation computing platform.



If I were writing this in the early 2000s, I would have started with the line "open any newspaper and the headline will be screaming about…" but since this is 2018, and newspapers have pretty much gone the way of the dodo bird, I will start with: "open any web browser or news app…" My point is that things have evolved very quickly over the last 20 years thanks to the internet. We are all so fully connected that sometimes we forget how much of our lives are lived online. None of us can live without the internet or our connected gadgets. The convenience of having a small computer in our pocket is immeasurable. Kids born in the last 30 years don't know what an encyclopedia is; if they want to explore a topic, they just fire up Wikipedia or Google and voilà, they have the combined knowledge of the world at their fingertips.

 
 

Everything online is hackable; if information is on a computer connected to the Internet, it is vulnerable

All this knowledge and access has a downside that you can't ignore: the security implications of living on the web. Open any web browser or app these days and there is sure to be a story about how some company has experienced a data breach. Everything online is hackable; if information is on a computer connected to the Internet, it is vulnerable. In the last few months the TSA, Verizon, Equifax, NSA, Uber, CIA, US Air Force, Deloitte, and Alteryx, just to name a few, have all lost billions of sensitive data elements. If you examine just the Equifax and Alteryx breaches, you will quickly determine that pretty much every person living in the United States has been impacted. Equifax lost 140 million records and Alteryx lost 123 million records, each of which contained Personally Identifiable Information (PII) about US citizens. In most cases, the people whose data was lost didn't consent to, or even know about, their PII being stored by these companies.


Sometimes, it’s not your fault!

The latest threats in the security arms race – Meltdown, Spectre, Ryzenfall, Masterkey, Fallout, and Chimera – enable hackers to steal sensitive information from a computer's memory or install malware during startup. These are an entirely new class of attacks and are probably just the tip of the spear for this type of vulnerability. These flaws are seismic events because, even though the CPU manufacturers and software vendors are deploying patches, the patches aren't perfect. Totally eradicating these flaws will require a new generation of computer processing chips.


Every second of every day, 59 records are lost or stolen. Juniper Research predicts that by 2020, the average cost of a data breach will reach $150M (The Future of Cybercrime & Security). Since 2013, roughly 10 billion data records have been lost or stolen; only about 4% of them were encrypted or tokenized, which rendered them useless, and the rest are most likely for sale on the dark web. To help companies deal with these breaches, numerous standards describing how data should be protected have evolved over the last few years. In response to this hostile environment, legislators and industry leaders have developed, and are constantly updating, standards and regulations for data security.

Chart1.png

Standards and Regulations Driving Change

Image2.jpg

GDPR
Starting May 25, 2018, a new set of rules takes effect in the European Union that makes having a data breach much more than a bad public relations event. These rules, called the General Data Protection Regulation (GDPR), define and strengthen the rights that consumers have when they are impacted by a data breach.

Most corporations limit the data fields they consider sensitive to things such as name, address, date of birth, Social Security number and driver's license number; the GDPR adds things that can be used to track a person, including GPS data, genetic and biometric data, browser cookies, mobile device identifiers (UDID and IMEI), IP addresses, MAC addresses (a unique number that is part of your network adapter) and application user IDs, just to name a few.


Additionally, the GDPR requires corporations to tell their users what personal data is collected about them and how it is processed. Any data they collect must have controls to ensure its privacy. Perhaps the most interesting component of the GDPR is that any company with over 250 employees will be required to have a Data Protection Officer (DPO) who is responsible for securing the corporation's data assets. The GDPR has real consequences for companies that experience a data breach, with fines of up to €10 million or 2% of worldwide annual turnover (and up to €20 million or 4% for the most serious infringements), whichever is higher!

Once the GDPR takes effect, companies will need to either encrypt or tokenize almost all of their data to be compliant (data protection by design and by default). They will need to be able to remove a user's data upon request, known as the right to erasure, or face fines and public backlash. While the GDPR requires companies to do an enormous amount of work, it will make consumers much safer.

GDPR requires companies to document their security controls and to demonstrate that they are compliant with them. Corporations will need to proactively monitor, detect, and defend their data assets.


It is not a question of if you will be attacked, but when.


HIPAA
The United States Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy Rule established standards to protect individuals' medical and personal health information. It applies to health plans, health care clearinghouses, and health care providers that conduct transactions electronically. HIPAA requires companies that handle personal health information to fully protect those records from unauthorized access, both at rest and in motion. Since 2015, over 200M patient records have been lost.

PCI DSS
The Payment Card Industry Data Security Standard (PCI DSS) is an information security standard for organizations that process, store, or transmit credit card data. The PCI standard is mandated by the card brands and administered by the Payment Card Industry Security Standards Council. The standard was created to increase controls around cardholder data in order to reduce credit card fraud. Requirements 3.3 and 3.4 are of particular interest as they directly address how credit card numbers, referred to as Primary Account Numbers (PAN), can be used.

Requirement 3.3 states: Mask PAN when displayed (the first six and last four digits are the maximum number of digits to be displayed), such that only personnel with a legitimate business need can see more than the first six/last four digits of the PAN.

Requirement 3.4 states: Render PAN unreadable anywhere it is stored (including on portable digital media, backup media, and in logs).
Industry best practice is to tokenize the PAN, which allows a business to perform the tasks it deems necessary while still protecting the card number.
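As a concrete illustration of Requirement 3.3, here is a minimal Python sketch of PAN masking that displays at most the first six and last four digits. It is an illustrative example only, not a compliance implementation, and it assumes the PAN has more than ten digits.

```python
# Illustrative PCI DSS 3.3-style masking: keep the first six and last four
# digits of the PAN and replace everything in between with '*'.
def mask_pan(pan: str) -> str:
    digits = [c for c in pan if c.isdigit()]
    masked = digits[:6] + ["*"] * (len(digits) - 10) + digits[-4:]
    return "".join(masked)

print(mask_pan("4111 1111 1111 1111"))   # -> 411111******1111
```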


NIST 800-53
In the United States, the National Institute of Standards and Technology (NIST) has issued Special Publication 800-53 – Security and Privacy Controls for Federal Information Systems and Organizations – a 450-page catalog of security and privacy controls. Alongside it, the NIST Cybersecurity Framework provides a simple and logical framework to help prioritize and address key risks.


The 5 main points of this framework are:

  1. Identify – Asset, Governance, and Risk Management
  2. Protect – Access Controls, Training, Processes, and Policies
  3. Detect – Monitoring, Event Management, and Detection Processes
  4. Respond – Analysis, Communications, and Mitigation
  5. Recover – Improvements, Communications, and Planning

While all 5 points of the framework are important, number 2 – Protect – is the one that most companies seem to struggle with. If the data is properly protected, the consequences of a data breach are greatly reduced.

Having lived through a data breach, I can tell you that these companies didn't want to lose their customers' data and they didn't skimp on security; they were simply the losers of the latest arms race – the race between bad actors and corporations. In the newspaper age, it wasn't feasible to steal data at the massive scale we are seeing today. Data was on paper or contained within a private network that had very limited outside access. Today everything is interconnected with everything else. That makes our lives better, but it also creates a whole host of new attack vectors. Security professionals and military professionals talk about defense in depth, where the attacker must get through many layers of defense before reaching the target objective. In the military, when attackers get through the outer defenses they still must get past the folks making the last stand; in the IT world, that last stand must be tokenization and encryption!

Tokenization replaces sensitive data with random characters while preserving the format of the original data element. The token has no value; if a data breach does occur, the tokenized data elements are worthless to the thief.


No matter how many cyber-attacks you manage to prevent, you can never assume you’re stopping them all.


Data-Centric Security
Data-centric security is an approach that emphasizes the security of the data itself, rather than the security of the networks, servers, or applications where the data lives. There are two common methods used to protect data: tokenization and encryption. Tokenization replaces sensitive data with tokens that are meaningless on their own, so its security is not compromised if the tokens are exposed. Encryption renders the data useless without the key that was used to encrypt it. Companies should use both tokenization and encryption to protect their digital assets.

Tokenization
Tokenization preserves the characteristics of the data, such as its type (numeric, alpha, alphanumeric) and length, which makes implementation easier for companies.
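To show what preserving the characteristics of the data means in practice, here is a minimal vault-style tokenization sketch in Python. It is only a conceptual illustration under simplifying assumptions (an in-memory dictionary stands in for a hardened token vault); commercial tokenization products work very differently under the hood.

```python
# Conceptual vault-style tokenization: swap each value for a random token of
# the same type and length, keeping the mapping in a protected lookup table.
import secrets
import string

_vault = {}      # token -> original value (would live in a hardened store)
_reverse = {}    # original value -> token, so repeated values reuse one token

def tokenize(value: str) -> str:
    if value in _reverse:
        return _reverse[value]
    while True:
        # Preserve format: digits stay digits, letters stay letters,
        # separators (dashes, spaces) pass through unchanged.
        token = "".join(
            secrets.choice(string.digits) if c.isdigit()
            else secrets.choice(string.ascii_uppercase) if c.isalpha()
            else c
            for c in value
        )
        if token not in _vault and token != value:
            break
    _vault[token], _reverse[value] = value, token
    return token

def detokenize(token: str) -> str:
    return _vault[token]

t = tokenize("4111-1111-1111-1111")
print(t)               # same length and pattern, but random digits
print(detokenize(t))   # 4111-1111-1111-1111
```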

Image3.jpg

Encryption
Encryption doesn't preserve the format of the data, so it requires more work to implement (field size changes, encrypt/decrypt operations whenever the data is used, and a hash added so the data can still be searched).
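The following Python sketch illustrates those points: the ciphertext is larger than the original field, and a separate keyed hash is stored so exact-match searches remain possible without decrypting. It assumes the third-party cryptography package and trivial in-code key handling, so treat it purely as an illustration of the pattern, not a production design.

```python
# Illustrative encrypt-and-hash pattern: store ciphertext plus a keyed hash
# that supports exact-match lookups without revealing or decrypting the value.
import hmac, hashlib
from cryptography.fernet import Fernet   # pip install cryptography

enc_key = Fernet.generate_key()          # in practice, from key management
hash_key = b"separate-secret-for-search-hashes"
cipher = Fernet(enc_key)

def protect(value: str) -> dict:
    return {
        "ciphertext": cipher.encrypt(value.encode()),   # larger than the original field
        "search_hash": hmac.new(hash_key, value.encode(), hashlib.sha256).hexdigest(),
    }

def reveal(record: dict) -> str:
    return cipher.decrypt(record["ciphertext"]).decode()

row = protect("1977-04-22")   # e.g. a date of birth
probe = hmac.new(hash_key, b"1977-04-22", hashlib.sha256).hexdigest()
print(probe == row["search_hash"])   # True: searchable without decrypting
print(reveal(row))                   # 1977-04-22
```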

Image4.jpg

Data is a critical asset that crosses traditional boundaries (on-premises, hybrid, and cloud) and requires a scalable, fault-tolerant solution that can both tokenize and encrypt it to ensure it stays protected. Once data has been properly protected, a corporation maintains its regulatory compliance while shielding the data from hacking, fraud, and ransomware. Gartner Research has said that, over the past few years, tokenization has emerged as a best practice for protecting sensitive fields or columns in databases.

With both the PCI DSS and GDPR requiring security measures such as tokenization, one company, comforte, has emerged as having best-in-class solutions. Its products integrate with business applications without requiring existing applications to be rewritten, while also providing intelligent APIs for new applications. Its sophisticated and flexible framework allows multiple layers of data protection for new and existing applications, and in many cases data protection can be achieved without any application changes. You can read this success story for a real-world example of how their products work.

 

^
TABLE OF CONTENTS

 
COend.png


bannerDianaSuperdome.jpg
 

DianaCortes.jpg

Diana Cortes Marketing Manager, HPE Mission Critical Solutions

Diana Cortes is Marketing Manager for Mission Critical Solutions at Hewlett Packard Enterprise. She has spent the past 20 years working with the technology that powers the world's most demanding and critical environments, including HPE Integrity NonStop, HPE Integrity with HP-UX, and the HPE Integrity Superdome X and HPE Integrity MC990 X platforms.

 


Last December, HPE announced the world’s most scalable and modular in-memory computing platform, HPE Superdome Flex—a compute breakthrough to power critical applications, enable real-time analytics and tackle data-intensive high performance computing (HPC) workloads.

In this article, I’ll be taking an in-depth look at the HPE Superdome Flex modular, scalable architecture and the capabilities that make it unique in the industry.

Scaling beyond the capabilities of Intel
Like most other x86 server vendors, HPE uses the latest Intel® Xeon® Scalable processor (codenamed Skylake) in its latest-generation servers, including HPE Superdome Flex. Intel's reference design for these processors uses the new UltraPath Interconnect (UPI), which limits scaling to 8 sockets. Most vendors using these processors base their server designs on this "glueless" interconnect method, but unlike them, HPE Superdome Flex uses a unique modular architecture that can scale beyond the capabilities of Intel: from 4 to 32 sockets in a single system.

We did this because we recognized the market need for platforms that can scale beyond Intel's 8-socket limit, especially today, when data sets are growing at an unprecedented pace and customers need scale-up capacity to support growing workloads. In addition, because Intel focuses UPI on 2- and 4-socket servers, 8-socket "glueless" servers become bandwidth challenged. The HPE Superdome Flex design delivers high bandwidth even when the system grows to the largest configurations.

Price/performance advantages over other systems
The HPE Superdome Flex modular architecture is based on a 4-socket chassis that can scale to 8 chassis, for a total of 32 sockets in a single-system compute powerhouse. There are many processor options to choose from, spanning the cost-efficient Gold to the high-end Platinum "flavors" of the Xeon Scalable processor family.

This choice of Gold and Platinum processors delivers great price/performance advantages over smaller systems. For example, in a typical 6TB memory configuration, HPE Superdome Flex can deliver a lower-cost, higher-performance solution than competitive 4-socket offerings. Why? Because of their design, other 4-socket systems are forced to use 128GB DIMMs, which are a lot more expensive than the 64GB DIMMs an 8-socket HPE Superdome Flex can utilize. At the same memory capacity, an 8-socket/6TB HPE Superdome Flex will deliver double the compute power, double the memory bandwidth and double the I/O capability, and it will still be more cost effective than a competitive 4-socket/6TB product.

Similarly, for a competitive 8-socket/6TB configuration, HPE Superdome Flex can deliver a lower-cost, higher-performance 8-socket solution. How? While others are forced to use more expensive Platinum processors because of their design, an 8-socket HPE Superdome Flex can use lower-cost Gold processors to give you the same memory capacity.

In fact, of the platforms based on Intel Xeon Scalable processors, Superdome Flex is the only one able to deliver 8 sockets using the cost-effective Gold variant (Intel's "glueless" design supports 8 sockets only through the more expensive Platinum type). HPE Superdome Flex also comes with a variety of core-count choices, enabling you to map the number of cores per processor to your workload requirements, from as few as 4 cores to as many as 28 cores per processor.

Scaling up: why it matters
The ability to scale as a single system, or scale up, delivers several advantages for those vital workloads and databases HPE Superdome Flex is best suited for. These include traditional and in-memory databases, real-time analytics, ERP, CRM and other OLTP workloads. For these types of workloads, a scale-up environment is simpler and cheaper to manage than a scale-out cluster, and it also reduces latency, increasing performance.

You can read this blog post on the transaction speed when scaling up or out with SAP S/4HANA to understand why scaling up is a much better alternative than scaling out/clustering for these types of workloads. It’s all about speed and the ability to perform at the level required for these critical applications. For a short video on when to scale-up versus out, you can click here.

Consistent high performance, even at the largest configurations
The HPE Superdome Flex extreme scale is achieved via the unique HPE Superdome Flex ASIC chipset, connecting the individual 4-socket chassis to one another in a point-to-point fashion, as shown in Figures 1 and 2. The HPE Superdome Flex ASIC technology enables adaptive routing, which load-balances the fabric and optimizes latency and bandwidth, increasing performance and system availability. The ASIC connects the chassis together in a cache-coherent fabric and maintains coherency by tracking cache line state and ownership across all the processor sockets inside a directory cache built into the ASIC itself. This coherency scheme is a critical factor in the ability of HPE Superdome Flex to perform at near linear scaling from 4-sockets all the way up to 32-sockets. Typical “glueless” architecture designs already see limited performance when scaling to as low as 4- to 8-sockets, because of broadcast snooping.

Shared memory
In a similar fashion to compute, memory capacity can grow as more chassis are added to the system. With support for 48 DDR4 DIMM slots per chassis, accommodating either 32 GB RDIMMs, 64 GB LRDIMMs, or even 128 GB 3DS LRDIMMs, the maximum per-chassis memory capacity is 6 TB. This gives a fully scaled 32-socket HPE Superdome Flex a whopping total memory capacity of 48 TB of shared memory to support the most demanding in-memory applications.
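For readers who like to check the arithmetic, the per-chassis and system totals quoted above follow directly from the DIMM counts; the short calculation below simply reproduces those figures.

```python
# Capacity arithmetic for the figures quoted above (128 GB 3DS LRDIMMs).
dimm_gb, slots_per_chassis, chassis_count = 128, 48, 8
per_chassis_tb = dimm_gb * slots_per_chassis / 1024   # 6.0 TB per chassis
system_tb = per_chassis_tb * chassis_count            # 48.0 TB at 32 sockets
print(per_chassis_tb, system_tb)
```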

Extreme I/O flexibility
As for I/O, each HPE Superdome Flex chassis can be equipped with either a 16-slot or 12-slot I/O bulkhead to provide numerous stand-up PCIe 3.0 card options, giving you plenty of flexibility to support a wide variety of workloads. With either I/O bulkhead selection, the I/O design provides direct connections between the processors and the card slots—with no need for bus repeaters or retimers that can add latency or reduce bandwidth. This gives you the best per card performance possible.

Ultra-low latency
Low latency is a key factor driving the high performance of HPE Superdome Flex. Although data exists in local (directly connected to processor) or remote (across chassis) memory, copies of the data can exist in various processor caches throughout the system. Cache coherency keeps the cached copies consistent in the event an operation changes the data. The round trip latency between a processor and local memory is about 100ns. Latency of a processor accessing data from memory connected to another processor over UPI is ~130ns.

Processors accessing data residing in memory in another chassis will travel between two Flex ASICs (always a single “hop”) for a roundtrip latency of under 400ns—no matter if a processor at the top of the rack is accessing data from memory at the bottom. As for bandwidth, HPE Superdome Flex provides more than 210 GB/s of bi-sectioned crossbar bandwidth at 8-sockets, more than 425 GB/s at 16-sockets and over 850 GB/s at 32-sockets. That’s plenty to power the most demanding workloads.

Why does this extreme modular scalability matter?
It's no secret that data is growing at an unprecedented pace, which means infrastructure strains to handle increasingly demanding requests to process and analyze critical, ever-growing data sets. But growth rates can be unpredictable.

To support the business, IT teams need systems that respond effectively and promptly to their requests, regardless of the amount of data or how fast it grows. Having a platform that keeps pace with the demands of your business will give you peace of mind—so you’ll know that you won’t run out of room to grow, but neither will you need to overprovision.

When you deploy memory-intensive workloads, you might ask: what will my next TB of memory capacity cost? With Superdome Flex, you can scale memory capacity without a forklift upgrade, as you're not limited to the DIMM slots in a single chassis. Also, as the number of users increases, mission-critical applications require a high-performing environment regardless of size.

In closing, today’s in-memory databases demand low-latency/high-bandwidth systems. Thanks to its innovative architecture, HPE Superdome Flex delivers extreme performance, high bandwidth and consistent low latency, even at the largest configurations. What’s more, you can get all this for your critical workloads and databases at better price performance than on smaller systems. And, the platform gives you the room for growth and availability expected in a mission-critical environment.

For more information on HPE Superdome Flex, visit www.hpe.com/superdome

 

^
TABLE OF CONTENTS

 
COend.png

bannerCIO.jpg
 

Feisal Albert Hall (2).jpg

Feisal Mosleh

Feisal (Fas) Mosleh is a veteran Silicon Valley executive who has developed technology products and business strategy at IBM, HP, Agilent, Kodak and Avago with operational experience spanning enterprise IT, software, imaging, mobile and consumer electronics. Fas runs HPE’s UNIX modernization program worldwide. Follow him on Twitter: @Feisal2020


 
 
1.jpg

Introduction

Many organizations are starting to utilize the cloud for their enterprise applications. Before doing so, it is prudent to consider the pros and cons of private and public cloud environments and which applications are best suited to make the most of the cloud.

Applications that are very sensitive or delicate, or risky to move to a private cloud, may be better left in the datacenter until an appropriately planned move with safeguards can be executed. Applications that the business depends on, and which are very old but rock solid in the current environment, may not be worth moving even into a private cloud; instead, they may require planned obsolescence, with a replacement introduced over a measured period of time.

Core, mission critical business applications that need high security or high control and governance are best kept in a private cloud environment. Mission support applications that may contain more sensitive information would also be candidates for the private cloud, for example, HR, Finance and email.

Applications that use information that is already public or non-sensitive may be candidates for the public cloud. This would include existing applications like websites or training applications, and also new, rapidly deployed apps like mobile 'branding' apps. Applications that are not sensitive but need a lot of scaling, need to scale specifically at peak usage times, or require widespread geographic usage may also be best managed in a public cloud.

 

Business Needs:

The business needs the CIO to focus on providing a seamless, drama-free business operations experience to all, while giving some departments and functions the ability to be more innovative and respond faster to threats and opportunities, all while keeping costs under control and possibly maintaining a tight cost-reduction trajectory. These are seemingly conflicting, tough objectives to meet, but cloud computing offers the CIO some much-needed assistance.

In the typical datacenter, the IT team must take care of delivering the whole vertically integrated service of IT, from setting up and managing hardware and software, including compute, network and storage, to managing upgrades, deploying new applications and updating software and rendering other support.

In addition, there are many service delivery, operations and maintenance tasks, like monitoring performance, latency, security and availability, which are expensive in terms of time and opportunity cost. The typical datacenter relies on a vast range of experts who must be hired, trained, managed and deployed, plus technology that must be bought, managed, operated, updated and eventually retired in an efficient manner.

This incurs massive capital costs as well as operating expenses for the business, simply to deliver IT services. In a managed cloud environment, much of the infrastructure is managed by the cloud provider: they build the infrastructure, hire, train and deploy the IT teams, and sell computing power to customers much like a utility, charging based on what is used. Converting these capital and operating costs into variable expenses is a financial advantage. In addition, when a department (e.g. marketing or sales) must respond to a market change, such as a competitive threat, it needs a new application or IT service put into service ASAP.

Conventional datacenter environments can be relatively slow at delivering such strategic applications, because the large overhead of operational responsibilities pushes strategic work to the back seat for lack of resources and time. It's like being required to drive the car down the highway while changing the oil and still reach your destination by the deadline; that leaves little time or energy to add a rear spoiler, a turbocharger or low-profile wheels. After all, it's possible the destination gets altered en route by the CEO, and the air filter needs changing right after the oil has been replaced.

Cloud computing environments offer a more flexible, on-demand, software-driven infrastructure in which the different layers of the vertical stack can be managed more easily through software that orchestrates and manages workloads and applications intelligently, delivering an agile, scalable, adaptive, sensing, and even self-healing environment. For example, an application is constantly monitored via telemetry, and before it fails, leading indicators prompt the creation of a duplicate instance of the application on another server. When the first instance actually fails, the new instance can take over the load completely. In an old-fashioned datacenter, such an app may have needed a duplicate server with the application running constantly, permanently on guard for a failure; that meant a completely redundant system with its concomitant costs and operating and maintenance resource requirements.

So now that we understand the advantages of the cloud, which applications should we put there?

2.jpg

The public cloud offers some distinct advantages, like scalability and lower cost, and some disadvantages, like less control and lower security and privacy around the network and your application data. Here are a few questions to ask before deciding whether to use the cloud for certain applications:

  • How secure must the application be? Including privacy of sensitive data, network, storage.
  • How much control or governance is needed over the application?
  • Are there governance requirements such as Sarbanes-Oxley, or HIPAA that must be observed?

What are the application needs in terms of:

  • Scalability – over time, as well as peak usage times
  • Dispersion – Is the application needed by users spanning a global or a large geographic region?
  • High availability – Is it required to be up and running 24x7 such as some ERP applications which are mission critical
  • Disaster recovery and business continuity – is it a highly mission critical application?
  • With which data and applications does the application communicate?
  • What are the storage and compute loading requirements over time, and is latency a factor?
  • Is there a short-term project where a big data crunch exercise needs to be run? A super compute-intensive, short-lived task may be better suited to rented public cloud capacity than to purchased infrastructure.

Other factors include the business time pressure, cost pressure and willingness to take a risk. Some companies have tasked their CIOs with “Cloud First” initiatives where they expect the application to be first hosted in a pure (public) cloud environment. No training wheels. No safety nets. 'Just Do It' style.

 

Typical apps that could be managed in the cloud include:

Public cloud: website, HR, non-sensitive development (e.g. wireframing) and test systems, non-sensitive customer-facing apps, non-sensitive archives, anti-malware or anti-virus applications

Private cloud: 24x7 ERP, Email, APS, SCM, Finance, CRM, SFA

Some of the following scenarios may also help in deciding when to place an app in a cloud. Is it a one-time big project that might need a lot of servers to process a huge task? For example, converting a whole library of research papers or books into PDFs might need hundreds of servers that an IT datacenter shouldn't have to buy or support but should be able to 'rent' cost-effectively through a public cloud. Big data projects that require intense calculations (and therefore short bursts of extremely potent processing power, which is economically unjustifiable for the enterprise to buy) to extract meaningful information from huge reams of data are obvious candidates if the data is not sensitive.

4.jpg

Bottom Line:

Depending on the kind of application and its security, control, regulatory, scalability and uptime requirements, you could consider hosting it in the private cloud or in the public cloud.

Mission critical applications should be hosted in the private cloud. Some mission support apps and those apps that do not have high security or control or governance requirements could be hosted in a public cloud.

Make your list of applications and group them by security/sensitivity/control/governance, then understand the scalability and geographic requirements, separating out the applications that can be hosted in the private and public cloud.
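As a starting point for that grouping exercise, here is a minimal Python sketch that buckets applications by the questions raised in this article. The application names, attributes and rule are purely illustrative assumptions; a real assessment would weigh many more factors.

```python
# Illustrative grouping of applications into private vs. public cloud buckets.
APPS = [
    {"name": "Corporate website", "sensitive": False, "regulated": False, "mission_critical": False},
    {"name": "ERP",               "sensitive": True,  "regulated": True,  "mission_critical": True},
    {"name": "Training portal",   "sensitive": False, "regulated": False, "mission_critical": False},
    {"name": "Finance",           "sensitive": True,  "regulated": True,  "mission_critical": False},
]

def placement(app):
    # Anything sensitive, regulated, or mission critical stays private.
    if app["sensitive"] or app["regulated"] or app["mission_critical"]:
        return "private cloud"
    return "public cloud"

for app in APPS:
    print(f"{app['name']:<20} -> {placement(app)}")
```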

Above all, understand the business objectives, needs and drivers. There may be hard requirements from above to prioritize some new, innovative, competitive applications far above everything else. In that case, getting them into the cloud first and 'making them rock' so the company grabs some extra business or beats back some dangerous threats may become the CIO's highest priority. At least until it's time to change that oil again.

 

^
TABLE OF CONTENTS

COend.png
 

bannerJill.jpg
 

headshotiny.jpg

Jill Sweeney Hewlett Packard Enterprise

Jill Sweeney leads technical knowledge management for volume servers, composable systems, high performance computing and Artificial Intelligence (AI) at Hewlett Packard Enterprise. Jill and her team are transforming the experiences customers and partners have with HPE's products, solutions and support information to foster positive customer business outcomes.

No stranger to change management and transformation, Jill has held technology-focused and marketing roles at HPE, including launching both the Internet of Things (IoT) and mobility go-to-market programs, as well as managing global brand programs for Hewlett Packard's Starbucks Alliance and employee communication engagement.

Prior to the HP/Compaq merger, Jill drove alliances for a Compaq-owned start-up, B2E Solutions. Jill is a champion for inclusion and diversity as well as STEM careers. She actively supports HPE Code Wars and university recruiting.

This year, Jill has taken on a new challenge, addressing the societal problem of human trafficking. She is working with a local organization to give female victims of human trafficking career coaching and referrals to coding camps to break the economic cycle, supporting dignity and sharing hope.

An inspirational and motivational speaker, Jill has recently given industry keynotes on topics including IoT trends, diversity, employee engagement and work-life transformation.  Jill has served on the Anita.Borg Partner Forum to select technical topics and source industry leading speakers for the Grace Hopper Celebration panel submissions.

Follow Jill on Twitter and on LinkedIn


 
 
WEF.PNG

Stop world hunger, really? According to the World Economic Forum, the global population will increase to an estimated ten billion people by the year 2050 and will demand 70% more food than is produced today. That's a lot of people to feed, and it will put greater stress on natural resources like water. Feeding this expanded population nutritiously and sustainably will require substantial improvements to the global food system, where nearly 1/3 of the food produced is lost to waste or supply chain inefficiencies. To meet the increasing demand, researchers predict agricultural output must double. But how? One way is by maximizing output and driving more efficient use of resources, especially in very remote and traditionally technology-free locations. One key to achieving abundant crop yields is targeting microclimates and measuring water usage and nutrient levels. That means deciding which seeds are best to plant in which types of soil and making sure enough fertilizer is used. All these activities must be done sustainably to prevent a global food shortage.

Purdue AG Image 1.jpg

Here's an example of how Purdue University is using Hewlett Packard Enterprise technology to develop a data pipeline that will enable new discoveries in agricultural research and food production for a growing global population. These topics are being piloted through HPE's strategic partnership with Purdue University, a world-renowned research university, where together we are connecting disruptive technologies to create an IoT testbed for the research, development, experimentation and testing of ag-based IoT solutions that drive sustainable agriculture and food security innovation.

Purdue AG IMage 2.jpg

It all starts at the edge, and in this case, at the edge of a field in West Lafayette, Indiana. Not just any field: Purdue University's Agronomy Center for Research and Education, a 1,408-acre research farm. Through the use of digital agriculture, Purdue students and faculty members can gather, transmit, analyze and respond to conditions in the field in ways not previously possible. How does this work? Through a wireless and IoT architecture that sends sensor data over the Aruba wireless network to HPE Edgeline systems and then to a high-performance computing data center for analysis and AI development. This project has created new innovations, including solar-powered mobile Wi-Fi hotspots for recording field data and next-generation adaptive wireless equipment for farm-scale wireless connectivity. These technologies enable Purdue to gather, transmit and analyze field data more effectively and efficiently, and they play a key role in reducing the time it takes to translate science into discovery that will make a difference.

Check out this video to see the technology in action

 
COend.png

bannerBlockchain.jpg
 

Khodi.png

Khody Khodayari CEO, Idelji

First exposed to NonStop right out of college at his first job at Citibank, and later the founder of Idelji, Khody has been a fixture in the NonStop world for many years. He is passionate about innovation & analytics. His latest work has been in machine learning / A.I., cloud analytics, and Blockchain / DLT.


This is part one of a two-part series. In this issue, we cover the journey from Blockchain to Crypto to DLT (Distributed Ledger Technology). We will review why blockchain matters, where we are now, and what the future may hold. In the next issue, we will review the current DLT architectures, who the main players are, and, most importantly, what this all means to you, your customers, and your business.

I hope you are reading this online, or are otherwise near a connected device. You will see references to "Search" throughout this article, along with references to a site or keywords. You are encouraged to pause from time to time, do some online checks, and come back here.

You already know about the rapid rise and recent volatility of Bitcoin & other similar cryptocurrencies. "Satoshi Nakamoto's" white paper is online, and there are many articles online you can search for and review (for a quick introduction, Search blockchain after locating 3Blue1Brown on YouTube) to learn the bits & bytes of blockchain. I have also covered technical details in my Blockchain virtual lab presentations before. You should be able to find them online. Write me at khody@idelji.com if needed.

Some basics:

  • Blockchain is the technology behind Crypto currencies and ICOs (Initial Coin Offerings).
  • Blockchain is a chain of digital blocks, each linked to the one before.  Each block represents a set of completed transactions.  Blocks & content within them are immutable.
  • Pure Blockchain implementations require Miners, who verify the validity of a block's content before its introduction to the chain. They use simple & publicly available cryptographic algorithms (Search SHA256) to obtain their "proof of work". A small sketch of how blocks chain together via SHA-256 follows this list.
  • Blockchain is based on borderless & public consensus among active nodes (open to anyone), who verify the Miners’ “proof of work”.  This allows for the Block’s acceptance & entry into the chain.  All nodes participating in consensus have a full ledger of all transactions (chain) since the birth of the first block of that chain.  
  • Distributed Ledger Technology (DLT) is a variation of Blockchain, where access to ledger (full or partial) is open only to members of its consortium. Depending on the implementation, consensus may come from all members, or by those with a need-to-know (e.g. parties to the transaction), or a trusted party such as a Notary (R3 Corda), or a Validator (Ripple).  There are no Miners in DLT.
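The following minimal Python sketch, referenced in the Miners bullet above, shows only the chaining idea: each block carries the SHA-256 hash of the previous block, so tampering with any earlier block breaks every link after it. Mining, consensus and networking are deliberately left out.

```python
# Minimal "chain of digital blocks" sketch: each block stores the SHA-256
# hash of the previous block; altering history invalidates the links.
import hashlib, json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def new_block(transactions, previous=None):
    return {"transactions": transactions,
            "prev_hash": block_hash(previous) if previous else "0" * 64}

genesis = new_block(["genesis"])
b1 = new_block(["Alice pays Bob 5"], genesis)
b2 = new_block(["Bob pays Carol 2"], b1)

# Tamper with the genesis block and the link stored in b1 no longer verifies.
genesis["transactions"].append("Alice pays Mallory 1000")
print(b1["prev_hash"] == block_hash(genesis))   # False: the chain is broken
```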

There are more details behind each of the above items, which you can find online. Various implementations of blockchain are on GitHub; you can download the code to use, or even create your own fork.

Here, instead, let us focus on possible use cases and why blockchain is hailed as the next internet. You need to know the immense effect it may have on businesses and on long-established worldwide socioeconomic structures.

There are different schools of thought on why blockchain matters:

  • It promotes Democracy – Brock Pierce is the Chairman of the Bitcoin Foundation, an early adopter of Crypto, a serial entrepreneur, and… (Search his name). At 37, he is worth more than everyone alive in my family tree, combined! Go to one of his presentations (the next one is in March 2018 in Puerto Rico – sorry, sold out), and he will most likely play Charlie Chaplin's The Great Dictator speech (it's on YouTube) and talk about peace, love, and harmony, which are now possible thanks to Blockchain technology. The message is that people around the planet can establish global and direct commerce, promoting personal freedom & choice. Put simply, and this is already forming, it is a new economy which bypasses regional and national governments and opens direct one-to-one trade & new commerce opportunities. Elon Musk's upcoming network of 12,000 internet satellites will pave the global highway in the cloud; call it the blockchain SilkRoad. Brock and a network of his friends are buying up 250,000 acres of land in Puerto Rico (where there is no federal income tax) to set up their own CryptoWorld City (Search Crypto Utopia). For the record, Brock is not the only one; nearly everyone I've talked to is of the same belief that direct global commerce, which bypasses the middlemen and, in most cases, government regulations, is the way to go. In my opinion, in many instances it is ideal and can benefit mankind. However, and as we review later in this article, it may also lead to some major and most likely disruptive & unpleasant events.
  • It's about Time and Money – Open the slide sets of most Blockchain presentations. There is diagram after diagram showing the flow of money from one person to another (Alice and Bob are usually the main characters), and how the transaction is weaved through several private (banks, exchanges, credit card companies) and public (SWIFT, ACH) agencies before completion. The argument is, and it is a valid one, that each party needs to collect a fee, adding to the total transaction cost, and requires time to do whatever it does, adding to the total transaction time. This is in fact a very convenient and lucrative arrangement for all the middle parties involved in the transaction (at the expense of Alice & Bob). Interesting to note is that most of the investment in defining & implementing blockchain (actually DLT) use cases is coming from those same middlemen / companies. I suppose it is a recognition of the inevitable, and an attempt to join in and ride the blockchain wave. I agree with them; the alternative will not be pleasant. Search for "Threat of Cryptocurrencies" on cnbc.com (mind the ads).
  • It improves Productivity & lowers cost – Yes, absolutely. At the advent of commercial computing, every company which bought an early-generation computer developed its own applications for its own use cases. We called them in-house applications. Later, software vendors developed and marketed more generic forms of application software for different use cases. This was a game changer. A company could now simply employ a solution at a far lower cost compared to in-house development. (This reminds me: remember Y2K, when we could not find the source code for programs that had been in production use for decades?) Of course, there were multiple vendors offering similar apps. Each business purchased and deployed the app which most closely matched its requirements and price point. This is how Software Silos were created. Finance, Retail, Manufacturing, and others all ended up with a hodgepodge of software which could not communicate with software at other companies in the same trade. Enter the middlemen. For a fee, they would take a transaction, do the protocol translation, insure the content (nearly all companies signed up for this service and, of course, passed the fee on to Alice & Bob), and take it to its destination. Of course, one middleman would not do. More parties got involved, adding more fees and time. Search "Credit card transaction flow". Blockchain can fix this. Objectives: efficiency, direct point-to-point transactions, lower costs.
  • Blockchain also offers Smart Contracts. These are pieces of code that are common across the consortium. No more translation, no more Software Silos. What makes them smart? A contract can incorporate, enforce, and log (ok, add to the chain) all steps of a complex transaction. My (and many others') favorite example: buying a house. It represents what happens from the time a buyer makes an offer, to counter offers / acceptance, to appraisal, inspections, removal of contingencies, funding, mutual close, titles, and many other steps along the way. A Smart Contract on the chain implements all these steps in one immutable chain of transactions, and can enforce or reject any step based on any number of factors, such as time (deadlines), authorization (signatures of the parties involved), proof of funds, and so on. That transaction in its entirety is there for eternity, with each step fully recorded. All participants in that Consortium (think a group of Brokers, Banks, Notaries, etc.) will use the same Smart Contract for all like transactions (one application for all, and a smart one at that). Improved productivity: it is easy to follow, implement, and record one transaction set, compared to mountains of paper which would otherwise need to be passed among different parties. Lower cost, due to improved productivity and the fact that there are no Software Silos. One use case, one App. A conceptual sketch of such a step-by-step contract appears after this list. BTW, what if you wanted to buy your house using Bitcoin? It's already being done (Search "Real Estate Bitcoin").
  • Better to be a Disruptor than a Disruptee (a word not in the dictionary, yet). How many of you ride your horse to work? The modern (really?) automobile was invented in 1886 (Carl Benz, Germany), and in very short order, ponies & their handlers were disrupted. How many of you still carry paper currency in your pocket (please don't read further if you carry coins)? Fiat currency goes back to the 11th century (Yuan dynasty, China). Isn't it about time? We'll cover later why that is still so. But currency is just one use case. Nearly everything we do today can be done cheaper and faster with Blockchain, especially if computers and middlemen are involved (ok, everything). Search "Blockchain Future Thinkers" and, within it, Search "Blockchain" on futurethinkers.org. Unfortunately, and this may apply to you: just adopting the blockchain technology may not be enough. In some cases, blockchain allows for a new way of conducting business that completely bypasses established methods. A horse of a different color won't do.
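Here is the conceptual sketch promised in the Smart Contracts bullet above: a plain Python state machine that enforces the order of steps in the house-buying example and keeps an append-only record of each completed step. Real smart-contract platforms work very differently; this only illustrates the idea of shared code enforcing and logging a multi-step transaction.

```python
# Conceptual "smart contract" sketch: shared code that enforces the allowed
# order of steps and keeps an append-only history of everything that happened.
STEPS = ["offer", "acceptance", "appraisal", "inspection",
         "contingencies_removed", "funding", "close", "title_recorded"]

class PurchaseContract:
    def __init__(self):
        self.ledger = []   # append-only history of completed, signed steps

    def execute(self, step, signed_by):
        expected = STEPS[len(self.ledger)]
        if step != expected:
            raise ValueError(f"'{step}' rejected; next required step is '{expected}'")
        self.ledger.append({"step": step, "signed_by": signed_by})

deal = PurchaseContract()
deal.execute("offer", "buyer")
deal.execute("acceptance", "seller")
# deal.execute("funding", "bank")   # would raise: steps must occur in order
print(deal.ledger)
```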

Where we are today
Early stages & noted issues
Speed & Scale: To start, the Bitcoin network could only do 7 TPS, and Ethereum 15. Forks of the same protocols offered larger blocks and significantly improved throughput. Ripple (ripple.com) started at 1,000 TPS and is already exceeding its goals. Stellar (stella.io) does over 1 billion transactions per day. In 2017, the Red Belly blockchain (redbellyblockchain.io) network did 660,000 TPS in a worldwide test run. Not an issue anymore.

Lost wallets / stolen coins – Does this not happen to thousands of people in the Fiat currency world every day? It only becomes news when it is Crypto. 1.3 million people die in car crashes every year; commercial air travel casualties in 2017: zero. Yet every time there is a crash, it makes the news nearly everywhere for days. Crypto coin owners decide whether to keep their wallets on their own devices or trust them to an Exchange. They can also do both, keeping multiple wallets and transferring and distributing coins as needed. Crypto wallets are safer than currency wallets.

Security, anonymity & fraud: Security is inherent in the blockchain architecture. Users are anonymous (the public key is known, but there is no link to the user), though not to government agencies when needed (Search "Kathryn Haun" on YouTube for her TEDx talk). Can it become a cover to launder money? KYC and AML checks are already in place at the main Exchanges. I suppose there are exceptions. However, by far, Blockchain offers a superior trace of transaction activity back to its source. Imagine knowing where your currency bill has been for every second of every day since it was first printed; Blockchain can tell you in an instant. Compare that to the serial number on your currency note: what valuable information can it give you or anyone else? Case closed.

Pillars of our societies:
Banking. Currencies. Governments.
Banking began around 2000 BC in Assyria & Babylonia (source: Wikipedia). Now, some 4,000 years later, there are an estimated 20,000 financial institutions on our planet, and 195 countries using 180 currencies. Now Search "WorldDebtClock". Go there. Stay a while & click away. Take your time. Focus and think. The financial health of a nation is of immense importance. In the world of Fiat currencies, trust is placed in governments and their balance sheets. After visiting this site, do you feel that trust is earned? Do you look at the currency in your pocket & your bank balance the same way you did before?

Crypto’s first currency, Bitcoin, came to be in 2009.  Less than 10 years later, there are about 1,500 crypto coins with a $500 billion USD market cap (Search “Coinmarketcap”), traded through 8,700+ markets (exchanges) dispersed around the world and mostly unregulated. You can argue that crypto came to make direct global commerce possible, to create an alternative to fiat currencies, and perhaps to take governments to task.  One thing is certain: the dust has not yet settled.  Daily fluctuations are finding their limits.  Consider a bouncing ball that will eventually come to rest and start rolling in a direction yet unknown.

This is an excerpt from a Bank of England filing from May 2017 (Search “a blueprint for a new RTGS service for the United Kingdom”):

“The world of payments is changing rapidly.  Households, companies and individual intermediaries are demanding faster, simpler, cheaper and more flexible ways to pay.  In response, new technologies are being developed, some by existing market participants and some by new service providers, to meet these needs.  At the same time, these technologies and broader developments can create new threats to users of the payments system, and to the stability of that system, which require ever stronger protections and more resilient infrastructure.  Balancing the need to safeguard stability whilst enabling innovation is the challenge facing everyone involved in providing payment services.”

This is not me; it is the Bank of England, one of the most conservative and largest central banking institutions in the world. IMF Managing Director Christine Lagarde said at Davos 2018: “I think we are about to see massive disruptions.” She further commented that government regulation of cryptocurrencies is inevitable: “It is clearly a domain where we need international regulation and proper supervision.” Exactly how that will be accomplished remains to be seen.  The IMF is considering creating its own crypto/digital currency.  Estonia tried to issue one and was stopped by Mario Draghi (European Central Bank president), in favor of the Euro and perhaps a unified European digital currency.  Several other countries are making independent moves toward crypto to meet their own policy requirements.  The point is this: crypto in one form or another is here to stay.  Its effect can be, to use Ms. Lagarde’s word, massive.  Governments need to come together, devise policies that keep the populace whole, and use technology to benefit all.  The state of Georgia in the U.S. is considering accepting Bitcoin for tax payments, creating yet another use case.  The day may come when Alice uses crypto through Binex (a leading crypto exchange in Asia) to pay Bob for her cup of coffee.  Should that happen, and it is both possible and likely, we will be living on a different planet.

Want to start a company?

Traditional methods may involve the following steps:

1. Ask grandparents for initial funds.
2. Form a professional business plan, or if possible start your business offering a product or service.  Show promise.
3. Look for Angel investors.
4. Attract Series A investors.
5. Once or more, look for Series B funding.  
  6. Still there (less than 3% reach this step)? Continue your business, hoping to survive, and someday issue an IPO (Initial Public Offering) on one of the main exchanges.  This last step can be quite expensive due to SEC regulations and liability costs.

Here is a better idea:  Develop a PowerPoint presentation of fewer than 10 slides; use the word “blockchain” in three or more places.  Assemble a “team” of “trusted” people in the blockchain industry.  Give no more than five presentations to investment communities worldwide (there is one every month in Santa Monica where you can listen to three 10-minute presentations; it’s on Meetup, so join us).  Now fork the code off Git and create an ICO (Initial Coin Offering).  Go to market.  Offer a discount to early birds and set (and extend if necessary) your public offering date.  Receive Ethereum for your coins and convert it to the fiat currency of your choice.   You are in business, along with the other 1,500 coins that came into existence in the past three years.  Did we mention?  Your offering is not regulated by the SEC in the U.S. or any other major government entity elsewhere.  Your ICO is most likely registered on an island someplace remote, more open to the “new pace of innovation”.

While this has opened doors to many legitimate entrepreneurs, it has also made it easy for shadowy figures to walk away with investments from unsuspecting investors worldwide.  Search TokenMarket.net for eye-opening information and stats.  Remember, most businesses here are trying to disrupt an established business model.  Is your business in their sights?

Internet of Things & IOTA.  
Smart Cities.
No need to explain IoT here; my guess is you’ve sat through multiple presentations and read many articles on the topic.  It’s blockchain that brings a new dimension here.  There are over 17 billion active sensors around the planet (world population today: 7.6 billion) collecting, recording, and emitting data about their surroundings.  Blockchain can bring authentication, connectivity, and scaled security to these silos of data at low cost.  Use SpaceX’s internet satellites to securely transmit data from anywhere to supercomputers (cloud or otherwise), which can compile the massive data and use AI for immediate action (e.g., emergencies, or asking Amazon to deliver milk on Tuesday) or for future planning.  This data on a blockchain offers a full record of nearly everything, everywhere, which yields information on trends, exceptions, and the interconnectivity of events.
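As a rough sketch of the authentication piece, the example below signs a sensor reading before it is accepted onto a ledger. The device ID, the shared-secret HMAC scheme, and the field names are illustrative assumptions; a real deployment would more likely use asymmetric keys and a hardware security element.

import hashlib
import hmac
import json
import time

# Hypothetical per-device secret provisioned at manufacture.
DEVICE_KEYS = {"sensor-042": b"factory-provisioned-secret"}

def sign_reading(device_id: str, reading: dict) -> dict:
    """Attach an HMAC so the ledger can authenticate who produced the data."""
    payload = json.dumps({"device": device_id, "reading": reading,
                          "ts": time.time()}, sort_keys=True)
    tag = hmac.new(DEVICE_KEYS[device_id], payload.encode(),
                   hashlib.sha256).hexdigest()
    return {"payload": payload, "hmac": tag}

def verify_reading(record: dict) -> bool:
    """Recompute the HMAC before accepting the record onto the chain."""
    device_id = json.loads(record["payload"])["device"]
    expected = hmac.new(DEVICE_KEYS[device_id], record["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["hmac"])

record = sign_reading("sensor-042", {"temp_c": 21.7})
assert verify_reading(record)  # tampered payloads would fail verification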

This is exactly what could be most useful or most harmful to us, depending on who uses the data and for what purpose.  I am optimistic.  Smart Cities are an example of IoT and blockchain benefiting citizens.  The Middle East (the UAE specifically) and Asia (Singapore and China are good examples) are leading the way.  Perhaps a review of their progress and goals could be covered in another article.

IOTA is just one blockchain implementation, hoping to “enable companies to explore new b2b models by making every technological resource a potential service to be traded on an open market in real time, with no fees” (Search iota.org).  The “no fees” comment makes me skeptical. Microsoft is considering this technology.

HealthCare. Insurance.
Music & all digital content.
Supply management.
Quality & source control. & …

These are but a few other blockchain use cases.  It is applicable anywhere data matters, meaning everywhere.   For more information on any of them, do a Search on “blockchain” plus its name (e.g., blockchain healthcare).  Blockchain is rapidly bringing changes to our societies.  We all understand its potential for saving time and money in our businesses.  It can help us offer more to our customers at lower cost and, in many cases, open new lines of business.

One thing we must not overlook is its potential disruptive force, especially in the world of finance (Search “Goldman Sachs crypto”).  Again, from FutureThinkers.org: “Cryptocurrency is a revolutionary force for a reason that people often miss: it enables people to print and distribute money without a central authority.”  We have made several references here to how this is already happening, and to the profound changes it can bring.

We will continue this topic in the next issue, where we will focus on DLT and its use cases for the enterprise.  You will learn about popular architectures and protocols, and where each is most useful.  The objective is to help you become familiar with the current technology landscape and assist you in setting your path forward.

 

^
TABLE OF CONTENTS

 
COend.png
bannerBBM.jpg
 

Paul J. Holenstein Executive Vice President, Gravic, Inc.
Dr. Bruce Holenstein President and CEO, Gravic, Inc.
Dr. Bill Highleyman Managing Editor, Availability Digest


The need to back up data has existed as long as data itself. Sometimes the backup is needed for historical purposes, for example, to preserve a snapshot of the information at a point in time. In other cases, it is used to maintain an accurate and up-to-date copy of the information that can be used if the primary copy is lost or corrupted.

 

Magnetic tape is the oldest backup medium still in use. It was introduced in 1951, but tape sales began to fall with the introduction of high-speed and high-capacity hard disks, DVDs, CDs, and other innovations such as cloud storage. However, the use of magnetic tape is on the rise again. With so much big data created by mobile devices and IoT sensors, there is a growing need for an economical and efficient way to back up this data. Many companies are returning to tape to fill this need.

 

Physical Tape, Virtual Tape, and the Backup Problem

Magnetic tape, however, has its disadvantages. Physical tapes are bulky, and handling large numbers of them, including shipping them offsite or retrieving them from storage, is a cumbersome and time-consuming process. Tape is primarily a streaming medium, and it is relatively slow to access an arbitrary position to write or read the data stored on it. Additional information often has to be appended at the end instead of being inserted in the middle, where other related data may be stored.

To solve the most problematic of these issues, tape was virtualized, allowing disks and other storage media to archive the information. This enables automatic processing for recording or retrieving the information, with high-speed supporting networks to more easily transfer the information offsite or onsite. Despite these advances, classic backup and restore methodologies using tape, virtual tape, or other technologies still suffer from numerous inefficiencies that must be overcome to allow backups and restores to function in the new big data environments. This article will discuss advances to address these issues.

In current systems, the volume of data being generated that needs to be backed up can easily overwhelm even the fastest virtual tape methods. The problem compounds itself if the body of data grows or quickly changes (big data volumes), and/or the database is constantly and actively being accessed to provide a critical service. We call these mission-critical databases and mission-critical services. In this article, we primarily focus on backing up and restoring transactional mission-critical databases, since these databases support most companies and organizations’ critical applications.

Most mission-critical databases cannot be taken offline, even briefly; therefore, enterprises must create backups of actively updated databases. Unfortunately, since transaction processing is active while the backup occurs, some of the data changes being backed up may belong to transactions that subsequently abort and are undone, which means the backup contains “dirty” data. Additionally, as the database is being backed up, the data that was previously backed up is being changed, causing an inconsistent backup.
Fortunately, methods have evolved over time to not only back up the database, but to also capture the subsequent change data that has occurred since the backup started (or completed) so that the inconsistent and stale copy can be made consistent and brought current when retrieved and restored.

How is an online backup process accomplished, and how can it be improved and made more efficient? Doing so would lead to faster backup and recovery methods, use less storage, and provide more consistent and current information when the backup copy is maintained and eventually restored.

 

Online Backup of an Active Database

A method to back up an active (“online”) database is needed to ensure that the backup is current, consistent, and complete:

  • Current means that the backup is up-to-date and not stale. A snapshot of the data means that all of the data that was backed up is kept current to a specific point in time.
  • Consistent means that the backup is accurate (e.g., referential integrity is preserved; the so-called dirty data is removed).
  • Complete means that the backup represents the entire database (or a specific/important subset of the data).
  • Additionally, the backup should not consume more resources (such as disk or other persistent storage) than is needed to reconstruct the database – either to a point-in-time or to the current state.

 

The Traditional Backup Method

It is common practice (the Traditional Backup Method) to periodically back up a database onto a medium such as magnetic tape, virtual tape, cloud infrastructure, solid-state storage, or other persistent storage, as shown in Figure 1 (see footnote 1). Throughout this article, the use of the phrase tape for the backup copy medium is meant to include all of these storage locations and technologies and is not meant to limit the reference to just classic electronic tape technologies (see footnote 2). The use of the word tape or the phrase backup medium implies a persistent storage device.


1 For more information on the rise and fall and rise again of tape, please see the Availability Digest article,
http://www.availabilitydigest.com/public_articles/1210/mag_tape_comeback.pdf.

2 The recent advances in tape density, writing and reading speeds, and the longevity of tape media over other storage technologies have reinvigorated the use of tried-and-true physical tape for saving copies of information for long periods of time.


Figure 1: The Traditional Backup Method

 

As shown in Figure 1, a backup is taken of a source database (2) while it is actively supporting transaction processing (3). Thus, the source database is changing as the backup takes place. This is known as an online backup (4).

The problem with an online backup is that it takes time to complete, and changes are occurring to the database during this time. Data written to the backup could be changing, and if the transaction aborts, the changes will be undone. Data written early in the backup phase is missing subsequent changes, but data written later in the backup contains more of the application’s changes. Therefore, the data in the backup is inconsistent. The classic method to resolve this issue is to capture all changes made to the database while the backup occurs, and eventually to replay them over a subsequently restored copy of the database to “roll” it forward to make it consistent and current.

More specifically, in order to restore a consistent (e.g., from a relational perspective, logically complete and usable to applications) database on a target system, the changes that are occurring during and following the backup must be written to a persistent change log such as an audit trail, a redo log, a journal, or equivalent data structure. In Figure 1, the oldest changes were written to Change Log 1 (5) and the newest changes to Change Log 4 (6).

The restore process then typically involves marking the persistent change log via various methods to note the time or relative position in the change log at which the backup began (7). The database is restored onto the target system by loading the backup copy onto it, and the pertinent change logs are sequentially rolled forward (8) to apply the changes that occurred after the backup started in order to make the target database current, consistent, and complete.

In Figure 1, the pertinent change logs are Change Logs 2, 3, and 4. (Change Log 1 was created before the backup began, and its changes are already reflected in the source database and were captured by the backup operation at the time the backup began.) Therefore, in Figure 1, once the backup copy has been loaded onto the target database, the changes in Change Logs 2, 3, and 4 must be applied to the target database to bring it current and to a consistent state. At a minimum, it must be brought current to the time the backup operation ended, since additional changes were likely made to the source database after the backup ended.
A problem with this technique is that several change logs may be required to hold the changes that occurred during the backup. For a very active source application with many changes occurring per second, there may be many such change logs required to hold all of the changes that occurred during the backup. These change logs all must be saved and made available (typically very quickly) if a restore sequence is needed.

For instance, as shown in Figure 1, Account 374 initially is backed up with an account value of $10. This change was made in log file 1, which occurred before the backup began. Account 374 subsequently is updated by the application to $74, then $38, and finally to $92; this sequence is reflected in the log files. These values are applied to Account 374 as the roll forward takes place. More specifically, the restore writes the initial value of account 374 from when the original backup occurred ($10). The log files then replay in succession, starting with log file 2, then log file 3, then log file 4 as shown in Figure 1. Unfortunately, the old values for this account replay before ultimately ending at the correct account value of $92. Besides being a lengthy process that also requires a lot of storage for the log files, any access to the database during this time sees old and inconsistent information while the data is being replayed. If the original database has failed, denying users access to this information during this time prolongs the outage.
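To make the roll-forward sequence concrete, here is a minimal, hypothetical Python sketch (not Gravic's implementation) of the Traditional Backup Method's restore: the backup image is loaded, then each pertinent change log is replayed in order, and account 374 passes through stale values before finally reaching $92.

# Hypothetical data mirroring the Figure 1 narrative: the backup captured
# account 374 at $10, and change logs 2-4 hold the updates made afterward.
backup_image = {374: 10}
change_logs = {
    2: [(374, 74)],   # (account, new_value) updates captured during the backup
    3: [(374, 38)],
    4: [(374, 92)],
}

def traditional_restore(backup: dict, logs: dict) -> dict:
    """Load the backup, then roll every pertinent change log forward in order."""
    db = dict(backup)                      # restore the (stale) backup copy
    for log_id in sorted(logs):            # replay logs 2, 3, 4 sequentially
        for account, value in logs[log_id]:
            db[account] = value
            print(f"after log {log_id}: account {account} = ${value}")
    return db

restored = traditional_restore(backup_image, change_logs)
assert restored[374] == 92                 # correct only after the final log replays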

Furthermore, as shown in Figure 2, many of the changes that occur during the backup operation already may have been captured by the backup if they occurred after the backup operation started, but before those particular data objects (or part of the database) were copied to the backup medium. Thus, these changes are a duplicate of data that already was backed up. Worse, there could be a series of changes to the same data that occurred after the backup began, but before that data was subsequently backed up, and rolling forward through those changes will actually cause the restored data to reflect older (and inconsistent) values while it is being rolled forward, as shown in Figure 2. Account 374 starts off at $10 (when the backup starts), is updated to $74, then $38, and finally to $92; however, it is not backed up until it is $38, as represented by the change captured in log file 3. Using this method of restore and roll forward, Account 374 is initially restored from the backup to $38, but then is updated to old account values ($74 in log file 2, then $38 in log file 3, then $92 in log file 4) while all of the log files are processed and the changes are rolled forward.


Figure 2: Backing Up Duplicate Data

 

Consequently, restoring a backup requires rolling forward through several change logs, which may take a great deal of time and consume a great deal of storage for all of the change log files. Furthermore, rolling forward through all of the changes that occurred during the backup leaves the restored data out-of-date and inconsistent until the final set of changes is replayed from the log file(s). Additionally, during this process the source database is still being updated; these changes must be logged and rolled forward to bring the restored backup to a state that is current and consistent as of when the backup operation ended. All of this processing takes a considerable amount of time.

 

The Better Backup Method

The Better Backup Method is shown in Figure 3. It is similar to the Traditional Backup Method shown in Figure 1 in that the contents of the source database (2) are written to a backup medium (1).


Figure 3: The Better Backup Method

 

 

The Better Backup Method – Change Logs

Since the source database is actively being updated, restoring it from the backup medium does not provide a consistent database, because some of the data may be dirty, and changes made to that portion of the source database that were previously backed up are not included in the backup copy. These changes must be captured in a change log and applied to the restored version in order to make it consistent, current, and complete.

The Better Backup Method recognizes that changes to data that has not yet been backed up do not have to be written to a change log. These changes were made to the data in the source database and will be carried to the backup medium when that data is written as part of the backup operation. Thus, the consistency of the backup database is preserved without having to roll forward these changes.
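Here is a minimal, hypothetical sketch of that filtering rule in Python. It assumes, purely for illustration, that the backup copies records in ascending key order and reports the highest key it has written so far; only changes to keys the backup has already passed need to be logged.

class BetterBackupCapture:
    """Log only the changes that the backup has already passed over.

    Hypothetical sketch: assumes the backup copies records in ascending key
    order and that backup_position is the highest key written so far.
    """

    def __init__(self):
        self.backup_position = None   # no keys written to the backup yet
        self.change_log = []

    def record_backed_up(self, key):
        # Called by the backup as it writes each record to the backup medium.
        self.backup_position = key

    def on_change(self, key, value):
        # Called by the application for every committed update.
        if self.backup_position is not None and key <= self.backup_position:
            # Already backed up with an older value: must be rolled forward later.
            self.change_log.append((key, value))
        # else: the backup has not reached this key yet, so the new value will
        # be picked up when the backup gets there; no log entry is needed.

capture = BetterBackupCapture()
capture.record_backed_up(100)       # backup has written keys up to 100
capture.on_change(42, "new")        # logged: key 42 was already backed up
capture.on_change(374, "newer")     # not logged: backup will copy it later
print(capture.change_log)           # [(42, 'new')]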

The Better Backup Method – Database Restore

During the restore process, the captured changes in the change logs must be rolled forward to the restored copy of the backed-up database. In Figure 3, Change Log 1 (3) contains changes that were made to the source database before the backup began (4). Therefore, its contents do not have to be rolled forward to the backup copy of the database when it is restored. However, Change Log 2 (5) contains some changes that were made to the source database after the backup was initiated, and these changes must be rolled forward to the restored backup copy to make the database consistent. Once the changes have caught up with the online backup, there is no further need to log changes and roll them forward. All changes to the source database will be included in the online backup data stream (6), guaranteeing the consistency of the backup database. Therefore, Change Logs 3 and 4 (and perhaps some changes in Change Log 2) do not have to be saved or applied to the backup when it is restored.

Note that during the restore process, the database is not in a consistent state; it is made consistent once all of the changes in the change log are rolled forward to it. Thus, the restored database eventually is consistent, current, and complete, which is also known as eventual consistency.
Also, note that the data being restored is not going to revert to previous values during the restore process. For instance, assume that the backup begins at time T1, and data D1 is changed after T1 to D2, then to D3, then to D4. This data object backs up at time T2 when its value is D2. The classic approach backs up D2, then rolls forward changes and sets it back to D1 (as that is the first change restored), then D2, D3, and finally D4. Therefore, the database is very inconsistent during the restore process and in fact, is rolled back to a previous value when D1 is applied.

One alternative approach is to capture the database at D2 and not replay the D1 or D2 changes, replaying only the D3 and D4 changes. Over time, the database becomes consistent; it may pass through values older than the final value, but never older than the value that was initially restored from the backup. Another alternative approach is to capture D2 and then overlay it with D3 and later D4 (either in the change log or the backup copy itself) before beginning the restore process.
To resolve backed up dirty data, either aborted information is removed from the logs during replay, or the dirty data is overwritten by the eventual “backout” data that is written when a transaction aborts. Removing the aborted information is a simple process if the logs are read in reverse, as discussed later, or if a list of aborted transactions is maintained along with the change logs so that when the change logs are applied (rolled forward), any aborted transactions can be skipped.

Only a portion of the change logs required under the Traditional Backup Method is needed in the Better Backup Method. The fewer the change logs, the less processing is required to create them and the less storage is required to save them. Perhaps even more importantly, the fewer the change logs, the less time is required to roll them forward, and the online backup/restore processing becomes much faster and more efficient. Additionally, the restored data goes through fewer data consistency issues (and in some implementations, no issues) while it is being restored to a current and complete value.

 

Performance and Efficiency Improvements

An improvement in performance and efficiency can be achieved by saving only the last change to a specific data object that is being modified multiple times, as shown in Figure 4. In the figure, only the most recent change to a particular data item is shown; previous changes to that same data item are removed. More specifically, if a change is made to a data object that was previously changed, the first change can be located in the change log and replaced with the new change. If the first change previously was backed up, it can be located on the backup medium and replaced with the new change.


Figure 4: Roll Forward an Existing Change
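A minimal, hypothetical sketch of this optimization: keeping the change log as a mapping keyed by data object means a new change simply overwrites any earlier one for the same object, so only the latest value is ever rolled forward.

class LastChangeLog:
    """Keep only the most recent change per data object (hypothetical sketch)."""

    def __init__(self):
        self._latest = {}          # key -> latest value seen

    def record(self, key, value):
        # A later change to the same key replaces the earlier entry,
        # so the roll forward never replays stale intermediate values.
        self._latest[key] = value

    def roll_forward(self, db: dict) -> None:
        # Apply only the final value of each object to the restored copy.
        db.update(self._latest)

log = LastChangeLog()
for value in (74, 38, 92):          # account 374 changes during the backup
    log.record(374, value)

restored = {374: 10}                # value captured by the backup
log.roll_forward(restored)
print(restored[374])                # 92: only the final change is applied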

 

Alternatively, changes to previously backed-up data can be made directly to the backup medium, as shown in Figure 5. This method eliminates the need for change logs and roll-forward operations.


Figure 5: Modify Existing Changes on Tape with New Changes

 

Another potential performance improvement can be achieved by reading the log files in reverse during the backup, eliminating any data for transactions that abort, and saving only the most recent (committed) change for each data item encountered.
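As a rough illustration (assuming a flat, in-memory log rather than any particular product's format), a reverse scan can drop dirty data from aborted transactions and keep only the most recent committed change per data item:

# Hypothetical flat change log, oldest to newest: each entry is
# (transaction_id, key, value). Transactions in aborted_txns never
# committed, so their changes are "dirty" and must be discarded.
change_log = [
    ("t1", 374, 74),
    ("t2", 374, 38),
    ("t2", 500, 11),
    ("t3", 374, 92),
]
aborted_txns = {"t2"}

def reverse_scan(log, aborted):
    """Return only the most recent committed change per key."""
    latest = {}
    for txn, key, value in reversed(log):      # newest entries first
        if txn in aborted or key in latest:    # skip dirty or superseded changes
            continue
        latest[key] = value
    return latest

print(reverse_scan(change_log, aborted_txns))  # {374: 92}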

In a similar manner, the backup operation can physically process the source database, block by block, rather than logically processing it by ascending (or descending) key path or some other logical or physical order (as mandated by the technology being used). This physical approach can make the determination of whether to save a change that has occurred since the backup began much faster. More specifically, using a physical path (such as the physical order in which the blocks appear in the file) to access the data is often much faster than using a logical path (such as an index tree) when the backup is initially taken.

 

The Continuous Backup Method

The Continuous Backup Method provides the capability to continuously save, in a persistent change log, further changes made to the source database after the backup is taken. As the backup copy is initially made, any changes made to the previously copied portion are written to the continuous backup change log. Thereafter, all further changes to the source database are also written to the continuous backup change log. The backup copy becomes consistent, current, and complete at that (and every) point in time by continuously rolling forward the changes in the continuous backup change log to the backup copy (see footnote 3). When it is time to restore the database, the backup copy simply is written to the target database to bring it to a consistent, current, and complete state.


3 Of course, performing a continuous backup starts to approach the availability and consistency/completeness of using a classic data replication engine to create and maintain the backup copy. While we advocate using data replication techniques to provide a viable backup copy of your production database (visit www.ShadowbaseSoftware.com/solutions/business-continuity/ for such a data replication engine implementation), we understand that some customers will continue to require backup copies via the more traditional methods, especially for creating snapshot point-in-time copies of data. We hope that the new methods discussed in this article will help improve state-of-the-art solutions for such backups.
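For illustration only, here is a minimal, hypothetical sketch of a continuous-backup loop (not the Shadowbase replication engine): after the initial copy, every committed change is queued and immediately rolled forward into the backup copy, so the copy stays current at every point in time and a restore becomes a simple copy operation.

import queue
import threading

change_queue: "queue.Queue[tuple]" = queue.Queue()   # committed (key, value) changes
backup_copy: dict = {}

def apply_changes_forever():
    """Continuously roll changes forward so the backup copy never goes stale."""
    while True:
        key, value = change_queue.get()
        backup_copy[key] = value
        change_queue.task_done()

def on_commit(key, value):
    """Called by the source database for every committed change."""
    change_queue.put((key, value))

threading.Thread(target=apply_changes_forever, daemon=True).start()

backup_copy.update({374: 10})        # initial backup image
on_commit(374, 92)                    # a later change flows straight to the copy
change_queue.join()                   # wait for the applier to catch up
print(backup_copy[374])               # 92: restoring is just copying backup_copy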

 

^
TABLE OF CONTENTS

 
bannerFD.jpg
 

TCJanes.png

T.C. Janes
Client Chief Technologist- Cerner
EG AMS Presales Global Accounts
Hewlett Packard Enterprise

T.C. Janes is the Hewlett Packard Enterprise Client Chief Technologist for Cerner Corporation.  His role is to understand and quantify Cerner’s business-technology requirements and strategize with HPE portfolio organizations across all Global Business Units to address Cerner’s business-technology challenges. His role aligns current and future HPE solutions to Cerner’s strategies, and advocates for Cerner’s needs/interests within Hewlett Packard Enterprise.


I am a nerd.  My nerdiness embraces characters such as Marvel’s “the Avengers”.  Not the “Infinity War” kind, but the “Age of Ultron” kind.  In “Age of Ultron”, the headliner is an archetype for artificial intelligence gone bad.  But then, what could one expect when the Official Handbook of the Marvel Universe lists his occupation as “would-be conqueror, enslaver of men”?  Thus, Ultron represents the ultimate example of an artificial intelligence application gone wrong: intelligence that seeks to overthrow the humans who created it.  Imagine, though, taking Ultron’s positives (genius intelligence, stamina, reflexes, subsonic flight speed, and demi-godlike durability) and applying them to a new occupation as “a qualified practitioner of medicine”?

Today, we have boundless information; limitless connections between organizations, people and things; pervasive technology; and infinite opportunities to generate many kinds of value, for our organizations, societies, our families and ourselves.  Technology enhanced with artificial intelligence is all around us. You might have a robot vacuum cleaner ready to leap into action to clean up your kitchen floor. Maybe you asked Siri or Google—two apps using decent examples of artificial intelligence technology—for some help already today. Or as recently documented, Siri sent an email its owner did not want sent because it “thought” it heard certain keywords.  The continual enhancement of AI and its increased presence in our world speak to achievements in science and engineering that have tremendous potential to improve our lives.  

What Is Artificial Intelligence, Really?
AI has been around in some incarnation since the 1950s, and its promise to revolutionize our lives has been raised frequently, with many of the promises remaining unfulfilled.  Fueled by the growth of capabilities in computational hardware and associated algorithm development, as well as some degree of hype, AI research programs have ebbed and flowed.

Confusion surrounding AI – its applications in healthcare and even its definition – remains widespread in popular media. Today, AI is shorthand for any task a computer can perform just as well as, if not better than, humans.  AI is not defined by a single technology. Rather, it includes many areas of study and technologies behind capabilities like voice recognition, natural-language processing (NLP), image processing and others that benefit from advances in algorithms, abundant computation power and advanced analytical methods like machine learning and deep learning.

Most of the computer-generated solutions now emerging in healthcare do not rely on independent computer intelligence. Rather, they use human-created algorithms as the basis for analyzing data and recommending treatments.


Ex 1. Main tenets of Artificial Intelligence.

 

By contrast, “machine learning” relies on neural networks (a computer system modeled on the human brain). Such applications involve multilevel probabilistic analysis, allowing computers to simulate and even expand on the way the human mind processes data. As a result, not even the programmers can be sure how their computer programs will derive solutions.

Starting around 2010, the field of AI has been jolted by the broad and unforeseen successes of a specific, decades-old technology: multi-layer neural networks (NNs). This phase-change reenergizing of a particular area of AI is the result of two evolutionary developments that together crossed a qualitative threshold:

  1. Fast hardware Graphics Processor Units (GPUs) allowing the training of much larger—and especially deeper (i.e., more layers)—networks, and
  2. Large labeled data sets (images, web queries, social networks, etc.) that could be used as training testbeds.  

This combination has given rise to the “data-driven paradigm” of Deep Learning (DL) on deep neural networks (DNNs), especially with an architecture termed Convolutional Neural Networks (CNNs).  More to come on this shortly.
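For readers who want to see what such a network looks like in code, here is a minimal, hypothetical sketch using the open-source TensorFlow/Keras library; the layer sizes, the 28x28 grayscale input shape, and the 10 output classes are illustrative assumptions, not any clinical model.

import tensorflow as tf

# A tiny CNN: convolution layers learn local image features (edges, textures),
# pooling layers downsample, and dense layers map the features to class scores.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # e.g., 10 hypothetical classes
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()   # training would follow with model.fit(images, labels, ...)

Training such a network on a large labeled data set is exactly where the GPUs mentioned above earn their keep.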

Diagnosing with “The Stethoscope of the 21st Century”
A new kind of doctor has entered the exam room and his name is not Dr. Ultron.  Artificial intelligence is making its way into hospitals around the world. Those wary of a robot takeover have nothing to fear; the introduction of AI into health care is not necessarily about pitting human minds against machines. AI is in the exam room to expand, sharpen, and at times ease the mind of the physician so that doctors are able to do the same for their patients.


Ex 2. Productivity Gains from Artificial Intelligence in Healthcare

 

Bertalan Meskó, better known as The Medical Futurist, has called artificial intelligence “the stethoscope of the 21st century.” His assessment may prove to be even more accurate than he expected. While various techniques and tests give physicians all the information they need to diagnose and treat patients, they are already overburdened with clinical and administrative responsibilities, and sorting through the massive amount of available information is a daunting, if not impossible, task.

That’s where having the 21st century stethoscope could make all the difference.

The applications for AI in medicine go beyond administrative drudge work, though. From powerful diagnostic algorithms to finely-tuned surgical robots, the technology is making its presence known across medical disciplines. Clearly, AI has a place in medicine; what we don’t know yet is its value. To imagine a future in which AI is an established part of a patient’s care team, we’ll first have to better understand how AI measures up to human doctors. How do they compare in terms of accuracy? What specific, or unique, contributions is AI able to make? In what way will AI be most helpful — and could it be potentially harmful — in the practice of medicine? Only once we’ve answered these questions can we begin to predict, then build, the AI-powered future that we want.

AI vs. Human Doctors
Although we are still in the early stages of its development, AI is already just as capable as (if not more capable than) doctors in diagnosing patients. Researchers at the John Radcliffe Hospital in Oxford, England, developed an AI diagnostics system that’s more accurate than doctors at diagnosing heart disease, at least 80 percent of the time. At Harvard University, researchers created a “smart” microscope that can detect potentially lethal blood infections: the AI-assisted tool was trained on a series of 100,000 images garnered from 25,000 slides treated with dye to make the bacteria more visible. The AI system can already sort those bacteria with a 95 percent accuracy rate. A study from Showa University in Yokohama, Japan revealed that a new computer-aided endoscopic system can reveal signs of potentially cancerous growths in the colon with 94 percent sensitivity, 79 percent specificity, and 86 percent accuracy.

In some cases, researchers are also finding that AI can outperform human physicians in diagnostic challenges that require a quick judgment call, such as determining whether a lesion is cancerous. In one study, published in December 2017 in JAMA, deep learning algorithms were better able to diagnose metastatic breast cancer than human radiologists when under a time crunch. While human radiologists may do well when they have unrestricted time to review cases, in the real world (especially in high-volume, quick-turnaround environments like emergency rooms) a rapid diagnosis could make the difference between life and death for patients.

AI is also better than humans at predicting health events before they happen. In April, researchers from the University of Nottingham published a study that showed that, trained on extensive data from 378,256 patients, a self-taught AI predicted 7.6 percent more cardiovascular events in patients than the current standard of care. To put that figure in perspective, the researchers wrote: “In the test sample of about 83,000 records, that amounts to 355 additional patients whose lives could have been saved.” Perhaps most notably, the neural network also had 1.6 percent fewer “false alarms” — cases in which the risk was overestimated, possibly leading to patients having unnecessary procedures or treatments, many of which are very risky.

AI is perhaps most useful for making sense of huge amounts of data that would be overwhelming to humans. That’s exactly what’s needed in the growing field of precision medicine.  Hoping to fill that gap is The Human Diagnosis Project (Human Dx), which is combining machine learning with doctors’ real-life experience. The organization is compiling input from 7,500 doctors and 500 medical institutions in more than 80 countries in order to develop a system that anyone — patient, doctor, organization, device developer, or researcher — can access in order to make more informed clinical decisions.

Potential Pitfalls you need to consider.
There are practical actions that IT leaders can take or need to be aware of to cut through the AI confusion, complexity and hype and to position their organizations to successfully exploit AI for real business value.

Do your homework, get calibrated, and keep up.
While most executives won’t need to know the difference between convolutional and recurrent neural networks, you should have a general familiarity with the capabilities of today’s tools, a sense of where short-term advances are likely to occur, and a perspective on what’s further beyond the horizon. Tap your data-science and machine-learning experts for their knowledge, talk to some AI pioneers to get calibrated, and attend an AI conference or two to help you get the real facts; news outlets can be helpful, but they can also be part of the hype machine. Ongoing tracking studies by knowledgeable practitioners, such as the AI Index (a project of the Stanford-based One Hundred Year Study on Artificial Intelligence), are another helpful way to keep up.

  1. Adopt a sophisticated data strategy.
    AI algorithms need assistance to unlock the valuable insights lurking in the data your systems generate. You can help by developing a comprehensive data strategy that focuses not only on the technology required to pool data from disparate systems but also on data availability and acquisition, data labeling, and data governance.
     
  2. The explainability problem
    Explainability is not a new issue for AI systems.  But, it has grown along with the success and adoption of deep learning, which has given rise both to more diverse and advanced applications and to more opaqueness.  Larger and more complex models make it hard to explain, in human terms, why a certain decision was reached (and even harder when it was reached in real time). This is one reason that adoption of some AI tools remains low in application areas where explainability is useful or indeed required.  Furthermore, as the application of AI expands, regulatory requirements could also drive the need for more explainable AI models.
     
  3. Bias in data and algorithms
    Many AI limitations can be overcome through technical solutions already in the works.  Bias is a different kind of challenge. Potentially devastating social repercussions can arise when human preferences (conscious or unaware) are brought to bear in choosing which data points to use and which to disregard. Furthermore, when the process and frequency of data collection itself are uneven across groups and observed behaviors, it’s easy for problems to arise in how algorithms analyze that data, learn, and make predictions. Negative consequences can include misrepresented scientific or medical prognoses or distorted financial models.  In many cases, these biases go unrecognized or disregarded under the veil of “advanced data sciences,” “proprietary data and algorithms,” or “objective analysis.”

    As we deploy machine learning and AI algorithms in new areas, there probably will be more instances in which these issues of potential bias become baked into data sets and algorithms. Such biases have a tendency to stay embedded because recognizing them, and taking steps to address them, requires a deep mastery of data-science techniques including data collection.

HPE is already there.
HPE is hyper-focused on delivering enterprise artificial intelligence breakthroughs that unlock new revenue streams and build competitive advantages for our partners and customers. HPE offers a comprehensive set of computing innovations specifically targeted to accelerate deep learning analytics and insights. Building on a strong track record of comprehensive, workload-optimized compute solutions for AI and deep learning with its purpose-built HPE Apollo portfolio, HPE introduced a portfolio of new deep learning solutions that maximize performance, scale, and efficiency. HPE now offers greater choice for larger scale, dense GPU environments and addresses key gaps in technology integration and expertise with integrated solutions and services offerings.

HPE last year introduced the HPE Deep Learning Cookbook, a set of tools and recommendations to help customers choose the right technology and configuration for their deep learning tasks. It now includes the HPE Deep Learning Performance Guide which uses a massive knowledge base of benchmarking results and measurements in the customer’s environment to guide technology selection and configuration. By combining real measurements with analytical performance models, the HPE Deep Learning Performance Guide estimates the performance of any workload and makes recommendations for the optimal hardware and software stack for that workload.


Ex 4. Main Components of the HPE Deep Learning Cookbook

Capitalizing on the full range of AI and deep learning capabilities requires purpose-built computers capable of learning freely, reasoning, and determining the most appropriate course of action in real time. The new HPE Apollo 6500 Gen10 System best addresses the most important step of training the deep learning model, offering support for eight NVIDIA Volta GPUs and delivering a dramatic increase in application performance.

The HPE SGI 8600 server is the premier HPC platform for petaflops-scale deep learning environments. A liquid cooled, tray-based, and high-density clustered server, the 8600 now includes support for Tesla GPU accelerators with NVLink interconnect technology. The 8600 utilizes GPU-to-GPU communication which enables 10X the FLOPS per node compared to CPU-only systems, so it is designed to scale efficiently and enable the largest and most complex data center environments with unparalleled power efficiency.

The NVLink 2.0 GPU interconnect is particularly useful for deep learning workloads, characterized by heavy GPU-to-GPU communications. High-bandwidth, low-latency networking adapters (up to four high-speed Ethernet, Intel® Omni-Path Architecture, InfiniBand Enhanced Data Rate [EDR], and future InfiniBand HDR per server) are tightly coupled with the GPU accelerators, which allows the system to take full advantage of the network bandwidth.

Consider what HPE is already doing for healthcare with AI.   The German Center for Neurodegenerative Diseases (DZNE) is using HPE’s memory-driven computing architecture to quickly and accurately process massive amounts and diverse types of data. Generated by genomics, brain imaging, clinical trials and other research into Alzheimer’s disease, the vast data is too much for traditional computing methods; they are simply too slow. Our system’s single, huge pool of addressable memory is easing bottlenecks, and opening the door to a cure.

The United Kingdom is aiming to cut cancer deaths by 10% using artificial intelligence as the key driver of improved health outcomes.  The ambitious new plan calls for the National Health Service (NHS, whose EMR data is hosted on HPE servers), the AI industrial sector, and health charities to use data and AI to transform the diagnosis of chronic diseases, with the goal of seeing around 22,000 fewer people dying from cancer each year by 2033. The plan calls for at least 50,000 people each year to be diagnosed at an early stage of prostate, ovarian, lung, or bowel cancer through the use of emerging technologies that cross-reference people’s genetics, habits, and medical records with national data to spot cancer early.

HPE is also partnering with other research and clinical organizations seeking to advance discovery in neuroscience. Imagine the data we can capture and analyze when working with the human brain, which has 100 billion neurons and 100 trillion synapses!

This journey is just getting started…
The promise of AI in healthcare is enormous, and the technologies, tools, and processes needed to fulfill that promise haven’t yet fully arrived. But if you believe it’s better to let other pioneers take the arrows, you may also find it’s very difficult to leapfrog from a standing start if you choose not to explore what AI tools can and can’t do now. With researchers and AI pioneers poised to solve some of today’s most difficult conundrums, you may want to partner with a team that understands what is happening on that frontier.  HPE should be that partner.


Reference Support
"Artificial Intelligence Will Redesign Healthcare," The Medical Futurist, 2017.
“What AI Can and Can’t Do (Yet) for Your Business,” McKinsey Quarterly, January 2018.
“Your Future Doctor May Not Be Human. This Is the Rise of AI in Medicine,” Futurism.com, January 31, 2018.
The One Hundred Year Study (ai100.stanford.edu)

 

^
TABLE OF CONTENTS

 
COend.png
logoheader.png
 

Rethinking Support for Hybrid IT: An Inclusive, Relationship-Based, Tailored Approach


author.jpg

Kelly Haviland
HPE Pointnext
WW Datacenter Care
Product Manager

Kelly is passionate about developing services that delight customers and help them meet their business objectives. He has been working in the services space for the past 10 years and was one of the original developers of Datacenter Care. Prior to his work in support services, he spent 10 years in HP IT working on developing and deploying mission critical infrastructure and applications.


Hybrid IT is a “both-and” world. IT leaders think in terms of both on-premises infrastructure and cloud. They focus on keeping the data center operating smoothly at high levels of availability and at the same time reducing costs. They’re looking for ways to get more value out of their current IT investments and carve out infrastructure for innovative business services.

Successful digital transformations are built around this kind of flexible, inclusive, “both-x-and-y” thinking. When it comes to support for hybrid IT, though, the marketplace hasn’t kept pace, and IT organizations often find themselves limited to an “x-only” experience by their vendors’ offerings. The “x” is, of course, the traditional break/fix approach – which may be perfectly adequate for businesses that just need some help with their keep-the-lights-on needs. But for other companies, there’s a pressing need for a solution that encompasses both those needs and the other crucial dimensions of today’s hybrid environments.

KellyOpener.jpg

At HPE, we’ve understood for years that hybrid IT calls for a new kind of support service – actually since before we launched HPE Datacenter Care back in 2013. Since then we’ve constantly added capabilities and honed the offer to create what we think is the most comprehensive support service in the market today.

Here’s a quick overview of HPE Datacenter Care:

The Core

  1. Reactive Support. We troubleshoot, monitor, and remediate devices across your infrastructure to reduce the number of issues and resolve problems as they occur. Like all of the other components of HPE Datacenter Care, Reactive Support covers your entire IT environment, including non-HPE equipment.
  2. Relationship Management. An account support team, including local and remote team members, gives you a single point of accountability. A local, assigned Account Support Manager develops full knowledge of your business goals as well as your IT environment, and partners with your IT team to deliver insights on best practices, processes, and technologies. This is true relationship-based support that accelerates your business in ways that break/fix can’t.
  3. Enhanced Call Management. Special call routing gives you fast access to HPE’s Advanced Solution Center team and experts in environment-level issues, as well as specialist teams or Centers of Excellence for expertise areas such as Servers, Storage, VMware, Microsoft, SAP, and RedHat.
  4. Proactive Options. Now we add to the solution stack your choice of proactive capabilities, depending on the type of products covered – for example, you can select Server, Storage and SAN, or Network proactive options.

The Building Blocks

HPE understands that every datacenter is different, because every business is different, with its own strategies and objectives. In a sense, your current environment already embodies your business goals, and a generic support solution that fails to take that into account isn’t going to cut it. So HPE Datacenter Care includes specialized capabilities to fit your environment. A good way to think of these is as building blocks that provide a tailored support experience. They cover specific technologies in your infrastructure, yet because they’re standardized, they keep costs low.

Let’s say you just deployed SAP HANA TDI. We can provide solution support with HPE Datacenter Care for SAP HANA TDI, with access to specialists and the SAP HANA CoE. Or you use Microsoft Azure Stack or Hyperscale compute? Sure, we can do all of that and more. Other building blocks include HPE OneSphere, Storage, Networking, Multivendor, HPE NonStop, and NFV.

Other HPE Deliverables. We can further tailor the solution to your organization’s unique needs with special options such as HPE GreenLake Flex Capacity, our innovative consumption-based IT model that combines on-premises infrastructure with cloud-like agility and economics.

Proof of (Part of) the Pudding

So what kind of impact can HPE Datacenter Care have? A new white paper from IDC (The Business Value of HPE Datacenter Care) looked at that question in some detail, based on data gathered from companies currently using the service, and came up with actual estimated dollar amounts. One result that caught my eye: these businesses reduced their revenue losses due to unplanned outages by around two-thirds on average (see the figure).

kellygraph.png

Source: IDC 2017

And this is just one of some truly eye-opening figures in the study. Now, no question, these are impressive results. But they’re just part of the picture, and maybe not the biggest part. Operational savings are one thing, but what if it turns out that an IT team, freed from the time pressures of dealing with unplanned outages, was able to develop a bunch of new apps that made life easier for in-house users? Or an innovative online platform that brought in a flood of new customers?

Hard to put a dollar amount on that.

Check out this great report on how HPE Datacenter Care helped human capital services giant ADP speed time-to-market without compromising stability and performance.

And read here how HPE Datacenter Care helped The Prince’s Trust, a U.K. charity, save time and money.

Related Articles: 

Featured articles: