CoverWinter2018_FINAL4.png

INSIDE C2


ELTag.png
 
 
 
EL_Quote.png

Welcome to the Winter issue of Connect Converge

Stacie-Neall-mug-shot.jpg

Most everyone who knows me well, including my teen (big eye roll), often hears me say, “It’s not how you start, it’s how you finish.” My intuition told me that when Antonio Neri, President and CEO of Hewlett Packard Enterprise, took the keynote stage, his words would be powerful, and they were. Soon after outlining his three key priorities of customers, innovation, and culture, Antonio jumped right into the sticky part of technology: the place where we live our real purpose, using technology to tackle both business challenges and human challenges. So much goodness is happening in improving food production, achieving health care breakthroughs, and eradicating human trafficking. Technology is about the greater good and living our real purpose through the good work we do. Antonio believes that technology’s greater promise lies in the good we can do. HPE is moving that needle every day.

Meanwhile, back in the HPE User Community booth, if there is anything we have learned over the Discover years, it is that there is power in community. We love the fact that we are always among the top 10 most-visited booths on the transformation floor. With 14 Tech Forums led by industry experts, customers, and partners, the booth offers a continuous learning environment. Connect members new and old understand that Tech Forums are not only one of the go-to places to network with peers, but also a place to get granular with their technology challenges and receive personal attention in solving them. And that is another great way to start!

With so much at play at Discover, there was also Lewis Hamilton, the famed Formula One @MercedesAMGF1 World Champion, who left us with a sense of what it takes to compete and win when technology is your collaborator. Unequivocally, the future really does belong to the fast. I think we all have much to look forward to from Hewlett Packard Enterprise in 2019, and we can expect a strong finish.

On behalf of the Connect team we wish you and yours a very happy holiday and a prosperous new year.

See you in Las Vegas!

Stacie Neall
Managing Editor
@sjneall

P.S. Interested in getting published in 2019? Please share your technical prowess with your HPE User Community. We have a cool (yes, I think it is “cool”) new publishing platform based on Ethereum blockchain technology debuting in 2019.

See how it works here:


PresidentsLetterTag.png
 
 
navid.jpg

Greetings HPE Community,

How great would it be if you could do a three-day work task in just one day? What if your workplace was a champion Formula 1 team? Would those saved days make a difference? HPE sure thought so when it partnered with the Mercedes-AMG Petronas F1 team. Analysis that used to take the team three days between races can now be done in one, thanks to the solutions provided by Hewlett Packard Enterprise. Saving two days makes quite a difference in planning for the next race when cars are running at over 200 miles per hour and a split second can be the difference between winning and losing. This bit of information, along with a wealth of other news, was revealed at the recent HPE Discover show in Madrid, complete with an in-person interview with champion F1 driver Lewis Hamilton, who wowed the crowd with stories about the championship season and how HPE played a role.

Of course, Formula 1 wasn't the only topic at the Discover show. Subjects ranging from mission-critical computing to Hybrid IT and, of course, the Edge drove conversation across three days' worth of sessions, panels, and live demos. Simply put, HPE put on another great show. To top it all off, they threw an awesome party with a live concert by the legendary Nile Rodgers and Chic.

Your Connect team was there for it all, and in between hearing racing stories and drinking Spanish wine at the concert, we were able to provide a place of community and learning for our members. Our Connect Lounge on the show floor featured multiple sessions throughout the day, ranging from blockchain to Synergy and everything in between.

Speaking of blockchain, we are excited to host another HPE Blockchain Forum in New York City on Thursday, February 21st. For more information and complimentary registration, please visit our website, www.connect-community.org. As always, we are excited about HPE's embrace of this new technology and are happy to give our members a place they can go to hear from, and network with, the players involved at HPE.

After a great few days in Madrid, the Connect team is back at work to provide all we can for the HPE Community. Unlike the post F1 race analysis, the HPE Discover show is a 3 day affair we'd rather not compact to 1!

Navid Khodayari
Idelji
Connect Worldwide President

 


Congrats.png

Each year Connect Worldwide recognizes an outstanding volunteer and an individual or group within HPE that goes above and beyond to support Connect with their time and talents. The Connect board of directors reviews the nominations and selects the winners. Given the number of outstanding Connect volunteers who help support and grow the Connect global community, it has become increasingly difficult to choose just one winner from each category. We extend a special thank you to this year's winners and all of our engaged volunteers.

Steve.png

Steve Davidek is the IT Manager for the City of Sparks in Nevada. Starting his career in 1984, Steve has more than three decades of experience in the IT industry working with HPE servers, storage, and networking. At the City of Sparks, he manages more than 20 HP ProLiant servers (blade and rack) and five 3PAR arrays, implements Aruba networking, manages virtual servers and desktops, and maintains HP PCs and notebooks for nearly 500 users. As a volunteer for the HPE User Groups, Steve has served at the local level as a chapter president and has led advocacy efforts across the globe for the past 18 years. Steve served on the Connect Board of Directors for two years as President and then as past president. He currently supports the HPE User Community as the official emcee for Connect's Tech Forums at HPE Discover events.


Iain.png

Iain Liston-Brown passed the Radio Amateurs' Examination at the age of 14. He was the Chair of Governors at a local Secondary (High) School with approximately 1,300 students from Grades 7 through 13. Iain is a Liaison Officer and Committee Volunteer for BITUG, the British Isles NonStop User Group, a "not for profit" independent user group organization. He organizes community events and specializes in Customer Management, Business Requirements Definitions, Financial Business Cases, NonStop Architecture & Solutions, Team Leadership and Manufacturing & Finance. Iain achieved the Wine & Spirit Education Trust Level 3 "Advanced" certificate in 2010.

COend.png


TopThinkingWinterc22018.png
ChrisP.png
Chris Purcell

Chris Purcell has 29+ years of experience working with technology within the data center. He is currently focused on integrated systems (server, storage, networking, and cloud) that come wrapped with a complete set of integration consulting and integration services.

You can find Chris on Twitter as @Chrispman01 and @HPE_ConvergedDI and his contribution to the HPE CI blog at www.hpe.com/info/ciblog

What does it mean to software-define something, or to move into the digital age? These phrases may sound strange at first, but if you look around, companies all over the world are doing these things every day.

Hell, take pizza for example. You might not think you can software-define pizza, but it has already happened, and you probably didn’t even realize it. Domino’s Pizza has distilled the pizza ordering experience down from calling the pizza store, to ordering pizza online, to now only requiring one simple action: text Domino’s a pizza emoticon, and your favorite saved order is delivered to you. That pizza emoticon may look like a tasty morsel, but Domino’s has transformed it into a template that you can customize, edit, and use to repeatedly and reliably order your favorite pizza with minimal effort. That’s what it means to software-define something.

 

Software-define your data center

For a data center, it’s just as important to take the complicated processes of old and distill them down into simple actions. Bring the magic and simplicity of software and integrate it directly into the hardware at the most basic level, so that from start to finish, the process is defined and controlled through software. For example, instead of manually provisioning servers one-by-one, your infrastructure management software can let you quickly and reliably discover, deploy, and provision your data center. Like the pizza emoticon, you should be able to create templates for your workloads and applications and apply them to servers with minimal effort.

Maintaining those servers is just as important, so when requirements change or firmware needs to be updated, you merely edit a single template. Changes are easily propagated across your infrastructure in the same repeatable and reliable fashion that you used to set it up.
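
To make that template flow concrete, here is a minimal sketch against a hypothetical REST-style infrastructure management API. The appliance address, endpoint paths, and field names are illustrative assumptions rather than any specific product's API; the point is simply that a template is defined once, applied broadly, and edited in one place.

# Illustrative sketch only: define a template once, apply it widely, edit it once.
# The appliance URL, paths, and fields below are hypothetical placeholders.
import requests

APPLIANCE = "https://mgmt.example.com"      # hypothetical management appliance
HEADERS = {"Auth": "<session-token>"}       # assumes a login session already exists

# 1. Define a server template once: firmware baseline and network connections.
template = {
    "name": "web-tier-template",
    "firmwareBaseline": "2018.11.0",
    "connections": [{"network": "prod-vlan-100", "bandwidthMbps": 2500}],
}
resp = requests.post(APPLIANCE + "/rest/server-profile-templates",
                     json=template, headers=HEADERS)
template_uri = resp.json()["uri"]           # e.g. "/rest/server-profile-templates/<id>"

# 2. Apply the template to many servers with minimal effort.
for bay in ("enclosure1-bay1", "enclosure1-bay2", "enclosure1-bay3"):
    requests.post(APPLIANCE + "/rest/server-profiles",
                  json={"templateUri": template_uri, "serverHardware": bay},
                  headers=HEADERS)

# 3. Later, edit the single template (say, a new firmware baseline) and let the
#    change propagate to every profile derived from it.
requests.patch(APPLIANCE + template_uri,
               json={"firmwareBaseline": "2019.02.0"}, headers=HEADERS)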

Controlling your infrastructure through software doesn’t stop at templates. Part of moving into the digital age also means taking advantage of everything digital has to offer to create the optimal solutions.

Looking back at our pizza example, Domino's Pizza didn't stop with the pizza slice emoticon. It has partnered with some of the coolest tech around, such as smart home speakers and smart TVs, to enable its customers to order easily, even when their phone is buried deep somewhere in the couch. In fact, the list of ways it has come up with for customers to order pizza is mind-boggling; this company seems to have something for everybody.

Here's something to think about. Does your data center take advantage of today's tech to have something for everybody? Do you have tools that make your infrastructure easily accessible to developers in your company? Do you have automation tools that make life easier for IT admins? Templates are great, but automating them is even better. Tools that can intelligently predict changes in workload needs and shuffle resources around to meet them are the kinds of tools you want. The best way to achieve this is to choose an infrastructure management solution with an open, unified API that allows you to integrate these tools, and again, do it with minimal effort.

Lastly, it’s also key to make sure that you’re able to software-define with one infrastructure management solution for many different platforms within your data center, not just servers or storage. Lots of companies offer management solutions that manage all your servers, and then another solution to manage your storage, and another to manage your networking. Domino’s doesn’t have separate tools for you to order pizza, brownies, or a salad. Your data center is one product that is delivering services to your company, so you should manage all of it as one.

Software-define your infrastructure with HPE OneView

Are you ready to software-define? Consider HPE OneView, one tool that allows you to automate, integrate, and innovate better in your data center. HPE OneView uses template-based provisioning and updating to speed time-to-value for the services you deliver with your infrastructure. HPE OneView also simplifies the lifecycle management of your infrastructure and makes it easy to integrate with today's most innovative tools, such as Chef, Docker, Puppet, Ansible, Microsoft, VMware, and more, by leveraging an open, unified API. With HPE OneView, you can software-define your HPE solutions such as HPE ProLiant servers, HPE BladeSystem, HPE 3PAR, and more.
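
As a rough illustration of what consuming that open, unified API looks like from an automation tool, here is a short sketch in Python. It follows the general REST conventions HPE OneView documents (a login session, an X-API-Version header, resource collections under /rest/), but the appliance address, credentials, and version number shown are assumptions, so check them against the release you actually run.

# Hedged sketch: one login, one call pattern, many resource types.
import requests

APPLIANCE = "https://oneview.example.com"   # assumed appliance address
VERSION = {"X-API-Version": "800"}          # the version header is release-specific

# Authenticate once and reuse the session token for every resource type.
login = requests.post(APPLIANCE + "/rest/login-sessions",
                      json={"userName": "automation", "password": "<secret>"},
                      headers=VERSION)
auth = {"Auth": login.json()["sessionID"], **VERSION}

# Servers, enclosures, and storage hang off the same API, so a monitoring or
# automation tool can inventory the data center with one loop.
for resource in ("server-hardware", "enclosures", "storage-systems"):
    members = requests.get(APPLIANCE + "/rest/" + resource,
                           headers=auth).json().get("members", [])
    print(resource, "managed:", len(members))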

Think about how you could distill your processes down to be simpler and faster, then head over to hpe.com/info/oneview to learn more about how you can leverage HPE OneView to get started. Or download the free e-book, HPE OneView for Dummies, to dig deeper.

COend.png

AroundStorageWINTER2018_C2.png
CalvinBW.png
Calvin Zito
HPE Blogger
& Storage Evangelist

Calvin Zito is a 35-year veteran of the IT industry and has worked in storage for 27 years. He's an 8-time VMware vExpert. An early adopter of social media and active in communities, he has blogged for 10 years.

You can find his blog at
hpe.com/storage/blog

He started his "social persona" as HPStorageGuy, and since the HP separation he has managed an active community of storage fans on Twitter as @CalvinZito.

You can also contact him via email at calvin.zito@hpe.com

Recently, I was in Madrid for HPE Discover. With the opening of the event, I shared big storage news. My article talks about enhancements to InfoSight, Memory-Driven Flash, HPE Cloud Volumes and more. So read on!

Dr. Heinz-Herman Adam leads a standing-room only Connect Tech Forum at HPE Discover Madrid about how the University of Münster uses 3PAR in their multi-tier storage strategy.

For the last few Discover events, our storage news went out a couple of weeks before Discover. With so much happening, management only picks the top news to actually go out at Discover. This time the news featured storage - which means it's a pretty big announcement. And there’s a lot to talk about, so let me dive into an overview of what we announced.

The world’s most intelligent storage

About a year ago, we started talking about the pillars of the HPE Storage point of view: predictive, cloud ready, and timeless. We had posts on ATSB (Around the Storage Block) talking about it. But over the year, we've refined the pillars and now they are:

  • AI-driven

  • Built for cloud

  • As-a-service experience

“Above” the pillars, we’ve really honed in on what we want to be famous for: intelligent storage. Intelligent storage is focused on unlocking your data’s potential, driving actionable insights, and delivering impact to your business. My colleague Jenna Colleran has a post diving deeper into intelligent storage and why it’s important. If you want to get the jump on understanding it, also check out my Chalk Talk on Unlocking data's full potential with intelligent storage.

Intelligent storage is AI-driven

One of the pillars is AI-driven. It used to be "predictive," but we think storage needs to go beyond predictive and be AI-driven. HPE InfoSight is leaping from predictive analytics into AI. And there are new features to mention that, again, we'll dive deeper into later in the week. Katie Fritsch has that post, and there are a few posts on HPE Storage Tech Insiders (our technical deep-dive blog on hpe.com) that dive into the new features. I'll summarize what's new:

  • New 3PAR Performance Insights powered by InfoSight prevents performance bottlenecks in real time. I have a podcast with Phill Gilbert from the 3PAR product management team that you can listen to and get a good understanding of it.

  • Cross-stack recommendations for HPE Nimble Storage go beyond predictive analytics of your cross-stack environment to provide AI-driven guidance that helps optimize VM performance by diagnosing the root cause of performance bottlenecks.

  • New Resource Planner for Nimble is again an AI-driven tool that helps optimize workload placement based on available resources and takes the guesswork out of determining which systems have headroom.

  • And just for good measure, if you missed it, a couple weeks ago we announced InfoSight for HPE Servers. I have a Chalk Talk that explains it.

Intelligent storage is built for cloud

With more applications being developed for the cloud, enterprises need easy mobility for their hybrid cloud environment (on-premises and cloud). HPE Cloud Volumes is an enterprise-grade, multi-cloud solution with easy multi-cloud mobility. Doug Ko has an article summarizing the HPE Cloud Volumes news. Also check out the deep dive we did around “Built for Cloud” with a focus on HPE Cloud Volumes at our recent Storage Tech Day. 

Here’s what we’ve announced:

  • HPE Cloud Volumes now support Docker and Kubernetes containers. About three years ago, I remember sitting down with a friend at VMworld and he explained containers to me. For a mixed environment (of traditional IT and containers), your storage has to support both and we do that with both 3PAR and Nimble – and now Cloud Volumes works with Docker and Kubernetes. Back to the Tech Day, check out the deep dive we did around container orchestration.

  • HPE Cloud Volumes will expand into UK and Ireland in 2019.

  • Completed SOC 2 Type 1 and HIPAA compliance certifications.

Enhancements to the HPE Storage portfolio

HPE Memory-Driven Flash is a new storage architecture built with Storage Class Memory (SCM) and NVMe. We announced availability with 3PAR starting in December, and it is expected with HPE Nimble Storage in 2019. This is the industry's first enterprise storage with SCM and NVMe. HPE Memory-Driven Flash can improve latency by 10X over NVMe SSDs, delivers up to 2X lower latency, and is 50% faster than NVMe all-flash arrays.

You can get the jump on this by watching my SCM and NVMe for HPE 3PAR and Nimble Storage Chalk Talk, and Matt Morrissey has an article diving into this topic. This was also a topic at our HPE Tech Day (are you seeing a pattern here?), so check out that deep dive, which explains the benefits of SCM over just using NVMe SSDs. If you want an independent view, check out the blog posts from indie blogger Philip Sellers talking about why we jumped to SCM (versus NVMe SSDs, which a lot of the competition is doing) and another from Richard Arnold wrapping up all our storage news.

You have been waiting for it, so we're also announcing Peer Persistence (synchronous replication) for Nimble Storage. It is included with all Gen 5 Nimble arrays at no extra cost. We'll have more on this too, but here's my Peer Persistence Chalk Talk.

HPE also announced an enhanced partnership with Cohesity that addresses the challenges of secondary data sprawl and infrastructure silos by consolidating backup, files, objects, test/dev, and analytics onto a single software-defined scale-out platform based on Cohesity software with qualified HPE Apollo and HPE ProLiant servers. Beginning in early 2019, customers can order these solutions as a single SKU from HPE that will include Cohesity and supported Apollo and ProLiant servers. Simon Watkins has a blog post with the details that includes a podcast I did with Cohesity at Discover.

And we're announcing the availability of the HPE Apollo 4200 Gen10 platform, ideal for big data analytics, scale-out software-defined storage, and other data-centric workloads. We have a blog post from Ashwin Shetty on this topic that includes a walkthrough video of the Apollo 4200 Gen10 that I did in Madrid.

Intelligent storage delivered as a service

I was surprised to hear that today HPE manages 500PB of customer data delivered as a service with HPE GreenLake Flex Capacity. That’s significant given we recently announced GreenLake. We’re expanding it to include consumption-based data protection with Veeam. This enables businesses to protect all the right data from any source for application development and digital services delivery. Including the Veeam Availability Suite with HPE StoreOnce in the HPE GreenLake Flex Capacity solution brings an elastic solution with robust support, and strong integration expertise. Available globally, customers can get started with Veeam and HPE GreenLake Flex Capacity quickly, with no capital outlay, and pay only for what they use.  Monthly charges adjust with usage, and HPE meters and plans capacity so that there is always extra capacity ready to meet customer needs. And I have a blog post for this topic from Don Randall from HPE Pointnext that you should check out for all the details. 

But wait, there’s more

To access all of the latest ATSB articles on our Discover Madrid news, we used the label Storage News so they all appear on the same page - or I should say 6 per page as we now have 7 posts about the news and I expect a few more in the next couple of weeks.

COend.png


EducationCornerTitle.jpg
KellyB.png
Kelly Baig, Badging Program Manager, HPE Education Services

25+ years in high tech in various roles that include Consulting, Channel Mgmt, Product Mgmt, and Marketing. Technology areas include storage and data management, high availability, cloud and hosting, networking, and mobility/wearable technology for enterprise, SMB, and channel business. Industries include healthcare, financial services, ISVs, Service Providers, and telcos.

Technology transformation is almost universally held back by the readiness of the people on the team to operate in the transformed environment. Many customers who have embraced new technology to increase their agility and flexibility have encountered challenges with their transformation efforts. The impact can be significant delays in progress, and sometimes a complete halt. The primary challenges are finding the required talent, equipping teams with the skills needed to build and manage these complex environments, and often overcoming the resistance that arises when people confront change.

HPE_Digital_Learner_2.jpg

With any large IT transformation, the impact on roles is enormous, both in terms of the number of roles affected and in terms of the scope of change required for each role. With large technology transformations, including cloud and edge computing, both IT and business people are affected and need a new understanding of processes, roles, and tools. Without an effective approach for enabling people, the project is not only delayed but also at risk of failing to achieve the expected outcomes for the customer's organization.

These challenges arrive at a time when the pace of technology change makes digital transformation urgent for most organizations, and when more of their people than ever before need to understand and apply digital technology. Yet challenges with people and their skills are stalling, or even preventing, progress on the transformation. Within the workforce, we also see a new generation of workers who demand new methods of professional learning and development, methods their organizations are struggling to provide.

At HPE, we realized we needed an approach that helps our customers succeed with technology transformation by ensuring that their people are front and center in the transformation planning and are served within a digital community that enables their progression in the manner they need. This innovative approach is transforming how HPE-led technology projects are planned with our customers and increases the benefits that customers are likely to obtain from these projects.

i00055920.png

HPE launches HPE Digital Learner™ - a comprehensive, continuous learning solution for our customers 

In rethinking the problem that organizations face today, we examined digital delivery platforms. Our innovation is in how we organize community engagement, project communication, and learning information for our customers. As part of the talent enablement approach, the HPE Digital Learner™ platform provides a social community for team communication and learning guidance. The platform is designed to prescribe guided learning journeys for teams and individuals that encapsulate the steps in training and role development our experts have defined and proven over hundreds of previous engagements; it also enables metrics and reporting around any given project so that continuous learning is encouraged.

This innovation responds to a sea change in how customers need to obtain technical training and acclimation for technology transformation projects, which traditionally has been delivered through live instructor-led training and live consulting efforts, or has been left out of the project altogether. Organizations can no longer afford to send all of their people to this type of training because of the time it takes them out of the office. Moreover, the acceleration of technology change and the scope of change require large numbers of people to be supported through a process of continuous change. For all of these reasons, HPE understood that today's training model needed to evolve into one that helps establish and support a culture of continuous change and innovation across the customer's entire organization.

Our approach is the first of its kind to focus on people with a digital delivery platform and prescribed learning journeys, tailored to a specific organization’s exact requirements whilst sustaining change over time. 

The biggest benefit? Our customers establish a sustainable method for enabling their people to be successful with their technology transformation, creating a culture of success in which their people feel enabled and supported through change.

For more information on HPE Digital Learner™, visit www.hpe.com/ww/digitallearner.  

COend.png


 
schwabold.png
Web-Theresa_Schwab-B&W.png
Theresa Schwab, Growth Strategist

Theresa Schwab is a growth strategy coach and consultant. Theresa leverages her experience founding and operating a successful IT service business in Austin, TX, to help technology-based businesses grow profitable revenue, increase efficiencies, and build teams to scale. Theresa's current roster of clients includes businesses ranging from the early launch phase to well-established 25-year-old firms. Theresa's corporate experience includes leadership roles at DuPont, Motorola, and Freescale Semiconductor. Connect with Theresa on LinkedIn, Twitter, or her blog.

One of my favorite memories from my tenure at Motorola is calling Dan Keitz and anticipating how he would make me laugh when he answered. Dan, an amateur comedian, replaced his outgoing voicemail message every day with a new humorous one intended to spark a laugh from the caller, and it did. In this age of texting, chat, and website form fills, the skill of answering the phone appears to be a lost art.

Recently, I’ve been listening to service industry clients’ incoming calls. Clicking on each call felt like opening an exquisitely wrapped gift. I couldn’t wait to see what the next call revealed. How many rings will it take before someone answers?  The anticipation kept building. Is this call physically ringing somewhere and no one is answering?  Will anyone ever answer? 

The calls answered by a real live human being held an element of surprise too. The wide range of greetings only furthered my curiosity. I envisioned war rooms filled with people robotically answering call after call. Others caught someone on their cell phone with a cacophony of background noise ranging from road noise to restaurants to echo chambers. It felt like winning when I scored one answered in a professional manner mentioning the business name followed by “this is _________. How may I help you?”

Here are a few insights and pieces of advice after listening to these calls:

  1. Don’t make a prospect call you twice. It’s disappointing I have to say it. Have you set clear expectations with your team on how quickly to return calls, especially prospect calls?

  2. If you must have an automated attendant, have it pick up quickly.

  3. Always answer like the caller is a new prospect. It may be the 100th cold call, but don’t act like it.

  4. Be prepared and know exactly the questions to ask a prospect. Don’t act shocked when someone calls you wanting to do business. Give your team a script or detailed instructions for creating the best caller experience.

Even though I have my clients’ cell phone numbers, I will periodically call their office to experience their first impressions directly. 

What emotions do you want to evoke in your callers - confidence, reassurance, enthusiasm, joy?

When was the last time you secretly called your office?

 
COend.png

DiscoverLasVegas2019FULLPAGE.jpg

 


DanaTitle.png
DanaGarderProfilePic.png
Dana Gardner

Analyst Dana Gardner hosts conversations with the doers and innovators—data scientists, developers, IT operations managers, chief information security officers, and startup founders—who use technology to improve the way we live, work, and play.

View an archive of his regular podcasts.

The next BriefingsDirect hybrid cloud advancement interview explores how the triumvirate of a global data center hosting company, a hybrid cloud platform provider, and a global cloud community are solving some of the most vexing problems for bringing high-performance clouds to more regions around the globe.

We will now explore how Equinix, Microsoft Azure Stack, and Hewlett Packard Enterprise (HPE) Cloud28+ are helping managed service providers (MSPs) and businesses alike obtain world-class hybrid cloud services.

David Anderson

Here to explain more about new breeds of hybrid cloud solutions are David Anderson, Global Alliance Director at Equinix for its Microsoft alliance, and Xavier Poisson, Vice-President of Worldwide Services Providers Business and Cloud28+ at HPE. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: There seems to be a paradox when it comes to hybrid cloud -- that it works best in close proximity technologically yet has the most business payoff when you distribute it far and wide. So how are Equinix, Microsoft, and HPE together helping to solve this paradox of proximity and distribution?

Anderson: That’s a great question. You are right that hybrid cloud does tend to work better when there is proximity between the hybrid installation and the actual public cloud you are connecting to. That proximity can actually be lengthened with what we call interconnectedness.

Interconnectedness is really business-to-business (B2B) and business-to-cloud private network Ethernet connections. Equinix is positioned with more than 200 data centers worldwide, the most interconnections by far around the world. Every network provider is in our data centers. We also work with cloud providers like Microsoft. The Equinix Cloud Exchange connects businesses and enterprises to those clouds through our Equinix Cloud Exchange Fabric. It’s a simple one-port virtual connection, using software-defined networking (SDN), up to the public clouds.

That provides low-latency and high-performance connections -- up to 10 Gigabit network links. So you can now run a hybrid application and it’s performing as if it’s sitting in your corporate data center not far away.

The idea is to be hybrid and to be more dispersed. That dispersion takes place through the breadth of our reach at Equinix with more than 200 data centers in 45 metro areas all over the world -- and so, interconnected all over.

Plus, there are more than 50 Microsoft Azure regions. We’re working closely with Microsoft so that we can get the cloud out to the customers fairly easily using the network service providers in our facilities. There are very few places on Earth where a customer can’t get from where they are to where we are, to a cloud – and with a really high-quality network link.

Gardner: Xavier, why is what we just heard a good fit for Cloud28+? How do you fit in to make hybrid clouds possible across different many regions?

Xavier Poisson

Poisson: HPE has invested a lot in intellectual property in building our own HPE and Microsoft Azure Stack solution. It’s designed to provide the experience of a private cloud while using Microsoft as your technology’s tool.

Our customers want two things. The first is to be able to execute clouds on-premises, but also to connect to wider public clouds. This is enabled by what we are doing with a partner like Equinix. We can jump from on-premises to off-premises for an end-user customer.

The second is, when a customer decides to go to a new architecture around hybrid cloud, they may need to get reach and this reach is difficult now.

So, how can we support partners to find the right place, the right partners at the right moment, in the right geographies, with the right service level agreements (SLAs) for them to meet their business needs?

The fact that we have Equinix inside of Cloud28+ as a very solid partner is helping our customers and partners to find the right route. If I am an enterprise customer in Australia and I want to reach into Europe, or reach into Japan, I can, through Cloud28+, find the right service providers to operate the service for me. But I will also be hosted by a very compelling co-location company like Equinix, with the right SLAs. And this is the benefit for every single customer.

This has a lot of benefits for our MSPs. Why? Because our MSPs are evolving their technologies, evolving their go-to-market strategies, and they need to adapt. They need to jump from one country to another country, and they need to have a sustainable network to make it all happen. That’s what Equinix is providing.

We not only help the end-user customers, but we also help our MSPs to build out their capabilities. Why? We know that with interconnectedness, as was just mentioned, that they can deliver direct cloud connectivity to all of their end users.

Together we can provide choice for partners and end-user customers in one place, which is Cloud28+. It’s really amazing. 

Gardner: What are some of the compelling new use cases, David? What are you seeing that demonstrates where this works best? Who should be thinking about this now as a solution?

Data distribution solutions 

1-Microsoft-Azure-Stack-logo.png

Anderson: The solution -- especially combined with Microsoft Azure Stack -- is suited to those regions that have had data sovereignty and regulatory compliance issues. In other words, they can’t actually put their data into the public cloud, but they want to be able to use the power, elasticity, and the compute potential of the public cloud for big data analytics, or whatever else they want to do with that data. And so they need to have that data adjacent to the cloud.

Same for an Azure Stack solution. Oftentimes it will be in situations where they want to do DevOps. The developers might want to develop in the cloud, but they are going to bring it down to a private Azure Stack installation because they want to manage the hardware themselves. Or they actually might want to run that cloud in a place where public Azure may not yet have an availability zone. That could be sub-Saharan Africa, or wherever it might be -- even on a cruise ship in the middle of the ocean.

There's a lot of legacy hardware out there. The need is for applications to run on a cloud, but the hardware can't be virtualized. These workloads could be moved to Equinix and then connect to a cloud.

Another use case that we are driving hard right now with Microsoft, HPE, and Cloud28+ is the idea of an enterprise cage. There is a lot of legacy hardware out there. The need is for applications to run to some degree on a cloud, but the hardware can't be virtualized. These workloads, however, could be moved to an Equinix data center and connected to the cloud. They can then use the cloud for the compute part, and all of a sudden they are still getting value out of that legacy hardware, in a cloud environment, in a distributed environment.

Other areas where this is of value include a [data migration] appliance that is shipped out to a customer. We’ve worked a lot with Microsoft on this. The customer will put up to 100 TB of data on the appliance. It then gets shipped to one of our data centers where it’s hooked up through high-speed connection to Azure and the data can be ingested into Azure.

Now, that's a onetime thing, but it gives us and our service providers on Cloud28+ the opportunity to talk to customers about what they are going to do in the cloud and what sort of help they might need.

Scenarios like that provide an opportunity to learn more about what enterprises are actually trying to do in the cloud. It allows us then to match up the service providers in our ecosystem, which is what we use Cloud28+ for with enterprise customers who need help.

Gardner: Xavier, it seems like this solution democratizes the use of hybrid clouds. Smaller organizations, smaller MSPs with a niche, with geographic focus, or in a vertical industry. How does this go down market to allow more types of organizations to take advantage of the greatest power of hybrid cloud?

Hybrid cloud power packaged

equinix-sy4-colocation-hall_1.jpg

Poisson: We have packaged the solutions together with Equinix by default. That means MSPs can just cherry-pick to provide new cloud offerings very quickly.

Also, as I often say, the IT value chain has not changed that much. It means that if you are a small enterprise, let’s say in the United States, and you want to shape your new generation of IT, do you go directly to a big cloud provider? No, because you still believe in your systems integrator (SI), and in your value-added reseller (VAR).

Interestingly, when we package this with Equinix and Microsoft, having this enterprise cage, the VARs can take the bull by the horns. Because, when the customer comes to them and says, "Okay, what should I do, where should I put my data, how can I do the public cloud but also a private cloud?" the VAR can guide them because they have an answer immediately -- even for small- to medium-sized businesses (SMBs).

Our purpose at Cloud28+ is to explain all of this through thought leadership articles that we publish -- explaining the trends in the market, explaining that the solutions are there. You know, not a lot of people know about Equinix. There are still people who don’t know that they can have global reach.

If you are a start-up, for example, you have a new business, and you need to find MSPs everywhere on the globe. How do you do that? If you go to Cloud28+ you can see that there are networks of service providers, or learn what we have done with Equinix. That can empower you in just a few clicks.

We give access to partners who have published more than 900 articles in less than six months on topics such as security, big data, interconnection, globalization, artificial intelligence (AI), and even the EU's General Data Protection Regulation (GDPR). They learn and they find offerings because the articles are connected directly to those offering the services, and they can get in touch.

We are easing the process -- from the thought leadership, to the offerings with explanations. What we are seeing is that the VARs and the SIs are still playing an enormous role. 

So, it's not only Microsoft, with HPE, and with the data centers of Equinix, but we put the VARs into the middle of the conversation. Why? Because they are near the SMBs. Everything can be made as simple as putting in your credit card and going, and that's fair enough for some kinds of workloads.

But in most cases, enterprises still go to their SIs and their VARs because they are all part of the ecosystem. And then, when they have the discussion with their customers, they can have the solution very, very quickly.

Gardner: Seems to me that for VARs and SIs, the cloud was very disruptive. This gives them a new lease on life. A middle ground to take advantage of cloud, but also preserve the value that they had already been giving.

Take the middle path 

Poisson: Absolutely. Integration services are key, application migrations are key, and security topics are very, very important. You also have new areas such as AI and blockchain technologies.

For example, in Asia-Pacific and in Europe, the Middle East and Africa (EMEA), we have more and more tier-two service providers that are not only delivering their best services but are now investing in practices around AI or blockchain -- or combining them with security -- to upgrade their value propositions in the market.

For VARs and for SIs, it is all benefit because they know that solutions exist and they can accompany their customers through the transition. For them, this is also a new flow of revenue.

Gardner: As we get the word out that these distributed hybrid cloud solutions are possible and available, we should help people understand which applications are the right fit. What are the applications that work well in this solution?

The hybrid solution gives SIs, service providers, and enterprises more flexibility than if they try and move an application completely into the cloud.

Anderson: The interesting thing is that applications don’t have to be architected in a specific way, based on the way we do hybrid solutions. Obviously, the apps have to be modern. 

I go back to my engineering days 25 years ago, when we were separating data and compute and things like that. If they want to write a front-end and everything in platform-as-a-service (PaaS) on Azure and then connect that down to legacy data, it will work. It just works.

The hybrid situation gives SIs, service providers, and enterprises more flexibility than if they try and move an application, whatever it is, completely into the cloud, because that actually takes a lot more work.

Some service providers believe that hybrid is a transitory stage, that enterprises would go to hybrid just to buy them time till they go fully public cloud. I don’t believe Microsoft thinks that way, and we certainly don’t think that way. I think there is a permanent place for hybrid cloud. 

In fact, one of the interesting things when I first got to Equinix was that we had our own sellers saying, “I don’t want to talk to the cloud guys. I don’t want them in our data centers because they are just going to take my customers and move them to the cloud.” 

The truth of the matter is that demand for our data centers has increased right along with the increase in public cloud consumption. So it’s a complementary thing, not a substitution thing. They need our data centers. What they are trying to do now is to close their own enterprise data centers. 

And they are getting into Equinix and finding out about the connectivity possibilities. Especially among the Global 2000 enterprises, nobody wants cloud vendor lock-in. They are all multicloud. Our Equinix Cloud Exchange Fabric solution is a great way to get in at one point and be able to connect to multiple cloud providers from right there.

It gives them more flexibility in how they design their apps, and also more flexibility in where they run their apps.

Gardner: Do you have any examples of organizations that have already done this? What demonstrates the payoffs? When you do this well, what do you get for it?

Cloudify your networks

Anderson: We have worked with customers in these situations where they have come in initially for a connection to Microsoft, let's say. Then we brought them together with a service provider and worked with them on network transformations to the point where they have taken their old networks -- a lot of Multiprotocol Label Switching (MPLS) and everything else that were really very costly and didn't perform that well -- and ended up being able to rework their networks. We like to say they cloudify their networks, because a lot of enterprise networks aren't really ready for the heavy load of getting out to the cloud.
Network.jpg

And we ended up increasing their performance by up to 10, 15, 20 times -- and at the same time cut their networking costs in half. Then they can turn around and reinvest that in applications. They can also then begin to spin up cloud apps, and just provision them, and not have to worry about managing the infrastructure.

They want the same thing in a hybrid world, which is where those service providers that we find on Cloud28+ and that we amplify, come in. They can build those managed services, whether it’s a managed Azure Stack offering or anything else. That enables the enterprise IT shops to essentially do the same thing with hybrid that they are doing with public cloud – they can buy it on a consumption model. They are not managing the hardware because they are offloading that to someone else.

Because they are buying all of their stuff in the same model -- whether it’s considered on-premises or a third-party facility like ours, or a totally public cloud. It’s the same purchasing model, which is making their procurement departments happy, too.

Gardner: Xavier, we have talked about SIs, VARs, and MSPs. It seems to me that for those we used to call independent software vendors (ISVs), the former packaged software providers, this hybrid cloud model also offers a new lease on life. Does this work for the application providers, too?

Extend your reach 

Poisson: Yes, absolutely. And we have many, many examples in the past 12 months of ISVs, software companies, coming to Cloud28+ because we give them the reach. 

Lequa AB, a Swedish company, for example, has been doing identity management, which is a very hot topic in digital transformation. In the digital transformation you have your role when you speak to me, but in your other associations you have another role. The digital transformation of these roles needs to be handled, and Lequa has done that. 

And by partnering with Cloud28+, they have been able to extend their reach in ways they wouldn’t ever have otherwise. Only in the past six months, they have been in touch with more than 30 service providers across the world. They have already closed deals.

If I am only providing baseline managed information services, how can I differentiate from the hyperscale cloud providers? MSPs now care more about the applications to differentiate themselves in the market.

On one side of the equation for ISVs, there is a very big benefit -- to be able to reach ready-to-be-used service providers, powered by Equinix in many cases. For the service providers, there is also an enormous benefit.

If I am only providing baseline managed information services, how can I differentiate from the hyperscale cloud providers? How can I differentiate from even my own competitors? What we have seen is that the MSPs are now caring more about the application makers, the former ISVs, in order for them to differentiate in the market.

So, yes, this is a big trend and we welcome into Cloud28+ more and more ISVs every week, yes.

Gardner: David, another concern that organizations have is that as they distribute globally, and as there are more moving parts in a hybrid environment, things become more complex. Is there something that HPE is doing with new products like OneSphere that will help? How do we allow people to gain confidence that they can manage even something that's a globally distributed hybrid set of applications?
globalclouds.jpg

Confident connections in global clouds 

Anderson: There are a number of ways we are partnering with HPE, Microsoft, and others to do that. But one of the keys is the Equinix Cloud Exchange Fabric, where now they only have to manage one wire or fiber connection in a switching fabric. That allows them to spin up connections to virtually all of the cloud providers, and span those connections across multiple locations. And so that makes it easier to manage. 

The APIs that drive the Equinix Cloud Exchange Fabric can be consumed and viewed with tools such as HPE OneSphere to be able to manage everything across the solution. The MSPs are also having to take on more and be the ones that provide management.

As the huge, multinational enterprises disperse their hybrid clouds, they will tend to view those in silos. But they will need one place to go, one view to look at, to know what’s in each set of data centers.

At Equinix, our three pillars are the ideas of being able to reach everywhere, interconnect everything, and integrate everything. That idea says we need to be the place to put that on top of HPE with the service providers because then that gives you that one place that reaches those multiple clouds, that one set of solid, known, trusted advisors in HPE and the service providers that are really certified through Cloud28+. So now we have built this trusted community to really serve the enterprises in a new world.

Gardner: Before we close out, let’s take a look into the crystal ball. Xavier, what should we expect next? Is this going to extend to the edge with the Internet of Things (IoT), more machine learning (ML)-as-a-service built into the data cloud? What comes next?

futurecloud.jpg

The future is at the Edge

Poisson: Today we are 810 partners in Cloud28+. We cover more than 560 data centers in more than 34 countries. We have been publishing nearly 30,000 cloud services in only two years. You see how fast it has been growing?

What do we expect in the future? You named it: Edge is a very hot topic for us and for Equinix. We plan to develop new offerings in this area, even new data center technology. It will be necessary to have new findings around what a data center of tomorrow is, how it will consume energy, and what we can do with it together.

We are already engaged in conversations between Equinix, ourselves, and another company within the Cloud28+ community to discuss what the future data center could be.

A huge benefit of having this community is that by default we innovate. We have new ideas because it's coming through all of the partners. Yes, edge computing is definitely a very hot spot. 

For the platform itself, I believe that even though we do not monetize in the data center, which is one of the definitions of Cloud28+, the revenues at the edge are for the partners, and this is also by design.

Nonetheless, we are thinking of new things such as smart contracting around IoT and other topics, too. You need to have a combination of offerings to make a project. You need to have confidentiality between players. At the same time, you need to deliver one solution. So next it may be solutions on the best ways of contracting. And we believe that blockchain can add a lot of value in that, too.

Cloud28+ is a community and a digital business platform. We are thinking of such things as smart contracting for IoT and using blockchain in many solutions.

Cloud28+ is a community and a digital business platform. By the way, we are very happy to have been recognized as such by Gartner in several research notes since September 2017. We want to start to include these new functions around smart contracting and blockchain. 

The other part of the equation is how we help our members to generate more business. Today we have a module that is integrated into the platform to amplify partner articles and their offerings through social media. We also have a lead-generation engine, which is working quite well.

We want to launch an electronic lead-generation capability through our thought leadership articles. We believe that if we can give feedback to the people filling in these forms on how they position versus their peers and versus the industry analysts, they will be very eager to engage with us.

And the last piece is that we need to examine more around using ML across all of these services and interactions between people. We need to dive deep on this to find what value we can bring out of all this traffic, because we have so much traffic now inside Cloud28+ that trends are becoming clear.

For instance, I can say to any partner that if they publish an article on what is happening in the public sector today, it will have a yield that is x times that of one published at an earlier date. All this intelligence, we have it. So what we are packaging now is how to give intelligence back to our members so they can capture trends very quickly and publish more of what is most interesting to people.

But in a nutshell, these are the different things that we see.

Gardner: And I know that evangelism and education are a big part of what you do at Cloud28+. What are some great places that people can go to learn more?

Poisson: Absolutely. You can read not only what the partners publish, but examine how they think, which gives you the direction on how they operate. So this is building trust. 

For me, at the end of the day, for an end-user customer, they need to have that trust to know what they will get out of their investments.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise

COend.png


XYPROHeaderC2Winter2018.png
SteveBW.png
Steve Tcherchian
CISSP - CISO and Director of Product, XYPRO Technology, www.xypro.com
@SteveTcherchian @XYPROTechnology

Steve Tcherchian, CISSP, PCI-ISA, PCIP is the Chief Information Security Officer and the Director of Product Management for XYPRO Technology. Steve is on the ISSA CISO Advisory Board, the NonStop Under 40 executive board and part of the ANSI X9 Security Standards Committee. A dynamic tech visionary with over 15 years in the cyber security field, Steve is responsible for strategy and innovation of XYPRO’s security product line as well as overseeing XYPRO’s risk, compliance and security to ensure the best experience to customers in the Mission-Critical computing marketplace.

2018 was another troubling year in the cybersecurity world. We saw a repeat of last year's data breaches on a larger scale. Google, Toyota, Facebook, Under Armour, LifeLock, Air Canada, Blue Cross and many, many more fell victim to some sort of compromise. Hardly a week went by where we weren't reading about a new mega breach. Even the popular online video game, Fortnite, was hacked and children's personal data was found for sale on the dark web. No one was off limits. It's to the point where we've become numb to the news; we shrug it off and move on. But as consumers, we should be concerned with the lackluster cybersecurity practices companies have in place. It's clearly not protecting our data.

A ZDNet article recently noted that researchers at security firm Positive Technologies tested 33 websites and services using the firm's proprietary application inspector and found that banking and financial institutions were "the most vulnerable" to getting hacked.

Companies spend billions on security each year, so why is this still an issue? It's almost 2019, and most applications are still horribly insecure and security best practices are not followed. Applications are designed for functionality, not security, because security is seen as difficult and time-consuming and is often blamed for adding delays to product launches and revenue-generating activities.

Passwords: The Achilles Heel

Picture1_2.png

One of the most critical security risks to any organization is passwords, especially default passwords and passwords to privileged accounts. Privileged accounts have elevated access to perform administrative functions. They can be administrator accounts, service accounts, database connection accounts, application accounts, and others. Most of these accounts were set up ages ago when an application or system was deployed. They typically have multiple integration points, and because of the risk of "breaking something," the passwords for these accounts are rarely rotated, likely shared, and improperly stored.

In today's ecosystem, where privileged account abuse is the most common way for hackers to compromise a system, proper credential storage and accountability are paramount to risk mitigation. Relying on manual methods is resource intensive, error prone, and leaves gaps.
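
To show the contrast with those manual methods, here is a conceptual sketch of what automated rotation of a privileged credential looks like. The vault and target-system objects are hypothetical placeholders for whatever credential store and managed system an organization actually uses; only the password generation relies on a real library (Python's secrets module).

# Conceptual sketch: automated rotation of a privileged credential.
# "vault" and "target_system" are hypothetical placeholders, not a real product API.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def generate_password(length: int = 24) -> str:
    # Draw from a cryptographically secure source rather than a human's habits.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def rotate(account: str, vault, target_system) -> None:
    # New secret, update the target, store it centrally, leave an audit trail.
    new_password = generate_password()
    target_system.set_password(account, new_password)  # service, database, or admin account
    vault.store(account, new_password)                 # checked in once, never shared by email
    vault.log_event("rotation", account)               # accountability for auditors

# Run on a schedule instead of "rarely, because something might break":
# for account in vault.list_privileged_accounts():
#     rotate(account, vault, target_systems[account])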

The Varonis 2018 Global Data Risk Report highlighted that 65 percent of companies have over 500 accounts with passwords that have never been rotated. These passwords have a higher likelihood of showing up in online password dumps and being used to infiltrate networks. Simply put – they're a cyber criminal's best friend.

Proper password management can seem overwhelming, but it doesn't have to be. Current processes for requesting access to privileged accounts are manual and complex. Unfortunately, governance is often an afterthought, leaving many enterprises vulnerable to increased security risks and potential non-compliance with external regulations or internal corporate mandates.
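To make the scale of the problem concrete, the sketch below shows the kind of audit that manual processes rarely keep up with: scanning an inventory of privileged accounts and flagging any whose password has not been rotated within a policy window. It is a minimal, hypothetical example (the account list, the 90-day policy and the field names are assumptions for illustration), not part of any XYPRO product.

```python
from datetime import datetime, timedelta

# Hypothetical inventory of privileged accounts; in practice this would come
# from a directory service or a privileged access management vault.
accounts = [
    {"name": "oracle_svc",   "last_rotated": "2015-03-01", "shared": True},
    {"name": "backup_admin", "last_rotated": "2018-11-20", "shared": False},
    {"name": "app_connect",  "last_rotated": "2012-07-14", "shared": True},
]

MAX_PASSWORD_AGE = timedelta(days=90)  # example rotation policy, not a mandate

def stale_accounts(inventory, now=None):
    """Return (name, age in days, shared?) for accounts past the rotation window."""
    now = now or datetime.utcnow()
    flagged = []
    for acct in inventory:
        age = now - datetime.strptime(acct["last_rotated"], "%Y-%m-%d")
        if age > MAX_PASSWORD_AGE:
            flagged.append((acct["name"], age.days, acct["shared"]))
    return flagged

for name, age_days, shared in stale_accounts(accounts):
    print(f"{name}: password unchanged for {age_days} days"
          f"{' (shared credential)' if shared else ''}")
```

Even a toy script like this makes the gaps visible; the hard part, as the report numbers suggest, is acting on the findings without breaking long-standing integrations.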

Picture2_2.png

XYPRO identified a need to address this risk within the HPE NonStop server world and we have entered into strategic partnerships with SailPoint Technologies, CyberArk, Centrify, CA Technologies, RSA and Splunk to cover these gaps.

Our newest solution, XYGATE Identity Connector (XIC), extends identity management and governance capabilities to the NonStop server. Most organizations already have active projects to integrate their CyberArk and SailPoint investments across the rest of the enterprise, and the HPE NonStop can now be included. Identity governance, privileged account management and multifactor authentication requirements are all addressed with this latest solution in the XYGATE suite.

The New Regulation Landscape

2018 saw the arrival of the General Data Protection Regulation, or GDPR, a major piece of legislation designed to address the protection and responsible use of every European Union citizen's personal data. GDPR is not an EU-only regulation; it affects any business or individual handling the data of EU citizens, regardless of where that business or individual is based. The penalties for non-compliance are stiff: up to €20 million (about $24 million USD) or 4 percent of annual global turnover, whichever is greater. GDPR went into effect in May 2018.
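The "whichever is greater" clause is worth a quick worked example, because it is what makes the ceiling scale with company size. The figures below are illustrative only:

```python
def gdpr_max_fine(annual_global_turnover_eur: float) -> float:
    """Upper bound on a GDPR fine: the greater of EUR 20 million or 4% of turnover."""
    return max(20_000_000, 0.04 * annual_global_turnover_eur)

print(gdpr_max_fine(100_000_000))     # EUR 20,000,000  (the 20M floor applies)
print(gdpr_max_fine(10_000_000_000))  # EUR 400,000,000 (4% of turnover applies)
```

For a company with €10 billion in annual global turnover, the ceiling is €400 million, which is why the regulation changed boardroom conversations about data protection.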

According to Bart Willemsen, research director at Gartner – “The GDPR will affect not only EU-based organizations but many data controllers and processors (entities that decide what processing is to be performed and/or carry out that processing) outside the EU as well. Threats of hefty fines, as well as the increasingly empowered position of individual data subjects in controlling the use of their personal data, tilt the business case for compliance and should cause decision makers to re-evaluate measures to safely process personal data.”

The GDPR is similar in some ways to PCI DSS in that it aims for a comprehensive approach to data protection that goes well beyond technical controls. Even though the individual GDPR requirements aren't as technically detailed, its security objectives are the same as those of PCI DSS: to protect, secure and track the use of specific types of data. Complying with its requirements means both implementing security best practices and modifying processes and human behavior to follow those best practices, including timely analysis of anomalies.

In 2018, California also adopted the California Consumer Privacy Act (CCPA). Like GDPR, CCPA focuses on protecting the information of a natural person who can be identified. These regulations require that businesses adopt organization-wide security measures appropriate to protect collected consumer data. We will likely see more compliance regulations with regard to consumer data protection in the near future. The key, again, is implementing security best practices.

At XYPRO, we see data privacy as a large part of the security landscape going forward. In 2018, we enhanced our product suite to assist our customers with their data privacy and protection initiatives. We introduced GDPR assessment functionality into our XYGATE Compliance PRO product, published numerous white papers and articles on the topic, and participated in organizations and activities that can influence data protection regulations going forward. This ensures the NonStop community has a voice in this area. We plan to continue these efforts in 2019 and beyond.

XYGATE SecurityOne: A Single Platform

In testimony given before the Senate Subcommittee on Science, Technology and Space, famed cryptographer and cyber security specialist Bruce Schneier said:

“Prevention systems are never perfect. No bank ever says: “Our safe is so good, we don’t need an alarm system.” No museum ever says: “Our door and window locks are so good, we don’t need night watchmen.” Detection and response are how we get security in the real world…”

Picture3.png

Schneier gave this testimony back in July of 2001, yet nearly 20 years later organizations are still getting hit by incidents they didn't detect, proving the premise is as valid and as critical as ever. I'm surprised by the number of conversations I have with IT and security professionals who still carry a "set it and forget it" approach to security. They believe protection and compliance are good enough. No matter what type of protection a system has, given enough time an attacker will find a way through. The faster you can detect, the faster you can respond, limiting the amount of damage a security breach can cause.

Detection is not a simple task. Traditional methods rely on setting up distinct rules or thresholds. For example, if a user fails 3 logons in a span of 5 minutes, detect it and send an alert. In most cases that rule is explicit: if the failed logon events spanned 20 minutes, or worse yet, 10 days, they likely would not be detected. The limitation of relying on these types of rules is that they can't alert on what they aren't specifically looking for (i.e. what they don't know). Low and slow incidents and unknown unknowns (activity that is not normal on a given system) will fly under the radar, and no one will be the wiser until it's too late. The damage is done, the data is taken, the system is compromised, and customer confidence is lost.
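Here is a minimal sketch of such a threshold rule (the timestamps and the 3-failures-in-5-minutes policy are illustrative assumptions). A burst of failures trips it immediately, while the same attack stretched over days never does:

```python
from datetime import datetime, timedelta

def threshold_alert(failed_logons, threshold=3, window=timedelta(minutes=5)):
    """Classic rule: alert if `threshold` failures fall within a sliding time window."""
    events = sorted(failed_logons)
    for i in range(len(events) - threshold + 1):
        if events[i + threshold - 1] - events[i] <= window:
            return True
    return False

base = datetime(2018, 12, 1, 9, 0)

# Burst: 3 failed logons within 2 minutes -> detected.
burst = [base + timedelta(minutes=m) for m in (0, 1, 2)]

# Low and slow: 30 failed logons spread over 10 days -> never trips the rule.
low_and_slow = [base + timedelta(hours=8 * n) for n in range(30)]

print(threshold_alert(burst))         # True
print(threshold_alert(low_and_slow))  # False, flies under the radar
```

The rule is doing exactly what it was told to do; the problem is that nobody can write a rule for every pattern they have not yet seen.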

Correlating events from multiple data sources also proves to be a challenge for detection. The traditional method is to scour through event records, try to put the pieces together and then create a rule to detect that pattern in the future. The weakness is that this can only be done after an incident has already occurred. The rule is then put together on the off chance the same combination of events will happen again. It's simply not reasonable to anticipate and define every possible incident pattern before it happens.

For data to be meaningful and actionable, it requires context. Contextualization allows the system itself to determine what is actionable and what is just noise. XYPRO's XYGATE SecurityOne can evaluate each potential alert and, based on previous activity for that user, IP address, system and so on, determine whether the reported activity is business as usual or a serious issue that demands attention.

Context is Key

In 2018, XYPRO was granted US Patent 9,948,678 by the United States Patent and Trademark Office. XYPRO's patent, titled "Method and System for Gathering and Contextualizing Multiple Security Events," covers the aggregation, correlation and contextualization of disparate and unrelated security and system events. This proprietary technology provides faster detection of suspicious activity by intelligently combining security and non-security-related data while applying a layer of context that makes the newly enriched data much more insightful and actionable.

What will 2019 have in store?

Targeted Ransomware - As long as security best practices are not followed, ransomware will remain dangerous to businesses and a profitable source of income for cyber criminals.

More Compliance and Data Privacy Regulations - GDPR set the stage for government intervention in data protection. Most companies are still playing catch-up when it comes to protecting data. We'll certainly see more government oversight in this area.

Virtualization - Virtualization, containers and serverless applications introduce a new paradigm to traditional security concepts. There will be advancement in this area as we better understand their potential and their security gaps.

Automation and Faster Access to Actionable Data - Data is no good if it's not received quickly and isn't meaningful enough to act on. Humans can no longer keep up with the volume and velocity of security data being received. This introduces a new problem: context, or making sense of new data derived from old data. We'll see more context-based solutions coming into play over the next year.

Modernization, Integration and Digital Transformation - Consumers are disrupting the way business is done. As organizations continue to evolve and adapt to the world around them, business models are changing. We will see more projects to modernize on-premise core enterprise applications, new consumption models for services, and organizations leveraging more of their investments by integrating everything into consolidated technologies rather than fragmented solutions.

HPE NonStop servers are a staple of many modern, mission-critical organizations. The NonStop is central to activities that affect our lives on a daily basis: how we shop, pay, bank and communicate. As technology evolves around us, the NonStop server continues to modernize, and XYPRO is thrilled to be a part of this evolution. XYPRO's innovation efforts don't stop there. We unflinchingly look forward to identify where research and development investments should be made, always seeking ways to best serve our customers. This commitment has led us to new areas that provide even greater value and security to NonStop server users, integrating the NonStop with the rest of the enterprise and beyond. At XYPRO, we protect your data like it's our own. Because it is.

COend.png



MicrofocusBanner2.png
Cynthia Leonard
Micro Focus - Security Social Media and Communications Manager

Cynthia Leonard is the security social media and communications manager at Micro Focus. Special help for this article provided by Brent Jenkins, Charles Clawson, Kent Purdy, Rob MacDonald, and Gil Cattelain, all of Micro Focus.

Now that Micro Focus has successfully completed the spin-merge with HPE Software, we wanted to highlight our expanded security portfolio via our first annual Micro Focus Cybersecurity Summit 2018, which took place in Washington, DC, on September 25-27, 2018. This unique Summit allowed face-to-face interaction with our product managers and security leaders such as Micro Focus CMO John Delk and former FBI cybersecurity special agent Chris Tarbell. Like-minded customers heard best practices, solution roadmaps and cybersecurity topics critical to their organizations and agencies.

Secure development, security testing, and continuous monitoring and protection of apps

David Harper, practice principal for Fortify on Demand, presented on "Application Security as a Service." David discussed how 80% of breaches today stem from application vulnerabilities, a number that keeps growing as companies run more and more applications on shorter and shorter release cycles. One approach companies can take is secure gating with Fortify on Demand, David said. The challenge, however, is that even though a security gate may work for your organization now, can it keep up with DevOps? We then heard some great advice on building security into the software development lifecycle (SDLC) and addressing it early on. David closed with a fairly detailed plan for creating an application security program: implement a security gate first, then secure the DevOps lifecycle with compensating controls.
Fortify.jpg

Micro Focus' Lucas von Stockhausen shared the stage with Fortify customers for "Shifting security left: bringing security into continuous integration and delivery." In discussing what shifting security left means, the team pointed out that it's NOT about moving current activities left, changing the location of the stop, or controlling development; it's about changing how you do security, compromising in order to reduce risk, and ultimately becoming a part of development. Application security teams often feel frustrated, ignored and left out, and are seen as roadblocks and anti-business. The team discussed how shifting left correctly can change all of that.

ArcSight.jpg

Detecting known and unknown threats through correlation, data ingestion and analytics

Marius Iversen, a platform engineer for a major telecommunications company in the Netherlands, presented "ArcSight is an open architecture for SecOps." He discussed his organization's need to abstract event data related to their customers into a custom web-driven portal. To accomplish this, they use APIs (Application Programming Interfaces) extensively, which allows them to present visualizations based on data pulled from many different security tools in a single customer dashboard.

Even though applications like ArcSight are natively multitenant, there are also security advantages to having customers access data through a custom portal versus giving them direct access to the tools themselves. As he put it: "ArcSight is generally integrated into the core of your network where you don't want customers having access. We resolved this by using APIs because we can control what data comes out and what information should be presented to customers."
IAM.jpg

Discovering an integrated approach to Identity and Access Management

Today, CISOs place IAM concerns at the top of the list because continuously connected users need swift access to business processes at reduced risk. In the session "Access management: The glue between business value and security," Micro Focus' Kent Purdy and Chan Yoon talked mostly about three access management trends: organizations are looking for more than just passwords; risk-based access is on the rise; and one-size-fits-all authentication no longer applies. They also pointed out some deployment gotchas, as well as some unique approaches Micro Focus takes to solving these and other IAM-related problems.

Micro Focus' Rob MacDonald and Derek Gordon from PwC discussed how identity, among other technologies, can improve the customer experience in the session "Improving the customer experience by understanding customer relationships." IoT plays a big part in that discussion, both from a security and a customer experience perspective. To harness the power of IoT, businesses must learn how to manage it safely. At the heart of all enterprise security is the concept of identity. Just like people, connected things need to be given an identity from day one. Connected things and the people who use them must follow rules that govern access to information.

Ensure all devices follow standards and compliance to secure your enterprise

A significant part of any IT department’s day includes managing and maintaining security and compliance standards across a wide array of endpoints while enabling access to corporate applications and resources. The ZENworks portfolio includes a host of UEM products that consolidate management into a single solution. The session, “Automating IT management processes across device lifecycles with ZENworks: present and future,” with Micro Focus’ Jason Blackett and Gil Cattelain, looked at endpoint management needs and how ZENworks helps address them. 

ZenWorks.jpg

The "Securing your devices and data with ZENworks" session, hosted by Micro Focus' Darrin VandenBos, considered what happens when security incidents occur, such as a stolen corporate laptop or smartphone, and how IT teams can best tackle security through their ZENworks implementation. Specific topics included patch management, containerization, data encryption, VPN enforcement, and other specifics critical to securing an enterprise's IT assets.

The Summit also included sessions with a number of our customers. The session "Simplifying IT processes and increasing user productivity" featured a case study highlighting Trinity Health, one of the largest multi-institutional Catholic healthcare delivery systems in the nation, serving communities in 22 states with 94 hospitals and 109 continuing care locations. The discussion focused on Trinity's use of ZENworks Configuration Management and ZENworks Patch Management, with a particular emphasis on software distribution, secure patch management, asset management and automation of desktop migration to Windows 10, as well as touching on reporting and imaging.

Voltage.jpg

Exploring data-centric security solutions that safeguard data throughout its entire lifecycle 

In the session, “Voltage data-centric security innovations to expand protection—in use, motion and at rest,” Micro Focus’s Reiner Kappenberger shared how his team is growing the data security portfolio, adding key capabilities to make it the most comprehensive data-centric security portfolio in the industry. He detailed how they recently added transparent protection for cloud, commercial and in-house applications without critical application changes or integration required. Micro Focus is investing heavily in the protection of data of all types, he added, for structured and unstructured data, whether in use, in transit or at rest, for persistent protection and management of sensitive data across the enterprise.

Enterprises are adopting cloud services wholeheartedly. In the panel discussion "Cloud-based data privacy and protection: protecting data and privacy across hybrid IT," the challenges enterprises face in governing data security and privacy across hybrid IT were outlined. Concerns about control over platforms, multi-tenancy, data residency, identity and access, collaboration and data flowing into and between clouds were discussed.

Lastly, you can't have an InfoSec conference without addressing GDPR, and the Cybersecurity Summit had a panel discussion on "Regulatory changes in GDPR and the United States." GDPR raises the stakes for enterprises around the world to improve data governance and protection throughout the data lifecycle. The panel discussed strategies for risk mitigation, including an interlock of tools and techniques to improve governance, manage identity and protect data. The biggest takeaway was that GDPR holds hidden opportunities to create value and dramatically change the ROI calculation for compliance.

If you missed the event, there is still a chance to see the sessions outlined above and more, online at the Digital Summit. Registration is free, even if you did not attend the Cybersecurity Summit. Then make plans to attend our second Summit, happening in the summer of 2019.
COend.png


MCSOlutions2018TITLE3.png
kyle_jeffrey_02_editBW.jpg
DianaCortes.jpg
Jeff Kyle, Vice President and General Manager for HPE Mission Critical Solutions

Jeff Kyle is a 25-year technology industry veteran with experience in hardware and software engineering, customer sales and support, business planning and product management and marketing. Jeff leads the product management, planning, and engineering teams for the HPE Data Center Infrastructure Group focused on delivering data management and data analytics solutions in the Mission Critical Systems portfolio. He is based in Palo Alto, California.

Diana Cortes, Marketing Manager for HPE

Diana Cortes is Marketing Manager for Mission Critical Solutions at Hewlett Packard Enterprise. She has spent the past 20 years working with the technology that powers the world's most demanding and critical environments, including HPE Integrity NonStop, HPE Integrity with HP-UX, and the HPE Integrity Superdome X and HPE Integrity MC990 X platforms.

As HPE advances its transformation journey, the company is implementing a recent strategic pivot toward Value Compute. HPE Mission Critical Solutions are at the center of the Value portfolio, and their continuous innovation focuses on meeting the evolving requirements of customers that need solutions for continuous business. I recently sat down with Jeff Kyle, Vice President and General Manager for HPE Mission Critical Solutions, to get an update on market trends, strategic initiatives and new offerings, and a look to the future of this strategic area.

DC Let's start with a key HPE initiative we've heard about for a while. Can you give us an update on the strategy around Memory-Driven Computing? Why is HPE investing in this paradigm shift from a processor-driven architecture to a memory-driven approach?
circlesHPEoptics.png

JK This initiative, which we've been working on for a number of years, results from the inability of conventional compute to keep pace with data growth. Roughly every two years, the amount of data ever created doubles. But how do you turn that data into action? We have this great opportunity, but existing technologies can't get there. Conventional architecture has always been limited by the practical tradeoffs of memory speed, cost and capacity. So, at HPE, we concluded the compute paradigm must change, with memory at the center, not the processor.

Our recent milestones include a Memory-Driven Computing prototype with 160TB of memory, the world's biggest single-memory computer. Also very exciting, we launched a "sandbox" for our customers. This is a set of large-memory HPE Superdome Flex systems you can access remotely to try out Memory-Driven Computing programming. The sandbox gives customers a chance to see, without having a Superdome Flex system in-house, how they can get a 10x, 100x, even 1000x speedup on the core workloads that drive their enterprises.

One final point: although currently leading the industry, HPE is no longer alone in this strategy. We are part of the Gen-Z Consortium of leading computer industry companies dedicated to creating and commercializing a new data access technology. All 54 member companies, from physical connector and microprocessor makers, to systems companies like HPE, to service providers, see the same future path we do.

DC You've just mentioned the HPE Superdome Flex, another key milestone in HPE's journey toward Memory-Driven Computing. It launched exactly one year ago; can you share with us how the market is responding?
circlesDraw.png

JK We are just ending a fantastic year for Mission Critical Solutions, and market adoption of HPE Superdome Flex has spearheaded this growth. Let's recap what this highly modular, scalable platform does for our customers. At just 4 sockets and less than 1TB of memory, it allows customers to start with small environments and grow incrementally as their requirements evolve, scaling up to 32 sockets in a single system with 48TB of shared memory. This is Memory-Driven Computing in action, empowering customers to analyze and process enormous amounts of data much faster than they could before.

Customers are deploying highly critical workloads on this platform, whether at 4-sockets or much larger. Because of that, we designed it with advanced Reliability, Availability and Serviceability capabilities not found in other standard platforms. Its unique RAS features span the full stack, and we work very closely with our software partners such as Microsoft, Red Hat, SAP, SUSE, VMware and others so that their software will not only take advantage of the RAS capabilities of the system, but also perform well on it—especially in those large configurations that, frankly, many of these software packages haven’t run on before. We have seen strong adoption of the platform around the globe and across all industries, from manufacturing, to financial services, to telecommunications, public sector, travel and many others, including high performance computing. And we see new use cases develop continuously thanks to the flexibility of the platform.

DC Can you share with us some of the common use cases for the Superdome Flex platform? Why are customers choosing it and how are they using it?
Robotics_Circle.png

JK HPE Superdome Flex is most commonly used as a database server, whether for conventional or in-memory databases.

We see many customers migrating their Oracle databases from either Unix environments or scale-out x86 deployments, including Oracle Exadata, to Superdome Flex. The motivations are two-fold: to reduce licensing costs and to reduce complexity. Oracle license costs depend on the number of processor cores, and on the majority of Unix systems Oracle licensing is twice as much per core compared to x86 servers. Migration promises large savings, but customers are often concerned about availability on x86. With Superdome Flex, they feel confident it can deliver the uptime they need, at a much lower cost and with room to grow. Secondly, customers want to reduce the complexity of their Oracle environment. If they move from x86 clusters to a scale-up environment such as Superdome Flex, they can greatly reduce complexity and avoid the costs of cluster licensing.
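The per-core difference Jeff describes comes from Oracle's processor core factor model, under which many RISC/Unix processors carry a factor of 1.0 while most x86 processors carry 0.5. The sketch below is illustrative only; the core counts, factors and list price are assumptions for the example, not quoted figures:

```python
# Illustrative only: license counts under Oracle's processor core factor model.
# The factors and per-license price below are assumptions, not quoted figures.
def licenses_required(cores: int, core_factor: float) -> float:
    return cores * core_factor

configs = [
    ("Unix/RISC server", 32, 1.0),   # typical factor for many RISC processors
    ("x86 server",       32, 0.5),   # typical factor for most x86 processors
]
assumed_list_price = 47_500          # assumed USD per processor license, for scale only

for label, cores, factor in configs:
    lic = licenses_required(cores, factor)
    print(f"{label}: {lic:g} licenses, roughly ${lic * assumed_list_price:,.0f}")
```

On these assumptions, the same 32 cores need half as many licenses on x86, which is the arithmetic behind the migration savings Jeff mentions.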

The second primary use case we see is SAP HANA environments. SAP's clear strategy is to stop supporting third-party databases by 2025; the hundreds of thousands of customers running SAP for critical applications will need to move to SAP HANA. Therefore, customers look for a partner that can give them confidence and peace of mind as they embark on a transformational HANA journey. As the clear leader in the SAP HANA infrastructure market, HPE can offer them that.

Beyond this, we also offer the broadest HANA portfolio in the industry, led by HPE Superdome Flex. It’s a perfect fit for HANA because of its modularity, memory capacity and performance—we’ve set a number of world records in SAP HANA benchmarks. Many times, customers start by moving one or two SAP applications to SAP HANA, and then grow their HANA environment. So the ability of Superdome Flex to scale incrementally is very important.  

Customers are also deploying SQL Server. Now that SQL Server has evolved from a departmental to an enterprise database, including support for Linux, its customers need more scalability and availability than they can get with other x86 platforms. That's where Superdome Flex comes into play. More and more, customers are deploying critical enterprise workloads on SQL Server, and to do that they need a highly available environment. The differentiated RAS features of Superdome Flex bring that extra layer of protection they seek.

DC How about high performance computing? You mentioned that as a use case for HPE Superdome Flex.
earth.png

JK High performance computing is a relatively new area for us in HPE Mission Critical Solutions. When we acquired SGI a couple of years ago, we not only gained the world-class scalable technology that became the foundation for Superdome Flex; we also gained HPC expertise. While many HPC applications use a scale-out cluster approach, certain data-intensive HPC workloads are challenging to distribute across multiple nodes in an HPC cluster. They are best tackled holistically, using a single "fat" node: one node with a large number of processors and shared memory. That's exactly the Superdome Flex architecture. Use cases include genomic research, computer-aided engineering, cyber security, financial risk management and large data visualization, among others. In fact, in some cases the entire workflow can take place on a single node, removing key I/O bottlenecks by keeping all the data in memory.

There are two main areas where we see more and more HPC customers turn to Superdome Flex. First, workloads dominated by access patterns to large volumes of data. If deployed on traditional clusters, the communication across cluster nodes creates a great deal of waiting time versus productive data processing time; this can be solved by keeping all data in a single, easily accessible memory, as on Superdome Flex. That translates into less time and effort to get results. Second, a particular HPC job may be too big for any one node. If memory is exhausted, the job will fail and the time spent running it is wasted. With the large shared memory capacity of Superdome Flex, the risk of memory exhaustion, and therefore of failed jobs, decreases. Consider, for instance, the research that the Centre for Theoretical Cosmology and the University of Cambridge are doing on the origin of the universe. They are using Superdome Flex as the platform for their COSMOS system, which is also leveraged extensively by the Faculty of Mathematics at the University of Cambridge to solve problems ranging from environmental issues to medical imaging.

DC Let me shift gears. A top of mind issue for customers is their cloud strategy—you mentioned Unix to Linux migrations in the Superdome Flex use cases, especially for mission-critical workloads. How are you seeing mission critical customers address modernization of their environments in light of the multitude of cloud options they have today?

JK No matter their size, cloud is being discussed at the vast majority of enterprises today. One thing we know about cloud is that one size doesn’t fit all; each customer’s strategy looks different. Specific workload requirements should be driving the particular consumption model. In the Mission Critical Solutions space we see customers being very careful when selecting the deployment model for workloads that are so vital for the functioning of their enterprises.

I’ve just returned from our annual Discover conference in Madrid and my conversations with customers there bear this out. Many continue to operate on-premise in traditional data centers, and some deploy via private clouds. Their main concerns around public cloud are related to security, control and regulations. While efficient and appropriate for a number of workloads, public cloud is often not considered a viable model for many highly critical workloads. IDC recently published some very interesting research around Cloud, and specifically Cloud repatriation, where customers move some workloads back from public cloud environments to either hosted private clouds or on-premises. There are a few reasons why this is happening. One is that they aren’t gaining the economic benefits from public cloud they thought they would. Another is because they can’t comply with government or industry regulations. A third, because of costly and damaging security breaches. And finally, sometimes they are just not getting the performance they need from the public cloud. So, instead, they move back and increase investment in private cloud, both on and off premise, to address security, control, performance and cost issues. So as I said, every environment is different and a hybrid, multi-cloud model is now the norm for companies.

DC Interesting take with workload requirements driving consumption models. Continuing with the cloud topic, what are some innovations HPE Mission Critical Solutions is driving that can enable a confident move to cloud, while mindful of customer and application requirements?

JK Mission critical continues to evolve to address customer requirements, and as such we are moving to cloud-enable our solutions. One example is HPE GreenLake, a flexible, pay-per-use consumption model. Cost is on par with public cloud, and we are now able to offer it for SAP HANA deployments together with our HPE Superdome Flex platform.

Looking at other areas within Mission Critical, we continue to evolve our HPE NonStop platform to ensure it can be consumed in a variety of different ways; for example, with Virtualized NonStop, customers can get all the unique benefits of the NonStop software ecosystem using standard virtualization packages—including VMware, as recently announced. We also offer NonStop dynamic capacity for Virtualized NonStop, as well as a new offering, the HPE Virtualized Converged NonStop, a virtualized, turnkey entry-class system preconfigured by HPE Manufacturing for simplified deployment.

Within our Integrity with HP-UX ecosystem, we recently announced OpenStack support for HP-UX. You can manage and use Integrity servers with HP-UX in private cloud environments, and, as a future possibility, deploy HP-UX in Linux containers. So we have a lot of focus on ensuring our solutions are cloud-enabled and that customers can choose to deploy their critical workloads using various consumption models while preserving those high levels of reliability and uptime that are paramount for that set of workloads and business requirements.

DC You mentioned HP-UX with OpenStack, and we have been hearing a lot about HPE’s vision for HP-UX as a container solution. What has happened with the Integrity with HP-UX family of products in the last year since HPE released the Integrity i6 servers? And how is the HP-UX vision coming along?

JK We continue to advance this important set of solutions for our current customers and innovate in the areas that matter most to them. We have announced support for Intel 3D XPoint with Integrity i6 servers, which will mean significant performance gains for our HP-UX customers, at a lower cost. In fact, some of our estimates predict up to 140% higher performance and 55% lower TCO compared to Integrity i4 servers with HDD. In addition, the HP-UX 2018 Update release offers a variety of enhancements, including integration with storage—both 3PAR and MSA flash—and improvements in our HPE Serviceguard high availability and disaster recovery solution.

As for running the HP-UX environment in Linux containers, our engineering teams continue to make good progress in terms of features, integration and capabilities. The program continues to advance with high customer interest and we are also open for customer trials.

DC Exciting innovations all around your portfolio. How about the future, what’s in store for your customers and for the market overall when it comes to their mission critical workloads?
globe_circle.png

JK As with everything in technology, what I can say with confidence is that Mission Critical Solutions will continue to evolve to meet the changing requirements of our customers. We will see more varied consumption models and customers adopting private cloud and multi-cloud environments, even for more traditional environments. I also believe the cloud repatriation trend will continue; this is a learning process for enterprises. The market is realizing that not everything can be moved to the cloud and customers are adjusting consumption models accordingly.

Moreover, the data explosion will continue and we will continue to advance our Memory-Driven Computing strategy. We will see many advancements in the AI and ML space and see commercial users increasingly adopt these technologies previously tied to the research and scientific communities. I'm looking forward to yet another strong year in the HPE Mission Critical Solutions space.



Title-HPESIMPLIVITY.jpg
Prashanto-K-BW.png
Ram-D-BW.png
Prashanto Kochavara
Solutions Product Manager, HPE Software Defined and Cloud Group

Ramkumar Devanathan
Product Manager, Micro Focus Hybrid Cloud Management

Hewlett Packard Enterprise (HPE) offers hyperconverged solutions that consolidate IT infrastructure and advanced data services into a single, integrated all-flash node. A growing number of IT organizations have chosen to adopt the HPE SimpliVity powered by Intel® platform for the simplicity that it provides, as it can significantly reduce datacenter footprint and improve performance, and it does not require specialized expertise to manage.

HPE SimpliVity hyperconverged infrastructure (HCI) solves a number of IT challenges:

  • SMB and midsize organizations employ HPE SimpliVity to consolidate multiple workloads into a smaller footprint, while providing all required data services within their IT environments.

  • Midsize and large enterprises run Tier 1 applications for production workloads and run other apps that need to be isolated for performance and compliance reasons.

  • Businesses of all sizes leverage HCI to simplify consumption for their users, viewing a combined public and private cloud environment as a viable option for achieving performance and keeping costs down for specific use cases.

Hybrid cloud management and HCI

Hybrid cloud deployments have been growing at an increasing pace in the past few years. But running and managing workloads across multiple cloud environments opens up a new set of challenges for IT organizations and administrators. One of those challenges is finding software that allows users to manage data across both types of cloud resources. And “users” doesn’t just mean IT administrators anymore.

To allow IT teams to quickly respond to line-of-business and developer needs, a broader spectrum of users is demanding a self-service interface. This offering layer has to be easy to use, deliver out-of-the-box functionality so users can hit the ground running, and offer the visibility and governance managers need to make well-informed decisions.

Hybrid cloud management (HCM) is multi-cloud management software designed to address these challenges. Micro Focus offers enterprise-class software that enables self-service, hybrid service design, service deployment automation, process orchestration and continuous application delivery through application release orchestration.

In a continuing partnership, HPE and Micro Focus have developed an integration plugin that links HPE SimpliVity and Micro Focus HCM, while enabling the direct management and operation of a SimpliVity environment from within HCM. If HCM is already being used to manage an existing hybrid environment, HPE SimpliVity can easily be integrated into that environment via the plugin, thereby exposing its data services to enable lifecycle management.  

Benefits of an HPE SimpliVity + Micro Focus HCM solution

In a typical organization, resource requests from end users to IT admins can take days to be fulfilled. With HCM, end users can self-service their needs without requiring manual intervention by the IT staff. IT admins can pre-select or design custom services, placing them within the service catalog. Users can then access the catalog to request and fulfill their requirements without admin involvement. For example, a user can choose from a list of backup policies and offerings made available by the administrators and apply it to groups of virtual machines as necessary. Similarly, a user could be granted the necessary privileges to create clones of specific VMs for test and development purposes.

SimpliVity with Micro Focus HCM.png

Advanced data services can also be automated from within the Micro Focus HCM suite. End users are able to perform a number of HPE SimpliVity operations on their resources, constrained by their entitlements, without the involvement of IT admins. For example, an admin can create backup policies for his or her organization to be accessed through the self-service portal, and then a developer or consumer user can apply any of those backup policies to their entitled VMs, or initiate an on-demand backup.
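In script form, the interaction described above looks something like the sketch below. The endpoint paths, payloads and policy names are hypothetical placeholders for illustration, not the actual Micro Focus HCM or HPE SimpliVity API:

```python
import requests

# Hypothetical self-service portal endpoint and user token (placeholders only).
PORTAL = "https://hcm.example.com/api"
HEADERS = {"Authorization": "Bearer <user-token>"}

def apply_backup_policy(vm_id: str, policy_name: str) -> None:
    """Attach an admin-published backup policy to a VM the user is entitled to."""
    resp = requests.post(f"{PORTAL}/vms/{vm_id}/backup-policy",
                         json={"policy": policy_name}, headers=HEADERS, timeout=30)
    resp.raise_for_status()

def on_demand_backup(vm_id: str) -> str:
    """Kick off an immediate backup and return the job identifier."""
    resp = requests.post(f"{PORTAL}/vms/{vm_id}/backups", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json().get("job_id", "")

# A developer applies the admin-defined "daily-30d" policy to a test VM,
# then takes an on-demand backup before making a risky change.
apply_backup_policy("vm-1234", "daily-30d")
print(on_demand_backup("vm-1234"))
```

The point is not the specific calls but the workflow: the admin publishes the policies once, and entitled users consume them without opening a ticket.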

The following HPE SimpliVity data services operations may be automated from the Micro Focus HCM suite:

  • Create, delete and update a backup policy

  • Create, delete, and resize an HPE SimpliVity datastore

  • Set backup policy for a virtual machine

  • Backup and restore a virtual machine

  • Move a virtual machine

  • Clone a virtual machine

  • Display the list of backups and their sizes associated with a certain virtual machine

The HCM self-service portal offers users and administrators the ability to perform a number of operations through the integration plugin/content pack for HPE SimpliVity.

Administrators

Using the hybrid cloud management dashboard/self-service portal of the HCM suite, an administrator can perform these HPE SimpliVity operations:

  • Create, delete and resize an HPE SimpliVity datastore

  • Create, delete and update HPE SimpliVity backup policies

  • Perform lifecycle operations for a virtual machine

Users/Consumers

Using the self-service portal, users/consumers can perform the following operations:

  • Deploy/clone a virtual machine

  • Backup and restore a virtual machine

  • Invoke a backup policy

See use cases in this video.

The free HPE SimpliVity/HCM plugin is available for HPE SimpliVity and Micro Focus HCM customers to download through the Micro Focus Marketplace or the HPE GitHub repository. The GitHub repository will be used to address any issues and aggregate enhancement requests from users.

Within the Marketplace and GitHub repository, you will find a video demo that provides insight into how the integration was developed and highlights the value that the plugin provides to customers.

Micro Focus HCM is available for trial usage with all HPE SimpliVity platforms. Contact HPE at svt.hcm@hpe.com to get connected with an HCM Solution Architect in your region and access the software bits.

For more information on HPE SimpliVity visit:  www.hpe.com/simplivity

For more information on Micro Focus HCM visit:  www.microfocus.com/hybridcloud

COend.png


comfortebanner.png

Company profile

Bankart is a card payments processing center headquartered in Slovenia that serves 23 banks and other institutions across six countries in four different currencies. With our Central Authorization System (CAS), Bankart processes over 43 million ATM, POS, internet, and mobile transactions every month on ACI’s BASE24© Classic. We also control and manage ATM and POS networks for most of our banks.

In addition to our payments processing and network management services, our CAS also handles card validation and PIN verification, and if a bank is experiencing technical issues and is unable to process an authorization, we conduct off-line authorizations on its behalf. This final service requires the storage, use, and management of sensitive cardholder data, which has to be protected.

Bankart’s payments processing network configuration

Since Bankart’s Central Authorization System must be up and running 24/7, it is hosted on HPE NonStop servers in a dual site configuration working in active-active mode. Between those two servers, data is replicated with Oracle GoldenGate©. Furthermore, some data is being replicated to back office systems running on Windows servers. Both authorization servers are connected to our POS network, ATM network, and web interfaces for online transactions, all of which are routed to the banks we serve, other processing centers, and interchanges. To manage all of this, we have to maintain various databases, files, and logs with card holder data.

The challenge

Throughout this network, there are several files and databases that contain cardholder data which needs to be protected from both external threats and accidental exposure to unauthorized insiders.

Bankart already employed Volume Level Encryption (VLE) to protect cardholder data; however, VLE is only useful when physical hard drives leave our premises. If a malicious actor infiltrates the system undetected, the data is still left in the clear and vulnerable. An additional level of protection was required to secure the data in the event of a breach.

Our requirements

Given our complex network configuration and the high level of service our customers expect, we had very high standards for the solution that would protect the cardholder data we manage:

  1. High Availability – able to integrate on a live system with zero down-time and available 24/7

  2. Highly Configurable – compatible with a diverse system on a file and record level

  3. Ease of integration – little or no changes to applications or source code

  4. Scalability – should be possible to extend the solution to other systems within the company

  5. PCI and GDPR compliance – cardholder data must be rendered unreadable wherever it is stored

The comforte advantage

Bankart chose SecurDPS from comforte because it fulfilled all of the above requirements and more. It was easy to implement in our complex IT environment without changes to source code or downtime, it properly secured cardholder data in accordance with PCI and GDPR requirements, and it is a scalable, enterprise-wide solution that can later be expanded to other systems in the company. Additionally, SecurDPS enabled us to be more cost-effective by allowing us to omit volume level encryption.

How tokenization works

To protect cardholder data, we utilized tokenization. The main difference between tokenization and classical encryption is that tokenization replaces sensitive data elements with non-sensitive data elements of no exploitable value while preserving the format of the data. The advantage is that tokenized data can be transferred between systems that may be sensitive to data format without any changes to existing applications. This ensures that there are no security gaps in the system and makes it possible to implement the solution quickly and without interrupting services.
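As a toy illustration of the idea (not how SecurDPS works internally), the sketch below replaces a 16-digit card number with a random token of the same length and format, keeping the mapping in a vault so only authorized callers can recover the original. Leaving the first six and last four digits readable is a common convention assumed here for the example:

```python
import random

class TokenVault:
    """Minimal vault-based, format-preserving tokenizer for 16-digit PANs."""

    def __init__(self):
        self._forward = {}   # PAN   -> token
        self._reverse = {}   # token -> PAN

    def tokenize(self, pan: str) -> str:
        if pan in self._forward:
            return self._forward[pan]
        while True:
            # Randomize the middle digits only, preserving length and digit-only format.
            middle = "".join(random.choice("0123456789") for _ in range(len(pan) - 10))
            token = pan[:6] + middle + pan[-4:]
            if token != pan and token not in self._reverse:
                break
        self._forward[pan], self._reverse[token] = token, pan
        return token

    def detokenize(self, token: str) -> str:
        return self._reverse[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
print(token)                    # same length and format as the original PAN
print(vault.detokenize(token))  # original value, recoverable only via the vault
```

Because the token looks like a card number to every downstream file, database and interface, systems that merely move or store the value can keep working unchanged, which is the property that makes interruption-free rollouts possible.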

graphic2.png

Tokenization in practice: log files

One of our objectives was to tokenize logs that are recorded at various intervals throughout the day and on a daily basis. Some of the logs are key sequenced while others are entry sequenced with cardholder data in different places.

We tokenized the log files using a configuration that would apply to logs created after a given date and time. This allowed us to secure the data in phases by setting different logs or bundles of logs to be tokenized at different times. We started by tokenizing one log file of a certain type and after determining that everything went as expected, we then tokenized all logs of the same type. The same process was applied until all of the log files were protected.

This entire process was carried out while the system was live without any of Bankart’s partners or customers noticing any difference in service levels.

Tokenization in practice: databases

Another major hurdle was tokenizing our databases. All of our databases are constantly in use; there is no point during the day or night when they sit idle, so they also had to be protected without interruption. For this, we configured the program to start protecting records on each database after a given date and time.

The system continued to work seamlessly even though there were mixed records in the database because it was able to distinguish between tokens and data in the clear while the record format remained the same. For performance purposes and because we wanted the whole database tokenized, we later ran a conversion program on our databases to tokenize all records.

The results

The entire project was carried out within six months by just two members of Bankart’s IT development team who were simultaneously working on other projects. We took a phased approach to tokenizing the files and databases which allowed us to closely monitor system performance throughout the process. All cardholder data on our Central Authorization System is now secured and there were no signs of any performance degradation during implementation or afterwards! Thanks to SecurDPS’s scalability, plans are being made to extend the solution to other systems across the organization.

 
Klemen_HPE_NSX6_edited.jpg

About the author

Klemen Maksimovič, Development Engineer Analyst at Bankart

Klemen Maksimovič has worked for Bankart, the leading clearing house and payment processor in Slovenia, for sixteen years. He has a wide range of experience in card payments, electronic invoicing, and electronic banking, especially ATM and POS systems implementation and support.

Klemen is responsible for the administration and maintenance of Bankart’s BASE24 electronic payments system, with specific responsibility for PCI compliance.

Bankart takes their systems security especially seriously and Klemen currently heads up the team working on the security tokenization of the card payments system, utilizing comforte AG’s SecurDPS platform infrastructure.

Get the White Paper Here
COend.png

 


 

TechDataBanner.jpg

Tech Data hosted its annual Advanced Solutions Partner Summit conference in Scottsdale, AZ on November 5-8, 2018. Over 125 Channel Partners attended to learn about Tech Data and its differentiated offerings. The conference theme was "Transformation Awaits". Many of the general sessions focused on how digital transformation and businesses are changing in the digital age. A number of Specialist Growth Workshops were held to educate our Channel Partners on next-generation technologies such as cloud, security, analytics and IoT, and Converged/Hyperconverged. HPE's Bob Patterson, Chief Strategist and Senior Enterprise Architect, Enterprise Data and Analytics, was featured in the Analytics and IoT Workshop entitled "Strategies for Effective IoT and Analytics Monetization", and HPE's Jason White, Chief Technologist, Hybrid IT, was featured in the Converged/Hyperconverged Workshop entitled "Driving New Opportunities in the Data Center through Digital Transformation". Jason White also gave an HPE keynote presentation on "Achieving High Velocity with HPE". HPE also sponsored the Women's Breakfast, featuring a panel that included HPE's Randi Luckeneder, Partner Business Manager - Tech Data, HPE Storage. Over the course of the conference, more than 300 business meetings with Tech Data and vendors were held. Tech Data thanks all the Channel Partners that participated and attended the conference and hopes it provided a valuable learning opportunity as well as face-to-face time with Tech Data and vendor executives.

COend.png


AttackofCloudKillersWinterc22018.png
Robert Christiansen CTP.jpg
Robert Christiansen, VP of Global Cloud Delivery at Cloud Technology Partners (CTP)

Robert Christiansen, VP of Global Cloud Delivery at Cloud Technology Partners (CTP), an HPE company, is a cloud technology leader, best-selling author, mentor and speaker.

When businesses first started considering public cloud as a serious platform for IT, nobody was thinking too hard about how it would affect the organizational structure of IT shops or what the operating model for cloud should look like. But as cloud engagements have matured, those questions are coming forward in a big way, and IT organizations that haven’t yet put some serious thought into them run the risk of seeing their cloud projects shunted aside or derailed.

It was clear early on that many of the classic IT ops that central teams run – service ticket management, infrastructure health scanning, dedicated security operations, for example – would be going away. Many of those day-to-day functions are now the responsibility of the big cloud providers. That in itself was a substantial paradigm shift, and a difficult one to embrace for lots of IT pros who had lived and breathed those processes for many years of their careers.

Plus, the operating model that emerged – Cloud Ops – is very, very different. There's a lot more automation involved, a lot more self-healing software that detects problems and fixes them. There’s a stronger emphasis on agility, speed and cost-effectiveness than what’s typical in the classic on-prem model. I know a financial services company that runs five thousand virtual machines in the cloud – a pretty decent-sized datacenter in anybody's book. They run it 24x7, 365 days a year, with a Cloud Ops team of just 18 people, six to a shift. No security teams, no maintenance people, no-one walking the halls making sure the air conditioner's running at the right level. If a server is not performing well, they just kill it and start up another one. They have exceptionally low overhead, while maintaining a very high level of service.

For more and more companies, the cloud model is no longer an option – it’s a mandate. If your competitor has an IT operation like the one I just described and you’re running a data center with 50 or 100 people, how are you going to compete?

Still, the transition to Cloud Ops can be challenging, no question. You’re up against all sorts of governance and insight issues and change management demands. These potential cloud killers are not so much technology challenges as organizational and cultural ones, and they all stem from the fact that Cloud Ops is a wholly different kind of IT world.

Here are three principles that I’ve found useful while working with companies on the front lines of cloud rollouts:

1. Understand the need for a completely different governance model.

What I have in mind here is not your classic governance, risk and compliance (GRC) processes – though those are crucial for cloud success too. Cloud Ops governance is all about how do I maximize value? It’s about having deep visibility into the financial health of the assets.

Finance controls are the number one thing that needs to be done differently from the beginning. As my colleague John Treadway pointed out in a recent post (3 Ways HPE GreenLake Hybrid Cloud Drives Hybrid IT Success), “if you're not paying attention to what you're using in a public cloud, you can easily end up overpaying.” What’s more, the acceleration of consumption is much, much faster with cloud than with the classic model, with all its POs, contracts, legal involvement and so on. Within a very short time, you can end up with uncontrolled, unmonitored usage and zero visibility. You’ve got spend that you can't answer for, so what likely happens is that Finance steps in and kills the project.  

Then you have a wrecked cloud initiative, and IT blames the business or falls back and says, "our cloud program cost too much" without actually looking for the root cause of the problem. It all could have been avoided by putting the right controls in place from the start.

2. Recognize that your Cloud Ops team can’t be the same team that’s running your classic model.

These groups need to be separate and differently dedicated. I may be taking a bit of a contrarian position here, because the IT leaders I talk to sometimes have a hard time getting their heads around this. And the decision as to how to populate each team is a tough one, for sure. But I’ve worked with quite a few companies that originally tried to merge their on-prem operational teams and their cloud operational teams in the hope that they could act as one. It hasn't worked well at all.

The on-prem operational teams may not understand how the new model works. They’re probably not familiar with the technology and the new software platforms that you’re deploying. The cloud folks may lack the depth of experience needed to manage the on-prem assets. This is where the friction starts, with team dynamics issues, turf battles, silo building. I’m a convinced advocate for bypassing these problems by the simple expedient of keeping the two teams’ workflows distinct and separate.

3. Target your training.

A corollary to my previous point: it’s crucial to ensure that your cloud team has a clear understanding of the new paradigm. Maybe under the old model your staffers devoted a lot of time to the care and feeding of servers, ensuring their health and avoiding any need to turn them off. But now perhaps your new model is – and again I’m thinking of that financial services firm I mentioned – if a box isn’t working, get rid of it and bring in another one.

I’m not suggesting, of course, that your on-prem ops team won’t need access to training resources to continue learning and building their careers. But here again the two-team approach is useful. Cloud Ops training will be very different; it should be oriented more towards managing spend controls, enabling or disabling services, and supporting users who are consuming cloud services.

Importantly, it should also focus on the DevOps relationship and the benefits it delivers to IT service consumers. A tight connection between development teams and operations teams is pivotal for maximizing the value of cloud implementations. That partnership needs to be carefully fostered, and a big part of that is through training.

Building a top-flight Cloud Ops function is a demanding task, but an essential one for companies that want to see the best results from this innovative, agile paradigm.

Get Started Today

HPE has experienced consultants to support you through every stage of the cloud lifecycle. Get the expert assistance you need to quickly and effectively bring cloud computing services to your business. Take a look at our HPE Cloud Services website and get started today. 

COend.png


HybridITChallengesWinterc22018.png

IT Ops, Developers and LOBs offer their unique insights

lauren-whitehouseBW.png
Lauren Whitehouse
Marketing Director-Software-Defined and Cloud Group, Hewlett Packard Enterprise (HPE)

Lauren Whitehouse is the marketing director for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). She is a serial start-up marketer with over 30 years in the software industry, joining HPE from the SimpliVity acquisition in February 2017. Lauren brings extensive experience from a number of executive leadership roles in software development, product management, product marketing, channel marketing, and marketing communications at market-leading global enterprises. She also spent several years as an industry analyst, speaker and blogger, serving as a popular resource for IT vendors and the media, as well as a contributing writer at TechTarget on storage topics.

Digital transformation is ushering in a new era of hybrid IT, a combination of public and private cloud that allows businesses to innovate while meeting their own unique organizational needs. Yet a hybrid IT environment can create complexity and operational friction that slow a business down and hold it back.

As businesses seek ways to remove IT friction, streamline operations, and accelerate business innovation across their hybrid environment, it’s important for them to think about the needs of three particular groups – IT operations, developers, and line of business (LOB) executives. What challenges do each face? What opportunities do they see?

To answer these questions, IDC conducted in-depth interviews with IT operations staff and line of business individuals at Fortune 100 enterprises. The results can be found in a comprehensive research report – The Future of Hybrid IT Made Simple.

IT ops: Where’s my automation for deployment and management?

A hybrid IT environment is definitely more challenging for IT operations than a single, virtualized compute infrastructure located on premises. Without automation, siloed resources in a hybrid IT environment must each be deployed and managed separately.

Other concerns with hybrid IT include IT interoperability and integration, application certification, change management/tracking, and complexity of the overall infrastructure. In addition, extensive training is needed for operations and development personnel as IT shifts to a service broker model.

As these challenges mount, IT can no longer be treated as a back-office function. Instead, IT ops is expected to drive new sources of competitive differentiation, while still supporting legacy infrastructure and processes.

As one IT ops executive explains in the report, “Hybrid IT is more complex when it comes to deployment and ongoing management. The initial setup of the process takes some time, and training people how to use the different portals further extends deployment timelines. Every time something new comes up, it’s always a challenge because people don’t necessarily like to learn anything new. There’s always a learning curve, and they are usually not too happy about it. Change management is always a headache.”

Application Developers: Where are my developer services and ease of use?

Hybrid IT is also challenging for application developers, but for completely different reasons. Developer services, such as infrastructure APIs, workflow, and automation tools, are not consistently available across private and public clouds. And a lack of unified provisioning tools means that IT must serialize much of public and private cloud service delivery, which leads to bottlenecks.
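What would a unified provisioning layer look like? Here is a deliberately simplified Python sketch: a single front end that accepts one request format and routes it to either a private- or public-cloud backend. The class and field names are hypothetical placeholders, not an HPE product API, but they illustrate why a single entry point removes the serialization the report describes.

from dataclasses import dataclass

@dataclass
class ProvisionRequest:
    name: str
    cpus: int
    memory_gb: int
    target: str  # "private" or "public"

class PrivateCloudBackend:
    def create(self, req: ProvisionRequest) -> str:
        # In practice this would call the on-prem virtualization or composable API.
        return f"private://{req.name}"

class PublicCloudBackend:
    def create(self, req: ProvisionRequest) -> str:
        # In practice this would call the public cloud provider's SDK.
        return f"public://{req.name}"

class UnifiedProvisioner:
    """Single entry point; which cloud fulfills the request is a routing detail."""
    def __init__(self) -> None:
        self._backends = {"private": PrivateCloudBackend(), "public": PublicCloudBackend()}

    def provision(self, req: ProvisionRequest) -> str:
        return self._backends[req.target].create(req)

# Developers submit one request; they don't open a separate ticket per cloud.
print(UnifiedProvisioner().provision(ProvisionRequest("build-agent-01", 4, 16, "public")))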

Developers feel that a complex hybrid IT infrastructure is difficult to interact with, slowing down their ability to quickly roll out new services on new platforms. Interoperability between development, test/QA, and production environments is also a problem, along with the learning curve on the available tools for managing cloud resources. Integration and version control between on-prem and cloud environments are also lacking, which slows developers down and increases complexity.

The report quotes one application developer as saying, “Our major concern is with deploying third-party applications across multiple clouds. A big issue is the proprietary nature of each of these clouds. I can’t just take the virtual image of the machine and deploy it across multiple clouds without tweaking it.”

Line-of-Business (LOB) Executives: Where are my visibility and cost controls?

LOB executives have very different concerns. They are frustrated by the slow response for new private cloud services. Although public cloud services are fast, executives feel that they also carry risk. They wonder if using public cloud exposes their business to the outside world. They are also concerned that they will be locked into a specific public cloud service. Adherence to SLAs, transparency, privacy, consistency across clouds, overall performance, and cost: all these issues weigh heavily on an LOB executive’s mind.

According to one LOB executive quoted in the report, “Application integration with on-premises data management layers like file systems is a problem when developing in the cloud. With hybrid IT, our goal is to ensure that data is available across all locations, using some kind of a secure message broker integrated with a database and a distributed file system.”

Reducing hybrid IT complexity – is it possible?

So what’s the solution? Is it possible to operate a hybrid IT environment without the headaches associated with it?

According to IDC, the answer is yes, but only if a multi-cloud strategy is bound together with an overarching hybrid IT strategy. And this is where companies like Hewlett Packard Enterprise (HPE) can help. HPE software-defined infrastructure and hybrid cloud solutions let businesses reduce complexity so they can innovate with confidence.

For IT operations staff, using composable and hyperconverged software-defined infrastructure means that they will be able to move quickly. They can easily deploy and redeploy resources for all workloads. Plus, automating and streamlining processes frees up resources so IT can focus on what matters most. Developers can drive innovation using multi-cloud management software, rapidly accessing the tools and resources required to quickly develop and deploy apps. Lastly, multi-cloud management options let LOB executives gain insights across public clouds, private clouds, and on-premises environments, providing the visibility needed to optimize spending.  
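As an illustration of the kind of visibility that last point refers to, here is a tiny Python sketch that rolls per-cloud cost line items into a single per-service view. The data and field names are invented for the example; a real multi-cloud management tool would pull these figures from each provider’s billing API.

from collections import defaultdict

def consolidate_spend(line_items: list[dict]) -> dict[str, float]:
    """Sum monthly cost per business service across every cloud and on-prem estate."""
    totals: dict[str, float] = defaultdict(float)
    for item in line_items:
        totals[item["service"]] += item["monthly_cost"]
    return dict(totals)

# Hypothetical sample data spanning a public cloud, a private cloud, and on-prem.
sample = [
    {"cloud": "public-a", "service": "web-portal", "monthly_cost": 1200.0},
    {"cloud": "private",  "service": "web-portal", "monthly_cost": 800.0},
    {"cloud": "on-prem",  "service": "reporting",  "monthly_cost": 450.0},
]
print(consolidate_spend(sample))  # {'web-portal': 2000.0, 'reporting': 450.0}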

By delivering solutions that make hybrid IT simple to manage and control across on-premises and off-premises estates, a business can better meet the needs of IT operations, developers, and LOB executives. A hybrid IT strategy combined with multi-cloud management empowers everyone to move faster, increase competitiveness, and accelerate innovation.

To find out how HPE can help you determine and deploy a digital transformation strategy for your hybrid IT environment, visit HPE Pointnext. Read IDC’s full report, The Future of Hybrid IT Made Simple.

COend.png


Future Leaders of IT
Congratulations to 2017's recipients!

Caroline Anderson

Morgantown, Pennsylvania
Penn State University
Computer Science

Caroline+Anderson.jpg

Caroline Anderson is a first-year computer science major in Pennsylvania State University’s Schreyer Honors College. Caroline’s interest in computer science was sparked as a high school sophomore enrolled in an AP Computer Science class. Since then, her love for computer science has only continued to grow.

Caroline completed two summer internships at NASA’s Independent Verification and Validation facility, creating a database to categorize potential hazards for critical NASA missions. During her junior and senior years of high school, she was a researcher at West Virginia University’s Human Computer Lab under Dr. Saiph Savage. In this lab, she volunteered for e-NABLE, a nonprofit organization that builds 3D-printed prosthetic hands for children in need, where she guided high school campers in building 20 prosthetic devices. She also created the West Virginia University Statler College Twitter bot @WVUStatlerBot, which answers commonly asked questions for engineering students at West Virginia University.

In high school, Caroline served as the treasurer of both the National Honor Society and the Interact Club, and she logged over 200 hours of community service. This summer (2018), she plans to do research at Carnegie Mellon University’s Human-Computer Interaction Institute. Caroline is very excited to learn more about computer science and to discover which area will allow her to best serve others. You can learn even more about Caroline at her website: carolineganderson.com.

afe+addeh.jpg

Afe Addeh

Greenbelt, Maryland
University of Maryland College Park
Computer Science

Afe Addeh is a first-generation African American student who has achieved many goals and accomplishments since she began her exploration of the field of computer science. In high school, she led her school’s CyberPatriot team to a top-three finish in the state competition two years in a row. Afe has also participated in the Congressional App Challenge since 2014, winning 3rd place for her app, Career Finder, in 2017.

During high school, Afe sought to inspire other female minority students to excel in STEM courses and activities. In 2016, she launched the first girl-prioritized club and the first computer programming club in her school: Girls Who Code. Afe will begin her first year in college at the University of Maryland, College Park, where she will major in computer science with a minor in computer graphics/animation.

COend.png