Opinion: Flash, Cloud, Data - What Can The CIO Expect In 2016?

Jan 16, 2016

John Roese, Senior Vice President and Chief Technology Officer at EMC, outlines his enterprise IT predictions for 2016.

Containers will become enterprise-grade

Once the preserve of web-scale companies such as Facebook and Netflix, containers have been generating considerable interest as an agile and effective method for building next-generation mobile, web and big data applications. In 2016, container technologies will evolve as functionality is added to make them suitable for enterprise application deployments.

Enterprise-grade containers will need to address two important challenges. First, containers need to support run-time data persistence. This attribute is not needed for the stateless applications favoured by social media companies – if a container in a web-scale social networking application fails it can simply be recreated – but persistence is vital for delivering a consistent enterprise-grade service in which all data is easily recoverable in the event of a fault. The solution is to expose existing enterprise-grade storage products to containers, via both established protocols and new container-specific abstractions.
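
By way of a rough sketch of what run-time persistence looks like in practice, the example below uses the Docker SDK for Python to attach a named volume to a stateful container; the image, volume name and mount path are illustrative, and an enterprise deployment would typically back the volume with an external storage product through a volume driver rather than local disk.

```python
import docker

client = docker.from_env()

# Create a named volume. In an enterprise setup this would be backed by an
# external storage product via a volume driver rather than the local host disk.
volume = client.volumes.create(name="orders-data")

# Run a stateful container with the volume mounted. If the container fails and
# is recreated, the data held in the volume survives the container's lifecycle.
container = client.containers.run(
    "postgres:latest",
    detach=True,
    environment={"POSTGRES_PASSWORD": "example"},
    volumes={"orders-data": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
)
print(container.short_id, "started with persistent volume", volume.name)
```

The point of the sketch is the separation of the container's lifecycle from the data's lifecycle, which is what the new container-specific storage abstractions aim to formalise.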

Second, containers will need to mature to support enterprise security and governance requirements. At present, containerised architectures delivered as microservices are not fully enterprise-ready. There are limited or no concepts of audit, trust or validation. Even the basic concept of an enterprise firewall does not currently exist in containers. This is something we expect to see remedied in 2016 as enterprise-grade security and governance features are re-envisioned and applied to the container specification in native ways.
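
To make that gap concrete, the hedged sketch below shows the kind of host-level hardening that is available today through the Docker SDK for Python: dropping Linux capabilities, preventing privilege escalation and making the root filesystem read-only. The image and command are illustrative, and none of this substitutes for the native audit, trust and firewall concepts described above.

```python
import docker

client = docker.from_env()

# Harden a single container: drop all Linux capabilities, forbid privilege
# escalation and mount the root filesystem read-only. These are per-container
# knobs, not the enterprise-wide audit and firewall model the article anticipates.
container = client.containers.run(
    "alpine:latest",
    command="sleep 3600",
    detach=True,
    cap_drop=["ALL"],
    security_opt=["no-new-privileges"],
    read_only=True,
)
print("hardened container:", container.short_id)
```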

Big Data and real-time analytics will come together

In 2016 we will see a new chapter open in the world of big data analytics as a two-tier model emerges. Tier one will comprise ‘traditional’ big data analytics: large-scale data analysed in non-real time. The new second tier, on the other hand, will comprise relatively large data sets analysed in real time, courtesy of in-memory analytics. In this new phase of big data, technologies such as DSSD, Apache Spark and GemFire will be every bit as important as Hadoop. This second tier represents a new and exciting way of using data lakes: on-the-fly analysis that influences events as they happen. It has the power to give businesses a level of control and agility simply not seen before.
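
As a minimal sketch of what this second tier looks like in code, the example below uses PySpark's structured streaming with its built-in synthetic "rate" source standing in for a live event feed; the application name, window size and console output are illustrative only.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("tier-two-realtime").getOrCreate()

# The built-in "rate" source generates synthetic (timestamp, value) rows and
# stands in here for a real event feed such as clickstream or sensor data.
events = spark.readStream.format("rate").option("rowsPerSecond", 100).load()

# Tier two: analyse events in memory as they arrive - counts per 10-second window.
windowed = events.groupBy(F.window("timestamp", "10 seconds")).count()

query = (
    windowed.writeStream
    .outputMode("complete")
    .format("console")
    .start()
)
query.awaitTermination(30)  # let the stream run briefly for demonstration
query.stop()
```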

For in-memory analytics to live up to its promise, however, two things need to happen. First, the underlying technologies need to develop to ensure there is enough memory and space to house large-scale data sets. Thought will also need to be given to how data can be moved efficiently between large object stores and the in-memory tier. The two operate on radically different performance curves, and IT teams will need to manage the demarcation point so that data can move back and forth quickly and transparently. Work is currently underway on new object stores, rack-scale flash storage, and the technology to make them work together as a system. Open source initiatives will play an important role in meeting this challenge.
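
A hedged sketch of that demarcation point: data is read from an object store (the s3a bucket and paths are purely illustrative, and reading s3a:// assumes the hadoop-aws connector is available), pinned in memory for fast interactive queries, and the results written back across the boundary.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lake-to-memory").getOrCreate()

# Tier one: the large-scale data lives in an object store (illustrative path).
cold = spark.read.parquet("s3a://data-lake/transactions/")

# Crossing the demarcation point: pin a working set in memory for tier two.
hot = cold.filter(cold["amount"] > 1000).cache()
hot.count()  # an action is needed to materialise the cache

# Low-latency, interactive queries now run against the in-memory copy.
summary = hot.groupBy("region").sum("amount")
summary.show()

# Results move back across the boundary into the object store.
summary.write.mode("overwrite").parquet("s3a://data-lake/summaries/by_region/")
```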

Second, the large-scale in-memory environment requires data to be persistent as well as dynamic. The issue here is that if you ‘persist’ data within the in-memory environment, any flaws in that data will persist too. As a result, in 2016 we will see the rollout of storage-style data services to the in-memory environment. These services will include deduplication, snapshots, tiering, caching, replication and the ability to determine the last-known state in which the data was valid or the system was operating correctly. These capabilities will be incredibly important in the move to real-time analytics as more non-volatile memory technologies become commercially available in 2016.
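
As one concrete, hedged illustration of a "last-known valid state" for in-memory data, the sketch below uses Spark's checkpointing, which writes a recoverable copy of an in-memory dataset to durable storage; the checkpoint directory is illustrative, and a production system would point it at reliable shared storage.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("in-memory-data-services").getOrCreate()
sc = spark.sparkContext

# Durable location for recoverable snapshots of in-memory data (illustrative
# path; a production system would use reliable shared storage instead).
sc.setCheckpointDir("/tmp/analytics-checkpoints")

# An in-memory dataset: cached for speed, checkpointed for recoverability.
squares = sc.parallelize(range(1_000_000)).map(lambda x: x * x).cache()
squares.checkpoint()

# The first action materialises both the cache and the durable checkpoint,
# giving a last-known-good state to fall back on if the in-memory copy is lost.
print(squares.count())
```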

Enterprise Cloud strategies will transform, giving way to “right Cloud for the right workload” models

In 2016 we will see enterprises develop a more fundamental understanding of cloud services. A new, mature approach to cloud will emerge in which IT uses a portfolio of cloud services, with offerings optimised for each application workload type. For example, the cloud service used to support your SAP workload is different from the one you will use to run your new customer loyalty mobile application.

To date, IT has largely been searching for a single cloud service to meet all its needs. This has always been an oversimplification of the technology. As cloud moves from the deployment stage to the execution stage, it will become clear that there are four types of cloud service IT can choose from. Two are based on moving existing ‘second platform’ investments to the cloud, and two are based on creating entirely new ‘third platform’ cloud services and infrastructures. The four types of cloud are:

  1. On-premise second platform cloud
  2. Hybridised off-premise second platform cloud
  3. On-premise third platform cloud
  4. Off-premise third platform cloud

It is likely that enterprises are going to have to adopt a cloud strategy that embraces all four types of cloud services. If they do not have a plan for all four they might find they are running workloads in the wrong cloud environment – whether that is from the perspective of economics, efficiency or regulatory compliance.
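
A minimal, hypothetical sketch of a "right cloud for the right workload" policy: the four service types above captured as an enumeration, with a placement table mapping example workloads onto them. The workload names and assignments are invented for illustration, not recommendations.

```python
from enum import Enum

class CloudType(Enum):
    ON_PREM_SECOND_PLATFORM = "on-premise second platform cloud"
    OFF_PREM_SECOND_PLATFORM = "hybridised off-premise second platform cloud"
    ON_PREM_THIRD_PLATFORM = "on-premise third platform cloud"
    OFF_PREM_THIRD_PLATFORM = "off-premise third platform cloud"

# Hypothetical placement table: each workload is matched to the cloud type that
# best fits its economics, efficiency and regulatory compliance requirements.
PLACEMENT = {
    "erp-sap": CloudType.ON_PREM_SECOND_PLATFORM,
    "archival-backup": CloudType.OFF_PREM_SECOND_PLATFORM,
    "regulated-analytics": CloudType.ON_PREM_THIRD_PLATFORM,
    "customer-loyalty-mobile-app": CloudType.OFF_PREM_THIRD_PLATFORM,
}

def placement_for(workload: str) -> CloudType:
    """Return the planned cloud type for a workload, or fail loudly if none exists."""
    try:
        return PLACEMENT[workload]
    except KeyError:
        raise ValueError(f"No cloud placement planned for workload: {workload}")
```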

Any cloud strategy will also need to embrace cloud interworking. It will be incredibly important for applications to be able to access data transparently and securely across the four different cloud platforms. Linked to this is the degree to which applications can treat off-premise cloud resources in the same way they would on-premise resources. This is highly complex to achieve, but it is now possible through the use of technologies like cloud gateways, cloud abstractions such as CloudFoundry and Virtustream xStream, software-defined data replication, and advanced data encryption services.
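
To illustrate the kind of abstraction involved, here is a minimal hypothetical sketch of a gateway-style interface that lets an application read and write objects in the same way regardless of which cloud holds them; the class names and in-memory backends are invented for illustration and do not correspond to any specific product.

```python
from abc import ABC, abstractmethod

class CloudObjectStore(ABC):
    """Uniform interface the application codes against, wherever the data lives."""

    @abstractmethod
    def get(self, key: str) -> bytes: ...

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

class OnPremStore(CloudObjectStore):
    """Hypothetical on-premise backend, e.g. a local object store."""
    def __init__(self):
        self._objects = {}
    def get(self, key: str) -> bytes:
        return self._objects[key]
    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

class OffPremStore(CloudObjectStore):
    """Hypothetical off-premise backend; a real gateway would add encryption
    in transit and at rest, plus a replication policy."""
    def __init__(self):
        self._objects = {}
    def get(self, key: str) -> bytes:
        return self._objects[key]
    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

def archive(report: bytes, store: CloudObjectStore) -> None:
    # The application code is unchanged whichever backend is injected.
    store.put("reports/q1.bin", report)
```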

Cloud native application development skills will be at a premium

As businesses move from simply building clouds to using them, it will become clear what the cloud is really about: facilitating the creation of business applications that provide differentiated capability. The skills required to create applications in the cloud are radically different from those most IT teams currently have in place. This skills gap raises the very real possibility that cloud adoption could begin to slow.

Businesses cannot afford to let this situation develop any further. The ability to create cloud-native applications is quickly asserting itself as a key – if not the key – competitive differentiator for enterprises. In 2016 businesses need to recompose their ability to create software using the cloud tools of the future. This means re-training their IT teams or looking to third-party developers to provide the necessary expertise.

Flash will achieve scale…and businesses must prepare for the worst

This final prediction is less a prediction and more a note of caution. As any storage veteran can tell you, when new technologies achieve scale, unforeseen issues more often than not lead to industry-wide endemic failures. I am not saying that this will definitely happen to flash, but it is at the very least a possibility that businesses should consider as the technology starts to achieve real scale in the year ahead.

The adoption of innovative technology always involves a certain degree of risk management and success often comes down to how well businesses can cope when this technology experiences teething troubles.

Over the last several decades, there have been widespread industry disruptions caused by unexpected defects in CPUs, memory chips, hard drives, switching silicon and almost every other area of the technical ecosystem. The key in these cases is to work with vendors who know the industry well and have worked through similar endemic failures in the past.

For example, in 2010 EMC led the industry through the replacement of the majority of large capacity disk drives (Seagate Moose drive failure). This type of experience by your storage provider will prove invaluable if the worst-case scenario for flash comes to pass.

So, for 2016 our advice to businesses is to make sure they don’t fall into the trap of assuming things won’t go wrong. As with any part of business, planning for worst-case scenarios is good practice.

Photo Credit: alphaspirit/Shutterstock

Author: John Roese
Published under license from ITProPortal.com

 

 

 
