Cloud computing’s progress from the trough of disillusionment to the slope of enlightenment in 2015, according to the Gartner hype cycle, coincided with the rise of the “data company”. The technology industry has been dealing with a rapid explosion in data production and management requirements since 2011, when global smartphone ownership tripled.
The phenomenon of the world’s largest and fastest-growing retailer (Amazon), taxi service (Uber) and hotel chain (Airbnb) can in some part be credited to the tremendous scalability enabled by the cloud. While these business models, which all involve an element of crowdsourcing assets, are considered revolutionary in their respective industries, achieving success on a global level in such a short space of time requires a multitude of factors. The cloud has enabled each of these companies to take an idea and a service that worked and replicate it at scale, in multiple geographies, in an extremely targeted way.
It is therefore not solely increased access to data that is responsible for the enviable growth and popularity of these services. What also sets them apart is their understanding of the data at their disposal, which defines the service they provide their customers and informs their strategy with evidence of what those customers want. The vast majority of enterprises of comparable size produce and use data by the petabyte, but we would not necessarily consider every one of them a data company. So how do companies make data - arguably the most valuable asset at their disposal - define and evolve their business model, proposition and service?
Manage your assets
Let’s delve deeper into what the modern data company looks like. From the outside in, the companies described above present a clean, slick and fast user interface that is intuitive and simple to use. Customisable elements sit on top of the basic information analysed for each transaction, whether it’s Uber’s pooling service (whereby passengers can share a fare with a fellow customer who has requested a similar journey) or Amazon’s various delivery options and suggested-items functionality.
While the user interface is extremely simple and usable, the storage infrastructure and data management platforms behind it are performing sophisticated computing tasks. Uber, for example, simultaneously manages multiple pieces of information during every transaction: GPS and location data for the driver, the passenger and the destination; availability and profile information for each driver in the area; and payment and account details for both parties.
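The distinct data elements involved in a single transaction can be pictured as a simple record. This is a minimal, hypothetical sketch; the field names and structure are illustrative assumptions, not Uber's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Location:
    latitude: float
    longitude: float

# Illustrative model of the data a ride-hailing platform handles per
# transaction; every field name here is an assumption for illustration.
@dataclass
class RideTransaction:
    driver_position: Location      # live GPS data, updated continuously
    passenger_position: Location   # where the passenger requested pickup
    destination: Location          # end point of the journey
    driver_id: str                 # links to the driver's availability and profile
    passenger_id: str              # links to the passenger's account details
    payment_token: str             # reference to securely stored payment details
```

Grouping the fields this way makes the next point easier to see: each element has very different storage and protection needs, even though they all belong to one transaction.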
These pieces of data all have very different requirements in terms of storage, protection, privacy, accessibility and management. These different requirements dictate where the data is stored and how easy it is to access and move, which leads to data being kept in silos. For example, data such as customer payment information, which needs to be stored in a highly secure and controlled environment and is relatively inactive for the majority of the time, is better suited to a private cloud or on-premises data centre. The location data that tracks the driver’s movements and the whereabouts of passengers using the application, by contrast, has completely different storage needs: it requires a fluid, flexible environment where data moves around (quite literally) at high speed and may need to become “active” at any time – the kind of environment usually associated with a public cloud.
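The placement logic described above amounts to classifying data by its requirements and routing each class to a suitable environment. A minimal sketch, assuming a toy two-axis classification (sensitivity and activity); the categories and tier names are invented for illustration and are not any vendor's API:

```python
# Hypothetical policy: (sensitivity, activity) -> storage environment.
STORAGE_POLICY = {
    ("high", "low"):  "private-cloud",  # e.g. payment details: secure, mostly inactive
    ("high", "high"): "private-cloud",  # sensitive and busy: keep under tight control
    ("low",  "high"): "public-cloud",   # e.g. live location streams: fluid, fast-moving
    ("low",  "low"):  "archive",        # cold, non-sensitive data
}

def place_data(sensitivity: str, activity: str) -> str:
    """Return the storage tier a data class maps to under this toy policy."""
    return STORAGE_POLICY[(sensitivity, activity)]
```

The point of the sketch is that each decision is driven by the data's own requirements, which is exactly how the silos described above arise when the resulting environments are not connected.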
Furthermore, in order to continually improve the service or increase profitability - additions such as Uber’s peak-time surcharge and its Spotify playlist integration are examples - data needs to be collated, anonymised and analysed. Organising data in a manner that allows it to be appropriately analysed by data scientists and analysts to deliver insight is a challenge, especially when it all lives in different places with different protection and privacy requirements. Therefore, while the app which fronts Uber’s business has been praised as simple and clean, the level of sophistication required of the infrastructure behind that interface is extraordinary.
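One common way to anonymise records before they reach analysts is to replace direct identifiers with a salted one-way hash, so journeys can still be correlated per user without revealing who took them. This is a toy sketch of that idea; the field names are illustrative assumptions, and a real deployment would manage the salt as a secret and apply far more rigorous de-identification:

```python
import hashlib

SALT = b"example-salt"  # assumption: in practice, a secret managed separately

def pseudonymise(user_id: str) -> str:
    """Replace an identifier with a deterministic one-way pseudonym."""
    return hashlib.sha256(SALT + user_id.encode()).hexdigest()

def anonymise_record(record: dict) -> dict:
    """Strip or pseudonymise fields before the record is handed to analysts."""
    out = dict(record)
    out["passenger_id"] = pseudonymise(record["passenger_id"])
    out.pop("payment_token", None)  # drop fields analysis never needs to see
    return out
```

Because the pseudonym is deterministic, an analyst can still count how often the same (unidentified) passenger travels, which is the kind of evidence that informs additions like surge pricing.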
Keeping data moving
The first step for enterprises that aspire to become more data-driven is to find a way of managing, moving and measuring data to determine its true potential value to the business. Ultimately, data sets need to work in harmony to make the customer experience as simple as possible, whether you’re a consumer-facing brand or a B2B enterprise.
To achieve this, businesses must build their own Data Fabric vision, enabling data from different storage environments to be seamlessly connected as part of a single secure and agile system. NetApp’s data management operating system, ONTAP 9, integrates the best traditional and emerging technologies to provide a foundation for next-generation architecture, from traditional disk storage to cloud-based and software-defined environments.
To become a data-driven company with a simple, intuitive and effective value proposition and customer experience, enterprises must understand and manage their data in a way that is flexible, scalable and secure. Next-generation Data Fabric architecture provides the foundations for opening up new revenue streams and ways of doing business.