When Mastercard wanted to improve the speed and security of credit card transactions, when Baylor College of Medicine was scaling up its human genomic sequencing program, and when toymaker Spin Master was expanding into online video games and television shows, they all turned to object storage technology to facilitate the processing of massive amounts of data.
Object storage, with its virtually infinite capacity and low cost, has a long history of being deployed for backup, archiving, disaster recovery, and regulatory compliance. But the demands of today’s data-centric organizations have brought the technology from the dusty storage closet to center stage in digital transformation.
For any tech decision-maker thinking about an overall data strategy, a large central repository, also known as a data lake, is the preferred approach: it breaks down silos and aggregates data from multiple sources for the kind of analysis that delivers value to the business. Object storage is the most effective underlying technology for applying data analytics, machine learning, and artificial intelligence to those vast data stores, says Scott Sinclair, storage analyst at market researcher Enterprise Strategy Group.
“The biggest advantage of object storage is to add more value to primary data. It doesn’t just store files; it adds context,” says Paul Schindeler, a former IDC analyst and currently CEO of the Dutch consultancy Data Matters. An object store includes metadata, or labels, which let companies easily search vast volumes of data, determine where data originated and whether it has been altered, and, more important, set policies and keep auditable records of who can see a file, who can open it, and who can download it.
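The idea can be sketched with a toy in-memory store. This is illustration only, not a real object-store API: the class names, fields, and methods below are invented to show how metadata labels travel with the data and how access can be recorded for auditing.

```python
# Toy sketch of an object store: each object carries its payload plus
# user-defined metadata labels, and reads are recorded for auditing.
from dataclasses import dataclass, field

@dataclass
class StoredObject:
    key: str                                        # unique name in a flat namespace
    data: bytes                                     # the payload itself
    metadata: dict = field(default_factory=dict)    # labels: origin, owner, ...
    access_log: list = field(default_factory=list)  # auditable record of readers

class ToyObjectStore:
    def __init__(self):
        self._objects = {}

    def put(self, key, data, **metadata):
        """Store a payload together with its metadata labels."""
        self._objects[key] = StoredObject(key, data, metadata)

    def get(self, key, user):
        """Return the payload and record who read it."""
        obj = self._objects[key]
        obj.access_log.append(user)
        return obj.data

    def search(self, **labels):
        """Find keys of objects whose metadata matches every given label."""
        return [o.key for o in self._objects.values()
                if all(o.metadata.get(k) == v for k, v in labels.items())]

store = ToyObjectStore()
store.put("scan-001", b"...", origin="mri-lab", subject="anon-42")
store.put("scan-002", b"...", origin="ct-lab", subject="anon-42")
print(store.search(origin="mri-lab"))  # → ['scan-001']
```

Because the labels are stored alongside the data rather than in a separate database, a query like `search(subject="anon-42")` can locate every matching object without scanning the payloads themselves; real object stores expose the same capability at vastly larger scale.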
Most organizations today use a mix of storage types: file storage, block storage, and object storage. But the use of object storage is surging for a number of reasons: speed, scalability, searchability, security, data integrity, reliability, and protection against ransomware. And it’s the wave of the future when it comes to big data analytics.
Object storage, then and now
Object storage was developed in the 1990s to handle data stores that were simply too large to be backed up with file and block storage, says Sinclair. When introduced, the almost infinite scalability, low cost, and immutability of object storage made it ideal for backup and recovery, long-term archiving, and compliance with regulations such as the Health Insurance Portability and Accountability Act (HIPAA), in health care, and Sarbanes-Oxley, in financial reporting.
The next watershed event in the evolution of object storage was the ascendance of cloud storage. Cloud services vendor Amazon Web Services chose an object storage architecture as the foundation for its popular Simple Storage Service (S3), and object storage has become the standard foundation for cloud storage services from Google, Microsoft, and others. In addition, the S3 protocol has become the de facto industry standard for modern data-centric applications, whether they run in the cloud or in a corporate data center.
More recently, organizations have come to the realization that they need to do more than just park and protect their data; they need to extract value from vast troves of historical data, as well as from new data sources and data types, such as internet-of-things sensor data, video, and images. That’s where object storage really shines. It has become the platform on which organizations are building their data analytics capabilities to modernize their computing environments, spur innovation, and drive digital transformation.
Download the full report.
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.