DDN BLOG

Mounting resolutions and frame rates, exponential growth in production storage requirements and ever-increasing content delivery options all pose a challenge, but the challenge lies not so much in the vast amount of data as in the way we have chosen to store it. Why is big data such a big problem in media workflows? Possibly because we have been storing the data in the wrong way.

How much of the data that you store and use on a daily basis, by volume, actually changes? In media and entertainment workflows specifically, the amount can be very small. Certainly ingest, editing, rendering and transcoding create and modify files, but think of the number of assets that are simply called into production to aid in the creation of those files. Think of the number of assets that are brought in and, though used, never actually modified. Think of the number of assets that are stored just in case they are needed, but are never actually called upon. Then think of the number of files that are created and stored in the hope of being accessed and downloaded hundreds of millions of times, yet never change again. Finally, think of the number of files created with no intent of ever accessing them again. So, how much of the data you store and use, by volume, really changes on a daily basis? Maybe 5, 10, 20, or 30 percent? While constantly changing, higher-resolution content does need an increasingly high-performance storage system to support it, consider the other 70-plus percent. Deciding how and where to store your data can be tough, but fortunately there are some new options that may now make these decisions much easier.

Enter object storage. You may or may not have heard of object storage, but if you use a computer much, you have probably used it through one of today's online or file-sharing services. Surprisingly, this same capability can be used as part of your file-based media workflow to increase performance, capacity, collaboration and data protection while decreasing storage complexity, all at a lower cost. Object storage is perfect for storing immutable content, that is, content that is not going to change. Its simple get, put and delete interface and extremely low overhead allow you to read an object in a single operation and store an object in only two steps. This simplicity and operational efficiency, combined with innovative ways of storing and protecting data, allow use of 99.9 percent of the storage disk. Further, object storage is optimized for both large and small files, so it can store everything from the largest mezzanine file to the smallest HTTP chunks associated with modern adaptive bitrate streaming formats with almost no waste. With flexible data protection schemes delivering over five nines of data durability at overheads ranging from 25 to 300 percent (for example, an 8+2 erasure-coded layout adds 25 percent overhead, while keeping four full copies adds 300 percent), it also makes a great archiving solution with costs that rival tape backup but with the access speed of disk. Finally, if these benefits weren't enough, object storage scales to trillions of objects in a single namespace and can easily be expanded by simply plugging in additional nodes while the whole system stays online.
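To make that get, put and delete model concrete, here is a minimal sketch in Python using the boto3 library against a hypothetical S3-compatible endpoint; the endpoint URL, credentials, bucket name and object key are illustrative assumptions, not any particular product's API.

    # Minimal object storage round trip: put, get, then delete an object.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://objectstore.example.com",  # hypothetical endpoint
        aws_access_key_id="ACCESS_KEY",                  # placeholder credentials
        aws_secret_access_key="SECRET_KEY",
    )

    # PUT: store a mezzanine file as a single immutable object.
    with open("feature_mezzanine.mxf", "rb") as f:
        s3.put_object(Bucket="media-assets",
                      Key="features/feature_mezzanine.mxf", Body=f)

    # GET: read the whole object back in a single operation.
    obj = s3.get_object(Bucket="media-assets",
                        Key="features/feature_mezzanine.mxf")
    data = obj["Body"].read()

    # DELETE: remove the object when it is no longer needed.
    s3.delete_object(Bucket="media-assets",
                     Key="features/feature_mezzanine.mxf")

The same three calls handle everything from a multi-gigabyte mezzanine file to a few-second HTTP streaming chunk; there is no file system hierarchy to manage, only buckets and keys.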

Today there are options for storing data, and you should store your data in a system designed for the type of data you are storing. File systems are great, and they have served us very well for many years, but as big data crashes down upon us it is time to bring object storage into the workflow. The 70-plus percent of your stored content that doesn't change doesn't need the complexity of a file system. By rethinking your data storage strategy around simple, easily scalable object storage, you will save money and get better access, protection, reliability and performance from your data.

  • Michael King
  • Sr. Director Marketing Strategy & Operations
  • Date: October 28, 2013