When DDN WOS 3.0 launched last week, I couldn’t help but think of those lyrics in Loretta Lynn’s 1978 hit country song.  The words couldn’t be more appropriate to describe where we are today. Allow me to take you back…

OK, so it wasn’t 1978, but in 2007 our Chief Scientist, Dave Fellinger, and a small software team from DDN’s R&D group got together with some of the world’s largest web and social networking companies. After extensive brainstorming over pizza and beer about how to design a scalable data placement technology, an idea was born.

It was clear to us then that it was only a matter of time before hyper-scale systems would dominate the enterprise, and that traditional file systems would no longer cut it in a world where information objects would be counted in billions and trillions rather than millions. It was also clear that the majority of these trillions of pieces of data would come not only from social media or multimedia content, but from machine-generated data – hence the need for efficiency not only for large objects (easy to engineer) but for very small ones (hard to engineer), too. Equally obvious to us was that the majority of these information objects would be mostly “write once & read many”, and that any changes would favor versioning of the objects rather than rewriting them in place.

As DDN’s technology roots are well established in the data-intensive computing community, we already had a track record of embracing the bleeding edge of scalability and efficiency. Therefore, we applied our unique knowledge of the world of massive scale to guide our next era of innovation. To us, it was about taking a clean-sheet-of-paper approach, where we questioned a file system architectural status quo that had not fundamentally changed in over 30 years and made the smart decision to turn convention on its head.

What emerged was the idea of developing a simplified file system for immutable objects. We started with the data placement algorithms and called it a “bucket file system”, since all objects of like size are placed together in a bucket or ORG (Object Replication Group). We decided that anything we would do with this information object file system would be driven by our core focus on systems efficiency, and that the pillars of the architecture would be Hyper-scalability, Performance, Resiliency and Geographical Distribution Awareness.

  • Efficiency to eliminate the bloat of traditional extent-based file systems, where the cost of accessing storage metadata is hard to justify, as well as expensive “garbage collection” mechanisms which grab non-adjacent blocks and throw them into a file allocation table (FAT) to create space for files, leading to performance degradation from fragmentation over time.
  • Hyper-scalability to ensure that we could handle the incoming wave of #bigdata (forgive my marketing hype moment here) and, more importantly, future-proof our customers’ infrastructure.
  • Performance (both atomic and aggregate), as this is central to our core data-intensive computing heritage. Fundamentally, this is also what drives faster time to results, regardless of the industry vertical or application.
  • Resiliency because, at hyper-scale, data protection is paramount and traditional methods like backup are not realistic.
  • Geographical Distribution Awareness to address the fact that in this future #bigdata world (there, I did it again), data ingress and egress points are not centralized.

With all of these design tenets in place, in 2008 we started down the long path of architecting and implementing this “bucket file system” which, via the magic of marketing, we then named WOS, short for Web Object Scaler (and which Chris Mellor, in his recent article, lovingly referred to as “the Wizard of WOS”; see, magic does exist!).
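
For readers who like to see an idea in code, here is a minimal sketch of the “objects of like size are placed together” concept behind the bucket file system. To be clear, this is not WOS code: the power-of-two size classes, the BucketStore class and the (class, id) handle are all assumptions I’m making purely for illustration.

```python
from collections import defaultdict

# Purely illustrative size classes (bytes); the real placement policy
# and class boundaries in WOS are not described here.
SIZE_CLASSES = [4 * 1024, 64 * 1024, 1024 * 1024, 16 * 1024 * 1024]

def size_class(num_bytes):
    """Map an object size to the smallest class that can hold it."""
    for cls in SIZE_CLASSES:
        if num_bytes <= cls:
            return cls
    return None  # oversized objects would need separate handling

class BucketStore:
    """Groups immutable objects of like size into per-class 'buckets'."""
    def __init__(self):
        self._buckets = defaultdict(dict)  # size class -> {object id: payload}
        self._next_id = 0

    def put(self, payload):
        cls = size_class(len(payload))
        if cls is None:
            raise ValueError("object too large for any size class in this sketch")
        oid = self._next_id
        self._next_id += 1
        self._buckets[cls][oid] = payload
        # The (class, id) pair is all a client needs to locate the object later,
        # so reads never consult an extent map or allocation table.
        return cls, oid

    def get(self, cls, oid):
        return self._buckets[cls][oid]

store = BucketStore()
handle = store.put(b"hello, object world")
print(handle, store.get(*handle))
```

The point of the sketch is simply that placement is decided by object size up front, which is one way to avoid the metadata chasing and fragmentation issues called out in the design tenets above.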

Our 3.0 release represents an important milestone in our history, and the launch pad for many more innovative capabilities coming soon. Watch this space! In the meantime, here is a brief timeline of our WOS core technology adventure.

  • Mid 2008: Bucket File System development starts
  • Thanksgiving 2008: First functional validation with equipment on customer premises (Web 2.0 property)
  • Late spring 2009: WOS is named; WOS V1.0 early-access release and first big milestone for the team
  • Thanksgiving 2009: WOS V1.0 GA and first revenue shipments to a federal customer
  • Spring 2010: WOS V1.1 GA highlighting large object support (terabyte-scale objects), native streaming and an HTTP REST API
  • Summer 2010: WOS V1.2 GA bringing a Java API and streaming support for Python and PHP, amongst other things
  • Fall 2011: WOS V2.0 GA, another big milestone, built on first-generation customers’ feedback and operational experiences. The list of features is long; major highlights were asynchronous replication, ObjectAssure, emergency local data re-protection, automated intra-node capacity balancing and, most exciting of all, proof that we could scale our namespace to 256 billion information objects
  • Spring 2012: WOS V2.0.1 focused on manageability and serviceability with full SNMP support, and also increased I/O performance with 10GbE support
  • Summer 2012: WOS V2.1 increased self-healing capabilities building upon our ObjectAssure technology, added support for sparse information objects and prepared the foundation for a trillion-object namespace
  • Early Fall 2013: WOS V3.0, a new milestone in the WOS story, with cluster support for up to 32 trillion objects, new hardware, search and more. WOS is now not only a hyper-scale storage platform, but also a hyper-scale In-Storage Processing platform enabling node-based metadata intelligence such as content indexing.

I would be remiss if I didn’t mention that, in parallel with this core platform development, we also “spun off” additional, tightly integrated, complementary solutions such as:

  • WOS ACCESS & WOS CLOUD, a multi-protocol suite supporting legacy protocols such as NFS or CIFS, cloud APIs like S3, and mobile client access (see the sketch after this list);
  • WOS BRIDGE, which provides a transparent publish/subscribe mechanism between WOS and state-of-the-art parallel file systems like Lustre™ or GPFS™.
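
Because WOS CLOUD speaks an S3-style API, a client written against a standard S3 library is all you need to start storing objects. Here is a hedged sketch using the boto library; the endpoint host, credentials and bucket name are placeholders I’ve invented for the example, and the exact connection options will depend on how your gateway is configured.

```python
# Hedged sketch: talking to an S3-compatible gateway with the standard
# boto library. Host name, credentials and bucket are placeholders,
# not real WOS CLOUD values.
from boto.s3.connection import S3Connection, OrdinaryCallingFormat

conn = S3Connection(
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
    host="s3-gateway.example.com",           # hypothetical gateway endpoint
    is_secure=True,
    calling_format=OrdinaryCallingFormat(),  # path-style requests
)

bucket = conn.get_bucket("demo-bucket")

# Write-once object: store it, read it back, version rather than rewrite.
key = bucket.new_key("sensor/2013-09-26/reading-0001.json")
key.set_contents_from_string('{"temp_c": 21.4}')
print(key.get_contents_as_string())
```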

While I’m incredibly proud of the long list of features and functionality in our latest release, I really need to tip my hat to all the teams across DDN who have relentlessly contributed to the advancement of object storage technology – these guys are the engine of our customer adoption and our customers’ success.

  • DDN Storage
  • Date: September 26, 2013