Let me first start off by saying that I really don’t like it when IT companies refer to their customers in a general way as “the enterprise.” From the moment that Captain Kirk (or Picard, if you please) called it a day, for me there has ceased to be a common definition of the enterprise that accurately reflects the wide variety of IT requirements encountered across commercial & government computing organizations.
Yes, I know I’m guilty of this in my title. Enterprise. <cringe> I’m more than happy to use a better word if any of my faithful readers can make a better suggestion. Until then: It’s not my first sin… nor will it be my last. As such, I’ll just appreciate your continued forgiveness. On to the show…
No, not the kind you scrape off your teeth. I’m talking about the really nasty stuff: the growth of cholesterol plaques that slowly block blood flow in your arteries. Arterial plaque buildup is a common cause of blood clots that can lead, in turn, to a heart attack or a stroke. But why, you may well ask, am I talking about plaque?
Well, here’s the analogy:
- Data is the lifeblood of the enterprise.
- Object storage provides the scalability and resilience required to serve as the one true circulatory system of the data-driven enterprise.
- Linux file systems are the equivalent of cholesterol to object storage and limit the ability to get data efficiently to all of the organization’s vital organs.
I’ve been thinking a tremendous amount about object storage performance this week. With DDN’s latest releases of WOS 3.0 software and WOS7000 appliance today, we have once again staked our claim as the world’s performance leader. This time we “cranked it up” to 32 and we’re delivering a level of scale more than 32 times larger and faster than anyone else in the market – heady stuff, so much so that it makes my head spin!
Marketing-wise, we’re swimming upstream here. The market for object storage has been dominated for far too long by organizations positioning object storage as the next great tape replacement. Big deal. Tape is always going to have a place in the data center stack where customers don’t want or need online access to their data (true archives, offsite DR, etc.). And yes, object storage will and already does take the place of tape in some of those cases. But we believe the real wins are elsewhere.
DDN invented its object storage platform for an entirely new level of application scalability and performance that can ONLY be served by an object storage system. We created our first WOS PRD (product requirements document) by partnering with several of the world’s largest social networking and web site companies that had the very real challenge of delivering performance at scale. If you want to learn more about this market, I invite you to visit highscalability.com to get an appreciation of what hyper-scale and hyper-performance really means.
Today, I would like to explain why our customers choose DDN high-performance object storage solutions and what that means for their businesses. Before I do, though, let me make one very important distinction: it’s essential to understand the difference between the object storage semantic and the object storage implementation.
- Semantic: the object storage semantic lets machines access data across a very wide, flat namespace via object identifiers, and gives that data greater context through self-describing object metadata.
- Implementation: most vendors (but not all) have implemented their object storage system on top of a Linux file system, managed by a scalable database that tracks object references, security, and so on.
Whereas DDN today offers a system that presents an object semantic (via our own API, REST, S3, etc.), we have implemented our object storage system on a fundamentally different architecture, one that rids the system of the cholesterol and plaque that slow down object access and cripple the performance efficiency of a system.
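To make the “semantic” half of that distinction concrete, here is a toy in-memory sketch of what an object semantic looks like to an application: a flat namespace, opaque object identifiers instead of paths, and self-describing metadata attached to each object. All names here are hypothetical for illustration; this is not the WOS API.

```python
import uuid

class ObjectStore:
    """Toy flat-namespace object store: no directories, no paths --
    just opaque object IDs mapping to (data, metadata) pairs."""

    def __init__(self):
        self._objects = {}

    def put(self, data: bytes, metadata: dict) -> str:
        # The store hands back an opaque object ID; callers never
        # choose a path or filename.
        oid = str(uuid.uuid4())
        self._objects[oid] = (data, dict(metadata))
        return oid

    def get(self, oid: str) -> tuple:
        # Retrieval is by ID alone; the metadata travels with the object.
        return self._objects[oid]

store = ObjectStore()
oid = store.put(b"genome-chunk-0042", {"study": "gwas-2013", "codec": "raw"})
data, meta = store.get(oid)
```

The implementation behind that interface is where vendors diverge, and that is the subject of the rest of this post.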
Differences in Object Storage Implementations:
DDN removed the I/O blender that is the Linux file system (ext3/4, XFS, and the like) by writing a native disk driver interface, which lets us do things differently than object (translation) storage systems that translate object semantics into Linux file system (LFS) I/Os.
With our WOS implementation of an object disk driver, we have:
- Eliminated inode and block-fit capacity waste: we give the user over 95% of the object storage disk, as opposed to LFSs, which can consume up to 30% of the whole platter with block metadata and fragmentation.
- Eliminated excessive disk seeks: data is written contiguously to disk, and we need only one disk transaction to commit a write, as compared to up to 11 disk head seeks with an LFS. This increases throughput and drives latency way down, especially for small (as low as 512 B) and random object I/O.
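The two bullets above translate directly into capacity and latency arithmetic. Here is a back-of-the-envelope sketch using the figures quoted (over 95% usable vs. up to 30% lost; one commit vs. up to eleven seeks); the 10 ms seek time is an assumed typical 7200 rpm figure, not a measured WOS number:

```python
# Capacity: usable space on a 4 TB drive under each layout.
DRIVE_TB = 4.0
wos_usable = DRIVE_TB * 0.95          # >95% of the platter usable
lfs_usable = DRIVE_TB * (1 - 0.30)    # up to 30% lost to metadata/fragmentation

# Latency: per-write head-movement cost, assuming ~10 ms per random seek
# (illustrative 7200 rpm figure; an assumption, not a benchmark).
SEEK_MS = 10.0
wos_write_ms = 1 * SEEK_MS            # one contiguous commit
lfs_write_ms = 11 * SEEK_MS           # up to 11 head seeks per write

print(f"usable: WOS {wos_usable:.2f} TB vs LFS {lfs_usable:.2f} TB")
print(f"write:  WOS {wos_write_ms:.0f} ms vs LFS {lfs_write_ms:.0f} ms")
```

Even with generous assumptions for the LFS, the worst-case write is an order of magnitude more head movement per object.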
On top of this, WOS needs no distributed database system to manage and arbitrate all of the objects in a namespace. Objects in WOS are managed intelligently in a peer-to-peer mode, with no central point of management and no bottleneck/clot.
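One common way to manage objects without a central database is deterministic placement: every peer can compute an object’s home from its ID alone, so no node has to be asked “where does this live?” The consistent-hashing sketch below illustrates the general technique only; it is not DDN’s actual placement algorithm.

```python
import bisect
import hashlib

def _h(key: str) -> int:
    # Stable hash shared by all peers.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    """Each node owns arcs of a hash ring; lookups need no central
    directory -- every peer computes the same answer independently."""

    def __init__(self, nodes, vnodes=64):
        # Virtual nodes smooth out the load across physical nodes.
        self._ring = sorted(
            (_h(f"{n}-{i}"), n) for n in nodes for i in range(vnodes)
        )
        self._keys = [k for k, _ in self._ring]

    def node_for(self, object_id: str) -> str:
        # First ring position at or after the object's hash (wraps around).
        idx = bisect.bisect(self._keys, _h(object_id)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["wos-node-1", "wos-node-2", "wos-node-3"])
owner = ring.node_for("object-12345")   # same result computed on every peer
```

The design point is the absence of the bottleneck: there is no lookup service to scale, shard, or fail over.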
This all raises the question:
“What does this level of efficiency buy DDN’s WOS customers?”
In releasing the latest version of WOS technology, we’ve done a great deal of work to understand the perils of implementing an object storage system on top of an LFS. In doing so, we also researched the performance of one of the more popular object storage systems, OpenStack Swift (layered on ext4). Here’s what we found:
With a 1200% gain in system efficiency vs. LFSs, there’s a very clear argument for WOS as a credible alternative for organizations looking to cure themselves of the ill effects of lesser-equipped object (translation) technology. We’ve established that it’s efficient and “plaque-free,” but is it appropriate for the scalability needs of tomorrow’s enterprise? Take the efficiency to the bank and buy only 8% of what you would need from competing systems… or enjoy the added performance. The choice is yours.
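The 8% figure follows directly from the efficiency claim: a 1200% gain means roughly 13x the work per unit of hardware, so matching a competing system’s throughput takes about 1/13 of the gear. A quick check of the arithmetic:

```python
gain_pct = 1200                  # claimed efficiency gain vs. an LFS-based stack
speedup = 1 + gain_pct / 100     # a 1200% gain => 13x baseline throughput
hardware_needed = 1 / speedup    # fraction of competing hardware for equal work

# 1/13 is about 7.7%, which rounds to the "buy only 8%" quoted above.
print(f"{hardware_needed:.1%} of the competing system")
```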
Scale is relative. If you consider that other object storage systems are known to be archival in nature, then maybe we’re not making the right comparison. The new DDN WOS technology is big. Really big.
| System Attributes | WOS 3.0 |
|---|---|
| Maximum Number of Servers | 8,192 (30 HDDs/server) |
| Maximum # of Unique Objects (One Namespace) | 32 trillion |
| Maximum Replicas per Unique Object | 4-way replication |
| Maximum Total Cluster Capacity (using 4 TB HDDs) | 30.72 PB; 0.98 EB |
| Namespace Performance | 9.8 TB/s; 256M files/second |
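The headline numbers in the table hang together: multiplying out the server and drive counts (assuming 4 TB HDDs, as stated) recovers the 0.98 EB namespace figure, and dividing the throughput by the server count gives the implied per-server share. A quick sanity check:

```python
servers = 8192
hdds_per_server = 30
tb_per_hdd = 4

total_tb = servers * hdds_per_server * tb_per_hdd   # raw namespace capacity
total_eb = total_tb / 1_000_000                     # decimal EB

# Implied per-server share of the 9.8 TB/s namespace throughput.
per_server_gbs = 9.8e12 / servers / 1e9

print(f"capacity: {total_tb:,} TB (~{total_eb:.2f} EB)")
print(f"per-server throughput: ~{per_server_gbs:.1f} GB/s")
```

That works out to 983,040 TB (about 0.98 EB) and roughly 1.2 GB/s per server, which is a plausible aggregate for a 30-drive node.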
Scalability for scalability’s sake? Nah.
- Our friends at Amazon have already crossed the 2 trillion object milestone.
- DDN is already responding to RFPs that call for trillion-object scale.
With almost 10 TB/s of performance, WOS technology can be scaled to run at 10x the speed of the world’s fastest storage system, which also just so happens to be built by DDN. Compared to the world’s fastest scale-out NAS, WOS 3.0 is 65x faster. Performance… well, remember, that’s our specialty!
Data growth is an organizational certainty. To ensure the greatest agility and long-range investment protection from storage resources that must keep pace with that growth, it’s critical that you understand heart-healthy approaches to building clot-free storage infrastructures.
- High efficiency cuts hardware TCO by doing more with less;
- Low latency ensures applications are quickest to first byte;
- High scalability ensures your investment is protected today and tomorrow.
So, to close out today’s “wellness check”, I’ll leave you with my doctor-approved object storage checklist to digest before you go jumping onto the object storage bandwagon. Stay healthy!
DDN’s Scalable Object Storage Checklist:
1. Operations per HDD/SSD
2. Operations per Object Server
3. Single-Site Object Retrieval Latency
4. Multi-Site Object Retrieval Latency
5. Max Object Operations per second