Performance Density – A New Metric for Rack-Scale Design

I’m constantly amazed by the specsmanship of the data storage industry. Every month we hear about some new system that can achieve a gazillion IOPS or store hundreds of petabytes. We revel in our own glory, often without consideration of the consequences. Current examples are the NVMe All-Flash Array and Software-Defined Storage (SDS) marketeers running amok. Our industry is consistently rewarded for storage capacity density: the number of TBs that can be crammed into a shelf or rack unit (RU). It is why drive makers constantly increase capacities for every form factor and why Big Storage cleverly stuffs as many drives as possible into…
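
To make the distinction concrete, here is a minimal sketch comparing capacity density (TB per rack unit) with a performance-density view (IOPS per rack unit). All figures are assumed for illustration, not measurements of any product.

    # Illustrative comparison of capacity density vs. performance density.
    # All numbers below are hypothetical, not vendor specifications.
    systems = {
        # name: (usable_tb, read_iops, rack_units)
        "capacity-optimized array":    (1_000,   500_000, 4),
        "performance-optimized array": (  250, 5_000_000, 4),
    }

    for name, (usable_tb, read_iops, rack_units) in systems.items():
        capacity_density = usable_tb / rack_units      # TB per RU
        performance_density = read_iops / rack_units   # IOPS per RU
        print(f"{name}: {capacity_density:.0f} TB/RU, "
              f"{performance_density:,.0f} IOPS/RU")

By the second measure, the two hypothetical systems rank in the opposite order, which is the point of treating performance density as its own rack-scale metric.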

Set the Speed Dial for Pavilion Data

NVMe SSDs are the most expensive non-volatile storage in today’s data centers, but optimizing your ROI on NVMe can be tricky. Conventional wisdom says the highest performance with the lowest latency is achieved by installing the SSD directly into the server and scaling throughput or IOPS in parallel by adding more servers, each with its own NVMe SSD. However, the IOPS or throughput an SSD can deliver is directly proportional to the amount of NAND Flash it contains and its power settings. If your workload requires a small amount of storage, as most databases do, but your need…
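
As a rough illustration of that proportionality, here is a minimal sizing sketch showing how matching direct-attached SSDs to an IOPS target can strand capacity when the dataset is small. The per-device figures and the workload numbers are assumptions for the example only.

    import math

    # Hypothetical device characteristics (assumptions, not product specs).
    ssd_capacity_tb = 7.68      # capacity per NVMe SSD
    ssd_read_iops = 800_000     # random-read IOPS per NVMe SSD

    # Hypothetical workload: small dataset, large IOPS requirement.
    target_iops = 4_000_000
    dataset_tb = 5

    ssds_needed = math.ceil(target_iops / ssd_read_iops)
    provisioned_tb = ssds_needed * ssd_capacity_tb
    stranded_tb = provisioned_tb - dataset_tb

    print(f"SSDs needed to reach {target_iops:,} IOPS: {ssds_needed}")
    print(f"Provisioned capacity: {provisioned_tb:.2f} TB, "
          f"of which {stranded_tb:.2f} TB exceeds the {dataset_tb} TB dataset")

Under these assumed numbers, hitting the IOPS target means buying several times more flash than the data actually needs, which is the ROI problem the excerpt describes.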

Supercomputing Architecture for Any Data Center

Mapping the Human Genome was just the beginning for the life sciences discipline of bioinformatics. Now the goal is to compare every person’s map against a perfectly healthy baseline to identify sequences of DNA that carry diseases like diabetes, asthma, migraine, and schizophrenia. However, even with the most powerful compute, network, and storage resources on the planet, full genomic map comparisons are not possible. The International HapMap Project is a more efficient method of isolating genetic mutations, using massively parallel processing with Hadoop® MapReduce and Spark® analytics to obtain statistically significant comparisons. HapMap would not exist without the…
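
As a sketch of the map/reduce pattern the excerpt alludes to, the hypothetical PySpark snippet below counts how many samples share each variant position. The input path and record layout (tab-separated sample_id, chromosome, position, allele) are assumptions for illustration, not the HapMap data format.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("variant-counts").getOrCreate()

    # Hypothetical input: one variant observation per line.
    lines = spark.sparkContext.textFile("hdfs:///genomics/variants.tsv")

    counts = (lines
              .map(lambda line: line.split("\t"))
              .map(lambda f: ((f[1], f[2]), 1))     # key: (chromosome, position)
              .reduceByKey(lambda a, b: a + b))     # samples sharing that variant

    for (chrom, pos), n in counts.take(10):
        print(chrom, pos, n)

    spark.stop()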

Disaggregation for Less Aggravation

NVMe SSDs ushered in an era of standards-based access to solid-state storage. Previous protocols like SCSI and SAS were designed for spinning-disk media and carried severe overhead that squelched the performance NAND can deliver. It’s no coincidence that IDC forecasts NVMe will make up more than 50% of all enterprise SSD shipments by 2020[1]. One step forward, two steps back: while significant increases in IOPS and throughput can be delivered with today’s NVMe SSDs, the deployment model is arcane. By implementing NVMe inside the server, we are back to the problems of direct-attached storage…
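
To put the protocol-overhead point in rough numbers, the sketch below compares the command-level parallelism each interface allows per device. The queue limits are the commonly cited specification maximums (a typical figure is used for SAS); the comparison is illustrative, not a benchmark.

    # Commonly cited per-device queue limits (illustrative comparison).
    protocols = {
        "SATA/AHCI":     (1, 32),          # 1 queue, 32 commands
        "SAS (typical)": (1, 254),         # 1 queue, ~254 commands
        "NVMe":          (65_535, 65_536), # up to 64K queues x 64K commands
    }

    for name, (queues, depth) in protocols.items():
        print(f"{name}: {queues} queue(s) x {depth} commands "
              f"= {queues * depth:,} outstanding commands max")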

Pavilion’s Perspective as Vendors Enter the NVMe Game

NVMe Demands Storage System Re-Design, Not Retro-Fits. We are experiencing exciting times in the storage industry. During the week of May 1, 2018, Dell announced the NVMe-based PowerMax. At Pavilion we welcome these kinds of announcements, since they accelerate customer awareness that the future of storage is NVMe. Let the loud and expensive bullhorn sound…it’s fascinating to watch. Why? Because the transition to all-NVMe storage is being validated before our eyes. You know you’re on to something important when the AFA storage incumbents make their retrofit product announcements. Rather than doing the difficult yet required work…