DDN is placing previews of IME technology into leading HPC facilities around the world, and currently has test beds deployed at more than a dozen Top100 sites. The technology is ideal for a broad range of performance-hungry HPC applications, such as real-time analytics and modeling of Big Data across the Life Sciences, Oil and Gas, Research and Financial Services industries, where support for high-bandwidth and high-IOPS workloads is essential.
Planning for exascale, accelerating time to discovery and extracting results from massive data sets require organizations to continually seek faster and more efficient ways to provision I/O and accelerate applications.
DDN’s IME is the world’s first application-aware, non-volatile-memory-enabled acceleration engine and buffer cache. IME resides between the compute cluster and the parallel file system in an HPC environment, maximizing compute utilization without crippling the storage back end. By virtualizing SSDs across the entire environment into a single fast storage tier, users can take advantage of the declining price of commodity hardware to deliver near-in-memory performance, accelerating I/O, reducing latency and providing greater operational and economic efficiency.
IME delivers several advantages over competing technologies: scale-out data protection with distributed erasure coding, seamless integration with both Lustre® and GPFS™ environments, intelligent adaptive data placement, POSIX compatibility and standards-based HPC application interfaces.
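To illustrate the idea behind erasure-coded data protection, here is a minimal sketch, not DDN’s implementation: a single XOR parity fragment computed across k data fragments, which allows any one lost fragment to be rebuilt from the survivors. A production system such as IME would use a stronger k+m scheme (for example Reed-Solomon) with fragments distributed across servers; the function names and fragment layout here are hypothetical.

```python
def encode(data: bytes, k: int) -> list[bytes]:
    """Split data into k equal fragments and append one XOR parity fragment."""
    frag_len = -(-len(data) // k)  # ceiling division
    frags = [data[i * frag_len:(i + 1) * frag_len].ljust(frag_len, b"\0")
             for i in range(k)]
    parity = frags[0]
    for frag in frags[1:]:
        parity = bytes(a ^ b for a, b in zip(parity, frag))
    return frags + [parity]


def recover(frags: list) -> list:
    """Rebuild the single missing fragment (marked None) by XOR-ing the rest."""
    missing = frags.index(None)
    survivors = [f for f in frags if f is not None]
    rebuilt = survivors[0]
    for f in survivors[1:]:
        rebuilt = bytes(a ^ b for a, b in zip(rebuilt, f))
    out = list(frags)
    out[missing] = rebuilt
    return out
```

Because parity = f0 ^ f1 ^ … ^ fk-1, XOR-ing all surviving fragments with the parity yields exactly the missing fragment, so the loss of any one server’s fragment is survivable without a full replica.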
IME is designed for HPC users looking to benefit from technology that delivers orders of magnitude improvements in checkpointing and I/O-intensive HPC application performance.
Heralding a major step forward in DDN’s software-defined storage strategy, the new IME technology demonstrates DDN’s focus on designing state-of-the-art HPC and Big Data solutions, a portfolio that also includes WOS® Object Storage and DDN’s highly optimized SFA storage appliances.
IME Decouples Physical Storage from Compute Resources to Redefine the I/O Provisioning Paradigm, Driving Increased I/O and Application Performance, Flexibility and Lower Total Cost of Storage
IME enables organizations to provision for peak and sustained performance requirements separately, with up to 70 percent greater operational efficiency and cost savings compared with relying exclusively on disk-based parallel file systems.
Inserting IME between the compute cluster and the parallel file system shields the file system from the POSIX-semantics overhead that brings cluster performance to its knees under the proliferation of small and misaligned I/O, which can make up 90 percent of an HPC workload. By absorbing this fragmented I/O in cache, IME lets users run jobs at or near line rate, with more jobs in parallel, resulting in faster time to insight.
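The absorption step described above can be sketched as a write-back buffer that accepts small, misaligned client writes and later drains them to the backing file system as large, aligned stripes. This is an illustrative sketch only, not IME’s actual code; the 1 MiB stripe size and the `backend_write(offset, data)` callback are assumptions made for the example.

```python
STRIPE = 1 << 20  # hypothetical 1 MiB alignment target for backend writes


class CoalescingBuffer:
    """Absorb arbitrary writes; flush them as large, stripe-aligned writes."""

    def __init__(self, backend_write):
        self.backend_write = backend_write  # callable(offset, data)
        self.stripes = {}                   # stripe index -> bytearray

    def write(self, offset: int, data: bytes) -> None:
        """Accept a write of any size and alignment into per-stripe buffers."""
        while data:
            idx, off = divmod(offset, STRIPE)
            buf = self.stripes.setdefault(idx, bytearray(STRIPE))
            chunk = data[:STRIPE - off]
            buf[off:off + len(chunk)] = chunk
            offset += len(chunk)
            data = data[len(chunk):]

    def flush(self) -> None:
        """Drain buffered stripes to the backend as aligned, full-size writes."""
        for idx in sorted(self.stripes):
            self.backend_write(idx * STRIPE, bytes(self.stripes[idx]))
        self.stripes.clear()
```

In this model, thousands of tiny client writes landing in the same region become a handful of stripe-sized, aligned writes at the parallel file system, which is the pattern disk-based back ends handle best.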
Unlike simple burst-buffer technologies designed only for write-intensive, large-file sequential operations such as checkpoint-restart, IME accelerates a wide array of HPC applications and workloads with no application modifications required.
In addition, IME’s vendor-neutral, software-based approach gives end users greater flexibility and choice: with no vendor lock-in, users are free to architect their environments around a wide selection of specialty and commodity hardware platforms and components.