NVMe-oF™ (NVM Express™ over Fabrics)

NVM Express (NVMe™), short for Non-Volatile Memory Express, is a communications interface and protocol developed specifically for SSDs. Designed to take advantage of the unique properties of pipeline-rich, random-access, memory-based storage, NVMe reduces latency and provides faster performance between the CPU and the data storage device.

It is a new and innovative method of accessing storage media, and it has been capturing the imagination of data center professionals worldwide. NVMe is an alternative to the Small Computer System Interface (SCSI) standard for connecting and transferring data between a host and a peripheral target storage device or system. SCSI became a standard in 1986, when hard disk drives (HDDs) and tape were the primary storage media. NVMe is designed for use with faster media, such as solid-state drives (SSDs) and post-flash memory-based technologies. NVMe provides a streamlined register interface and command set that reduces the I/O stack’s CPU overhead by accessing the PCIe bus directly. Benefits of NVMe-based storage drives include lower latency, deep parallelism, and higher performance.
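
As a concrete sketch of that streamlined command set, the short C program below issues the NVMe Identify Controller admin command through the Linux kernel’s NVMe pass-through ioctl. It is a minimal illustration, not a definitive implementation: it assumes Linux and an NVMe drive exposed at /dev/nvme0 (the device path is an assumption, not something from this article). A single 64-byte command describes the whole request.

/* Minimal sketch: send the NVMe Identify Controller admin command
 * (opcode 0x06, CNS=1) to /dev/nvme0 via the Linux pass-through ioctl.
 * Assumes Linux with the kernel NVMe driver; run as root. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/nvme_ioctl.h>

int main(void)
{
    int fd = open("/dev/nvme0", O_RDONLY);      /* NVMe controller char device */
    if (fd < 0) { perror("open"); return 1; }

    unsigned char data[4096];                   /* Identify data is 4 KiB */
    struct nvme_admin_cmd cmd;
    memset(&cmd, 0, sizeof(cmd));
    cmd.opcode   = 0x06;                        /* Identify */
    cmd.addr     = (uint64_t)(uintptr_t)data;   /* buffer for the result */
    cmd.data_len = sizeof(data);
    cmd.cdw10    = 1;                           /* CNS=1: Identify Controller */

    if (ioctl(fd, NVME_IOCTL_ADMIN_CMD, &cmd) < 0) {
        perror("NVME_IOCTL_ADMIN_CMD");
        close(fd);
        return 1;
    }

    /* The controller’s model number occupies bytes 24..63 of the result. */
    printf("Model: %.40s\n", (const char *)&data[24]);
    close(fd);
    return 0;
}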

What is NVMe over Fabrics (NVMe-oF™)?

NVMe over Fabrics (NVMe-oF™) enables the use of transports other than PCIe to extend the distance over which an NVMe host and an NVMe storage drive or subsystem can connect. NVMe-oF™ defines a common architecture that supports the NVMe block storage protocol over a range of storage networking fabrics. This includes enabling a front-side interface into storage systems, scaling out to large numbers of NVMe devices, and extending the distance within a data center over which NVMe devices and NVMe subsystems can be accessed.

NVMe-oF™ is designed to extend NVMe onto fabrics such as Ethernet, Fibre Channel, and InfiniBand.
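
As a sketch of what this looks like in practice, the nvme-cli utility on Linux can discover and attach a remote NVMe subsystem over a TCP fabric. The address, port, and subsystem NQN below are placeholders, not values from this article:

# Ask a remote discovery controller which subsystems it advertises.
nvme discover --transport=tcp --traddr=192.0.2.10 --trsvcid=4420

# Attach one of the advertised subsystems.
nvme connect --transport=tcp --traddr=192.0.2.10 --trsvcid=4420 \
             --nqn=nqn.2016-06.io.example:subsystem1

Once connected, the remote subsystem’s namespaces appear as local NVMe block devices, with the fabric standing in for the PCIe link.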

GPU Computing: The Basics

GPU-accelerated computing is the use of a Graphics Processing Unit (GPU) together with a CPU to accelerate deep learning, analytics, and engineering applications. Pioneered in 2007 by NVIDIA, GPU accelerators now power energy-efficient data centers in government labs, universities, enterprises, and small-and-medium businesses around the world. They play a huge role in accelerating applications in platforms ranging from artificial intelligence to cars, drones, and robots.

HOW GPUs ACCELERATE SOFTWARE APPLICATIONS

GPU-accelerated computing offloads compute-intensive portions of the application to the GPU, while the remainder of the code still runs on the CPU. From a user’s perspective, applications simply run much faster.
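
A minimal CUDA sketch of that offload model is below (the function and variable names, such as saxpy, are illustrative, not from this article). Setup and sequential work stay on the CPU, while the compute-intensive loop is launched across thousands of GPU threads:

/* Offload model sketch: the CPU handles allocation and initialization,
 * then hands the heavy per-element computation to the GPU. */
#include <cstdio>
#include <cuda_runtime.h>

// GPU kernel: each of many threads handles one array element in parallel.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;                  // one million elements
    size_t bytes = n * sizeof(float);

    // CPU side: this part of the program stays on the CPU.
    float *x, *y;
    cudaMallocManaged(&x, bytes);
    cudaMallocManaged(&y, bytes);
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Offload the compute-intensive portion: 4096 blocks x 256 threads.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f (expect 4.0)\n", y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}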

How GPU Acceleration Works

GPU versus CPU Performance

A simple way to understand the difference between a GPU and a CPU is to compare how they process tasks. A CPU consists of a few cores optimized for sequential serial processing while a GPU has a massively parallel architecture consisting of thousands of smaller, more efficient cores designed for handling multiple tasks simultaneously.

GPUs have thousands of cores to process parallel workloads efficiently
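
One way to make the comparison above concrete is to ask the hardware itself. The small sketch below (assuming a system with a CUDA-capable GPU) queries the device through CUDA’s runtime API and prints how much parallelism it exposes:

/* Query the first CUDA device and report its parallel capacity. */
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        fprintf(stderr, "no CUDA device found\n");
        return 1;
    }
    printf("GPU: %s\n", prop.name);
    printf("Streaming multiprocessors: %d\n", prop.multiProcessorCount);
    printf("Max threads per multiprocessor: %d\n",
           prop.maxThreadsPerMultiProcessor);
    printf("Max concurrently resident threads: %d\n",
           prop.multiProcessorCount * prop.maxThreadsPerMultiProcessor);
    return 0;
}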
GPU versus CPU: Which is better?

Check out the video clip below for an entertaining GPU versus CPU comparison.


Video: Mythbusters Demo: GPU vs CPU (01:34)