- NVMe Storage
- Enterprise NVMe
Storage technology has evolved over time to ever greater levels of performance, capacity scalability, and cost effectiveness. Future storage demands will create a plethora of challenges as IT deals with heightened service-level objectives (e.g., application availability/uptime and application response time/performance) and a new generation of strategic applications driven by Big Data analytics. Going forward, market success will go to the storage vendors that can deliver a quantum leap in throughput, with higher IOPS, full bandwidth, and new levels of low latency for business-critical application workloads. This new performance paradigm is required to meet customer service-level expectations and to enable faster business decisions. Storage will be the key focal point as organizations of all sizes and vertical markets strive for a competitive advantage. The emergence of NVMe (Non-Volatile Memory Express) is a game-changing enabler of accelerated storage speed.
When it comes to enterprise-class NVMe, not all storage solutions are the same. There are generally considered to be two groups of enterprise NVMe storage methodologies for scale-out storage. The bulk of the market has designed solutions that fall under the header of NVMe-oF, or NVMe over Fabrics. The Apeiron-invented methodology is NoE, or NVMe over Ethernet, which delivers superior performance, linear scale-out, and better overall TCO, with the industry’s highest density per rack unit.
It wasn’t just the advent of NVMe that accelerated storage technology; it was Apeiron’s persistent commitment to removing all of the legacy bottlenecks inherent in SAN environments, including those present in implementations of NVMe-oF. Gone are the storage controllers, gone are the resource-hungry legacy storage protocols, and gone is the external switching hardware needed to route the cluster. By removing all of the I/O-blocking components and software, and passing the data transport in its entirety over a lightweight, hardened Layer 2 Ethernet tunnel, we attained exponential gains in performance, even compared with other NVMe options such as NVMe over Fabrics. Today’s sophisticated, storage-aware applications no longer need these legacy technologies, so why keep them?
Apeiron developed NVMe over Ethernet (NoE) to deliver the performance and cost of server-based scale-out with the manageability of enterprise storage. We wanted to give IT managers the performance they saw with drives installed in servers, plus the benefits of external pooled storage. This lets data centers scale processing and storage resources independently, as they should. While providing elastic management of a large pool of NVMe SSDs under software tools like OpenStack, Docker, or Hadoop, NVMe over Ethernet can uniquely deliver performance to servers that is often better than that of directly installed SSDs, a fact that often surprises IT professionals. Now they can have the best of both worlds.
The ADS was designed with a lossless Ethernet architecture that can scale to thousands of external NVMe drives (which appear to the server as DAS) with a latency overhead of less than 2 µs, built from multi-port server HBAs and 2U NVMe shelves. Clusters scale with linear performance. We took a fresh approach to the design: implementing our data path in high-speed FPGAs, selecting Layer 2 Ethernet as the transport protocol, and passing NVMe commands natively across the fabric. The native transport is critical to ensuring no performance is lost in the pooled environment.
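To illustrate what "passing NVMe commands natively over Layer 2 Ethernet" means conceptually, the sketch below wraps a standard 64-byte NVMe submission queue entry directly in a raw Ethernet frame, with no TCP/IP or fabrics protocol layers in between. The EtherType value and field choices here are illustrative assumptions for the concept, not Apeiron’s actual wire format.

```python
import struct

# Assumed EtherType for the illustration: 0x88B5 is the IEEE 802
# "Local Experimental" EtherType, NOT Apeiron's actual protocol ID.
NOE_ETHERTYPE = 0x88B5

def build_noe_frame(dst_mac: bytes, src_mac: bytes, nvme_sqe: bytes) -> bytes:
    """Wrap a 64-byte NVMe submission queue entry in a raw Layer 2 frame."""
    if len(nvme_sqe) != 64:
        raise ValueError("an NVMe submission queue entry is 64 bytes")
    # 14-byte Ethernet header: destination MAC, source MAC, EtherType.
    header = dst_mac + src_mac + struct.pack("!H", NOE_ETHERTYPE)
    return header + nvme_sqe

# Minimal NVMe I/O Read command (opcode 0x02): opcode, flags,
# command identifier 1, namespace ID 1; remaining dwords zeroed.
sqe = struct.pack("<BBHI56x", 0x02, 0, 1, 1)
frame = build_noe_frame(b"\xaa" * 6, b"\xbb" * 6, sqe)
print(len(frame))  # 78: 14-byte L2 header + 64-byte NVMe command
```

The point of the sketch is the absence of intermediate layers: the command goes from header to wire with nothing but a fixed 14-byte encapsulation, which is why per-command protocol overhead can stay in the microsecond range.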
Apeiron solutions deliver considerably lower TCO, far better latency, much higher bandwidth, and exponentially more IOPS than competing NVMe solid-state drive (SSD) storage or NVMe over Fabrics networks.
We know you're all about big data and you want it fast, so we've provided some data on our ADS platform in the downloads below. Take a look, see what everyone is talking about, then give us a call so we can help you too. 18.4 million IOPS in 2U. Damn, that’s fast.