Hadoop

A key challenge of Hadoop cluster implementations is how to accelerate performance, and so make faster business decisions, without breaking the bank or adding to data center sprawl. When faced with I/O storage infrastructure limitations, the usual answer is to add servers. However, Hadoop management solutions from Cloudera or Hortonworks charge on a per-server basis, so scaling the cluster merely to gain storage becomes an expensive proposition. This has limited Hadoop to applications where performance is not critical, such as large data lakes built on HDDs or overnight routine analytics. Yet Hadoop, with building blocks like Spark, is ideally suited to real-time pipelined processing for deep learning and to real-time analytics on petabyte-scale datasets.

Using Hadoop with Apeiron external NVMe SSDs instead of internal HDDs increases Hadoop read performance by 49.5x and write performance by 11.6x while reducing the number of DataNodes required by 50%. Even compared with internal SSDs, Apeiron accelerates Hadoop read performance by 8.7x and write performance by 2.6x with the same DataNode reduction. Finally, this level of performance is achieved with 40% fewer Hadoop servers.
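To make the Spark point concrete, below is a minimal sketch of the kind of scan-and-aggregate job whose runtime is dominated by storage I/O, which is where faster DataNode media pays off. The HDFS path, column name, and application name are hypothetical placeholders for illustration, not details from Apeiron's deployments.

```scala
// Minimal Spark sketch: a scan-and-aggregate job over HDFS-resident data.
// The path and column name below are hypothetical placeholders.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.desc

object EventCounts {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("EventCounts")
      .getOrCreate()

    // Read Parquet data from HDFS; the scan phase of a job like this is
    // bound by the DataNodes' storage throughput (HDD vs. SSD vs. NVMe).
    val events = spark.read.parquet("hdfs:///data/events") // hypothetical path

    // Count events per type; shuffle-heavy stages also benefit from fast I/O.
    events.groupBy("event_type") // hypothetical column
      .count()
      .orderBy(desc("count"))
      .show(20)

    spark.stop()
  }
}
```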

In the News

Apeiron Data Systems Announces Dedicated Splunk Appliance With a Proven 10-90x Performance Advantage

FOLSOM, CA, USA, September 20, 2017 /EINPresswire.com/ — Apeiron announced today the immediate availability of the Apeiron Splunk Appliance. This all-in-one NVMe appliance takes the guesswork out of deploying Splunk environments of any size and performance profile.
