Flo and Matt from Mesosphere suggested I query this list. I got some great
responses on the user@ list over the past few weeks, and a few people
suggested I ping the dev list.

 

We are putting the finishing touches on a hyperconverged system that can
push 10M IOPS at <200 microsecond latency for datasets between 5 and 280 TB
per node. I'm looking for people building applications that want to
leverage this kind of horsepower without having to shard their data across
10s or 100s of machines. We find most other approaches struggle to perform
at all above 500K-1M IOPS, where we provide consistent performance up to
10M IOPS. My challenge is that not all apps want this much I/O at scale.

 

We fully support grid computing, we just think the nodes could/should be a
"little" larger for analytics workloads.

 

I have a bit more background on what we are building below. We are in a
market validation and feedback phase with a beta product now and a
production product coming later this year.

 

I am not looking for sales targets, but rather market validation and use
case discussions. I appreciate any ideas I can get from the Mesos
community, as I know you are all looking at large-scale computing challenges.

 

Cheers,

Kevin

 

 

Kevin Kramer

 

Graphite Systems

"10M+ IOPS in a single low-latency 20-140+TB Appliance"

[email protected]

+1 (415) 475-1987

 

 

 

Background:

Graphite Systems is a 2.5-year-old stealth-mode company funded by NEA. We
are seeking market feedback, advice, and a couple of testing partners as we
ready our product for market in the hyperconverged systems space.

We are still working with a beta product and are not selling it at this
time (product GA will be later this year, so these are
advisory/brainstorming conversations, not sales engagements).


The short story is we are at 10x today's approaches, like Exadata, at
around 10% of the cost:

- 10X the "in-memory" capacity of today's servers (5.5TB of RAM plus
20-280TB of FnRAM accessible at 60+GB/s)

- 10X the speed of typical Flash Arrays (69GB/s) for larger datasets

- 10X the maximum IOPS of other converged systems (~10M IOPS)

- 5-8X lower latency than the leading Flash Arrays (with little
degradation for blocks larger than 8K, making this 10-20X for larger block
sizes)


We are a server/system, not storage: we have 60 cores and 5.5TB of RAM
available in the system to process application, OLAP, and data warehouse
workloads, while providing access to this 20-280TB datastore for
IOPS-intensive applications at very low latency. Many flash array vendors
struggle to reach 1M IOPS at 1-3 ms latency and can't scale above 20/30/40TB.
Graphite is looking at 10M+ IOPS and 0.13-0.20 ms (130-200 microseconds)
latency for larger datasets of 20-280TB of FnRAM. We can do things like
read a sector immediately after writing it, which is a challenge for most
other solutions.
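For anyone who wants to sanity-check the figures above, a quick back-of-envelope sketch relating IOPS, block size, and latency (the block sizes here are illustrative assumptions, not Graphite specs):

```python
# Back-of-envelope checks on the quoted figures.
# Block sizes (4K/8K) are assumed for illustration only.

def implied_bandwidth_gbs(iops, block_bytes):
    """Bandwidth in GB/s implied by a given IOPS rate and block size."""
    return iops * block_bytes / 1e9

def outstanding_ios(iops, latency_s):
    """Little's law: average in-flight I/Os needed to sustain the rate."""
    return iops * latency_s

# 10M IOPS at 4K blocks -> ~41 GB/s; at 8K blocks -> ~82 GB/s,
# the same ballpark as the quoted 60-69 GB/s throughput numbers.
print(implied_bandwidth_gbs(10e6, 4096))   # 40.96
print(implied_bandwidth_gbs(10e6, 8192))   # 81.92

# Sustaining 10M IOPS at 200 microseconds implies ~2000 I/Os in flight.
print(outstanding_ios(10e6, 200e-6))       # 2000.0
```

So the IOPS, bandwidth, and latency claims are at least internally consistent with each other.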

 
