Folks, my apologies if this has been discussed here before, but can someone
please shed some light on how Hypertable is claiming up to 900% higher
throughput on random reads and up to 1000% on sequential reads in their
performance evaluation vs. HBase (modeled after the performance-evaluation
test in section 7 of the Bigtable paper)?
So if that is the case, I'm not sure how that is a fair test. One
system reads from RAM, the other from disk. The results are as expected.
Why not test one system with SSDs and the other without?
It's really hard to get an apples-to-apples comparison. Even if you are
doing the same workloads on two systems ...
Purtell has more details, but he told me it no longer crashes, just minor
pauses between 50 and 250 ms, as of 1.6.0_23.
Still not usable in a latency-sensitive prod setting. Maybe in other settings?
-ryan
On Wed, Dec 15, 2010 at 11:31 AM, Ted Dunning tdunn...@maprtech.com wrote:
Does anybody have a recent ...
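For anyone who wants to see those pauses for themselves, here is a minimal,
self-contained sketch (not taken from any message in this thread) that polls
the JVM's GarbageCollectorMXBeans and prints cumulative collection counts and
accumulated collection time; the class name and the 5-second interval are
arbitrary choices:

  import java.lang.management.GarbageCollectorMXBean;
  import java.lang.management.ManagementFactory;

  public class GcWatcher {
      public static void main(String[] args) throws InterruptedException {
          // Poll the collector beans and print cumulative counts and time.
          // Deltas between polls give a rough picture of how much time the
          // JVM spends in GC, though not individual pause lengths.
          while (true) {
              for (GarbageCollectorMXBean gc :
                      ManagementFactory.getGarbageCollectorMXBeans()) {
                  System.out.printf("%s: collections=%d, totalTimeMs=%d%n",
                          gc.getName(), gc.getCollectionCount(),
                          gc.getCollectionTime());
              }
              Thread.sleep(5000);
          }
      }
  }

For precise per-pause numbers you would still want the GC log (-verbose:gc
and friends) rather than these cumulative counters.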
Thanks Ryan and Ted. I also think that if they were using tcmalloc, it would
have given them a further advantage, but as you said, not much is known about
the test source code.
On Wed, Dec 15, 2010 at 2:22 PM, Ryan Rawson ryano...@gmail.com wrote:
So if that is the case, I'm not sure how that is a fair test ...
On Wed, Dec 15, 2010 at 11:44 AM, Gaurav Sharma
gaurav.gs.sha...@gmail.com wrote:
Thanks Ryan and Ted. I also think that if they were using tcmalloc, it would
have given them a further advantage, but as you said, not much is known about
the test source code.
I think Hypertable does use tcmalloc or ...
Why not run multiple JVMs per machine?
Chad
-Original Message-
From: Ryan Rawson [mailto:ryano...@gmail.com]
Sent: Wednesday, December 15, 2010 11:52 AM
To: dev@hbase.apache.org
Subject: Re: Hypertable claiming upto 900% random-read throughput vs HBase
The malloc thing was pointing out ...
From: Ryan Rawson [mailto:ryano...@gmail.com]
Sent: Wednesday, December 15, 2010 11:58 AM
To: dev@hbase.apache.org
Subject: Re: Hypertable claiming upto 900% random-read throughput vs HBase
Why do that? You reduce the cache effectiveness and up the logistical
complexity. As a stopgap maybe, but not as a long-term solution.
From: Ryan Rawson [ryano...@gmail.com]
Sent: Wednesday, December 15, 2010 11:52 AM
To: dev@hbase.apache.org
Subject: Re: Hypertable claiming upto 900% random-read throughput vs HBase
The malloc thing was pointing out that we have to contend with Xmx and
GC. So it makes it harder for us ...
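To make the Xmx point concrete, a tiny sketch (again, not from the thread) of
the ceiling a single JVM lives under; everything a region server caches
on-heap has to fit below maxMemory(), and all of it is GC-managed, unlike a
C++ process that can malloc most of the machine's RAM:

  public class HeapBound {
      public static void main(String[] args) {
          Runtime rt = Runtime.getRuntime();
          long max       = rt.maxMemory();    // hard ceiling set by -Xmx
          long committed = rt.totalMemory();  // heap currently committed
          long used      = committed - rt.freeMemory();
          // Anything cached on-heap must stay under 'max' and is subject to GC.
          System.out.printf("max=%,d committed=%,d used=%,d%n",
                  max, committed, used);
      }
  }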
- Andy
--- On Wed, 12/15/10, Ted Dunning tdunn...@maprtech.com wrote:
From: Ted Dunning tdunn...@maprtech.com
Subject: Re: Hypertable claiming upto 900% random-read throughput vs HBase
To: dev@hbase.apache.org
Date: Wednesday, December 15, 2010, 11:31 AM
Does anybody have a recent ...
From: Ryan Rawson ryano...@gmail.com
Purtell has more details, but he told me it no longer crashes, just minor
pauses between 50 and 250 ms, as of 1.6.0_23.
That's right.
On EC2 m1.xlarge, so that's a big caveat... per-test-iteration variance on EC2
in general is ~20%, and EC2 hardware is 2? generations ...
Along the lines of Terracotta BigMemory, apparently what they are actually
doing is just using the DirectByteBuffer class (see this forum post:
http://forums.terracotta.org/forums/posts/list/4304.page), which is basically
the same as using malloc - it gives you non-GC access to a giant pool of
memory.
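For illustration, a minimal sketch (not Terracotta's or Hypertable's actual
code) of what that non-GC access via a direct buffer looks like; the 256 MB
size and the offsets are made-up values:

  import java.nio.ByteBuffer;

  public class OffHeapDemo {
      public static void main(String[] args) {
          // 256 MB allocated outside the Java heap: it does not count against
          // -Xmx and is never scanned or relocated by the garbage collector.
          ByteBuffer slab = ByteBuffer.allocateDirect(256 * 1024 * 1024);

          // Absolute-offset reads and writes, much like a malloc'd region in C.
          slab.putLong(0, 42L);
          slab.put(8, (byte) 7);
          System.out.println(slab.getLong(0));  // prints 42
      }
  }

The trade-off is that direct memory is capped by -XX:MaxDirectMemorySize
rather than -Xmx, and the application has to do its own offset bookkeeping and
reuse instead of leaning on the collector.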