By the way "10% faster" does not necessarily mean 10% more requests.....

https://issues.apache.org/jira/browse/CASSANDRA-2975

https://issues.apache.org/jira/browse/CASSANDRA-3772

Also, if you follow the tickets: "My tests show that Murmur3Partitioner
actually is worse than MD5 with high-cardinality indexes, here is what I
did (kernel 3.0.0-19, 2.2GHz quad-core Opteron, 2GB RAM):

For each test:
- wiped all of the data directories and re-compiled with 'clean'
- ran stress with -c 50 -C 500 -S 512 -n 50000 (where -c is the number of
  columns, -C the value cardinality and -S the value size in bytes) 4 times
  (to make it hot)

RandomPartitioner: average op rate is 845.
Murmur3Partitioner: average op rate is 721."

Then later:

"I have removed ThreadLocal declaration from the M3P (and cleaned
whitespace errors) which was the bottleneck, after re-running tests with
that modification M3P beats RP with 903 to 847."
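
For anyone wondering why a ThreadLocal would be the bottleneck: it means
every token calculation pays for a thread-local map lookup before it even
starts hashing, and stress hits that path for every key. This is not the
actual Cassandra code or the patch from CASSANDRA-3772, just a rough Java
sketch of the before/after shape (the class names and hash128 below are
stand-ins, not real Cassandra identifiers):

// Rough illustration only -- not Cassandra's Murmur3Partitioner.
class WithThreadLocal {
    // Per-thread scratch buffer; every call pays for the lookup.
    private static final ThreadLocal<long[]> SCRATCH = new ThreadLocal<long[]>() {
        @Override protected long[] initialValue() { return new long[2]; }
    };

    long token(byte[] key) {
        long[] out = SCRATCH.get();   // thread-local map lookup on every key
        hash128(key, out);
        return out[0];
    }

    static void hash128(byte[] key, long[] out) { /* stand-in for the 128-bit Murmur3 hash */ }
}

class WithoutThreadLocal {
    long token(byte[] key) {
        long[] out = new long[2];     // tiny, short-lived allocation instead
        hash128(key, out);
        return out[0];
    }

    static void hash128(byte[] key, long[] out) { /* stand-in for the 128-bit Murmur3 hash */ }
}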


847/903 = 0.937984496, i.e. 903/847 is about 1.066.

I think that is 6 or 7%, right? Not 10%. And other things in Cassandra are
orders of magnitude slower than computing hashes: network, disk I/O. (If
hashing is only a few percent of the total request time, a 6% faster hash
buys well under 1% more throughput.) Also, is this test only measuring
workloads that use 2ndary indexes? What about people who do not care about
2ndary indexes? I am sure it is faster and better, but I am not going to
lose sleep over it or rebuild all my clusters just to change the
partitioner. So for new clusters I will probably use the default, but I am
not going to upgrade existing ones. Let them stay RP.
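
In cassandra.yaml terms that just means existing clusters keep whatever
partitioner line they already have when they move to the 1.2 config file,
for example (only the relevant line shown, class names as they ship with
Cassandra):

# existing cluster upgraded in place -- keep the old partitioner
partitioner: org.apache.cassandra.dht.RandomPartitioner

# brand new 1.2 cluster -- the new default
# partitioner: org.apache.cassandra.dht.Murmur3Partitioner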

Edward

On Thu, Jan 3, 2013 at 4:21 AM, Alain RODRIGUEZ <arodr...@gmail.com> wrote:

> Hello, I have read the following in the "changes.txt" file.
>
> "The default partitioner for new clusters is Murmur3Partitioner,
> which is about 10% faster for index-intensive workloads.  Partitioners
> cannot be changed once data is in the cluster, however, so if you are
> switching to the 1.2 cassandra.yaml, you should change this to
> RandomPartitioner or whatever your old partitioner was."
>
> Does this mean that there is absolutely no way to switch to the new
> partitioner for people who are already using Cassandra?
>
> Alain
>
