@Aaron you're right and I was wrong!
2011/10/20 Yang tedd...@gmail.com
found it: https://issues.apache.org/jira/browse/CASSANDRA-3387
On Thu, Oct 20, 2011 at 1:37 PM, aaron morton aa...@thelastpickle.com
wrote:
It's unlikely that HH is the issue. (Disclaimer, am not familiar with HH in 1.0, I know it's changed a bit.)
Actually, this is only an issue in HH: since HH writes all the stored messages into the same row, locking is a problem.
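To illustrate the point about hints landing in one row, here is a toy model (hypothetical, not Cassandra's actual code): with one lock per row, funnelling every stored hint into a single row key means every writer thread contends for the same lock, so hint writes serialize.

```python
import threading
from collections import defaultdict

# Toy model of a hint store with one lock per row key.
# If all hints for a down node go into a single row, every
# writer queues behind the same lock.
class HintStore:
    def __init__(self):
        self.rows = defaultdict(list)
        self.locks = defaultdict(threading.Lock)

    def append_hint(self, row_key, mutation):
        with self.locks[row_key]:  # serializes all writers to this row
            self.rows[row_key].append(mutation)

store = HintStore()

def writer(row_key, n):
    for i in range(n):
        store.append_hint(row_key, f"mutation-{i}")

# "same row" scheme: 4 threads all target one row key -> one hot lock
threads = [threading.Thread(target=writer, args=("down-node-C", 100))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(store.rows["down-node-C"]))  # 400: all hints behind one lock
```

Spreading hints across distinct row keys (as the fix in CASSANDRA-3387 discusses) would give each writer its own lock and remove the contention.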
It's unlikely that HH is the issue. (Disclaimer, am not familiar with HH in 1.0, I know it's changed a bit.)
Take a look at the TP stats; what's happening?
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
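For reference, the thread-pool stats Aaron mentions come from `nodetool tpstats`; a pool with a growing Pending count is backed up. A rough sketch of how to spot that in the output (the sample figures below are invented for illustration):

```python
# Flag backed-up thread pools in (sample) `nodetool tpstats` output.
# The numbers here are made up for illustration only.
SAMPLE = """\
Pool Name                    Active   Pending      Completed
ReadStage                         0         0         102930
MutationStage                    32      4821        9051734
HintedHandoff                     1       317           1202
"""

def backed_up(tpstats_text, threshold=0):
    flagged = []
    for line in tpstats_text.splitlines()[1:]:  # skip the header row
        parts = line.split()
        name, pending = parts[0], int(parts[2])
        if pending > threshold:
            flagged.append((name, pending))
    return flagged

print(backed_up(SAMPLE))  # [('MutationStage', 4821), ('HintedHandoff', 317)]
```

A large MutationStage or HintedHandoff backlog after killing a node would point at exactly the kind of contention discussed in CASSANDRA-3387.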
I'm using a Cassandra version compiled from the 1.0.0 GitHub HEAD.
I have 3 nodes, A, B and C; on node A I run a client, which talks only
to B as the coordinator.
The performance is pretty good: a QUORUM read+write takes 10 ms.
But then I shut down C, and the performance quickly starts to degrade.
Hi, what is your replication_factor?
3
Sorry, I forgot this important info.
On Oct 19, 2011 11:31 AM, Jérémy SEVELLEC jsevel...@gmail.com wrote:
Hi, what is your replication_factor?
Ok.
I think a degradation could be normal, because your cluster is in a degraded
state when a node is down.
With a replication_factor of 3 and a 3-node cluster, every piece of data you
write is replicated to each node. Since one node is down, when writing it's
impossible to send a replica to the down node.
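To make the arithmetic concrete: with replication_factor 3, a QUORUM operation needs floor(3/2) + 1 = 2 replicas to respond, so reads and writes can still succeed with one of the three nodes down; the coordinator just stores a hint for the missing replica and replays it later (hinted handoff). A sketch of the quorum math:

```python
# Quorum size for Cassandra-style consistency levels:
# quorum = floor(RF / 2) + 1
def quorum(rf):
    return rf // 2 + 1

RF = 3
live_replicas = 2  # nodes A and B up, C down

q = quorum(RF)
print(q)                   # 2
print(live_replicas >= q)  # True: QUORUM reads/writes still succeed
# The write C missed is stored as a hint and replayed when C returns.
```

So a failing node shouldn't make QUORUM operations fail outright at RF=3, but the extra hint-writing work on the coordinator is where the slowdown in this thread comes from.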