On 3/11/2012 9:17 PM, Peter Schuller wrote:
multithreaded_compaction: false
Set to true.
I did try that. I didn't see it go any faster. The CPU load was lower,
which I assumed meant fewer bytes/sec being compressed
(SnappyCompressor). I didn't see multiple compactions running in parallel.
Nodetool
If it's a Hector thing you may have better luck on the Hector user group.
http://groups.google.com/group/hector-users
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 10/03/2012, at 8:33 AM, Daning Wang wrote:
Thanks Maciej. We have
It may be the case that the joining node does not have enough information. But
there is a default 30 second delay while the node waits for the ring
information to stabilise.
What version are you using ?
Next time you add a new node, can you try it with logging set to DEBUG. If you
get the
Thank you for the swift response.
Cem.
On Sun, Mar 11, 2012 at 11:03 PM, Peter Schuller
peter.schul...@infidyne.com wrote:
I am using a TTL of 3 hours and GC grace 0 for a CF. I have a normal CF
that has records with a TTL of 3 hours and I don't send any delete
requests. I just wonder if using GC
On 3/12/12 9:50 AM, aaron morton wrote:
It may be the case that the joining node does not have enough
information. But there is a default 30 second delay while the node
waits for the ring information to stabilise.
What version are you using ?
1.0.7
Next time you add a new node can you try
In this case, where you know the query upfront, I add a custom secondary index
using another CF to support the query. It's a little easier here because the
data won't change.
UserLookupCF (using composite types for the key value)
row_key: system_name:id e.g. facebook:12345 or twitter:12345
Alternate would be to add another row to your user CF specific for Facebook
ids. Column ID would be the Facebook identifier and value would be your
internal uuid.
Consider when you want to add another service like twitter. Will you then
add another CF per service or just another row specific
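The lookup-CF pattern above can be sketched with a toy in-memory model. The names (`UserLookupCF` stand-in, `register_user`, the example ids) are illustrative only, not from a real schema:

```python
# Toy in-memory sketch of the manual secondary-index pattern: a main
# "User CF" keyed by internal uuid, plus a lookup "CF" keyed by a
# composite "system_name:external_id" string, e.g. "facebook:12345".
import uuid

users = {}        # stands in for the main User CF: internal uuid -> profile
user_lookup = {}  # stands in for UserLookupCF: "system:id" -> internal uuid

def register_user(system_name, external_id, profile):
    internal_id = str(uuid.uuid4())
    users[internal_id] = profile
    # composite key, e.g. "facebook:12345" or "twitter:12345"
    user_lookup["%s:%s" % (system_name, external_id)] = internal_id
    return internal_id

def find_user(system_name, external_id):
    internal_id = user_lookup.get("%s:%s" % (system_name, external_id))
    return users.get(internal_id) if internal_id else None

register_user("facebook", "12345", {"name": "Alice"})
print(find_user("facebook", "12345"))  # {'name': 'Alice'}
```

Adding Twitter support is then just more rows in the same lookup structure, rather than a new CF per service.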
*We have a 4-node Cassandra cluster* with RF = 3 (nodes named 'A' to
'D', initial tokens:
*A (25%)*: 20543402371996174596346065790779111550,
*B (25%)*: 63454860067234500516210522518260948578,
*C (25%)*: 106715317233367107622067286720208938865,
*D (25%)*:
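For reference, evenly spaced `initial_token` values for the RandomPartitioner (token range 0 to 2^127) follow the textbook formula `token_i = i * 2**127 // N`. The tokens quoted above are this cluster's own (offset) values; the sketch below is only the general recipe:

```python
# Evenly spaced initial_token values for an N-node ring under
# Cassandra's RandomPartitioner (tokens in 0 .. 2**127 - 1).
def balanced_tokens(n):
    return [i * (2 ** 127) // n for i in range(n)]

for name, tok in zip("ABCD", balanced_tokens(4)):
    print(name, tok)
```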
It's my understanding then that, for this use case, bloom filters are of
little importance and that I can
Yes.
AFAIK there is only one position seek (that will use the bloom filter) at the
start of a get_range_slice request. After that the iterators step over the rows
in the -Data files.
I don't understand why I
don't get multiple concurrent compactions running; that's what would
make the biggest performance difference.
concurrent_compactors
Controls how many concurrent compactions to run; by default it's the number of
cores on the machine.
If you are not CPU bound check
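These knobs live in cassandra.yaml; a sketch with illustrative values (settings as they appear in the 1.0-era config, values made up for the example):

```yaml
# cassandra.yaml (illustrative values, not a recommendation)

# How many compaction tasks run in parallel; defaults to the number
# of cores when unset.
concurrent_compactors: 4

# Split each individual compaction across multiple threads.
multithreaded_compaction: true

# Compaction throttling cap; a low value here can also make
# compaction look slow even when the node is not CPU bound.
compaction_throughput_mb_per_sec: 16
```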
Modify this line in log4j-server.properties. It will normally be located in
/etc/cassandra
https://github.com/apache/cassandra/blob/trunk/conf/log4j-server.properties#L21
Change INFO to DEBUG
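Concretely, the line to change in /etc/cassandra/log4j-server.properties is the rootLogger level (the appender names after the level may differ by install):

```
# before:
# log4j.rootLogger=INFO,stdout,R
# after:
log4j.rootLogger=DEBUG,stdout,R
```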
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
I don't know if it helps, but the only thing I see on the cluster's
nodes is:
== /var/log/cassandra/output.log ==
INFO 10:57:28,530 InetAddress /10.0.1.70 is now dead.
when I try to join the node 10.0.1.70 to the cluster
On 3/12/12 11:27 AM, Cyril Scetbon wrote:
It's done.
Nothing new on
Hi,
If you use SizeTieredCompactionStrategy, you should have 2x the disk space
to be on the safe side. So if you want to store 2TB of data, you need
a partition size of at least 4TB. LeveledCompactionStrategy is available
in 1.x and is supposed to require less free disk space (but comes at the price
of
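The 2x rule of thumb comes from the worst case where a major compaction rewrites every SSTable, so the old and the new copies coexist on disk until it finishes. As back-of-envelope arithmetic (the helper name is made up):

```python
# Worst-case disk sizing for SizeTieredCompactionStrategy: during a
# major compaction the node may briefly hold both the old SSTables and
# the rewritten copy, so provision roughly twice the live data size.
def required_disk_tb(live_data_tb, headroom_factor=2.0):
    return live_data_tb * headroom_factor

print(required_disk_tb(2.0))  # 4.0 -> storing 2TB wants ~4TB of disk
```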
We were aware of the compaction overhead, but still don't understand why
that should happen: node 'D' was in a stable condition, had worked for at
least a month, had all the data for its token range, and was comfortable
within that disk space.
Why does the node suddenly need 2x more space for data it already has? Why
What version of Cassandra do you have?
On 12/03/2012 11:38, Vanger wrote:
We were aware of the compaction overhead, but still don't understand why
that should happen: node 'D' was in a stable condition, had worked for at
least a month, had all the data for its token range and was comfortable
with such disk
Cassandra v1.0.8
Once again: 4-node cluster, RF = 3.
On 12.03.2012 16:18, Rustam Aliyev wrote:
What version of Cassandra do you have?
On 12/03/2012 11:38, Vanger wrote:
We were aware of the compaction overhead, but still don't understand why
that should happen: node 'D' was in a stable
On Mon, Mar 12, 2012 at 4:44 AM, aaron morton aa...@thelastpickle.com wrote:
I don't understand why I
don't get multiple concurrent compactions running; that's what would
make the biggest performance difference.
concurrent_compactors
Controls how many concurrent compactions to run, by
It's hard to answer this question because there is a whole bunch of
operations which may cause disk usage growth: repair, compaction, move,
etc. Any combination of these operations will only make things worse.
But let's assume that in your case the only operation increasing disk
usage was move.
It's my understanding then that, for this use case, bloom filters are of
little importance and that I can
OK. To summarise the actions that got us out of this situation, in the
hope that it may help others one day, we did the following:
1) upgrade to 1.0.7
2) set fp_ratio=0.99
3)
Just ignore it: https://issues.apache.org/jira/browse/CASSANDRA-3955
On Mon, Mar 12, 2012 at 9:31 PM, Roshan codeva...@gmail.com wrote:
Hi
I have upgraded our development Cassandra cluster (2 nodes) from 1.0.6 to
1.0.8.
After the upgrade to 1.0.8, one node keeps trying to send