Hi,
I have a column family created with the leveled compaction strategy. If I
execute the nodetool compact command, will the column family be compacted
using the size-tiered compaction strategy?
If yes, after the major size-tiered compaction finishes, will leveled
compaction be triggered again at any point?
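For reference, these are the commands in question (a sketch only; the keyspace and column family names are placeholders):

```
# Trigger a major compaction on one column family
# ("my_keyspace" and "my_cf" are placeholder names):
nodetool compact my_keyspace my_cf

# Stop any compactions currently in progress:
nodetool stop COMPACTION
```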
Hi,
In general, leveled compaction is I/O heavy, so when there is a bunch of
writes, do we need to stop leveled compactions at all?
I found nodetool stop COMPACTION, which states that it stops compaction in
progress; does this work for any type of compaction? Also, it states in the
documents 'eventually
compaction_throughput_mb_per_sec (16 MB default) and/or
explicitly setting concurrent_compactors (defaults to the number of cores,
IIRC).
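Both knobs mentioned above live in cassandra.yaml; a fragment sketch (the concurrent_compactors value shown is only an example):

```yaml
# cassandra.yaml (fragment) -- throttle compaction I/O node-wide.
compaction_throughput_mb_per_sec: 16   # default; 0 disables throttling
# concurrent_compactors defaults to the number of cores; set it explicitly
# to limit how many compactions run in parallel (example value):
concurrent_compactors: 2
```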
On Thu, Sep 19, 2013 at 10:58 AM, rash aroskar
rashmi.aros...@gmail.com wrote:
Hi,
In general, leveled compaction is I/O heavy, so when there is a bunch
Hello,
I am planning my new Cassandra 1.2.5 cluster with all nodes in a single
region but divided equally between 2 availability zones. I want to make sure
that with replication factor 2 I get 1 copy in each availability zone. As far
as I know, using the EC2Snitch should take care of this.
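One way to express that intent as a keyspace definition (a sketch, assuming the EC2Snitch, which maps the EC2 region to the data center name and the availability zone to the rack; the keyspace and region names are placeholders):

```sql
-- Sketch: with EC2Snitch, region = data center and AZ = rack.
-- NetworkTopologyStrategy places replicas on distinct racks where possible,
-- so RF 2 across 2 AZs yields one copy per AZ.
CREATE KEYSPACE my_keyspace
  WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'us-east': 2   -- placeholder region name; 2 replicas in this DC
  };
```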
Thanks for the quick response, Rob.
Are you suggesting deploying 1.2.9 only if using a Cassandra DC outside of
EC2, or if I wish to use rack replication at all?
On Mon, Sep 9, 2013 at 12:43 PM, Robert Coli rc...@eventbrite.com wrote:
On Mon, Sep 9, 2013 at 8:56 AM, rash aroskar rashmi.aros
Hello,
Has anyone used AWS VPC for a Cassandra cluster? The static private IPs of a
VPC must be helpful in case of node replacement.
Please share any related experiences, or suggest ideas for static IPs in EC2
for Cassandra.
-Rashmi
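For what it's worth, the per-node addresses involved are set in cassandra.yaml; a fragment sketch for a node with a static private VPC IP (all addresses below are placeholders):

```yaml
# cassandra.yaml (fragment) -- addresses for a node in a VPC.
# All IPs here are placeholders.
listen_address: 10.0.1.15    # static private IP inside the VPC
rpc_address: 10.0.1.15       # address clients connect to
# broadcast_address is only needed when other nodes must reach this one
# via a different (e.g. public) address:
# broadcast_address: 203.0.113.15
```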
What do you mean by not adding it as a seed? If I add a new node to an
existing cluster, should the new node not be added as a seed in
cassandra.yaml for the other nodes in the ring?
When should it be added as a seed then? Once the cluster is balanced, or
after manually running the rebuild command?
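For context, the seed list in question is the one in each node's cassandra.yaml; a fragment sketch (the IPs are placeholders):

```yaml
# cassandra.yaml (fragment) -- a new node bootstraps by contacting these
# seeds, and a bootstrapping node should not list itself as a seed.
# The IPs below are placeholders.
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "10.0.1.10,10.0.1.11"
```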
On Wed, Aug
Hi,
If I add some data to a Cassandra cluster with a TTL of, let's say, 2 days,
and take a snapshot of it before it expires, and then use the snapshot to
load the data into a different/the same cluster, will the data from the
snapshot carry the TTL of 2 days (from the time when the snapshot was
created)? If not, can I
first compacted after you reload
it. There's not an easy way to prevent this from happening.
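The reply above matches how expiration works: as I understand it, Cassandra stores an absolute expiration timestamp (write time + TTL) with each expiring cell, so snapshotted data keeps its original deadline rather than restarting the countdown on restore. A minimal sketch of the arithmetic (the function names are illustrative, not Cassandra APIs):

```python
def expiration_time(write_ts, ttl_seconds):
    # Cassandra persists an absolute expiration timestamp (write time + TTL),
    # not a countdown, so a snapshot carries the original deadline.
    return write_ts + ttl_seconds

def remaining_ttl(write_ts, ttl_seconds, now):
    # TTL left after restoring a snapshot: measured from the original
    # write time, not from the moment of the restore.
    return max(0, expiration_time(write_ts, ttl_seconds) - now)

write_ts = 0                  # data written at t=0 with a 2-day TTL
ttl = 2 * 24 * 3600
restore_at = 1 * 24 * 3600    # snapshot restored one day later
print(remaining_ttl(write_ts, ttl, restore_at))  # 86400 -> one day left
print(remaining_ttl(write_ts, ttl, 3 * 24 * 3600))  # 0 -> already expired
```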
On Thu, Aug 15, 2013 at 1:13 PM, rash aroskar rashmi.aros...@gmail.com wrote:
Hi,
If I add some data to a Cassandra cluster with a TTL of, let's say, 2 days,
and take a snapshot of it before it expires. If I use
Aaron - I read about the virtual nodes at
http://www.datastax.com/dev/blog/virtual-nodes-in-cassandra-1-2
On Tue, Aug 6, 2013 at 4:49 AM, Richard Low rich...@wentnet.com wrote:
On 6 August 2013 08:40, Aaron Morton aa...@thelastpickle.com wrote:
The reason for me looking at virtual
Hi,
I am setting up a new cluster for Cassandra 1.2.5, and this is my first time
using Cassandra compression.
I read about the compressors and gathered that the Snappy compressor gives
better compression but is slightly slower than the LZ4 compressor. I just
wanted to know your experience and/or opinions as to *Snappy
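For reference, the compressor is chosen per column family; a DDL sketch in the CQL 3 syntax of that era (keyspace, table, and column names are placeholders):

```sql
-- Sketch: per-table compressor choice (all names are placeholders).
CREATE TABLE my_ks.my_table (
  id text PRIMARY KEY,
  value text
) WITH compression = {'sstable_compression': 'SnappyCompressor'};
-- or, for the faster compressor (available in later 1.2 releases, IIRC):
--   WITH compression = {'sstable_compression': 'LZ4Compressor'};
```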
changes since
then. Be careful about the compaction strategy you choose, and double-check
the options.
Regards,
Romain
rash aroskar rashmi.aros...@gmail.com wrote on 25/07/2013 23:25:11:
From: rash aroskar rashmi.aros...@gmail.com
To: user@cassandra.apache.org
Date: 25/07/2013 23:25
We observed the same behavior. During the last repair, the data distribution
across nodes was imbalanced as well, resulting in one node bloating.
On Aug 1, 2013 12:36 PM, Carl Lerche m...@carllerche.com wrote:
Hello,
I read in the docs that `nodetool repair` should be regularly run unless
no delete is
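The usual form of that routine maintenance (a sketch; the keyspace name is a placeholder):

```
# Repair only the ranges this node is primary for (-pr), so each range
# is repaired once rather than once per replica; run on every node
# within gc_grace_seconds ("my_keyspace" is a placeholder):
nodetool repair -pr my_keyspace
```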
Hi,
I am upgrading my Cassandra cluster from 0.8 to 1.2.5.
In Cassandra 1.2.5, the 'num_tokens' attribute confuses me.
I understand that it distributes multiple tokens per node, but I am not
clear on how that helps performance or load balancing. Can anyone
elaborate? Has anyone used this
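One way to see the load-balancing effect of num_tokens: when every node places many random tokens on the ring, each node's share of the ring concentrates near 1/N, so nodes balance without manually picking tokens. A toy simulation (not Cassandra code; the ring size and random token placement are simplifying assumptions):

```python
import random

RING = 2 ** 64  # simplified stand-in for the partitioner's token space

def ownership(num_nodes, tokens_per_node, seed=42):
    """Fraction of the ring each node owns when every node places
    `tokens_per_node` random tokens on it (a toy model of num_tokens)."""
    rng = random.Random(seed)
    tokens = sorted(
        (rng.randrange(RING), node)
        for node in range(num_nodes)
        for _ in range(tokens_per_node)
    )
    owned = [0] * num_nodes
    for i, (tok, node) in enumerate(tokens):
        prev = tokens[i - 1][0]             # i == 0 wraps to the last token
        owned[node] += (tok - prev) % RING  # size of the range this token owns
    return [o / RING for o in owned]

# With 1 token per node the shares can be badly skewed; with 256 tokens
# per node (the 1.2 default) they cluster near 1/4 for a 4-node cluster.
print(ownership(4, 1))
print(ownership(4, 256))
```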