Are there any objections to that plan from your point of view?
Thanks in advance!
Andi
From: Aaron Morton [aa...@thelastpickle.com]
Sent: Wednesday, December 18, 2013 3:14 AM
To: Cassandra User
Subject: Re: Unbalanced ring with C* 2.0.3 and vnodes after adding
Check the logs for messages about nodes going up and down, and also look at the
MessagingService MBean for timeouts. If the node in DR2 times out replying to
DR1, the DR1 node will store a hint.
Also when hints are stored they are TTL'd to the gc_grace_seconds for the CF
(IIRC). If that's low
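A toy sketch of why a low gc_grace_seconds matters here, assuming (as recalled above) that hints are TTL'd to the CF's gc_grace_seconds: if the remote DC is unreachable for longer than that TTL, the hint expires before it can be delivered and only repair will reconcile the data. The function below is illustrative, not Cassandra code.

```python
# Toy model: a hint stored at time `stored_at` expires after
# gc_grace_seconds. If the target node only comes back after the TTL,
# the hint is silently dropped and repair is the only way to catch up.

def hint_delivered(stored_at, node_back_at, gc_grace_seconds):
    """True if the hint is still alive when the target node returns."""
    return node_back_at - stored_at <= gc_grace_seconds

# With the default gc_grace_seconds (10 days), a 1-hour WAN outage is fine:
assert hint_delivered(0, 3600, 864000)
# But with gc_grace_seconds lowered to 1 hour, a 2-hour outage drops
# the hint, leaving the remote DC out of sync until the next repair:
assert not hint_delivered(0, 7200, 3600)
```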
Wanted to add one more thing:
I can also tell that the numbers are not consistent across DRs this way
-- I have a column family with really wide rows (a couple million
columns).
DC1 reports higher column counts than DC2. DC2 only becomes consistent
after I do the command a couple of times
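A toy sketch of why DC2's counts converge only after repeated reads: when a read at a consistency level that touches both DCs detects a mismatch, read repair writes the reconciled columns back to the stale replicas, so each pass narrows the gap. The names below are illustrative, not Cassandra internals.

```python
# Toy model of read repair: a read that touches several replicas merges
# their columns and writes the union back, so repeated counts converge.

def quorum_read_with_repair(replicas):
    """Merge the replicas touched by the read and push the union back."""
    merged = set().union(*replicas)
    for r in replicas:
        r |= merged          # read repair: write missing columns back
    return len(merged)

dc1 = {"c%d" % i for i in range(1000)}   # up-to-date replica
dc2 = {"c%d" % i for i in range(600)}    # stale replica in the other DC

assert len(dc2) == 600
count = quorum_read_with_repair([dc1, dc2])
assert count == 1000
assert len(dc2) == 1000                  # the next count is consistent
```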
Here is some more information.
I am running full repair on one of the nodes and I am observing strange
behavior.
Both DCs were up during the data load. But repair is reporting a lot of
out-of-sync data. Why would that be? Is there a way for me to tell
that the WAN may be dropping hinted
Maybe people think that 1.2 = vnodes, when vnodes are actually not
mandatory; furthermore, it is advised to upgrade first and then, after a while,
when all is running smoothly, eventually switch to vnodes...
2013/2/13 Brandon Williams dri...@gmail.com
On Tue, Feb 12, 2013 at 6:13 PM, Edward Capriolo
Are vnodes on by default? It seems that many on the list are using this feature
with small clusters.
I know these days anything named virtual is sexy, but they are not useful
for small clusters, are they? I do not see why people are using them.
On Monday, February 11, 2013, aaron morton
I take that back. vnodes are useful for any size cluster, but I do not see
them as a day one requirement. It seems like many people are stumbling over
this.
On Tue, Feb 12, 2013 at 6:13 PM, Edward Capriolo edlinuxg...@gmail.com wrote:
Are vnodes on by default? It seems that many on the list are using this feature
with small clusters.
They are not.
-Brandon
From: aaron morton [mailto:aa...@thelastpickle.com]
Sent: Monday, February 11, 2013 12:51 PM
To: user@cassandra.apache.org
Subject: Re: unbalanced ring
The tokens are not right, not right at all. Some are too
Oracle and didn’t include any
BLOB, etc.
[ ... ]
From: aaron morton [mailto:aa...@thelastpickle.com]
Sent: Tuesday, February 05, 2013 3:41 PM
To: user@cassandra.apache.org
Subject: Re: unbalanced ring
Use nodetool status with vnodes
http://www.datastax.com/dev/blog/upgrading-an-existing-cluster-to-vnodes
The different load can be caused by rack affinity. Are all the nodes in the
same rack? Another simple check: have you created some very big rows?
Cheers
-
Aaron Morton
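The "very big rows" check above can be illustrated with a toy sketch: even with perfectly balanced tokens, a handful of huge rows skews per-node load, because each row lives entirely on the replicas that own its key. The hash and sizes below are illustrative, not Cassandra's partitioner.

```python
# Toy sketch: many small rows spread roughly evenly across nodes, but
# one row with millions of columns lands entirely on a single node and
# makes the ring look unbalanced.

def owner(key, n_nodes):
    """Illustrative placement: hash the key onto one of n nodes."""
    return hash(key) % n_nodes

loads = [0, 0, 0]
for i in range(10_000):
    loads[owner("row%d" % i, 3)] += 1        # small rows: ~even spread
big_row_node = owner("huge_row", 3)
loads[big_row_node] += 20_000                # one very wide row

assert max(loads) > 2 * min(loads)           # load now looks skewed
```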
Tamar, be careful. Datastax doesn't recommend major compactions in a
production environment.
If I got it right, performing a major compaction will convert all your
SSTables into one big one, substantially improving your read performance, at
least for a while... The problem is that it will disable minor
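If I understand the description above correctly (everything merged into one SSTable, newest value per column wins), the effect can be sketched as a toy merge. This is an illustration of the idea, not Cassandra's compaction code.

```python
# Toy sketch of a major compaction: merge every SSTable into a single
# one, keeping only the newest (timestamp, value) per column. Reads then
# touch one file instead of many -- until new SSTables accumulate again.

def major_compact(sstables):
    merged = {}
    for table in sstables:
        for col, (ts, val) in table.items():
            if col not in merged or ts > merged[col][0]:
                merged[col] = (ts, val)
    return [merged]                     # one big SSTable

sstables = [
    {"a": (1, "old"), "b": (1, "x")},
    {"a": (2, "new")},
    {"c": (3, "y")},
]
compacted = major_compact(sstables)
assert len(compacted) == 1
assert compacted[0]["a"] == (2, "new")  # newest write wins
```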
From: Alain RODRIGUEZ [mailto:arodr...@gmail.com]
Sent: Thursday, October 11, 2012 09:17
To: user@cassandra.apache.org
Subject: Re: unbalanced ring
Hi!
I am re-posting this, now that I have more data and still *unbalanced ring*:
3 nodes,
RF=3, RCL=WCL=QUORUM
Address  DC       Rack  Status  State   Load      Owns  Token
                                                        113427455640312821154458202477256070485
x.x.x.x  us-east  1c    Up      Normal  24.02 GB
Hi,
Same thing here:
2 nodes, RF = 2. RCL = 1, WCL = 1.
Like Tamar I never ran a major compaction and repair once a week each node.
10.59.21.241  eu-west  1b  Up  Normal  133.02 GB  50.00%  0
10.58.83.109  eu-west  1b  Up  Normal  98.12 GB   50.00%
Major compaction in production is fine; however, it is a heavy operation on
the node and will take I/O and some CPU.
The only time I have seen this happen is when I have changed the tokens in
the ring, e.g. with nodetool movetoken. Cassandra does not auto-delete data
that it doesn't use anymore just
Hi!
Apart from the heavy load (from the compaction), will it have other effects?
Also, will cleanup help if I have replication factor = number of nodes?
Thanks
*Tamar Fraenkel *
Senior Software Engineer, TOK Media
ta...@tok-media.com
Tel: +972 2 6409736
Mob: +972 54 8356490
It should not have any other impact except increased usage of system
resources.
And I suppose cleanup would not have an effect (over normal compaction) if
all nodes contain the same data.
On Wed, Oct 10, 2012 at 12:12 PM, Tamar Fraenkel ta...@tok-media.com wrote:
Hi!
Apart from being heavy
Does cleanup only clean up keys that no longer belong to that node?
Yes.
I guess it could be an artefact of the bulk load. It's not been reported
previously though. Try the cleanup and see how it goes.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
But won't that also run a major compaction, which is not recommended anymore?
-Raj
On Sun, Jun 17, 2012 at 11:58 PM, aaron morton aa...@thelastpickle.com wrote:
Assuming you have been running repair, it can't hurt.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
No. Cleanup will scan each sstable to remove data that is no longer
owned by that specific node. It won't compact the sstables together
however.
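The behaviour described above, scanning each SSTable and dropping rows the node no longer owns, can be sketched with a toy ring. This is a simplified, non-replicated illustration, not Cassandra's actual range logic.

```python
# Toy sketch of cleanup: drop rows whose token no longer falls in a
# range the node owns (e.g. after a nodetool move). Each SSTable is
# rewritten individually; they are not merged together.

def owned_range(tokens, my_token):
    """Each node owns (previous_token, my_token] on the ring."""
    ordered = sorted(tokens)
    i = ordered.index(my_token)
    return ordered[i - 1], my_token     # index -1 wraps for the first node

def cleanup(sstable, tokens, my_token):
    lo, hi = owned_range(tokens, my_token)
    if lo < hi:
        keep = lambda t: lo < t <= hi
    else:                               # range wraps around the ring
        keep = lambda t: t > lo or t <= hi
    return {k: v for k, v in sstable.items() if keep(k)}

tokens = [10, 50, 90]
sstable = {5: "a", 30: "b", 60: "c", 95: "d"}   # keys are tokens here
# The node at token 50 owns (10, 50]; stale keys 5, 60, 95 are dropped.
assert cleanup(sstable, tokens, 50) == {30: "b"}
```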
On Tue, Jun 19, 2012 at 11:11 PM, Raj N raj.cassan...@gmail.com wrote:
But won't that also run a major compaction, which is not recommended anymore?
Assuming you have been running repair, it can't hurt.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 17/06/2012, at 4:06 AM, Raj N wrote:
Nick, do you think I should still run cleanup on the first node?
-Rajesh
On Fri, Jun 15, 2012 at 3:47 PM, Raj N raj.cassan...@gmail.com wrote:
I did run nodetool move. But that was when I was setting up the cluster
which means I didn't have any data at that time.
-Raj
This is just a known problem with the nodetool output and multiple
DCs. Your configuration is correct. The problem with nodetool is fixed
in 1.1.1
https://issues.apache.org/jira/browse/CASSANDRA-3412
On Fri, Jun 15, 2012 at 9:59 AM, Raj N raj.cassan...@gmail.com wrote:
Hi experts,
I have a
Actually I am not worried about the percentage. It's the data I am concerned
about. Look at the first node: it has 102.07 GB of data, and the other nodes
have around 60 GB (one has 69, but let's ignore that one). I am not
understanding why the first node has almost double the data.
Thanks
-Raj
Did you start all your nodes at the correct tokens or did you balance
by moving them? Moving nodes around won't delete unneeded data after
the move is done.
Try running 'nodetool cleanup' on all of your nodes.
On Fri, Jun 15, 2012 at 12:24 PM, Raj N raj.cassan...@gmail.com wrote:
Actually I am
I did run nodetool move. But that was when I was setting up the cluster
which means I didn't have any data at that time.
-Raj
On Fri, Jun 15, 2012 at 1:29 PM, Nick Bailey n...@datastax.com wrote:
Did you start all your nodes at the correct tokens or did you balance
by moving them? Moving
This morning I have
nodetool ring -h localhost
Address        DC       Rack  Status  State   Load     Owns    Token
                                                               113427455640312821154458202477256070485
10.34.158.33   us-east  1c    Up      Normal  5.78 MB  33.33%  0
10.38.175.131  us-east  1c    Up
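The tokens in the ring listings in this thread match the standard balanced-token formula for the RandomPartitioner, token_i = i * 2**127 / N. A small sketch, assuming the RandomPartitioner's [0, 2**127) token space:

```python
# Evenly spaced initial tokens for N nodes under the RandomPartitioner,
# whose token space is [0, 2**127): token_i = i * 2**127 // N.

def balanced_tokens(n_nodes):
    return [i * (2 ** 127) // n_nodes for i in range(n_nodes)]

# For the 3-node us-east ring above, tokens 0 and 113...485 both appear
# in the nodetool ring output:
toks = balanced_tokens(3)
assert toks[0] == 0
assert toks[2] == 113427455640312821154458202477256070485
# For the 2-node eu-west ring earlier in the thread (50.00% / 50.00%):
assert balanced_tokens(2) == [0, 85070591730234615865843651857942052864]
```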
Thanks, I will wait and see as data accumulates.
Thanks,
*Tamar Fraenkel *
On Tue, Mar 27, 2012 at 9:00 AM, R. Verlangen ro...@us2.nl wrote:
How can I fix this?
Add more data; 1.5M is not enough to get reliable reports.
What version are you using?
Anyway try nodetool repair compact.
maki
2012/3/26 Tamar Fraenkel ta...@tok-media.com
Hi!
I created an Amazon ring using the DataStax image and started filling the DB.
The cluster seems unbalanced.
nodetool ring returns:
Address DC Rack