After thinking about it more, I have no idea how that worked at all. I
must not have cleared out the working directory or something.
Regardless, I did something weird with my initial joining of the cluster
and then wasn't using repair -full. Thank y'all very much for the info.
On Wed, May
So I figured out the main cause of the problem. The node's seed was set
to itself, which is what got it into a weird state. The second part was
that I didn't know the default repair is incremental, as I was
accidentally looking at the documentation for the wrong version. After
running a repair -full, the 3 other nodes
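For reference, a full repair can be requested explicitly; a minimal
sketch, assuming a hypothetical keyspace name my_keyspace (on 2.2+ a
plain 'nodetool repair' defaults to incremental):

    # Force a full (non-incremental) repair of one keyspace
    nodetool repair -full my_keyspace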
Hi Luke, I've encountered a similar problem before; could you please
advise on the following?
1) When you added 10.128.0.20, what were the seeds defined in
cassandra.yaml? (see the sketch after this list)
2) When you added 10.128.0.20, were the data and cache directories on
10.128.0.20 empty?
- /var/lib/cassandra/data
-
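For question 1, the seed list lives under seed_provider in
cassandra.yaml; a minimal sketch, assuming the other node from this
thread, 10.128.0.3, is used as the seed (a node that lists only itself
as seed will skip bootstrap and stream nothing, which matches what Luke
saw):

    # cassandra.yaml (excerpt)
    seed_provider:
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              # Seeds should point at an existing node, not the node
              # being bootstrapped
              - seeds: "10.128.0.3"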
Hi Luke,
I've never found the Load reported by nodetool status to be useful
beyond a general indicator.
You should expect some small skew, as this will depend on your current
compaction status, tombstones, etc. IIRC, repair will not provide
consistency of intermediate states, nor will it remove tombstones; it
Not necessarily: considering RF is 2, both nodes should have all
partitions. Luke, are you sure the repair is succeeding? You don't have
other keyspaces/duplicate data/extra data in your Cassandra data directory?
Also, you could try querying on the node with less data to confirm
whether it has the same data (see the sketch below).
For the other DC, it can be acceptable because each partition resides on
one node, so if you have a large partition, it may skew things a bit.
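A minimal sketch of that spot check with cqlsh, assuming a hypothetical
table my_keyspace.my_table:

    $ cqlsh 10.128.0.20
    cqlsh> CONSISTENCY LOCAL_ONE;  -- read from a single replica in this DC
    cqlsh> TRACING ON;             -- trace shows which replica served the read
    cqlsh> SELECT count(*) FROM my_keyspace.my_table;

Note that count(*) may time out on large tables, so comparing a few
known partition keys on each node can be more practical.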
On May 25, 2016 2:41 AM, "Luke Jolly" wrote:
> So I guess the problem may have been with the initial addition of the
> 10.128.0.20
So I guess the problem may have been with the initial addition of the
10.128.0.20 node, because when I added it, it never synced data? It was
at around 50 MB when it first came up and transitioned to "UN". After it
was in, I did the 1->2 replication change and tried repair, but it
didn't
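One way to check whether a joining node actually streams data is
nodetool netstats while it is still joining; a minimal sketch:

    # On the bootstrapping node: Mode should read JOINING and the output
    # should list incoming stream sessions; a node that never streams
    # (e.g. one that is its own seed) comes up with almost no data.
    nodetool netstats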
Hi Luke,
You mentioned that the replication factor was increased from 1 to 2. In
that case, was the node bearing IP 10.128.0.20 carrying around 3 GB of
data earlier?
You can run nodetool repair with the -local option to initiate a repair
of the local datacenter, gce-us-central1.
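A minimal sketch of that datacenter-local repair, run from one of the
gce-us-central1 nodes:

    # -local restricts repair to the datacenter of the node it runs on
    nodetool repair -local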
Also you may suspect that if a lot
Here's my setup:
Datacenter: gce-us-central1
===========================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load    Tokens  Owns (effective)  Host ID  Rack
UN  10.128.0.3  6.4 GB  256     100.0%
Do you have 1 node in each DC or 2? If you're saying you have 1 node in
each DC, then an RF of 2 doesn't make sense. Can you clarify what your
setup is?
On 23 May 2016 at 19:31, Luke Jolly wrote:
> I am running 3.0.5 with 2 nodes in two DCs, gce-us-central1 and
>
I am running 3.0.5 with 2 nodes in two DCs, gce-us-central1 and
gce-us-east1. I increased the replication factor of gce-us-central1 from 1
to 2. Then I ran 'nodetool repair -dc gce-us-central1'. The "Owns" for
the node switched to 100% as it should, but the Load showed that it
didn't actually
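For reference, with NetworkTopologyStrategy the replication factor is
set per datacenter; a minimal sketch of the change described here,
assuming a hypothetical keyspace name my_keyspace:

    ALTER KEYSPACE my_keyspace WITH replication = {
        'class': 'NetworkTopologyStrategy',
        'gce-us-central1': 2,
        'gce-us-east1': 1
    };

Raising RF does not stream anything by itself; the new replica holds no
data until a repair (or rebuild) copies it over, which would match the
Load staying low here.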