Hello Shalom, Vladimir,

Thanks for your help.

I initially had 3 nodes, then one of them had a hardware failure and I reinstalled Cassandra on that node (I took the opportunity to upgrade from 3.1.1 to 3.0.9). Once Cassandra was updated, I ran nodetool upgradesstables and nodetool repair on each node.
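
For reference, the sequence was roughly the following (the package/service commands are indicative only, the exact names depend on the distribution):

# on the rebuilt node (indicative commands):
sudo service cassandra stop
sudo apt-get install cassandra     # reinstall / upgrade the package
sudo service cassandra start

# then on each node, one node at a time:
nodetool upgradesstables           # rewrite the SSTables in the current format
nodetool repair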

The 3 nodes are in the same private network. I am using the private IPs for the seeds and for listen_address, and the public IPs for rpc_address.
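
Concretely, the relevant lines on each node look like this (the IPs and the path below are placeholders, not my real values):

grep -E 'seeds:|^listen_address:|^rpc_address:' /etc/cassandra/cassandra.yaml
          - seeds: "10.0.0.1,10.0.0.2,10.0.0.3"
listen_address: 10.0.0.1
rpc_address: 203.0.113.1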

I am using SSL to encrypt the communication between the nodes, so internode traffic goes over port 7001:

telnet PRIVATEIP 7001
Trying PRIVATEIP...
Connected to PRIVATEIP.

Each node can connect with any other node.
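
In other words, from every node a quick sweep over the other private IPs succeeds (IPs below are placeholders):

for ip in PRIVATEIP1 PRIVATEIP2 PRIVATEIP3; do nc -vz "$ip" 7001; done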


I selected some old data from the new node:

CONSISTENCY;
Current consistency level is ONE.
select count(*) from ....;

 count
-------
     0

CONSISTENCY ALL;
Consistency level set to ALL.
select count(*) from ....;

 count
-------
    64

When I switched to ALL I got the data, while the initial level ONE returned no rows. Since ONE found nothing, I did not expect to get any data with ALL either. Am I missing something?
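
Would rerunning the query with tracing enabled help to see which replicas are consulted at each consistency level? Something like this (mydata is the keyspace shown below, and mytable is just a placeholder for the real table name):

TRACING ON;
CONSISTENCY ONE;
select count(*) from mydata.mytable;
CONSISTENCY ALL;
select count(*) from mydata.mytable;
TRACING OFF;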

I do not know if this is related, but while I was querying the database, I saw the following messages in debug.log:

DEBUG [ReadRepairStage:15292] 2016-11-21 18:15:59,719 ReadCallback.java:234 - Digest mismatch:
org.apache.cassandra.service.DigestMismatchException: Mismatch for key DecoratedKey(2288259866140251828, 00040000002a0000040000421500) (d41d8cd98f00b204e9800998ecf8427e vs ce211ac5533e1a146d9fee734fd8de26)
        at org.apache.cassandra.service.DigestResolver.resolve(DigestResolver.java:85) ~[apache-cassandra-3.0.10.jar:3.0.10]
        at org.apache.cassandra.service.ReadCallback$AsyncRepairRunner.run(ReadCallback.java:225) ~[apache-cassandra-3.0.10.jar:3.0.10]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_111]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_111]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
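
Also, so far I have only run a plain nodetool repair. Would a full (non-incremental) repair restricted to that keyspace be worth trying? For example:

nodetool repair -full mydata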


Thanks for your help,

Cheers,

Bertrand

On 16-11-21 01:28 AM, Shalom Sagges wrote:
I believe the logs should show you what the issue is.
Also, can the node "talk" with the others? (i.e. telnet to the other nodes on port 7000).

Shalom Sagges
DBA
T: +972-74-700-4035


On Sun, Nov 20, 2016 at 8:50 PM, Bertrand Brelier <bertrand.brel...@gmail.com> wrote:

    Hello Jonathan,

    No, the new node is not a seed in my cluster.

    When I ran nodetool bootstrap resume
    Node is already bootstrapped.

    Cheers,

    Bertrand

    On Sun, Nov 20, 2016 at 1:43 PM, Jonathan Haddad <j...@jonhaddad.com> wrote:

        Did you add the new node as a seed? If you did, it wouldn't
        bootstrap, and you should run repair.
        On Sun, Nov 20, 2016 at 10:36 AM Bertrand Brelier <bertrand.brel...@gmail.com> wrote:

            Hello everybody,

            I am using a 3-node Cassandra cluster with Cassandra 3.0.10.

            I recently added a new node (to make it a 3-node cluster).

            I am using a replication factor of 3, so I expected to
            have a copy of the same data on each node:

            CREATE KEYSPACE mydata WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '3'} AND durable_writes = true;

            But the new node has less data than the other 2:

            Datacenter: datacenter1
            =======================
            Status=Up/Down
            |/ State=Normal/Leaving/Joining/Moving
            --  Address          Load      Tokens  Owns (effective)  Host ID  Rack
            UN  XXX.XXX.XXX.XXX  53.28 GB  256     100.0%            xxxxxx   rack1
            UN  XXX.XXX.XXX.XXX  64.7 GB   256     100.0%            xxxxxx   rack1
            UN  XXX.XXX.XXX.XXX  1.28 GB   256     100.0%            xxxxxx   rack1


            On the new node:

            /XXXXXX/data-6d674a40efab11e5b67e6d75503d5d02/:
            total 1.2G

            On one of the old nodes:

            /XXXXXX/data-6d674a40efab11e5b67e6d75503d5d02/:
            total 52G


            I am monitoring the amount of data on each node, and they
            grow at the same rate. So I suspect that new data is
            replicated to the 3 nodes, but the old data stored on the
            first 2 nodes is not replicated to the new node.

            I ran nodetool repair (on each node, one at a time), but
            the new node
            still does not have a copy of the old data.

            Could you please help me understand why the old data is
            not replicated to the new node? Please let me know if you
            need further information.

            Thank you,

            Cheers,

            Bertrand




