Perhaps have a read here?

On 04/04/2023 06:41, David Tinker wrote:
Ok. Have to psych myself up to the add node task a bit. Didn't go well the first time round!

- Make sure the new node is not in seeds list!
- Check cluster name, listen address, rpc address
- Give it its own rack in cassandra-rackdc.properties
- Delete if it exists
- Make sure no compactions are on the go
- rm -rf /var/lib/cassandra/*
- rm /data/cassandra/commitlog/* (this is on different disk)
- systemctl start cassandra

And it should start streaming data from the other nodes and join the cluster. Anything else I have to watch out for? Tx.
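That checklist can be sketched as a dry-run script (it only prints each command; the paths are the ones from the steps above, so verify them against data_file_directories and commitlog_directory in your cassandra.yaml before executing anything for real):

```shell
#!/bin/sh
# Dry-run sketch of the add-node checklist above. It only PRINTS the
# commands. Paths are assumptions copied from the steps in this mail;
# verify them against your own cassandra.yaml first.
plan() { printf 'step: %s\n' "$1"; }

plan "nodetool compactionstats"            # make sure no compactions are on the go
plan "rm -rf /var/lib/cassandra/*"         # wipe stale data from the failed attempt
plan "rm -rf /data/cassandra/commitlog/*"  # commitlog is on a different disk
plan "systemctl start cassandra"           # node should bootstrap and stream
plan "nodetool netstats"                   # watch streaming progress
```

Drop the plan wrapper only once you have checked the seeds list, cluster name, listen/rpc addresses and rack, as listed above.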

On Tue, Apr 4, 2023 at 5:25 AM Jeff Jirsa <> wrote:

    Because executing “removenode” streamed extra data from live nodes
    to the “gaining” replica

    Oversimplified (if you had one token per node)

    If you  start with A B C

    Then add D

    D should bootstrap a range from each of A B and C, but at the end,
    some of the data that was A B C becomes B C D

    When you removenode, you tell B and C to send data back to A.

    A, B and C will eventually compact that data away. Eventually.

    If you get around to adding D again, running “cleanup” when you’re
    done (successfully) will remove a lot of it.
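    To make that last point concrete: once the re-added node has finished joining (shows UN in nodetool status), a loop like the one below runs cleanup on each pre-existing node in turn. The host names are placeholders, and the echo keeps it as a dry run:

```shell
#!/bin/sh
# After the new node has successfully joined, run cleanup on each OLD
# node so it drops the ranges it no longer owns. Host names are
# placeholders; `echo` prints the commands instead of running them.
for host in node-a node-b node-c; do
    echo "ssh $host nodetool cleanup"
done
```

    Run it serially; cleanup rewrites SSTables and is best done one node at a time.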

    On Apr 3, 2023, at 8:14 PM, David Tinker <>

    Looks like the remove has sorted things out. Thanks.

    One thing I am wondering about is why the nodes are carrying a
    lot more data? The loads were about 2.7T before, now 3.4T.

    # nodetool status
    Datacenter: dc1
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address  Load      Tokens  Owns (effective)  Host ID                               Rack
    UN           3.4 TiB   256     100.0%            afd02287-3f88-4c6f-8b27-06f7a8192402  rack3
    UN           3.34 TiB  256     100.0%            e1af72be-e5df-4c6b-a124-c7bc48c6602a  rack2
    UN           3.44 TiB  256     100.0%            ab72f017-be96-41d2-9bef-a551dec2c7b5  rack1

    On Mon, Apr 3, 2023 at 5:42 PM Bowen Song via user
    <> wrote:

        That's correct. nodetool removenode is strongly preferred
        when your node is already down. If the node is still
        functional, use nodetool decommission on the node instead.

        On 03/04/2023 16:32, Jeff Jirsa wrote:
        FWIW, `nodetool decommission` is strongly preferred.
        `nodetool removenode` is designed to be run when a host is
        offline. Only decommission is guaranteed to maintain
        consistency / correctness, and removenode probably streams a
        lot more data around than decommission.
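        That decision rule can be written down as a small helper (purely illustrative; it just prints which command applies, and `<host-id>` stays a placeholder for the dead node's Host ID from nodetool status):

```shell
#!/bin/sh
# Prints which removal command applies, per the advice above.
# Usage: choose_removal up|down
choose_removal() {
    if [ "$1" = "up" ]; then
        # run ON the node that is leaving; maintains consistency
        echo "nodetool decommission"
    else
        # run from any LIVE node, passing the dead node's host ID
        echo "nodetool removenode <host-id>"
    fi
}

choose_removal down
```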

        On Mon, Apr 3, 2023 at 6:47 AM Bowen Song via user
        <> wrote:

            Using nodetool removenode is strongly preferred in most
            circumstances; only resort to assassinate if you do not
            care about data consistency, or you know there won't
            be any consistency issue (e.g. no new writes and you did
            not run nodetool cleanup).

            Since the size of data on the new node is small,
            nodetool removenode should finish fairly quickly and
            bring your cluster back.

            Next time when you are doing something like this again,
            please test it out on a non-production environment, and
            make sure everything works as expected before moving on
            to the production cluster.

            On 03/04/2023 06:28, David Tinker wrote:
            Should I use assassinate or removenode? Given that
            there is some data on the node. Or will that be found
            on the other nodes? Sorry for all the questions but I
            really don't want to mess up.

            On Mon, Apr 3, 2023 at 7:21 AM Carlos Diaz
            <> wrote:

                That's what nodetool assassinate will do.

                On Sun, Apr 2, 2023 at 10:19 PM David Tinker
                <> wrote:

                    Is it possible for me to remove the node from
                    the cluster i.e. to undo this mess and get the
                    cluster operating again?

                    On Mon, Apr 3, 2023 at 7:13 AM Carlos Diaz
                    <> wrote:

                        You can leave it in the seed list of the
                        other nodes, just make sure it's not
                        included in this node's seed list. 
                        However, if you do decide to fix the issue
                        with the racks, first assassinate this node
                        (nodetool assassinate <ip>), and update the
                        rack name before you restart.

                        On Sun, Apr 2, 2023 at 10:06 PM David
                        Tinker <> wrote:

                            It is also in the seeds list for the
                            other nodes. Should I remove it from
                            those, restart them one at a time, then
                            restart it?

                            /etc/cassandra # grep -i bootstrap *
                            doesn't show anything so I don't think
                            I have auto_bootstrap false.

                            Thanks very much for the help.
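                            For the auto_bootstrap check: the
                            property defaults to true when the key
                            is absent from cassandra.yaml, so an
                            empty grep result is fine. A sketch
                            (the config path is an assumption;
                            adjust it to your install):

```shell
#!/bin/sh
# auto_bootstrap defaults to true when the key is absent, so finding
# nothing with grep means bootstrap is enabled. CONF is an assumed
# path; override it for your installation.
CONF="${CONF:-/etc/cassandra/cassandra.yaml}"
if grep -q '^auto_bootstrap:[[:space:]]*false' "$CONF" 2>/dev/null; then
    echo "auto_bootstrap explicitly disabled"
else
    echo "auto_bootstrap enabled (defaults to true)"
fi
```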

                            On Mon, Apr 3, 2023 at 7:01 AM Carlos
                            Diaz <> wrote:

                                Just remove it from the seed list
                                in the cassandra.yaml file and
                                restart the node.  Make sure that
                                auto_bootstrap is set to true (the
                                default when absent) first.

                                On Sun, Apr 2, 2023 at 9:59 PM
                                David Tinker
                                <> wrote:

                                    So likely because I made it a
                                    seed node when I added it to
                                    the cluster it didn't do the
                                    bootstrap process. How can I
                                    recover this?

                                    On Mon, Apr 3, 2023 at 6:41 AM
                                    David Tinker
                                    <> wrote:

                                        Yes replication factor is 3.

                                        I ran nodetool repair -pr
                                        on all the nodes (one at a
                                        time) and am still having
                                        issues getting data back
                                        from queries.
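                                        That serial repair pass can
                                        be sketched as below. Host
                                        names are placeholders, and
                                        the echo keeps it a dry run;
                                        only one repair should be
                                        active at a time:

```shell
#!/bin/sh
# Primary-range repair on every node, strictly one at a time, as
# described above. Host names are placeholders; `echo` makes this a
# dry run -- drop it to actually execute over ssh.
for host in node1 node2 node3 node4; do
    echo "ssh $host nodetool repair -pr"
done
```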

                                        I did make the new node a
                                        seed node.

                                        Re "rack4": I assumed that
                                        was just an indication as
                                        to the physical location of
                                        the server for redundancy.
                                        This one is separate from
                                        the others so I used rack4.

                                        On Mon, Apr 3, 2023 at
                                        6:30 AM Carlos Diaz
                                        <> wrote:

                                            I'm assuming that your
                                            replication factor is
                                            3. If that's the case,
                                            did you intentionally
                                            put this node in rack
                                            4? Typically, you want
                                            to add nodes in
                                            multiples of your
                                            replication factor in
                                            order to keep the
                                            "racks" balanced.  In
                                            other words, this node
                                            should have been added
                                            to rack 1, 2 or 3.

                                            Having said that, you
                                            should be able to
                                            easily fix your problem
                                            by running a nodetool
                                            repair -pr on the new
                                            node.
                                            On Sun, Apr 2, 2023 at
                                            8:16 PM David Tinker

                                                Hi All

                                                I recently added a
                                                node to my 3 node
                                                Cassandra 4.0.5
                                                cluster and now
                                                many reads are not
                                                returning rows!
                                                What do I need to
                                                do to fix this?
                                                There weren't any
                                                errors in the logs
                                                or other problems
                                                that I could see. I
                                                expected the
                                                cluster to balance
                                                itself but this
                                                hasn't happened
                                                (yet?). The nodes
                                                are similar so I
                                                have num_tokens=256
                                                for each. I am
                                                using the

                                                # nodetool status
                                                Datacenter: dc1
                                                --  Address  Load       Tokens  Owns (effective)  Host ID  Rack
                                                             2.65 TiB   256
                                                             2.6 TiB    256
                                                             93.82 KiB  256
                                                             2.65 TiB   256
                                                # nodetool netstats
                                                Mode: NORMAL
                                                Not sending any streams.
                                                Read Repair Statistics:
                                                Attempted: 0
                                                Mismatch (Blocking): 0
                                                Mismatch (Background): 0
                                                Pool Name        Active  Pending  Completed  Dropped
                                                Large messages   n/a     0        71754      0
                                                Small messages   n/a     0        8398184    14
                                                Gossip messages  n/a     0        1303634    0

                                                # nodetool ring
                                                Datacenter: dc1
                                                Rack   Status  State   Load       Owns  Token
                                                rack4  Up      Normal  93.82 KiB
                                                rack1  Up      Normal  2.65 TiB
                                                rack2  Up      Normal  2.6 TiB
                                                rack3  Up      Normal  2.65 TiB
                                                rack1  Up      Normal  2.65 TiB
                                                rack2  Up      Normal  2.6 TiB
                                                rack3  Up      Normal  2.65 TiB

                                                This is causing a
                                                serious production
                                                issue. Please help
                                                if you can.

