Hello all: I've managed to get my new deployment into an odd state.
I have a three-node cluster. After installation, I was running the riak-admin join commands. Node #3 happened to be down because of a configuration error, but something seems to have been configured anyway.

Now, when I run stats on my first node, I see:

    "nodename": "[email protected]",
    "connected_nodes": [
        "[email protected]",
        "[email protected]"
    ],
    "ring_members": [
        "[email protected]",
        "[email protected]"
    ],
    "ring_ownership": "[{'[email protected]',32},{'[email protected]',32}]",

On the problematic node (#3), I see:

    "connected_nodes": [
        "[email protected]",
        "[email protected]"
    ],
    "ring_members": [
        "[email protected]"
    ],
    "ring_ownership": "[{'[email protected]',64}]",

My understanding is that all three nodes should show up in ring_ownership.

However:
- When I try to add node #3, I'm told it is already a member of a cluster.
- When I try to force-remove node #3, I'm told it is not a member of the cluster.
- When I try to use leave on node #3, I'm told it is the only member.

Any recommendations/thoughts on how to correct this, short of re-installing node #3?

Thanks,
--Ray

--
Ray Cote, President
Appropriate Solutions, Inc.
We Build Software
www.AppropriateSolutions.com
603.924.6079
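
P.S. For reference, the commands I tried were roughly the following (the node names are the same redacted placeholders that appear in the stats above, and the exact invocations are from memory, so treat them as approximate):

    # On node #3, trying to join the cluster through node #1:
    riak-admin join [email protected]
    # -> told this node is already a member of a cluster

    # On node #1, trying to force node #3 out of the ring:
    riak-admin force-remove [email protected]
    # -> told that node is not a member of the cluster

    # On node #3, trying to have it leave its own one-node ring:
    riak-admin leave
    # -> told it is the only member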
