Thanks Erick,
The issue was indeed caused by the cassandra-topology.properties file. Though we don't
use it, we still had it in our config directory. After removing it, "nodetool
status" returns the correct topology.
Regards,
Tadevos Papoyan
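For anyone hitting the same symptom, a pre-restart sanity check along these lines may help (a hypothetical sketch, not part of the original thread; the config path, file names, and the grep against cassandra.yaml are assumptions based on a default package layout):

```shell
#!/bin/sh
# Hypothetical sanity check: when GossipingPropertyFileSnitch is configured,
# a leftover cassandra-topology.properties can still be read as a fallback
# and confuse the reported topology.
check_stale_topology() {
  # $1 = Cassandra config directory (e.g. /etc/cassandra)
  if grep -q 'GossipingPropertyFileSnitch' "$1/cassandra.yaml" 2>/dev/null \
     && [ -f "$1/cassandra-topology.properties" ]; then
    echo "stale cassandra-topology.properties found in $1"
    return 1
  fi
  return 0
}
```

Run it as e.g. `check_stale_topology /etc/cassandra` before a rolling restart.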
From: Eric
This isn't expected at all and there's definitely something wrong.
However, when I restart last node (others remain down), when it comes up
> "nodetool status" shows all down nodes under "DC1" no matter how long I
> wait.
>
This sounds like the node can
Hi all,
I have a Cassandra cluster (v3.11.6 with GossipingPropertyFileSnitch) with 2
datacenters named Zone1 and Zone2, each having 2 nodes. When all nodes are up,
"nodetool status" shows the correct cluster topology. When I bring down all nodes
except one, the remaining node still shows the correct topology.
Is it really 3.1 or 3.11.0 ?
Did you replace any of the nodes which are missing in the nodetool status?
Regards,
Dhanunjaya Tokala
On Mon, Sep 9, 2019 at 1:51 PM Nandakishore Tokala <
nandakishore.tok...@gmail.com> wrote:
Hi All,
we are running Apache Cassandra 3.1.0 on AWS in multiple regions, nearly around
200 nodes.
After restarting each node I am seeing that some of the nodes are missing from
the nodetool status output (not UN or DN; they are completely missing). After a
couple of restarts, I see them back.
Please help me if I am missing something on the configuration side.
Thanks,
Nanda
Yes, with these proportions it is perfectly OK. Nodes have a similar dataset
and I imagine queries are well distributed. The situation seems to be
normal; at least, nothing looks wrong in this `nodetool status` output, I
would say.
C*heers,
---
Alain Rodriguez - al...@thelastpickle.
Hi,
Sorry if the question has already been answered.
Where nodetool status is run on a 3 node cluster (replication factor :
3), the load between the different nodes is not equal.
# nodetool status opush
Datacenter: datacenter1
===
Status=Up/Down
|/ State
You have less burden compared to running nodetool periodically, and more
control over the things that you could do.
Regards,
Horia
On Fri, 2018-10-26 at 09:15 -0400, Saha, Sushanta K wrote:
I have a script that parses "nodetool status" output and emails alerts
have found these
much more actionable than up/down alerts from a single node’s view of the whole
cluster (like nodetool status)
Sean Durity
From: Saha, Sushanta K
Sent: Monday, October 29, 2018 7:52 AM
To: user@cassandra.apache.org
Subject: [EXTERNAL] Re: [E] Re: nodetool status and node
Thanks!
On Fri, Oct 26, 2018 at 2:39 PM Alain RODRIGUEZ wrote:
Hello
Any way to temporarily make the node under maintenance invisible from
> "nodetool status" output?
>
I don't think so.
I would use a different approach like for example only warn/email when the
node is down for 30 seconds or a minute depending on how long it takes for
I have a script that parses "nodetool status" output and emails alerts if any
node is down. So, when I stop Cassandra on a node for maintenance, all
nodes start emailing alarms.
Any way to temporarily make the node under maintenance invisible from
"nodetool status" output?
Thanks
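The parsing half of such a script can be very small. This is a sketch, not Sushanta's actual script; it assumes the standard `nodetool status` layout where the first column is the two-letter status/state code and the second is the address:

```shell
#!/bin/sh
# Hypothetical helper: print the addresses of nodes reported DN (down).
# Reads `nodetool status` output on stdin so it can be tested offline.
down_nodes() {
  awk '$1 == "DN" { print $2 }'
}

# Real use in a monitor (assumed context):
#   down=$(nodetool status | down_nodes)
#   [ -n "$down" ] && echo "Down nodes: $down"
```

Suppressing alerts for a node under maintenance would then be a matter of filtering its address out of the result, rather than hiding it from nodetool itself.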
Thank you so much.
I want to modify some lines in the source code of nodetool status.
Is it possible? If it is, how do I recompile the sources to build the new
version of Cassandra?
Kind regards.
2018-07-03 17:32 GMT+01:00 Joshua Galbraith :
> https://github.com/apache/cassandra/blob/cassandra-3.11/src/java/org/apache/cassandra/tools/nodetool/Status.java
https://github.com/apache/cassandra/blob/cassandra-3.11/src/java/org/apache/cassandra/tools/nodetool/Status.java
On Tue, Jul 3, 2018 at 7:57 AM, Thouraya TH wrote:
> Hi all,
> Please, can you give me a link to the source code behind the command
> "nodetool status" ?
> Thank you so much.
Hi all,
Please, can you give me a link to the source code behind the command
"nodetool status" ?
Thank you so much.
Kind regards.
Hi,
We tried rebooting Node 1 again, and this time we observed nodetool status
displaying "UN" for all 3 nodes on node1.
Executing nodetool status on "Node 3" displays "UN" for all the nodes.
Executing nodetool status on "Node 2" displays "DN" for node 1 (the rebooted node) and
Try telnet on your listen port. It must be a network issue due to a port or
firewall problem.
Sent from my iPhone
On Dec 22, 2017, at 5:28 PM, sat wrote:
Hi,
We have 3 nodes in a cluster. We rebooted one of the Cassandra VMs and noticed
nodetool status returning "UN" for itself and "DN" for the other node, although
we observe gossip SYN and ACK messages being shared between these nodes.
*Issue in Detail*
*Nodes in cluster*
Node
> [truncated sar CPU utilization output]
>
> Regards,
> Osman
>
> On 12-04-2017 11:53, Bhuvan Rawal wrote:
> Try nodetool tpstats - it can lead you to where your threads are stuck.
> There could be various factors causing the load to go high, like the disk/CPU
> getting choked; you'll probably need to check dstat & iostat output along with
> the Cassandra threadpool stats to get a decent idea.
On Wed, Apr 12, 2017 at 1:48 PM, Osman YOZGATLIOGLU wrote:
Hello,
Nodetool status shows much more than the actual data size.
When I restart the node, it shows a normal value for a while, and the load then
increases over time. Where should I look?
Cassandra 3.0.8, JDK 1.8.121
Regards,
Osman
Nice! Will take a look.
Best,
x.
On Thu, Jan 26, 2017 at 10:30 AM, Jonathan Haddad wrote:
Very cool!
On Wed, Jan 25, 2017 at 11:20 AM, Xiaolei Li wrote:
> Thanks for the advice!
>
> I do export a lot via JMX already. But I couldn't find the equivalent of the
> Status column (Up/Down + Normal/Leaving/Joining/Moving) from the status
> output. Does anyone know if those are available via JMX?
I've b
export metrics a bunch of ways: Jolokia, mx4j, jmx_exporter (for
Prometheus), and I know there's a collectd plugin but I haven't used it;
it might be worth checking out, or maybe someone else can weigh in.
Jon
On Wed, Jan 25, 2017 at 7:48 AM Xiaolei Li wrote:
I'm planning to run "nodetool status -r" on every node every minute,
storing the output in a file, and aggregating it somewhere else for
monitoring.
Is that a good idea? How expensive is it to be running status every minute?
Best,
x.
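If you do go the once-a-minute route, the collection step could look like this sketch (the log path is an assumption; the function reads status text on stdin so it can be exercised without a live cluster):

```shell
#!/bin/sh
# Hypothetical collector: append a UTC timestamp plus whatever status text
# arrives on stdin to a log file, for later aggregation elsewhere.
collect_status() {
  # $1 = output file
  { date -u '+%Y-%m-%dT%H:%M:%SZ'; cat; } >> "$1"
}

# Real use, e.g. from cron once a minute:
#   nodetool status -r | collect_status /var/log/cassandra/status.log
```

Since `nodetool status` spawns a JVM and hits JMX each time, once a minute per node is usually tolerable but not free; the JMX approaches discussed in this thread avoid that cost.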
e.java#L438
3. The proxy calls the Gossiper singleton: https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/StorageService.java#L2681
Best,
Romain
On Thursday, 11 August 2016 at 14:16, jean paul wrote:
Hi all,
$ nodetool status
Datacenter: datacenter1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load      Tokens  Owns (effective)  Host ID  Rack
UN  127.0.0.1  83.05 KB  256     100.0%            460ddcd9-1ee8-48b8-a618
vely using disk space.
Hope this helps.
Carlos Alonso | Software Engineer | @calonso <https://twitter.com/calonso>
On 22 March 2016 at 07:57, Anishek Agarwal wrote:
Hello,
Using Cassandra 2.0.17, on one of the 7 nodes I see that the "Load" column
from nodetool status shows around 279.34 GB, whereas doing df -h on the two
mounted disks the total is about 400 GB. Any reason why this difference could
show up, and how do I go about finding the cause?
There was a recent performance inefficiency in nodetool status with virtual
nodes that will be fixed in the next releases (CASSANDRA-7238), so it
should be faster with this fixed.
You can also query StorageServiceMBean.getLiveNodes() via JMX (jolokia or
some other jmx client). For a list of
Is there a faster way to get the output of 'nodetool status'?
I want us to monitor more aggressively for 'nodetool status' and boxes
being DN...
I was thinking something like Jolokia and REST, but I'm not sure if there
are variables exported by Jolokia for nodetool status.
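As a sketch of the Jolokia route mentioned above (the agent port 8778 is an assumption; `LiveNodes` and `UnreachableNodes` are attributes of the StorageService MBean):

```shell
#!/bin/sh
# Hypothetical helper: build the Jolokia REST URL for a StorageService
# MBean attribute, then fetch it with curl.
jolokia_url() {
  # $1 = host:port of the Jolokia agent, $2 = attribute (e.g. LiveNodes)
  printf 'http://%s/jolokia/read/org.apache.cassandra.db:type=StorageService/%s' "$1" "$2"
}

# Real use (assumes a Jolokia agent attached to the Cassandra JVM):
#   curl -s "$(jolokia_url localhost:8778 UnreachableNodes)"
```

This returns JSON rather than the nodetool table, and avoids forking a JVM for every poll.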
On Thu, Oct 29, 2015 at 1:08 AM, qihuang.zheng wrote:
> We have some nodes whose Load is too large, but some are normal.
>
tl;dr - Clear the snapshots on the nodes which are too large.
Longer :
Are you sure that the nodes which are too large differ in the actual *data*
size, or do they just contain
We have some nodes whose Load is too large, but some are normal.
[qihuang.zheng@cass047221 forseti]$ /usr/install/cassandra/bin/nodetool status
--  Address         Load     Tokens  Owns  Host ID                               Rack
UN  192.168.47.221  2.66 TB  256     8.7%  87e100ed-85c4-44cb-9d9f-2d602d016038  RAC1
    linpyt          2.17 GB  256     ?     f61da10c-c2c6-4a5a-8fdc-d2693f2239bc  RAC1
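One way to check whether snapshots explain an oversized node, per the tl;dr above (a sketch assuming GNU `du` and the default data directory layout; snapshots are hard links to SSTables, so they keep files on disk after compaction):

```shell
#!/bin/sh
# Hypothetical check: total bytes held under snapshot directories beneath a
# Cassandra data path.
snapshot_bytes() {
  # $1 = data directory (e.g. /var/lib/cassandra/data)
  find "$1" -type d -name snapshots -exec du -sb {} + 2>/dev/null \
    | awk '{ total += $1 } END { print total + 0 }'
}

# If the number is large, reclaim the space with: nodetool clearsnapshot
```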
Sean Durity – Lead Cassandra Admin
From: Gene [mailto:gh5...@gmail.com]
Sent: Thursday, October 08, 2015 12:43 PM
To: user@cassandra.apache.org
Subject: Re: Why can't nodetool status include a hostname?
Yeah, -r or --resolve-ip (the argument is not listed in this output)
-Gene
Have you tried using the -r or --resolve-ip option?
2015-10-07 19:59 GMT-07:00 Kevin Burton :
I find it really frustrating that nodetool status doesn't include a hostname.
Makes it harder to track down problems.
I realize it PRIMARILY uses the IP, but perhaps cassandra.yaml could include an
optional 'hostname' parameter that can be set by the user, OR have the box
itself include
Hi,
I have one node in my 5-node cluster that effectively owns 100%, and it
looks like my cluster is rather imbalanced. Is it common for it to be this
imbalanced with 4-5 nodes?
My current output for a keyspace is:
$ nodetool status myks
Datacenter: Cassandra
=
Status=Up/Down
you. Also, you may need to modify the REGEX slightly depending on the
Cassandra version you are using. There must be a way to get this via the
JMX console as well, which might be easier for you to monitor.
On 07/03/15 00:37, Kevin Burton wrote:
What's the best way to monitor nodetool status being down, i.e. if a specific
server thinks a node is down (DN)?
Does this just use JMX? Is there an API we can call?
We want to tie it into our Zabbix server so we can detect if there is a
failure.
--
Founder/CEO Spinn3r.com
Location: San
Hi,
We have a two-DC cluster with 21 nodes and 27 nodes in each DC. Over the
past few months, we have seen nodetool status mark 4-8 nodes down while
they are actually functioning. Particularly, today we noticed that running
nodetool status on some nodes shows a higher number of nodes down than
before, while they are
Mark,
Thank you. The "initial_token:" was commented out and I didn't notice it.
regards,
Jero
Hi everyone,
After an upgrade and clean install on a 5-node cluster:
Datacenter: datacenter1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address       Load           Tokens  Owns (effective)  Host ID  Rack
UN  192.168.3.50  -123645 bytes  256     6
Looks like you are running into this issue:
https://issues.apache.org/jira/browse/CASSANDRA-7239
Mark
running on 1.2.12 and the other one on 2.0.5. I loaded data from the first
cluster (1.2.12) into the second one (2.0.5) by copying snapshots between
corresponding nodes. I removed the commitlogs, started the second cluster, and
ran nodetool upgradesstables.
After this I expected that nodetool status would give me the same results in
the "Load" column on both clusters. Unfortunately, it is completely different:
- old cluster: [728.02 GB, 558.24 GB, 787.08 GB, 555.1 GB]
- new cluster: [14.63 GB, 35.98 GB, 18 GB, 38.39 GB]
When I briefly
Hello,
After adding a new datacenter with virtual nodes enabled, the output of
nodetool status shows that nodes from the non-vnodes datacenter "owns" 0.0%
of the data, as shown below:
Datacenter: NonVnodesDC
=
Status=Up/Down
|/ State=Normal/Leaving/Join
Oh man, you know what my problem was: I was not specifying the keyspace
after nodetool status. After specifying the keyspace I get the 100%
ownership like I would expect.
nodetool status discussions
ubuntu@prd-usw2b-pr-01-dscsapi-cadb-0002:~$ nodetool status discussions
Datacenter: us-east-1
RandomPartitioner was the default at < 1.2; it looks like since 1.2 the
default is Murmur3Partitioner.
Not sure that's your problem if you say you've upgraded from 1.2.*.
On Mon, Jan 6, 2014 at 3:42 AM, Rob Mullen wrote:
Do you know if the default changed? I'm pretty sure I never changed that
setting in the config file.
Sent from my iPhone
Robert, is it possible you've changed the partitioner during the upgrade?
(e.g. from RandomPartitioner to Murmur3Partitioner ?)
On Sat, Jan 4, 2014 at 9:32 PM, Mullen, Robert wrote:
> The nodetool repair command (which took about 8 hours) seems to have
> sync'd the data in us-east, all 3 nodes r
The nodetool repair command (which took about 8 hours) seems to have synced
the data in us-east; all 3 nodes are returning 59 for the count now. I'm
wondering if this has more to do with changing the replication factor from
2 to 3, and with how 2.0.2 reports the % owned, rather than the upgrade itself.
I st
from cql:
cqlsh> select count(*) from topics;
On Sat, Jan 4, 2014 at 11:10 AM, Mullen, Robert
wrote:
> I have a column family called "topics" which has a count of 47 on one
> node, 59 on another and 49 on another node. It was my understanding with a
> replication factor of 3 and 3 nodes in each ring that the nodes should be
> equal so I could
Hey Rob,
Thanks for the reply.
First, why would you upgrade to 2.0.2 when higher versions exist?
I upgraded a while ago when 2.0.2 was the latest version, haven't upgraded
since then as I'd like to figure out what's going on here before upgrading
again. I was on vacation for a while too, so am ju
On Fri, Jan 3, 2014 at 3:33 PM, Mullen, Robert wrote:
> I have a multi region cluster with 3 nodes in each data center, ec2
> us-east and west. Prior to upgrading to 2.0.2 from 1.2.6, the owns %
> of each node was 100%, which made sense because I had a replication factor
> of 3 for each data
own about 17% of the data now.
:~$ nodetool status
Datacenter: us-west-2
=
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address       Load       Tokens  Owns   Host ID                               Rack
UN  10.198.20.51  958.16 KB  256     16.9%  6a40b500-cff4-4513-b26b-ea33048c1590
t see particularly heavy use; it's mostly a catch-all cluster for
environments which don't have a dedicated cluster to themselves. I noticed
today that one of the nodes had died, because nodetool repair was failing
due to a down replica. I ran nodetool status and, sure enough, one of my
nodes shows up as down.
When I looked on the actual box, the cassandra process was up and running
and everything in the logs looked sensible. The most controversial thing I s
For node2:
~/Cassandra$ cat /etc/cassandra/cassandra-rackdc.properties
dc=DC2
rack=RAC1
When I call "nodetool status", it shows not 100% ownership of tokens for
each DC:
:~/Cassandra$ nodetool status
Datacenter: DC1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address  Load  Tokens  Owns
:47:17,063] Repair command #1 finished
This is the status before the repair (by the way, after the datacenter
has been bootstrapped from the remote one):
[root@host:/etc/puppet] nodetool status
Datacenter: us-east
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
> UN  XXX.XXX.XXX.XXX  7.73 MB  256  100.0%  a336efae-8d9c-4562-8e2a-b766b479ecb4  1d
> UN  XXX.XXX.XXX.XXX  7.73 MB  256  100.0%  ab1bbf0a-8ddc-4a12-a925-b119bd2de98e  1d
> UN  XXX.XXX.XXX.XXX  7.73 MB  256  100.0%  f53fd294-16cc-497e-9613-347f07ac3850  1d
>
-IOException-FAILED-TO-UNCOMPRESS-5-exception-when-running-nodetool-rebuild-td7586494.html
- I ran into this situation:
- all nodes have all data and agree on it:
[user@host1-dc1:~] nodetool status
Datacenter: na-prod
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address