Re: Bootstrapping to Replace a Dead Node vs. Adding a New Node: Consistency Guarantees

2019-05-01 Thread Fred Habash
I probably should've been clearer in my inquiry...

I'm investigating a scenario where our diagnostic data is telling us that a
small portion of application data has been lost. That is, getsstables for
the keys returns zero sstables on all cluster nodes.

The Last Pickle article below (which includes a case scenario described by
Jeff Jirsa) suggests a possible data loss case when bootstrapping a new node
to extend the cluster: the new node may bootstrap from a stale SECONDARY
replica. A fix was made in CASSANDRA-2434.

However, the article, the Jira, and Jeff's example all describe the
scenario of extending a cluster.

I understand replacing a dead node does not involve range movement. But
will the above fix prevent a bootstrap that replaces a dead node from
streaming the data from a (potentially) stale secondary replica? In other
words, does the fact that I was able to bootstrap replacements for multiple
dead nodes mean it is safe to do so?

http://thelastpickle.com/blog/2017/05/23/auto-bootstrapping-part1.html
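
For reference, a dead-node replacement is normally started by bootstrapping
the new node with the replace-address option, roughly like this (the IP is a
placeholder, and the exact flag name depends on the Cassandra version):

  # cassandra-env.sh on the replacement node, before its first start
  JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address_first_boot=10.0.0.12"

so the replacement streams the dead node's exact ranges instead of picking
new tokens.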

Thanks

On Tue, Apr 30, 2019 at 7:41 PM Alok Dwivedi 
wrote:

> When a new node joins the ring, it needs to own new token ranges. These
> should be unique to the new node (and ideally evenly distributed), and we
> don’t want to end up in a situation where two nodes joining simultaneously
> own the same range. Cassandra has the 2-minute wait rule for gossip state
> to propagate before a node is added, but this on its own does not guarantee
> that token ranges can’t overlap. See this ticket for more details:
> https://issues.apache.org/jira/browse/CASSANDRA-7069 To overcome this
> issue, the approach was to only allow one node to join at a time.
>
>
>
> When you replace a dead node, new token range selection does not apply,
> as the replacing node simply takes over the token ranges of the dead node.
> I think that’s why the restriction of only replacing one node at a time
> does not apply in this case.
>
>
>
>
>
> Thanks
>
> Alok Dwivedi
>
> Senior Consultant
>
> https://www.instaclustr.com/platform/
>
> From: Fd Habash
> Reply-To: "user@cassandra.apache.org"
> Date: Wednesday, 1 May 2019 at 06:18
> To: "user@cassandra.apache.org"
> Subject: Bootstrapping to Replace a Dead Node vs. Adding a New Node:
> Consistency Guarantees
>
>
>
> Reviewing the documentation &  based on my testing, using C* 2.2.8, I was
> not able to extend the cluster by adding multiple nodes simultaneously. I
> got an error message …
>
>
>
> Other bootstrapping/leaving/moving nodes detected, cannot bootstrap while
> cassandra.consistent.rangemovement is true
>
>
>
> I understand this is to force a node to bootstrap from the former owner of
> the range when adding a node as part of extending the cluster.
>
>
>
> However, I was able to bootstrap multiple nodes to replace dead nodes. C*
> did not complain about it.
>
>
>
> Is consistent range movement & the guarantee it offers to bootstrap from
> primary range owner not applicable when bootstrapping to replace dead
> nodes?
>
>
>
> 
> Thank you
>
>
>


-- 


Thank you


Re: Bootstrapping to Replace a Dead Node vs. Adding a New Node: Consistency Guarantees

2019-05-01 Thread Fred Habash
Thank you. 

Range movement is one reason this is enforced when adding a new node. But what
about forcing a consistent bootstrap, i.e. bootstrapping from the primary
owner of the range and not a secondary replica?

How is consistent bootstrap enforced when replacing a dead node?


-
Thank you. 

> On Apr 30, 2019, at 7:40 PM, Alok Dwivedi  
> wrote:
> 
> When a new node joins the ring, it needs to own new token ranges. These should
> be unique to the new node (and ideally evenly distributed), and we don’t want
> to end up in a situation where two nodes joining simultaneously own the same
> range. Cassandra has the 2-minute wait rule for gossip state to propagate
> before a node is added, but this on its own does not guarantee that token
> ranges can’t overlap. See this ticket for more details:
> https://issues.apache.org/jira/browse/CASSANDRA-7069 To overcome this issue,
> the approach was to only allow one node to join at a time.
>  
> When you replace a dead node, new token range selection does not apply, as
> the replacing node simply takes over the token ranges of the dead node. I
> think that’s why the restriction of only replacing one node at a time does
> not apply in this case.
>  
>  
> Thanks
> Alok Dwivedi
> Senior Consultant
> https://www.instaclustr.com/platform/
>
> From: Fd Habash 
> Reply-To: "user@cassandra.apache.org" 
> Date: Wednesday, 1 May 2019 at 06:18
> To: "user@cassandra.apache.org" 
> Subject: Bootstrapping to Replace a Dead Node vs. Adding a New Node: 
> Consistency Guarantees
>  
> Reviewing the documentation &  based on my testing, using C* 2.2.8, I was not 
> able to extend the cluster by adding multiple nodes simultaneously. I got an 
> error message …
>  
> Other bootstrapping/leaving/moving nodes detected, cannot bootstrap while 
> cassandra.consistent.rangemovement is true
>  
> I understand this is to force a node to bootstrap from the former owner of 
> the range when adding a node as part of extending the cluster.
>  
> However, I was able to bootstrap multiple nodes to replace dead nodes. C* did 
> not complain about it.
>  
> Is consistent range movement & the guarantee it offers to bootstrap from 
> primary range owner not applicable when bootstrapping to replace dead nodes?
>  
> 
> Thank you
>  


Re: Bootstrapping to Replace a Dead Node vs. Adding a New Node: Consistency Guarantees

2019-04-30 Thread Alok Dwivedi
When a new node joins the ring, it needs to own new token ranges. These should
be unique to the new node (and ideally evenly distributed), and we don’t want
to end up in a situation where two nodes joining simultaneously own the same
range. Cassandra has the 2-minute wait rule for gossip state to propagate
before a node is added, but this on its own does not guarantee that token
ranges can’t overlap. See this ticket for more details:
https://issues.apache.org/jira/browse/CASSANDRA-7069 To overcome this issue,
the approach was to only allow one node to join at a time.

When you replace a dead node, new token range selection does not apply, as the
replacing node simply takes over the token ranges of the dead node. I think
that’s why the restriction of only replacing one node at a time does not apply
in this case.
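
In practice that means extending the cluster one node at a time, roughly like
this (a minimal sketch; the service name and IP are placeholders):

  # on the single joining node only
  sudo service cassandra start

  # from an existing node: wait until the new node shows UN (Up/Normal)
  nodetool status | grep 10.0.0.21

  # only then start the next node, ideally a couple of minutes later so
  # gossip has settled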


Thanks
Alok Dwivedi
Senior Consultant
https://www.instaclustr.com/platform/





From: Fd Habash 
Reply-To: "user@cassandra.apache.org" 
Date: Wednesday, 1 May 2019 at 06:18
To: "user@cassandra.apache.org" 
Subject: Bootstrapping to Replace a Dead Node vs. Adding a New Node: 
Consistency Guarantees

Reviewing the documentation &  based on my testing, using C* 2.2.8, I was not 
able to extend the cluster by adding multiple nodes simultaneously. I got an 
error message …

Other bootstrapping/leaving/moving nodes detected, cannot bootstrap while 
cassandra.consistent.rangemovement is true

I understand this is to force a node to bootstrap from the former owner of the 
range when adding a node as part of extending the cluster.

However, I was able to bootstrap multiple nodes to replace dead nodes. C* did 
not complain about it.

Is consistent range movement & the guarantee it offers to bootstrap from 
primary range owner not applicable when bootstrapping to replace dead nodes?


Thank you



Bootstrapping to Replace a Dead Node vs. Adding a New Node: Consistency Guarantees

2019-04-30 Thread Fd Habash
Reviewing the documentation &  based on my testing, using C* 2.2.8, I was not 
able to extend the cluster by adding multiple nodes simultaneously. I got an 
error message …

Other bootstrapping/leaving/moving nodes detected, cannot bootstrap while 
cassandra.consistent.rangemovement is true

I understand this is to force a node to bootstrap from the former owner of the 
range when adding a node as part of extending the cluster.
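
For reference, that check comes from a startup property (if I remember right,
it defaults to true from 2.1 onwards); it can be turned off, though presumably
at the cost of the guarantee, so this is only a sketch and not a
recommendation:

  # cassandra-env.sh (or the command line) on the joining node
  JVM_OPTS="$JVM_OPTS -Dcassandra.consistent.rangemovement=false"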

However, I was able to bootstrap multiple nodes to replace dead nodes. C* did 
not complain about it.

Is consistent range movement, and the guarantee it offers to bootstrap from
the primary range owner, not applicable when bootstrapping to replace dead
nodes?


Thank you



Re: Problem adding a new node to a cluster

2017-12-18 Thread Jonathan Haddad
Definitely upgrade to 3.11.1.
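
If it helps, the per-node sequence for the rolling part is usually something
like the following sketch (package and service names depend on your install):

  nodetool drain                # flush memtables, stop accepting traffic
  sudo service cassandra stop
  # install the 3.11.1 packages and merge any cassandra.yaml changes
  sudo service cassandra start
  nodetool upgradesstables      # only if the sstable format changed; harmless otherwise
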
On Sun, Dec 17, 2017 at 8:54 PM Pradeep Chhetri 
wrote:

> Hello Kurt,
>
> I realized it was because of RAM shortage which caused the issue. I bumped
> up the memory of the machine and node bootstrap started but this time i hit
> this bug of cassandra 3.9:
>
> https://issues.apache.org/jira/browse/CASSANDRA-12905
>
> I tried running nodetool bootstrap resume multiple times but every time it
> fails with exception after completing around 963%
>
> https://gist.github.com/chhetripradeep/93567ad24c44ba72d0753d4088a10ce4
>
> Do you think there is some workaround for this. Or do you suggest
> upgrading to v3.11 which has this fix.
>
> Also, can we just upgrade the cassandra from 3.9 -> 3.11 in rolling
> fashion or do we need to take care of something in case we have to upgrade.
>
> Thanks.
>
>
>
>
>
>
> On Mon, Dec 18, 2017 at 5:45 AM, kurt greaves 
> wrote:
>
>> You haven't provided enough logs for us to really tell what's wrong. I
>> suggest running nodetool netstats | grep -v 100% to see if any
>> streams are still ongoing, and also running nodetool compactionstats -H to
>> see if there are any index builds the node might be waiting for prior to
>> joining the ring.
>>
>> If neither of those provide any useful information, send us the full
>> system.log and debug.log
>>
>> On 17 December 2017 at 11:19, Pradeep Chhetri 
>> wrote:
>>
>>> Hello all,
>>>
>>> I am trying to add a 4th node to a 3-node cluster which is using
>>> SimpleSnitch. But this new node is stuck in Joining state for last 20
>>> hours. We have around 10GB data per node with RF as 3.
>>>
>>> Its mostly stuck in redistributing index summaries phase.
>>>
>>> Here are the logs:
>>>
>>> https://gist.github.com/chhetripradeep/37e4f232ddf0dd3b830091ca9829416d
>>>
>>> # nodetool status
>>> Datacenter: datacenter1
>>> ===
>>> Status=Up/Down
>>> |/ State=Normal/Leaving/Joining/Moving
>>> --  Address        Load       Tokens  Owns (effective)  Host ID                               Rack
>>> UJ  10.42.187.43   9.73 GiB   256     ?                 36384dc5-a183-4a5b-ae2d-ee67c897df3d  rack1
>>> UN  10.42.106.184  9.95 GiB   256     100.0%            42cd09e9-8efb-472f-ace6-c7bb98634887  rack1
>>> UN  10.42.169.195  10.35 GiB  256     100.0%            9fcc99a1-6334-4df8-818d-b097b1920bb9  rack1
>>> UN  10.42.209.245  8.54 GiB   256     100.0%            9b99d5d8-818e-4741-9533-259d0fc0e16d  rack1
>>>
>>> Not sure what is going here, will be very helpful if someone can help in
>>> identifying the issue.
>>>
>>> Thank you.
>>>
>>>
>>>
>>
>


Re: Problem adding a new node to a cluster

2017-12-17 Thread Pradeep Chhetri
Hello Kurt,

I realized it was a RAM shortage that caused the issue. I bumped up the
memory of the machine and the node bootstrap started, but this time I hit
this Cassandra 3.9 bug:

https://issues.apache.org/jira/browse/CASSANDRA-12905

I tried running nodetool bootstrap resume multiple times, but every time it
fails with an exception after reporting around 963% complete:

https://gist.github.com/chhetripradeep/93567ad24c44ba72d0753d4088a10ce4

Do you think there is some workaround for this, or do you suggest upgrading
to v3.11, which has this fix?

Also, can we just upgrade Cassandra from 3.9 -> 3.11 in a rolling fashion,
or is there anything we need to take care of for the upgrade?

Thanks.






On Mon, Dec 18, 2017 at 5:45 AM, kurt greaves  wrote:

> You haven't provided enough logs for us to really tell what's wrong. I
> suggest running nodetool netstats | grep -v 100% to see if any
> streams are still ongoing, and also running nodetool compactionstats -H to
> see if there are any index builds the node might be waiting for prior to
> joining the ring.
>
> If neither of those provide any useful information, send us the full
> system.log and debug.log
>
> On 17 December 2017 at 11:19, Pradeep Chhetri 
> wrote:
>
>> Hello all,
>>
>> I am trying to add a 4th node to a 3-node cluster which is using
>> SimpleSnitch. But this new node is stuck in Joining state for last 20
>> hours. We have around 10GB data per node with RF as 3.
>>
>> Its mostly stuck in redistributing index summaries phase.
>>
>> Here are the logs:
>>
>> https://gist.github.com/chhetripradeep/37e4f232ddf0dd3b830091ca9829416d
>>
>> # nodetool status
>> Datacenter: datacenter1
>> ===
>> Status=Up/Down
>> |/ State=Normal/Leaving/Joining/Moving
>> --  Address        Load       Tokens  Owns (effective)  Host ID                               Rack
>> UJ  10.42.187.43   9.73 GiB   256     ?                 36384dc5-a183-4a5b-ae2d-ee67c897df3d  rack1
>> UN  10.42.106.184  9.95 GiB   256     100.0%            42cd09e9-8efb-472f-ace6-c7bb98634887  rack1
>> UN  10.42.169.195  10.35 GiB  256     100.0%            9fcc99a1-6334-4df8-818d-b097b1920bb9  rack1
>> UN  10.42.209.245  8.54 GiB   256     100.0%            9b99d5d8-818e-4741-9533-259d0fc0e16d  rack1
>>
>> Not sure what is going here, will be very helpful if someone can help in
>> identifying the issue.
>>
>> Thank you.
>>
>>
>>
>


Re: Problem adding a new node to a cluster

2017-12-17 Thread kurt greaves
You haven't provided enough logs for us to really tell what's wrong. I
suggest running nodetool netstats | grep -v 100% to see if any streams
are still ongoing, and also running nodetool compactionstats -H to see if
there are any index builds the node might be waiting for prior to joining
the ring.

If neither of those provide any useful information, send us the full
system.log and debug.log
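
Concretely, on the joining node that would be something like:

  nodetool netstats | grep -v "100%"   # any streams still in flight?
  nodetool compactionstats -H          # any index builds still pending?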

On 17 December 2017 at 11:19, Pradeep Chhetri  wrote:

> Hello all,
>
> I am trying to add a 4th node to a 3-node cluster which is using
> SimpleSnitch. But this new node is stuck in Joining state for last 20
> hours. We have around 10GB data per node with RF as 3.
>
> Its mostly stuck in redistributing index summaries phase.
>
> Here are the logs:
>
> https://gist.github.com/chhetripradeep/37e4f232ddf0dd3b830091ca9829416d
>
> # nodetool status
> Datacenter: datacenter1
> ===
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address        Load       Tokens  Owns (effective)  Host ID                               Rack
> UJ  10.42.187.43   9.73 GiB   256     ?                 36384dc5-a183-4a5b-ae2d-ee67c897df3d  rack1
> UN  10.42.106.184  9.95 GiB   256     100.0%            42cd09e9-8efb-472f-ace6-c7bb98634887  rack1
> UN  10.42.169.195  10.35 GiB  256     100.0%            9fcc99a1-6334-4df8-818d-b097b1920bb9  rack1
> UN  10.42.209.245  8.54 GiB   256     100.0%            9b99d5d8-818e-4741-9533-259d0fc0e16d  rack1
>
> Not sure what is going here, will be very helpful if someone can help in
> identifying the issue.
>
> Thank you.
>
>
>


Problem adding a new node to a cluster

2017-12-17 Thread Pradeep Chhetri
Hello all,

I am trying to add a 4th node to a 3-node cluster which is using
SimpleSnitch. But this new node has been stuck in the Joining state for the
last 20 hours. We have around 10GB of data per node with RF 3.

It's mostly stuck in the "redistributing index summaries" phase.

Here are the logs:

https://gist.github.com/chhetripradeep/37e4f232ddf0dd3b830091ca9829416d

# nodetool status
Datacenter: datacenter1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens  Owns (effective)  Host ID                               Rack
UJ  10.42.187.43   9.73 GiB   256     ?                 36384dc5-a183-4a5b-ae2d-ee67c897df3d  rack1
UN  10.42.106.184  9.95 GiB   256     100.0%            42cd09e9-8efb-472f-ace6-c7bb98634887  rack1
UN  10.42.169.195  10.35 GiB  256     100.0%            9fcc99a1-6334-4df8-818d-b097b1920bb9  rack1
UN  10.42.209.245  8.54 GiB   256     100.0%            9b99d5d8-818e-4741-9533-259d0fc0e16d  rack1

Not sure what is going on here; it would be very helpful if someone could
help identify the issue.

Thank you.


Re: Adding a New Node

2017-10-24 Thread shalom sagges
Thanks Kurt!

That sorted things in my head. Much appreciated!



On Tue, Oct 24, 2017 at 12:29 PM, kurt greaves  wrote:

> Your node shouldn't show up in DC1 in nodetool status from the other
> nodes, this implies a configuration problem. Sounds like you haven't added
> the new node to all the existing nodes cassandra-topology.properties file.
> You don't need to do a rolling restart with PropertyFileSnitch, it should
> reload the cassandra-topology.properties file automatically every 5 seconds.
>
> With GPFS each node only needs to know about its own topology settings in
> cassandra-rackdc.properties, so the problem you point out in 2 goes away,
> as when adding a node you only need to specify its configuration and that
> will be propagated to the rest of the cluster through gossip.
>
> On 24 October 2017 at 07:13, shalom sagges  wrote:
>
>> Hi Everyone,
>>
>> I have 2 DCs (v2.0.14) with the following topology.properties:
>>
>> DC1:
>> xxx11=DC1:RAC1
>> xxx12=DC1:RAC1
>> xxx13=DC1:RAC1
>> xxx14=DC1:RAC1
>> xxx15=DC1:RAC1
>>
>>
>> DC2:
>> yyy11=DC2:RAC1
>> yyy12=DC2:RAC1
>> yyy13=DC2:RAC1
>> yyy14=DC2:RAC1
>> yyy15=DC2:RAC1
>>
>>
>> # default for unknown nodes
>> default=DC1:RAC1
>>
>> Now let's say that I want to add a new node yyy16 to DC2, and I've added
>> yyy16 to the topology properties file only on that specific node.
>>
>> What I saw is that during bootstrap, the new node is receiving data only
>> from DC2 nodes (which is what I want), but nodetool status on other nodes
>> shows that it was joining to DC1 (which is the default DC for unknown
>> nodes).
>>
>> So I have a few questions on this matter:
>>
>> 1) What are the implications of such a bootstrap, where the joining node
>> actually gets data from nodes in the right DC, but all nodes see it in the
>> default DC when running nodetool status?
>>
>> 2) I know that I must change the topology.properties file on all nodes to
>> be the same. If I do that, do I need to perform a rolling restart on all of
>> the cluster before each bootstrap (which is a real pain for large clusters)?
>>
>> 3) Regarding the Snitch, the docs say that the recommended snitch in
>> Production is the GossipingPropertyFileSnitch with
>> cassandra-rackdc.properties file.
>> What's the difference between the GossipingPropertyFileSnitch and the
>> PropertyFileSnitch?
>> I currently use PropertyFileSnitch and cassandra-topology.properties.
>>
>>
>> Thanks!
>>
>>
>>
>>
>>
>>
>


Re: Adding a New Node

2017-10-24 Thread kurt greaves
Your node shouldn't show up in DC1 in nodetool status from the other nodes;
this implies a configuration problem. It sounds like you haven't added the new
node to all the existing nodes' cassandra-topology.properties files. You
don't need to do a rolling restart with PropertyFileSnitch; it should
reload the cassandra-topology.properties file automatically every 5 seconds.

With GPFS each node only needs to know about its own topology settings in
cassandra-rackdc.properties, so the problem you point out in 2 goes away,
as when adding a node you only need to specify its configuration and that
will be propagated to the rest of the cluster through gossip.
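
For example, with GossipingPropertyFileSnitch the new yyy16 node would only
need something like this locally (a sketch, assuming your DC/rack names):

  # cassandra.yaml
  endpoint_snitch: GossipingPropertyFileSnitch

  # cassandra-rackdc.properties (on yyy16 only)
  dc=DC2
  rack=RAC1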

On 24 October 2017 at 07:13, shalom sagges  wrote:

> Hi Everyone,
>
> I have 2 DCs (v2.0.14) with the following topology.properties:
>
> DC1:
> xxx11=DC1:RAC1
> xxx12=DC1:RAC1
> xxx13=DC1:RAC1
> xxx14=DC1:RAC1
> xxx15=DC1:RAC1
>
>
> DC2:
> yyy11=DC2:RAC1
> yyy12=DC2:RAC1
> yyy13=DC2:RAC1
> yyy14=DC2:RAC1
> yyy15=DC2:RAC1
>
>
> # default for unknown nodes
> default=DC1:RAC1
>
> Now let's say that I want to add a new node yyy16 to DC2, and I've added
> yyy16 to the topology properties file only on that specific node.
>
> What I saw is that during bootstrap, the new node is receiving data only
> from DC2 nodes (which is what I want), but nodetool status on other nodes
> shows that it was joining to DC1 (which is the default DC for unknown
> nodes).
>
> So I have a few questions on this matter:
>
> 1) What are the implications of such a bootstrap, where the joining node
> actually gets data from nodes in the right DC, but all nodes see it in the
> default DC when running nodetool status?
>
> 2) I know that I must change the topology.properties file on all nodes to
> be the same. If I do that, do I need to perform a rolling restart on all of
> the cluster before each bootstrap (which is a real pain for large clusters)?
>
> 3) Regarding the Snitch, the docs say that the recommended snitch in
> Production is the GossipingPropertyFileSnitch with
> cassandra-rackdc.properties file.
> What's the difference between the GossipingPropertyFileSnitch and the
> PropertyFileSnitch?
> I currently use PropertyFileSnitch and cassandra-topology.properties.
>
>
> Thanks!
>
>
>
>
>
>


Adding a New Node

2017-10-24 Thread shalom sagges
Hi Everyone,

I have 2 DCs (v2.0.14) with the following topology.properties:

DC1:
xxx11=DC1:RAC1
xxx12=DC1:RAC1
xxx13=DC1:RAC1
xxx14=DC1:RAC1
xxx15=DC1:RAC1


DC2:
yyy11=DC2:RAC1
yyy12=DC2:RAC1
yyy13=DC2:RAC1
yyy14=DC2:RAC1
yyy15=DC2:RAC1


# default for unknown nodes
default=DC1:RAC1

Now let's say that I want to add a new node yyy16 to DC2, and I've added
yyy16 to the topology properties file only on that specific node.

What I saw is that during bootstrap, the new node is receiving data only
from DC2 nodes (which is what I want), but nodetool status on other nodes
shows that it was joining to DC1 (which is the default DC for unknown
nodes).

So I have a few questions on this matter:

1) What are the implications of such a bootstrap, where the joining node
actually gets data from nodes in the right DC, but all nodes see it in the
default DC when running nodetool status?

2) I know that I must change the topology.properties file on all nodes to
be the same. If I do that, do I need to perform a rolling restart on all of
the cluster before each bootstrap (which is a real pain for large clusters)?

3) Regarding the Snitch, the docs say that the recommended snitch in
Production is the GossipingPropertyFileSnitch with
cassandra-rackdc.properties file.
What's the difference between the GossipingPropertyFileSnitch and the
PropertyFileSnitch?
I currently use PropertyFileSnitch and cassandra-topology.properties.


Thanks!


Re: Adding a new node with the double of disk space

2017-08-19 Thread Jeff Jirsa
You'd use different num_tokens only if you wanted an imbalance (e.g. new
hardware specs where you wanted to use fewer, larger machines).

-- 
Jeff Jirsa


> On Aug 19, 2017, at 6:04 PM, Subroto Barua <sbarua...@yahoo.com.INVALID> 
> wrote:
> 
> Jeff,
> 
> Is it ok to have different values of num_tokens per node in a cluster? Won't
> it create cluster imbalance? Or is it better to initiate it in a separate DC?
> 
> Subroto
> 
> 
> On Friday, August 18, 2017, 5:34:11 AM PDT, Durity, Sean R 
> <sean_r_dur...@homedepot.com> wrote:
> 
> 
> I am doing some on-the-job-learning on this newer feature of the 3.x line, 
> where the token generation algorithm will compensate for different size nodes 
> in a cluster. In fact, it is one of the main reasons I upgraded to 3.0.13, 
> because I have a number of original nodes in a cluster that are about half 
> the size of the newer nodes. With the same number of vnodes, they can get 
> overwhelmed with too much data and have to be rebuilt, etc.
> 
>  
> 
> So, I am cutting vnodes in half on those original nodes and rebuilding them. 
> So far, it is working as designed. The data size is about half on the smaller 
> nodes.
> 
>  
> 
> With the more current advice being to use less vnodes, for the original 
> question below, I might consider adding the new node in at 256 vnodes and 
> then rebuilding all the other nodes at 128. Of course the cluster size and 
> amount of data would be important factors, as well as the future growth of 
> the cluster and the expected size of any additional nodes.
> 
>  
> 
>  
> 
> Sean Durity
> 
>  
> 
> From: Jeff Jirsa [mailto:jji...@gmail.com] 
> Sent: Thursday, August 17, 2017 4:20 PM
> To: cassandra <user@cassandra.apache.org>
> Subject: Re: Adding a new node with the double of disk space
> 
>  
> 
> If you really double the hardware in every way, it's PROBABLY reasonable to 
> double num_tokens. It won't be quite the same as doubling all-the-things, 
> because you still have a single JVM, and you'll still have to deal with GC as 
> you're now reading twice as much and generating twice as much garbage, but 
> you can probably adjust the tuning of the heap to compensate.
> 
>  
> 
>  
> 
>  
> 
> On Thu, Aug 17, 2017 at 1:00 PM, Kevin O'Connor <ke...@reddit.com.invalid> 
> wrote:
> 
> Are you saying if a node had double the hardware capacity in every way it 
> would be a bad idea to up num_tokens? I thought that was the whole idea of 
> that setting though?
> 
>  
> 
> On Thu, Aug 17, 2017 at 9:52 AM, Carlos Rolo <r...@pythian.com> wrote:
> 
> No.
> 
>  
> 
> If you would double all the hardware on that node vs the others would still 
> be a bad idea.
> 
> Keep the cluster uniform vnodes wise.
> 
> 
> Regards,
> 
>  
> 
> Carlos Juzarte Rolo
> 
> Cassandra Consultant / Datastax Certified Architect / Cassandra MVP
> 
>  
> 
> Pythian - Love your data
> 
>  
> 
> rolo@pythian | Twitter: @cjrolo | Skype: cjr2k3 | Linkedin: 
> linkedin.com/in/carlosjuzarterolo 
> 
> Mobile: +351 918 918 100
> 
> www.pythian.com
> 
>  
> 
> On Thu, Aug 17, 2017 at 5:47 PM, Cogumelos Maravilha 
> <cogumelosmaravi...@sapo.pt> wrote:
> 
> Hi all,
> 
> I need to add a new node to my cluster but this time the new node will
> have the double of disk space comparing to the other nodes.
> 
> I'm using the default vnodes (num_tokens: 256). To fully use the disk
> space in the new node I just have to configure num_tokens: 512?
> 
> Thanks in advance.
> 
> 
> 
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>  
> 
>  
> 


Re: RE: Adding a new node with the double of disk space

2017-08-19 Thread Subroto Barua
Jeff,
Is it ok to have different values of num_tokens per node in a cluster? Won't it
create cluster imbalance? Or is it better to initiate it in a separate DC?
Subroto

On Friday, August 18, 2017, 5:34:11 AM PDT, Durity, Sean R 
<sean_r_dur...@homedepot.com> wrote:

I am doing some on-the-job-learning on this newer feature of the 3.x line, 
where the token generation algorithm will compensate for different size nodes 
in a cluster. In fact, it is one of the main reasons I upgraded to 3.0.13, 
because I have a number of original nodes in a cluster that are about half the 
size of the newer nodes. With the same number of vnodes, they can get 
overwhelmed with too much data and have to be rebuilt, etc. 
 
  
 
So, I am cutting vnodes in half on those original nodes and rebuilding them. So 
far, it is working as designed. The data size is about half on the smaller 
nodes.
 
  
 
With the more current advice being to use less vnodes, for the original 
question below, I might consider adding the new node in at 256 vnodes and then 
rebuilding all the other nodes at 128. Of course the cluster size and amount of 
data would be important factors, as well as the future growth of the cluster 
and the expected size of any additional nodes.
 
  
 
  
 
Sean Durity
 
  
 
From: Jeff Jirsa [mailto:jji...@gmail.com]
Sent: Thursday, August 17, 2017 4:20 PM
To: cassandra <user@cassandra.apache.org>
Subject: Re: Adding a new node with the double of disk space
 
  
 
If you really double the hardware in every way, it's PROBABLY reasonable to 
double num_tokens. It won't be quite the same as doubling all-the-things, 
because you still have a single JVM, and you'll still have to deal with GC as 
you're now reading twice as much and generating twice as much garbage, but you 
can probably adjust the tuning of the heap to compensate.
 
  
 
  
 
  
 
On Thu, Aug 17, 2017 at 1:00 PM, Kevin O'Connor <ke...@reddit.com.invalid> 
wrote:
 

Are you saying if a node had double the hardware capacity in every way it would 
be a bad idea to up num_tokens? I thought that was the whole idea of that 
setting though?
 
  
 
On Thu, Aug 17, 2017 at 9:52 AM, Carlos Rolo <r...@pythian.com> wrote:
 

No.
 
  
 
If you would double all the hardware on that node vs the others would still be 
a bad idea.
 
Keep the cluster uniform vnodes wise.
 


 
Regards,
 
  
 
Carlos Juzarte Rolo
 
Cassandra Consultant / Datastax Certified Architect / Cassandra MVP
 
 
 
Pythian - Love your data
 
  
 
rolo@pythian | Twitter: @cjrolo | Skype: cjr2k3 | Linkedin: 
linkedin.com/in/carlosjuzarterolo

 
Mobile: +351 918 918 100 
 
www.pythian.com
 
  
 
On Thu, Aug 17, 2017 at 5:47 PM, Cogumelos Maravilha 
<cogumelosmaravi...@sapo.pt> wrote:
 


Hi all,

I need to add a new node to my cluster but this time the new node will
have the double of disk space comparing to the other nodes.

I'm using the default vnodes (num_tokens: 256). To fully use the disk
space in the new node I just have to configure num_tokens: 512?

Thanks in advance.



-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org

RE: Adding a new node with the double of disk space

2017-08-18 Thread Durity, Sean R
I am doing some on-the-job-learning on this newer feature of the 3.x line, 
where the token generation algorithm will compensate for different size nodes 
in a cluster. In fact, it is one of the main reasons I upgraded to 3.0.13, 
because I have a number of original nodes in a cluster that are about half the 
size of the newer nodes. With the same number of vnodes, they can get 
overwhelmed with too much data and have to be rebuilt, etc.

So, I am cutting vnodes in half on those original nodes and rebuilding them. So 
far, it is working as designed. The data size is about half on the smaller 
nodes.

With the more current advice being to use less vnodes, for the original 
question below, I might consider adding the new node in at 256 vnodes and then 
rebuilding all the other nodes at 128. Of course the cluster size and amount of 
data would be important factors, as well as the future growth of the cluster 
and the expected size of any additional nodes.
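
If memory serves, the setting behind that smarter placement lives in
cassandra.yaml (3.0 and later) and only takes effect when a node first
bootstraps; roughly (the keyspace name is a placeholder, point it at one that
uses your production replication factor):

  num_tokens: 128
  allocate_tokens_for_keyspace: my_keyspace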


Sean Durity

From: Jeff Jirsa [mailto:jji...@gmail.com]
Sent: Thursday, August 17, 2017 4:20 PM
To: cassandra <user@cassandra.apache.org>
Subject: Re: Adding a new node with the double of disk space

If you really double the hardware in every way, it's PROBABLY reasonable to 
double num_tokens. It won't be quite the same as doubling all-the-things, 
because you still have a single JVM, and you'll still have to deal with GC as 
you're now reading twice as much and generating twice as much garbage, but you 
can probably adjust the tuning of the heap to compensate.



On Thu, Aug 17, 2017 at 1:00 PM, Kevin O'Connor <ke...@reddit.com.invalid>
wrote:
Are you saying if a node had double the hardware capacity in every way it would 
be a bad idea to up num_tokens? I thought that was the whole idea of that 
setting though?

On Thu, Aug 17, 2017 at 9:52 AM, Carlos Rolo <r...@pythian.com> wrote:
No.

If you would double all the hardware on that node vs the others would still be 
a bad idea.
Keep the cluster uniform vnodes wise.

Regards,

Carlos Juzarte Rolo
Cassandra Consultant / Datastax Certified Architect / Cassandra MVP

Pythian - Love your data

rolo@pythian | Twitter: @cjrolo | Skype: cjr2k3 | Linkedin: 
linkedin.com/in/carlosjuzarterolo
Mobile: +351 918 918 100
www.pythian.com

On Thu, Aug 17, 2017 at 5:47 PM, Cogumelos Maravilha
<cogumelosmaravi...@sapo.pt> wrote:
Hi all,

I need to add a new node to my cluster but this time the new node will
have the double of disk space comparing to the other nodes.

I'm using the default vnodes (num_tokens: 256). To fully use the disk
space in the new node I just have to configure num_tokens: 512?

Thanks in advance.



-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org





Re: Adding a new node with the double of disk space

2017-08-18 Thread Carlos Rolo
I would rather spin up 2 JVMs on the same hardware (if you double
everything) than have to deal with what Jeff stated.

Also, certain operations are not really fond of a large number of vnodes
(e.g. repair). There were a lot of improvements in the 3.x release cycle, but
I still tend to reduce the vnode count rather than increase it.

Regards,

Carlos Juzarte Rolo
Cassandra Consultant / Datastax Certified Architect / Cassandra MVP

Pythian - Love your data

rolo@pythian | Twitter: @cjrolo | Skype: cjr2k3 | Linkedin:
linkedin.com/in/carlosjuzarterolo
Mobile: +351 918 918 100
www.pythian.com

On Thu, Aug 17, 2017 at 9:19 PM, Jeff Jirsa  wrote:

> If you really double the hardware in every way, it's PROBABLY reasonable
> to double num_tokens. It won't be quite the same as doubling
> all-the-things, because you still have a single JVM, and you'll still have
> to deal with GC as you're now reading twice as much and generating twice as
> much garbage, but you can probably adjust the tuning of the heap to
> compensate.
>
>
>
> On Thu, Aug 17, 2017 at 1:00 PM, Kevin O'Connor 
> wrote:
>
>> Are you saying if a node had double the hardware capacity in every way it
>> would be a bad idea to up num_tokens? I thought that was the whole idea of
>> that setting though?
>>
>> On Thu, Aug 17, 2017 at 9:52 AM, Carlos Rolo  wrote:
>>
>>> No.
>>>
>>> If you would double all the hardware on that node vs the others would
>>> still be a bad idea.
>>> Keep the cluster uniform vnodes wise.
>>>
>>> Regards,
>>>
>>> Carlos Juzarte Rolo
>>> Cassandra Consultant / Datastax Certified Architect / Cassandra MVP
>>>
>>> Pythian - Love your data
>>>
>>> rolo@pythian | Twitter: @cjrolo | Skype: cjr2k3 | Linkedin:
>>> linkedin.com/in/carlosjuzarterolo
>>> Mobile: +351 918 918 100
>>> www.pythian.com
>>>
>>> On Thu, Aug 17, 2017 at 5:47 PM, Cogumelos Maravilha <
>>> cogumelosmaravi...@sapo.pt> wrote:
>>>
 Hi all,

 I need to add a new node to my cluster but this time the new node will
 have the double of disk space comparing to the other nodes.

 I'm using the default vnodes (num_tokens: 256). To fully use the disk
 space in the new node I just have to configure num_tokens: 512?

 Thanks in advance.



 -
 To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
 For additional commands, e-mail: user-h...@cassandra.apache.org


>>>
>>> --
>>>
>>>
>>>
>>>
>>
>

-- 


--





Re: Adding a new node with the double of disk space

2017-08-17 Thread Jeff Jirsa
If you really double the hardware in every way, it's PROBABLY reasonable to
double num_tokens. It won't be quite the same as doubling all-the-things,
because you still have a single JVM, and you'll still have to deal with GC
as you're now reading twice as much and generating twice as much garbage,
but you can probably adjust the tuning of the heap to compensate.
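
Concretely that's just the per-node setting in cassandra.yaml, set before the
new node bootstraps (a sketch of the idea only; whether you actually want to
do it is the debatable part):

  # existing nodes
  num_tokens: 256

  # the new node with double the disk
  num_tokens: 512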



On Thu, Aug 17, 2017 at 1:00 PM, Kevin O'Connor 
wrote:

> Are you saying if a node had double the hardware capacity in every way it
> would be a bad idea to up num_tokens? I thought that was the whole idea of
> that setting though?
>
> On Thu, Aug 17, 2017 at 9:52 AM, Carlos Rolo  wrote:
>
>> No.
>>
>> If you would double all the hardware on that node vs the others would
>> still be a bad idea.
>> Keep the cluster uniform vnodes wise.
>>
>> Regards,
>>
>> Carlos Juzarte Rolo
>> Cassandra Consultant / Datastax Certified Architect / Cassandra MVP
>>
>> Pythian - Love your data
>>
>> rolo@pythian | Twitter: @cjrolo | Skype: cjr2k3 | Linkedin:
>> linkedin.com/in/carlosjuzarterolo
>> Mobile: +351 918 918 100
>> www.pythian.com
>>
>> On Thu, Aug 17, 2017 at 5:47 PM, Cogumelos Maravilha <
>> cogumelosmaravi...@sapo.pt> wrote:
>>
>>> Hi all,
>>>
>>> I need to add a new node to my cluster but this time the new node will
>>> have the double of disk space comparing to the other nodes.
>>>
>>> I'm using the default vnodes (num_tokens: 256). To fully use the disk
>>> space in the new node I just have to configure num_tokens: 512?
>>>
>>> Thanks in advance.
>>>
>>>
>>>
>>> -
>>> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
>>> For additional commands, e-mail: user-h...@cassandra.apache.org
>>>
>>>
>>
>> --
>>
>>
>>
>>
>


Re: Adding a new node with the double of disk space

2017-08-17 Thread Kevin O'Connor
Are you saying if a node had double the hardware capacity in every way it
would be a bad idea to up num_tokens? I thought that was the whole idea of
that setting though?

On Thu, Aug 17, 2017 at 9:52 AM, Carlos Rolo  wrote:

> No.
>
> If you would double all the hardware on that node vs the others would
> still be a bad idea.
> Keep the cluster uniform vnodes wise.
>
> Regards,
>
> Carlos Juzarte Rolo
> Cassandra Consultant / Datastax Certified Architect / Cassandra MVP
>
> Pythian - Love your data
>
> rolo@pythian | Twitter: @cjrolo | Skype: cjr2k3 | Linkedin:
> linkedin.com/in/carlosjuzarterolo
> Mobile: +351 918 918 100
> www.pythian.com
>
> On Thu, Aug 17, 2017 at 5:47 PM, Cogumelos Maravilha <
> cogumelosmaravi...@sapo.pt> wrote:
>
>> Hi all,
>>
>> I need to add a new node to my cluster but this time the new node will
>> have the double of disk space comparing to the other nodes.
>>
>> I'm using the default vnodes (num_tokens: 256). To fully use the disk
>> space in the new node I just have to configure num_tokens: 512?
>>
>> Thanks in advance.
>>
>>
>>
>> -
>> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
>> For additional commands, e-mail: user-h...@cassandra.apache.org
>>
>>
>
> --
>
>
>
>


Re: Adding a new node with the double of disk space

2017-08-17 Thread Carlos Rolo
No.

Even if you doubled all the hardware on that node vs. the others, it would
still be a bad idea.
Keep the cluster uniform, vnode-wise.

Regards,

Carlos Juzarte Rolo
Cassandra Consultant / Datastax Certified Architect / Cassandra MVP

Pythian - Love your data

rolo@pythian | Twitter: @cjrolo | Skype: cjr2k3 | Linkedin:
linkedin.com/in/carlosjuzarterolo
Mobile: +351 918 918 100
www.pythian.com

On Thu, Aug 17, 2017 at 5:47 PM, Cogumelos Maravilha <
cogumelosmaravi...@sapo.pt> wrote:

> Hi all,
>
> I need to add a new node to my cluster but this time the new node will
> have the double of disk space comparing to the other nodes.
>
> I'm using the default vnodes (num_tokens: 256). To fully use the disk
> space in the new node I just have to configure num_tokens: 512?
>
> Thanks in advance.
>
>
>
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>
>

-- 


--





Adding a new node with the double of disk space

2017-08-17 Thread Cogumelos Maravilha
Hi all,

I need to add a new node to my cluster, but this time the new node will
have double the disk space compared to the other nodes.

I'm using the default vnodes (num_tokens: 256). To fully use the disk
space on the new node, do I just have to configure num_tokens: 512?

Thanks in advance.



-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org



Re: Adding a New Node With The Same IP of an Old Node

2017-07-05 Thread Shalom Sagges
Thanks Nitan!

Eventually it was a firewall issue related to the Centos7 node.

Once fixed, the rolling restart resolved the issue completely.

Thanks again!



Shalom Sagges
DBA
T: +972-74-700-4035
 
 We Create Meaningful Connections



On Tue, Jul 4, 2017 at 7:52 PM, Nitan Kainth  wrote:

> Try rolling restart of cluster as solution for schema version mismatch.
>
>
> Sent from my iPhone
>
> On Jul 4, 2017, at 8:31 AM, Shalom Sagges  wrote:
>
> Hi Experts,
>
> My plan is to upgrade the C* nodes' OS from Centos6 to Centos7.
> Since an upgrade wasn't recommended I needed to install new machines with
> Centos7 and join them to the cluster.
> I didn't want to decommission/bootstrap dozens of nodes, so I decided to
> do the following:
>
>- Create a new machine
>- Copy all data, schema data and commit logs using rsync from the old
>node to the new one
>- Stop Cassandra on the old node, copy the delta and shut the node down
>- Change the hostname and IP address of the new node to the hostname
>and IP of the old node.
>- Start Cassandra on the new node.
>
> My thought was that the "new" node will join the cluster as the "old" node
> since it has all of its data and metadata, however, after Cassandra started
> up, it saw all its peers in DN state. The same goes for the other nodes,
> that saw the new node is DN state.
>
> I didn't find any errors or warnings in the logs, but I did see that the
> schema version on the new node was different from the others (I'm assuming
> that's the issue here?)
>
>
> *Functioning node*
> nodetool describecluster
> Cluster Information:
> Name: MyCluster
> Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
> Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
> Schema versions:
> 503bf151-c59c-35a0-8350-ca73e82098f5: [x.x.x.32,
> x.x.x.126, x.x.x.35, x.x.x.1, x.x.x.28, x.x.x.15, x.x.x.252, x.x.x.253,
> x.x.x.31]
>
> UNREACHABLE: [x.x.x.2]
>
>
> *New node*
> nodetool describecluster
> Cluster Information:
> Name: MyCluster
> Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
> Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
> Schema versions:
> dab67163-65d4-3895-82e6-ffc07bf5c17a: [x.x.x.2]
>
> UNREACHABLE: [x.x.x.31, x.x.x.1, x.x.x.28, x.x.x.252,
> x.x.x.253, x.x.x.15, x.x.x.126, x.x.x.35, x.x.x.32]
>
>
> I'd really REALLY appreciate some guidance. Did I do something wrong? Is
> there a way to fix this?
>
>
> Thanks a lot!
>
> Shalom Sagges
> DBA
>  
>  We Create Meaningful Connections
>
>
>
>
>



Re: Adding a New Node With The Same IP of an Old Node

2017-07-04 Thread Nitan Kainth
Try a rolling restart of the cluster as a solution for the schema version mismatch.


Sent from my iPhone

> On Jul 4, 2017, at 8:31 AM, Shalom Sagges  wrote:
> 
> Hi Experts, 
> 
> My plan is to upgrade the C* nodes' OS from Centos6 to Centos7. 
> Since an upgrade wasn't recommended I needed to install new machines with 
> Centos7 and join them to the cluster. 
> I didn't want to decommission/bootstrap dozens of nodes, so I decided to do 
> the following:
> Create a new machine
> Copy all data, schema data and commit logs using rsync from the old node to 
> the new one
> Stop Cassandra on the old node, copy the delta and shut the node down
> Change the hostname and IP address of the new node to the hostname and IP of 
> the old node.
> Start Cassandra on the new node. 
> My thought was that the "new" node will join the cluster as the "old" node 
> since it has all of its data and metadata, however, after Cassandra started 
> up, it saw all its peers in DN state. The same goes for the other nodes, that 
> saw the new node is DN state. 
> 
> I didn't find any errors or warnings in the logs, but I did see that the 
> schema version on the new node was different from the others (I'm assuming
> that's the issue here?) 
> 
> 
> Functioning node
> nodetool describecluster
> Cluster Information:
> Name: MyCluster
> Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
> Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
> Schema versions:
> 503bf151-c59c-35a0-8350-ca73e82098f5: [x.x.x.32, x.x.x.126, 
> x.x.x.35, x.x.x.1, x.x.x.28, x.x.x.15, x.x.x.252, x.x.x.253, x.x.x.31]
> 
> UNREACHABLE: [x.x.x.2]
> 
> 
> New node
> nodetool describecluster
> Cluster Information:
> Name: MyCluster
> Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
> Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
> Schema versions:
> dab67163-65d4-3895-82e6-ffc07bf5c17a: [x.x.x.2]
> 
> UNREACHABLE: [x.x.x.31, x.x.x.1, x.x.x.28, x.x.x.252, 
> x.x.x.253, x.x.x.15, x.x.x.126, x.x.x.35, x.x.x.32]
> 
> 
> I'd really REALLY appreciate some guidance. Did I do something wrong? Is 
> there a way to fix this? 
> 
> 
> Thanks a lot!
> 
> Shalom Sagges
> DBA
>   
> We Create Meaningful Connections
> 
>  
> 


Adding a New Node With The Same IP of an Old Node

2017-07-04 Thread Shalom Sagges
Hi Experts,

My plan is to upgrade the C* nodes' OS from Centos6 to Centos7.
Since an upgrade wasn't recommended, I needed to install new machines with
Centos7 and join them to the cluster.
I didn't want to decommission/bootstrap dozens of nodes, so I decided to do
the following (a rough sketch of the copy commands follows the list):

   - Create a new machine
   - Copy all data, schema data and commit logs using rsync from the old
   node to the new one
   - Stop Cassandra on the old node, copy the delta and shut the node down
   - Change the hostname and IP address of the new node to the hostname and
   IP of the old node.
   - Start Cassandra on the new node.
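
The copy step, roughly (a sketch only; the paths assume a default package
install, adjust for your layout, and the final pass ran after stopping
Cassandra on the old node):

  rsync -aH /var/lib/cassandra/data/       newhost:/var/lib/cassandra/data/
  rsync -aH /var/lib/cassandra/commitlog/  newhost:/var/lib/cassandra/commitlog/
  rsync -aH /etc/cassandra/                newhost:/etc/cassandra/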

My thought was that the "new" node would join the cluster as the "old" node,
since it has all of its data and metadata. However, after Cassandra started
up, it saw all its peers in DN state. The same goes for the other nodes,
which saw the new node in DN state.

I didn't find any errors or warnings in the logs, but I did see that
the schema version on the new node was different from the others (I'm
assuming that's the issue here?)


*Functioning node*
nodetool describecluster
Cluster Information:
Name: MyCluster
Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
Schema versions:
503bf151-c59c-35a0-8350-ca73e82098f5: [x.x.x.32, x.x.x.126,
x.x.x.35, x.x.x.1, x.x.x.28, x.x.x.15, x.x.x.252, x.x.x.253, x.x.x.31]

UNREACHABLE: [x.x.x.2]


*New node*
nodetool describecluster
Cluster Information:
Name: MyCluster
Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
Schema versions:
dab67163-65d4-3895-82e6-ffc07bf5c17a: [x.x.x.2]

UNREACHABLE: [x.x.x.31, x.x.x.1, x.x.x.28, x.x.x.252,
x.x.x.253, x.x.x.15, x.x.x.126, x.x.x.35, x.x.x.32]


I'd really REALLY appreciate some guidance. Did I do something wrong? Is
there a way to fix this?


Thanks a lot!

Shalom Sagges
DBA
 
 We Create Meaningful Connections



Re: Error when running nodetool cleanup after adding a new node to a cluster

2017-02-13 Thread Jerold Kinder
While you folks are solving problems, I'm looking for expertise that is sorely
needed at the Architect level and can consult and mentor. Sorry for the
interruption, but our need is getting desperate. Thanks for your patience.

Regards

On Fri, Feb 10, 2017 at 2:14 AM, Srinath Reddy <ksre...@gmail.com> wrote:

> The nodetool cleanup ran successfully after setting the CLASSPATH
> variable to the kubernetes-cassandra.jar.
>
> Thanks.
>
> On 09-Feb-2017, at 2:23 PM, Srinath Reddy <ksre...@gmail.com> wrote:
>
> Alex,
>
> Thanks for reply.  I will try the workaround and post an update.
>
> Regards,
>
> Srinath Reddy
>
> On 09-Feb-2017, at 1:44 PM, Oleksandr Shulgin <
> oleksandr.shul...@zalando.de> wrote:
>
> On Thu, Feb 9, 2017 at 6:13 AM, Srinath Reddy <ksre...@gmail.com> wrote:
>
>> Hi,
>>
>> Trying to re-balance a Cassandra cluster after adding a new node and I'm
>> getting this error when running nodetool cleanup. The Cassandra cluster
>> is running in a Kubernetes cluster.
>>
>> Cassandra version is 2.2.8
>>
>> nodetool cleanup
>> error: io.k8s.cassandra.KubernetesSeedProvider
>> Fatal configuration error; unable to start server.  See log for
>> stacktrace.
>> -- StackTrace --
>> org.apache.cassandra.exceptions.ConfigurationException:
>> io.k8s.cassandra.KubernetesSeedProvider
>> Fatal configuration error; unable to start server.  See log for
>> stacktrace.
>> at org.apache.cassandra.config.DatabaseDescriptor.applyConfig(D
>> atabaseDescriptor.java:676)
>> at org.apache.cassandra.config.DatabaseDescriptor.(Data
>> baseDescriptor.java:119)
>> at org.apache.cassandra.tools.NodeProbe.checkJobs(NodeProbe.java:256)
>> at org.apache.cassandra.tools.NodeProbe.forceKeyspaceCleanup(No
>> deProbe.java:262)
>> at org.apache.cassandra.tools.nodetool.Cleanup.execute(Cleanup.java:55)
>> at org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:244)
>> at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:158)
>>
>
> Hi,
>
> From the above stacktrace it looks like you're hitting the following TODO
> item:
>
> https://github.com/apache/cassandra/blob/98d74ed998706e9e047dc0f7886a1e
> 9b18df3ce9/src/java/org/apache/cassandra/tools/NodeProbe.java#L282
>
> That is, nodetool needs to know concurrent_compactors setting's value
> before starting cleanup, but doesn't use JMX and tries to parse the
> configuration file instead.  That fails because your custom SeedProvider
> class is not on classpath for nodetool.
>
> A workaround: make sure io.k8s.cassandra.KubernetesSeedProvider can be
> found by java when running nodetool script, see https://github.com/apache/
> cassandra/blob/98d74ed998706e9e047dc0f7886a1e9b18df3ce9/bin/nodetool#L108
>
> Proper fix: get rid of the TODO and really query the value using JMX,
> especially since the latest tick-tock release of Cassandra (3.10) added a
> way to modify it with JMX.
>
> --
> Alex
>
>
>
>


Re: Error when running nodetool cleanup after adding a new node to a cluster

2017-02-10 Thread Srinath Reddy
The nodetool cleanup ran successfully after setting the CLASSPATH variable to 
the kubernetes-cassandra.jar.
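
For anyone hitting the same thing, the workaround boiled down to something
like this (the jar path is just a placeholder, and depending on how your
cassandra.in.sh builds the classpath you may need to drop the jar into
Cassandra's lib/ directory instead):

  export CLASSPATH=/extra_lib/kubernetes-cassandra.jar
  nodetool cleanup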

Thanks.

> On 09-Feb-2017, at 2:23 PM, Srinath Reddy <ksre...@gmail.com> wrote:
> 
> Alex,
> 
> Thanks for reply.  I will try the workaround and post an update.
> 
> Regards,
> 
> Srinath Reddy
> 
>> On 09-Feb-2017, at 1:44 PM, Oleksandr Shulgin <oleksandr.shul...@zalando.de 
>> <mailto:oleksandr.shul...@zalando.de>> wrote:
>> 
>> On Thu, Feb 9, 2017 at 6:13 AM, Srinath Reddy <ksre...@gmail.com 
>> <mailto:ksre...@gmail.com>> wrote:
>> Hi,
>> 
>> Trying to re-balance a Cassandra cluster after adding a new node and I'm
>> getting this error when running nodetool cleanup. The Cassandra cluster is 
>> running in a Kubernetes cluster.
>> 
>> Cassandra version is 2.2.8
>> 
>> nodetool cleanup
>> error: io.k8s.cassandra.KubernetesSeedProvider
>> Fatal configuration error; unable to start server.  See log for stacktrace.
>> -- StackTrace --
>> org.apache.cassandra.exceptions.ConfigurationException: 
>> io.k8s.cassandra.KubernetesSeedProvider
>> Fatal configuration error; unable to start server.  See log for stacktrace.
>>  at 
>> org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:676)
>>  at 
>> org.apache.cassandra.config.DatabaseDescriptor.(DatabaseDescriptor.java:119)
>>  at org.apache.cassandra.tools.NodeProbe.checkJobs(NodeProbe.java:256)
>>  at 
>> org.apache.cassandra.tools.NodeProbe.forceKeyspaceCleanup(NodeProbe.java:262)
>>  at org.apache.cassandra.tools.nodetool.Cleanup.execute(Cleanup.java:55)
>>  at 
>> org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:244)
>>  at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:158)
>> 
>> Hi,
>> 
>> From the above stacktrace it looks like you're hitting the following TODO 
>> item:
>> 
>> https://github.com/apache/cassandra/blob/98d74ed998706e9e047dc0f7886a1e9b18df3ce9/src/java/org/apache/cassandra/tools/NodeProbe.java#L282
>>  
>> <https://github.com/apache/cassandra/blob/98d74ed998706e9e047dc0f7886a1e9b18df3ce9/src/java/org/apache/cassandra/tools/NodeProbe.java#L282>
>> 
>> That is, nodetool needs to know concurrent_compactors setting's value before 
>> starting cleanup, but doesn't use JMX and tries to parse the configuration 
>> file instead.  That fails because your custom SeedProvider class is not on 
>> classpath for nodetool.
>> 
>> A workaround: make sure io.k8s.cassandra.KubernetesSeedProvider can be found 
>> by java when running nodetool script, see 
>> https://github.com/apache/cassandra/blob/98d74ed998706e9e047dc0f7886a1e9b18df3ce9/bin/nodetool#L108
>>  
>> <https://github.com/apache/cassandra/blob/98d74ed998706e9e047dc0f7886a1e9b18df3ce9/bin/nodetool#L108>
>> 
>> Proper fix: get rid of the TODO and really query the value using JMX, 
>> especially since the latest tick-tock release of Cassandra (3.10) added a 
>> way to modify it with JMX.
>> 
>> --
>> Alex
> 





Re: Error when running nodetool cleanup after adding a new node to a cluster

2017-02-09 Thread Srinath Reddy
Alex,

Thanks for the reply.  I will try the workaround and post an update.

Regards,

Srinath Reddy

> On 09-Feb-2017, at 1:44 PM, Oleksandr Shulgin <oleksandr.shul...@zalando.de> 
> wrote:
> 
> On Thu, Feb 9, 2017 at 6:13 AM, Srinath Reddy <ksre...@gmail.com 
> <mailto:ksre...@gmail.com>> wrote:
> Hi,
> 
> Trying to re-balacne a Cassandra cluster after adding a new node and I'm 
> getting this error when running nodetool cleanup. The Cassandra cluster is 
> running in a Kubernetes cluster.
> 
> Cassandra version is 2.2.8
> 
> nodetool cleanup
> error: io.k8s.cassandra.KubernetesSeedProvider
> Fatal configuration error; unable to start server.  See log for stacktrace.
> -- StackTrace --
> org.apache.cassandra.exceptions.ConfigurationException: 
> io.k8s.cassandra.KubernetesSeedProvider
> Fatal configuration error; unable to start server.  See log for stacktrace.
>   at 
> org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:676)
>   at 
> org.apache.cassandra.config.DatabaseDescriptor.(DatabaseDescriptor.java:119)
>   at org.apache.cassandra.tools.NodeProbe.checkJobs(NodeProbe.java:256)
>   at 
> org.apache.cassandra.tools.NodeProbe.forceKeyspaceCleanup(NodeProbe.java:262)
>   at org.apache.cassandra.tools.nodetool.Cleanup.execute(Cleanup.java:55)
>   at 
> org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:244)
>   at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:158)
> 
> Hi,
> 
> From the above stacktrace it looks like you're hitting the following TODO 
> item:
> 
> https://github.com/apache/cassandra/blob/98d74ed998706e9e047dc0f7886a1e9b18df3ce9/src/java/org/apache/cassandra/tools/NodeProbe.java#L282
>  
> <https://github.com/apache/cassandra/blob/98d74ed998706e9e047dc0f7886a1e9b18df3ce9/src/java/org/apache/cassandra/tools/NodeProbe.java#L282>
> 
> That is, nodetool needs to know concurrent_compactors setting's value before 
> starting cleanup, but doesn't use JMX and tries to parse the configuration 
> file instead.  That fails because your custom SeedProvider class is not on 
> classpath for nodetool.
> 
> A workaround: make sure io.k8s.cassandra.KubernetesSeedProvider can be found 
> by java when running nodetool script, see 
> https://github.com/apache/cassandra/blob/98d74ed998706e9e047dc0f7886a1e9b18df3ce9/bin/nodetool#L108
>  
> <https://github.com/apache/cassandra/blob/98d74ed998706e9e047dc0f7886a1e9b18df3ce9/bin/nodetool#L108>
> 
> Proper fix: get rid of the TODO and really query the value using JMX, 
> especially since the latest tick-tock release of Cassandra (3.10) added a way 
> to modify it with JMX.
> 
> --
> Alex





Re: Error when running nodetool cleanup after adding a new node to a cluster

2017-02-09 Thread Oleksandr Shulgin
On Thu, Feb 9, 2017 at 6:13 AM, Srinath Reddy <ksre...@gmail.com> wrote:

> Hi,
>
> Trying to re-balacne a Cassandra cluster after adding a new node and I'm
> getting this error when running nodetool cleanup. The Cassandra cluster
> is running in a Kubernetes cluster.
>
> Cassandra version is 2.2.8
>
> nodetool cleanup
> error: io.k8s.cassandra.KubernetesSeedProvider
> Fatal configuration error; unable to start server.  See log for stacktrace.
> -- StackTrace --
> org.apache.cassandra.exceptions.ConfigurationException: io.k8s.cassandra.
> KubernetesSeedProvider
> Fatal configuration error; unable to start server.  See log for stacktrace.
> at org.apache.cassandra.config.DatabaseDescriptor.applyConfig(
> DatabaseDescriptor.java:676)
> at org.apache.cassandra.config.DatabaseDescriptor.(
> DatabaseDescriptor.java:119)
> at org.apache.cassandra.tools.NodeProbe.checkJobs(NodeProbe.java:256)
> at org.apache.cassandra.tools.NodeProbe.forceKeyspaceCleanup(
> NodeProbe.java:262)
> at org.apache.cassandra.tools.nodetool.Cleanup.execute(Cleanup.java:55)
> at org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:244)
> at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:158)
>

Hi,

From the above stacktrace it looks like you're hitting the following TODO
item:

https://github.com/apache/cassandra/blob/98d74ed998706e9e047dc0f7886a1e9b18df3ce9/src/java/org/apache/cassandra/tools/NodeProbe.java#L282

That is, nodetool needs to know concurrent_compactors setting's value
before starting cleanup, but doesn't use JMX and tries to parse the
configuration file instead.  That fails because your custom SeedProvider
class is not on classpath for nodetool.

A workaround: make sure io.k8s.cassandra.KubernetesSeedProvider can be
found by java when running nodetool script, see
https://github.com/apache/cassandra/blob/98d74ed998706e9e047dc0f7886a1e9b18df3ce9/bin/nodetool#L108

Proper fix: get rid of the TODO and really query the value using JMX,
especially since the latest tick-tock release of Cassandra (3.10) added a
way to modify it with JMX.

--
Alex


Re: Error when running nodetool cleanup after adding a new node to a cluster

2017-02-08 Thread Srinath Reddy
Yes, I ran the nodetool cleanup on the other nodes and got the error.

Thanks.

> On 09-Feb-2017, at 11:12 AM, Harikrishnan Pillai <hpil...@walmartlabs.com> 
> wrote:
> 
> The cleanup has to run on other nodes
> 
> Sent from my iPhone
> 
> On Feb 8, 2017, at 9:14 PM, Srinath Reddy <ksre...@gmail.com 
> <mailto:ksre...@gmail.com>> wrote:
> 
>> Hi,
>> 
>> Trying to re-balacne a Cassandra cluster after adding a new node and I'm 
>> getting this error when running nodetool cleanup. The Cassandra cluster is 
>> running in a Kubernetes cluster.
>> 
>> Cassandra version is 2.2.8
>> 
>> nodetool cleanup
>> error: io.k8s.cassandra.KubernetesSeedProvider
>> Fatal configuration error; unable to start server.  See log for stacktrace.
>> -- StackTrace --
>> org.apache.cassandra.exceptions.ConfigurationException: 
>> io.k8s.cassandra.KubernetesSeedProvider
>> Fatal configuration error; unable to start server.  See log for stacktrace.
>> at 
>> org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:676)
>> at 
>> org.apache.cassandra.config.DatabaseDescriptor.(DatabaseDescriptor.java:119)
>> at org.apache.cassandra.tools.NodeProbe.checkJobs(NodeProbe.java:256)
>> at 
>> org.apache.cassandra.tools.NodeProbe.forceKeyspaceCleanup(NodeProbe.java:262)
>> at org.apache.cassandra.tools.nodetool.Cleanup.execute(Cleanup.java:55)
>> at org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:244)
>> at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:158)
>> 
>> nodetool status
>> Datacenter: datacenter1
>> ===
>> Status=Up/Down
>> |/ State=Normal/Leaving/Joining/Moving
>> --  Address  Load   Tokens   Owns (effective)  Host ID   
>> Rack
>> UN  10.244.3.4   6.91 GB256  60.8% 
>> bad1c6c6-8c2e-4f0c-9aea-0d63b451e7a1  rack1
>> UN  10.244.0.3   6.22 GB256  60.2% 
>> 936cb0c0-d14f-4ddd-bfde-3865b922e267  rack1
>> UN  10.244.1.3   6.12 GB256  59.4% 
>> 0cb43711-b155-449c-83ba-00ed2a97affe  rack1
>> UN  10.244.4.3   632.43 MB  256  57.8% 
>> 55095c75-26df-4180-9004-9fabf88faacc  rack1
>> UN  10.244.2.10  6.08 GB256  61.8% 
>> 32e32bd2-364f-4b6f-b13a-8814164ed160  rack1
>> 
>> 
>> Any suggestions on what is needed to re-balance the cluster after adding the 
>> new node? I have run nodetool repair but not able to run nodetool cleanup.
>> 
>> Thanks.
>> 
>> 





Re: Error when running nodetool cleanup after adding a new node to a cluster

2017-02-08 Thread Harikrishnan Pillai
The cleanup has to be run on the other nodes.

Sent from my iPhone
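
For completeness, a rough sketch of running cleanup across the pre-existing
nodes in a Kubernetes deployment (the pod names are hypothetical
StatefulSet-style names, not taken from this thread; substitute your own and
skip the newly added node):

for pod in cassandra-0 cassandra-1 cassandra-2 cassandra-3; do
  kubectl exec "$pod" -- nodetool cleanup
done

Cleanup only needs to run on nodes that lost token ranges to the new node; the
new node itself can be left alone.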

On Feb 8, 2017, at 9:14 PM, Srinath Reddy 
<ksre...@gmail.com<mailto:ksre...@gmail.com>> wrote:

Hi,

Trying to re-balance a Cassandra cluster after adding a new node, and I'm 
getting this error when running nodetool cleanup. The Cassandra cluster is 
running in a Kubernetes cluster.

Cassandra version is 2.2.8

nodetool cleanup
error: io.k8s.cassandra.KubernetesSeedProvider
Fatal configuration error; unable to start server.  See log for stacktrace.
-- StackTrace --
org.apache.cassandra.exceptions.ConfigurationException: 
io.k8s.cassandra.KubernetesSeedProvider
Fatal configuration error; unable to start server.  See log for stacktrace.
at 
org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:676)
at 
org.apache.cassandra.config.DatabaseDescriptor.(DatabaseDescriptor.java:119)
at org.apache.cassandra.tools.NodeProbe.checkJobs(NodeProbe.java:256)
at org.apache.cassandra.tools.NodeProbe.forceKeyspaceCleanup(NodeProbe.java:262)
at org.apache.cassandra.tools.nodetool.Cleanup.execute(Cleanup.java:55)
at org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:244)
at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:158)

nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address      Load       Tokens  Owns (effective)  Host ID                               Rack
UN  10.244.3.4   6.91 GB    256     60.8%             bad1c6c6-8c2e-4f0c-9aea-0d63b451e7a1  rack1
UN  10.244.0.3   6.22 GB    256     60.2%             936cb0c0-d14f-4ddd-bfde-3865b922e267  rack1
UN  10.244.1.3   6.12 GB    256     59.4%             0cb43711-b155-449c-83ba-00ed2a97affe  rack1
UN  10.244.4.3   632.43 MB  256     57.8%             55095c75-26df-4180-9004-9fabf88faacc  rack1
UN  10.244.2.10  6.08 GB    256     61.8%             32e32bd2-364f-4b6f-b13a-8814164ed160  rack1


Any suggestions on what is needed to re-balance the cluster after adding the 
new node? I have run nodetool repair but am not able to run nodetool cleanup.

Thanks.




Error when running nodetool cleanup after adding a new node to a cluster

2017-02-08 Thread Srinath Reddy
Hi,

Trying to re-balance a Cassandra cluster after adding a new node, and I'm 
getting this error when running nodetool cleanup. The Cassandra cluster is 
running in a Kubernetes cluster.

Cassandra version is 2.2.8

nodetool cleanup
error: io.k8s.cassandra.KubernetesSeedProvider
Fatal configuration error; unable to start server.  See log for stacktrace.
-- StackTrace --
org.apache.cassandra.exceptions.ConfigurationException: 
io.k8s.cassandra.KubernetesSeedProvider
Fatal configuration error; unable to start server.  See log for stacktrace.
at 
org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:676)
at 
org.apache.cassandra.config.DatabaseDescriptor.(DatabaseDescriptor.java:119)
at org.apache.cassandra.tools.NodeProbe.checkJobs(NodeProbe.java:256)
at 
org.apache.cassandra.tools.NodeProbe.forceKeyspaceCleanup(NodeProbe.java:262)
at org.apache.cassandra.tools.nodetool.Cleanup.execute(Cleanup.java:55)
at 
org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:244)
at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:158)

nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address      Load       Tokens  Owns (effective)  Host ID                               Rack
UN  10.244.3.4   6.91 GB    256     60.8%             bad1c6c6-8c2e-4f0c-9aea-0d63b451e7a1  rack1
UN  10.244.0.3   6.22 GB    256     60.2%             936cb0c0-d14f-4ddd-bfde-3865b922e267  rack1
UN  10.244.1.3   6.12 GB    256     59.4%             0cb43711-b155-449c-83ba-00ed2a97affe  rack1
UN  10.244.4.3   632.43 MB  256     57.8%             55095c75-26df-4180-9004-9fabf88faacc  rack1
UN  10.244.2.10  6.08 GB    256     61.8%             32e32bd2-364f-4b6f-b13a-8814164ed160  rack1


Any suggestions on what is needed to re-balance the cluster after adding the 
new node? I have run nodetool repair but am not able to run nodetool cleanup.

Thanks.






Re: Read operations freeze for a few seconds while adding a new node

2016-01-28 Thread Jeff Jirsa
Is this during streaming plan setup (is your 10-20 second time of impact 
approximately 30 seconds from the time you start the node that’s joining the 
ring), or does it happen for the entire time you’re joining the node to the 
ring?

If so, there’s a chance it’s GC related – the streaming plan code used to 
instantiate ALL of the compression metadata chunks in order to calculate the 
plan, which creates a fair amount of garbage and therefore some GC activity. 
https://issues.apache.org/jira/browse/CASSANDRA-10680 was created due to some 
edge cases (very small compression chunk size + 3T of data per node = hundreds 
of millions of objects), but it’s possible that you’re seeing a less-extreme 
version of that same behavior.
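
A quick way to check whether the impact window lines up with GC is to watch the
GCInspector lines in the Cassandra system log while the new node is joining (the
log path is an assumption; adjust it to your install):

grep GCInspector /var/log/cassandra/system.log | tail -20

Long ParNew/CMS pauses that coincide with the start of the join would point at
the streaming-plan garbage described above.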



From:  Lorand Kasler
Reply-To:  "user@cassandra.apache.org"
Date:  Thursday, January 28, 2016 at 8:11 AM
To:  "user@cassandra.apache.org"
Subject:  Read operations freeze for a few seconds while adding a new node

Hi, 

We are struggling with a problem that when adding nodes around 5% read 
operations freeze (aka time out after 1 second) for a few seconds (10-20 
seconds). It might not seems much, but at the order of 200k requests per second 
that's quite big of disruption.  It is well documented and known that adding 
nodes *has* impact on the latency or the completion of the requests but is 
there a way to lessen that? 
It is completely okay for write operations to fail or get blocked while adding 
nodes, but having the read path also impacted by this much (going from 30 
millisecond 99 percentile latency to above 1 second) is what puzzles us.

We have a 36 node cluster, every node owning ~120 GB of data. We are using 
Cassandra version 2.0.14 with vnodes and we are in the process of increasing 
capacity of the cluster, by roughly doubling the nodes.  They have SSDs and 
have peak IO usage of ~30%. 

Apart from the latency metrics only FlushWrites are blocked 18% of the time 
(based on the tpstats counters), but that can only lead to blocking writes and 
not reads? 

Thank you 





Re: Read operations freeze for a few seconds while adding a new node

2016-01-28 Thread Anuj Wadehra
Hi Lorand,
Do you see a different GC pattern during those 20 seconds?
In 2.0.x, memtables create a lot of heap pressure, so in a way reads are not
isolated from writes.
Frankly speaking, I would have accepted 20 seconds of slowness, as scaling is a
one-time activity. But maybe your business case doesn't make that acceptable.
Such tough requirements often drive improvements...

Thanks
Anuj

Sent from Yahoo Mail on Android 
 
  On Thu, 28 Jan, 2016 at 9:41 pm, Lorand Kasler 
wrote:   Hi,
We are struggling with a problem that when adding nodes around 5% read 
operations freeze (aka time out after 1 second) for a few seconds (10-20 
seconds). It might not seems much, but at the order of 200k requests per second 
that's quite big of disruption.  It is well documented and known that adding 
nodes *has* impact on the latency or the completion of the requests but is 
there a way to lessen that? It is completely okay for write operations to fail 
or get blocked while adding nodes, but having the read path also impacted by 
this much (going from 30 millisecond 99 percentile latency to above 1 second) 
is what puzzles us.
We have a 36 node cluster, every node owning ~120 GB of data. We are using 
Cassandra version 2.0.14 with vnodes and we are in the process of increasing 
capacity of the cluster, by roughly doubling the nodes.  They have SSDs and 
have peak IO usage of ~30%. 
Apart from the latency metrics only FlushWrites are blocked 18% of the time 
(based on the tpstats counters), but that can only lead to blocking writes and 
not reads? 
Thank you   


Re: Read operations freeze for a few seconds while adding a new node

2016-01-28 Thread Jonathan Haddad
If you've got a read heavy workload you should check out
http://blakeeggleston.com/cassandra-tuning-the-jvm-for-read-heavy-workloads.html



On Thu, Jan 28, 2016 at 8:11 AM Lorand Kasler 
wrote:

> Hi,
>
> We are struggling with a problem that when adding nodes around 5% read
> operations freeze (aka time out after 1 second) for a few seconds (10-20
> seconds). It might not seems much, but at the order of 200k requests per
> second that's quite big of disruption.  It is well documented and known
> that adding nodes *has* impact on the latency or the completion of the
> requests but is there a way to lessen that?
> It is completely okay for write operations to fail or get blocked while
> adding nodes, but having the read path also impacted by this much (going
> from 30 millisecond 99 percentile latency to above 1 second) is what
> puzzles us.
>
> We have a 36 node cluster, every node owning ~120 GB of data. We are using
> Cassandra version 2.0.14 with vnodes and we are in the process of
> increasing capacity of the cluster, by roughly doubling the nodes.  They
> have SSDs and have peak IO usage of ~30%.
>
> Apart from the latency metrics only FlushWrites are blocked 18% of the
> time (based on the tpstats counters), but that can only lead to blocking
> writes and not reads?
>
> Thank you
>


Read operations freeze for a few seconds while adding a new node

2016-01-28 Thread Lorand Kasler
Hi,

We are struggling with a problem where, while adding nodes, around 5% of read
operations freeze (i.e. time out after 1 second) for a few seconds (10-20
seconds). It might not seem like much, but at the order of 200k requests per
second that's quite a big disruption.  It is well documented and known
that adding nodes *has* an impact on the latency and the completion of
requests, but is there a way to lessen that?
It is completely okay for write operations to fail or get blocked while
adding nodes, but having the read path impacted this much (going
from a 30 millisecond 99th percentile latency to above 1 second) is what
puzzles us.

We have a 36 node cluster, every node owning ~120 GB of data. We are using
Cassandra version 2.0.14 with vnodes and we are in the process of
increasing capacity of the cluster, by roughly doubling the nodes.  They
have SSDs and have peak IO usage of ~30%.

Apart from the latency metrics, only the FlushWriter pool shows blocked tasks,
18% of the time (based on the tpstats counters), but shouldn't that only block
writes and not reads?

Thank you


Re: Error while adding a new node.

2015-07-02 Thread Neha Trivedi
any help?

On Thu, Jul 2, 2015 at 6:18 AM, Neha Trivedi nehajtriv...@gmail.com wrote:

 also:
 root@cas03:~# sudo service cassandra start
 root@cas03:~# lsof -n | grep java | wc -l
 5315
 root@cas03:~# lsof -n | grep java | wc -l
 977317
 root@cas03:~# lsof -n | grep java | wc -l
 880240
 root@cas03:~# lsof -n | grep java | wc -l
 882402


 On Wed, Jul 1, 2015 at 6:31 PM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 One of the column family has SStable count as under :
 SSTable count: 98506

 Can it be because of 2.1.3 version of cassandra..
 I found this : https://issues.apache.org/jira/browse/CASSANDRA-8964

 regards
 Neha


 On Wed, Jul 1, 2015 at 5:40 PM, Jason Wee peich...@gmail.com wrote:

 nodetool cfstats?

 On Wed, Jul 1, 2015 at 8:08 PM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 Hey..
 nodetool compactionstats
 pending tasks: 0

 no pending tasks.

 Dont have opscenter. how do I monitor sstables?


 On Wed, Jul 1, 2015 at 4:28 PM, Alain RODRIGUEZ arodr...@gmail.com
 wrote:

 You also might want to check if you have compactions pending
 (Opscenter / nodetool compactionstats).

 Also you can monitor the number of sstables.

 C*heers

 Alain

 2015-07-01 11:53 GMT+02:00 Neha Trivedi nehajtriv...@gmail.com:

 Thanks I will checkout.
 I increased the ulimit to 10, but I am getting the same error,
 but after a while.
 regards
 Neha


 On Wed, Jul 1, 2015 at 2:22 PM, Alain RODRIGUEZ arodr...@gmail.com
 wrote:

 Just check the process owner to be sure (top, htop, ps, ...)


 http://docs.datastax.com/en/cassandra/2.0/cassandra/install/installRecommendSettings.html#reference_ds_sxl_gf3_2k__user-resource-limits

 C*heers,

 Alain

 2015-07-01 7:33 GMT+02:00 Neha Trivedi nehajtriv...@gmail.com:

 Arun,
 I am logging on to Server as root and running (sudo service
 cassandra start)

 regards
 Neha

 On Wed, Jul 1, 2015 at 11:00 AM, Neha Trivedi 
 nehajtriv...@gmail.com wrote:

 Thanks Arun ! I will try and get back !

 On Wed, Jul 1, 2015 at 10:32 AM, Arun arunsi...@gmail.com wrote:

 Looks like you have too many open files issue. Increase the
 ulimit for the user.

  If you are starting the cassandra daemon using user cassandra,
 increase the ulimit for that user.


  On Jun 30, 2015, at 21:16, Neha Trivedi nehajtriv...@gmail.com
 wrote:
 
  Hello,
  I have a 4 node cluster with SimpleSnitch.
  Cassandra :  Cassandra 2.1.3
 
  I am trying to add a new node (cassandra 2.1.7) and I get the
 following error.
 
  ERROR [STREAM-IN-] 2015-06-30 05:13:48,516
 JVMStabilityInspector.java:94 - JVM state determined to be unstable.
 Exiting forcefully due to:
  java.io.FileNotFoundException:
 /var/lib/cassandra/data/-Index.db (Too many open files)
 
  I increased the MAX_HEAP_SIZE then I get :
  ERROR [CompactionExecutor:9] 2015-06-30 23:31:44,792
 CassandraDaemon.java:223 - Exception in thread
 Thread[CompactionExecutor:9,1,main]
  java.lang.RuntimeException: java.io.FileNotFoundException:
 /var/lib/cassandra/data/-Data.db (Too many open files)
  at
 org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 
  Is it because of the different version of Cassandra (2.1.3 and
 2.17) ?
 
  regards
  N
 
 
 
 
 
 
 













Re: [MASSMAIL]Re: Error while adding a new node.

2015-07-02 Thread Neha Trivedi
Thanks for the reply!
I will update to 2.1.7 and check it out.

On Thu, Jul 2, 2015 at 6:59 PM, Carlos Rolo r...@pythian.com wrote:

 Marco you should also avoid 2.1.5 and 2.1.6 because of
 https://issues.apache.org/jira/browse/CASSANDRA-9549

 I know (And often don't recommend last versions, I'm still recommending
 2.0.x series unless someone is already in 2.1.x) but given the above bug,
 2.1.7 is the best option.

 Regards,

 Carlos Juzarte Rolo
 Cassandra Consultant

 Pythian - Love your data

 rolo@pythian | Twitter: cjrolo | Linkedin: *linkedin.com/in/carlosjuzarterolo
 http://linkedin.com/in/carlosjuzarterolo*
 Mobile: +31 6 159 61 814 | Tel: +1 613 565 8696 x1649
 www.pythian.com

 On Thu, Jul 2, 2015 at 3:20 PM, Marcos Ortiz mlor...@uci.cu wrote:

  The recommended version to use is 2.1.5 because, like you Carlos said,
 2.1.6 and 2.1.7 are very new to consider them like
 stable.

 On 02/07/15 08:55, Carlos Rolo wrote:

  Indeed you should upgrade to 2.1.7.

  And then report if you are still facing problems. Versions up to 2.1.5
 (in the 2.1.x series) are not considered stable.

Regards,

  Carlos Juzarte Rolo
 Cassandra Consultant

 Pythian - Love your data

  rolo@pythian | Twitter: cjrolo | Linkedin: 
 *linkedin.com/in/carlosjuzarterolo
 http://linkedin.com/in/carlosjuzarterolo*
 Mobile: +31 6 159 61 814 | Tel: +1 613 565 8696 x1649
 www.pythian.com

 On Thu, Jul 2, 2015 at 11:40 AM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 any help?

 On Thu, Jul 2, 2015 at 6:18 AM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 also:
 root@cas03:~# sudo service cassandra start
 root@cas03:~# lsof -n | grep java | wc -l
 5315
 root@cas03:~# lsof -n | grep java | wc -l
 977317
 root@cas03:~# lsof -n | grep java | wc -l
 880240
 root@cas03:~# lsof -n | grep java | wc -l
 882402


 On Wed, Jul 1, 2015 at 6:31 PM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

   One of the column family has SStable count as under :
 SSTable count: 98506

  Can it be because of 2.1.3 version of cassandra..
  I found this : https://issues.apache.org/jira/browse/CASSANDRA-8964

  regards
  Neha


 On Wed, Jul 1, 2015 at 5:40 PM, Jason Wee peich...@gmail.com wrote:

 nodetool cfstats?

 On Wed, Jul 1, 2015 at 8:08 PM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

  Hey..
 nodetool compactionstats
 pending tasks: 0

  no pending tasks.

  Dont have opscenter. how do I monitor sstables?


 On Wed, Jul 1, 2015 at 4:28 PM, Alain RODRIGUEZ arodr...@gmail.com
 wrote:

 You also might want to check if you have compactions pending
 (Opscenter / nodetool compactionstats).

  Also you can monitor the number of sstables.

  C*heers

  Alain

 2015-07-01 11:53 GMT+02:00 Neha Trivedi nehajtriv...@gmail.com:

   Thanks I will checkout.
  I increased the ulimit to 10, but I am getting the same
 error, but after a while.
  regards
  Neha


 On Wed, Jul 1, 2015 at 2:22 PM, Alain RODRIGUEZ 
 arodr...@gmail.com wrote:

  Just check the process owner to be sure (top, htop, ps, ...)


 http://docs.datastax.com/en/cassandra/2.0/cassandra/install/installRecommendSettings.html#reference_ds_sxl_gf3_2k__user-resource-limits

  C*heers,

  Alain

 2015-07-01 7:33 GMT+02:00 Neha Trivedi nehajtriv...@gmail.com:

   Arun,
  I am logging on to Server as root and running (sudo service
 cassandra start)

  regards
  Neha

 On Wed, Jul 1, 2015 at 11:00 AM, Neha Trivedi 
 nehajtriv...@gmail.com wrote:

 Thanks Arun ! I will try and get back !

 On Wed, Jul 1, 2015 at 10:32 AM, Arun arunsi...@gmail.com
 wrote:

 Looks like you have too many open files issue. Increase the
 ulimit for the user.

  If you are starting the cassandra daemon using user
 cassandra, increase the ulimit for that user.


  On Jun 30, 2015, at 21:16, Neha Trivedi 
 nehajtriv...@gmail.com wrote:
 
  Hello,
  I have a 4 node cluster with SimpleSnitch.
  Cassandra :  Cassandra 2.1.3
 
  I am trying to add a new node (cassandra 2.1.7) and I get
 the following error.
 
  ERROR [STREAM-IN-] 2015-06-30 05:13:48,516
 JVMStabilityInspector.java:94 - JVM state determined to be 
 unstable.
 Exiting forcefully due to:
  java.io.FileNotFoundException:
 /var/lib/cassandra/data/-Index.db (Too many open files)
 
  I increased the MAX_HEAP_SIZE then I get :
  ERROR [CompactionExecutor:9] 2015-06-30 23:31:44,792
 CassandraDaemon.java:223 - Exception in thread
 Thread[CompactionExecutor:9,1,main]
  java.lang.RuntimeException: java.io.FileNotFoundException:
 /var/lib/cassandra/data/-Data.db (Too many open files)
  at
 org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 
  Is it because of the different version of Cassandra (2.1.3
 and 2.17) ?
 
  regards
  N
 
 
 
 
 
 
 













 --




 --
 Marcos Ortiz http://about.me/marcosortiz, Sr. Product Manager (Data
 Infrastructure) at UCI
 @marcosluis2186 http://twitter.com/marcosluis2186




 --






Re: Error while adding a new node.

2015-07-02 Thread Carlos Rolo
Indeed, you should upgrade to 2.1.7.

Then report back if you are still facing problems. Versions up to 2.1.5 (in
the 2.1.x series) are not considered stable.

Regards,

Carlos Juzarte Rolo
Cassandra Consultant

Pythian - Love your data

rolo@pythian | Twitter: cjrolo | Linkedin: *linkedin.com/in/carlosjuzarterolo
http://linkedin.com/in/carlosjuzarterolo*
Mobile: +31 6 159 61 814 | Tel: +1 613 565 8696 x1649
www.pythian.com

On Thu, Jul 2, 2015 at 11:40 AM, Neha Trivedi nehajtriv...@gmail.com
wrote:

 any help?

 On Thu, Jul 2, 2015 at 6:18 AM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 also:
 root@cas03:~# sudo service cassandra start
 root@cas03:~# lsof -n | grep java | wc -l
 5315
 root@cas03:~# lsof -n | grep java | wc -l
 977317
 root@cas03:~# lsof -n | grep java | wc -l
 880240
 root@cas03:~# lsof -n | grep java | wc -l
 882402


 On Wed, Jul 1, 2015 at 6:31 PM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 One of the column family has SStable count as under :
 SSTable count: 98506

 Can it be because of 2.1.3 version of cassandra..
 I found this : https://issues.apache.org/jira/browse/CASSANDRA-8964

 regards
 Neha


 On Wed, Jul 1, 2015 at 5:40 PM, Jason Wee peich...@gmail.com wrote:

 nodetool cfstats?

 On Wed, Jul 1, 2015 at 8:08 PM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 Hey..
 nodetool compactionstats
 pending tasks: 0

 no pending tasks.

 Dont have opscenter. how do I monitor sstables?


 On Wed, Jul 1, 2015 at 4:28 PM, Alain RODRIGUEZ arodr...@gmail.com
 wrote:

 You also might want to check if you have compactions pending
 (Opscenter / nodetool compactionstats).

 Also you can monitor the number of sstables.

 C*heers

 Alain

 2015-07-01 11:53 GMT+02:00 Neha Trivedi nehajtriv...@gmail.com:

 Thanks I will checkout.
 I increased the ulimit to 10, but I am getting the same error,
 but after a while.
 regards
 Neha


 On Wed, Jul 1, 2015 at 2:22 PM, Alain RODRIGUEZ arodr...@gmail.com
 wrote:

 Just check the process owner to be sure (top, htop, ps, ...)


 http://docs.datastax.com/en/cassandra/2.0/cassandra/install/installRecommendSettings.html#reference_ds_sxl_gf3_2k__user-resource-limits

 C*heers,

 Alain

 2015-07-01 7:33 GMT+02:00 Neha Trivedi nehajtriv...@gmail.com:

 Arun,
 I am logging on to Server as root and running (sudo service
 cassandra start)

 regards
 Neha

 On Wed, Jul 1, 2015 at 11:00 AM, Neha Trivedi 
 nehajtriv...@gmail.com wrote:

 Thanks Arun ! I will try and get back !

 On Wed, Jul 1, 2015 at 10:32 AM, Arun arunsi...@gmail.com
 wrote:

 Looks like you have too many open files issue. Increase the
 ulimit for the user.

  If you are starting the cassandra daemon using user cassandra,
 increase the ulimit for that user.


  On Jun 30, 2015, at 21:16, Neha Trivedi 
 nehajtriv...@gmail.com wrote:
 
  Hello,
  I have a 4 node cluster with SimpleSnitch.
  Cassandra :  Cassandra 2.1.3
 
  I am trying to add a new node (cassandra 2.1.7) and I get the
 following error.
 
  ERROR [STREAM-IN-] 2015-06-30 05:13:48,516
 JVMStabilityInspector.java:94 - JVM state determined to be unstable.
 Exiting forcefully due to:
  java.io.FileNotFoundException:
 /var/lib/cassandra/data/-Index.db (Too many open files)
 
  I increased the MAX_HEAP_SIZE then I get :
  ERROR [CompactionExecutor:9] 2015-06-30 23:31:44,792
 CassandraDaemon.java:223 - Exception in thread
 Thread[CompactionExecutor:9,1,main]
  java.lang.RuntimeException: java.io.FileNotFoundException:
 /var/lib/cassandra/data/-Data.db (Too many open files)
  at
 org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 
  Is it because of the different version of Cassandra (2.1.3 and
 2.17) ?
 
  regards
  N
 
 
 
 
 
 
 













-- 


--





Re: [MASSMAIL]Re: Error while adding a new node.

2015-07-02 Thread Marcos Ortiz
The recommended version to use is 2.1.5 because, as you said Carlos, 2.1.6 and
2.1.7 are too new to be considered stable.

On 02/07/15 08:55, Carlos Rolo wrote:

Indeed you should upgrade to 2.1.7.

And then report if you are still facing problems. Versions up to 2.1.5 
(in the 2.1.x series) are not considered stable.


Regards,

Carlos Juzarte Rolo
Cassandra Consultant
Pythian - Love your data

rolo@pythian | Twitter: cjrolo | Linkedin: 
_linkedin.com/in/carlosjuzarterolo 
http://linkedin.com/in/carlosjuzarterolo_

Mobile: +31 6 159 61 814 | Tel: +1 613 565 8696 x1649
www.pythian.com http://www.pythian.com/

On Thu, Jul 2, 2015 at 11:40 AM, Neha Trivedi nehajtriv...@gmail.com 
mailto:nehajtriv...@gmail.com wrote:


any help?

On Thu, Jul 2, 2015 at 6:18 AM, Neha Trivedi
nehajtriv...@gmail.com mailto:nehajtriv...@gmail.com wrote:

also:
root@cas03:~# sudo service cassandra start
root@cas03:~# lsof -n | grep java | wc -l
5315
root@cas03:~# lsof -n | grep java | wc -l
977317
root@cas03:~# lsof -n | grep java | wc -l
880240
root@cas03:~# lsof -n | grep java | wc -l
882402


On Wed, Jul 1, 2015 at 6:31 PM, Neha Trivedi
nehajtriv...@gmail.com mailto:nehajtriv...@gmail.com wrote:

One of the column family has SStable count as under :
SSTable count: 98506

Can it be because of 2.1.3 version of cassandra..
I found this :
https://issues.apache.org/jira/browse/CASSANDRA-8964

regards
Neha


On Wed, Jul 1, 2015 at 5:40 PM, Jason Wee
peich...@gmail.com mailto:peich...@gmail.com wrote:

nodetool cfstats?

On Wed, Jul 1, 2015 at 8:08 PM, Neha Trivedi
nehajtriv...@gmail.com
mailto:nehajtriv...@gmail.com wrote:

Hey..
nodetool compactionstats
pending tasks: 0

no pending tasks.

Dont have opscenter. how do I monitor sstables?


On Wed, Jul 1, 2015 at 4:28 PM, Alain RODRIGUEZ
arodr...@gmail.com mailto:arodr...@gmail.com
wrote:

You also might want to check if you have
compactions pending (Opscenter / nodetool
compactionstats).

Also you can monitor the number of sstables.

C*heers

Alain

2015-07-01 11:53 GMT+02:00 Neha Trivedi
nehajtriv...@gmail.com
mailto:nehajtriv...@gmail.com:

Thanks I will checkout.
I increased the ulimit to 10, but I am
getting the same error, but after a while.
regards
Neha


On Wed, Jul 1, 2015 at 2:22 PM, Alain
RODRIGUEZ arodr...@gmail.com
mailto:arodr...@gmail.com wrote:

Just check the process owner to be
sure (top, htop, ps, ...)


http://docs.datastax.com/en/cassandra/2.0/cassandra/install/installRecommendSettings.html#reference_ds_sxl_gf3_2k__user-resource-limits

C*heers,

Alain

2015-07-01 7:33 GMT+02:00 Neha Trivedi
nehajtriv...@gmail.com
mailto:nehajtriv...@gmail.com:

Arun,
I am logging on to Server as root
and running (sudo service
cassandra start)

regards
Neha

On Wed, Jul 1, 2015 at 11:00 AM,
Neha Trivedi
nehajtriv...@gmail.com
mailto:nehajtriv...@gmail.com
wrote:

Thanks Arun ! I will try and
get back !

On Wed, Jul 1, 2015 at 10:32
AM, Arun arunsi...@gmail.com
mailto:arunsi...@gmail.com
wrote:

Looks like you have too
many open files issue.
Increase the ulimit for

Re: [MASSMAIL]Re: Error while adding a new node.

2015-07-02 Thread Carlos Rolo
Marcos, you should also avoid 2.1.5 and 2.1.6 because of
https://issues.apache.org/jira/browse/CASSANDRA-9549

I know (and I often don't recommend the latest versions; I'm still recommending
the 2.0.x series unless someone is already on 2.1.x), but given the above bug,
2.1.7 is the best option.

Regards,

Carlos Juzarte Rolo
Cassandra Consultant

Pythian - Love your data

rolo@pythian | Twitter: cjrolo | Linkedin: *linkedin.com/in/carlosjuzarterolo
http://linkedin.com/in/carlosjuzarterolo*
Mobile: +31 6 159 61 814 | Tel: +1 613 565 8696 x1649
www.pythian.com

On Thu, Jul 2, 2015 at 3:20 PM, Marcos Ortiz mlor...@uci.cu wrote:

  The recommended version to use is 2.1.5 because, like you Carlos said,
 2.1.6 and 2.1.7 are very new to consider them like
 stable.

 On 02/07/15 08:55, Carlos Rolo wrote:

  Indeed you should upgrade to 2.1.7.

  And then report if you are still facing problems. Versions up to 2.1.5
 (in the 2.1.x series) are not considered stable.

Regards,

  Carlos Juzarte Rolo
 Cassandra Consultant

 Pythian - Love your data

  rolo@pythian | Twitter: cjrolo | Linkedin: *linkedin.com/in/carlosjuzarterolo
 http://linkedin.com/in/carlosjuzarterolo*
 Mobile: +31 6 159 61 814 | Tel: +1 613 565 8696 x1649
 www.pythian.com

 On Thu, Jul 2, 2015 at 11:40 AM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 any help?

 On Thu, Jul 2, 2015 at 6:18 AM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 also:
 root@cas03:~# sudo service cassandra start
 root@cas03:~# lsof -n | grep java | wc -l
 5315
 root@cas03:~# lsof -n | grep java | wc -l
 977317
 root@cas03:~# lsof -n | grep java | wc -l
 880240
 root@cas03:~# lsof -n | grep java | wc -l
 882402


 On Wed, Jul 1, 2015 at 6:31 PM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

   One of the column family has SStable count as under :
 SSTable count: 98506

  Can it be because of 2.1.3 version of cassandra..
  I found this : https://issues.apache.org/jira/browse/CASSANDRA-8964

  regards
  Neha


 On Wed, Jul 1, 2015 at 5:40 PM, Jason Wee peich...@gmail.com wrote:

 nodetool cfstats?

 On Wed, Jul 1, 2015 at 8:08 PM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

  Hey..
 nodetool compactionstats
 pending tasks: 0

  no pending tasks.

  Dont have opscenter. how do I monitor sstables?


 On Wed, Jul 1, 2015 at 4:28 PM, Alain RODRIGUEZ arodr...@gmail.com
 wrote:

 You also might want to check if you have compactions pending
 (Opscenter / nodetool compactionstats).

  Also you can monitor the number of sstables.

  C*heers

  Alain

 2015-07-01 11:53 GMT+02:00 Neha Trivedi nehajtriv...@gmail.com:

   Thanks I will checkout.
  I increased the ulimit to 10, but I am getting the same error,
 but after a while.
  regards
  Neha


 On Wed, Jul 1, 2015 at 2:22 PM, Alain RODRIGUEZ arodr...@gmail.com
  wrote:

  Just check the process owner to be sure (top, htop, ps, ...)


 http://docs.datastax.com/en/cassandra/2.0/cassandra/install/installRecommendSettings.html#reference_ds_sxl_gf3_2k__user-resource-limits

  C*heers,

  Alain

 2015-07-01 7:33 GMT+02:00 Neha Trivedi nehajtriv...@gmail.com:

   Arun,
  I am logging on to Server as root and running (sudo service
 cassandra start)

  regards
  Neha

 On Wed, Jul 1, 2015 at 11:00 AM, Neha Trivedi 
 nehajtriv...@gmail.com wrote:

 Thanks Arun ! I will try and get back !

 On Wed, Jul 1, 2015 at 10:32 AM, Arun arunsi...@gmail.com
 wrote:

 Looks like you have too many open files issue. Increase the
 ulimit for the user.

  If you are starting the cassandra daemon using user cassandra,
 increase the ulimit for that user.


  On Jun 30, 2015, at 21:16, Neha Trivedi 
 nehajtriv...@gmail.com wrote:
 
  Hello,
  I have a 4 node cluster with SimpleSnitch.
  Cassandra :  Cassandra 2.1.3
 
  I am trying to add a new node (cassandra 2.1.7) and I get the
 following error.
 
  ERROR [STREAM-IN-] 2015-06-30 05:13:48,516
 JVMStabilityInspector.java:94 - JVM state determined to be 
 unstable.
 Exiting forcefully due to:
  java.io.FileNotFoundException:
 /var/lib/cassandra/data/-Index.db (Too many open files)
 
  I increased the MAX_HEAP_SIZE then I get :
  ERROR [CompactionExecutor:9] 2015-06-30 23:31:44,792
 CassandraDaemon.java:223 - Exception in thread
 Thread[CompactionExecutor:9,1,main]
  java.lang.RuntimeException: java.io.FileNotFoundException:
 /var/lib/cassandra/data/-Data.db (Too many open files)
  at
 org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 
  Is it because of the different version of Cassandra (2.1.3
 and 2.17) ?
 
  regards
  N
 
 
 
 
 
 
 













 --




 --
 Marcos Ortiz http://about.me/marcosortiz, Sr. Product Manager (Data
 Infrastructure) at UCI
 @marcosluis2186 http://twitter.com/marcosluis2186




-- 


--





Re: Error while adding a new node.

2015-07-01 Thread Alain RODRIGUEZ
Just check the process owner to be sure (top, htop, ps, ...)

http://docs.datastax.com/en/cassandra/2.0/cassandra/install/installRecommendSettings.html#reference_ds_sxl_gf3_2k__user-resource-limits
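
For reference, a sketch of the per-user limits that page recommends (the values
below are the commonly documented ones and the limits.d path is an assumption;
double-check both against the linked page for your platform):

# /etc/security/limits.d/cassandra.conf
cassandra - memlock unlimited
cassandra - nofile  100000
cassandra - nproc   32768
cassandra - as      unlimited

The limits that matter are those of the user the daemon actually runs as, which
is why checking the process owner first is worthwhile.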

C*heers,

Alain

2015-07-01 7:33 GMT+02:00 Neha Trivedi nehajtriv...@gmail.com:

 Arun,
 I am logging on to Server as root and running (sudo service cassandra
 start)

 regards
 Neha

 On Wed, Jul 1, 2015 at 11:00 AM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 Thanks Arun ! I will try and get back !

 On Wed, Jul 1, 2015 at 10:32 AM, Arun arunsi...@gmail.com wrote:

 Looks like you have too many open files issue. Increase the ulimit for
 the user.

  If you are starting the cassandra daemon using user cassandra, increase
 the ulimit for that user.


  On Jun 30, 2015, at 21:16, Neha Trivedi nehajtriv...@gmail.com
 wrote:
 
  Hello,
  I have a 4 node cluster with SimpleSnitch.
  Cassandra :  Cassandra 2.1.3
 
  I am trying to add a new node (cassandra 2.1.7) and I get the
 following error.
 
  ERROR [STREAM-IN-] 2015-06-30 05:13:48,516
 JVMStabilityInspector.java:94 - JVM state determined to be unstable.
 Exiting forcefully due to:
  java.io.FileNotFoundException: /var/lib/cassandra/data/-Index.db (Too
 many open files)
 
  I increased the MAX_HEAP_SIZE then I get :
  ERROR [CompactionExecutor:9] 2015-06-30 23:31:44,792
 CassandraDaemon.java:223 - Exception in thread
 Thread[CompactionExecutor:9,1,main]
  java.lang.RuntimeException: java.io.FileNotFoundException:
 /var/lib/cassandra/data/-Data.db (Too many open files)
  at
 org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 
  Is it because of the different version of Cassandra (2.1.3 and 2.17) ?
 
  regards
  N
 
 
 
 
 
 
 






Re: Error while adding a new node.

2015-07-01 Thread Neha Trivedi
One of the column families has an SSTable count as follows:
SSTable count: 98506

Can it be because of the 2.1.3 version of Cassandra?
I found this: https://issues.apache.org/jira/browse/CASSANDRA-8964

regards
Neha


On Wed, Jul 1, 2015 at 5:40 PM, Jason Wee peich...@gmail.com wrote:

 nodetool cfstats?

 On Wed, Jul 1, 2015 at 8:08 PM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 Hey..
 nodetool compactionstats
 pending tasks: 0

 no pending tasks.

 Dont have opscenter. how do I monitor sstables?


 On Wed, Jul 1, 2015 at 4:28 PM, Alain RODRIGUEZ arodr...@gmail.com
 wrote:

 You also might want to check if you have compactions pending (Opscenter
 / nodetool compactionstats).

 Also you can monitor the number of sstables.

 C*heers

 Alain

 2015-07-01 11:53 GMT+02:00 Neha Trivedi nehajtriv...@gmail.com:

 Thanks I will checkout.
 I increased the ulimit to 10, but I am getting the same error, but
 after a while.
 regards
 Neha


 On Wed, Jul 1, 2015 at 2:22 PM, Alain RODRIGUEZ arodr...@gmail.com
 wrote:

 Just check the process owner to be sure (top, htop, ps, ...)


 http://docs.datastax.com/en/cassandra/2.0/cassandra/install/installRecommendSettings.html#reference_ds_sxl_gf3_2k__user-resource-limits

 C*heers,

 Alain

 2015-07-01 7:33 GMT+02:00 Neha Trivedi nehajtriv...@gmail.com:

 Arun,
 I am logging on to Server as root and running (sudo service cassandra
 start)

 regards
 Neha

 On Wed, Jul 1, 2015 at 11:00 AM, Neha Trivedi nehajtriv...@gmail.com
  wrote:

 Thanks Arun ! I will try and get back !

 On Wed, Jul 1, 2015 at 10:32 AM, Arun arunsi...@gmail.com wrote:

 Looks like you have too many open files issue. Increase the ulimit
 for the user.

  If you are starting the cassandra daemon using user cassandra,
 increase the ulimit for that user.


  On Jun 30, 2015, at 21:16, Neha Trivedi nehajtriv...@gmail.com
 wrote:
 
  Hello,
  I have a 4 node cluster with SimpleSnitch.
  Cassandra :  Cassandra 2.1.3
 
  I am trying to add a new node (cassandra 2.1.7) and I get the
 following error.
 
  ERROR [STREAM-IN-] 2015-06-30 05:13:48,516
 JVMStabilityInspector.java:94 - JVM state determined to be unstable.
 Exiting forcefully due to:
  java.io.FileNotFoundException: /var/lib/cassandra/data/-Index.db
 (Too many open files)
 
  I increased the MAX_HEAP_SIZE then I get :
  ERROR [CompactionExecutor:9] 2015-06-30 23:31:44,792
 CassandraDaemon.java:223 - Exception in thread
 Thread[CompactionExecutor:9,1,main]
  java.lang.RuntimeException: java.io.FileNotFoundException:
 /var/lib/cassandra/data/-Data.db (Too many open files)
  at
 org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 
  Is it because of the different version of Cassandra (2.1.3 and
 2.17) ?
 
  regards
  N
 
 
 
 
 
 
 











Re: Error while adding a new node.

2015-07-01 Thread Neha Trivedi
Hey..
nodetool compactionstats
pending tasks: 0

no pending tasks.

Don't have OpsCenter. How do I monitor SSTables?


On Wed, Jul 1, 2015 at 4:28 PM, Alain RODRIGUEZ arodr...@gmail.com wrote:

 You also might want to check if you have compactions pending (Opscenter /
 nodetool compactionstats).

 Also you can monitor the number of sstables.

 C*heers

 Alain

 2015-07-01 11:53 GMT+02:00 Neha Trivedi nehajtriv...@gmail.com:

 Thanks I will checkout.
 I increased the ulimit to 10, but I am getting the same error, but
 after a while.
 regards
 Neha


 On Wed, Jul 1, 2015 at 2:22 PM, Alain RODRIGUEZ arodr...@gmail.com
 wrote:

 Just check the process owner to be sure (top, htop, ps, ...)


 http://docs.datastax.com/en/cassandra/2.0/cassandra/install/installRecommendSettings.html#reference_ds_sxl_gf3_2k__user-resource-limits

 C*heers,

 Alain

 2015-07-01 7:33 GMT+02:00 Neha Trivedi nehajtriv...@gmail.com:

 Arun,
 I am logging on to Server as root and running (sudo service cassandra
 start)

 regards
 Neha

 On Wed, Jul 1, 2015 at 11:00 AM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 Thanks Arun ! I will try and get back !

 On Wed, Jul 1, 2015 at 10:32 AM, Arun arunsi...@gmail.com wrote:

 Looks like you have too many open files issue. Increase the ulimit
 for the user.

  If you are starting the cassandra daemon using user cassandra,
 increase the ulimit for that user.


  On Jun 30, 2015, at 21:16, Neha Trivedi nehajtriv...@gmail.com
 wrote:
 
  Hello,
  I have a 4 node cluster with SimpleSnitch.
  Cassandra :  Cassandra 2.1.3
 
  I am trying to add a new node (cassandra 2.1.7) and I get the
 following error.
 
  ERROR [STREAM-IN-] 2015-06-30 05:13:48,516
 JVMStabilityInspector.java:94 - JVM state determined to be unstable.
 Exiting forcefully due to:
  java.io.FileNotFoundException: /var/lib/cassandra/data/-Index.db
 (Too many open files)
 
  I increased the MAX_HEAP_SIZE then I get :
  ERROR [CompactionExecutor:9] 2015-06-30 23:31:44,792
 CassandraDaemon.java:223 - Exception in thread
 Thread[CompactionExecutor:9,1,main]
  java.lang.RuntimeException: java.io.FileNotFoundException:
 /var/lib/cassandra/data/-Data.db (Too many open files)
  at
 org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 
  Is it because of the different version of Cassandra (2.1.3 and
 2.17) ?
 
  regards
  N
 
 
 
 
 
 
 









Re: Error while adding a new node.

2015-07-01 Thread Alain RODRIGUEZ
You also might want to check if you have compactions pending (Opscenter /
nodetool compactionstats).

Also you can monitor the number of sstables.

C*heers

Alain

2015-07-01 11:53 GMT+02:00 Neha Trivedi nehajtriv...@gmail.com:

 Thanks I will checkout.
 I increased the ulimit to 10, but I am getting the same error, but
 after a while.
 regards
 Neha


 On Wed, Jul 1, 2015 at 2:22 PM, Alain RODRIGUEZ arodr...@gmail.com
 wrote:

 Just check the process owner to be sure (top, htop, ps, ...)


 http://docs.datastax.com/en/cassandra/2.0/cassandra/install/installRecommendSettings.html#reference_ds_sxl_gf3_2k__user-resource-limits

 C*heers,

 Alain

 2015-07-01 7:33 GMT+02:00 Neha Trivedi nehajtriv...@gmail.com:

 Arun,
 I am logging on to Server as root and running (sudo service cassandra
 start)

 regards
 Neha

 On Wed, Jul 1, 2015 at 11:00 AM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 Thanks Arun ! I will try and get back !

 On Wed, Jul 1, 2015 at 10:32 AM, Arun arunsi...@gmail.com wrote:

 Looks like you have too many open files issue. Increase the ulimit for
 the user.

  If you are starting the cassandra daemon using user cassandra,
 increase the ulimit for that user.


  On Jun 30, 2015, at 21:16, Neha Trivedi nehajtriv...@gmail.com
 wrote:
 
  Hello,
  I have a 4 node cluster with SimpleSnitch.
  Cassandra :  Cassandra 2.1.3
 
  I am trying to add a new node (cassandra 2.1.7) and I get the
 following error.
 
  ERROR [STREAM-IN-] 2015-06-30 05:13:48,516
 JVMStabilityInspector.java:94 - JVM state determined to be unstable.
 Exiting forcefully due to:
  java.io.FileNotFoundException: /var/lib/cassandra/data/-Index.db
 (Too many open files)
 
  I increased the MAX_HEAP_SIZE then I get :
  ERROR [CompactionExecutor:9] 2015-06-30 23:31:44,792
 CassandraDaemon.java:223 - Exception in thread
 Thread[CompactionExecutor:9,1,main]
  java.lang.RuntimeException: java.io.FileNotFoundException:
 /var/lib/cassandra/data/-Data.db (Too many open files)
  at
 org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 
  Is it because of the different version of Cassandra (2.1.3 and 2.17)
 ?
 
  regards
  N
 
 
 
 
 
 
 








Re: Error while adding a new node.

2015-07-01 Thread Jason Wee
nodetool cfstats?
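
If OpsCenter isn't available, a minimal sketch of watching the per-table SSTable
count from the command line (the keyspace and table names are placeholders):

nodetool cfstats <keyspace>.<table> | grep "SSTable count"

Running that periodically on each node is usually enough to spot a table whose
SSTable count keeps climbing.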

On Wed, Jul 1, 2015 at 8:08 PM, Neha Trivedi nehajtriv...@gmail.com wrote:

 Hey..
 nodetool compactionstats
 pending tasks: 0

 no pending tasks.

 Dont have opscenter. how do I monitor sstables?


 On Wed, Jul 1, 2015 at 4:28 PM, Alain RODRIGUEZ arodr...@gmail.com
 wrote:

 You also might want to check if you have compactions pending (Opscenter /
 nodetool compactionstats).

 Also you can monitor the number of sstables.

 C*heers

 Alain

 2015-07-01 11:53 GMT+02:00 Neha Trivedi nehajtriv...@gmail.com:

 Thanks I will checkout.
 I increased the ulimit to 10, but I am getting the same error, but
 after a while.
 regards
 Neha


 On Wed, Jul 1, 2015 at 2:22 PM, Alain RODRIGUEZ arodr...@gmail.com
 wrote:

 Just check the process owner to be sure (top, htop, ps, ...)


 http://docs.datastax.com/en/cassandra/2.0/cassandra/install/installRecommendSettings.html#reference_ds_sxl_gf3_2k__user-resource-limits

 C*heers,

 Alain

 2015-07-01 7:33 GMT+02:00 Neha Trivedi nehajtriv...@gmail.com:

 Arun,
 I am logging on to Server as root and running (sudo service cassandra
 start)

 regards
 Neha

 On Wed, Jul 1, 2015 at 11:00 AM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 Thanks Arun ! I will try and get back !

 On Wed, Jul 1, 2015 at 10:32 AM, Arun arunsi...@gmail.com wrote:

 Looks like you have too many open files issue. Increase the ulimit
 for the user.

  If you are starting the cassandra daemon using user cassandra,
 increase the ulimit for that user.


  On Jun 30, 2015, at 21:16, Neha Trivedi nehajtriv...@gmail.com
 wrote:
 
  Hello,
  I have a 4 node cluster with SimpleSnitch.
  Cassandra :  Cassandra 2.1.3
 
  I am trying to add a new node (cassandra 2.1.7) and I get the
 following error.
 
  ERROR [STREAM-IN-] 2015-06-30 05:13:48,516
 JVMStabilityInspector.java:94 - JVM state determined to be unstable.
 Exiting forcefully due to:
  java.io.FileNotFoundException: /var/lib/cassandra/data/-Index.db
 (Too many open files)
 
  I increased the MAX_HEAP_SIZE then I get :
  ERROR [CompactionExecutor:9] 2015-06-30 23:31:44,792
 CassandraDaemon.java:223 - Exception in thread
 Thread[CompactionExecutor:9,1,main]
  java.lang.RuntimeException: java.io.FileNotFoundException:
 /var/lib/cassandra/data/-Data.db (Too many open files)
  at
 org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 
  Is it because of the different version of Cassandra (2.1.3 and
 2.17) ?
 
  regards
  N
 
 
 
 
 
 
 










Re: Error while adding a new node.

2015-07-01 Thread Neha Trivedi
Thanks, I will check it out.
I increased the ulimit to 10, but I am still getting the same error, just
after a while.
regards
Neha
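
Two quick checks that may help here (the pgrep pattern is an assumption; adjust
it to however your init script launches the daemon). Raising the ulimit in a
shell does not change an already-running process, so first confirm the limit the
running JVM actually got, then count the descriptors it is really holding:

pid=$(pgrep -f CassandraDaemon | head -1)
grep 'open files' /proc/$pid/limits     # effective soft/hard limit for the daemon
ls /proc/$pid/fd | wc -l                # descriptors actually in use

If the effective limit still shows the old value, restart Cassandra after fixing
the limits configuration so the new value is picked up.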


On Wed, Jul 1, 2015 at 2:22 PM, Alain RODRIGUEZ arodr...@gmail.com wrote:

 Just check the process owner to be sure (top, htop, ps, ...)


 http://docs.datastax.com/en/cassandra/2.0/cassandra/install/installRecommendSettings.html#reference_ds_sxl_gf3_2k__user-resource-limits

 C*heers,

 Alain

 2015-07-01 7:33 GMT+02:00 Neha Trivedi nehajtriv...@gmail.com:

 Arun,
 I am logging on to Server as root and running (sudo service cassandra
 start)

 regards
 Neha

 On Wed, Jul 1, 2015 at 11:00 AM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 Thanks Arun ! I will try and get back !

 On Wed, Jul 1, 2015 at 10:32 AM, Arun arunsi...@gmail.com wrote:

 Looks like you have too many open files issue. Increase the ulimit for
 the user.

  If you are starting the cassandra daemon using user cassandra,
 increase the ulimit for that user.


  On Jun 30, 2015, at 21:16, Neha Trivedi nehajtriv...@gmail.com
 wrote:
 
  Hello,
  I have a 4 node cluster with SimpleSnitch.
  Cassandra :  Cassandra 2.1.3
 
  I am trying to add a new node (cassandra 2.1.7) and I get the
 following error.
 
  ERROR [STREAM-IN-] 2015-06-30 05:13:48,516
 JVMStabilityInspector.java:94 - JVM state determined to be unstable.
 Exiting forcefully due to:
  java.io.FileNotFoundException: /var/lib/cassandra/data/-Index.db (Too
 many open files)
 
  I increased the MAX_HEAP_SIZE then I get :
  ERROR [CompactionExecutor:9] 2015-06-30 23:31:44,792
 CassandraDaemon.java:223 - Exception in thread
 Thread[CompactionExecutor:9,1,main]
  java.lang.RuntimeException: java.io.FileNotFoundException:
 /var/lib/cassandra/data/-Data.db (Too many open files)
  at
 org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 
  Is it because of the different version of Cassandra (2.1.3 and 2.17) ?
 
  regards
  N
 
 
 
 
 
 
 







RE: Stream failure while adding a new node

2015-07-01 Thread David CHARBONNIER
Hi Alain,

We still have the timeout problem in OpsCenter and we still haven't solved it,
so no, we didn't run an entire repair with the repair service.
And yes, during this try, we set auto_bootstrap to true and ran a repair on the
9th node after it finished streaming.

Thank you for your help.

Best regards,


David CHARBONNIER

Sysadmin

T : +33 411 934 200

david.charbonn...@rgsystem.com


ZAC Aéroport

125 Impasse Adam Smith

34470 Pérols - France

www.rgsystem.com






From: Alain RODRIGUEZ [mailto:arodr...@gmail.com]
Sent: Tuesday, 30 June 2015 15:18
To: user@cassandra.apache.org
Subject: Re: Stream failure while adding a new node

Hi David,

Are you sure you ran the repair entirely (9 days + repair logs ok on opscenter 
server) before adding the 10th node ? This is important to avoid potential data 
loss ! Did you set auto_bootstrap to true on this 10th node ?

C*heers,

Alain



2015-06-29 14:54 GMT+02:00 David CHARBONNIER david.charbonn...@rgsystem.com:
Hi,

We’re using Cassandra 2.0.8.39 through DataStax Enterprise 4.5.1 with a 9-node 
cluster.
We need to add a few new nodes to the cluster but we’re experiencing an issue 
we don’t know how to solve.
Here is exactly what we did:

-  We had 8 nodes and needed to add a few more.

-  We tried to add a 9th node, but the stream was stuck for a very long time and 
the bootstrap never finished (related to the streaming_socket_timeout_in_ms 
default value in cassandra.yaml).

-  We applied a solution given by a DataStax architect: restart the node 
with auto_bootstrap set to false and run a repair.

-  After this issue, we patched the default configuration on all our nodes to 
avoid this problem (see the cassandra.yaml sketch after this list) and did a 
rolling restart of the cluster.

-  Then, we tried adding a 10th node, but it receives streams from only 
one node (node2).
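
As referenced in the list above, the setting in question looks like this in
cassandra.yaml (the 1 hour value is an illustrative assumption, not the exact
value from the patch; use whatever value you were advised):

# cassandra.yaml, applied on every node, followed by a rolling restart.
# 0 means no timeout, so a hung stream can stall a bootstrap indefinitely;
# a non-zero value lets it fail and be retried instead.
streaming_socket_timeout_in_ms: 3600000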

Here are the logs from the problematic node (node10):
INFO [main] 2015-06-26 15:25:59,490 StreamResultFuture.java (line 87) [Stream 
#a5226b30-1c17-11e5-a58b-e35f08264ca1] Executing streaming plan for Bootstrap
INFO [main] 2015-06-26 15:25:59,490 StreamResultFuture.java (line 91) [Stream 
#a5226b30-1c17-11e5-a58b-e35f08264ca1] Beginning stream session with /node6
INFO [main] 2015-06-26 15:25:59,491 StreamResultFuture.java (line 91) [Stream 
#a5226b30-1c17-11e5-a58b-e35f08264ca1] Beginning stream session with /node5
INFO [main] 2015-06-26 15:25:59,492 StreamResultFuture.java (line 91) [Stream 
#a5226b30-1c17-11e5-a58b-e35f08264ca1] Beginning stream session with /node4
INFO [main] 2015-06-26 15:25:59,493 StreamResultFuture.java (line 91) [Stream 
#a5226b30-1c17-11e5-a58b-e35f08264ca1] Beginning stream session with /node3
INFO [main] 2015-06-26 15:25:59,493 StreamResultFuture.java (line 91) [Stream 
#a5226b30-1c17-11e5-a58b-e35f08264ca1] Beginning stream session with /node9
INFO [main] 2015-06-26 15:25:59,493 StreamResultFuture.java (line 91) [Stream 
#a5226b30-1c17-11e5-a58b-e35f08264ca1] Beginning stream session with /node8
INFO [main] 2015-06-26 15:25:59,493 StreamResultFuture.java (line 91) [Stream 
#a5226b30-1c17-11e5-a58b-e35f08264ca1] Beginning stream session with /node7
INFO [main] 2015-06-26 15:25:59,494 StreamResultFuture.java (line 91) [Stream 
#a5226b30-1c17-11e5-a58b-e35f08264ca1] Beginning stream session with /node1
INFO [main] 2015-06-26 15:25:59,494 StreamResultFuture.java (line 91) [Stream 
#a5226b30-1c17-11e5-a58b-e35f08264ca1] Beginning stream session with /node2
INFO [STREAM-IN-/node6] 2015-06-26 15:25:59,515 StreamResultFuture.java (line 
186) [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Session with /node6 is 
complete
INFO [STREAM-IN-/node4] 2015-06-26 15:25:59,516 StreamResultFuture.java (line 
186) [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Session with /node4 is 
complete
INFO [STREAM-IN-/node5] 2015-06-26 15:25:59,517 StreamResultFuture.java (line 
186) [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Session with /node5 is 
complete
INFO [STREAM-IN-/node3] 2015-06-26 15:25:59,527 StreamResultFuture.java (line 
186) [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Session with /node3 is 
complete
INFO [STREAM-IN-/node1] 2015-06-26 15:25:59,528 StreamResultFuture.java (line 
186) [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Session with /node1 is 
complete
INFO [STREAM-IN-/node8] 2015-06-26 15:25:59,530 StreamResultFuture.java (line 
186) [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Session with /node8 is 
complete
INFO [STREAM-IN-/node7] 2015-06-26 15:25:59,531 StreamResultFuture.java (line 
186) [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Session with /node7 is 
complete
INFO [STREAM-IN-/node9] 2015-06-26 15:25:59,533 StreamResultFuture.java (line 
186) [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Session with /node9 is 
complete
INFO [STREAM-IN-/node2] 2015-06-26 15:26:04,874

Re: Stream failure while adding a new node

2015-07-01 Thread Jan
David;

Bring down all the nodes with the exception of the 'seed' node. Now bring up the 
10th node. Run 'nodetool status' and wait until this 10th node is UP. Bring up 
the rest of the nodes after that. Run 'nodetool status' again and check that all 
the nodes are UP.

Alternatively: decommission the 10th node completely, drop it from the cluster, 
build a new node with the same IP and hostname, and have it join the running 
cluster.

Hope this helps,
Jan
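
For reference, a rough sketch of the commands behind the second option (the host ID 
is a placeholder you would read from 'nodetool status', and exact behaviour varies a 
little between Cassandra versions):

# on a node that is still alive and should leave the ring cleanly
# (it streams its ranges to the remaining nodes before leaving):
nodetool decommission

# if the node is already down and cannot decommission itself, from any live node:
nodetool removenode <host-id-from-nodetool-status>

# then watch the ring until every node reports UN (Up/Normal):
nodetool status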


 


 On Wednesday, July 1, 2015 7:56 AM, David CHARBONNIER 
david.charbonn...@rgsystem.com wrote:
   


Re: Error while adding a new node.

2015-07-01 Thread Neha Trivedi
also:
root@cas03:~# sudo service cassandra start
root@cas03:~# lsof -n | grep java | wc -l
5315
root@cas03:~# lsof -n | grep java | wc -l
977317
root@cas03:~# lsof -n | grep java | wc -l
880240
root@cas03:~# lsof -n | grep java | wc -l
882402
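
Side note: one way to see the limit the running daemon actually got, as opposed to 
the shell's ulimit, is to read its /proc entry. The pgrep pattern below assumes 
Cassandra is the only matching JVM on the box:

pid=$(pgrep -f CassandraDaemon | head -n 1)
grep 'open files' /proc/$pid/limits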


On Wed, Jul 1, 2015 at 6:31 PM, Neha Trivedi nehajtriv...@gmail.com wrote:

 One of the column families has an SSTable count as under:
 SSTable count: 98506

 Can it be because of the 2.1.3 version of Cassandra?
 I found this: https://issues.apache.org/jira/browse/CASSANDRA-8964

 regards
 Neha


 On Wed, Jul 1, 2015 at 5:40 PM, Jason Wee peich...@gmail.com wrote:

 nodetool cfstats?

 On Wed, Jul 1, 2015 at 8:08 PM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 Hey..
 nodetool compactionstats
 pending tasks: 0

 no pending tasks.

 Dont have opscenter. how do I monitor sstables?


 On Wed, Jul 1, 2015 at 4:28 PM, Alain RODRIGUEZ arodr...@gmail.com
 wrote:

 You also might want to check if you have compactions pending (Opscenter
 / nodetool compactionstats).

 Also you can monitor the number of sstables.

 C*heers

 Alain

 2015-07-01 11:53 GMT+02:00 Neha Trivedi nehajtriv...@gmail.com:

 Thanks I will checkout.
 I increased the ulimit to 10, but I am getting the same error, but
 after a while.
 regards
 Neha


 On Wed, Jul 1, 2015 at 2:22 PM, Alain RODRIGUEZ arodr...@gmail.com
 wrote:

 Just check the process owner to be sure (top, htop, ps, ...)


 http://docs.datastax.com/en/cassandra/2.0/cassandra/install/installRecommendSettings.html#reference_ds_sxl_gf3_2k__user-resource-limits

 C*heers,

 Alain

 2015-07-01 7:33 GMT+02:00 Neha Trivedi nehajtriv...@gmail.com:

 Arun,
 I am logging on to Server as root and running (sudo service
 cassandra start)

 regards
 Neha

 On Wed, Jul 1, 2015 at 11:00 AM, Neha Trivedi 
 nehajtriv...@gmail.com wrote:

 Thanks Arun ! I will try and get back !

 On Wed, Jul 1, 2015 at 10:32 AM, Arun arunsi...@gmail.com wrote:

 Looks like you have too many open files issue. Increase the ulimit
 for the user.

  If you are starting the cassandra daemon using user cassandra,
 increase the ulimit for that user.


  On Jun 30, 2015, at 21:16, Neha Trivedi nehajtriv...@gmail.com
 wrote:
 
  Hello,
  I have a 4 node cluster with SimpleSnitch.
  Cassandra :  Cassandra 2.1.3
 
  I am trying to add a new node (cassandra 2.1.7) and I get the
 following error.
 
  ERROR [STREAM-IN-] 2015-06-30 05:13:48,516
 JVMStabilityInspector.java:94 - JVM state determined to be unstable.
 Exiting forcefully due to:
  java.io.FileNotFoundException: /var/lib/cassandra/data/-Index.db
 (Too many open files)
 
  I increased the MAX_HEAP_SIZE then I get :
  ERROR [CompactionExecutor:9] 2015-06-30 23:31:44,792
 CassandraDaemon.java:223 - Exception in thread
 Thread[CompactionExecutor:9,1,main]
  java.lang.RuntimeException: java.io.FileNotFoundException:
 /var/lib/cassandra/data/-Data.db (Too many open files)
  at
 org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 
  Is it because of the different version of Cassandra (2.1.3 and
 2.17) ?
 
  regards
  N


Re: Error while adding a new node.

2015-06-30 Thread Arun
Looks like you have too many open files issue. Increase the ulimit for the user.

 If you are starting the cassandra daemon using user cassandra, increase the 
ulimit for that user.
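
As an illustration only (the numbers below are the ones commonly quoted on the 
DataStax recommended-settings page, not something specific to this cluster), raising 
the limits for the cassandra user usually looks like this:

# /etc/security/limits.d/cassandra.conf (or /etc/security/limits.conf)
cassandra - memlock unlimited
cassandra - nofile  100000
cassandra - nproc   32768

Depending on how the daemon is started, an init script that does not go through PAM 
may ignore limits.conf, in which case the script (or /etc/default/cassandra) has to 
raise the limit itself with ulimit -n.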


 On Jun 30, 2015, at 21:16, Neha Trivedi nehajtriv...@gmail.com wrote:
 
 Hello,
 I have a 4 node cluster with SimpleSnitch.
 Cassandra :  Cassandra 2.1.3 
 
 I am trying to add a new node (cassandra 2.1.7) and I get the following error.
 
 ERROR [STREAM-IN-] 2015-06-30 05:13:48,516 JVMStabilityInspector.java:94 - 
 JVM state determined to be unstable.  Exiting forcefully due to:
 java.io.FileNotFoundException: /var/lib/cassandra/data/-Index.db (Too many 
 open files)
 
 I increased the MAX_HEAP_SIZE then I get :
 ERROR [CompactionExecutor:9] 2015-06-30 23:31:44,792 CassandraDaemon.java:223 
 - Exception in thread Thread[CompactionExecutor:9,1,main]
 java.lang.RuntimeException: java.io.FileNotFoundException: 
 /var/lib/cassandra/data/-Data.db (Too many open files)
 at 
 org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52)
  ~[apache-cassandra-2.1.7.jar:2.1.7]
 
 Is it because of the different version of Cassandra (2.1.3 and 2.17) ?
 
 regards
 N
 
 
 
 
 
 
 


Re: Error while adding a new node.

2015-06-30 Thread Neha Trivedi
Thanks Arun ! I will try and get back !

On Wed, Jul 1, 2015 at 10:32 AM, Arun arunsi...@gmail.com wrote:

 Looks like you have too many open files issue. Increase the ulimit for the
 user.

  If you are starting the cassandra daemon using user cassandra, increase
 the ulimit for that user.


  On Jun 30, 2015, at 21:16, Neha Trivedi nehajtriv...@gmail.com wrote:
 
  Hello,
  I have a 4 node cluster with SimpleSnitch.
  Cassandra :  Cassandra 2.1.3
 
  I am trying to add a new node (cassandra 2.1.7) and I get the following
 error.
 
  ERROR [STREAM-IN-] 2015-06-30 05:13:48,516 JVMStabilityInspector.java:94
 - JVM state determined to be unstable.  Exiting forcefully due to:
  java.io.FileNotFoundException: /var/lib/cassandra/data/-Index.db (Too
 many open files)
 
  I increased the MAX_HEAP_SIZE then I get :
  ERROR [CompactionExecutor:9] 2015-06-30 23:31:44,792
 CassandraDaemon.java:223 - Exception in thread
 Thread[CompactionExecutor:9,1,main]
  java.lang.RuntimeException: java.io.FileNotFoundException:
 /var/lib/cassandra/data/-Data.db (Too many open files)
  at
 org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 
  Is it because of the different version of Cassandra (2.1.3 and 2.17) ?
 
  regards
  N
 
 
 
 
 
 
 



Re: Stream failure while adding a new node

2015-06-30 Thread Alain RODRIGUEZ
Hi David,

Are you sure you ran the repair entirely (9 days + repair logs ok on
opscenter server) before adding the 10th node ? This is important to avoid
potential data loss ! Did you set auto_bootstrap to true on this 10th node ?

C*heers,

Alain



2015-06-29 14:54 GMT+02:00 David CHARBONNIER david.charbonn...@rgsystem.com
:

  Hi,



 We’re using Cassandra 2.0.8.39 through Datastax Enterprise 4.5.1 with a 9
 nodes cluster.

 We need to add a few new nodes to the cluster but we’re experiencing an
 issue we don’t know how to solve.

 Here is exactly what we did :

 -  We had 8 nodes and need to add a few ones

 -  We tried to add 9th node but stream stucked a very long time
 and bootstrap never finish (related to streaming_socket_timeout_in_ms
 default value in cassandra.yaml)

 -  We ran a solution given by a Datastax’s architect : restart
 the node with auto_bootstrap set to false and run a repair

 -  After this issue, we ran into pathing the default
 configuration on all our nodes to avoid this problem and made a rolling
 restart of the cluster

 -  Then, we tried adding a 10th node but it receives stream from
 only one node (node2).



 Here is the logs on this problematic node (node10) :

 INFO [main] 2015-06-26 15:25:59,490 StreamResultFuture.java (line 87)
 [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Executing streaming plan for
 Bootstrap

 INFO [main] 2015-06-26 15:25:59,490 StreamResultFuture.java (line 91)
 [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Beginning stream session
 with /node6

 INFO [main] 2015-06-26 15:25:59,491 StreamResultFuture.java (line 91)
 [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Beginning stream session
 with /node5

 INFO [main] 2015-06-26 15:25:59,492 StreamResultFuture.java (line 91)
 [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Beginning stream session
 with /node4

 INFO [main] 2015-06-26 15:25:59,493 StreamResultFuture.java (line 91)
 [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Beginning stream session
 with /node3

 INFO [main] 2015-06-26 15:25:59,493 StreamResultFuture.java (line 91)
 [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Beginning stream session
 with /node9

 INFO [main] 2015-06-26 15:25:59,493 StreamResultFuture.java (line 91)
 [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Beginning stream session
 with /node8

 INFO [main] 2015-06-26 15:25:59,493 StreamResultFuture.java (line 91)
 [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Beginning stream session
 with /node7

 INFO [main] 2015-06-26 15:25:59,494 StreamResultFuture.java (line 91)
 [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Beginning stream session
 with /node1

 INFO [main] 2015-06-26 15:25:59,494 StreamResultFuture.java (line 91)
 [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Beginning stream session
 with /node2

 INFO [STREAM-IN-/node6] 2015-06-26 15:25:59,515 StreamResultFuture.java
 (line 186) [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Session with
 /node6 is complete

 INFO [STREAM-IN-/node4] 2015-06-26 15:25:59,516 StreamResultFuture.java
 (line 186) [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Session with
 /node4 is complete

 INFO [STREAM-IN-/node5] 2015-06-26 15:25:59,517 StreamResultFuture.java
 (line 186) [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Session with
 /node5 is complete

 INFO [STREAM-IN-/node3] 2015-06-26 15:25:59,527 StreamResultFuture.java
 (line 186) [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Session with
 /node3 is complete

 INFO [STREAM-IN-/node1] 2015-06-26 15:25:59,528 StreamResultFuture.java
 (line 186) [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Session with
 /node1 is complete

 INFO [STREAM-IN-/node8] 2015-06-26 15:25:59,530 StreamResultFuture.java
 (line 186) [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Session with
 /node8 is complete

 INFO [STREAM-IN-/node7] 2015-06-26 15:25:59,531 StreamResultFuture.java
 (line 186) [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Session with
 /node7 is complete

 INFO [STREAM-IN-/node9] 2015-06-26 15:25:59,533 StreamResultFuture.java
 (line 186) [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Session with
 /node9 is complete

 INFO [STREAM-IN-/node2] 2015-06-26 15:26:04,874 StreamResultFuture.java
 (line 173) [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Prepare
 completed. Receiving 171 files(14844054090 bytes), sending 0 files(0 bytes)



 On the other nodes (not node2 which streams data), there is an error
 telling that node10 has no hostID.



 Did you ran into this issue or do you have any idea on how to solve this ?



 Thank you for your help.



 Best regards,



 *David CHARBONNIER*

 Sysadmin

 T : +33 411 934 200

 david.charbonn...@rgsystem.com

 ZAC Aéroport

 125 Impasse Adam Smith

 34470 Pérols - France

 *www.rgsystem.com* http://www.rgsystem.com/









Error while adding a new node.

2015-06-30 Thread Neha Trivedi
Hello,
I have a 4 node cluster with SimpleSnitch.
Cassandra :  Cassandra 2.1.3

I am trying to add a new node (cassandra 2.1.7) and I get the following
error.

ERROR [STREAM-IN-] 2015-06-30 05:13:48,516 JVMStabilityInspector.java:94 -
JVM state determined to be unstable.  Exiting forcefully due to:
java.io.FileNotFoundException: /var/lib/cassandra/data/-Index.db (Too many
open files)

I increased the MAX_HEAP_SIZE then I get :
ERROR [CompactionExecutor:9] 2015-06-30 23:31:44,792
CassandraDaemon.java:223 - Exception in thread
Thread[CompactionExecutor:9,1,main]
java.lang.RuntimeException: java.io.FileNotFoundException:
/var/lib/cassandra/data/-Data.db (Too many open files)
at
org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52)
~[apache-cassandra-2.1.7.jar:2.1.7]

Is it because of the different versions of Cassandra (2.1.3 and 2.1.7)?

regards
N


Re: Error while adding a new node.

2015-06-30 Thread Neha Trivedi
Arun,
I am logging on to Server as root and running (sudo service cassandra start)

regards
Neha

On Wed, Jul 1, 2015 at 11:00 AM, Neha Trivedi nehajtriv...@gmail.com
wrote:

 Thanks Arun ! I will try and get back !

 On Wed, Jul 1, 2015 at 10:32 AM, Arun arunsi...@gmail.com wrote:

 Looks like you have too many open files issue. Increase the ulimit for
 the user.

  If you are starting the cassandra daemon using user cassandra, increase
 the ulimit for that user.


  On Jun 30, 2015, at 21:16, Neha Trivedi nehajtriv...@gmail.com wrote:
 
  Hello,
  I have a 4 node cluster with SimpleSnitch.
  Cassandra :  Cassandra 2.1.3
 
  I am trying to add a new node (cassandra 2.1.7) and I get the following
 error.
 
  ERROR [STREAM-IN-] 2015-06-30 05:13:48,516
 JVMStabilityInspector.java:94 - JVM state determined to be unstable.
 Exiting forcefully due to:
  java.io.FileNotFoundException: /var/lib/cassandra/data/-Index.db (Too
 many open files)
 
  I increased the MAX_HEAP_SIZE then I get :
  ERROR [CompactionExecutor:9] 2015-06-30 23:31:44,792
 CassandraDaemon.java:223 - Exception in thread
 Thread[CompactionExecutor:9,1,main]
  java.lang.RuntimeException: java.io.FileNotFoundException:
 /var/lib/cassandra/data/-Data.db (Too many open files)
  at
 org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 
  Is it because of the different version of Cassandra (2.1.3 and 2.17) ?
 
  regards
  N
 
 
 
 
 
 
 





Stream failure while adding a new node

2015-06-29 Thread David CHARBONNIER
Hi,

We're using Cassandra 2.0.8.39 through Datastax Enterprise 4.5.1 with a 9-node 
cluster.
We need to add a few new nodes to the cluster, but we're experiencing an issue 
we don't know how to solve.
Here is exactly what we did:

-  We had 8 nodes and needed to add a few more

-  We tried to add a 9th node, but the stream was stuck for a very long time and 
the bootstrap never finished (related to the streaming_socket_timeout_in_ms default 
value in cassandra.yaml; see the snippet right after this list)

-  We applied a solution given by a Datastax architect: restart the node 
with auto_bootstrap set to false and run a repair

-  After this issue, we patched the default configuration on 
all our nodes to avoid this problem and made a rolling restart of the cluster

-  Then, we tried adding a 10th node, but it receives a stream from only 
one node (node2).
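
For reference, the cassandra.yaml change mentioned in the second bullet is a 
one-liner; the 3600000 ms (1 hour) value is only a commonly used example, and if I 
remember correctly the old default was 0, i.e. no timeout at all:

# cassandra.yaml, on every node, followed by a rolling restart
streaming_socket_timeout_in_ms: 3600000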

Here are the logs on this problematic node (node10):
INFO [main] 2015-06-26 15:25:59,490 StreamResultFuture.java (line 87) [Stream 
#a5226b30-1c17-11e5-a58b-e35f08264ca1] Executing streaming plan for Bootstrap
INFO [main] 2015-06-26 15:25:59,490 StreamResultFuture.java (line 91) [Stream 
#a5226b30-1c17-11e5-a58b-e35f08264ca1] Beginning stream session with /node6
INFO [main] 2015-06-26 15:25:59,491 StreamResultFuture.java (line 91) [Stream 
#a5226b30-1c17-11e5-a58b-e35f08264ca1] Beginning stream session with /node5
INFO [main] 2015-06-26 15:25:59,492 StreamResultFuture.java (line 91) [Stream 
#a5226b30-1c17-11e5-a58b-e35f08264ca1] Beginning stream session with /node4
INFO [main] 2015-06-26 15:25:59,493 StreamResultFuture.java (line 91) [Stream 
#a5226b30-1c17-11e5-a58b-e35f08264ca1] Beginning stream session with /node3
INFO [main] 2015-06-26 15:25:59,493 StreamResultFuture.java (line 91) [Stream 
#a5226b30-1c17-11e5-a58b-e35f08264ca1] Beginning stream session with /node9
INFO [main] 2015-06-26 15:25:59,493 StreamResultFuture.java (line 91) [Stream 
#a5226b30-1c17-11e5-a58b-e35f08264ca1] Beginning stream session with /node8
INFO [main] 2015-06-26 15:25:59,493 StreamResultFuture.java (line 91) [Stream 
#a5226b30-1c17-11e5-a58b-e35f08264ca1] Beginning stream session with /node7
INFO [main] 2015-06-26 15:25:59,494 StreamResultFuture.java (line 91) [Stream 
#a5226b30-1c17-11e5-a58b-e35f08264ca1] Beginning stream session with /node1
INFO [main] 2015-06-26 15:25:59,494 StreamResultFuture.java (line 91) [Stream 
#a5226b30-1c17-11e5-a58b-e35f08264ca1] Beginning stream session with /node2
INFO [STREAM-IN-/node6] 2015-06-26 15:25:59,515 StreamResultFuture.java (line 
186) [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Session with /node6 is 
complete
INFO [STREAM-IN-/node4] 2015-06-26 15:25:59,516 StreamResultFuture.java (line 
186) [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Session with /node4 is 
complete
INFO [STREAM-IN-/node5] 2015-06-26 15:25:59,517 StreamResultFuture.java (line 
186) [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Session with /node5 is 
complete
INFO [STREAM-IN-/node3] 2015-06-26 15:25:59,527 StreamResultFuture.java (line 
186) [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Session with /node3 is 
complete
INFO [STREAM-IN-/node1] 2015-06-26 15:25:59,528 StreamResultFuture.java (line 
186) [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Session with /node1 is 
complete
INFO [STREAM-IN-/node8] 2015-06-26 15:25:59,530 StreamResultFuture.java (line 
186) [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Session with /node8 is 
complete
INFO [STREAM-IN-/node7] 2015-06-26 15:25:59,531 StreamResultFuture.java (line 
186) [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Session with /node7 is 
complete
INFO [STREAM-IN-/node9] 2015-06-26 15:25:59,533 StreamResultFuture.java (line 
186) [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Session with /node9 is 
complete
INFO [STREAM-IN-/node2] 2015-06-26 15:26:04,874 StreamResultFuture.java (line 
173) [Stream #a5226b30-1c17-11e5-a58b-e35f08264ca1] Prepare completed. 
Receiving 171 files(14844054090 bytes), sending 0 files(0 bytes)

On the other nodes (not node2, which streams data), there is an error saying 
that node10 has no host ID.

Did you run into this issue, or do you have any idea how to solve it?

Thank you for your help.

Best regards,

David CHARBONNIER

Sysadmin

T : +33 411 934 200

david.charbonn...@rgsystem.com

ZAC Aéroport

125 Impasse Adam Smith

34470 Pérols - France

www.rgsystem.com





Problem in adding a new node

2012-06-08 Thread Prakrati Agrawal
Dear all,

I had a 1 node cluster of Cassandra. Then I added one more node to it and 
started Cassandra on it. I got the following error:

INFO 12:44:49,588 Loading persisted ring state
ERROR 12:44:49,613 Exception in thread Thread[COMMIT-LOG-ALLOCATOR,5,main]
java.io.IOError: java.io.IOException: Map failed
at 
org.apache.cassandra.db.commitlog.CommitLogSegment.init(CommitLogSegment.java:127)
at 
org.apache.cassandra.db.commitlog.CommitLogAllocator$3.run(CommitLogAllocator.java:191)
at 
org.apache.cassandra.db.commitlog.CommitLogAllocator$1.runMayThrow(CommitLogAllocator.java:95)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: Map failed
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:748)
at 
org.apache.cassandra.db.commitlog.CommitLogSegment.init(CommitLogSegment.java:119)
... 4 more
Caused by: java.lang.OutOfMemoryError: Map failed
at sun.nio.ch.FileChannelImpl.map0(Native Method)
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:745)

Please tell me what is the reason for this error and how should I rectify it.

Thanks and Regards
Prakrati





RE: Problem in adding a new node

2012-06-08 Thread MOHD ARSHAD SALEEM
Hi,

In the cassandra.yaml file of the node which you added, give the IP address of 
the 1st node in the seeds option.

Regards
Arshad

From: Prakrati Agrawal [prakrati.agra...@mu-sigma.com]
Sent: Friday, June 08, 2012 12:44 PM
To: user@cassandra.apache.org
Subject: Problem in adding a new node

Dear all,

I had a 1 node cluster of Cassandra. Then I added one more node to it and 
started Cassandra on it. I got the following error:

INFO 12:44:49,588 Loading persisted ring state
ERROR 12:44:49,613 Exception in thread Thread[COMMIT-LOG-ALLOCATOR,5,main]
java.io.IOError: java.io.IOException: Map failed
at 
org.apache.cassandra.db.commitlog.CommitLogSegment.init(CommitLogSegment.java:127)
at 
org.apache.cassandra.db.commitlog.CommitLogAllocator$3.run(CommitLogAllocator.java:191)
at 
org.apache.cassandra.db.commitlog.CommitLogAllocator$1.runMayThrow(CommitLogAllocator.java:95)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: Map failed
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:748)
at 
org.apache.cassandra.db.commitlog.CommitLogSegment.init(CommitLogSegment.java:119)
... 4 more
Caused by: java.lang.OutOfMemoryError: Map failed
at sun.nio.ch.FileChannelImpl.map0(Native Method)
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:745)

Please tell me what is the reason for this error and how should I rectify it.

Thanks and Regards
Prakrati





RE: Problem in adding a new node

2012-06-08 Thread Prakrati Agrawal
Yes I gave the ip address of the 1st node in the seeds option

Thanks and Regards
Prakrati
From: MOHD ARSHAD SALEEM [mailto:marshadsal...@tataelxsi.co.in]
Sent: Friday, June 08, 2012 12:51 PM
To: user@cassandra.apache.org
Subject: RE: Problem in adding a new node

Hi,

the node which you added in that (cassandra.yaml file)give the ip address of 
1st node in seeds option.

Regards
Arshad

From: Prakrati Agrawal [prakrati.agra...@mu-sigma.com]
Sent: Friday, June 08, 2012 12:44 PM
To: user@cassandra.apache.org
Subject: Problem in adding a new node
Dear all,

I had a 1 node cluster of Cassandra. Then I added one more node to it and 
started Cassandra on it. I got the following error:

INFO 12:44:49,588 Loading persisted ring state
ERROR 12:44:49,613 Exception in thread Thread[COMMIT-LOG-ALLOCATOR,5,main]
java.io.IOError: java.io.IOException: Map failed
at 
org.apache.cassandra.db.commitlog.CommitLogSegment.init(CommitLogSegment.java:127)
at 
org.apache.cassandra.db.commitlog.CommitLogAllocator$3.run(CommitLogAllocator.java:191)
at 
org.apache.cassandra.db.commitlog.CommitLogAllocator$1.runMayThrow(CommitLogAllocator.java:95)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: Map failed
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:748)
at 
org.apache.cassandra.db.commitlog.CommitLogSegment.init(CommitLogSegment.java:119)
... 4 more
Caused by: java.lang.OutOfMemoryError: Map failed
at sun.nio.ch.FileChannelImpl.map0(Native Method)
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:745)

Please tell me what is the reason for this error and how should I rectify it.

Thanks and Regards
Prakrati





Re: Problem in adding a new node

2012-06-08 Thread Sylvain Lebresne
Do you use a 32 bit JVM ? If so I refer you to the following thread:
http://mail-archives.apache.org/mod_mbox/cassandra-user/201204.mbox/%3ccaldd-zgthksc2bikp3h4trjxo5vcnhkl2wpwclsf+d9sqty...@mail.gmail.com%3E

In short, avoid 32 bits, but if you really cannot, set
commitlog_total_space_in_mb to a low value (128-256 MB).

--
Sylvain
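
For the record, the workaround is a single line in cassandra.yaml; 256 is just an 
example value inside the 128-256 MB range suggested above for a 32-bit JVM:

# cassandra.yaml
commitlog_total_space_in_mb: 256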

On Fri, Jun 8, 2012 at 9:14 AM, Prakrati Agrawal
prakrati.agra...@mu-sigma.com wrote:
 Dear all,



 I had a 1 node cluster of Cassandra. Then I added one more node to it and
 started Cassandra on it. I got the following error:



 INFO 12:44:49,588 Loading persisted ring state

 ERROR 12:44:49,613 Exception in thread Thread[COMMIT-LOG-ALLOCATOR,5,main]

 java.io.IOError: java.io.IOException: Map failed

     at
 org.apache.cassandra.db.commitlog.CommitLogSegment.init(CommitLogSegment.java:127)

     at
 org.apache.cassandra.db.commitlog.CommitLogAllocator$3.run(CommitLogAllocator.java:191)

     at
 org.apache.cassandra.db.commitlog.CommitLogAllocator$1.runMayThrow(CommitLogAllocator.java:95)

     at
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)

     at java.lang.Thread.run(Thread.java:662)

 Caused by: java.io.IOException: Map failed

     at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:748)

     at
 org.apache.cassandra.db.commitlog.CommitLogSegment.init(CommitLogSegment.java:119)

     ... 4 more

 Caused by: java.lang.OutOfMemoryError: Map failed

     at sun.nio.ch.FileChannelImpl.map0(Native Method)

     at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:745)



 Please tell me what is the reason for this error and how should I rectify
 it.



 Thanks and Regards

 Prakrati






Adding a new node to Cassandra cluster

2012-06-04 Thread Prakrati Agrawal
Dear all

I successfully added a new node to my cluster, so now it's a 2-node cluster. But 
how do I mention it in my Java code? When I am retrieving data, it is retrieving 
only from the one node that I am specifying as localhost. How do I specify more 
than one node instead of just localhost?

Please help me

Thanks and Regards

Prakrati Agrawal | Developer - Big Data(ID)| 9731648376 | www.mu-sigma.com





Re: Adding a new node to Cassandra cluster

2012-06-04 Thread R. Verlangen
Hi there,

When you speak to one node it will internally redirect the request to the
proper node (local / external): but you won't be able to failover on a
crash of the localhost.
For adding another node to the connection pool you should take a look at
the documentation of your java client.

Good luck!

2012/6/4 Prakrati Agrawal prakrati.agra...@mu-sigma.com

 Dear all

 I successfully added a new node to my cluster so now it's a 2 node
 cluster. But how do I mention it in my Java code as when I am retrieving
 data its retrieving only for one node that I am specifying in the
 localhost. How do I specify more than one node in the localhost.

 Please help me

 Thanks and Regards

 Prakrati Agrawal | Developer - Big Data(ID) | 9731648376 | www.mu-sigma.com




-- 
With kind regards,

Robin Verlangen
Software engineer
W www.robinverlangen.nl
E ro...@us2.nl



RE: Adding a new node to Cassandra cluster

2012-06-04 Thread Prakrati Agrawal
Hi,

I am using Thrift API and I am not able to find anything on the internet about 
how to configure it for multiple nodes. I am not using any proper client like 
Hector.

Prakrati Agrawal | Developer - Big Data(ID)| 9731648376 | www.mu-sigma.com

From: R. Verlangen [mailto:ro...@us2.nl]
Sent: Monday, June 04, 2012 2:44 PM
To: user@cassandra.apache.org
Subject: Re: Adding a new node to Cassandra cluster

Hi there,

When you speak to one node it will internally redirect the request to the 
proper node (local / external): but you won't be able to failover on a crash of 
the localhost.
For adding another node to the connection pool you should take a look at the 
documentation of your java client.

Good luck!

2012/6/4 Prakrati Agrawal prakrati.agra...@mu-sigma.com
Dear all

I successfully added a new node to my cluster so now it's a 2 node cluster. But 
how do I mention it in my Java code as when I am retrieving data its retrieving 
only for one node that I am specifying in the localhost. How do I specify more 
than one node in the localhost.

Please help me

Thanks and Regards

Prakrati Agrawal | Developer - Big Data(ID) | 9731648376 | www.mu-sigma.com



--
With kind regards,

Robin Verlangen
Software engineer

W www.robinverlangen.nl
E ro...@us2.nl


Re: Adding a new node to Cassandra cluster

2012-06-04 Thread R. Verlangen
You might consider using a higher level client (like Hector indeed). If you
don't want this you will have to write your own connection pool. For start
take a look at Hector. But keep in mind that you might be reinventing the
wheel.
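
As a rough illustration of the Hector route (class names are from Hector 1.x; the 
cluster name, keyspace name and host list below are placeholders, not anything from 
this thread):

import me.prettyprint.cassandra.service.CassandraHostConfigurator;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.factory.HFactory;

public class HectorMultiHostExample {
    public static void main(String[] args) {
        // List several nodes: Hector pools connections to all of them and
        // fails over when one of them goes down.
        CassandraHostConfigurator hosts =
                new CassandraHostConfigurator("node1:9160,node2:9160");
        hosts.setAutoDiscoverHosts(true); // optionally learn the rest of the ring

        Cluster cluster = HFactory.getOrCreateCluster("TestCluster", hosts);
        Keyspace keyspace = HFactory.createKeyspace("MyKeyspace", cluster);
        System.out.println("Connected, keyspace = " + keyspace.getKeyspaceName());
    }
}

Pointing the configurator at more than one node is what removes the single 
localhost as a point of failure.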

2012/6/4 Prakrati Agrawal prakrati.agra...@mu-sigma.com

 Hi,

 I am using Thrift API and I am not able to find anything on the internet
 about how to configure it for multiple nodes. I am not using any proper
 client like Hector.

 Prakrati Agrawal | Developer - Big Data(ID) | 9731648376 | www.mu-sigma.com

 From: R. Verlangen [mailto:ro...@us2.nl]
 Sent: Monday, June 04, 2012 2:44 PM
 To: user@cassandra.apache.org
 Subject: Re: Adding a new node to Cassandra cluster

 Hi there,

 When you speak to one node it will internally redirect the request to the
 proper node (local / external): but you won't be able to failover on a
 crash of the localhost.

 For adding another node to the connection pool you should take a look at
 the documentation of your java client.

 Good luck!

 2012/6/4 Prakrati Agrawal prakrati.agra...@mu-sigma.com

 Dear all

 I successfully added a new node to my cluster so now it's a 2 node
 cluster. But how do I mention it in my Java code as when I am retrieving
 data its retrieving only for one node that I am specifying in the
 localhost. How do I specify more than one node in the localhost.

 Please help me

 Thanks and Regards

 Prakrati Agrawal | Developer - Big Data(ID) | 9731648376 | www.mu-sigma.com


--
With kind regards,

Robin Verlangen
Software engineer

W www.robinverlangen.nl
E ro...@us2.nl


Re: Adding a new node to Cassandra cluster

2012-06-04 Thread Roshni Rajagopal
Prakrati,

I believe even though you would specify one node in your code, internally the 
request would be going to any (perhaps more than one) node, based on your 
replication factor and consistency level settings.
You can try this by connecting to one node and writing to it, and then reading 
the same data from another node. You can see this replication happening via the 
CLI as well.

Regards,
Roshni


From: R. Verlangen ro...@us2.nl
Reply-To: user@cassandra.apache.org
Date: Mon, 4 Jun 2012 02:30:40 -0700
To: user@cassandra.apache.org
Subject: Re: Adding a new node to Cassandra cluster

You might consider using a higher level client (like Hector indeed). If you 
don't want this you will have to write your own connection pool. For start take 
a look at Hector. But keep in mind that you might be reinventing the wheel.

2012/6/4 Prakrati Agrawal prakrati.agra...@mu-sigma.com
Hi,

I am using Thrift API and I am not able to find anything on the internet about 
how to configure it for multiple nodes. I am not using any proper client like 
Hector.

Prakrati Agrawal | Developer - Big Data(ID) | 9731648376 | www.mu-sigma.com

From: R. Verlangen [mailto:ro...@us2.nl]
Sent: Monday, June 04, 2012 2:44 PM
To: user@cassandra.apache.org
Subject: Re: Adding a new node to Cassandra cluster

Hi there,

When you speak to one node it will internally redirect the request to the 
proper node (local / external): but you won't be able to failover on a crash of 
the localhost.
For adding another node to the connection pool you should take a look at the 
documentation of your java client.

Good luck!

2012/6/4 Prakrati Agrawal prakrati.agra...@mu-sigma.com
Dear all

I successfully added a new node to my cluster so now it's a 2 node cluster. But 
how do I mention it in my Java code as when I am retrieving data its retrieving 
only for one node that I am specifying in the localhost. How do I specify more 
than one node in the localhost.

Please help me

Thanks and Regards

Prakrati Agrawal | Developer - Big Data(ID) | 9731648376 | www.mu-sigma.com

--
With kind regards,

Robin Verlangen
Software engineer

W www.robinverlangen.nl
E ro...@us2.nl

Re: Adding a new node to Cassandra cluster

2012-06-04 Thread samal
If you use the Thrift API, you have to maintain a lot of low-level code by
yourself which is already polished by high-level clients (HLC) like Hector and
pycassa; also, with an HLC you can easily switch between Thrift and the growing CQL.

On Mon, Jun 4, 2012 at 3:00 PM, R. Verlangen ro...@us2.nl wrote:

 You might consider using a higher level client (like Hector indeed). If
 you don't want this you will have to write your own connection pool. For
 start take a look at Hector. But keep in mind that you might be
 reinventing the wheel.

 2012/6/4 Prakrati Agrawal prakrati.agra...@mu-sigma.com

 Hi,

 I am using Thrift API and I am not able to find anything on the internet
 about how to configure it for multiple nodes. I am not using any proper
 client like Hector.

 Prakrati Agrawal | Developer - Big Data(ID) | 9731648376 | www.mu-sigma.com

 From: R. Verlangen [mailto:ro...@us2.nl]
 Sent: Monday, June 04, 2012 2:44 PM
 To: user@cassandra.apache.org
 Subject: Re: Adding a new node to Cassandra cluster

 Hi there,

 When you speak to one node it will internally redirect the request to the
 proper node (local / external): but you won't be able to failover on a
 crash of the localhost.

 For adding another node to the connection pool you should take a look at
 the documentation of your java client.

 Good luck!

 2012/6/4 Prakrati Agrawal prakrati.agra...@mu-sigma.com

 Dear all

 I successfully added a new node to my cluster so now it's a 2 node
 cluster. But how do I mention it in my Java code as when I am retrieving
 data its retrieving only for one node that I am specifying in the
 localhost. How do I specify more than one node in the localhost.

 Please help me

 Thanks and Regards

 Prakrati Agrawal | Developer - Big Data(ID) | 9731648376 | www.mu-sigma.com


--
With kind regards,

Robin Verlangen
Software engineer

W www.robinverlangen.nl
E ro...@us2.nl




RE: Adding a new node to Cassandra cluster

2012-06-04 Thread Prakrati Agrawal
Yes, I know I am trying to reinvent the wheel, but I have to. The requirement is 
such that I have to use the Java Thrift API without any client like Hector. Can you 
please tell me how to do it?

Prakrati Agrawal | Developer - Big Data(ID)| 9731648376 | www.mu-sigma.com
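
Since the requirement is the raw Thrift API, a very minimal sketch of a 
"try each host in turn" helper could look like the code below. The generated 
Cassandra Thrift classes are assumed to be on the classpath; the host names, the 
keyspace argument and port 9160 are placeholders/defaults, and there is no real 
pooling, load balancing or retry policy here:

import java.util.Arrays;
import java.util.List;

import org.apache.cassandra.thrift.Cassandra;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class SimpleThriftFailover {
    private static final List<String> HOSTS = Arrays.asList("node1", "node2");

    // Try each configured host in turn and return the first client that connects.
    public static Cassandra.Client connect(String keyspace) throws Exception {
        Exception last = null;
        for (String host : HOSTS) {
            try {
                TTransport transport = new TFramedTransport(new TSocket(host, 9160));
                transport.open();
                Cassandra.Client client =
                        new Cassandra.Client(new TBinaryProtocol(transport));
                client.set_keyspace(keyspace);
                return client;
            } catch (Exception e) {
                last = e; // this node is unreachable: fall through to the next one
            }
        }
        throw last; // every host failed
    }
}

A real pool would also have to hand connections back, close broken transports and 
balance load across nodes, which is exactly the plumbing Hector already provides.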

From: samal [mailto:samalgo...@gmail.com]
Sent: Monday, June 04, 2012 3:12 PM
To: user@cassandra.apache.org
Subject: Re: Adding a new node to Cassandra cluster

If you use the Thrift API, you have to maintain a lot of low-level code yourself
that is already polished by high-level clients (HLCs) such as Hector and pycassa;
also, with an HLC you can easily switch between Thrift and the growing CQL.
On Mon, Jun 4, 2012 at 3:00 PM, R. Verlangen ro...@us2.nl wrote:
You might consider using a higher level client (like Hector indeed). If you
don't want this you will have to write your own connection pool. For a start,
take a look at Hector. But keep in mind that you might be reinventing the wheel.
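To give a sense of what the higher-level client buys you, here is a minimal Hector
sketch that connects to several nodes at once. It is only an illustration: the class
and method names are from Hector's 1.x-era API, and the cluster name, host list and
keyspace are placeholders that have to match your own setup.

import me.prettyprint.cassandra.service.CassandraHostConfigurator;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.factory.HFactory;

public class MultiNodeHectorExample {
    public static void main(String[] args) {
        // List several nodes up front; Hector pools connections to all of them.
        CassandraHostConfigurator hosts =
                new CassandraHostConfigurator("node1:9160,node2:9160");
        hosts.setAutoDiscoverHosts(true); // also pick up nodes added later

        // "Test Cluster" and "MyKeyspace" are placeholders for your own
        // cluster name and keyspace.
        Cluster cluster = HFactory.getOrCreateCluster("Test Cluster", hosts);
        Keyspace keyspace = HFactory.createKeyspace("MyKeyspace", cluster);

        // Queries issued against 'keyspace' are spread over the pooled hosts,
        // and failover between them is handled by Hector.
        // describeClusterName() mirrors the Thrift describe_cluster_name call.
        System.out.println("Connected to: " + cluster.describeClusterName());
    }
}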

2012/6/4 Prakrati Agrawal prakrati.agra...@mu-sigma.com
Hi,

I am using the Thrift API and I am not able to find anything on the internet about
how to configure it for multiple nodes. I am not using any proper client like
Hector.

Prakrati Agrawal | Developer - Big Data(ID)| 9731648376 | www.mu-sigma.com

From: R. Verlangen [mailto:ro...@us2.nl]
Sent: Monday, June 04, 2012 2:44 PM
To: user@cassandra.apache.org
Subject: Re: Adding a new node to Cassandra cluster

Hi there,

When you speak to one node it will internally redirect the request to the
proper node (local or remote), but you won't be able to fail over if that
local node crashes.
For adding another node to the connection pool you should take a look at the
documentation of your Java client.

Good luck!

2012/6/4 Prakrati Agrawal prakrati.agra...@mu-sigma.com
Dear all

I successfully added a new node to my cluster so now it's a 2 node cluster. But
how do I mention it in my Java code? When I am retrieving data, it retrieves
only from the one node that I am specifying as localhost. How do I specify more
than one node instead of just localhost?

Please help me

Thanks and Regards

Prakrati Agrawal | Developer - Big Data(ID)| 9731648376 | www.mu-sigma.com




Re: Adding a new node to Cassandra cluster

2012-06-04 Thread R. Verlangen
Connection pooling involves things like:
- (transparent) failover / retry
- disposal of connections after X messages
- keep track of connections

Again: take a look at the hector connection pool. Source:
https://github.com/rantav/hector/tree/master/core/src/main/java/me/prettyprint/cassandra/connection
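For anyone who really must stay on the raw Thrift API, below is a very rough sketch of
the minimum such a pool has to do: keep a list of hosts instead of just localhost, and
retry the next host when a connection attempt fails. It assumes the Thrift-generated
org.apache.cassandra.thrift.Cassandra.Client from the 1.x line; the host names, port and
keyspace are placeholders, and there is no connection disposal or health checking here,
which is exactly the bookkeeping Hector already does for you.

import java.util.Arrays;
import java.util.List;

import org.apache.cassandra.thrift.Cassandra;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class NaiveThriftPool {
    // All nodes the client may talk to, instead of just "localhost".
    private final List<String> hosts = Arrays.asList("node1", "node2");
    private final int port = 9160;

    private TTransport transport;

    /** Try each host in turn and return a client for the first one that answers. */
    public Cassandra.Client openAnyNode(String keyspace) throws Exception {
        Exception last = null;
        for (String host : hosts) {
            try {
                transport = new TFramedTransport(new TSocket(host, port));
                transport.open();
                Cassandra.Client client =
                        new Cassandra.Client(new TBinaryProtocol(transport));
                client.set_keyspace(keyspace);
                return client;
            } catch (Exception e) {
                last = e; // this node is down or unreachable; try the next one
                if (transport != null && transport.isOpen()) transport.close();
            }
        }
        throw new Exception("No Cassandra node reachable", last);
    }

    public void close() {
        if (transport != null && transport.isOpen()) transport.close();
    }
}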

2012/6/4 Prakrati Agrawal prakrati.agra...@mu-sigma.com

Yes, I know I am trying to reinvent the wheel, but I have to. The requirement is
such that I have to use the Java Thrift API without any client like Hector. Can
you please tell me how to do it?


 Prakrati Agrawal | Developer - Big Data(ID)| 9731648376 |
 www.mu-sigma.com 


Adding a new node to already existing single-node-cluster cassandra

2012-03-13 Thread Rishabh Agrawal
Hello,

I have been trying to add a node to a single-node cluster of Cassandra (1.0.8)
but I always get the following error:

INFO 17:50:35,555 JOINING: schema complete, ready to bootstrap
INFO 17:50:35,556 JOINING: getting bootstrap token
ERROR 17:50:35,557 Exception encountered during startup
java.lang.RuntimeException: No other nodes seen!  Unable to bootstrap.If you 
intended to start a single-node cluster, you should make sure your 
broadcast_address (or listen_address) is listed as a seed.  Otherwise, you need 
to determine why the seed being contacted has no knowledge of the rest of the 
cluster.  Usually, this can be solved by giving all nodes the same seed list.
at 
org.apache.cassandra.dht.BootStrapper.getBootstrapSource(BootStrapper.java:168)
at 
org.apache.cassandra.dht.BootStrapper.getBalancedToken(BootStrapper.java:150)
at 
org.apache.cassandra.dht.BootStrapper.getBootstrapToken(BootStrapper.java:145)
at 
org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:565)
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:484)
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:395)
at 
org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:234)
at 
org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:356)
at 
org.apache.cassandra.thrift.CassandraDaemon.main(CassandraDaemon.java:107)
java.lang.RuntimeException: No other nodes seen!  Unable to bootstrap.If you 
intended to start a single-node cluster, you should make sure your 
broadcast_address (or listen_address) is listed as a seed.  Otherwise, you need 
to determine why the seed being contacted has no knowledge of the rest of the 
cluster.  Usually, this can be solved by giving all nodes the same seed list.
at 
org.apache.cassandra.dht.BootStrapper.getBootstrapSource(BootStrapper.java:168)
at 
org.apache.cassandra.dht.BootStrapper.getBalancedToken(BootStrapper.java:150)
at 
org.apache.cassandra.dht.BootStrapper.getBootstrapToken(BootStrapper.java:145)
at 
org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:565)
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:484)
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:395)
at 
org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:234)
at 
org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:356)
at 
org.apache.cassandra.thrift.CassandraDaemon.main(CassandraDaemon.java:107)
Exception encountered during startup: No other nodes seen!  Unable to 
bootstrap.If you intended to start a single-node cluster, you should make sure 
your broadcast_address (or listen_address) is listed as a seed.  Otherwise, you 
need to determine why the seed being contacted has no knowledge of the rest of 
the cluster.  Usually, this can be solved by giving all nodes the same seed 
list.
INFO 17:50:35,571 Waiting for messaging service to quiesce
INFO 17:50:35,571 MessagingService shutting down server thread.

Kindly help me asap.

Regards
Rishabh Agrawal





Re: Adding a new node to already existing single-node-cluster cassandra

2012-03-13 Thread aaron morton
Sounds similar to 
http://www.mail-archive.com/user@cassandra.apache.org/msg20926.html

Are you able to try adding the node again with logging set to DEBUG (in
/etc/cassandra/log4j-server.properties)? Please make sure the system
directory is empty (/var/lib/cassandra/data/system). *NOTE* do not clear this
dir if the node has already joined.

It looks like the node has not detected the cluster yet for some reason. You
can try passing the JVM option cassandra.ring_delay_ms (in cassandra-env.sh)
to override the period it waits; the default is 30000 (30 secs).

Could you add a ticket here https://issues.apache.org/jira/browse/CASSANDRA as 
well. 

Cheers

-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com




Re: Could not reach schema agreement when adding a new node.

2011-09-25 Thread aaron morton
Check the schema agreement using the CLI by running describe cluster; it will
tell you if they are in agreement.

It may have been a temporary thing while the new machine was applying its
schema.

If the nodes are not in agreement, or you want to dig deeper, look for log
messages from Migration.


Cheers


-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
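
The same check can also be done programmatically: the Thrift interface exposes
describe_schema_versions(), which reports, for each schema version, the endpoints that
hold it; more than one live version means the nodes disagree. A minimal sketch follows;
the host and port are placeholders, and the special key used for nodes that could not be
contacted is assumed to be the literal string UNREACHABLE.

import java.util.List;
import java.util.Map;

import org.apache.cassandra.thrift.Cassandra;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;

public class SchemaAgreementCheck {
    public static void main(String[] args) throws Exception {
        TFramedTransport transport = new TFramedTransport(new TSocket("127.0.0.1", 9160));
        transport.open();
        Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(transport));

        // Map of schema version -> endpoints reporting that version.
        Map<String, List<String>> versions = client.describe_schema_versions();

        int liveVersions = 0;
        for (Map.Entry<String, List<String>> entry : versions.entrySet()) {
            // "UNREACHABLE" (assumed key name) groups nodes that did not respond.
            if (!"UNREACHABLE".equals(entry.getKey())) liveVersions++;
            System.out.println(entry.getKey() + " -> " + entry.getValue());
        }
        System.out.println(liveVersions <= 1 ? "Schema is in agreement"
                                             : "Schema disagreement across nodes");
        transport.close();
    }
}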

 



Could not reach schema agreement when adding a new node.

2011-09-24 Thread Dikang Gu
I found this in the system.log when adding a new node to the cluster.

Anyone familiar with this?

ERROR [HintedHandoff:2] 2011-09-24 18:01:30,498 AbstractCassandraDaemon.java
(line 113) Fatal exception in thread Thread[HintedHandoff:2,1,main]
java.lang.RuntimeException: java.lang.RuntimeException: Could not reach
schema agreement with /192.168.1.9 in 6ms
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:34)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
Caused by: java.lang.RuntimeException: Could not reach schema agreement with
/192.168.1.9 in 6ms
at
org.apache.cassandra.db.HintedHandOffManager.waitForSchemaAgreement(HintedHandOffManager.java:290)
at
org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:301)
at
org.apache.cassandra.db.HintedHandOffManager.access$100(HintedHandOffManager.java:89)
at
org.apache.cassandra.db.HintedHandOffManager$2.runMayThrow(HintedHandOffManager.java:394)
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
... 3 more

Thanks.

-- 
Dikang Gu

0086 - 18611140205


Re: Adding a new node

2011-05-09 Thread Venkat Rama
Thanks for the pointer. I restarted the entire cluster and started the nodes at
the same time. However, I still see the issue. The view is not consistent. I am
running 0.7.5.
In general, if a node with a bad ring view starts first, then I guess the
restart also doesn't help, as it might be propagating its view. Is this
assumption correct?









Re: Adding a new node

2011-05-09 Thread aaron morton
Gossip should help them converge on the truth. 

Can you give an example of the different views from nodetool ring?

Also check the logs to see if anything has been logged about endpoints.

Hope that helps. 
 
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
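
If collecting nodetool ring output from every machine is awkward, each node's own view of
the ring can also be pulled over Thrift with describe_ring(keyspace) and compared side by
side. A rough sketch, with the host list and keyspace as placeholders:

import java.util.Arrays;
import java.util.List;

import org.apache.cassandra.thrift.Cassandra;
import org.apache.cassandra.thrift.TokenRange;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;

public class RingViewCompare {
    public static void main(String[] args) throws Exception {
        List<String> hosts = Arrays.asList("10.0.0.1", "10.0.0.2", "10.0.0.3");
        String keyspace = "MyKeyspace";

        for (String host : hosts) {
            TFramedTransport transport = new TFramedTransport(new TSocket(host, 9160));
            transport.open();
            Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(transport));

            // Each node answers with the ring as *it* sees it; if gossip has not
            // converged, the endpoint lists will differ from host to host.
            System.out.println("Ring according to " + host + ":");
            for (TokenRange range : client.describe_ring(keyspace)) {
                System.out.println("  " + range.start_token + " .. " + range.end_token
                        + " -> " + range.endpoints);
            }
            transport.close();
        }
    }
}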

 
 
 
 



Adding a new node

2011-05-08 Thread Venkat Rama
Hi,

I am trying to bring up a new node (with a different IP) to replace a dead
node on Cassandra 0.7.5. Rather than bootstrap, I am copying the SSTable
files (backed-up files) to the new node, as my data runs into several GB.
Although the node successfully joins the ring, some of the ring nodes still
seem to point to the old dead node, as seen from the ring command. Is there a
way to notify all nodes about the new node? I am looking for options that can
bring the cluster back to its original state in a faster and more reliable
manner, since I do have all the SSTable files.
One option I looked at was to remove all the system tables and restart the
entire cluster. But I lose the schemas with this approach.

Thanks in advance for your reply.

VR


Re: Adding a new node

2011-05-08 Thread aaron morton
It is possible to change the IP address of a node; for background see
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/change-node-IP-address-td6197607.html
 

If you have already brought a new node back with a different IP and the nodes in 
the cluster have different views of the ring (nodetool ring) you should see 
http://www.datastax.com/docs/0.7/troubleshooting/index#view-of-ring-differs-between-some-nodes
 

What version are you on and what does nodetool ring say?

Hope that helps.

-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com

 VR