Re: "Not enough replicas available for query" after reboot

2016-02-04 Thread Bryan Cheng
Hey Flavien!

Did your reboot come with any other changes (schema, configuration,
topology, version)?

On Thu, Feb 4, 2016 at 2:06 PM, Flavien Charlon <flavien.char...@gmail.com>
wrote:

> I'm using the C# driver 2.5.2. I did try to restart the client
> application, but that didn't make any difference, I still get the same
> error after restart.
>
> On 4 February 2016 at 21:54, <sean_r_dur...@homedepot.com> wrote:
>
>> What client are you using?
>>
>>
>>
>> It is possible that the client saw nodes down and has kept them marked
>> that way (without retrying). Depending on the client, you may have options
>> to set in RetryPolicy, FailoverPolicy, etc. A bounce of the client will
>> probably fix the problem for now.
>>
>>
>>
>>
>>
>> Sean Durity
>>
>>
>>
>> *From:* Flavien Charlon [mailto:flavien.char...@gmail.com]
>> *Sent:* Thursday, February 04, 2016 4:06 PM
>> *To:* user@cassandra.apache.org
>> *Subject:* Re: "Not enough replicas available for query" after reboot
>>
>>
>>
>> Yes, all three nodes see all three nodes as UN.
>>
>>
>>
>> Also, connecting from a local Cassandra machine using cqlsh, I can run
>> the same query just fine (with QUORUM consistency level).
>>
>>
>>
>> On 4 February 2016 at 21:02, Robert Coli <rc...@eventbrite.com> wrote:
>>
>> On Thu, Feb 4, 2016 at 12:53 PM, Flavien Charlon <
>> flavien.char...@gmail.com> wrote:
>>
>> My cluster was running fine. I rebooted all three nodes (one by one), and
>> now all nodes are back up and running. "nodetool status" shows UP for all
>> three nodes on all three nodes:
>>
>>
>>
>> --  Address      Load       Tokens  Owns  Host ID                               Rack
>> UN  xx.xx.xx.xx  331.84 GB  1       ?     d3d3a79b-9ca5-43f9-88c4-c3c7f08ca538  RAC1
>> UN  xx.xx.xx.xx  317.2 GB   1       ?     de7917ed-0de9-434d-be88-bc91eb4f8713  RAC1
>> UN  xx.xx.xx.xx  291.61 GB  1       ?     b489c970-68db-44a7-90c6-be734b41475f  RAC1
>>
>>
>>
>> However, now the client application fails to run queries on the cluster
>> with:
>>
>>
>>
>> Cassandra.UnavailableException: Not enough replicas available for query
>> at consistency Quorum (2 required but only 1 alive)
>>
>>
>>
>> Do *all* nodes see each other as UP/UN?
>>
>>
>>
>> =Rob
>>
>>
>>
>>
>>
>
>


Re: "Not enough replicas available for query" after reboot

2016-02-04 Thread Robert Coli
On Thu, Feb 4, 2016 at 12:53 PM, Flavien Charlon <flavien.char...@gmail.com>
wrote:

> My cluster was running fine. I rebooted all three nodes (one by one), and
> now all nodes are back up and running. "nodetool status" shows UP for all
> three nodes on all three nodes:
>
> --  Address      Load       Tokens  Owns  Host ID                               Rack
> UN  xx.xx.xx.xx  331.84 GB  1       ?     d3d3a79b-9ca5-43f9-88c4-c3c7f08ca538  RAC1
> UN  xx.xx.xx.xx  317.2 GB   1       ?     de7917ed-0de9-434d-be88-bc91eb4f8713  RAC1
> UN  xx.xx.xx.xx  291.61 GB  1       ?     b489c970-68db-44a7-90c6-be734b41475f  RAC1
>
> However, now the client application fails to run queries on the cluster
> with:
>
> Cassandra.UnavailableException: Not enough replicas available for query at
>> consistency Quorum (2 required but only 1 alive)
>
>
Do *all* nodes see each other as UP/UN?

=Rob


RE: "Not enough replicas available for query" after reboot

2016-02-04 Thread SEAN_R_DURITY
What client are you using?

It is possible that the client saw nodes down and has kept them marked that way 
(without retrying). Depending on the client, you may have options to set in 
RetryPolicy, FailoverPolicy, etc. A bounce of the client will probably fix the 
problem for now.
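
(For the C# driver Flavien mentions, the policy hooks Sean refers to are set on the cluster builder. The following is only a minimal sketch against the DataStax C# driver 2.x, not from the thread; the contact point and keyspace are placeholders, and DowngradingConsistencyRetryPolicy is just one example policy, with its own consistency trade-offs.)

using Cassandra;

// Build a cluster with an explicit retry policy instead of the driver default.
var cluster = Cluster.Builder()
    .AddContactPoint("10.0.0.1")                                  // placeholder contact point
    .WithRetryPolicy(DowngradingConsistencyRetryPolicy.Instance)  // may retry at a lower CL on Unavailable
    .Build();
var session = cluster.Connect("my_keyspace");                     // placeholder keyspace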


Sean Durity

From: Flavien Charlon [mailto:flavien.char...@gmail.com]
Sent: Thursday, February 04, 2016 4:06 PM
To: user@cassandra.apache.org
Subject: Re: "Not enough replicas available for query" after reboot

Yes, all three nodes see all three nodes as UN.

Also, connecting from a local Cassandra machine using cqlsh, I can run the same 
query just fine (with QUORUM consistency level).

On 4 February 2016 at 21:02, Robert Coli <rc...@eventbrite.com> wrote:
On Thu, Feb 4, 2016 at 12:53 PM, Flavien Charlon <flavien.char...@gmail.com> wrote:
My cluster was running fine. I rebooted all three nodes (one by one), and now 
all nodes are back up and running. "nodetool status" shows UP for all three 
nodes on all three nodes:

--  Address      Load       Tokens  Owns  Host ID                               Rack
UN  xx.xx.xx.xx  331.84 GB  1       ?     d3d3a79b-9ca5-43f9-88c4-c3c7f08ca538  RAC1
UN  xx.xx.xx.xx  317.2 GB   1       ?     de7917ed-0de9-434d-be88-bc91eb4f8713  RAC1
UN  xx.xx.xx.xx  291.61 GB  1       ?     b489c970-68db-44a7-90c6-be734b41475f  RAC1

However, now the client application fails to run queries on the cluster with:

Cassandra.UnavailableException: Not enough replicas available for query at 
consistency Quorum (2 required but only 1 alive)

Do *all* nodes see each other as UP/UN?

=Rob







Re: "Not enough replicas available for query" after reboot

2016-02-04 Thread Flavien Charlon
No, there was no other change. I did run "apt-get upgrade" before
rebooting, but Cassandra has not been upgraded.

On 4 February 2016 at 22:48, Bryan Cheng <br...@blockcypher.com> wrote:

> Hey Flavien!
>
> Did your reboot come with any other changes (schema, configuration,
> topology, version)?
>
> On Thu, Feb 4, 2016 at 2:06 PM, Flavien Charlon <flavien.char...@gmail.com
> > wrote:
>
>> I'm using the C# driver 2.5.2. I did try to restart the client
>> application, but that didn't make any difference, I still get the same
>> error after restart.
>>
>> On 4 February 2016 at 21:54, <sean_r_dur...@homedepot.com> wrote:
>>
>>> What client are you using?
>>>
>>>
>>>
>>> It is possible that the client saw nodes down and has kept them marked
>>> that way (without retrying). Depending on the client, you may have options
>>> to set in RetryPolicy, FailoverPolicy, etc. A bounce of the client will
>>> probably fix the problem for now.
>>>
>>>
>>>
>>>
>>>
>>> Sean Durity
>>>
>>>
>>>
>>> *From:* Flavien Charlon [mailto:flavien.char...@gmail.com]
>>> *Sent:* Thursday, February 04, 2016 4:06 PM
>>> *To:* user@cassandra.apache.org
>>> *Subject:* Re: "Not enough replicas available for query" after reboot
>>>
>>>
>>>
>>> Yes, all three nodes see all three nodes as UN.
>>>
>>>
>>>
>>> Also, connecting from a local Cassandra machine using cqlsh, I can run
>>> the same query just fine (with QUORUM consistency level).
>>>
>>>
>>>
>>> On 4 February 2016 at 21:02, Robert Coli <rc...@eventbrite.com> wrote:
>>>
>>> On Thu, Feb 4, 2016 at 12:53 PM, Flavien Charlon <
>>> flavien.char...@gmail.com> wrote:
>>>
>>> My cluster was running fine. I rebooted all three nodes (one by one),
>>> and now all nodes are back up and running. "nodetool status" shows UP for
>>> all three nodes on all three nodes:
>>>
>>>
>>>
>>> --  Address      Load       Tokens  Owns  Host ID                               Rack
>>> UN  xx.xx.xx.xx  331.84 GB  1       ?     d3d3a79b-9ca5-43f9-88c4-c3c7f08ca538  RAC1
>>> UN  xx.xx.xx.xx  317.2 GB   1       ?     de7917ed-0de9-434d-be88-bc91eb4f8713  RAC1
>>> UN  xx.xx.xx.xx  291.61 GB  1       ?     b489c970-68db-44a7-90c6-be734b41475f  RAC1
>>>
>>>
>>>
>>> However, now the client application fails to run queries on the cluster
>>> with:
>>>
>>>
>>>
>>> Cassandra.UnavailableException: Not enough replicas available for query
>>> at consistency Quorum (2 required but only 1 alive)
>>>
>>>
>>>
>>> Do *all* nodes see each other as UP/UN?
>>>
>>>
>>>
>>> =Rob
>>>
>>>
>>>
>>>
>>>
>>
>>
>


Re: "Not enough replicas available for query" after reboot

2016-02-04 Thread Flavien Charlon
Yes, all three nodes see all three nodes as UN.

Also, connecting from a local Cassandra machine using cqlsh, I can run the
same query just fine (with QUORUM consistency level).
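
(For reference, the equivalent cqlsh check is roughly the following; the keyspace and table names are placeholders, not from the thread.)

cqlsh> CONSISTENCY QUORUM;
cqlsh> SELECT * FROM my_keyspace.my_table LIMIT 1;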

On 4 February 2016 at 21:02, Robert Coli <rc...@eventbrite.com> wrote:

> On Thu, Feb 4, 2016 at 12:53 PM, Flavien Charlon <
> flavien.char...@gmail.com> wrote:
>
>> My cluster was running fine. I rebooted all three nodes (one by one), and
>> now all nodes are back up and running. "nodetool status" shows UP for all
>> three nodes on all three nodes:
>>
>> --  Address      Load       Tokens  Owns  Host ID                               Rack
>> UN  xx.xx.xx.xx  331.84 GB  1       ?     d3d3a79b-9ca5-43f9-88c4-c3c7f08ca538  RAC1
>> UN  xx.xx.xx.xx  317.2 GB   1       ?     de7917ed-0de9-434d-be88-bc91eb4f8713  RAC1
>> UN  xx.xx.xx.xx  291.61 GB  1       ?     b489c970-68db-44a7-90c6-be734b41475f  RAC1
>>
>> However, now the client application fails to run queries on the cluster
>> with:
>>
>> Cassandra.UnavailableException: Not enough replicas available for query
>>> at consistency Quorum (2 required but only 1 alive)
>>
>>
> Do *all* nodes see each other as UP/UN?
>
> =Rob
>
>


Re: "Not enough replicas available for query" after reboot

2016-02-04 Thread Peddi, Praveen
Are you able to run queries using cqlsh with consistency ALL?

On Feb 4, 2016, at 6:32 PM, Flavien Charlon <flavien.char...@gmail.com> wrote:

No, there was no other change. I did run "apt-get upgrade" before rebooting, 
but Cassandra has not been upgraded.

On 4 February 2016 at 22:48, Bryan Cheng <br...@blockcypher.com> wrote:
Hey Flavien!

Did your reboot come with any other changes (schema, configuration, topology, 
version)?

On Thu, Feb 4, 2016 at 2:06 PM, Flavien Charlon <flavien.char...@gmail.com> wrote:
I'm using the C# driver 2.5.2. I did try to restart the client application, but 
that didn't make any difference, I still get the same error after restart.

On 4 February 2016 at 21:54, <sean_r_dur...@homedepot.com> wrote:
What client are you using?

It is possible that the client saw nodes down and has kept them marked that way 
(without retrying). Depending on the client, you may have options to set in 
RetryPolicy, FailoverPolicy, etc. A bounce of the client will probably fix the 
problem for now.


Sean Durity

From: Flavien Charlon [mailto:flavien.char...@gmail.com]
Sent: Thursday, February 04, 2016 4:06 PM
To: user@cassandra.apache.org
Subject: Re: "Not enough replicas available for query" after reboot

Yes, all three nodes see all three nodes as UN.

Also, connecting from a local Cassandra machine using cqlsh, I can run the same 
query just fine (with QUORUM consistency level).

On 4 February 2016 at 21:02, Robert Coli <rc...@eventbrite.com> wrote:
On Thu, Feb 4, 2016 at 12:53 PM, Flavien Charlon <flavien.char...@gmail.com> wrote:
My cluster was running fine. I rebooted all three nodes (one by one), and now 
all nodes are back up and running. "nodetool status" shows UP for all three 
nodes on all three nodes:

--  Address      Load       Tokens  Owns  Host ID                               Rack
UN  xx.xx.xx.xx  331.84 GB  1       ?     d3d3a79b-9ca5-43f9-88c4-c3c7f08ca538  RAC1
UN  xx.xx.xx.xx  317.2 GB   1       ?     de7917ed-0de9-434d-be88-bc91eb4f8713  RAC1
UN  xx.xx.xx.xx  291.61 GB  1       ?     b489c970-68db-44a7-90c6-be734b41475f  RAC1

However, now the client application fails to run queries on the cluster with:

Cassandra.UnavailableException: Not enough replicas available for query at 
consistency Quorum (2 required but only 1 alive)

Do *all* nodes see each other as UP/UN?

=Rob










Re: "Not enough replicas available for query" after reboot

2016-02-04 Thread Flavien Charlon
Yes, that works with consistency ALL.

I restarted one of the Cassandra instances, and it seems to be working again
now. I'm not sure what happened.

On 4 February 2016 at 23:48, Peddi, Praveen <pe...@amazon.com> wrote:

> Are you able to run queries using cqlsh with consistency ALL?
>
> On Feb 4, 2016, at 6:32 PM, Flavien Charlon <flavien.char...@gmail.com>
> wrote:
>
> No, there was no other change. I did run "apt-get upgrade" before
> rebooting, but Cassandra has not been upgraded.
>
> On 4 February 2016 at 22:48, Bryan Cheng <br...@blockcypher.com> wrote:
>
>> Hey Flavien!
>>
>> Did your reboot come with any other changes (schema, configuration,
>> topology, version)?
>>
>> On Thu, Feb 4, 2016 at 2:06 PM, Flavien Charlon <
>> flavien.char...@gmail.com> wrote:
>>
>>> I'm using the C# driver 2.5.2. I did try to restart the client
>>> application, but that didn't make any difference, I still get the same
>>> error after restart.
>>>
>>> On 4 February 2016 at 21:54, <sean_r_dur...@homedepot.com> wrote:
>>>
>>>> What client are you using?
>>>>
>>>>
>>>>
>>>> It is possible that the client saw nodes down and has kept them marked
>>>> that way (without retrying). Depending on the client, you may have options
>>>> to set in RetryPolicy, FailoverPolicy, etc. A bounce of the client will
>>>> probably fix the problem for now.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> Sean Durity
>>>>
>>>>
>>>>
>>>> *From:* Flavien Charlon [mailto:flavien.char...@gmail.com]
>>>> *Sent:* Thursday, February 04, 2016 4:06 PM
>>>> *To:* user@cassandra.apache.org
>>>> *Subject:* Re: "Not enough replicas available for query" after reboot
>>>>
>>>>
>>>>
>>>> Yes, all three nodes see all three nodes as UN.
>>>>
>>>>
>>>>
>>>> Also, connecting from a local Cassandra machine using cqlsh, I can run
>>>> the same query just fine (with QUORUM consistency level).
>>>>
>>>>
>>>>
>>>> On 4 February 2016 at 21:02, Robert Coli <rc...@eventbrite.com> wrote:
>>>>
>>>> On Thu, Feb 4, 2016 at 12:53 PM, Flavien Charlon <
>>>> flavien.char...@gmail.com> wrote:
>>>>
>>>> My cluster was running fine. I rebooted all three nodes (one by one),
>>>> and now all nodes are back up and running. "nodetool status" shows UP for
>>>> all three nodes on all three nodes:
>>>>
>>>>
>>>>
>>>> --  Address      Load       Tokens  Owns  Host ID                               Rack
>>>> UN  xx.xx.xx.xx  331.84 GB  1       ?     d3d3a79b-9ca5-43f9-88c4-c3c7f08ca538  RAC1
>>>> UN  xx.xx.xx.xx  317.2 GB   1       ?     de7917ed-0de9-434d-be88-bc91eb4f8713  RAC1
>>>> UN  xx.xx.xx.xx  291.61 GB  1       ?     b489c970-68db-44a7-90c6-be734b41475f  RAC1
>>>>
>>>>
>>>>
>>>> However, now the client application fails to run queries on the cluster
>>>> with:
>>>>
>>>>
>>>>
>>>> Cassandra.UnavailableException: Not enough replicas available for query
>>>> at consistency Quorum (2 required but only 1 alive)
>>>>
>>>>
>>>>
>>>> Do *all* nodes see each other as UP/UN?
>>>>
>>>>
>>>>
>>>> =Rob
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>
>


"Not enough replicas available for query" after reboot

2016-02-04 Thread Flavien Charlon
Hi,

My cluster was running fine. I rebooted all three nodes (one by one), and
now all nodes are back up and running. "nodetool status" shows UP for all
three nodes on all three nodes:

--  Address      Load       Tokens  Owns  Host ID                               Rack
UN  xx.xx.xx.xx  331.84 GB  1       ?     d3d3a79b-9ca5-43f9-88c4-c3c7f08ca538  RAC1
UN  xx.xx.xx.xx  317.2 GB   1       ?     de7917ed-0de9-434d-be88-bc91eb4f8713  RAC1
UN  xx.xx.xx.xx  291.61 GB  1       ?     b489c970-68db-44a7-90c6-be734b41475f  RAC1

However, now the client application fails to run queries on the cluster
with:

Cassandra.UnavailableException: Not enough replicas available for query at
> consistency Quorum (2 required but only 1 alive)


The replication factor is 3. I am running Cassandra 2.1.7.

Any idea where that could come from or how to troubleshoot this further?

Best,
Flavien
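
(A few checks that help narrow this kind of problem down, using standard Cassandra 2.1 tooling; run them on every node and compare the output. The keyspace name is a placeholder.)

nodetool status my_keyspace    # UN/DN state and effective ownership as *this* node sees it
nodetool describecluster       # schema versions and partitioner should match on every node
nodetool gossipinfo            # per-endpoint gossip state (STATUS, SCHEMA, RELEASE_VERSION)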


decommission of one EC2 node in cluster causes other nodes to go DOWN/UP and results in May not be enough replicas...

2013-10-21 Thread John Pyeatt
We have a 6 node cassandra 1.2.10 cluster running on aws with
NetworkTopologyStrategy, a replication factor of 3 and the EC2Snitch. Each
AWS availability zone has 2 nodes in it.

When we are reading or writing data with consistency of Quorum to the
cluster while decommissioning a node, we are getting 'May not be enough
replicas present to handle consistency level.'

This doesn't make sense: we are only taking one node down, and we have
an RF of three, so even with one node down a quorum read/write should
still find enough nodes holding the data (2).

Looking at the cassandra log on a server that we are not decommissioning we
are seeing this during the decommission of the other node.

 INFO [GossipTasks:1] 2013-10-21 15:18:10,695 Gossiper.java (line 803)
InetAddress /10.0.22.142 *is now DOWN*
 INFO [GossipTasks:1] 2013-10-21 15:18:10,696 Gossiper.java (line 803)
InetAddress /10.0.32.159 *is now DOWN*
 INFO [HANDSHAKE-/10.0.22.142] 2013-10-21 15:18:10,862
OutboundTcpConnection.java (line 399) Handshaking version with /10.0.22.142
 INFO [GossipTasks:1] 2013-10-21 15:18:11,696 Gossiper.java (line 803)
InetAddress /10.0.12.178* is now DOWN*
 INFO [GossipTasks:1] 2013-10-21 15:18:11,697 Gossiper.java (line 803)
InetAddress /10.0.22.106* is now DOWN*
 INFO [GossipTasks:1] 2013-10-21 15:18:11,698 Gossiper.java (line 803)
InetAddress /10.0.32.248 *is now DOWN*

Eventually we are seeing a message that looks like this.
 INFO [GossipStage:3] 2013-10-21 15:18:19,429 Gossiper.java (line 789)
InetAddress /10.0.32.248 is now UP

for each of the nodes. So eventually the remaining nodes in the cluster
come back to life.

While these nodes are down I can see why we get the "May not be enough
replicas..." message: everything is down.

My question is: *why does gossip shut down for these nodes that we aren't
decommissioning in the first place*?

-- 
John Pyeatt
Singlewire Software, LLC
www.singlewire.com
--
608.661.1184
john.pye...@singlewire.com


RE: Not enough replicas???

2013-02-04 Thread Stephen.M.Thompson
Hi Edward - thanks for responding.   The keyspace could not have been created 
more simply:



create keyspace KEYSPACE_NAME;



According to the help, this should have created a replication factor of 1:



Keyspace Attributes (all are optional):

- placement_strategy: Class used to determine how replicas

  are distributed among nodes. Defaults to NetworkTopologyStrategy with

  one datacenter defined with a replication factor of 1 ([datacenter1:1]).



Steve



-Original Message-
From: Edward Capriolo [mailto:edlinuxg...@gmail.com]
Sent: Friday, February 01, 2013 5:49 PM
To: user@cassandra.apache.org
Subject: Re: Not enough replicas???



Please include the information on how your keyspace was created. This may 
indicate you set the replication factor to 3, when you only have 1 node, or 
some similar condition.



On Fri, Feb 1, 2013 at 4:57 PM, stephen.m.thomp...@wellsfargo.com wrote:

 I need to offer my profound thanks to this community which has been so

 helpful in trying to figure this system out.







 I've setup a simple ring with two nodes and I'm trying to insert data

 to them.  I get failures 100% with this error:







 me.prettyprint.hector.api.exceptions.HUnavailableException: : May not

 be enough replicas present to handle consistency level.







 I'm not doing anything fancy - this is just from setting up the

 cluster following the basic instructions from datastax for a simple

 one data center cluster.  My config is basically the default except

 for the changes they discuss (except that I have configured for my IP

 addresses... my two boxes are

 .126 and .127)







 cluster_name: 'MyDemoCluster'



 num_tokens: 256



 seed_provider:



   - class_name: org.apache.cassandra.locator.SimpleSeedProvider



 parameters:



  - seeds: 10.28.205.126



 listen_address: 10.28.205.126



 rpc_address: 0.0.0.0



 endpoint_snitch: RackInferringSnitch







 Nodetool shows both nodes active in the ring, status = up, state = normal.







 For the CF:







ColumnFamily: SystemEvent



  Key Validation Class: org.apache.cassandra.db.marshal.UTF8Type



  Default column value validator:

 org.apache.cassandra.db.marshal.UTF8Type



  Columns sorted by: org.apache.cassandra.db.marshal.UTF8Type



  GC grace seconds: 864000



  Compaction min/max thresholds: 4/32



  Read repair chance: 0.1



  DC Local Read repair chance: 0.0



  Replicate on write: true



  Caching: KEYS_ONLY



  Bloom Filter FP chance: default



  Built indexes: [SystemEvent.IdxName]



  Column Metadata:



Column Name: eventTimeStamp



  Validation Class: org.apache.cassandra.db.marshal.DateType



Column Name: name



  Validation Class: org.apache.cassandra.db.marshal.UTF8Type



  Index Name: IdxName



  Index Type: KEYS



  Compaction Strategy:

 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy



  Compression Options:



sstable_compression:

 org.apache.cassandra.io.compress.SnappyCompressor







 Any ideas?


Re: Not enough replicas???

2013-02-04 Thread Tyler Hobbs
RackInferringSnitch determines each node's DC and rack by looking at the
second and third octets in its IP address
(http://www.datastax.com/docs/1.0/cluster_architecture/replication#rackinferringsnitch),
so your nodes are in DC "28".

Your replication strategy says to put one replica in DC "datacenter1", but
doesn't mention DC "28" at all, so you don't have any replicas for your
keyspace.
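
(Applying that rule to the addresses used in this thread:)

10.28.205.126  ->  DC "28", rack "205"
10.28.205.127  ->  DC "28", rack "205"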


On Mon, Feb 4, 2013 at 7:55 AM, stephen.m.thomp...@wellsfargo.com wrote:

 Hi Edward - thanks for responding.   The keyspace could not have been
 created more simply:

 ** **

 create keyspace KEYSPACE_NAME;

 ** **

 According to the help, this should have created a replication factor of 1:
 

 ** **

 Keyspace Attributes (all are optional):

 - placement_strategy: Class used to determine how replicas

   are distributed among nodes. Defaults to NetworkTopologyStrategy with***
 *

   one datacenter defined with a replication factor of 1
 ([datacenter1:1]).

 ** **

 Steve

 ** **

 -Original Message-
 From: Edward Capriolo [mailto:edlinuxg...@gmail.com]
 Sent: Friday, February 01, 2013 5:49 PM
 To: user@cassandra.apache.org
 Subject: Re: Not enough replicas???

 ** **

 Please include the information on how your keyspace was created. This may
 indicate you set the replication factor to 3, when you only have 1 node, or
 some similar condition.

 ** **

 On Fri, Feb 1, 2013 at 4:57 PM,  stephen.m.thomp...@wellsfargo.com
 wrote:

  I need to offer my profound thanks to this community which has been so *
 ***

  helpful in trying to figure this system out.

 ** **

 ** **

 ** **

  I’ve setup a simple ring with two nodes and I’m trying to insert data **
 **

  to them.  I get failures 100% with this error:

 ** **

 ** **

 ** **

  me.prettyprint.hector.api.exceptions.HUnavailableException: : May not **
 **

  be enough replicas present to handle consistency level.

 ** **

 ** **

 ** **

  I’m not doing anything fancy – this is just from setting up the 

  cluster following the basic instructions from datastax for a simple 

  one data center cluster.  My config is basically the default except 

  for the changes they discuss (except that I have configured for my IP **
 **

  addresses… my two boxes are

  .126 and .127)

 ** **

 ** **

 ** **

  cluster_name: 'MyDemoCluster'

 ** **

  num_tokens: 256

 ** **

  seed_provider:

 ** **

- class_name: org.apache.cassandra.locator.SimpleSeedProvider

 ** **

  parameters:

 ** **

   - seeds: 10.28.205.126

 ** **

  listen_address: 10.28.205.126

 ** **

  rpc_address: 0.0.0.0

 ** **

  endpoint_snitch: RackInferringSnitch

 ** **

 ** **

 ** **

  Nodetool shows both nodes active in the ring, status = up, state =
 normal.

 ** **

 ** **

 ** **

  For the CF:

 ** **

 ** **

 ** **

 ColumnFamily: SystemEvent

 ** **

   Key Validation Class: org.apache.cassandra.db.marshal.UTF8Type

 ** **

   Default column value validator:

  org.apache.cassandra.db.marshal.UTF8Type

 ** **

   Columns sorted by: org.apache.cassandra.db.marshal.UTF8Type

 ** **

   GC grace seconds: 864000

 ** **

   Compaction min/max thresholds: 4/32

 ** **

   Read repair chance: 0.1

 ** **

   DC Local Read repair chance: 0.0

 ** **

   Replicate on write: true

 ** **

   Caching: KEYS_ONLY

 ** **

   Bloom Filter FP chance: default

 ** **

   Built indexes: [SystemEvent.IdxName]

 ** **

   Column Metadata:

 ** **

 Column Name: eventTimeStamp

 ** **

   Validation Class: org.apache.cassandra.db.marshal.DateType

 ** **

 Column Name: name

 ** **

   Validation Class: org.apache.cassandra.db.marshal.UTF8Type

 ** **

   Index Name: IdxName

 ** **

   Index Type: KEYS

 ** **

   Compaction Strategy:

  org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy

 ** **

   Compression Options:

 ** **

 sstable_compression:

  org.apache.cassandra.io.compress.SnappyCompressor

 ** **

 ** **

 ** **

  Any ideas?




-- 
Tyler Hobbs
DataStax http://datastax.com/


RE: Not enough replicas???

2013-02-04 Thread Stephen.M.Thompson
Thanks Tyler ... so I created my keyspace to explicitly indicate the datacenter 
and replication, as follows:

create keyspace KEYSPACE_NAME
  with placement_strategy = 
'org.apache.cassandra.locator.NetworkTopologyStrategy'
  and strategy_options={DC28:2};

And yet I still get the exact same error message:

me.prettyprint.hector.api.exceptions.HUnavailableException: : May not be enough 
replicas present to handle consistency level.

It certainly is showing that it took my change:

[default@KEYSPACE_NAME] describe;
Keyspace: KEYSPACE_NAME:
  Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
  Durable Writes: true
Options: [DC28:2]

Looking at the ring 

[root@Config3482VM1 apache-cassandra-1.2.0]# bin/nodetool -h localhost ring

Datacenter: 28
==
Replicas: 0

Address         Rack  Status  State   Load      Owns    Token
                                                        9187343239835811839
10.28.205.126   205   Up      Normal  95.89 KB  0.00%   -9187343239835811840
10.28.205.126   205   Up      Normal  95.89 KB  0.00%   -9151314442816847872
10.28.205.126   205   Up      Normal  95.89 KB  0.00%   -9115285645797883904

( HUGE SNIP )

10.28.205.127   205   Up      Normal  84.63 KB  0.00%   9115285645797883903
10.28.205.127   205   Up      Normal  84.63 KB  0.00%   9151314442816847871
10.28.205.127   205   Up      Normal  84.63 KB  0.00%   9187343239835811839

So both boxes are showing up in the ring.

Thank you guys SO MUCH for helping me figure this stuff out.


From: Tyler Hobbs [mailto:ty...@datastax.com]
Sent: Monday, February 04, 2013 11:17 AM
To: user@cassandra.apache.org
Subject: Re: Not enough replicas???

RackInferringSnitch determines each node's DC and rack by looking at the second 
and third octets in its IP address 
(http://www.datastax.com/docs/1.0/cluster_architecture/replication#rackinferringsnitch),
 so your nodes are in DC 28.

Your replication strategy says to put one replica in DC datacenter1, but 
doesn't mention DC 28 at all, so you don't have any replicas for your 
keyspace.

On Mon, Feb 4, 2013 at 7:55 AM, stephen.m.thomp...@wellsfargo.com wrote:

Hi Edward - thanks for responding.   The keyspace could not have been created 
more simply:



create keyspace KEYSPACE_NAME;



According to the help, this should have created a replication factor of 1:



Keyspace Attributes (all are optional):

- placement_strategy: Class used to determine how replicas

  are distributed among nodes. Defaults to NetworkTopologyStrategy with

  one datacenter defined with a replication factor of 1 ([datacenter1:1]).



Steve



-Original Message-
From: Edward Capriolo [mailto:edlinuxg...@gmail.com]
Sent: Friday, February 01, 2013 5:49 PM
To: user@cassandra.apache.org
Subject: Re: Not enough replicas???



Please include the information on how your keyspace was created. This may 
indicate you set the replication factor to 3, when you only have 1 node, or 
some similar condition.



On Fri, Feb 1, 2013 at 4:57 PM, stephen.m.thomp...@wellsfargo.com wrote:

 I need to offer my profound thanks to this community which has been so

 helpful in trying to figure this system out.







 I've setup a simple ring with two nodes and I'm trying to insert data

 to them.  I get failures 100% with this error:







 me.prettyprint.hector.api.exceptions.HUnavailableException: : May not

 be enough replicas present to handle consistency level.







 I'm not doing anything fancy - this is just from setting up the

 cluster following the basic instructions from datastax for a simple

 one data center cluster.  My config is basically the default except

 for the changes they discuss (except that I have configured for my IP

 addresses... my two boxes are

 .126 and .127)







 cluster_name: 'MyDemoCluster'



 num_tokens: 256



 seed_provider:



   - class_name: org.apache.cassandra.locator.SimpleSeedProvider



 parameters:



  - seeds: 10.28.205.126



 listen_address: 10.28.205.126



 rpc_address: 0.0.0.0



 endpoint_snitch: RackInferringSnitch







 Nodetool shows both nodes active in the ring, status = up, state = normal.







 For the CF:







ColumnFamily: SystemEvent



  Key Validation Class: org.apache.cassandra.db.marshal.UTF8Type



  Default column value validator:

 org.apache.cassandra.db.marshal.UTF8Type



  Columns sorted by: org.apache.cassandra.db.marshal.UTF8Type



  GC grace seconds: 864000



  Compaction min/max thresholds: 4/32



  Read repair chance: 0.1



  DC Local Read repair chance: 0.0



  Replicate on write: true



  Caching: KEYS_ONLY

Re: Not enough replicas???

2013-02-04 Thread Tyler Hobbs
Sorry, to be more precise, the name of the datacenter is just the string
"28", not "DC28".
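
(Put together, the corrected cassandra-cli statement would look roughly like the following; a sketch only, since the exact quoting of a numeric datacenter name and the strategy_options syntax vary slightly between cli versions.)

create keyspace KEYSPACE_NAME
  with placement_strategy = 'org.apache.cassandra.locator.NetworkTopologyStrategy'
  and strategy_options = {28:2};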


On Mon, Feb 4, 2013 at 12:07 PM, stephen.m.thomp...@wellsfargo.com wrote:

 Thanks Tyler … so I created my keyspace to explicitly indicate the
 datacenter and replication, as follows:

 ** **

 create *keyspace* KEYSPACE_NAME

   with placement_strategy =
 'org.apache.cassandra.locator.NetworkTopologyStrategy'

   and strategy_options={DC28:2};

 ** **

 And yet I still get the exact same error message:

 ** **

 *me.prettyprint.hector.api.exceptions.HUnavailableException*: : May not
 be enough replicas present to handle consistency level.

 ** **

 It certainly is showing that it took my change:

 ** **

 [default@KEYSPACE_NAME] describe;

 Keyspace: KEYSPACE_NAME:

   Replication Strategy:
 org.apache.cassandra.locator.NetworkTopologyStrategy

   Durable Writes: true

 Options: [DC28:2]

 ** **

 Looking at the ring ….

 ** **

 [root@Config3482VM1 apache-cassandra-1.2.0]# bin/nodetool -h localhost
 ring

 ** **

 Datacenter: 28

 ==

 Replicas: 0

 ** **

 Address RackStatus State   Load
 OwnsToken


 9187343239835811839

 10.28.205.126   205 Up Normal  95.89 KB
 0.00%   -9187343239835811840

 10.28.205.126   205 Up Normal  95.89 KB
 0.00%   -9151314442816847872

 10.28.205.126   205 Up Normal  95.89 KB
 0.00%   -9115285645797883904

 ** **

 ( HUGE SNIP )

 ** **

 10.28.205.127   205 Up Normal  84.63 KB
 0.00%   9115285645797883903

 10.28.205.127   205 Up Normal  84.63 KB
 0.00%   9151314442816847871

 10.28.205.127   205 Up Normal  84.63 KB
 0.00%   9187343239835811839

 ** **

 So both boxes are showing up in the ring.  

 ** **

 *Thank you guys SO MUCH for helping me figure this stuff out.*

 ** **

 ** **

 *From:* Tyler Hobbs [mailto:ty...@datastax.com]
 *Sent:* Monday, February 04, 2013 11:17 AM

 *To:* user@cassandra.apache.org
 *Subject:* Re: Not enough replicas???

 ** **

 RackInferringSnitch determines each node's DC and rack by looking at the
 second and third octets in its IP address (
 http://www.datastax.com/docs/1.0/cluster_architecture/replication#rackinferringsnitch),
 so your nodes are in DC 28.

 Your replication strategy says to put one replica in DC datacenter1, but
 doesn't mention DC 28 at all, so you don't have any replicas for your
 keyspace.

 ** **

 On Mon, Feb 4, 2013 at 7:55 AM, stephen.m.thomp...@wellsfargo.com wrote:
 

 Hi Edward - thanks for responding.   The keyspace could not have been
 created more simply:

  

 create keyspace KEYSPACE_NAME;

  

 According to the help, this should have created a replication factor of 1:
 

  

 Keyspace Attributes (all are optional):

 - placement_strategy: Class used to determine how replicas

   are distributed among nodes. Defaults to NetworkTopologyStrategy with***
 *

   one datacenter defined with a replication factor of 1
 ([datacenter1:1]).

  

 Steve

  

 -Original Message-
 From: Edward Capriolo [mailto:edlinuxg...@gmail.com]
 Sent: Friday, February 01, 2013 5:49 PM
 To: user@cassandra.apache.org
 Subject: Re: Not enough replicas???

  

 Please include the information on how your keyspace was created. This may
 indicate you set the replication factor to 3, when you only have 1 node, or
 some similar condition.

  

 On Fri, Feb 1, 2013 at 4:57 PM,  stephen.m.thomp...@wellsfargo.com
 wrote:

  I need to offer my profound thanks to this community which has been so *
 ***

  helpful in trying to figure this system out.

  

  

  

  I’ve setup a simple ring with two nodes and I’m trying to insert data **
 **

  to them.  I get failures 100% with this error:

  

  

  

  me.prettyprint.hector.api.exceptions.HUnavailableException: : May not **
 **

  be enough replicas present to handle consistency level.

  

  

  

  I’m not doing anything fancy – this is just from setting up the 

  cluster following the basic instructions from datastax for a simple 

  one data center cluster.  My config is basically the default except 

  for the changes they discuss (except that I have configured for my IP **
 **

  addresses… my two boxes are

  .126 and .127)

  

  

  

  cluster_name: 'MyDemoCluster'

  

  num_tokens: 256

  

  seed_provider:

  

- class_name: org.apache.cassandra.locator.SimpleSeedProvider

  

  parameters:

  

   - seeds: 10.28.205.126

  

  listen_address: 10.28.205.126

  

  rpc_address: 0.0.0.0

  

  endpoint_snitch: RackInferringSnitch

RE: Not enough replicas???

2013-02-04 Thread Stephen.M.Thompson
Sweet!  That worked!  THANK YOU!

Stephen Thompson
Wells Fargo Corporation
Internet Authentication & Fraud Prevention
704.427.3137 (W) | 704.807.3431 (C)


From: Tyler Hobbs [mailto:ty...@datastax.com]
Sent: Monday, February 04, 2013 1:43 PM
To: user@cassandra.apache.org
Subject: Re: Not enough replicas???

Sorry, to be more precise, the name of the datacenter is just the string "28",
not "DC28".

On Mon, Feb 4, 2013 at 12:07 PM, stephen.m.thomp...@wellsfargo.com wrote:
Thanks Tyler ... so I created my keyspace to explicitly indicate the datacenter 
and replication, as follows:

create keyspace KEYSPACE_NAME
  with placement_strategy = 
'org.apache.cassandra.locator.NetworkTopologyStrategy'
  and strategy_options={DC28:2};

And yet I still get the exact same error message:

me.prettyprint.hector.api.exceptions.HUnavailableException: : May not be enough 
replicas present to handle consistency level.

It certainly is showing that it took my change:

[default@KEYSPACE_NAME] describe;
Keyspace: KEYSPACE_NAME:
  Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
  Durable Writes: true
Options: [DC28:2]

Looking at the ring 

[root@Config3482VM1 apache-cassandra-1.2.0]# bin/nodetool -h localhost ring

Datacenter: 28
==
Replicas: 0

Address         Rack  Status  State   Load      Owns    Token
                                                        9187343239835811839
10.28.205.126   205   Up      Normal  95.89 KB  0.00%   -9187343239835811840
10.28.205.126   205   Up      Normal  95.89 KB  0.00%   -9151314442816847872
10.28.205.126   205   Up      Normal  95.89 KB  0.00%   -9115285645797883904

( HUGE SNIP )

10.28.205.127   205   Up      Normal  84.63 KB  0.00%   9115285645797883903
10.28.205.127   205   Up      Normal  84.63 KB  0.00%   9151314442816847871
10.28.205.127   205   Up      Normal  84.63 KB  0.00%   9187343239835811839

So both boxes are showing up in the ring.

Thank you guys SO MUCH for helping me figure this stuff out.


From: Tyler Hobbs [mailto:ty...@datastax.com]
Sent: Monday, February 04, 2013 11:17 AM
To: user@cassandra.apache.org
Subject: Re: Not enough replicas???

RackInferringSnitch determines each node's DC and rack by looking at the second 
and third octets in its IP address 
(http://www.datastax.com/docs/1.0/cluster_architecture/replication#rackinferringsnitch),
 so your nodes are in DC 28.

Your replication strategy says to put one replica in DC datacenter1, but 
doesn't mention DC 28 at all, so you don't have any replicas for your 
keyspace.

On Mon, Feb 4, 2013 at 7:55 AM, stephen.m.thomp...@wellsfargo.com wrote:

Hi Edward - thanks for responding.   The keyspace could not have been created 
more simply:



create keyspace KEYSPACE_NAME;



According to the help, this should have created a replication factor of 1:



Keyspace Attributes (all are optional):

- placement_strategy: Class used to determine how replicas

  are distributed among nodes. Defaults to NetworkTopologyStrategy with

  one datacenter defined with a replication factor of 1 ([datacenter1:1]).



Steve



-Original Message-
From: Edward Capriolo [mailto:edlinuxg...@gmail.com]
Sent: Friday, February 01, 2013 5:49 PM
To: user@cassandra.apache.org
Subject: Re: Not enough replicas???



Please include the information on how your keyspace was created. This may 
indicate you set the replication factor to 3, when you only have 1 node, or 
some similar condition.



On Fri, Feb 1, 2013 at 4:57 PM, stephen.m.thomp...@wellsfargo.com wrote:

 I need to offer my profound thanks to this community which has been so

 helpful in trying to figure this system out.







 I've setup a simple ring with two nodes and I'm trying to insert data

 to them.  I get failures 100% with this error:







 me.prettyprint.hector.api.exceptions.HUnavailableException: : May not

 be enough replicas present to handle consistency level.







 I'm not doing anything fancy - this is just from setting up the

 cluster following the basic instructions from datastax for a simple

 one data center cluster.  My config

Not enough replicas???

2013-02-01 Thread Stephen.M.Thompson
I need to offer my profound thanks to this community which has been so helpful 
in trying to figure this system out.

I've setup a simple ring with two nodes and I'm trying to insert data to them.  
I get failures 100% with this error:

me.prettyprint.hector.api.exceptions.HUnavailableException: : May not be enough 
replicas present to handle consistency level.

I'm not doing anything fancy - this is just from setting up the cluster 
following the basic instructions from datastax for a simple one data center 
cluster.  My config is basically the default except for the changes they 
discuss (except that I have configured for my IP addresses... my two boxes are 
.126 and .127)

cluster_name: 'MyDemoCluster'
num_tokens: 256
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
parameters:
 - seeds: 10.28.205.126
listen_address: 10.28.205.126
rpc_address: 0.0.0.0
endpoint_snitch: RackInferringSnitch

Nodetool shows both nodes active in the ring, status = up, state = normal.

For the CF:

   ColumnFamily: SystemEvent
 Key Validation Class: org.apache.cassandra.db.marshal.UTF8Type
 Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
 Columns sorted by: org.apache.cassandra.db.marshal.UTF8Type
 GC grace seconds: 864000
 Compaction min/max thresholds: 4/32
 Read repair chance: 0.1
 DC Local Read repair chance: 0.0
 Replicate on write: true
 Caching: KEYS_ONLY
 Bloom Filter FP chance: default
 Built indexes: [SystemEvent.IdxName]
 Column Metadata:
   Column Name: eventTimeStamp
 Validation Class: org.apache.cassandra.db.marshal.DateType
   Column Name: name
 Validation Class: org.apache.cassandra.db.marshal.UTF8Type
 Index Name: IdxName
 Index Type: KEYS
 Compaction Strategy: 
org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy
 Compression Options:
   sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor

Any ideas?


Re: Not enough replicas???

2013-02-01 Thread Edward Capriolo
Please include the information on how your keyspace was created. This
may indicate you set the replication factor to 3, when you only have 1
node, or some similar condition.
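
(A quick way to get that information, using the same cassandra-cli the rest of this thread uses: connect to a node and run the command below; its output includes each keyspace's placement strategy and strategy options.)

show keyspaces;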

On Fri, Feb 1, 2013 at 4:57 PM,  stephen.m.thomp...@wellsfargo.com wrote:
 I need to offer my profound thanks to this community which has been so
 helpful in trying to figure this system out.



 I’ve setup a simple ring with two nodes and I’m trying to insert data to
 them.  I get failures 100% with this error:



 me.prettyprint.hector.api.exceptions.HUnavailableException: : May not be
 enough replicas present to handle consistency level.



 I’m not doing anything fancy – this is just from setting up the cluster
 following the basic instructions from datastax for a simple one data center
 cluster.  My config is basically the default except for the changes they
 discuss (except that I have configured for my IP addresses… my two boxes are
 .126 and .127)



 cluster_name: 'MyDemoCluster'

 num_tokens: 256

 seed_provider:

   - class_name: org.apache.cassandra.locator.SimpleSeedProvider

 parameters:

  - seeds: 10.28.205.126

 listen_address: 10.28.205.126

 rpc_address: 0.0.0.0

 endpoint_snitch: RackInferringSnitch



 Nodetool shows both nodes active in the ring, status = up, state = normal.



 For the CF:



ColumnFamily: SystemEvent

  Key Validation Class: org.apache.cassandra.db.marshal.UTF8Type

  Default column value validator:
 org.apache.cassandra.db.marshal.UTF8Type

  Columns sorted by: org.apache.cassandra.db.marshal.UTF8Type

  GC grace seconds: 864000

  Compaction min/max thresholds: 4/32

  Read repair chance: 0.1

  DC Local Read repair chance: 0.0

  Replicate on write: true

  Caching: KEYS_ONLY

  Bloom Filter FP chance: default

  Built indexes: [SystemEvent.IdxName]

  Column Metadata:

Column Name: eventTimeStamp

  Validation Class: org.apache.cassandra.db.marshal.DateType

Column Name: name

  Validation Class: org.apache.cassandra.db.marshal.UTF8Type

  Index Name: IdxName

  Index Type: KEYS

  Compaction Strategy:
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy

  Compression Options:

sstable_compression:
 org.apache.cassandra.io.compress.SnappyCompressor



 Any ideas?


Re: HUnavailableException: : May not be enough replicas present to handle consistency level.

2011-09-02 Thread Nate McCall
It looks like you only have 2 replicas configured in each data center?

If so, LOCAL_QUORUM cannot be achieved with a host down same as with
QUORUM on RF=2 in a single DC cluster.

On Fri, Sep 2, 2011 at 1:40 PM, Oleg Tsvinev oleg.tsvi...@gmail.com wrote:
 I believe I don't quite understand semantics of this exception:

 me.prettyprint.hector.api.exceptions.HUnavailableException: : May not
 be enough replicas present to handle consistency level.

 Does it mean there *might be* enough?
 Does it mean there *is not* enough?

 My case is as following - I have 3 nodes with keyspaces configured as 
 following:

 Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
 Durable Writes: true
 Options: [DC2:2, DC1:2]

 Hector can only connect to nodes in DC1 and configured to neither see
 nor connect to nodes in DC2. This is for replication by Cassandra
 means, asynchronously between datacenters DC1 and DC2. Each of 6 total
 nodes can see any of the remaining 5.

 and inserts with LOCAL_QUORUM CL work fine when all 3 nodes are up.
 However, this morning one node went down and I started seeing the
 HUnavailableException: : May not be enough replicas present to handle
 consistency level.

 I believed if I have 3 nodes and one goes down, two remaining nodes
 are sufficient for my configuration.

 Please help me to understand what's going on.



Re: HUnavailableException: : May not be enough replicas present to handle consistency level.

2011-09-02 Thread Oleg Tsvinev
Well, this is the part I don't understand then. I thought that if I
configure 2 replicas with 3 nodes and one of 3 nodes goes down, I'll
still have 2 nodes to store 3 replicas. Is my logic flawed somewhere?

On Fri, Sep 2, 2011 at 1:22 PM, Nate McCall n...@datastax.com wrote:
 It looks like you only have 2 replicas configured in each data center?

 If so, LOCAL_QUORUM cannot be achieved with a host down same as with
 QUORUM on RF=2 in a single DC cluster.

 On Fri, Sep 2, 2011 at 1:40 PM, Oleg Tsvinev oleg.tsvi...@gmail.com wrote:
 I believe I don't quite understand semantics of this exception:

 me.prettyprint.hector.api.exceptions.HUnavailableException: : May not
 be enough replicas present to handle consistency level.

 Does it mean there *might be* enough?
 Does it mean there *is not* enough?

 My case is as following - I have 3 nodes with keyspaces configured as 
 following:

 Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
 Durable Writes: true
 Options: [DC2:2, DC1:2]

 Hector can only connect to nodes in DC1 and configured to neither see
 nor connect to nodes in DC2. This is for replication by Cassandra
 means, asynchronously between datacenters DC1 and DC2. Each of 6 total
 nodes can see any of the remaining 5.

 and inserts with LOCAL_QUORUM CL work fine when all 3 nodes are up.
 However, this morning one node went down and I started seeing the
 HUnavailableException: : May not be enough replicas present to handle
 consistency level.

 I believed if I have 3 nodes and one goes down, two remaining nodes
 are sufficient for my configuration.

 Please help me to understand what's going on.




Re: HUnavailableException: : May not be enough replicas present to handle consistency level.

2011-09-02 Thread Oleg Tsvinev
from http://www.datastax.com/docs/0.8/consistency/index:

A “quorum” of replicas is essentially a majority of replicas, or RF /
2 + 1 with any resulting fractions rounded down.

I have RF=2, so majority of replicas is 2/2+1=2 which I have after 3rd
node goes down?

On Fri, Sep 2, 2011 at 1:22 PM, Nate McCall n...@datastax.com wrote:
 It looks like you only have 2 replicas configured in each data center?

 If so, LOCAL_QUORUM cannot be achieved with a host down same as with
 QUORUM on RF=2 in a single DC cluster.

 On Fri, Sep 2, 2011 at 1:40 PM, Oleg Tsvinev oleg.tsvi...@gmail.com wrote:
 I believe I don't quite understand semantics of this exception:

 me.prettyprint.hector.api.exceptions.HUnavailableException: : May not
 be enough replicas present to handle consistency level.

 Does it mean there *might be* enough?
 Does it mean there *is not* enough?

 My case is as following - I have 3 nodes with keyspaces configured as 
 following:

 Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
 Durable Writes: true
 Options: [DC2:2, DC1:2]

 Hector can only connect to nodes in DC1 and configured to neither see
 nor connect to nodes in DC2. This is for replication by Cassandra
 means, asynchronously between datacenters DC1 and DC2. Each of 6 total
 nodes can see any of the remaining 5.

 and inserts with LOCAL_QUORUM CL work fine when all 3 nodes are up.
 However, this morning one node went down and I started seeing the
 HUnavailableException: : May not be enough replicas present to handle
 consistency level.

 I believed if I have 3 nodes and one goes down, two remaining nodes
 are sufficient for my configuration.

 Please help me to understand what's going on.




Re: HUnavailableException: : May not be enough replicas present to handle consistency level.

2011-09-02 Thread Nate McCall
In your options, you have configured 2 replicas for each data center:
Options: [DC2:2, DC1:2]

If one of those replicas is down, then LOCAL_QUORUM will fail as there
is only one replica left 'locally.'
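
(Working the formula quoted earlier, quorum = RF/2 + 1 with fractions rounded down:)

RF = 2 per DC:  quorum = 2/2 + 1 = 2  ->  one replica down leaves 1 < 2, so LOCAL_QUORUM fails
RF = 3 per DC:  quorum = 3/2 + 1 = 2  ->  one replica down still leaves 2, so LOCAL_QUORUM succeeds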


On Fri, Sep 2, 2011 at 3:35 PM, Oleg Tsvinev oleg.tsvi...@gmail.com wrote:
 from http://www.datastax.com/docs/0.8/consistency/index:

 A “quorum” of replicas is essentially a majority of replicas, or RF /
 2 + 1 with any resulting fractions rounded down.

 I have RF=2, so majority of replicas is 2/2+1=2 which I have after 3rd
 node goes down?

 On Fri, Sep 2, 2011 at 1:22 PM, Nate McCall n...@datastax.com wrote:
 It looks like you only have 2 replicas configured in each data center?

 If so, LOCAL_QUORUM cannot be achieved with a host down same as with
 QUORUM on RF=2 in a single DC cluster.

 On Fri, Sep 2, 2011 at 1:40 PM, Oleg Tsvinev oleg.tsvi...@gmail.com wrote:
 I believe I don't quite understand semantics of this exception:

 me.prettyprint.hector.api.exceptions.HUnavailableException: : May not
 be enough replicas present to handle consistency level.

 Does it mean there *might be* enough?
 Does it mean there *is not* enough?

 My case is as following - I have 3 nodes with keyspaces configured as 
 following:

 Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
 Durable Writes: true
 Options: [DC2:2, DC1:2]

 Hector can only connect to nodes in DC1 and configured to neither see
 nor connect to nodes in DC2. This is for replication by Cassandra
 means, asynchronously between datacenters DC1 and DC2. Each of 6 total
 nodes can see any of the remaining 5.

 and inserts with LOCAL_QUORUM CL work fine when all 3 nodes are up.
 However, this morning one node went down and I started seeing the
 HUnavailableException: : May not be enough replicas present to handle
 consistency level.

 I believed if I have 3 nodes and one goes down, two remaining nodes
 are sufficient for my configuration.

 Please help me to understand what's going on.





Re: HUnavailableException: : May not be enough replicas present to handle consistency level.

2011-09-02 Thread Oleg Tsvinev
Do you mean I need to configure 3 replicas in each DC and keep using
LOCAL_QUORUM? In which case, if I'm following your logic, even one of
the 3 goes down I'll still have 2 to ensure LOCAL_QUORUM succeeds?

On Fri, Sep 2, 2011 at 1:44 PM, Nate McCall n...@datastax.com wrote:
 In your options, you have configured 2 replicas for each data center:
 Options: [DC2:2, DC1:2]

 If one of those replicas is down, then LOCAL_QUORUM will fail as there
 is only one replica left 'locally.'


 On Fri, Sep 2, 2011 at 3:35 PM, Oleg Tsvinev oleg.tsvi...@gmail.com wrote:
 from http://www.datastax.com/docs/0.8/consistency/index:

 A “quorum” of replicas is essentially a majority of replicas, or RF /
 2 + 1 with any resulting fractions rounded down.

 I have RF=2, so majority of replicas is 2/2+1=2 which I have after 3rd
 node goes down?

 On Fri, Sep 2, 2011 at 1:22 PM, Nate McCall n...@datastax.com wrote:
 It looks like you only have 2 replicas configured in each data center?

 If so, LOCAL_QUORUM cannot be achieved with a host down same as with
 QUORUM on RF=2 in a single DC cluster.

 On Fri, Sep 2, 2011 at 1:40 PM, Oleg Tsvinev oleg.tsvi...@gmail.com wrote:
 I believe I don't quite understand semantics of this exception:

 me.prettyprint.hector.api.exceptions.HUnavailableException: : May not
 be enough replicas present to handle consistency level.

 Does it mean there *might be* enough?
 Does it mean there *is not* enough?

 My case is as follows - I have 3 nodes with keyspaces configured as
 follows:

 Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
 Durable Writes: true
 Options: [DC2:2, DC1:2]

 Hector can only connect to nodes in DC1 and is configured to neither
 see nor connect to nodes in DC2. Replication between the datacenters
 DC1 and DC2 is handled asynchronously by Cassandra itself. Each of the
 6 total nodes can see all of the remaining 5.

 Inserts with LOCAL_QUORUM CL work fine when all 3 nodes are up.
 However, this morning one node went down and I started seeing the
 HUnavailableException: : May not be enough replicas present to handle
 consistency level.

 I believed that if I have 3 nodes and one goes down, the two remaining
 nodes are sufficient for my configuration.

 Please help me to understand what's going on.






Re: HUnavailableException: : May not be enough replicas present to handle consistency level.

2011-09-02 Thread Nate McCall
Yes - you would need at least 3 replicas per data center to use
LOCAL_QUORUM and survive a node failure.
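
Concretely, the keyspace would need to end up with options along these
lines (shown in the same form as the keyspace listing earlier in the
thread; this is the target state, not a command):

    Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
    Durable Writes: true
    Options: [DC2:3, DC1:3]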

On Fri, Sep 2, 2011 at 3:51 PM, Oleg Tsvinev oleg.tsvi...@gmail.com wrote:
 Do you mean I need to configure 3 replicas in each DC and keep using
 LOCAL_QUORUM? In that case, if I'm following your logic, even if one of
 the 3 goes down I'll still have 2 to ensure LOCAL_QUORUM succeeds?

 On Fri, Sep 2, 2011 at 1:44 PM, Nate McCall n...@datastax.com wrote:
 In your options, you have configured 2 replicas for each data center:
 Options: [DC2:2, DC1:2]

 If one of those replicas is down, then LOCAL_QUORUM will fail as there
 is only one replica left 'locally.'


 On Fri, Sep 2, 2011 at 3:35 PM, Oleg Tsvinev oleg.tsvi...@gmail.com wrote:
 from http://www.datastax.com/docs/0.8/consistency/index:

 A “quorum” of replicas is essentially a majority of replicas, or RF /
 2 + 1 with any resulting fractions rounded down.

 I have RF=2, so the majority of replicas is 2/2+1=2, which I still have
 after the 3rd node goes down?

 On Fri, Sep 2, 2011 at 1:22 PM, Nate McCall n...@datastax.com wrote:
 It looks like you only have 2 replicas configured in each data center?

 If so, LOCAL_QUORUM cannot be achieved with a host down same as with
 QUORUM on RF=2 in a single DC cluster.

 On Fri, Sep 2, 2011 at 1:40 PM, Oleg Tsvinev oleg.tsvi...@gmail.com 
 wrote:
 I believe I don't quite understand semantics of this exception:

 me.prettyprint.hector.api.exceptions.HUnavailableException: : May not
 be enough replicas present to handle consistency level.

 Does it mean there *might be* enough?
 Does it mean there *is not* enough?

 My case is as follows - I have 3 nodes with keyspaces configured as
 follows:

 Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
 Durable Writes: true
 Options: [DC2:2, DC1:2]

 Hector can only connect to nodes in DC1 and is configured to neither
 see nor connect to nodes in DC2. Replication between the datacenters
 DC1 and DC2 is handled asynchronously by Cassandra itself. Each of the
 6 total nodes can see all of the remaining 5.

 Inserts with LOCAL_QUORUM CL work fine when all 3 nodes are up.
 However, this morning one node went down and I started seeing the
 HUnavailableException: : May not be enough replicas present to handle
 consistency level.

 I believed that if I have 3 nodes and one goes down, the two remaining
 nodes are sufficient for my configuration.

 Please help me to understand what's going on.







Re: HUnavailableException: : May not be enough replicas present to handle consistency level.

2011-09-02 Thread Konstantin Naryshkin
I think that Oleg may have misunderstood how replicas are selected. If you have
3 nodes in your cluster and an RF of 2, Cassandra first selects which two nodes
out of the 3 will get the data, and only then does it write it out. The
selection is based on the row key, the token of each node, and your choice of
partitioner. This means that Cassandra does not need to store which node is
responsible for a given row. That information can be recalculated whenever it
is needed.

The error that you are getting is because, while you may have 2 nodes up, those
are not the nodes that Cassandra will use to store that data.
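
A very rough model of that selection, using a plain ordered token ring
and ignoring NetworkTopologyStrategy's per-datacenter placement (an
illustration only, not Cassandra's actual placement code; the tokens
and node names are made up):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.TreeMap;

    public class ReplicaSketch {
        public static void main(String[] args) {
            // Three nodes with assigned tokens on the ring.
            TreeMap<Long, String> ring = new TreeMap<>();
            ring.put(0L, "node1");
            ring.put(56713727820156410L, "node2");
            ring.put(113427455640312821L, "node3");

            int rf = 2;
            // Stand-in for the partitioner hashing the row key to a token.
            long keyToken = Math.abs("some-row-key".hashCode());

            // First replica: the first node whose token is >= the key's
            // token, wrapping around the ring; further replicas follow
            // clockwise around the ring.
            List<String> replicas = new ArrayList<>();
            Long t = ring.ceilingKey(keyToken);
            if (t == null) t = ring.firstKey();
            while (replicas.size() < rf) {
                replicas.add(ring.get(t));
                t = ring.higherKey(t);
                if (t == null) t = ring.firstKey();
            }
            // With RF=2 only these two nodes hold the row; if one of them
            // is down, 2 replicas cannot respond even though a third,
            // non-replica node is still up.
            System.out.println(replicas);
        }
    }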

- Original Message -
From: Nate McCall n...@datastax.com
To: hector-us...@googlegroups.com
Cc: Cassandra Users user@cassandra.apache.org
Sent: Friday, September 2, 2011 4:44:01 PM
Subject: Re: HUnavailableException: : May not be enough replicas present to 
handle consistency level.

In your options, you have configured 2 replicas for each data center:
Options: [DC2:2, DC1:2]

If one of those replicas is down, then LOCAL_QUORUM will fail as there
is only one replica left 'locally.'


On Fri, Sep 2, 2011 at 3:35 PM, Oleg Tsvinev oleg.tsvi...@gmail.com wrote:
 from http://www.datastax.com/docs/0.8/consistency/index:

 A “quorum” of replicas is essentially a majority of replicas, or RF /
 2 + 1 with any resulting fractions rounded down.

 I have RF=2, so the majority of replicas is 2/2+1=2, which I still have
 after the 3rd node goes down?

 On Fri, Sep 2, 2011 at 1:22 PM, Nate McCall n...@datastax.com wrote:
 It looks like you only have 2 replicas configured in each data center?

 If so, LOCAL_QUORUM cannot be achieved with a host down same as with
 QUORUM on RF=2 in a single DC cluster.

 On Fri, Sep 2, 2011 at 1:40 PM, Oleg Tsvinev oleg.tsvi...@gmail.com wrote:
 I believe I don't quite understand semantics of this exception:

 me.prettyprint.hector.api.exceptions.HUnavailableException: : May not
 be enough replicas present to handle consistency level.

 Does it mean there *might be* enough?
 Does it mean there *is not* enough?

 My case is as follows - I have 3 nodes with keyspaces configured as
 follows:

 Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
 Durable Writes: true
 Options: [DC2:2, DC1:2]

 Hector can only connect to nodes in DC1 and is configured to neither
 see nor connect to nodes in DC2. Replication between the datacenters
 DC1 and DC2 is handled asynchronously by Cassandra itself. Each of the
 6 total nodes can see all of the remaining 5.

 Inserts with LOCAL_QUORUM CL work fine when all 3 nodes are up.
 However, this morning one node went down and I started seeing the
 HUnavailableException: : May not be enough replicas present to handle
 consistency level.

 I believed that if I have 3 nodes and one goes down, the two remaining
 nodes are sufficient for my configuration.

 Please help me to understand what's going on.





Re: HUnavailableException: : May not be enough replicas present to handle consistency level.

2011-09-02 Thread Oleg Tsvinev
And now, when I have one node down with no chance of bringing it back
anytime soon, can I still change RF to 3 and restore the functionality
of my cluster? Should I run 'nodetool repair', or will a simple
keyspace update suffice?
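
The usual shape of that change on a 0.8-era cluster is a keyspace
update followed by a repair on each node. The exact cassandra-cli
syntax for strategy_options differs between releases, so treat this as
a sketch and check the CLI help first (the keyspace name is a
placeholder):

    # In cassandra-cli (check 'help update keyspace;' for your version):
    update keyspace MyKeyspace with strategy_options = [{DC1:3, DC2:3}];

    # Then, on each node, stream in the data for its new replica ranges:
    nodetool -h <node-host> repair MyKeyspace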

On Fri, Sep 2, 2011 at 1:55 PM, Nate McCall n...@datastax.com wrote:
 Yes - you would need at least 3 replicas per data center to use
 LOCAL_QUORUM and survive a node failure.

 On Fri, Sep 2, 2011 at 3:51 PM, Oleg Tsvinev oleg.tsvi...@gmail.com wrote:
 Do you mean I need to configure 3 replicas in each DC and keep using
 LOCAL_QUORUM? In that case, if I'm following your logic, even if one of
 the 3 goes down I'll still have 2 to ensure LOCAL_QUORUM succeeds?

 On Fri, Sep 2, 2011 at 1:44 PM, Nate McCall n...@datastax.com wrote:
 In your options, you have configured 2 replicas for each data center:
 Options: [DC2:2, DC1:2]

 If one of those replicas is down, then LOCAL_QUORUM will fail as there
 is only one replica left 'locally.'


 On Fri, Sep 2, 2011 at 3:35 PM, Oleg Tsvinev oleg.tsvi...@gmail.com wrote:
 from http://www.datastax.com/docs/0.8/consistency/index:

 A “quorum” of replicas is essentially a majority of replicas, or RF /
 2 + 1 with any resulting fractions rounded down.

 I have RF=2, so the majority of replicas is 2/2+1=2, which I still have
 after the 3rd node goes down?

 On Fri, Sep 2, 2011 at 1:22 PM, Nate McCall n...@datastax.com wrote:
 It looks like you only have 2 replicas configured in each data center?

 If so, LOCAL_QUORUM cannot be achieved with a host down same as with
 QUORUM on RF=2 in a single DC cluster.

 On Fri, Sep 2, 2011 at 1:40 PM, Oleg Tsvinev oleg.tsvi...@gmail.com 
 wrote:
 I believe I don't quite understand semantics of this exception:

 me.prettyprint.hector.api.exceptions.HUnavailableException: : May not
 be enough replicas present to handle consistency level.

 Does it mean there *might be* enough?
 Does it mean there *is not* enough?

 My case is as follows - I have 3 nodes with keyspaces configured as
 follows:

 Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
 Durable Writes: true
 Options: [DC2:2, DC1:2]

 Hector can only connect to nodes in DC1 and is configured to neither
 see nor connect to nodes in DC2. Replication between the datacenters
 DC1 and DC2 is handled asynchronously by Cassandra itself. Each of the
 6 total nodes can see all of the remaining 5.

 Inserts with LOCAL_QUORUM CL work fine when all 3 nodes are up.
 However, this morning one node went down and I started seeing the
 HUnavailableException: : May not be enough replicas present to handle
 consistency level.

 I believed that if I have 3 nodes and one goes down, the two remaining
 nodes are sufficient for my configuration.

 Please help me to understand what's going on.








Re: HUnavailableException: : May not be enough replicas present to handle consistency level.

2011-09-02 Thread Oleg Tsvinev
Yes, I think I get it now: a quorum of replicas != a quorum of nodes,
and I don't think a quorum of nodes is ever defined. Thank you,
Konstantin.

Now, I believe I need to change my cluster to store data on the two
remaining nodes in DC1, keeping 3 nodes in DC2. I believe nodetool
removetoken is what I need to use. Anything else I can/should do?
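
A sketch of that step, assuming the dead node will not return (verify
the exact flags against the nodetool shipped with your release): find
the dead node's token with 'nodetool ring', then remove it from any
live node:

    # List tokens and note the one owned by the Down node:
    nodetool -h <live-node> ring

    # Reassign the dead node's range to the remaining nodes:
    nodetool -h <live-node> removetoken <token-of-dead-node>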

On Fri, Sep 2, 2011 at 1:56 PM, Konstantin  Naryshkin
konstant...@a-bb.net wrote:
 I think that Oleg may have misunderstood how replicas are selected. If you
 have 3 nodes in your cluster and an RF of 2, Cassandra first selects which
 two nodes out of the 3 will get the data, and only then does it write it out.
 The selection is based on the row key, the token of each node, and your
 choice of partitioner. This means that Cassandra does not need to store which
 node is responsible for a given row. That information can be recalculated
 whenever it is needed.

 The error that you are getting is because, while you may have 2 nodes up,
 those are not the nodes that Cassandra will use to store that data.

 - Original Message -
 From: Nate McCall n...@datastax.com
 To: hector-us...@googlegroups.com
 Cc: Cassandra Users user@cassandra.apache.org
 Sent: Friday, September 2, 2011 4:44:01 PM
 Subject: Re: HUnavailableException: : May not be enough replicas present to 
 handle consistency level.

 In your options, you have configured 2 replicas for each data center:
 Options: [DC2:2, DC1:2]

 If one of those replicas is down, then LOCAL_QUORUM will fail as there
 is only one replica left 'locally.'


 On Fri, Sep 2, 2011 at 3:35 PM, Oleg Tsvinev oleg.tsvi...@gmail.com wrote:
 from http://www.datastax.com/docs/0.8/consistency/index:

 A “quorum” of replicas is essentially a majority of replicas, or RF /
 2 + 1 with any resulting fractions rounded down.

 I have RF=2, so the majority of replicas is 2/2+1=2, which I still have
 after the 3rd node goes down?

 On Fri, Sep 2, 2011 at 1:22 PM, Nate McCall n...@datastax.com wrote:
 It looks like you only have 2 replicas configured in each data center?

 If so, LOCAL_QUORUM cannot be achieved with a host down same as with
 QUORUM on RF=2 in a single DC cluster.

 On Fri, Sep 2, 2011 at 1:40 PM, Oleg Tsvinev oleg.tsvi...@gmail.com wrote:
 I believe I don't quite understand semantics of this exception:

 me.prettyprint.hector.api.exceptions.HUnavailableException: : May not
 be enough replicas present to handle consistency level.

 Does it mean there *might be* enough?
 Does it mean there *is not* enough?

 My case is as follows - I have 3 nodes with keyspaces configured as
 follows:

 Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
 Durable Writes: true
 Options: [DC2:2, DC1:2]

 Hector can only connect to nodes in DC1 and is configured to neither
 see nor connect to nodes in DC2. Replication between the datacenters
 DC1 and DC2 is handled asynchronously by Cassandra itself. Each of the
 6 total nodes can see all of the remaining 5.

 Inserts with LOCAL_QUORUM CL work fine when all 3 nodes are up.
 However, this morning one node went down and I started seeing the
 HUnavailableException: : May not be enough replicas present to handle
 consistency level.

 I believed that if I have 3 nodes and one goes down, the two remaining
 nodes are sufficient for my configuration.

 Please help me to understand what's going on.