Re: How to prevent queries being routed to new DC?

2015-09-03 Thread Tom van den Berge
Hi Bryan,

It does not generate any errors. A query for a specific row simply does not
return the row if it is sent to a node in the new DC. This makes sense,
because the node is still empty.

On Thu, Sep 3, 2015 at 9:03 PM, Bryan Cheng  wrote:

> This all seems fine so far. Are you able to see what errors are being
> returned?
>
> We had a similar issue where one of our secondary, less used keyspaces was
> on a replication strategy that was not DC-aware, which was causing errors
> about being unable to satisfy LOCAL_ONE and LOCAL_QUORUM consistency levels.
>
>
> On Thu, Sep 3, 2015 at 11:53 AM, Tom van den Berge <
> tom.vandenbe...@gmail.com> wrote:
>
>> Hi Bryan,
>>
>> I'm using the PropertyFileSnitch, and it contains entries for all nodes
>> in the old DC, and all nodes in the new DC. The replication factor for both
>> DCs is 1.
>>
>> With the first approach I described, the new nodes join the cluster, and
>> show up correctly under the new DC, so all seems to be fine.
>> With the second approach (join_ring=false), they don't show up at all,
>> which is also what I expected.
>>
>>
>> On Thu, Sep 3, 2015 at 8:44 PM, Bryan Cheng 
>> wrote:
>>
>>> Hey Tom,
>>>
>>> What's your replication strategy look like? When your new nodes join the
>>> ring, can you verify that they show up under a new DC and not as part of
>>> the old?
>>>
>>> --Bryan
>>>
>>> On Thu, Sep 3, 2015 at 11:27 AM, Tom van den Berge <
>>> tom.vandenbe...@gmail.com> wrote:
>>>
 I want to start using vnodes in my cluster. To do so, I've set up a new
 data center with the same number of nodes as the existing one, as described
 in
 http://docs.datastax.com/en/cassandra/2.0/cassandra/configuration/configVnodesProduction_t.html.
 The new DC is in the same physical location as the old one.

 The problem I'm running into is that as soon as the nodes in the new
 data center are started, the application that is using the nodes in the old
 data center is frequently getting error messages because queries don't
 return the expected data. I'm pretty sure this is because somehow these
 queries are routed to the new, empty data center. The application is not
 connecting to the nodes in the new DC.

 I've tried two different things to prevent this:

 1) Ensure that all queries use either LOCAL_ONE or LOCAL_QUORUM
 consistency. Nevertheless, I'm still seeing failed queries.
 2) Start the new nodes with -Dcassandra.join_ring=false, to prevent
 them from participating in the cluster. Although they don't show up in
 nodetool ring, I'm still seeing failed queries.

 If I understand it correctly, both measures should prevent queries from
 ending up in the new DC, but somehow they don't in my situation.

 How is it possible that queries are routed to the new, empty data
 center? And more importantly, how can I prevent it?

 Thanks,
 Tom

>>>
>>>
>>
>


Re: How to prevent queries being routed to new DC?

2015-09-03 Thread Bryan Cheng
Hey Tom,

What's your replication strategy look like? When your new nodes join the
ring, can you verify that they show up under a new DC and not as part of
the old?

--Bryan

On Thu, Sep 3, 2015 at 11:27 AM, Tom van den Berge <
tom.vandenbe...@gmail.com> wrote:

> I want to start using vnodes in my cluster. To do so, I've set up a new
> data center with the same number of nodes as the existing one, as described
> in
> http://docs.datastax.com/en/cassandra/2.0/cassandra/configuration/configVnodesProduction_t.html.
> The new DC is in the same physical location as the old one.
>
> The problem I'm running into is that as soon as the nodes in the new data
> center are started, the application that is using the nodes in the old data
> center is frequently getting error messages because queries don't return
> the expected data. I'm pretty sure this is because somehow these queries
> are routed to the new, empty data center. The application is not connecting
> to the nodes in the new DC.
>
> I've tried two different things to prevent this:
>
> 1) Ensure that all queries use either LOCAL_ONE or LOCAL_QUORUM
> consistency. Nevertheless, I'm still seeing failed queries.
> 2) Start the new nodes with -Dcassandra.join_ring=false, to prevent them
> from participating in the cluster. Although they don't show up in nodetool
> ring, I'm still seeing failed queries.
>
> If I understand it correctly, both measures should prevent queries from
> ending up in the new DC, but somehow they don't in my situation.
>
> How is it possible that queries are routed to the new, empty data center?
> And more importantly, how can I prevent it?
>
> Thanks,
> Tom
>


Querying on multiple columns

2015-09-03 Thread Samya Maiti
Hi All,

I have a use case wherein I want to execute queries on my Cassandra table with
different WHERE clauses.

Two approaches known to me are:
1) Creating a secondary index. But to my understanding, queries on a secondary
index will be slow.
2) Creating multiple tables with the same data but different primary keys. This
option has many consequences, as a lot of things need to be taken care of while
writing the data.

Please let me know if a better solution is available. I am using version 2.1.5.
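
For concreteness, the two options in CQL would look something like this, with a
purely illustrative table (names and columns are made up):

-- Option 1: secondary index on an existing table
CREATE TABLE users (id uuid PRIMARY KEY, email text, country text);
CREATE INDEX users_country_idx ON users (country);
SELECT * FROM users WHERE country = 'DE';  -- answered via the index

-- Option 2: a second table keyed for the other query pattern
CREATE TABLE users_by_email (email text PRIMARY KEY, id uuid, country text);
-- every write must now be applied to both tables (e.g. in a logged batch)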

Regards,
Sam

How to prevent queries being routed to new DC?

2015-09-03 Thread Tom van den Berge
I want to start using vnodes in my cluster. To do so, I've set up a new
data center with the same number of nodes as the existing one, as described
in
http://docs.datastax.com/en/cassandra/2.0/cassandra/configuration/configVnodesProduction_t.html.
The new DC is in the same physical location as the old one.

The problem I'm running into is that as soon as the nodes in the new data
center are started, the application that is using the nodes in the old data
center is frequently getting error messages because queries don't return
the expected data. I'm pretty sure this is because somehow these queries
are routed to the new, empty data center. The application is not connecting
to the nodes in the new DC.

I've tried two different things to prevent this:

1) Ensure that all queries use either LOCAL_ONE or LOCAL_QUORUM
consistency. Nevertheless, I'm still seeing failed queries.
2) Start the new nodes with -Dcassandra.join_ring=false, to prevent them
from participating in the cluster. Although they don't show up in nodetool
ring, I'm still seeing failed queries.

If I understand it correctly, both measures should prevent queries from
ending up in the new DC, but somehow they don't in my situation.
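
For the first measure, the cqlsh equivalent of what the application does would
be something like this (keyspace and table are illustrative):

CONSISTENCY LOCAL_QUORUM
SELECT * FROM my_keyspace.my_table WHERE id = 123;

With a LOCAL_* level, the coordinator should only need replicas from its own DC
to answer.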

How is it possible that queries are routed to the new, empty data center?
And more importantly, how can I prevent it?

Thanks,
Tom


[Ann] Ansible base Cassandra stress framework for EC2

2015-09-03 Thread Tzach Livyatan
I'm pleased to share a framework for running Cassandra stress tests on EC2.
https://github.com/cloudius-systems/ansible-cassandra-cluster-stress

The framework is a collection of Ansible playbooks and scripts that allow you to:
- Create a Cassandra cluster (setting server type, version, etc.)
- Launch any number of loaders
- Run cassandra-stress on all loaders and collect the results
- Add nodes to a running cluster
- Stop and start nodes
- Clean old data from servers before each test
- Collect and display relevant metrics on a Collectd+Graphite+Tessera server

Use cases I tested using this framework:
* Stress with multiple loaders
* Scale out (adding a server under stress)
* Testing Cassandra Repair

More info in the README and Wiki.

Some of my future plans include adding YCSB, running on other providers, and
more.
Contributions and suggestions will be appreciated!

Tzach


Re: How to prevent queries being routed to new DC?

2015-09-03 Thread Tom van den Berge
Hi Bryan,

I'm using the PropertyFileSnitch, and it contains entries for all nodes in
the old DC, and all nodes in the new DC. The replication factor for both
DCs is 1.
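
For illustration, the topology file looks roughly like this (addresses and
names are made up):

# cassandra-topology.properties
# old DC
192.168.1.1=DC1:RAC1
192.168.1.2=DC1:RAC1
# new DC
192.168.2.1=DC2:RAC1
192.168.2.2=DC2:RAC1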

With the first approach I described, the new nodes join the cluster, and
show up correctly under the new DC, so all seems to be fine.
With the second approach (join_ring=false), they don't show up at all,
which is also what I expected.


On Thu, Sep 3, 2015 at 8:44 PM, Bryan Cheng  wrote:

> Hey Tom,
>
> What's your replication strategy look like? When your new nodes join the
> ring, can you verify that they show up under a new DC and not as part of
> the old?
>
> --Bryan
>
> On Thu, Sep 3, 2015 at 11:27 AM, Tom van den Berge <
> tom.vandenbe...@gmail.com> wrote:
>
>> I want to start using vnodes in my cluster. To do so, I've set up a new
>> data center with the same number of nodes as the existing one, as described
>> in
>> http://docs.datastax.com/en/cassandra/2.0/cassandra/configuration/configVnodesProduction_t.html.
>> The new DC is in the same physical location as the old one.
>>
>> The problem I'm running into is that as soon as the nodes in the new data
>> center are started, the application that is using the nodes in the old data
>> center is frequently getting error messages because queries don't return
>> the expected data. I'm pretty sure this is because somehow these queries
>> are routed to the new, empty data center. The application is not connecting
>> to the nodes in the new DC.
>>
>> I've tried two different things to prevent this:
>>
>> 1) Ensure that all queries use either LOCAL_ONE or LOCAL_QUORUM
>> consistency. Nevertheless, I'm still seeing failed queries.
>> 2) Start the new nodes with -Dcassandra.join_ring=false, to prevent them
>> from participating in the cluster. Although they don't show up in nodetool
>> ring, I'm still seeing failed queries.
>>
>> If I understand it correctly, both measures should prevent queries from
>> ending up in the new DC, but somehow they don't in my situation.
>>
>> How is it possible that queries are routed to the new, empty data center?
>> And more importantly, how can I prevent it?
>>
>> Thanks,
>> Tom
>>
>
>


Re: How to prevent queries being routed to new DC?

2015-09-03 Thread Bryan Cheng
This all seems fine so far. Are you able to see what errors are being
returned?

We had a similar issue where one of our secondary, less used keyspaces was
on a replication strategy that was not DC-aware, which was causing errors
about being unable to satisfy LOCAL_ONE and LOCAL_QUORUM consistency levels.
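
For comparison, a DC-aware keyspace definition looks something like this
(keyspace and DC names are illustrative):

ALTER KEYSPACE my_keyspace WITH replication =
  {'class': 'NetworkTopologyStrategy', 'DC1': 1, 'DC2': 1};

whereas SimpleStrategy places replicas with no notion of data centers, so
LOCAL_* consistency levels can end up unsatisfiable.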


On Thu, Sep 3, 2015 at 11:53 AM, Tom van den Berge <
tom.vandenbe...@gmail.com> wrote:

> Hi Bryan,
>
> I'm using the PropertyFileSnitch, and it contains entries for all nodes in
> the old DC, and all nodes in the new DC. The replication factor for both
> DCs is 1.
>
> With the first approach I described, the new nodes join the cluster, and
> show up correctly under the new DC, so all seems to be fine.
> With the second approach (join_ring=false), they don't show up at all,
> which is also what I expected.
>
>
> On Thu, Sep 3, 2015 at 8:44 PM, Bryan Cheng  wrote:
>
>> Hey Tom,
>>
>> What's your replication strategy look like? When your new nodes join the
>> ring, can you verify that they show up under a new DC and not as part of
>> the old?
>>
>> --Bryan
>>
>> On Thu, Sep 3, 2015 at 11:27 AM, Tom van den Berge <
>> tom.vandenbe...@gmail.com> wrote:
>>
>>> I want to start using vnodes in my cluster. To do so, I've set up a new
>>> data center with the same number of nodes as the existing one, as described
>>> in
>>> http://docs.datastax.com/en/cassandra/2.0/cassandra/configuration/configVnodesProduction_t.html.
>>> The new DC is in the same physical location as the old one.
>>>
>>> The problem I'm running into is that as soon as the nodes in the new
>>> data center are started, the application that is using the nodes in the old
>>> data center is frequently getting error messages because queries don't
>>> return the expected data. I'm pretty sure this is because somehow these
>>> queries are routed to the new, empty data center. The application is not
>>> connecting to the nodes in the new DC.
>>>
>>> I've tried two different things to prevent this:
>>>
>>> 1) Ensure that all queries use either LOCAL_ONE or LOCAL_QUORUM
>>> consistency. Nevertheless, I'm still seeing failed queries.
>>> 2) Start the new nodes with -Dcassandra.join_ring=false, to prevent them
>>> from participating in the cluster. Although they don't show up in nodetool
>>> ring, I'm still seeing failed queries.
>>>
>>> If I understand it correctly, both measures should prevent queries from
>>> ending up in the new DC, but somehow they don't in my situation.
>>>
>>> How is it possible that queries are routed to the new, empty data
>>> center? And more importantly, how can I prevent it?
>>>
>>> Thanks,
>>> Tom
>>>
>>
>>
>


Re: How to prevent queries being routed to new DC?

2015-09-03 Thread Bryan Cheng
Hey Tom,

I'd recommend you enable tracing and run a few queries in a controlled
environment to verify whether queries are being routed to your new nodes.
Provided you have followed the procedure outlined above (specifically, have
set auto_bootstrap to false in your new cluster), rebuild has not been run,
the application is not connecting to the new cluster, and all your queries
are run at LOCAL_* consistency levels, I do not believe those queries should
be routed to the new DC.
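
In cqlsh that's simply (table illustrative):

TRACING ON
SELECT * FROM my_keyspace.my_table WHERE id = 123;

The printed trace lists every node that handled part of the request, so any
involvement of the new DC should be obvious.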

On Thu, Sep 3, 2015 at 12:14 PM, Tom van den Berge <
tom.vandenbe...@gmail.com> wrote:

> Hi Bryan,
>
> It does not generate any errors. A query for a specific row simply does
> not return the row if it is sent to a node in the new DC. This makes sense,
> because the node is still empty.
>
> On Thu, Sep 3, 2015 at 9:03 PM, Bryan Cheng  wrote:
>
>> This all seems fine so far. Are you able to see what errors are being
>> returned?
>>
>> We had a similar issue where one of our secondary, less used keyspaces
>> was on a replication strategy that was not DC-aware, which was causing
>> errors about being unable to satisfy LOCAL_ONE and LOCAL_QUORUM
>> consistency levels.
>>
>>
>> On Thu, Sep 3, 2015 at 11:53 AM, Tom van den Berge <
>> tom.vandenbe...@gmail.com> wrote:
>>
>>> Hi Bryan,
>>>
>>> I'm using the PropertyFileSnitch, and it contains entries for all nodes
>>> in the old DC, and all nodes in the new DC. The replication factor for both
>>> DCs is 1.
>>>
>>> With the first approach I described, the new nodes join the cluster, and
>>> show up correctly under the new DC, so all seems to be fine.
>>> With the second approach (join_ring=false), they don't show up at all,
>>> which is also what I expected.
>>>
>>>
>>> On Thu, Sep 3, 2015 at 8:44 PM, Bryan Cheng 
>>> wrote:
>>>
 Hey Tom,

 What's your replication strategy look like? When your new nodes join
 the ring, can you verify that they show up under a new DC and not as part
 of the old?

 --Bryan

 On Thu, Sep 3, 2015 at 11:27 AM, Tom van den Berge <
 tom.vandenbe...@gmail.com> wrote:

> I want to start using vnodes in my cluster. To do so, I've set up a
> new data center with the same number of nodes as the existing one, as
> described in
> http://docs.datastax.com/en/cassandra/2.0/cassandra/configuration/configVnodesProduction_t.html.
> The new DC is in the same physical location as the old one.
>
> The problem I'm running into is that as soon as the nodes in the new
> data center are started, the application that is using the nodes in the 
> old
> data center is frequently getting error messages because queries don't
> return the expected data. I'm pretty sure this is because somehow these
> queries are routed to the new, empty data center. The application is not
> connecting to the nodes in the new DC.
>
> I've tried two different things to prevent this:
>
> 1) Ensure that all queries use either LOCAL_ONE or LOCAL_QUORUM
> consistency. Nevertheless, I'm still seeing failed queries.
> 2) Start the new nodes with -Dcassandra.join_ring=false, to prevent
> them from participating in the cluster. Although they don't show up in
> nodetool ring, I'm still seeing failed queries.
>
> If I understand it correctly, both measures should prevent queries
> from ending up in the new DC, but somehow they don't in my situation.
>
> How is it possible that queries are routed to the new, empty data
> center? And more importantly, how can I prevent it?
>
> Thanks,
> Tom
>


>>>
>>
>


Re: How to prevent queries being routed to new DC?

2015-09-03 Thread Robert Coli
On Thu, Sep 3, 2015 at 12:25 PM, Bryan Cheng  wrote:

> I'd recommend you enable tracing and run a few queries in a controlled
> environment to verify whether queries are being routed to your new nodes.
> Provided you have followed the procedure outlined above (specifically, have
> set auto_bootstrap to false in your new cluster), rebuild has not been run,
> the application is not connecting to the new cluster, and all your queries
> are run at LOCAL_* consistency levels, I do not believe those queries should
> be routed to the new DC.
>

Other than CASSANDRA-9753, this is true.

https://issues.apache.org/jira/browse/CASSANDRA-9753 (Unresolved):
"LOCAL_QUORUM reads can block cross-DC if there is a digest mismatch"

=Rob


Re: Order By limitation or bug?

2015-09-03 Thread DuyHai Doan
Limitation, not bug. The reason?

On disk, data are sorted by type first, and FOR EACH type value, the data
are sorted by id.
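
Roughly, a single partition of the import_file table is laid out like this
(sketch, values invented):

-- partition roll = 1, clustering by (type, id)
('bar', id1) -> data
('bar', id2) -> data
('foo', id3) -> data
('foo', id4) -> data

Rows for one type are contiguous and ordered by id, but there is no single
global ordering by id across types.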

So to do an ORDER BY id, C* would need to perform an in-memory re-ordering;
I'm not sure how bad that would be for performance. In any case it's currently
not possible; maybe you should create a JIRA to ask for the limitation to be
lifted.

On Thu, Sep 3, 2015 at 10:27 PM, Robert Wille  wrote:

> Given this table:
>
> CREATE TABLE import_file (
>   roll int,
>   type text,
>   id timeuuid,
>   data text,
>   PRIMARY KEY ((roll), type, id)
> )
>
> This should be possible:
>
> SELECT data FROM import_file WHERE roll = 1 AND type = 'foo' ORDER BY id
> DESC;
>
> but it results in the following error:
>
> Bad Request: Order by currently only support the ordering of columns
> following their declared order in the PRIMARY KEY
>
> I am ordering in the declared order in the primary key. I don't see why
> this couldn't be supported. Is this a known limitation or a bug?
>
> In this example, I can get the results I want by omitting the ORDER BY
> clause and adding WITH CLUSTERING ORDER BY (id DESC) to the schema.
> However, now I can only get descending order. I have to choose either
> ascending or descending order. I cannot get both.
>
> Robert
>
>


Re: Order By limitation or bug?

2015-09-03 Thread Robert Wille
If you only specify the partition key, and none of the clustering columns, you 
can order by in either direction:

SELECT data FROM import_file WHERE roll = 1 order by type;
SELECT data FROM import_file WHERE roll = 1 order by type DESC;

These are both valid. Seems like specifying the prefix of the clustering 
columns is just a specialization of an already-supported pattern.

Robert

On Sep 3, 2015, at 2:46 PM, DuyHai Doan 
> wrote:

Limitation, not bug. The reason?

On disk, data are sorted by type first, and FOR EACH type value, the data are 
sorted by id.

So to do an ORDER BY id, C* would need to perform an in-memory re-ordering;
I'm not sure how bad that would be for performance. In any case it's currently
not possible; maybe you should create a JIRA to ask for the limitation to be
lifted.

On Thu, Sep 3, 2015 at 10:27 PM, Robert Wille 
> wrote:
Given this table:

CREATE TABLE import_file (
  roll int,
  type text,
  id timeuuid,
  data text,
  PRIMARY KEY ((roll), type, id)
)

This should be possible:

SELECT data FROM import_file WHERE roll = 1 AND type = 'foo' ORDER BY id DESC;

but it results in the following error:

Bad Request: Order by currently only support the ordering of columns following 
their declared order in the PRIMARY KEY

I am ordering in the declared order in the primary key. I don't see why this
couldn't be supported. Is this a known limitation or a bug?

In this example, I can get the results I want by omitting the ORDER BY clause 
and adding WITH CLUSTERING ORDER BY (id DESC) to the schema. However, now I can 
only get descending order. I have to choose either ascending or descending 
order. I cannot get both.

Robert





Order By limitation or bug?

2015-09-03 Thread Robert Wille
Given this table:

CREATE TABLE import_file (
  roll int,
  type text,
  id timeuuid,
  data text,
  PRIMARY KEY ((roll), type, id)
)

This should be possible:

SELECT data FROM import_file WHERE roll = 1 AND type = 'foo' ORDER BY id DESC;

but it results in the following error:

Bad Request: Order by currently only support the ordering of columns following 
their declared order in the PRIMARY KEY

I am ordering in the declared order in the primary key. I don't see why this
couldn't be supported. Is this a known limitation or a bug?

In this example, I can get the results I want by omitting the ORDER BY clause 
and adding WITH CLUSTERING ORDER BY (id DESC) to the schema. However, now I can 
only get descending order. I have to choose either ascending or descending 
order. I cannot get both.
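
For reference, the workaround schema with the full clustering order spelled out
would be:

CREATE TABLE import_file (
  roll int,
  type text,
  id timeuuid,
  data text,
  PRIMARY KEY ((roll), type, id)
) WITH CLUSTERING ORDER BY (type ASC, id DESC);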

Robert



Re: Order By limitation or bug?

2015-09-03 Thread DuyHai Doan
That's expected: type is the FIRST clustering column, so on disk the data are
naturally sorted by "type" first. C* does not have to perform any sorting
in memory.

And when you use "order by type DESC", it's still not sorted in memory; C* just
does a backward scan on disk, starting from the "biggest" value of type. At
least, that's my understanding of the pre-3.0 storage engine.


Re: How to prevent queries being routed to new DC?

2015-09-03 Thread Tom van den Berge
Thanks for your help so far!

I'm having some trouble understanding the JIRA mentioned by Rob :(

I'm currently trying to set up the first node in the new DC with
auto_bootstrap = true. The node then becomes visible with status "joining",
which (hopefully) prevents other DCs from sending queries to it.

Do you think this will work?



On Thu, Sep 3, 2015 at 9:46 PM, Robert Coli  wrote:

> On Thu, Sep 3, 2015 at 12:25 PM, Bryan Cheng 
> wrote:
>
>> I'd recommend you enable tracing and run a few queries in a controlled
>> environment to verify whether queries are being routed to your new nodes.
>> Provided you have followed the procedure outlined above (specifically, have
>> set auto_bootstrap to false in your new cluster), rebuild has not been run,
>> the application is not connecting to the new cluster, and all your queries
>> are run at LOCAL_* consistency levels, I do not believe those queries should
>> be routed to the new DC.
>>
>
> Other than CASSANDRA-9753, this is true.
>
> https://issues.apache.org/jira/browse/CASSANDRA-9753 (Unresolved):
> "LOCAL_QUORUM reads can block cross-DC if there is a digest mismatch"
>
> =Rob
>
>


Incremental repair from the get go

2015-09-03 Thread Jean-Francois Gosselin
On a fresh install of Cassandra, what's the best approach to start using
incremental repair from the get-go (I'm using LCS)?

Should we run nodetool repair -inc after inserting a few rows, or do we still
need to follow the migration procedure with sstablerepairedset?

From the documentation: "... If you use the leveled compaction strategy and
perform an incremental repair for the first time, Cassandra performs
size-tiering on all SSTables because the repair/unrepaired status is
unknown. This operation can take a long time. To save time, migrate to
incremental repair one node at a time. ..."

With almost no data, size-tiering should be quick, right? Basically, is there a
shortcut to avoid the migration via sstablerepairedset on a fresh install?
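
For context, the per-node migration procedure I'm referring to is roughly the
following -- quoted from memory, so please check it against the docs for your
version:

nodetool disableautocompaction
nodetool repair
# stop the node, then mark each SSTable as repaired, e.g.:
sstablerepairedset --really-set --is-repaired /path/to/ks-table-ka-1-Data.db
# restart the node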

Thanks

JF


RE: Order By limitation or bug?

2015-09-03 Thread Alec Collier
You should be able to execute the following

SELECT data FROM import_file WHERE roll = 1 AND type = 'foo' ORDER BY type, id 
DESC;

Essentially, the ORDER BY clause has to specify the clustering columns in full,
in their declared order. It doesn't automatically take into account that you
have already filtered type down to a single value.

Alec Collier | Workplace Service Design
Corporate Operations Group - Technology | Macquarie Group Limited

From: Robert Wille [mailto:rwi...@fold3.com]
Sent: Friday, 4 September 2015 7:17 AM
To: user@cassandra.apache.org
Subject: Re: Order By limitation or bug?

If you only specify the partition key, and none of the clustering columns, you 
can order by in either direction:

SELECT data FROM import_file WHERE roll = 1 order by type;
SELECT data FROM import_file WHERE roll = 1 order by type DESC;

These are both valid. Seems like specifying the prefix of the clustering 
columns is just a specialization of an already-supported pattern.

Robert

On Sep 3, 2015, at 2:46 PM, DuyHai Doan 
> wrote:


Limitation, not bug. The reason?

On disk, data are sorted by type first, and FOR EACH type value, the data are 
sorted by id.

So to do an ORDER BY id, C* would need to perform an in-memory re-ordering;
I'm not sure how bad that would be for performance. In any case it's currently
not possible; maybe you should create a JIRA to ask for the limitation to be
lifted.

On Thu, Sep 3, 2015 at 10:27 PM, Robert Wille 
> wrote:

Given this table:

CREATE TABLE import_file (
  roll int,
  type text,
  id timeuuid,
  data text,
  PRIMARY KEY ((roll), type, id)
)

This should be possible:

SELECT data FROM import_file WHERE roll = 1 AND type = 'foo' ORDER BY id DESC;

but it results in the following error:

Bad Request: Order by currently only support the ordering of columns following 
their declared order in the PRIMARY KEY

I am ordering in the declared order in the primary key. I don't see why this
couldn't be supported. Is this a known limitation or a bug?

In this example, I can get the results I want by omitting the ORDER BY clause 
and adding WITH CLUSTERING ORDER BY (id DESC) to the schema. However, now I can 
only get descending order. I have to choose either ascending or descending 
order. I cannot get both.

Robert





Re: 'no such object in table'

2015-09-03 Thread Jason Lewis
After enabling that option, I'm seeing errors like this on the node I
can't connect to.


Sep 04, 2015 2:35:48 AM sun.rmi.server.UnicastServerRef logCallException
FINE: RMI TCP Connection(4)-127.0.0.1: [127.0.0.1] exception:
javax.management.InstanceNotFoundException:
org.apache.cassandra.metrics:type=ColumnFamily,keyspace=ks,scope=cf,name=PendingTasks
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:643)
at com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
at 
javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1448)
at 
javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
at 
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1312)
at 
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1404)
at 
javax.management.remote.rmi.RMIConnectionImpl.getAttribute(RMIConnectionImpl.java:641)
at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:323)
at sun.rmi.transport.Transport$1.run(Transport.java:200)
at sun.rmi.transport.Transport$1.run(Transport.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:568)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$79(TCPTransport.java:683)
at java.security.AccessController.doPrivileged(Native Method)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:682)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

On Wed, Aug 26, 2015 at 2:05 PM, Nate McCall  wrote:
>> LOCAL_JMX=no
>>
>> if [ "$LOCAL_JMX" = "yes" ]; then
>>   JVM_OPTS="$JVM_OPTS -Dcassandra.jmx.local.port=$JMX_PORT
>> -XX:+DisableExplicitGC"
>> else
>>   JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT"
>>   JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT"
>>   JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.ssl=false"
>>   JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=false"
>>   JVM_OPTS="$JVM_OPTS
>>
>> -Dcom.sun.management.jmxremote.password.file=/etc/cassandra/jmxremote.password"
>> fi
>>
>
> Retry with the following option added to your JVM_OPTS:
> java.rmi.server.logCalls=true
>
> This should produce some more information about what is going on.
>
>
>
>
> --
> -
> Nate McCall
> Austin, TX
> @zznate
>
> Co-Founder & Sr. Technical Consultant
> Apache Cassandra Consulting
> http://www.thelastpickle.com


Re: 'no such object in table'

2015-09-03 Thread Jason Lewis
I figured this one out. As it turns out, the nodes that I couldn't connect to
had their hostname resolving to 127.0.1.1. The listen IP is *not* that IP.
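
For anyone else hitting this: the usual culprit is a Debian-style /etc/hosts
entry along these lines (hostname illustrative):

127.0.0.1   localhost
127.0.1.1   cassandra-node1

The JMX/RMI layer resolves the machine's hostname, gets 127.0.1.1, and
advertises that instead of the real listen address.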

Thanks for the logging tip, it helped track it down.

On Thu, Sep 3, 2015 at 10:43 PM, Jason Lewis  wrote:
> After enabling that option, I'm seeing errors like this on the node I
> can't connect to.
>
>
> Sep 04, 2015 2:35:48 AM sun.rmi.server.UnicastServerRef logCallException
> FINE: RMI TCP Connection(4)-127.0.0.1: [127.0.0.1] exception:
> javax.management.InstanceNotFoundException:
> org.apache.cassandra.metrics:type=ColumnFamily,keyspace=ks,scope=cf,name=PendingTasks
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:643)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1448)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
> at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1312)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1404)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.getAttribute(RMIConnectionImpl.java:641)
> at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:323)
> at sun.rmi.transport.Transport$1.run(Transport.java:200)
> at sun.rmi.transport.Transport$1.run(Transport.java:197)
> at java.security.AccessController.doPrivileged(Native Method)
> at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
> at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:568)
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826)
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$79(TCPTransport.java:683)
> at java.security.AccessController.doPrivileged(Native Method)
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:682)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
>
> On Wed, Aug 26, 2015 at 2:05 PM, Nate McCall  wrote:
>>> LOCAL_JMX=no
>>>
>>> if [ "$LOCAL_JMX" = "yes" ]; then
>>>   JVM_OPTS="$JVM_OPTS -Dcassandra.jmx.local.port=$JMX_PORT
>>> -XX:+DisableExplicitGC"
>>> else
>>>   JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT"
>>>   JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT"
>>>   JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.ssl=false"
>>>   JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=false"
>>>   JVM_OPTS="$JVM_OPTS
>>>
>>> -Dcom.sun.management.jmxremote.password.file=/etc/cassandra/jmxremote.password"
>>> fi
>>>
>>
>> Retry with the following option added to your JVM_OPTS:
>> java.rmi.server.logCalls=true
>>
>> This should produce some more information about what is going on.
>>
>>
>>
>>
>> --
>> -
>> Nate McCall
>> Austin, TX
>> @zznate
>>
>> Co-Founder & Sr. Technical Consultant
>> Apache Cassandra Consulting
>> http://www.thelastpickle.com