zabbix templates

2012-05-11 Thread Cord MacLeod
I've seen some Cacti templates for Cassandra and a JMX bridge called zapcat,
but has anyone created Zabbix templates for Cassandra?

Does or will Cassandra support OpenJDK ?

2012-05-11 Thread ramesh
I've had problems downloading the Sun (Oracle) JDK and found this thread 
where the Oracle official is insisting, or rather forcing, Linux users to 
move to OpenJDK. Here is the thread:


https://forums.oracle.com/forums/thread.jspa?threadID=2365607

I need this because I run Cassandra.
Just curious to know if I would be able to avoid the pain of using the Sun 
JDK for production Cassandra in the future?


regards
Ramesh


Re: C 1.1 & CQL 2.0 or 3.0?

2012-05-11 Thread cyril auburtin
Yes, it seems so.

As soon as I create it with cassandra-cli beyond a bare "create keyspace
mykeyspace" (by creating CFs), I can't connect to it with cqlsh -3.

I'll need to translate it to CQL 3 then.

2012/5/11 Jason Wellonen 

> **
> I think you need to create the keyspace under the context of a v3
> connection.  Maybe someone else can confirm?
>
>
>  --
> *From:* cyril auburtin [mailto:cyril.aubur...@gmail.com]
> *Sent:* Friday, May 11, 2012 11:46 AM
> *To:* user@cassandra.apache.org
> *Subject:* Re: C 1.1 & CQL 2.0 or 3.0?
>
> thx just: can't connect to keyspace with cql 3.0
>
>  tic@my:~$ cqlsh
> Connected to My Cluster at 127.0.0.1:4201.
> [cqlsh 2.2.0 | Cassandra 1.1.0 | CQL spec 2.0.0 | Thrift protocol 19.30.0]
> Use HELP for help.
> cqlsh> use mykeyspace;
> cqlsh:mykeyspace> exit;
> tic@my:~$ cqlsh -3
> Connected to My Cluster at 127.0.0.1:4201.
> [cqlsh 2.2.0 | Cassandra 1.1.0 | CQL spec 3.0.0 | Thrift protocol 19.30.0]
> Use HELP for help.
> cqlsh> use mykeyspace;
> Bad Request: Keyspace 'mykeyspace' does not exist
> cqlsh>
>
> ??
> 2012/5/11 Jason Wellonen 
>
>> **
>> Version 2 is the default for your connection.
>>
>> Are you using cqlsh?  If so, use the "-3" parameter for version 3...
>>
>>
>>  --
>> *From:* cyril auburtin [mailto:cyril.aubur...@gmail.com]
>> *Sent:* Friday, May 11, 2012 10:51 AM
>> *To:* user@cassandra.apache.org
>> *Subject:* C 1.1 & CQL 2.0 or 3.0?
>>
>>  I have C* 1.1 but it seems to only come with cql 2.0
>>  INFO 19:35:21,579 Cassandra version: 1.1.0
>>  INFO 19:35:21,581 Thrift API version: 19.30.0
>>  INFO 19:35:21,583 CQL supported versions: 2.0.0,3.0.0-beta1 (default:
>> 2.0.0)
>>
>> the problem is I would like to create such CF :
>>
>> CREATE COLUMNFAMILY TaggedPosts (
>>  ... tag text,
>>  ... post uuid,
>>  ... blog_rowentries_rowkey text,
>>  ... PRIMARY KEY (tag, post)
>>  ... ) WITH COMPACT STORAGE;
>>
>> and for me, (cql 2.0) it returns this error
>>
>> Bad Request: line 6:0 mismatched input ')' expecting EOF
>>
>> Is it due to the cql version? how to upgrade to 3.0, since I already have
>> the lastest cassandra release?
>>
>
>


Re: C 1.1 & CQL 2.0 or 3.0?

2012-05-11 Thread cyril auburtin
It works with a 'raw' keyspace (create keyspace mykeyspace;)
but not with: create keyspace mykeyspace with
placement_strategy='SimpleStrategy' and strategy_options =
{replication_factor:1};

(^ the configuration mykeyspace had)
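
A minimal sketch of the workaround implied here (an illustration only; the
keyspace name and options are taken from the message above, and the exact
option syntax accepted by Cassandra 1.1's cqlsh -3 may differ by version):

tic@my:~$ cqlsh -3
cqlsh> CREATE KEYSPACE mykeyspace
   ...   WITH strategy_class = 'SimpleStrategy'
   ...   AND strategy_options:replication_factor = 1;
cqlsh> USE mykeyspace;
cqlsh:mykeyspace>

A keyspace created this way, under a CQL 3 connection, should then be
visible to cqlsh -3, which matches Jason's suggestion in the quoted reply.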

2012/5/11 cyril auburtin 

> thx just: can't connect to keyspace with cql 3.0
>
> tic@my:~$ cqlsh
> Connected to My Cluster at 127.0.0.1:4201.
> [cqlsh 2.2.0 | Cassandra 1.1.0 | CQL spec 2.0.0 | Thrift protocol 19.30.0]
> Use HELP for help.
> cqlsh> use mykeyspace;
> cqlsh:mykeyspace> exit;
> tic@my:~$ cqlsh -3
> Connected to My Cluster at 127.0.0.1:4201.
>  [cqlsh 2.2.0 | Cassandra 1.1.0 | CQL spec 3.0.0 | Thrift protocol 19.30.0]
> Use HELP for help.
> cqlsh> use mykeyspace;
> Bad Request: Keyspace 'mykeyspace' does not exist
> cqlsh>
>
> ??
>
> 2012/5/11 Jason Wellonen 
>
>> **
>> Version 2 is the default for your connection.
>>
>> Are you using cqlsh?  If so, use the "-3" parameter for version 3...
>>
>>
>>  --
>> *From:* cyril auburtin [mailto:cyril.aubur...@gmail.com]
>> *Sent:* Friday, May 11, 2012 10:51 AM
>> *To:* user@cassandra.apache.org
>> *Subject:* C 1.1 & CQL 2.0 or 3.0?
>>
>> I have C* 1.1 but it seems to only come with cql 2.0
>>  INFO 19:35:21,579 Cassandra version: 1.1.0
>>  INFO 19:35:21,581 Thrift API version: 19.30.0
>>  INFO 19:35:21,583 CQL supported versions: 2.0.0,3.0.0-beta1 (default:
>> 2.0.0)
>>
>> the problem is I would like to create such CF :
>>
>> CREATE COLUMNFAMILY TaggedPosts (
>>  ... tag text,
>>  ... post uuid,
>>  ... blog_rowentries_rowkey text,
>>  ... PRIMARY KEY (tag, post)
>>  ... ) WITH COMPACT STORAGE;
>>
>> and for me, (cql 2.0) it returns this error
>>
>> Bad Request: line 6:0 mismatched input ')' expecting EOF
>>
>> Is it due to the cql version? how to upgrade to 3.0, since I already have
>> the lastest cassandra release?
>>
>
>


RE: C 1.1 & CQL 2.0 or 3.0?

2012-05-11 Thread Jason Wellonen
I think you need to create the keyspace under the context of a v3
connection.  Maybe someone else can confirm?
 



From: cyril auburtin [mailto:cyril.aubur...@gmail.com] 
Sent: Friday, May 11, 2012 11:46 AM
To: user@cassandra.apache.org
Subject: Re: C 1.1 & CQL 2.0 or 3.0?


thx just: can't connect to keyspace with cql 3.0 

tic@my:~$ cqlsh
Connected to My Cluster at 127.0.0.1:4201.
[cqlsh 2.2.0 | Cassandra 1.1.0 | CQL spec 2.0.0 | Thrift protocol
19.30.0]
Use HELP for help.
cqlsh> use mykeyspace;
cqlsh:mykeyspace> exit;
tic@my:~$ cqlsh -3
Connected to My Cluster at 127.0.0.1:4201.
[cqlsh 2.2.0 | Cassandra 1.1.0 | CQL spec 3.0.0 | Thrift protocol
19.30.0]
Use HELP for help.
cqlsh> use mykeyspace;
Bad Request: Keyspace 'mykeyspace' does not exist
cqlsh> 

??

2012/5/11 Jason Wellonen 



Version 2 is the default for your connection.
 
Are you using cqlsh?  If so, use the "-3" parameter for version
3...
 



From: cyril auburtin [mailto:cyril.aubur...@gmail.com] 
Sent: Friday, May 11, 2012 10:51 AM
To: user@cassandra.apache.org
Subject: C 1.1 & CQL 2.0 or 3.0?


I have C* 1.1 but it seems to only come with cql 2.0 
 INFO 19:35:21,579 Cassandra version: 1.1.0
 INFO 19:35:21,581 Thrift API version: 19.30.0
 INFO 19:35:21,583 CQL supported versions: 2.0.0,3.0.0-beta1
(default: 2.0.0)

the problem is I would like to create such CF :

CREATE COLUMNFAMILY TaggedPosts (
 ... tag text,
 ... post uuid,
 ... blog_rowentries_rowkey text,
 ... PRIMARY KEY (tag, post)
 ... ) WITH COMPACT STORAGE;

and for me, (cql 2.0) it returns this error

Bad Request: line 6:0 mismatched input ')' expecting EOF

Is it due to the cql version? how to upgrade to 3.0, since I
already have the lastest cassandra release?




Re: C 1.1 & CQL 2.0 or 3.0?

2012-05-11 Thread cyril auburtin
Thanks. Just one thing: I can't connect to the keyspace with CQL 3.0.

tic@my:~$ cqlsh
Connected to My Cluster at 127.0.0.1:4201.
[cqlsh 2.2.0 | Cassandra 1.1.0 | CQL spec 2.0.0 | Thrift protocol 19.30.0]
Use HELP for help.
cqlsh> use mykeyspace;
cqlsh:mykeyspace> exit;
tic@my:~$ cqlsh -3
Connected to My Cluster at 127.0.0.1:4201.
[cqlsh 2.2.0 | Cassandra 1.1.0 | CQL spec 3.0.0 | Thrift protocol 19.30.0]
Use HELP for help.
cqlsh> use mykeyspace;
Bad Request: Keyspace 'mykeyspace' does not exist
cqlsh>

??
2012/5/11 Jason Wellonen 

> **
> Version 2 is the default for your connection.
>
> Are you using cqlsh?  If so, use the "-3" parameter for version 3...
>
>
>  --
> *From:* cyril auburtin [mailto:cyril.aubur...@gmail.com]
> *Sent:* Friday, May 11, 2012 10:51 AM
> *To:* user@cassandra.apache.org
> *Subject:* C 1.1 & CQL 2.0 or 3.0?
>
> I have C* 1.1 but it seems to only come with cql 2.0
>  INFO 19:35:21,579 Cassandra version: 1.1.0
>  INFO 19:35:21,581 Thrift API version: 19.30.0
>  INFO 19:35:21,583 CQL supported versions: 2.0.0,3.0.0-beta1 (default:
> 2.0.0)
>
> the problem is I would like to create such CF :
>
> CREATE COLUMNFAMILY TaggedPosts (
>  ... tag text,
>  ... post uuid,
>  ... blog_rowentries_rowkey text,
>  ... PRIMARY KEY (tag, post)
>  ... ) WITH COMPACT STORAGE;
>
> and for me, (cql 2.0) it returns this error
>
> Bad Request: line 6:0 mismatched input ')' expecting EOF
>
> Is it due to the cql version? how to upgrade to 3.0, since I already have
> the lastest cassandra release?
>


Re: Thrift error occurred during processing of message

2012-05-11 Thread ruslan usifov
Looks like you used TBufferedTransport, but since 1.0.x Cassandra supports
only framed transport.
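
For illustration, a minimal sketch of a raw Thrift connection using a framed
transport (class names follow the Thrift PHP library of that era; autoloading
and the generated CassandraClient are assumptions that depend on how Thrift
was installed, and phpcassa itself normally wraps this layer):

<?php
// Hedged sketch: replace a buffered transport with a framed one.
// Loading of the Thrift PHP classes and of the generated
// CassandraClient is assumed to be set up elsewhere.
$socket    = new TSocket('127.0.0.1', 9160);
$transport = new TFramedTransport($socket);        // not TBufferedTransport
$protocol  = new TBinaryProtocolAccelerated($transport);
$client    = new CassandraClient($protocol);
$transport->open();
$client->set_keyspace('mykeyspace');               // RPCs now go over the framed transport
$transport->close();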

2011/12/19 Tamil selvan R.S 

> Hi,
>  We are using PHPCassa to connect to Cassandra 1.0.2. After we installed
> the thrift extension we started noticing the following in the error logs.
> [We didn't notice this when we were running raw thrift library with out
> extension].
>
> ERROR [pool-2-thread-5314] 2011-12-05 20:26:47,729
> CustomTThreadPoolServer.java (line 201) Thrift error occurred during
> processing of message.
> org.apache.thrift.protocol.
> TProtocolException: Missing version in readMessageBegin, old client?
> at
> org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:213)
> at
> org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2877)
> at
> org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:187)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> at java.lang.Thread.run(Thread.java:722)
>
> Is there any issue with the thrift protocol compatibilty?
>
> Regards,
> Tamil
>


Re: Thrift error occurred during processing of message

2012-05-11 Thread Tyler Hobbs
If you're only fetching a single column and you know its name, use the
second form. However, they should both technically work.

I introduced it because PHP only has positional parameters and most of the
parameters are optional, so having 6 or 10 parameters was annoying when you
only needed to set the last one.
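
In other words, a short sketch based on the snippets quoted below (treat the
exact phpcassa get() signature as an assumption taken from that code):

<?php
// Fetch a range of columns: a ColumnSlice bounded by 'name' on both ends.
$slice  = new ColumnSlice('name', 'name');
$result = $cf->get($key, $slice);

// Fetch specific named columns: simpler and clearer when the name is known.
$result = $cf->get($key, null, array('name'));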

On Fri, May 11, 2012 at 1:00 PM, Alain RODRIGUEZ  wrote:

> Thank you for the fast answer Tylor, I just discover the existence of
> the "ColumnSlice" class. I was using phpcassa for a while and didn't
> notice about this evolution.
>
> If I want to get just a column should I do :
>
> $slice  = new ColumnSlice('name', 'name');
> $result = $cf->get($key, $slice);
>
> or
>
> $result = $cf->get($key, null, array('name'));
>
> Is there any difference on performance between this 2 solution ? Do
> they both work ?
>
> Why introducing ColumnSlice ?
>
> Alain
>
> 2012/5/11 Tyler Hobbs :
> > Can you paste a snippet of your code showing how you're creating the
> > ColumnSlice and calling get()?
> >
> >
> > On Fri, May 11, 2012 at 8:16 AM, Alain RODRIGUEZ 
> wrote:
> >>
> >> I got the error above in cassandra logs.
> >>
> >> In my web browser I have the following error :
> >>
> >> "500 | Internal Server Error | TApplicationException
> >>
> >> Required field 'reversed' was not found in serialized data! Struct:
> >> SliceRange(start:80 01 00 01 00 00 00 0E 6D 75 6C 74 69 67 65 74 5F 73
> >> 6C 69 63 65 00 00 00 00 0F 00 01 0B 00 00 00 01 00 00 00 01 31 0C 00
> >> 02 0B 00 03 00 00 00 11 61 6C 67 6F 5F 70 72 6F 64 75 63 74 5F 76 69
> >> 65 77 00 0C 00 03 0C 00 02 0B 00 01 00 00 00 00, finish:80 01 00 01 00
> >> 00 00 0E 6D 75 6C 74 69 67 65 74 5F 73 6C 69 63 65 00 00 00 00 0F 00
> >> 01 0B 00 00 00 01 00 00 00 01 31 0C 00 02 0B 00 03 00 00 00 11 61 6C
> >> 67 6F 5F 70 72 6F 64 75 63 74 5F 76 69 65 77 00 0C 00 03 0C 00 02 0B
> >> 00 01 00 00 00 00 0B 00 02 00 00 00 00, reversed:false, count:100)"
> >>
> >> I think I forgot something that may have changed in the new phpcassa
> >> release. I'm still looking for it but any Idea is welcome :)
> >>
> >> Alain
> >>
> >> 2012/5/11 Alain RODRIGUEZ 
> >> >
> >> > Hi, I guess you finally solved this issue. I'm experimenting the same
> >> > one when trying to upgrade to phpcass 1.0.a.1.
> >> >
> >> > Do you remember how you fixed it or what the problem was exactly ?
> >> >
> >> > Thanks,
> >> >
> >> > Alain
> >> >
> >> > 2011/12/19 Tamil selvan R.S 
> >> >
> >> >> Hi,
> >> >>  We are using PHPCassa to connect to Cassandra 1.0.2. After we
> >> >> installed the thrift extension we started noticing the following in
> the
> >> >> error logs. [We didn't notice this when we were running raw thrift
> library
> >> >> with out extension].
> >> >>
> >> >> ERROR [pool-2-thread-5314] 2011-12-05 20:26:47,729
> >> >> CustomTThreadPoolServer.java (line 201) Thrift error occurred during
> >> >> processing of message.
> >> >> org.apache.thrift.protocol.
> >> >> TProtocolException: Missing version in readMessageBegin, old client?
> >> >> at
> >> >>
> org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:213)
> >> >> at
> >> >>
> org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2877)
> >> >> at
> >> >>
> org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:187)
> >> >> at
> >> >>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> >> >> at
> >> >>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> >> >> at java.lang.Thread.run(Thread.java:722)
> >> >>
> >> >> Is there any issue with the thrift protocol compatibilty?
> >> >>
> >> >> Regards,
> >> >> Tamil
> >> >
> >> >
> >
> >
> >
> >
> > --
> > Tyler Hobbs
> > DataStax
> >
>



-- 
Tyler Hobbs
DataStax 


RE: C 1.1 & CQL 2.0 or 3.0?

2012-05-11 Thread Jason Wellonen
Version 2 is the default for your connection.
 
Are you using cqlsh?  If so, use the "-3" parameter for version 3...
 



From: cyril auburtin [mailto:cyril.aubur...@gmail.com] 
Sent: Friday, May 11, 2012 10:51 AM
To: user@cassandra.apache.org
Subject: C 1.1 & CQL 2.0 or 3.0?


I have C* 1.1 but it seems to only come with cql 2.0 
 INFO 19:35:21,579 Cassandra version: 1.1.0
 INFO 19:35:21,581 Thrift API version: 19.30.0
 INFO 19:35:21,583 CQL supported versions: 2.0.0,3.0.0-beta1 (default:
2.0.0)

the problem is I would like to create such CF :

CREATE COLUMNFAMILY TaggedPosts (
 ... tag text,
 ... post uuid,
 ... blog_rowentries_rowkey text,
 ... PRIMARY KEY (tag, post)
 ... ) WITH COMPACT STORAGE;

and for me, (cql 2.0) it returns this error

Bad Request: line 6:0 mismatched input ')' expecting EOF

Is it due to the cql version? how to upgrade to 3.0, since I already
have the lastest cassandra release?


Re: Thrift error occurred during processing of message

2012-05-11 Thread Alain RODRIGUEZ
Thank you for the fast answer Tyler, I just discovered the existence of
the "ColumnSlice" class. I have been using phpcassa for a while and didn't
notice this evolution.

If I want to get just one column, should I do:

$slice  = new ColumnSlice('name', 'name');
$result = $cf->get($key, $slice);

or

$result = $cf->get($key, null, array('name'));

Is there any difference in performance between these 2 solutions? Do
they both work?

Why was ColumnSlice introduced?

Alain

2012/5/11 Tyler Hobbs :
> Can you paste a snippet of your code showing how you're creating the
> ColumnSlice and calling get()?
>
>
> On Fri, May 11, 2012 at 8:16 AM, Alain RODRIGUEZ  wrote:
>>
>> I got the error above in cassandra logs.
>>
>> In my web browser I have the following error :
>>
>> "500 | Internal Server Error | TApplicationException
>>
>> Required field 'reversed' was not found in serialized data! Struct:
>> SliceRange(start:80 01 00 01 00 00 00 0E 6D 75 6C 74 69 67 65 74 5F 73
>> 6C 69 63 65 00 00 00 00 0F 00 01 0B 00 00 00 01 00 00 00 01 31 0C 00
>> 02 0B 00 03 00 00 00 11 61 6C 67 6F 5F 70 72 6F 64 75 63 74 5F 76 69
>> 65 77 00 0C 00 03 0C 00 02 0B 00 01 00 00 00 00, finish:80 01 00 01 00
>> 00 00 0E 6D 75 6C 74 69 67 65 74 5F 73 6C 69 63 65 00 00 00 00 0F 00
>> 01 0B 00 00 00 01 00 00 00 01 31 0C 00 02 0B 00 03 00 00 00 11 61 6C
>> 67 6F 5F 70 72 6F 64 75 63 74 5F 76 69 65 77 00 0C 00 03 0C 00 02 0B
>> 00 01 00 00 00 00 0B 00 02 00 00 00 00, reversed:false, count:100)"
>>
>> I think I forgot something that may have changed in the new phpcassa
>> release. I'm still looking for it but any Idea is welcome :)
>>
>> Alain
>>
>> 2012/5/11 Alain RODRIGUEZ 
>> >
>> > Hi, I guess you finally solved this issue. I'm experimenting the same
>> > one when trying to upgrade to phpcass 1.0.a.1.
>> >
>> > Do you remember how you fixed it or what the problem was exactly ?
>> >
>> > Thanks,
>> >
>> > Alain
>> >
>> > 2011/12/19 Tamil selvan R.S 
>> >
>> >> Hi,
>> >>  We are using PHPCassa to connect to Cassandra 1.0.2. After we
>> >> installed the thrift extension we started noticing the following in the
>> >> error logs. [We didn't notice this when we were running raw thrift library
>> >> with out extension].
>> >>
>> >> ERROR [pool-2-thread-5314] 2011-12-05 20:26:47,729
>> >> CustomTThreadPoolServer.java (line 201) Thrift error occurred during
>> >> processing of message.
>> >> org.apache.thrift.protocol.
>> >> TProtocolException: Missing version in readMessageBegin, old client?
>> >>     at
>> >> org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:213)
>> >>     at
>> >> org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2877)
>> >>     at
>> >> org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:187)
>> >>     at
>> >> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>> >>     at
>> >> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>> >>     at java.lang.Thread.run(Thread.java:722)
>> >>
>> >> Is there any issue with the thrift protocol compatibilty?
>> >>
>> >> Regards,
>> >> Tamil
>> >
>> >
>
>
>
>
> --
> Tyler Hobbs
> DataStax
>


C 1.1 & CQL 2.0 or 3.0?

2012-05-11 Thread cyril auburtin
I have C* 1.1 but it seems to only come with cql 2.0
 INFO 19:35:21,579 Cassandra version: 1.1.0
 INFO 19:35:21,581 Thrift API version: 19.30.0
 INFO 19:35:21,583 CQL supported versions: 2.0.0,3.0.0-beta1 (default:
2.0.0)

The problem is that I would like to create a CF like this:

CREATE COLUMNFAMILY TaggedPosts (
 ... tag text,
 ... post uuid,
 ... blog_rowentries_rowkey text,
 ... PRIMARY KEY (tag, post)
 ... ) WITH COMPACT STORAGE;

and for me (CQL 2.0) it returns this error:

Bad Request: line 6:0 mismatched input ')' expecting EOF

Is it due to the CQL version? How can I upgrade to 3.0, since I already have
the latest Cassandra release?
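
For reference, a sketch of what the replies elsewhere in this digest converge
on (illustrative only, and it assumes the keyspace itself is visible to a CQL 3
session): the compound PRIMARY KEY (tag, post) syntax is CQL 3, so the same
statement has to be issued from cqlsh started with -3 rather than the default
CQL 2 session:

tic@my:~$ cqlsh -3
cqlsh> USE mykeyspace;
cqlsh:mykeyspace> CREATE COLUMNFAMILY TaggedPosts (
              ...   tag text,
              ...   post uuid,
              ...   blog_rowentries_rowkey text,
              ...   PRIMARY KEY (tag, post)
              ... ) WITH COMPACT STORAGE;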


Re: cassandra 1.0.9 error - "Read an invalid frame size of 0"

2012-05-11 Thread Gurpreet Singh
This is hampering our testing of Cassandra a lot, as well as our move to Cassandra
1.0.9.
Has anyone seen this before? Should I be trying a different version of
cassandra?

/G

On Thu, May 10, 2012 at 11:29 PM, Gurpreet Singh
wrote:

> Hi,
> I have created a 1-node cluster of Cassandra 1.0.9. I am setting this up for
> testing reads/writes.
>
> I am seeing the following error in the server system.log
>
> ERROR [Selector-Thread-7] 2012-05-10 22:44:02,607 TNonblockingServer.java
> (line 467) Read an invalid frame size of 0. Are you using TFramedTransport
> on the client side?
>
> Initially I was using an old hector 0.7.x, but even after switching to
> hector 1.0-5 and thrift version 0.6.1, I still see this error.
> I am using 20 threads writing/reading from cassandra. The max write batch
> size is 10, with a constant payload size of 600 bytes per key.
>
> On the client side, I see Hector exceptions happening coinciding with
> these messages on the server.
>
> Any ideas why these errors are happening?
>
> Thanks
> Gurpreet
>
>


Re: Thrift error occurred during processing of message

2012-05-11 Thread Tyler Hobbs
Can you paste a snippet of your code showing how you're creating the
ColumnSlice and calling get()?

On Fri, May 11, 2012 at 8:16 AM, Alain RODRIGUEZ  wrote:

> I got the error above in cassandra logs.
>
> In my web browser I have the following error :
>
> "500 | Internal Server Error | TApplicationException
>
> Required field 'reversed' was not found in serialized data! Struct:
> SliceRange(start:80 01 00 01 00 00 00 0E 6D 75 6C 74 69 67 65 74 5F 73
> 6C 69 63 65 00 00 00 00 0F 00 01 0B 00 00 00 01 00 00 00 01 31 0C 00
> 02 0B 00 03 00 00 00 11 61 6C 67 6F 5F 70 72 6F 64 75 63 74 5F 76 69
> 65 77 00 0C 00 03 0C 00 02 0B 00 01 00 00 00 00, finish:80 01 00 01 00
> 00 00 0E 6D 75 6C 74 69 67 65 74 5F 73 6C 69 63 65 00 00 00 00 0F 00
> 01 0B 00 00 00 01 00 00 00 01 31 0C 00 02 0B 00 03 00 00 00 11 61 6C
> 67 6F 5F 70 72 6F 64 75 63 74 5F 76 69 65 77 00 0C 00 03 0C 00 02 0B
> 00 01 00 00 00 00 0B 00 02 00 00 00 00, reversed:false, count:100)"
>
> I think I forgot something that may have changed in the new phpcassa
> release. I'm still looking for it but any Idea is welcome :)
>
> Alain
>
> 2012/5/11 Alain RODRIGUEZ 
> >
> > Hi, I guess you finally solved this issue. I'm experimenting the same
> one when trying to upgrade to phpcass 1.0.a.1.
> >
> > Do you remember how you fixed it or what the problem was exactly ?
> >
> > Thanks,
> >
> > Alain
> >
> > 2011/12/19 Tamil selvan R.S 
> >
> >> Hi,
> >>  We are using PHPCassa to connect to Cassandra 1.0.2. After we
> installed the thrift extension we started noticing the following in the
> error logs. [We didn't notice this when we were running raw thrift library
> with out extension].
> >>
> >> ERROR [pool-2-thread-5314] 2011-12-05 20:26:47,729
> CustomTThreadPoolServer.java (line 201) Thrift error occurred during
> processing of message.
> >> org.apache.thrift.protocol.
> >> TProtocolException: Missing version in readMessageBegin, old client?
> >> at
> org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:213)
> >> at
> org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2877)
> >> at
> org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:187)
> >> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> >> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> >> at java.lang.Thread.run(Thread.java:722)
> >>
> >> Is there any issue with the thrift protocol compatibilty?
> >>
> >> Regards,
> >> Tamil
> >
> >
>



-- 
Tyler Hobbs
DataStax 


Select on indexed columns and with IN clause for the PRIMARY KEY

2012-05-11 Thread Roland Mechler
I am using C* 1.1 and CQL 3.0. I am trying to do a select with an IN clause
for the primary key and on an indexed column, which appears not to be
supported:

cqlsh:Keyspace1> SELECT * FROM TestTable WHERE id IN ('1', '2') AND
data = 'b';
Bad Request: Select on indexed columns and with IN clause for the
PRIMARY KEY are not supported

Any chance this will be supported in the future?


Full example:

cqlsh:Keyspace1> CREATE TABLE TestTable (id text PRIMARY KEY, data text);
cqlsh:Keyspace1> CREATE INDEX ON TestTable (data);
cqlsh:Keyspace1> INSERT INTO TestTable (id, data) VALUES ('1', 'a');
cqlsh:Keyspace1> INSERT INTO TestTable (id, data) VALUES ('2', 'b');
cqlsh:Keyspace1> INSERT INTO TestTable (id, data) VALUES ('3', 'b');
cqlsh:Keyspace1> SELECT * FROM TestTable WHERE id IN ('1', '2');
 id | data
----+------
  1 |    a
  2 |    b

cqlsh:Keyspace1> SELECT * FROM TestTable WHERE data = 'b';
 id | data
----+------
  3 |    b
  2 |    b

cqlsh:Keyspace1> SELECT * FROM TestTable WHERE id IN ('1', '2') AND data =
'b';
Bad Request: Select on indexed columns and with IN clause for the PRIMARY
KEY are not supported
cqlsh:Keyspace1>

-Roland
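
Until that combination is supported, a possible workaround (sketched against
the same schema, using only the two queries already shown working above) is
to apply one predicate server-side and the other client-side:

-- fetch by the indexed column, then keep only ids '1' and '2' in the client:
SELECT * FROM TestTable WHERE data = 'b';

-- or fetch the candidate keys, then keep only rows with data = 'b' in the client:
SELECT * FROM TestTable WHERE id IN ('1', '2');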


Re: Cassandra backup question regarding commitlogs

2012-05-11 Thread Vijay
The incremental backups are generated when the flush is complete (only
during the flush). If the node crashes before the flush completes, then the
commit logs on the local node are the only backup for the data still in memory.
It wouldn't help to copy the commit logs across because they are not
immutable (they are recycled).

There is commit log backup in 1.1.1 (yet to be released):
https://issues.apache.org/jira/browse/CASSANDRA-3690

Thanks,




On Sun, Apr 29, 2012 at 3:29 PM, Roshan  wrote:

> Hi
>
> Currently I am taking daily snapshot on my keyspace in production and
> already enable the incremental backups as well.
>
> According to the documentation, the incremental backup option will create
> an
> hard-link to the backup folder when new sstable is flushed. Snapshot will
> copy all the data/index/etc. files to a new folder.
>
> Question:
> What will happen (with enabling the incremental backup) when crash (due to
> any reason) the Cassandra before flushing the data as a SSTable (inserted
> data still in commitlog). In this case how can I backup/restore data?
>
> Do I need to backup the commitlogs as well and and replay during the server
> start to restore the data in commitlog files?
>
> Thanks.
>
> --
> View this message in context:
> http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Cassandra-backup-question-regarding-commitlogs-tp7511918.html
> Sent from the cassandra-u...@incubator.apache.org mailing list archive at
> Nabble.com.
>
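
A practical consequence of the above, as a sketch (standard nodetool
commands; the host and keyspace name are placeholders): flushing right
before the snapshot pushes memtable contents into SSTables, so the snapshot
itself does not depend on the commit log, and only writes arriving after the
flush would need the 1.1.1 commit log backup to be recoverable.

# flush memtables to SSTables first (these also land in the incremental backups dir)
nodetool -h 127.0.0.1 flush MyKeyspace
# then take the daily snapshot
nodetool -h 127.0.0.1 snapshot MyKeyspace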


Re: primary keys query

2012-05-11 Thread cyril auburtin
I was thinking of a CF with many, many rows containing id, type, latitude and
longitude (indexed), and doing geolocation queries: type=all and lat < 43 and
lat > 42.9 and lon < 7.3 and lon > 7.2,

where all rows have type=all
(at least to try how Cassandra deals with that).
So it seems it's not a good idea to use Cassandra like that?

There's also the possibility of doing, in parallel, another CF with latitude in
the rows, which will be sorted, so an indexed query can give us the right
latitude range, and then just query with longitude < and >.

What do you think of that?

thanks
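
For what it's worth, the query shape described above would look like this in
cassandra-cli syntax (a sketch only; 'locations' and the column names are
placeholders, and the value handling depends on the comparator/validation
classes configured for the CF):

[default@Keyspace1] get locations where type=all and latitude>42.9 and latitude<43 and longitude>7.2 and longitude<7.3;

Note that, per Dave's point below, if every row has type=all the EQ clause
does not actually narrow the candidate set, which is exactly why this pattern
is risky on a large data set.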

2012/5/11 Dave Brosius 

> Inequalities on secondary indices are always done in memory, so without at
> least one EQ on another secondary index you will be loading every row in
> the database, which with a massive database isn't a good idea. So by
> requiring at least one EQ on an index, you hopefully limit the set of rows
> that need to be read into memory to a manageable size. Although obviously
> you can still get into trouble with that as well.
>
>
>
>
> On 05/11/2012 09:39 AM, cyril auburtin wrote:
>
>> Sorry for askign that
>> but Why is it necessary to always have at least one EQ comparison
>>
>> [default@Keyspace1] get test where birth_year>1985;
>>No indexed columns present in index clause with operator EQ
>>
>> It oblige to have one dummy indexed column, to do this query
>>
>> [default@Keyspace1] get test where tag=sea and birth_year>1985;
>> ---
>> RowKey: sam
>> => (column=birth_year, value=1988, timestamp=1336742346059000)
>>
>>
>>
>


RE: Cassandra stucks

2012-05-11 Thread Pavel Polushkin
Hello, 

Actually there are no problems with JMX; it works fine when the node is in
the UP state. But after a while the cluster goes into an inadequate state. For
now it seems to be a bug in connection handling in Cassandra.

Pavel.

 

From: Madalina Matei [mailto:madalinaima...@gmail.com] 
Sent: Friday, May 11, 2012 20:03
To: user@cassandra.apache.org
Subject: Re: Cassandra stucks

 

Check your JMX port in cassandra-env.sh and see if that's open. 

 

Also if you have enabled 

 

 JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname="

 

and you are using an ip address for -Djava.rmi.server.hostname make sure
that is the correct ip.

 

 

On 11 May 2012, at 16:42, Pavel Polushkin wrote:





No We are using dedicated phisical hardware. Currently we have 5 nodes.

 

From: Madalina Matei [mailto:madalinaima...@gmail.com] 
Sent: Friday, May 11, 2012 19:40
To: user@cassandra.apache.org
Subject: Re: Cassandra stucks

 

Are you using EC2 ?

 

On 11 May 2012, at 16:13, Pavel Polushkin wrote:






We use 1.0.8 version.

 

From: David Leimbach [mailto:leim...@gmail.com] 
Sent: Friday, May 11, 2012 18:48
To: user@cassandra.apache.org
Subject: Re: Cassandra stucks

 

What's the version number of Cassandra?

On Fri, May 11, 2012 at 7:38 AM, Pavel Polushkin 
wrote:

Hello,

 

We faced with a strange problem while testing performance on Cassandra
cluster. After some time all nodes went to down state for several days.
Now all nodes went back to up state and only one node still down.

 

Nodetool on down node throws exception:

Error connection to remote JMX agent!

java.io.IOException: Failed to retrieve RMIServer stub:
javax.naming.CommunicationException [Root exception is
java.rmi.ConnectIOException: error during JRMP connection establishment;
nested exception is:

java.net.SocketTimeoutException: Read timed out]

at
javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:340)

at
javax.management.remote.JMXConnectorFactory.connect(JMXConnectorFactory.
java:248)

at
org.apache.cassandra.tools.NodeProbe.connect(NodeProbe.java:144)

at
org.apache.cassandra.tools.NodeProbe.(NodeProbe.java:114)

at org.apache.cassandra.tools.NodeCmd.main(NodeCmd.java:623)

Caused by: javax.naming.CommunicationException [Root exception is
java.rmi.ConnectIOException: error during JRMP connection establishment;
nested exception is:

java.net.SocketTimeoutException: Read timed out]

at
com.sun.jndi.rmi.registry.RegistryContext.lookup(RegistryContext.java:10
1)

at
com.sun.jndi.toolkit.url.GenericURLContext.lookup(GenericURLContext.java
:185)

at javax.naming.InitialContext.lookup(InitialContext.java:392)

at
javax.management.remote.rmi.RMIConnector.findRMIServerJNDI(RMIConnector.
java:1888)

at
javax.management.remote.rmi.RMIConnector.findRMIServer(RMIConnector.java
:1858)

at
javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:257)

... 4 more

Caused by: java.rmi.ConnectIOException: error during JRMP connection
establishment; nested exception is:

java.net.SocketTimeoutException: Read timed out

at
sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:286)

at
sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:184)

at sun.rmi.server.UnicastRef.newCall(UnicastRef.java:322)

at sun.rmi.registry.RegistryImpl_Stub.lookup(Unknown Source)

at
com.sun.jndi.rmi.registry.RegistryContext.lookup(RegistryContext.java:97
)

... 9 more

Caused by: java.net.SocketTimeoutException: Read timed out

at java.net.SocketInputStream.socketRead0(Native Method)

at java.net.SocketInputStream.read(SocketInputStream.java:129)

at
java.io.BufferedInputStream.fill(BufferedInputStream.java:218)

at
java.io.BufferedInputStream.read(BufferedInputStream.java:237)

at java.io.DataInputStream.readByte(DataInputStream.java:248)

at
sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:228)

... 13 more

 

In system log of down node unlimited list of such errors:

INFO [GossipStage:1] 2012-05-10 23:18:27,579 Gossiper.java (line 804)
InetAddress /172.15.2.161 is now UP INFO [GossipStage:1] 2012-05-10
23:18:27,580 Gossiper.java (line 804) InetAddress /172.15.2.162 is now
UP INFO [GossipStage:1] 2012-05-10 23:18:27,580 Gossiper.java (line 804)
InetAddress /172.15.2.163 is now UP INFO [GossipStage:1] 2012-05-10
23:18:27,580 Gossiper.java (line 804) InetAddress /172.15.2.165 is now
UP INFO [GossipTasks:1] 2012-05-10 23:18:29,291 Gossiper.java (line 818)
InetAddress /172.15.2.161 is now dead.

INFO [GossipTasks:1] 2012-05-10 23:18:29,291 Gossiper.java (line 818)
InetAddress /172.15.2.165 is now dead.

INFO [GossipTasks:1] 2012-05-10 23:18:29,291 Gossiper.java (line 818)
InetAddress /172.15.2.162 is now dead.

INFO [GossipTasks:1] 2012-05-10 23:18:29,291 Gossiper.java (line 818)
InetAddress /172.15.2.163 

Re: Losing key cache on restart

2012-05-11 Thread Omid Aladini
Hey,

Sorry for the late response.

On Wed, Apr 25, 2012 at 1:36 AM, aaron morton  wrote:
> - Cassandra log reports 12,955,585 of them have been saved on the last save
> events.
>
> Has their been much activity between saves ?

For testing I set the key_cache_save_period to a short period of 10
minutes. How would this affect the result? Does saving the cache also
prune duplicate in-memory elements?

> Nothing jumps out. There is a setting for the max entries to store, but this
> only applies to the row cache. Can you reproduce issue in a dev environment
> ?

So far I haven't been able to reproduce this in a development environment.

> When running, the key cache holds keys of the form (DecoratedKey, sstable), so
> there is an entry for each SSTable the key appears in. When saved only the
> DecoratedKey's are stored, and the key cache is rebuilt on startup when
> iterating over the index files. e.g. if you have 12 entries in the keycache,
> it may only be 4 unique keys and that is all that is written when saving the
> cache.

If you have 12 entries for the same key on 12 sstables, this means
your data is spread across 12 sstables. How could Cassandra
deduplicate them to 4?

I'll try to reproduce/debug it as well.

Thanks,
Omid

>
> From a quick look at the code it looks like the code is writing all
> DecoratedKeys , not just the unique ones. This may be mucking up the
> reported numbers, I'll take a look later.
>
> If you can reproduce it simply it would help.
>
> Cheers
>
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 25/04/2012, at 1:11 AM, Omid Aladini wrote:
>
> Hi,
>
> I'm experiencing losing a part of key cache on restart on Cassandra 1.0.7.
> For example:
>
> - cfstats reports key cache size of 13,040,502 with capacity of 15,000,000.
> - Cassandra log reports 12,955,585 of them have been saved on the last save
> events.
> - On restart Cassandra reads saved cache.
> - cfstats reports key cache size of only 2,833,586 with correct capacity of
> 15,000,000.
>
> There is no sign that the cache size is reduced due to memory pressure. The
> key cache capacity is set manually via cassandra-cli.
>
> Has anyone else encountered this problem or is it a known issue?
>
> Thanks,
> Omid
>
>


Re: Cassandra stucks

2012-05-11 Thread Madalina Matei
Check your JMX port in cassandra-env.sh and see if that's open. 

Also if you have enabled 

 JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname="

and you are using an ip address for -Djava.rmi.server.hostname make sure that 
is the correct ip.
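
A quick sketch of those checks on the affected node (the port and paths are
the usual defaults and may differ per install):

grep -E 'JMX_PORT|jmxremote' conf/cassandra-env.sh   # default JMX port is 7199
netstat -ltn | grep 7199                             # is anything listening on it?
nodetool -h 127.0.0.1 -p 7199 ring                   # does a local JMX connection succeed?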


On 11 May 2012, at 16:42, Pavel Polushkin wrote:

> No We are using dedicated phisical hardware. Currently we have 5 nodes.
>  
> From: Madalina Matei [mailto:madalinaima...@gmail.com] 
> Sent: Friday, May 11, 2012 19:40
> To: user@cassandra.apache.org
> Subject: Re: Cassandra stucks
>  
> Are you using EC2 ?
>  
> On 11 May 2012, at 16:13, Pavel Polushkin wrote:
> 
> 
> We use 1.0.8 version.
>  
> From: David Leimbach [mailto:leim...@gmail.com] 
> Sent: Friday, May 11, 2012 18:48
> To: user@cassandra.apache.org
> Subject: Re: Cassandra stucks
>  
> What's the version number of Cassandra?
> 
> On Fri, May 11, 2012 at 7:38 AM, Pavel Polushkin  
> wrote:
> Hello,
> 
>  
> 
> We faced with a strange problem while testing performance on Cassandra 
> cluster. After some time all nodes went to down state for several days. Now 
> all nodes went back to up state and only one node still down.
> 
>  
> 
> Nodetool on down node throws exception:
> 
> Error connection to remote JMX agent!
> 
> java.io.IOException: Failed to retrieve RMIServer stub: 
> javax.naming.CommunicationException [Root exception is 
> java.rmi.ConnectIOException: error during JRMP connection establishment; 
> nested exception is:
> 
> java.net.SocketTimeoutException: Read timed out]
> 
> at 
> javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:340)
> 
> at 
> javax.management.remote.JMXConnectorFactory.connect(JMXConnectorFactory.java:248)
> 
> at org.apache.cassandra.tools.NodeProbe.connect(NodeProbe.java:144)
> 
> at org.apache.cassandra.tools.NodeProbe.(NodeProbe.java:114)
> 
> at org.apache.cassandra.tools.NodeCmd.main(NodeCmd.java:623)
> 
> Caused by: javax.naming.CommunicationException [Root exception is 
> java.rmi.ConnectIOException: error during JRMP connection establishment; 
> nested exception is:
> 
> java.net.SocketTimeoutException: Read timed out]
> 
> at 
> com.sun.jndi.rmi.registry.RegistryContext.lookup(RegistryContext.java:101)
> 
> at 
> com.sun.jndi.toolkit.url.GenericURLContext.lookup(GenericURLContext.java:185)
> 
> at javax.naming.InitialContext.lookup(InitialContext.java:392)
> 
> at 
> javax.management.remote.rmi.RMIConnector.findRMIServerJNDI(RMIConnector.java:1888)
> 
> at 
> javax.management.remote.rmi.RMIConnector.findRMIServer(RMIConnector.java:1858)
> 
> at 
> javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:257)
> 
> ... 4 more
> 
> Caused by: java.rmi.ConnectIOException: error during JRMP connection 
> establishment; nested exception is:
> 
> java.net.SocketTimeoutException: Read timed out
> 
> at 
> sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:286)
> 
> at sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:184)
> 
> at sun.rmi.server.UnicastRef.newCall(UnicastRef.java:322)
> 
> at sun.rmi.registry.RegistryImpl_Stub.lookup(Unknown Source)
> 
> at 
> com.sun.jndi.rmi.registry.RegistryContext.lookup(RegistryContext.java:97)
> 
> ... 9 more
> 
> Caused by: java.net.SocketTimeoutException: Read timed out
> 
> at java.net.SocketInputStream.socketRead0(Native Method)
> 
> at java.net.SocketInputStream.read(SocketInputStream.java:129)
> 
> at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
> 
> at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
> 
> at java.io.DataInputStream.readByte(DataInputStream.java:248)
> 
> at 
> sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:228)
> 
> ... 13 more
> 
>  
> 
> In system log of down node unlimited list of such errors:
> 
> INFO [GossipStage:1] 2012-05-10 23:18:27,579 Gossiper.java (line 804) 
> InetAddress /172.15.2.161 is now UP INFO [GossipStage:1] 2012-05-10 
> 23:18:27,580 Gossiper.java (line 804) InetAddress /172.15.2.162 is now UP 
> INFO [GossipStage:1] 2012-05-10 23:18:27,580 Gossiper.java (line 804) 
> InetAddress /172.15.2.163 is now UP INFO [GossipStage:1] 2012-05-10 
> 23:18:27,580 Gossiper.java (line 804) InetAddress /172.15.2.165 is now UP 
> INFO [GossipTasks:1] 2012-05-10 23:18:29,291 Gossiper.java (line 818) 
> InetAddress /172.15.2.161 is now dead.
> 
> INFO [GossipTasks:1] 2012-05-10 23:18:29,291 Gossiper.java (line 818) 
> InetAddress /172.15.2.165 is now dead.
> 
> INFO [GossipTasks:1] 2012-05-10 23:18:29,291 Gossiper.java (line 818) 
> InetAddress /172.15.2.162 is now dead.
> 
> INFO [GossipTasks:1] 2012-05-10 23:18:29,291 Gossiper.java (line 818) 
> InetAddress /172.15.2.163 is now dead.
> 
> INFO [GossipStage:1] 2012-05-10 23:18:29,291 Gossiper.java (line 804) 
> InetAddress /172.15.

RE: Cassandra stucks

2012-05-11 Thread Pavel Polushkin
No. We are using dedicated physical hardware. Currently we have 5 nodes.

 

From: Madalina Matei [mailto:madalinaima...@gmail.com] 
Sent: Friday, May 11, 2012 19:40
To: user@cassandra.apache.org
Subject: Re: Cassandra stucks

 

Are you using EC2 ?

 

On 11 May 2012, at 16:13, Pavel Polushkin wrote:





We use 1.0.8 version.

 

From: David Leimbach [mailto:leim...@gmail.com] 
Sent: Friday, May 11, 2012 18:48
To: user@cassandra.apache.org
Subject: Re: Cassandra stucks

 

What's the version number of Cassandra?

On Fri, May 11, 2012 at 7:38 AM, Pavel Polushkin 
wrote:

Hello,

 

We faced with a strange problem while testing performance on Cassandra
cluster. After some time all nodes went to down state for several days.
Now all nodes went back to up state and only one node still down.

 

Nodetool on down node throws exception:

Error connection to remote JMX agent!

java.io.IOException: Failed to retrieve RMIServer stub:
javax.naming.CommunicationException [Root exception is
java.rmi.ConnectIOException: error during JRMP connection establishment;
nested exception is:

java.net.SocketTimeoutException: Read timed out]

at
javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:340)

at
javax.management.remote.JMXConnectorFactory.connect(JMXConnectorFactory.
java:248)

at
org.apache.cassandra.tools.NodeProbe.connect(NodeProbe.java:144)

at
org.apache.cassandra.tools.NodeProbe.(NodeProbe.java:114)

at org.apache.cassandra.tools.NodeCmd.main(NodeCmd.java:623)

Caused by: javax.naming.CommunicationException [Root exception is
java.rmi.ConnectIOException: error during JRMP connection establishment;
nested exception is:

java.net.SocketTimeoutException: Read timed out]

at
com.sun.jndi.rmi.registry.RegistryContext.lookup(RegistryContext.java:10
1)

at
com.sun.jndi.toolkit.url.GenericURLContext.lookup(GenericURLContext.java
:185)

at javax.naming.InitialContext.lookup(InitialContext.java:392)

at
javax.management.remote.rmi.RMIConnector.findRMIServerJNDI(RMIConnector.
java:1888)

at
javax.management.remote.rmi.RMIConnector.findRMIServer(RMIConnector.java
:1858)

at
javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:257)

... 4 more

Caused by: java.rmi.ConnectIOException: error during JRMP connection
establishment; nested exception is:

java.net.SocketTimeoutException: Read timed out

at
sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:286)

at
sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:184)

at sun.rmi.server.UnicastRef.newCall(UnicastRef.java:322)

at sun.rmi.registry.RegistryImpl_Stub.lookup(Unknown Source)

at
com.sun.jndi.rmi.registry.RegistryContext.lookup(RegistryContext.java:97
)

... 9 more

Caused by: java.net.SocketTimeoutException: Read timed out

at java.net.SocketInputStream.socketRead0(Native Method)

at java.net.SocketInputStream.read(SocketInputStream.java:129)

at
java.io.BufferedInputStream.fill(BufferedInputStream.java:218)

at
java.io.BufferedInputStream.read(BufferedInputStream.java:237)

at java.io.DataInputStream.readByte(DataInputStream.java:248)

at
sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:228)

... 13 more

 

In system log of down node unlimited list of such errors:

INFO [GossipStage:1] 2012-05-10 23:18:27,579 Gossiper.java (line 804)
InetAddress /172.15.2.161 is now UP INFO [GossipStage:1] 2012-05-10
23:18:27,580 Gossiper.java (line 804) InetAddress /172.15.2.162 is now
UP INFO [GossipStage:1] 2012-05-10 23:18:27,580 Gossiper.java (line 804)
InetAddress /172.15.2.163 is now UP INFO [GossipStage:1] 2012-05-10
23:18:27,580 Gossiper.java (line 804) InetAddress /172.15.2.165 is now
UP INFO [GossipTasks:1] 2012-05-10 23:18:29,291 Gossiper.java (line 818)
InetAddress /172.15.2.161 is now dead.

INFO [GossipTasks:1] 2012-05-10 23:18:29,291 Gossiper.java (line 818)
InetAddress /172.15.2.165 is now dead.

INFO [GossipTasks:1] 2012-05-10 23:18:29,291 Gossiper.java (line 818)
InetAddress /172.15.2.162 is now dead.

INFO [GossipTasks:1] 2012-05-10 23:18:29,291 Gossiper.java (line 818)
InetAddress /172.15.2.163 is now dead.

INFO [GossipStage:1] 2012-05-10 23:18:29,291 Gossiper.java (line 804)
InetAddress /172.15.2.161 is now UP INFO [GossipStage:1] 2012-05-10
23:18:29,292 Gossiper.java (line 804) InetAddress /172.15.2.162 is now
UP INFO [GossipStage:1] 2012-05-10 23:18:29,292 Gossiper.java (line 804)
InetAddress /172.15.2.163 is now UP INFO [GossipStage:1] 2012-05-10
23:18:29,292 Gossiper.java (line 804) InetAddress /172.15.2.165 is now
UP

 

The suspicious fact is that on this node we have several tcp connections
to other nodes 7000 port in CLOSE_WAIT state:

Active Internet connections (servers and established)

Proto Recv-Q Send-Q Local Address   Foreign Address
State

tcp   

Re: Cassandra stucks

2012-05-11 Thread Madalina Matei
Are you using EC2?

On 11 May 2012, at 16:13, Pavel Polushkin wrote:

> We use 1.0.8 version.
>  
> From: David Leimbach [mailto:leim...@gmail.com] 
> Sent: Friday, May 11, 2012 18:48
> To: user@cassandra.apache.org
> Subject: Re: Cassandra stucks
>  
> What's the version number of Cassandra?
> 
> On Fri, May 11, 2012 at 7:38 AM, Pavel Polushkin  
> wrote:
> Hello,
> 
>  
> 
> We faced with a strange problem while testing performance on Cassandra 
> cluster. After some time all nodes went to down state for several days. Now 
> all nodes went back to up state and only one node still down.
> 
>  
> 
> Nodetool on down node throws exception:
> 
> Error connection to remote JMX agent!
> 
> java.io.IOException: Failed to retrieve RMIServer stub: 
> javax.naming.CommunicationException [Root exception is 
> java.rmi.ConnectIOException: error during JRMP connection establishment; 
> nested exception is:
> 
> java.net.SocketTimeoutException: Read timed out]
> 
> at 
> javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:340)
> 
> at 
> javax.management.remote.JMXConnectorFactory.connect(JMXConnectorFactory.java:248)
> 
> at org.apache.cassandra.tools.NodeProbe.connect(NodeProbe.java:144)
> 
> at org.apache.cassandra.tools.NodeProbe.(NodeProbe.java:114)
> 
> at org.apache.cassandra.tools.NodeCmd.main(NodeCmd.java:623)
> 
> Caused by: javax.naming.CommunicationException [Root exception is 
> java.rmi.ConnectIOException: error during JRMP connection establishment; 
> nested exception is:
> 
> java.net.SocketTimeoutException: Read timed out]
> 
> at 
> com.sun.jndi.rmi.registry.RegistryContext.lookup(RegistryContext.java:101)
> 
> at 
> com.sun.jndi.toolkit.url.GenericURLContext.lookup(GenericURLContext.java:185)
> 
> at javax.naming.InitialContext.lookup(InitialContext.java:392)
> 
> at 
> javax.management.remote.rmi.RMIConnector.findRMIServerJNDI(RMIConnector.java:1888)
> 
> at 
> javax.management.remote.rmi.RMIConnector.findRMIServer(RMIConnector.java:1858)
> 
> at 
> javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:257)
> 
> ... 4 more
> 
> Caused by: java.rmi.ConnectIOException: error during JRMP connection 
> establishment; nested exception is:
> 
> java.net.SocketTimeoutException: Read timed out
> 
> at 
> sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:286)
> 
> at sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:184)
> 
> at sun.rmi.server.UnicastRef.newCall(UnicastRef.java:322)
> 
> at sun.rmi.registry.RegistryImpl_Stub.lookup(Unknown Source)
> 
> at 
> com.sun.jndi.rmi.registry.RegistryContext.lookup(RegistryContext.java:97)
> 
> ... 9 more
> 
> Caused by: java.net.SocketTimeoutException: Read timed out
> 
> at java.net.SocketInputStream.socketRead0(Native Method)
> 
> at java.net.SocketInputStream.read(SocketInputStream.java:129)
> 
> at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
> 
> at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
> 
> at java.io.DataInputStream.readByte(DataInputStream.java:248)
> 
> at 
> sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:228)
> 
> ... 13 more
> 
>  
> 
> In system log of down node unlimited list of such errors:
> 
> INFO [GossipStage:1] 2012-05-10 23:18:27,579 Gossiper.java (line 804) 
> InetAddress /172.15.2.161 is now UP INFO [GossipStage:1] 2012-05-10 
> 23:18:27,580 Gossiper.java (line 804) InetAddress /172.15.2.162 is now UP 
> INFO [GossipStage:1] 2012-05-10 23:18:27,580 Gossiper.java (line 804) 
> InetAddress /172.15.2.163 is now UP INFO [GossipStage:1] 2012-05-10 
> 23:18:27,580 Gossiper.java (line 804) InetAddress /172.15.2.165 is now UP 
> INFO [GossipTasks:1] 2012-05-10 23:18:29,291 Gossiper.java (line 818) 
> InetAddress /172.15.2.161 is now dead.
> 
> INFO [GossipTasks:1] 2012-05-10 23:18:29,291 Gossiper.java (line 818) 
> InetAddress /172.15.2.165 is now dead.
> 
> INFO [GossipTasks:1] 2012-05-10 23:18:29,291 Gossiper.java (line 818) 
> InetAddress /172.15.2.162 is now dead.
> 
> INFO [GossipTasks:1] 2012-05-10 23:18:29,291 Gossiper.java (line 818) 
> InetAddress /172.15.2.163 is now dead.
> 
> INFO [GossipStage:1] 2012-05-10 23:18:29,291 Gossiper.java (line 804) 
> InetAddress /172.15.2.161 is now UP INFO [GossipStage:1] 2012-05-10 
> 23:18:29,292 Gossiper.java (line 804) InetAddress /172.15.2.162 is now UP 
> INFO [GossipStage:1] 2012-05-10 23:18:29,292 Gossiper.java (line 804) 
> InetAddress /172.15.2.163 is now UP INFO [GossipStage:1] 2012-05-10 
> 23:18:29,292 Gossiper.java (line 804) InetAddress /172.15.2.165 is now UP
> 
>  
> 
> The suspicious fact is that on this node we have several tcp connections to 
> other nodes 7000 port in CLOSE_WAIT state:
> 
> Active Internet connections (servers and established)
> 
> P

RE: Cassandra stucks

2012-05-11 Thread Pavel Polushkin
We use version 1.0.8.

 

From: David Leimbach [mailto:leim...@gmail.com] 
Sent: Friday, May 11, 2012 18:48
To: user@cassandra.apache.org
Subject: Re: Cassandra stucks

 

What's the version number of Cassandra?

On Fri, May 11, 2012 at 7:38 AM, Pavel Polushkin 
wrote:

Hello,

 

We faced with a strange problem while testing performance on Cassandra
cluster. After some time all nodes went to down state for several days.
Now all nodes went back to up state and only one node still down.

 

Nodetool on down node throws exception:

Error connection to remote JMX agent!

java.io.IOException: Failed to retrieve RMIServer stub:
javax.naming.CommunicationException [Root exception is
java.rmi.ConnectIOException: error during JRMP connection establishment;
nested exception is:

java.net.SocketTimeoutException: Read timed out]

at
javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:340)

at
javax.management.remote.JMXConnectorFactory.connect(JMXConnectorFactory.
java:248)

at
org.apache.cassandra.tools.NodeProbe.connect(NodeProbe.java:144)

at
org.apache.cassandra.tools.NodeProbe.(NodeProbe.java:114)

at org.apache.cassandra.tools.NodeCmd.main(NodeCmd.java:623)

Caused by: javax.naming.CommunicationException [Root exception is
java.rmi.ConnectIOException: error during JRMP connection establishment;
nested exception is:

java.net.SocketTimeoutException: Read timed out]

at
com.sun.jndi.rmi.registry.RegistryContext.lookup(RegistryContext.java:10
1)

at
com.sun.jndi.toolkit.url.GenericURLContext.lookup(GenericURLContext.java
:185)

at javax.naming.InitialContext.lookup(InitialContext.java:392)

at
javax.management.remote.rmi.RMIConnector.findRMIServerJNDI(RMIConnector.
java:1888)

at
javax.management.remote.rmi.RMIConnector.findRMIServer(RMIConnector.java
:1858)

at
javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:257)

... 4 more

Caused by: java.rmi.ConnectIOException: error during JRMP connection
establishment; nested exception is:

java.net.SocketTimeoutException: Read timed out

at
sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:286)

at
sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:184)

at sun.rmi.server.UnicastRef.newCall(UnicastRef.java:322)

at sun.rmi.registry.RegistryImpl_Stub.lookup(Unknown Source)

at
com.sun.jndi.rmi.registry.RegistryContext.lookup(RegistryContext.java:97
)

... 9 more

Caused by: java.net.SocketTimeoutException: Read timed out

at java.net.SocketInputStream.socketRead0(Native Method)

at java.net.SocketInputStream.read(SocketInputStream.java:129)

at
java.io.BufferedInputStream.fill(BufferedInputStream.java:218)

at
java.io.BufferedInputStream.read(BufferedInputStream.java:237)

at java.io.DataInputStream.readByte(DataInputStream.java:248)

at
sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:228)

... 13 more

 

In system log of down node unlimited list of such errors:

INFO [GossipStage:1] 2012-05-10 23:18:27,579 Gossiper.java (line 804)
InetAddress /172.15.2.161 is now UP INFO [GossipStage:1] 2012-05-10
23:18:27,580 Gossiper.java (line 804) InetAddress /172.15.2.162 is now
UP INFO [GossipStage:1] 2012-05-10 23:18:27,580 Gossiper.java (line 804)
InetAddress /172.15.2.163 is now UP INFO [GossipStage:1] 2012-05-10
23:18:27,580 Gossiper.java (line 804) InetAddress /172.15.2.165 is now
UP INFO [GossipTasks:1] 2012-05-10 23:18:29,291 Gossiper.java (line 818)
InetAddress /172.15.2.161 is now dead.

INFO [GossipTasks:1] 2012-05-10 23:18:29,291 Gossiper.java (line 818)
InetAddress /172.15.2.165 is now dead.

INFO [GossipTasks:1] 2012-05-10 23:18:29,291 Gossiper.java (line 818)
InetAddress /172.15.2.162 is now dead.

INFO [GossipTasks:1] 2012-05-10 23:18:29,291 Gossiper.java (line 818)
InetAddress /172.15.2.163 is now dead.

INFO [GossipStage:1] 2012-05-10 23:18:29,291 Gossiper.java (line 804)
InetAddress /172.15.2.161 is now UP INFO [GossipStage:1] 2012-05-10
23:18:29,292 Gossiper.java (line 804) InetAddress /172.15.2.162 is now
UP INFO [GossipStage:1] 2012-05-10 23:18:29,292 Gossiper.java (line 804)
InetAddress /172.15.2.163 is now UP INFO [GossipStage:1] 2012-05-10
23:18:29,292 Gossiper.java (line 804) InetAddress /172.15.2.165 is now
UP

 

The suspicious fact is that on this node we have several tcp connections
to other nodes 7000 port in CLOSE_WAIT state:

Active Internet connections (servers and established)

Proto Recv-Q Send-Q Local Address   Foreign Address
State

tcp   869073  0 rcwocas:afs3-fileserver rcwocas03.enkata.:34274
CLOSE_WAIT

tcp   463429  0 rcwocas:afs3-fileserver rcwocas02.enkata.:39654
CLOSE_WAIT

tcp   873838  0 rcwocas:afs3-fileserver rcwocas01.enkata.:49486
CLOSE_WAIT

tcp   860245  0 rcwocas:afs3-fileserver rcwocas05.enkata.:43028
CLOSE

Re: primary keys query

2012-05-11 Thread Dave Brosius
Inequalities on secondary indices are always done in memory, so without 
at least one EQ on another secondary index you will be loading every row 
in the database, which with a massive database isn't a good idea. So by 
requiring at least one EQ on an index, you hopefully limit the set of 
rows that need to be read into memory to a manageable size. Although 
obviously you can still get into trouble with that as well.




On 05/11/2012 09:39 AM, cyril auburtin wrote:

Sorry for askign that
but Why is it necessary to always have at least one EQ comparison

[default@Keyspace1] get test where birth_year>1985;
No indexed columns present in index clause with operator EQ

It oblige to have one dummy indexed column, to do this query

[default@Keyspace1] get test where tag=sea and birth_year>1985;
---
RowKey: sam
=> (column=birth_year, value=1988, timestamp=1336742346059000)






Re: Cassandra stucks

2012-05-11 Thread David Leimbach
What's the version number of Cassandra?

On Fri, May 11, 2012 at 7:38 AM, Pavel Polushkin wrote:

> Hello,
>
> ** **
>
> We faced with a strange problem while testing performance on Cassandra
> cluster. After some time all nodes went to down state for several days. Now
> all nodes went back to up state and only one node still down.
>
> ** **
>
> Nodetool on down node throws exception:
>
> Error connection to remote JMX agent!
>
> java.io.IOException: Failed to retrieve RMIServer stub:
> javax.naming.CommunicationException [Root exception is
> java.rmi.ConnectIOException: error during JRMP connection establishment;
> nested exception is:
>
> java.net.SocketTimeoutException: Read timed out]
>
> at
> javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:340)***
> *
>
> at
> javax.management.remote.JMXConnectorFactory.connect(JMXConnectorFactory.java:248)
> 
>
> at org.apache.cassandra.tools.NodeProbe.connect(NodeProbe.java:144)
> 
>
> at org.apache.cassandra.tools.NodeProbe.(NodeProbe.java:114)
> 
>
> at org.apache.cassandra.tools.NodeCmd.main(NodeCmd.java:623)
>
> Caused by: javax.naming.CommunicationException [Root exception is
> java.rmi.ConnectIOException: error during JRMP connection establishment;
> nested exception is:
>
> java.net.SocketTimeoutException: Read timed out]
>
> at
> com.sun.jndi.rmi.registry.RegistryContext.lookup(RegistryContext.java:101)
> 
>
> at
> com.sun.jndi.toolkit.url.GenericURLContext.lookup(GenericURLContext.java:185)
> 
>
> at javax.naming.InitialContext.lookup(InitialContext.java:392)
>
> at
> javax.management.remote.rmi.RMIConnector.findRMIServerJNDI(RMIConnector.java:1888)
> 
>
> at
> javax.management.remote.rmi.RMIConnector.findRMIServer(RMIConnector.java:1858)
> 
>
> at
> javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:257)***
> *
>
> ... 4 more
>
> Caused by: java.rmi.ConnectIOException: error during JRMP connection
> establishment; nested exception is:
>
> java.net.SocketTimeoutException: Read timed out
>
> at
> sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:286)
>
> at
> sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:184)
>
> at sun.rmi.server.UnicastRef.newCall(UnicastRef.java:322)
>
> at sun.rmi.registry.RegistryImpl_Stub.lookup(Unknown Source)
>
> at
> com.sun.jndi.rmi.registry.RegistryContext.lookup(RegistryContext.java:97)*
> ***
>
> ... 9 more
>
> Caused by: java.net.SocketTimeoutException: Read timed out
>
> at java.net.SocketInputStream.socketRead0(Native Method)
>
> at java.net.SocketInputStream.read(SocketInputStream.java:129)
>
> at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)*
> ***
>
> at java.io.BufferedInputStream.read(BufferedInputStream.java:237)*
> ***
>
> at java.io.DataInputStream.readByte(DataInputStream.java:248)
>
> at
> sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:228)
>
> ... 13 more
>
> ** **
>
> In system log of down node unlimited list of such errors:
>
> INFO [GossipStage:1] 2012-05-10 23:18:27,579 Gossiper.java (line 804) InetAddress /172.15.2.161 is now UP
> INFO [GossipStage:1] 2012-05-10 23:18:27,580 Gossiper.java (line 804) InetAddress /172.15.2.162 is now UP
> INFO [GossipStage:1] 2012-05-10 23:18:27,580 Gossiper.java (line 804) InetAddress /172.15.2.163 is now UP
> INFO [GossipStage:1] 2012-05-10 23:18:27,580 Gossiper.java (line 804) InetAddress /172.15.2.165 is now UP
> INFO [GossipTasks:1] 2012-05-10 23:18:29,291 Gossiper.java (line 818) InetAddress /172.15.2.161 is now dead.
> INFO [GossipTasks:1] 2012-05-10 23:18:29,291 Gossiper.java (line 818) InetAddress /172.15.2.165 is now dead.
> INFO [GossipTasks:1] 2012-05-10 23:18:29,291 Gossiper.java (line 818) InetAddress /172.15.2.162 is now dead.
> INFO [GossipTasks:1] 2012-05-10 23:18:29,291 Gossiper.java (line 818) InetAddress /172.15.2.163 is now dead.
> INFO [GossipStage:1] 2012-05-10 23:18:29,291 Gossiper.java (line 804) InetAddress /172.15.2.161 is now UP
> INFO [GossipStage:1] 2012-05-10 23:18:29,292 Gossiper.java (line 804) InetAddress /172.15.2.162 is now UP
> INFO [GossipStage:1] 2012-05-10 23:18:29,292 Gossiper.java (line 804) InetAddress /172.15.2.163 is now UP
> INFO [GossipStage:1] 2012-05-10 23:18:29,292 Gossiper.java (line 804) InetAddress /172.15.2.165 is now UP
>
> The suspicious fact is that on this node we have several TCP connections
> to the other nodes' port 7000 stuck in the CLOSE_WAIT state:
>
> Active Internet connections (servers and established)
>
> Proto Recv-Q Send-Q Local Address   Foreign Address State
>
> tcp   869073  0 rcwocas

primary keys query

2012-05-11 Thread cyril auburtin
Sorry for asking this, but why is it necessary to always have at least one EQ comparison?

[default@Keyspace1] get test where birth_year>1985;
No indexed columns present in index clause with operator EQ

This obliges you to keep one dummy indexed column just to run the query:

[default@Keyspace1] get test where tag=sea and birth_year>1985;
---
RowKey: sam
=> (column=birth_year, value=1988, timestamp=1336742346059000)
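
As far as I can tell, the indexed EQ clause is what Cassandra uses to drive the index lookup; any other predicates (such as birth_year > 1985) are then applied as filters over the rows the index returns, which is why at least one EQ on an indexed column is required. A minimal column family definition that supports the query above might look like this (a sketch only; the key and validation classes are assumptions):

[default@Keyspace1] create column family test
    with comparator = UTF8Type
    and key_validation_class = UTF8Type
    and column_metadata = [
      {column_name: tag, validation_class: UTF8Type, index_type: KEYS},
      {column_name: birth_year, validation_class: LongType}
    ];

As far as I can tell, birth_year does not need its own index for this to work; it only needs a validation class so the comparison can be evaluated, since the range predicate is applied as a filter over the rows matched by tag=sea.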


Re: Thrift error occurred during processing of message

2012-05-11 Thread Alain RODRIGUEZ
I got the error above in the Cassandra logs.

In my web browser I get the following error:

"500 | Internal Server Error | TApplicationException

Required field 'reversed' was not found in serialized data! Struct:
SliceRange(start:80 01 00 01 00 00 00 0E 6D 75 6C 74 69 67 65 74 5F 73
6C 69 63 65 00 00 00 00 0F 00 01 0B 00 00 00 01 00 00 00 01 31 0C 00
02 0B 00 03 00 00 00 11 61 6C 67 6F 5F 70 72 6F 64 75 63 74 5F 76 69
65 77 00 0C 00 03 0C 00 02 0B 00 01 00 00 00 00, finish:80 01 00 01 00
00 00 0E 6D 75 6C 74 69 67 65 74 5F 73 6C 69 63 65 00 00 00 00 0F 00
01 0B 00 00 00 01 00 00 00 01 31 0C 00 02 0B 00 03 00 00 00 11 61 6C
67 6F 5F 70 72 6F 64 75 63 74 5F 76 69 65 77 00 0C 00 03 0C 00 02 0B
00 01 00 00 00 00 0B 00 02 00 00 00 00, reversed:false, count:100)"

I think I forgot something that may have changed in the new phpcassa
release. I'm still looking into it, but any idea is welcome :)

Alain
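
For what it's worth, the struct the server is complaining about is Thrift's SliceRange, which always carries four fields: start, finish, reversed and count. A tiny sketch building a well-formed predicate through the Thrift-generated Python types (the import path below is an assumption on my part; adjust it to wherever your generated Cassandra bindings live):

# Sketch only: pycassa bundles the Thrift-generated Cassandra types under
# pycassa.cassandra.ttypes; that path is an assumption, adjust as needed.
from pycassa.cassandra.ttypes import SliceRange, SlicePredicate

# A well-formed SliceRange sets all four fields explicitly; the error above
# says 'reversed' was missing from the serialized struct.
slice_range = SliceRange(start='', finish='', reversed=False, count=100)
predicate = SlicePredicate(slice_range=slice_range)
print(predicate)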

2012/5/11 Alain RODRIGUEZ 
>
> Hi, I guess you finally solved this issue. I'm experiencing the same one
> when trying to upgrade to phpcassa 1.0.a.1.
>
> Do you remember how you fixed it or what the problem was exactly?
>
> Thanks,
>
> Alain
>
> 2011/12/19 Tamil selvan R.S 
>
>> Hi,
>>  We are using PHPCassa to connect to Cassandra 1.0.2. After we installed the 
>> thrift extension we started noticing the following in the error logs. [We 
>> didn't notice this when we were running the raw Thrift library without the
>> extension].
>>
>> ERROR [pool-2-thread-5314] 2011-12-05 20:26:47,729 
>> CustomTThreadPoolServer.java (line 201) Thrift error occurred during 
>> processing of message.
>> org.apache.thrift.protocol.TProtocolException: Missing version in readMessageBegin, old client?
>>     at 
>> org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:213)
>>     at 
>> org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2877)
>>     at 
>> org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:187)
>>     at 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>>     at 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>>     at java.lang.Thread.run(Thread.java:722)
>>
>> Is there any issue with the Thrift protocol compatibility?
>>
>> Regards,
>> Tamil
>
>


Re: Thrift error occurred during processing of message

2012-05-11 Thread Alain RODRIGUEZ
Hi, I guess you finally solved this issue. I'm experiencing the same one
when trying to upgrade to phpcassa 1.0.a.1.

Do you remember how you fixed it or what the problem was exactly?

Thanks,

Alain

2011/12/19 Tamil selvan R.S 

> Hi,
>  We are using PHPCassa to connect to Cassandra 1.0.2. After we installed
> the thrift extension we started noticing the following in the error logs.
> [We didn't notice this when we were running the raw Thrift library without the
> extension].
>
> ERROR [pool-2-thread-5314] 2011-12-05 20:26:47,729
> CustomTThreadPoolServer.java (line 201) Thrift error occurred during
> processing of message.
> org.apache.thrift.protocol.TProtocolException: Missing version in readMessageBegin, old client?
> at
> org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:213)
> at
> org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2877)
> at
> org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:187)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> at java.lang.Thread.run(Thread.java:722)
>
> Is there any issue with the Thrift protocol compatibility?
>
> Regards,
> Tamil
>


RE: Behavior on inconsistent reads

2012-05-11 Thread Carpenter, Curt
I (now) understand that the point of this is to return the most recent copy
(at least among the nodes checked) when all replicas simply haven't been
updated with the latest changes yet. But what about dealing with corruption?
What if the most recent copy is corrupt? With a ZooKeeper-based
transaction system on top, corruption is all I'm worried about.

 

From: Dave Brosius [mailto:dbros...@mebigfatguy.com] 
Sent: Thursday, May 10, 2012 10:03 PM



If you read at a consistency level of at least QUORUM, you are guaranteed that
at least one of the nodes that responds has the latest data, and so you get the
right data. If you read at less than QUORUM, it is possible for all of the
nodes that respond to have stale data.
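
A minimal sketch of the arithmetic behind that guarantee (illustration only; it assumes the write was also done at QUORUM, and the replication factors are just example values):

# Quorum overlap illustration: with replication factor RF, a quorum is
# floor(RF/2) + 1 replicas. If both the write and the read use QUORUM,
# the read set and the write set must share at least one replica, so at
# least one responding node has the latest value.
def quorum(rf):
    return rf // 2 + 1

for rf in (3, 5):
    r = w = quorum(rf)
    min_overlap = r + w - rf  # smallest possible intersection of the two sets
    print("RF=%d quorum=%d minimum overlap=%d" % (rf, quorum(rf), min_overlap))
    assert min_overlap >= 1

With R + W > RF the read set and the write set must overlap in at least one replica, and the timestamp comparison then picks that replica's value.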

On 05/10/2012 09:46 PM, Carpenter, Curt wrote: 

Hi all, newbie here. Be gentle.

 

From
http://www.datastax.com/docs/1.0/cluster_architecture/about_client_requests:

"Thus, the coordinator first contacts the replicas specified by the
consistency level. The coordinator will send these requests to the
replicas that are currently responding most promptly. The nodes
contacted will respond with the requested data; if multiple nodes are
contacted, the rows from each replica are compared in memory to see if
they are consistent. If they are not, then the replica that has the most
recent data (based on the timestamp) is used by the coordinator to
forward the result back to the client.

To ensure that all replicas have the most recent version of
frequently-read data, the coordinator also contacts and compares the
data from all the remaining replicas that own the row in the background,
and if they are inconsistent, issues writes to the out-of-date replicas
to update the row to reflect the most recently written values. This
process is known as read repair. Read repair can be configured per
column family (using read_repair_chance), and is enabled by default.

For example, in a cluster with a replication factor of 3, and a read
consistency level of QUORUM, 2 of the 3 replicas for the given row are
contacted to fulfill the read request. Supposing the contacted replicas
had different versions of the row, the replica with the most recent
version would return the requested data. In the background, the third
replica is checked for consistency with the first two, and if needed,
the most recent replica issues a write to the out-of-date replicas."

 

Always returns the most recent? What if the most recent write is
corrupt? I thought the whole point of a quorum was that consistency is
verified before the data is returned to the client. No?

 

Thanks,

 

Curt

 



Re: Keyspace lost after restart

2012-05-11 Thread Conan Cook
Hi Jeff,

Great!  We'll roll back for now, thanks for letting me know.

Conan

On 11 May 2012 10:18, Jeff Williams  wrote:

> Conan,
>
> Good to see I'm not alone in this! I just set up a fresh test cluster. I
> first did a fresh install of 1.1.0 and was able to replicate the issue. I
> then did a fresh install using 1.0.10 and didn't see the issue. So it looks
> like rolling back to 1.0.10 could be the answer for now.
>
> Jeff
>
> On May 11, 2012, at 10:40 AM, Conan Cook wrote:
>
> Hi,
>
> OK we're pretty sure we dropped and re-created the keyspace before
> restarting the Cassandra nodes during some testing (we've been migrating to
> a new cluster).  The keyspace was created via the cli:
>
>
> create keyspace m7
>
>   with placement_strategy = 'NetworkTopologyStrategy'
>
>   and strategy_options = {us-east: 3}
>
>   and durable_writes = true;
>
>
> I'm pretty confident that it's a result of the issue I spotted before:
>
> https://issues.apache.org/jira/browse/CASSANDRA-4219
>
> Does anyone know whether this also affected versions before 1.1.0?  If not
> then we can just roll back until there's a fix; we're not using our cluster
> in production so we can afford to just bin it all and load it again.  +1
> for this being a major issue though, the fact that you can't see it until
> you restart a node makes it quite dangerous, and that node is lost when it
> occurs (I also haven't been able to restore the schema in any way).
>
> Thanks very much,
>
>
> Conan
>
>
>
> On 10 May 2012 17:15, Conan Cook  wrote:
>
>> Hi Aaron,
>>
>> Thanks for getting back to me!  Yes, I believe our keyspace was created
>> prior to 1.1, and I think I also understand why you're asking that, having
>> found this:
>>
>> https://issues.apache.org/jira/browse/CASSANDRA-4219
>>
>> Here's our startup log:
>>
>> https://gist.github.com/2654155
>>
>> There isn't much in there of interest however.  It may well be the case
>> that we created our keyspace, dropped it, then created it again.  The dev
>> responsible for setting it up is ill today, but I'll get back to you
>> tomorrow with exact details of how it was originally created and whether we
>> did definitely drop and re-create it.
>>
>> Ta,
>>
>> Conan
>>
>>
>> On 10 May 2012 11:43, aaron morton  wrote:
>>
>>> Was this a schema that was created prior to 1.1 ?
>>>
>>> What process are you using to create the schema ?
>>>
>>> Can you share the logs from system startup ? Up until it logs "Listening
>>> for thrift clients". (if they are long please link to them)
>>>
>>> Cheers
>>>
>>>   -
>>> Aaron Morton
>>> Freelance Developer
>>> @aaronmorton
>>> http://www.thelastpickle.com
>>>
>>> On 10/05/2012, at 1:04 AM, Conan Cook wrote:
>>>
>>> Sorry, forgot to mention we're running Cassandra 1.1.
>>>
>>> Conan
>>>
>>> On 8 May 2012 17:51, Conan Cook  wrote:
>>>
 Hi Cassandra Folk,

 We've experienced a problem a couple of times where Cassandra nodes
 lose a keyspace after a restart.  We've restarted 2 out of 3 nodes, and
 they have both experienced this problem; clearly we're doing something
 wrong, but we don't know what.  The data files are all still there, as before,
 but the node can't see the keyspace (we only have one).  The nodetool
 still says that each one is responsible for 33% of the keys, but the disk
 usage has dropped to a tiny amount on the nodes that we've restarted.  I
 saw this:


 http://mail-archives.apache.org/mod_mbox/cassandra-user/201202.mbox/%3c4f3582e7.20...@conga.com%3E

 Seems to be exactly our problem, but we have not modified the
 cassandra.yaml - we have overwritten it through an automated process, and
 that happened just before restarting, but the contents did not change.

 Any ideas as to what might cause this, or how the keyspace can be
 restored (like I say, the data is all still in the data directory).

 We're running in AWS.

 Thanks,


 Conan

>>>
>>>
>>>
>>
>
>


Re: Keyspace lost after restart

2012-05-11 Thread Jeff Williams
Conan,

Good to see I'm not alone in this! I just set up a fresh test cluster. I first 
did a fresh install of 1.1.0 and was able to replicate the issue. I then did a 
fresh install using 1.0.10 and didn't see the issue. So it looks like rolling 
back to 1.0.10 could be the answer for now.

Jeff

On May 11, 2012, at 10:40 AM, Conan Cook wrote:

> Hi,
> 
> OK we're pretty sure we dropped and re-created the keyspace before restarting 
> the Cassandra nodes during some testing (we've been migrating to a new 
> cluster).  The keyspace was created via the cli:
> 
> 
> create keyspace m7
> 
>   with placement_strategy = 'NetworkTopologyStrategy'
> 
>   and strategy_options = {us-east: 3}
> 
>   and durable_writes = true;
> 
> I'm pretty confident that it's a result of the issue I spotted before:
> 
> https://issues.apache.org/jira/browse/CASSANDRA-4219 
> 
> Does anyone know whether this also affected versions before 1.1.0?  If not 
> then we can just roll back until there's a fix; we're not using our cluster 
> in production so we can afford to just bin it all and load it again.  +1 for 
> this being a major issue though, the fact that you can't see it until you 
> restart a node makes it quite dangerous, and that node is lost when it occurs 
> (I also haven't been able to restore the schema in any way).
> 
> Thanks very much,
> 
> 
> Conan
> 
> 
> 
> On 10 May 2012 17:15, Conan Cook  wrote:
> Hi Aaron,
> 
> Thanks for getting back to me!  Yes, I believe our keyspace was created prior 
> to 1.1, and I think I also understand why you're asking that, having found 
> this:
> 
> https://issues.apache.org/jira/browse/CASSANDRA-4219 
> 
> Here's our startup log:
> 
> https://gist.github.com/2654155
> 
> There isn't much in there of interest however.  It may well be the case that 
> we created our keyspace, dropped it, then created it again.  The dev 
> responsible for setting it up is ill today, but I'll get back to you tomorrow 
> with exact details of how it was originally created and whether we did 
> definitely drop and re-create it.
> 
> Ta,
> 
> Conan
> 
> 
> On 10 May 2012 11:43, aaron morton  wrote:
> Was this a schema that was created prior to 1.1 ?
> 
> What process are you using to create the schema ? 
> 
> Can you share the logs from system startup ? Up until it logs "Listening for 
> thrift clients". (if they are long please link to them)
> 
> Cheers
> 
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
> 
> On 10/05/2012, at 1:04 AM, Conan Cook wrote:
> 
>> Sorry, forgot to mention we're running Cassandra 1.1.
>> 
>> Conan
>> 
>> On 8 May 2012 17:51, Conan Cook  wrote:
>> Hi Cassandra Folk,
>> 
>> We've experienced a problem a couple of times where Cassandra nodes lose a 
>> keyspace after a restart.  We've restarted 2 out of 3 nodes, and they have 
>> both experienced this problem; clearly we're doing something wrong, but we
>> don't know what.  The data files are all still there, as before, but the
>> node can't see the keyspace (we only have one).  The nodetool still says
>> that each one is responsible for 33% of the keys, but the disk usage has 
>> dropped to a tiny amount on the nodes that we've restarted.  I saw this:
>> 
>> http://mail-archives.apache.org/mod_mbox/cassandra-user/201202.mbox/%3c4f3582e7.20...@conga.com%3E
>> 
>> Seems to be exactly our problem, but we have not modified the cassandra.yaml 
>> - we have overwritten it through an automated process, and that happened 
>> just before restarting, but the contents did not change.
>> 
>> Any ideas as to what might cause this, or how the keyspace can be restored 
>> (like I say, the data is all still in the data directory).
>> 
>> We're running in AWS.
>> 
>> Thanks,
>> 
>> 
>> Conan
>> 
> 
> 
> 



Re: Keyspace lost after restart

2012-05-11 Thread Conan Cook
Hi,

OK we're pretty sure we dropped and re-created the keyspace before
restarting the Cassandra nodes during some testing (we've been migrating to
a new cluster).  The keyspace was created via the cli:

create keyspace m7
  with placement_strategy = 'NetworkTopologyStrategy'
  and strategy_options = {us-east: 3}
  and durable_writes = true;


I'm pretty confident that it's a result of the issue I spotted before:

https://issues.apache.org/jira/browse/CASSANDRA-4219

Does anyone know whether this also affected versions before 1.1.0?  If not
then we can just roll back until there's a fix; we're not using our cluster
in production so we can afford to just bin it all and load it again.  +1
for this being a major issue though, the fact that you can't see it until
you restart a node makes it quite dangerous, and that node is lost when it
occurs (I also haven't been able to restore the schema in any way).

Thanks very much,


Conan



On 10 May 2012 17:15, Conan Cook  wrote:

> Hi Aaron,
>
> Thanks for getting back to me!  Yes, I believe our keyspace was created
> prior to 1.1, and I think I also understand why you're asking that, having
> found this:
>
> https://issues.apache.org/jira/browse/CASSANDRA-4219
>
> Here's our startup log:
>
> https://gist.github.com/2654155
>
> There isn't much in there of interest however.  It may well be the case
> that we created our keyspace, dropped it, then created it again.  The dev
> responsible for setting it up is ill today, but I'll get back to you
> tomorrow with exact details of how it was originally created and whether we
> did definitely drop and re-create it.
>
> Ta,
>
> Conan
>
>
> On 10 May 2012 11:43, aaron morton  wrote:
>
>> Was this a schema that was created prior to 1.1 ?
>>
>> What process are you using to create the schema ?
>>
>> Can you share the logs from system startup ? Up until it logs "Listening
>> for thrift clients". (if they are long please link to them)
>>
>> Cheers
>>
>>   -
>> Aaron Morton
>> Freelance Developer
>> @aaronmorton
>> http://www.thelastpickle.com
>>
>> On 10/05/2012, at 1:04 AM, Conan Cook wrote:
>>
>> Sorry, forgot to mention we're running Cassandra 1.1.
>>
>> Conan
>>
>> On 8 May 2012 17:51, Conan Cook  wrote:
>>
>>> Hi Cassandra Folk,
>>>
>>> We've experienced a problem a couple of times where Cassandra nodes lose
>>> a keyspace after a restart.  We've restarted 2 out of 3 nodes, and they
>>> have both experienced this problem; clearly we're doing something wrong,
>>> but we don't know what.  The data files are all still there, as before, but
>>> the node can't see the keyspace (we only have one).  The nodetool still
>>> says that each one is responsible for 33% of the keys, but the disk usage
>>> has dropped to a tiny amount on the nodes that we've restarted.  I saw this:
>>>
>>>
>>> http://mail-archives.apache.org/mod_mbox/cassandra-user/201202.mbox/%3c4f3582e7.20...@conga.com%3E
>>>
>>> Seems to be exactly our problem, but we have not modified the
>>> cassandra.yaml - we have overwritten it through an automated process, and
>>> that happened just before restarting, but the contents did not change.
>>>
>>> Any ideas as to what might cause this, or how the keyspace can be
>>> restored (like I say, the data is all still in the data directory).
>>>
>>> We're running in AWS.
>>>
>>> Thanks,
>>>
>>>
>>> Conan
>>>
>>
>>
>>
>