Cass 1.1.11 out of memory during compaction?

2013-11-03 Thread Oleg Dulin

Cass 1.1.11 ran out of memory on me with this exception (see below).

My parameters are an 8 GB heap with a 1200 MB new generation.

ERROR [ReadStage:55887] 2013-11-02 23:35:18,419 AbstractCassandraDaemon.java (line 132) Exception in thread Thread[ReadStage:55887,5,main]
java.lang.OutOfMemoryError: Java heap space
   at org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:323)
   at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:398)
   at org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:380)
   at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:88)
   at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:83)
   at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:73)
   at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:37)
   at org.apache.cassandra.db.columniterator.IndexedSliceReader$IndexedBlockFetcher.getNextBlock(IndexedSliceReader.java:179)
   at org.apache.cassandra.db.columniterator.IndexedSliceReader.computeNext(IndexedSliceReader.java:121)
   at org.apache.cassandra.db.columniterator.IndexedSliceReader.computeNext(IndexedSliceReader.java:48)
   at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
   at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
   at org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:116)
   at org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:147)
   at org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:126)
   at org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:100)
   at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
   at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
   at org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:117)
   at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:140)
   at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:292)
   at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:64)
   at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1362)
   at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1224)
   at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1159)
   at org.apache.cassandra.db.Table.getRow(Table.java:378)
   at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:69)
   at org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:51)
   at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:722)


Any thoughts?

This is a dual data center setup, with 4 nodes in each DC and RF=2 in each.
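(A reply later in the thread asks for GC logs. As a minimal sketch, assuming a stock install where JVM options are set in conf/cassandra-env.sh, these standard HotSpot flags capture GC activity and a heap dump at the moment of failure; the log paths are placeholders, not from the original setup.)

```shell
# Hypothetical additions to conf/cassandra-env.sh -- paths are placeholders.
# Log GC activity so full GCs and promotion failures can be correlated with the OOM.
JVM_OPTS="$JVM_OPTS -verbose:gc"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDateStamps"
JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"
# Dump the heap on OutOfMemoryError so the offending allocation can be inspected.
JVM_OPTS="$JVM_OPTS -XX:+HeapDumpOnOutOfMemoryError"
JVM_OPTS="$JVM_OPTS -XX:HeapDumpPath=/var/log/cassandra/"
```

The heap dump can then be opened in a tool such as Eclipse MAT to see what was holding the memory when the read thread died.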


--
Regards,
Oleg Dulin
http://www.olegdulin.com




Bad Request: No indexed columns present in by-columns clause with Equal operator?

2013-11-03 Thread Techy Teck
I have the below table in CQL -

create table test (
    employee_id text,
    employee_name text,
    value text,
    last_modified_date timeuuid,
    primary key (employee_id)
);


I inserted a couple of records into the above table like this, which is
also how I will be inserting them in our actual use case -

insert into test (employee_id, employee_name, value, last_modified_date) values ('1', 'e27', 'some_value', now());
insert into test (employee_id, employee_name, value, last_modified_date) values ('2', 'e27', 'some_new_value', now());
insert into test (employee_id, employee_name, value, last_modified_date) values ('3', 'e27', 'some_again_value', now());
insert into test (employee_id, employee_name, value, last_modified_date) values ('4', 'e28', 'some_values', now());
insert into test (employee_id, employee_name, value, last_modified_date) values ('5', 'e28', 'some_new_values', now());



Now I was running a select query for "give me all the employee_id for
employee_name `e27`" -

select employee_id from test where employee_name = 'e27';

And this is the error I am getting -

Bad Request: No indexed columns present in by-columns clause with Equal
operator
Perhaps you meant to use CQL 2? Try using the -2 option when starting
cqlsh.


Is there anything I am doing wrong here?

My use cases are in general -

 1. Give me everything for any of the employee_name?
 2. Give me everything for what has changed in last 5 minutes?
 3. Give me the latest employee_id for any of the employee_name?

I am running Cassandra 1.2.11


Re: Bad Request: No indexed columns present in by-columns clause with Equal operator?

2013-11-03 Thread Techy Teck
I forgot to mention one of my use cases in my previous email, so here is
the complete list of my use cases again -




 1. Give me everything for any of the employee_name?
 2. Give me everything for what has changed in last 5 minutes?
 3. Give me the latest employee_id and value for any of the employee_name?
 4. Give me all the employee_id for any of the employee_name?





On Sun, Nov 3, 2013 at 10:26 AM, Techy Teck comptechge...@gmail.com wrote:

 I have below table in CQL- [...]




Re: Bad Request: No indexed columns present in by-columns clause with Equal operator?

2013-11-03 Thread Hannu Kröger
Hi,

You cannot query on a column that is not indexed in CQL. You have to
either create a secondary index, or create index tables, maintain those
indexes yourself, and query through them. Since these keys are of high
cardinality, the usual recommendation for this kind of use case is to
create several tables, each containing all the data:

1) A table with employee_id as the primary key.
2) A table with last_modified_date as the primary key (use case 2).
3) A table with employee_name as the primary key (your test query with
employee_name 'e27', and use cases 1 & 3).

You then populate all of those tables with your data, and choose the
table to read from based on the query.
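As a sketch, those per-query tables might look like this in CQL. The table names, the time-bucketing scheme, and the clustering choices below are illustrative assumptions, not from the thread:

```sql
-- Sketch only: one table per query pattern, each holding the full row.

-- Use cases 1 & 3: look up by employee_name; clustering by
-- last_modified_date DESC keeps the latest entry first in each partition.
create table test_by_name (
    employee_name text,
    last_modified_date timeuuid,
    employee_id text,
    value text,
    primary key (employee_name, last_modified_date)
) with clustering order by (last_modified_date desc);

-- Use case 2 ("what changed in the last N minutes"): partition by a
-- coarse time bucket so the range scan stays within one partition.
create table test_by_time (
    time_bucket text,            -- e.g. '2013-11-03 13:00' (hypothetical scheme)
    last_modified_date timeuuid,
    employee_id text,
    employee_name text,
    value text,
    primary key (time_bucket, last_modified_date)
);
```

Each write then goes to every table (a BATCH can keep them together), and each read goes to the table whose primary key matches the WHERE clause.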

Cheers,
Hannu



2013/11/3 Techy Teck comptechge...@gmail.com

 I have below table in CQL- [...]




Re: Cass 1.1.11 out of memory during compaction ?

2013-11-03 Thread Mohit Anchlia
Post your gc logs

Sent from my iPhone

On Nov 3, 2013, at 6:54 AM, Oleg Dulin  oleg.du...@gmail.com wrote:

 Cass 1.1.11 ran out of memory on me with this exception (see below). [...]


Re: Bad Request: No indexed columns present in by-columns clause with Equal operator?

2013-11-03 Thread Techy Teck
Thanks Hannu, I got your point. But in my example `employee_id` won't be
larger than `32767`, so I am thinking of creating an index on these two
columns -

create index employee_name_idx on test (employee_name);
create index last_modified_date_idx on test (last_modified_date);

The chances of executing queries on these are very minimal - we will run
them only rarely - but when we do, I want the system to be capable of
handling them.

Now I can execute the below queries after creating an index -

select * from test where employee_name = 'e27';
select employee_id from test where employee_name = 'e27';
select * from test where employee_id = '1';

But I cannot execute the below query, which is "give me everything that
has changed within the last 15 minutes". So I wrote the query like this -

select * from test where last_modified_date > mintimeuuid('2013-11-03
13:33:30') and last_modified_date < maxtimeuuid('2013-11-03 13:33:45');

But it doesn't run, and I always get this error -

Bad Request: No indexed columns present in by-columns clause with Equal
operator


Any thoughts on what I am doing wrong here?


On Sun, Nov 3, 2013 at 12:43 PM, Hannu Kröger hkro...@gmail.com wrote:

 Hi,

 You cannot query using a field that is not indexed in CQL. [...]





Re: Cassandra book/tutorial

2013-11-03 Thread Erwin Karbasi
Thanks a lot.

On Thu, Oct 31, 2013 at 9:43 AM, Markus Jais markus.j...@yahoo.de wrote:

 This one is coming out soon. Looks interesting:


 http://www.informit.com/store/practical-cassandra-a-developers-approach-9780321933942

 Beside that , I found the already mentioned docs on the Datastax site to
 be the best information.

 Markus


   Joe Stein crypt...@gmail.com wrote at 5:51 on Monday, 28 October 2013:

 Reading previous versions' documentation and related information from that
 time in the past (like books) has value!  It helps to understand decisions
 that were made and changed, and some that are still the same, like
 Secondary Indexes, which were introduced in 0.7 when
 http://www.amazon.com/Cassandra-Definitive-Guide-Eben-Hewitt/dp/1449390412
 came out back in 2011.

 If you are really just getting started then I say go and start here
 http://www.planetcassandra.org/

 /***
  Joe Stein
  Founder, Principal Consultant
  Big Data Open Source Security LLC
  http://www.stealth.ly
  Twitter: @allthingshadoop http://www.twitter.com/allthingshadoop
 /


 On Mon, Oct 28, 2013 at 12:15 AM, ÐΞ€ρ@Ҝ (๏̯͡๏) deepuj...@gmail.com wrote:

 With a lot of enthusiasm I started reading it. It's outdated and
 error-prone - I could not even get Cassandra running from that book, and
 eventually I could not get started with Cassandra.


 On Mon, Oct 28, 2013 at 9:41 AM, Joe Stein crypt...@gmail.com wrote:

 http://www.planetcassandra.org has a lot of great resources on it.

 Eben Hewitt's book is great, as are the other C* books like the High
 Performance Cookbook
 http://www.amazon.com/Cassandra-Performance-Cookbook-Edward-Capriolo/dp/1849515123

 I would recommend reading both of those books.  You can also read
 http://www.datastax.com/dev/blog/thrift-to-cql3 to help understandings.

 From there go with CQL http://cassandra.apache.org/doc/cql3/CQL.html

  /***
  Joe Stein
  Founder, Principal Consultant
  Big Data Open Source Security LLC
  http://www.stealth.ly
  Twitter: @allthingshadoop http://www.twitter.com/allthingshadoop
 /


 On Sun, Oct 27, 2013 at 11:58 PM, Mohan L l.mohan...@gmail.com wrote:

 And here also good intro: http://10kloc.wordpress.com/category/nosql-2/

 Thanks
 Mohan L


 On Mon, Oct 28, 2013 at 8:02 AM, Danie Viljoen dav...@gmail.com wrote:

 Not a book, but I think this is a good start:
 http://www.datastax.com/documentation/cassandra/2.0/webhelp/index.html


 On Mon, Oct 28, 2013 at 3:14 PM, Dave Brosius dbros...@mebigfatguy.com wrote:

  Unfortunately, as tech books tend to be, it's quite a bit out of date,
 at this point.




 On 10/27/2013 09:54 PM, Mohan L wrote:




 On Sun, Oct 27, 2013 at 9:57 PM, Erwin Karbasi er...@optinity.com wrote:

   Hey Guys,

  What is the best book to learn Cassandra from scratch?

  Thanks in advance,
  Erwin


 Hi,

 Buy :

 Cassandra: The Definitive Guide By Eben Hewitt :
 http://shop.oreilly.com/product/0636920010852.do

  Thanks
  Mohan L









 --
 Deepak







Re: Cass 1.1.11 out of memory during compaction ?

2013-11-03 Thread Takenori Sato
Try increasing column_index_size_in_kb.

A slice query that reads a range of columns (SliceFromReadCommand) has to
read all the column indexes for the row, and thus can hit an OOM if you
have a very wide row.
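For reference, that setting lives in cassandra.yaml. The value below is only an illustrative increase over the 64 KB default, not a tested recommendation for this cluster:

```yaml
# cassandra.yaml -- illustrative value, tune for your actual row sizes.
# Granularity of the index entries kept within each row. With very wide
# rows, a larger setting means fewer index entries per row (less heap
# held during a slice read) at the cost of coarser seeks inside the row.
column_index_size_in_kb: 256    # default is 64
```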



On Sun, Nov 3, 2013 at 11:54 PM, Oleg Dulin oleg.du...@gmail.com wrote:

 Cass 1.1.11 ran out of memory on me with this exception (see below). [...]



Cassandra Data Query

2013-11-03 Thread Chandana Tummala
Hi,

We are using a 6-node cluster with 3 nodes in each DC and a replication
factor of 3. The Cassandra version is dse-3.1.1.
We load data into the cluster every two hours using a Java driver batch
program. The data size in the cluster is presently 2 TB, and I want to
validate the data loaded.

Using

select count(*) from table name;

gives me a request timeout error. So does

select count(*) from table name where secondary_index='';

Can you please suggest how I can validate the data loaded? A load puts at
most 1 GB of data into the cluster.

Is there any way I can validate the count of the data loaded?






--
View this message in context: 
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Cassandra-Data-Query-tp7591180.html
Sent from the cassandra-u...@incubator.apache.org mailing list archive at 
Nabble.com.