Re: Issue after loading data using sstableloader

2014-07-18 Thread Robert Coli
On Thu, Jul 17, 2014 at 4:03 AM, mahesh rajamani rajamani.mah...@gmail.com
wrote:

 I have 3 nodes in the source environment, which is configured as 3
 datacenters, each having 1 node. I did an export from the source environment
 and imported it into a new environment with 9 nodes. The other difference is
 that the source is configured with 256 vnodes and the destination environment
 with 32 vnodes.


That's not going to work.

http://www.palominodb.com/blog/2012/09/25/bulk-loading-options-cassandra

As an aside, 2.0.5 contains serious bugs, upgrade ASAP.

https://engineering.eventbrite.com/what-version-of-cassandra-should-i-run/

=Rob


Re: Issue after loading data using sstableloader

2014-07-18 Thread Tyler Hobbs
On Fri, Jul 18, 2014 at 3:00 PM, Robert Coli rc...@eventbrite.com wrote:


 I have 3 nodes in the source environment, which is configured as 3
 datacenters, each having 1 node. I did an export from the source environment
 and imported it into a new environment with 9 nodes. The other difference is
 that the source is configured with 256 vnodes and the destination environment
 with 32 vnodes.


 That's not going to work.


It should work.  sstableloader is able to see which ranges a node owns and
stream only those ranges to it.
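For context, a bulk load from the donor cluster might look something like the
sketch below; the host and data-directory path are placeholders, not values
from this thread:

```shell
# Hypothetical invocation: stream the SSTables for keyspace "ks",
# column family "cf" into the destination cluster. sstableloader reads
# the ring topology from the contact node (-d) and sends each sstable's
# data only to the replicas that own the corresponding token ranges,
# which is why differing vnode counts are not themselves a problem.
sstableloader -d dest-node-1.example.com /var/lib/cassandra/data/ks/cf/
```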

There's some other sort of problem going on, though.  I would double-check
the schemas of the two clusters and make sure they are *identical*.
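As a quick sanity check, one way to compare them (host and keyspace names
here are hypothetical):

```shell
# Dump each cluster's schema and diff the two; even a small difference
# in comparators or column metadata can break reads after a bulk load.
echo "DESCRIBE KEYSPACE mykeyspace;" | cqlsh source-node > source_schema.cql
echo "DESCRIBE KEYSPACE mykeyspace;" | cqlsh dest-node   > dest_schema.cql
diff source_schema.cql dest_schema.cql   # no output means the schemas match
```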


-- 
Tyler Hobbs
DataStax http://datastax.com/


Issue after loading data using sstableloader

2014-07-17 Thread mahesh rajamani
Hi,

I have an issue in my environment running Cassandra 2.0.5. It is built
with 9 nodes, with 3 nodes in each datacenter. After loading the data, I am
able to do a token range lookup or a list in cassandra-cli, but when I do get
x[rowkey], the system hangs. A similar query in CQL shows the same
behavior.

I have 3 nodes in the source environment, which is configured as 3
datacenters, each having 1 node. I did an export from the source environment
and imported it into a new environment with 9 nodes. The other difference is
that the source is configured with 256 vnodes and the destination environment
with 32 vnodes.
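For reference, the vnode count is just the per-node num_tokens setting in
cassandra.yaml, read when a node first bootstraps; the values below mirror
the two clusters described in this thread:

```yaml
# cassandra.yaml -- per-node setting, read at first bootstrap.
# Source cluster nodes:
num_tokens: 256
# Destination cluster nodes use 32 instead:
# num_tokens: 32
```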

Below is the exception I see in Cassandra.
ERROR [ReadStage:103] 2014-07-16 21:23:55,648 CassandraDaemon.java (line
192) Exception in thread Thread[ReadStage:103,5,main]
java.lang.AssertionError: Added column does not sort as the first column
        at org.apache.cassandra.db.ArrayBackedSortedColumns.addColumn(ArrayBackedSortedColumns.java:115)
        at org.apache.cassandra.db.ColumnFamily.addColumn(ColumnFamily.java:116)
        at org.apache.cassandra.db.ColumnFamily.addIfRelevant(ColumnFamily.java:110)
        at org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:205)
        at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
        at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
        at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
        at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
        at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
        at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1560)
        at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1379)
        at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:327)
        at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
        at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1396)
        at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1931)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)



-- 
Regards,
Mahesh Rajamani