Re: sstable loader

2015-03-30 Thread Rahul Bhardwaj
Hi Amila ,

I tried your code and made some modifications, as follows:

public class DataImportExample
{
    static String filename;

    public static void main(String[] args) throws IOException
    {
        filename = "/root/perl_work/abc.csv";
        BufferedReader reader = new BufferedReader(new FileReader(filename));
        String keyspace = "mesh_glusr";
        File directory = new File(keyspace);
        if (!directory.exists()) {
            directory.mkdir();
        }

        // A RandomPartitioner is created; you can use whichever partitioner you want
        IPartitioner partitioner = new RandomPartitioner();

        SSTableSimpleUnsortedWriter usersWriter = new SSTableSimpleUnsortedWriter(
                directory, partitioner, keyspace, "test1", AsciiType.instance, null, 64);


After compiling, we executed it with the command below:

java -cp /root/perl_work DataImportExample

It throws the following error:

Expecting URI in variable: [cassandra.config].  Please prefix the file with
file:/// for local files or file://server/ for remote files.  Aborting.
Fatal configuration error; unable to start. See log for stacktrace.

We are not able to find what went wrong; we are not very experienced Java
developers, so please guide us.


Regards:
Rahul Bhardwaj

On Fri, Mar 27, 2015 at 2:55 PM, Amila Paranawithana amila1...@gmail.com
wrote:

 Hi,

 This post[1] may be useful, but note that it was done with an older version
 of Cassandra, so there may be a newer way to do this.

 [1].
 http://amilaparanawithana.blogspot.com/2012/06/bulk-loading-external-data-to-cassandra.html

 Thanks,


 On Fri, Mar 27, 2015 at 11:40 AM, Rahul Bhardwaj 
 rahul.bhard...@indiamart.com wrote:

 Hi All,

  Can we use the sstable loader to load an external flat file or CSV file?
 If yes, kindly share the steps or a manual.

 I need to load 40 million rows into a table of around 70 columns.



 Regards:
 Rahul Bhardwaj





 Follow IndiaMART.com (http://www.indiamart.com) for the latest updates on
 this and more: https://plus.google.com/+indiamart
 https://www.facebook.com/IndiaMART https://twitter.com/IndiaMART
 Mobile Channel:
 https://itunes.apple.com/WebObjects/MZStore.woa/wa/viewSoftware?id=668561641&mt=8
 https://play.google.com/store/apps/details?id=com.indiamart.m
 http://m.indiamart.com/
 https://www.youtube.com/watch?v=DzORNbeSXN8&list=PL2o4J51MqpL0mbue6kzDa6eymLVUXtlR1&index=2
 Watch how IndiaMART Maximiser helped Mr. Khanna expand his business.
 kyunki Kaam Yahin Banta Hai
 https://www.youtube.com/watch?v=Q9fZ5ILY3w8&feature=youtu.be




 --

 Amila Iroshani Paranawithana, Senior Software Engineer,
 AdroitLogic (http://adroitlogic.org)
 ☎: +94779747398 | ✍: http://amilaparanawithana.blogspot.com
 Facebook: https://www.facebook.com/amila.paranawithana | Twitter:
 https://twitter.com/AmilaPara | LinkedIn:
 http://www.linkedin.com/profile/view?id=66289851&trk=tab_pro | Skype:
 amila.paranawithana




Re: Replication to second data center with different number of nodes

2015-03-30 Thread Carlos Rolo
Sharing my experience here.

1) I have never had any issues with different-sized DCs. If the hardware is
the same, keep the number at 256.
2) In most cases I keep the 256 vnodes and see no performance problems (when
problems do occur, the cause is not the vnode count).

Regards,

Carlos Juzarte Rolo
Cassandra Consultant

Pythian - Love your data

rolo@pythian | Twitter: cjrolo | LinkedIn: http://linkedin.com/in/carlosjuzarterolo
Tel: 1649
www.pythian.com

On Mon, Mar 30, 2015 at 6:31 AM, Anishek Agarwal anis...@gmail.com wrote:

 Colin,

  When you said a larger number of tokens has a query performance hit, is it
 read or write performance? Also, if you have any links you could share to
 shed some light on this, that would be great.

 Thanks
 Anishek

 On Sun, Mar 29, 2015 at 2:20 AM, Colin Clark co...@clark.ws wrote:

  I typically use a number a lot lower than 256, usually less than 20, for
 num_tokens, as a larger number has historically had a dramatic impact on
 query performance.
 —
 Colin Clark
 co...@clark.ws
 +1 612-859-6129
 skype colin.p.clark

 On Mar 28, 2015, at 3:46 PM, Eric Stevens migh...@gmail.com wrote:

  If you're curious about how Cassandra knows how to replicate data in the
 remote DC: it works the same as in the local DC. Replication is independent
 in each, and you can even set a different replication strategy per keyspace
 per datacenter. Nodes in each DC take up num_tokens positions on a ring,
 each partition key is mapped to a position on that ring, and whoever owns
 that part of the ring is the primary replica for that data. Then
 (oversimplified) the r-1 adjacent nodes become replicas for that same data.
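That (oversimplified) description can be sketched as a toy token ring. Everything below — node names, token values, the stand-in hash — is made up for illustration and is not Cassandra's actual code (RandomPartitioner hashes keys with MD5, Murmur3Partitioner with Murmur3, and replica placement also depends on the configured strategy):

```python
import bisect
import hashlib

# Hypothetical 4-node ring; with vnodes each node would hold num_tokens positions.
ring = sorted([(0, "n1"), (2**30, "n2"), (2**31, "n3"), (3 * 2**30, "n4")])
tokens = [t for t, _ in ring]

def token_for(partition_key: bytes, modulus: int = 2**32) -> int:
    # Toy stand-in hash mapping a partition key to a position on the ring.
    return int(hashlib.md5(partition_key).hexdigest(), 16) % modulus

def replicas(partition_key: bytes, rf: int):
    # The primary replica is the node owning the first token at/after the
    # key's position (wrapping around); the next rf-1 distinct nodes
    # clockwise become the remaining replicas (oversimplified).
    i = bisect.bisect_left(tokens, token_for(partition_key)) % len(ring)
    out = []
    while len(out) < rf:
        node = ring[i % len(ring)][1]
        if node not in out:
            out.append(node)
        i += 1
    return out

print(replicas(b"some-key", 3))  # three distinct nodes, deterministic per key
```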

 On Fri, Mar 27, 2015 at 6:55 AM, Sibbald, Charles 
 charles.sibb...@bskyb.com wrote:


 http://www.datastax.com/documentation/cassandra/2.0/cassandra/configuration/configCassandra_yaml_r.html?scroll=reference_ds_qfg_n1r_1k__num_tokens

  So go with a default 256, and leave initial token empty:

  num_tokens: 256

 # initial_token:


  Cassandra will always give each node the same number of tokens; the only
 time you might want to vary this is if your instances are of different
 sizes/capabilities, which is also a bad scenario.

   From: Björn Hachmann bjoern.hachm...@metrigo.de
 Reply-To: user@cassandra.apache.org user@cassandra.apache.org
 Date: Friday, 27 March 2015 12:11
 To: user user@cassandra.apache.org
 Subject: Re: Replication to second data center with different number of
 nodes


 2015-03-27 11:58 GMT+01:00 Sibbald, Charles charles.sibb...@bskyb.com:

 Cassandra’s Vnodes config


 Thank you. Yes, we are using vnodes! The num_tokens parameter controls
 the number of vnodes assigned to a specific node.

  Might be I am seeing problems where are none.

  Let me rephrase my question: How does Cassandra know it has to
 replicate 1/3 of all keys to each single node in the second DC? I can see
 two ways:
  1. It has to be configured explicitly.
  2. It is derived from the number of nodes available in the data center
 at the time `nodetool rebuild` is started.

  Kind regards
 Björn







Re: sstable loader

2015-03-30 Thread Vanessa Gligor
Hi,

I used this https://github.com/yukim/cassandra-bulkload-example/ (I have
modified BulkLoad.java for my needs) for the sstable loader and it works
ok. You can take a look, maybe it will help you.

Regards,
Vanessa.



SSTable structure

2015-03-30 Thread Pierre

Hi,

Does anyone know of more complete and up-to-date documentation about the SSTable file
structure (Data, Index, Statistics, etc.) than this one: http://wiki.apache.org/cassandra/ArchitectureSSTable

I'm looking for a full specification, with a schema of the structure if possible.

Thanks.


Re: sstable loader

2015-03-30 Thread Amila Paranawithana
Hi Rahul,

Just from seeing the error, I guess the file name you have given needs to be
changed to the file:/// URI format. Not sure, though.
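If that is the cause, the usual way to satisfy that check is to pass the cassandra.config system property as a file:// URI when launching the program. A sketch with assumed paths (point them at wherever cassandra.yaml and the Cassandra jars actually live):

```shell
# Paths below are assumptions; adjust to your installation.
java -cp "/root/perl_work:/path/to/cassandra/lib/*" \
     -Dcassandra.config=file:///path/to/cassandra/conf/cassandra.yaml \
     DataImportExample
```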

Cheers!
Amila

On Mon, Mar 30, 2015 at 1:46 PM, Rahul Bhardwaj 
rahul.bhard...@indiamart.com wrote:

 Hi Venessa,

 Thanks for sharing.

 But after compiling BulkLoad.java, executing it returns:

 Exception in thread "main" java.lang.NoClassDefFoundError: BulkLoad (wrong
 name: bulkload/BulkLoad)

 Had you also seen this?
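One guess about that error: the "wrong name: bulkload/BulkLoad" part suggests BulkLoad.java declares `package bulkload;`, so the class has to be launched by its fully qualified name from the classpath root. A sketch (the paths are assumptions):

```shell
# If BulkLoad.java starts with "package bulkload;", the .class file must sit
# under a bulkload/ directory and be run by its package-qualified name:
java -cp "/root/perl_work:/path/to/cassandra/lib/*" bulkload.BulkLoad
```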


 Regards:
 Rahul Bhardwaj


Cassandra 2.1.3: OOM during bootstrap

2015-03-30 Thread Nathan Bijnens
We are getting an OOM when adding a new node to an existing cluster. In the
heap dump we found that this thread caused the OutOfMemoryError:

SharedPool-Worker-10 daemon prio=5 tid=440 RUNNABLE

at java.lang.OutOfMemoryError.<init>(OutOfMemoryError.java:48)
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
at
org.apache.cassandra.utils.memory.SlabAllocator.getRegion(SlabAllocator.java:137)
at
org.apache.cassandra.utils.memory.SlabAllocator.allocate(SlabAllocator.java:97)
at
org.apache.cassandra.utils.memory.ContextAllocator.allocate(ContextAllocator.java:57)
at
org.apache.cassandra.utils.memory.ContextAllocator.clone(ContextAllocator.java:47)
Local Variable: java.nio.HeapByteBuffer#3483982
at
org.apache.cassandra.utils.memory.MemtableBufferAllocator.clone(MemtableBufferAllocator.java:61)
at org.apache.cassandra.db.Memtable.put(Memtable.java:191)
Local Variable: org.apache.cassandra.db.ArrayBackedSortedColumns#26
Local Variable: org.apache.cassandra.db.AtomicBTreeColumns#138773
at
org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1180)
at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:385)
Local Variable: org.apache.cassandra.db.BufferDecoratedKey#152828
Local Variable: org.apache.cassandra.db.commitlog.ReplayPosition#8
Local Variable: java.util.Collections$1#6
at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:348)
at org.apache.cassandra.db.Mutation.apply(Mutation.java:214)
at
org.apache.cassandra.db.MutationVerbHandler.doVerb(MutationVerbHandler.java:54)
Local Variable: java.net.Inet4Address#208
at
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:62)
Local Variable: org.apache.cassandra.net.MessageDeliveryTask#55
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
Local Variable: java.util.concurrent.Executors$RunnableAdapter#70
at
org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105)
Local Variable:
org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask#9
Local Variable: org.apache.cassandra.concurrent.SEPWorker#34
at java.lang.Thread.run(Thread.java:745)

We are using the Oracle JDK (and tried OpenJDK before):
java version 1.7.0_76
Java(TM) SE Runtime Environment (build 1.7.0_76-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.76-b04, mixed mode)

Best regards,
  Nathan


Re: sstable loader

2015-03-30 Thread Rahul Bhardwaj
Hi Venessa,

Thanks for sharing.

But after compiling BulkLoad.java, executing it returns:

Exception in thread "main" java.lang.NoClassDefFoundError: BulkLoad (wrong
name: bulkload/BulkLoad)

Had you also seen this?


Regards:
Rahul Bhardwaj



Re: SSTable structure

2015-03-30 Thread ssiv...@gmail.com

+1

On 03/30/2015 11:38 AM, Pierre wrote:

Hi,

Does anyone know of more complete and up-to-date documentation about 
the SSTable file structure (Data, Index, Statistics, etc.) than this one: 
http://wiki.apache.org/cassandra/ArchitectureSSTable


I'm looking for a full specification, with schema of the structure if 
possible.


Thanks.


--
Thanks,
Serj



Re: sstable writer and creating bytebuffers

2015-03-30 Thread Sylvain Lebresne
No, it's not a bug. In a composite, every element starts with a 2-byte short
indicating the size of the element, plus an extra byte that is used for
sorting purposes. A little more detail can be found in the CompositeType
class javadoc if you're interested. It's not the most compact format there
is, but changing it would break backward compatibility anyway.
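That framing explains why the two buffers differ. Here is an illustrative byte-level mock of the composite layout (2-byte big-endian length, component bytes, 1 end-of-component byte) — a sketch of the encoding described above, not Cassandra's serializer:

```python
import struct

def long_type_bytes(val: int) -> bytes:
    # LongType serializes a long as 8 big-endian bytes.
    return struct.pack(">q", val)

def composite_bytes(*components: bytes) -> bytes:
    # Each composite element: 2-byte big-endian length + element bytes
    # + 1 end-of-component byte (used for sorting).
    return b"".join(struct.pack(">H", len(c)) + c + b"\x00" for c in components)

direct = long_type_bytes(123)
composite = composite_bytes(long_type_bytes(123))
print(direct == composite)          # False: same payload, different framing
print(len(direct), len(composite))  # 8 11
```

So a composite wrapping a single long costs 3 extra bytes (2 + 8 + 1) and never compares equal to the raw encoding.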

On Mon, Mar 30, 2015 at 12:38 PM, Peer, Oded oded.p...@rsa.com wrote:

  I am writing code to bulk load data into Cassandra using
 SSTableSimpleUnsortedWriter

 I changed my partition key from a composite key (long, int) to a single
 column key (long).

 For creating the composite key I used a CompositeType, and I kept using it
 after changing the key to a single column.

 My code didn’t work until I changed the way I create the ByteBuffer not to
 use CompositeType.



 The following code prints ‘false’.

 Do you consider this a bug?



   long val = 123L;
   ByteBuffer direct = bytes( val );
   ByteBuffer composite = CompositeType.getInstance( LongType.instance ).builder().add( bytes( val ) ).build();
   System.out.println( direct.equals( composite ) );





sstable writer and creating bytebuffers

2015-03-30 Thread Peer, Oded
I am writing code to bulk load data into Cassandra using 
SSTableSimpleUnsortedWriter
I changed my partition key from a composite key (long, int) to a single column 
key (long).
For creating the composite key I used a CompositeType, and I kept using it 
after changing the key to a single column.
My code didn't work until I changed the way I create the ByteBuffer not to use 
CompositeType.

The following code prints 'false'.
Do you consider this a bug?

  long val = 123L;
  ByteBuffer direct = bytes( val );
  ByteBuffer composite = CompositeType.getInstance( 
LongType.instance ).builder().add( bytes( val ) ).build();
  System.out.println( direct.equals( composite ) );



importing files into cassandra, and feauture enhancements in cassandra

2015-03-30 Thread Divya Divs
Hi everyone,

I'm an M.Tech student and my academic project is on Cassandra. I have built
and run the Cassandra source code (https://github.com/apache/cassandra) in
Eclipse Juno using Ant. I have to make some feature enhancements in
Cassandra, and after that I have to import crime datasets into Cassandra for
forensic analysis. As I'm new to this, it is very hard for me to proceed, so
please guide me. I also need some sample crime datasets for the forensic
investigation, but I don't have any and couldn't find any online. Around 100
records would be sufficient, so please help me with this part as well.


Thanks & regards,
Divya


nodetool cleanup error

2015-03-30 Thread Amlan Roy
Hi,

I have added new nodes to an existing cluster and ran the “nodetool cleanup”. I 
am getting the following error. Wanted to know if there is any solution to it.

Regards,
Amlan

Error occurred during cleanup
java.util.concurrent.ExecutionException: java.lang.AssertionError: Memory was 
freed
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:188)
at 
org.apache.cassandra.db.compaction.CompactionManager.performAllSSTableOperation(CompactionManager.java:234)
at 
org.apache.cassandra.db.compaction.CompactionManager.performCleanup(CompactionManager.java:272)
at 
org.apache.cassandra.db.ColumnFamilyStore.forceCleanup(ColumnFamilyStore.java:1115)
at 
org.apache.cassandra.service.StorageService.forceKeyspaceCleanup(StorageService.java:2177)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
at 
com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
at 
javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
at 
javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
at 
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
at 
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
at 
javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
at sun.rmi.transport.Transport$1.run(Transport.java:177)
at sun.rmi.transport.Transport$1.run(Transport.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
at 
sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.AssertionError: Memory was freed
at org.apache.cassandra.io.util.Memory.checkPosition(Memory.java:259)
at org.apache.cassandra.io.util.Memory.getInt(Memory.java:211)
at 
org.apache.cassandra.io.sstable.IndexSummary.getIndex(IndexSummary.java:79)
at 
org.apache.cassandra.io.sstable.IndexSummary.getKey(IndexSummary.java:84)
at 
org.apache.cassandra.io.sstable.IndexSummary.binarySearch(IndexSummary.java:58)
at 
org.apache.cassandra.io.sstable.SSTableReader.getIndexScanPosition(SSTableReader.java:602)
at 
org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:947)
at 
org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:910)
at 
org.apache.cassandra.io.sstable.SSTableReader.getPositionsForRanges(SSTableReader.java:819)
at 
org.apache.cassandra.db.ColumnFamilyStore.getExpectedCompactedFileSize(ColumnFamilyStore.java:1088)
at 
org.apache.cassandra.db.compaction.CompactionManager.doCleanupCompaction(CompactionManager.java:564)
at 

Re: nodetool cleanup error

2015-03-30 Thread Duncan Sands

Hi Amlan,

On 30/03/15 22:12, Amlan Roy wrote:

Hi,

I have added new nodes to an existing cluster and ran the “nodetool cleanup”. I
am getting the following error. Wanted to know if there is any solution to it.

Regards,
Amlan

Error occurred during cleanup
java.util.concurrent.ExecutionException: java.lang.AssertionError: Memory was 
freed


this is fixed in 2.0.13.

Best wishes, Duncan.


Re: nodetool cleanup error

2015-03-30 Thread Jeff Ferland
Code problem that was patched in 
https://issues.apache.org/jira/browse/CASSANDRA-8716. Upgrade to 2.0.13.


 On Mar 30, 2015, at 1:12 PM, Amlan Roy amlan@cleartrip.com wrote:
 
 Hi,
 
 I have added new nodes to an existing cluster and ran the “nodetool cleanup”. 
 I am getting the following error. Wanted to know if there is any solution to 
 it.
 
 Regards,
 Amlan
 
 Error occurred during cleanup
 java.util.concurrent.ExecutionException: java.lang.AssertionError: Memory was 
 freed
   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
   at java.util.concurrent.FutureTask.get(FutureTask.java:188)
   at 
 org.apache.cassandra.db.compaction.CompactionManager.performAllSSTableOperation(CompactionManager.java:234)
   at 
 org.apache.cassandra.db.compaction.CompactionManager.performCleanup(CompactionManager.java:272)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.forceCleanup(ColumnFamilyStore.java:1115)
   at 
 org.apache.cassandra.service.StorageService.forceKeyspaceCleanup(StorageService.java:2177)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
   at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
   at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
   at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
   at sun.rmi.transport.Transport$1.run(Transport.java:177)
   at sun.rmi.transport.Transport$1.run(Transport.java:174)
   at java.security.AccessController.doPrivileged(Native Method)
   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
   at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: java.lang.AssertionError: Memory was freed
   at org.apache.cassandra.io.util.Memory.checkPosition(Memory.java:259)
   at org.apache.cassandra.io.util.Memory.getInt(Memory.java:211)
   at 
 org.apache.cassandra.io.sstable.IndexSummary.getIndex(IndexSummary.java:79)
   at 
 org.apache.cassandra.io.sstable.IndexSummary.getKey(IndexSummary.java:84)
   at 
 org.apache.cassandra.io.sstable.IndexSummary.binarySearch(IndexSummary.java:58)
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getIndexScanPosition(SSTableReader.java:602)
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:947)
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:910)
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getPositionsForRanges(SSTableReader.java:819)
   at 
 

Why select returns tombstoned results?

2015-03-30 Thread Benyi Wang
Create table tomb_test (
   guid text,
   content text,
   range text,
   rank int,
   id text,
   cnt int,
   primary key (guid, content, range, rank)
)

Sometimes I delete rows using the Cassandra Java driver with this query

DELETE FROM tomb_test WHERE guid=? and content=? and range=?

in an UNLOGGED batch statement. The consistency level is LOCAL_ONE.

But if I run

SELECT * FROM tomb_test WHERE guid='guid-1' and content='content-1' and
range='week'
or
SELECT * FROM tomb_test WHERE guid='guid-1' and content='content-1' and
range='week' and rank = 1

The result shows the deleted rows.

If I run this select, the deleted rows are not shown

SELECT * FROM tomb_test WHERE guid='guid-1' and content='content-1'

If I run the delete statement in cqlsh, the deleted rows won't show up.

How can I fix this?


Re: Why select returns tombstoned results?

2015-03-30 Thread Prem Yadav
Increase the read CL to quorum and you should get correct results.
How many nodes do you have in the cluster and what is the replication
factor for the keyspace?

On Mon, Mar 30, 2015 at 7:41 PM, Benyi Wang bewang.t...@gmail.com wrote:

 Create table tomb_test (
guid text,
content text,
range text,
rank int,
id text,
cnt int,
primary key (guid, content, range, rank)
 )

 Sometime I delete the rows using cassandra java driver using this query

 DELETE FROM tomb_test WHERE guid=? and content=? and range=?

 in Batch statement with UNLOGGED. CONSISTENCE_LEVEL is local_one.

 But if I run

 SELECT * FROM tomb_test WHERE guid='guid-1' and content='content-1' and
 range='week'
 or
 SELECT * FROM tomb_test WHERE guid='guid-1' and content='content-1' and
 range='week' and rank = 1

 The result shows the deleted rows.

 If I run this select, the deleted rows are not shown

 SELECT * FROM tomb_test WHERE guid='guid-1' and content='content-1'

 If I run delete statement in cqlsh, the deleted rows won't show up.

 How can I fix this?




Re: Issue with removing a node and adding it back

2015-03-30 Thread Robert Coli
On Fri, Mar 27, 2015 at 4:27 PM, Shiwen Cheng cheng.shiwen...@gmail.com
wrote:

 Thanks Robert!
 Yes I tried what you said: clean the data and re-bootstrap. But still it
 failed, once at the point of 600GB transferred and once at 1.1TB :(



1) figure out what is making your streams die (usually either flaky network
(AWS) or stop-the-world GC) and fix that
OR
2) try tuning streaming_socket_timeout_in_ms
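For option 2, the setting lives in cassandra.yaml; the one-hour value below is only an illustrative assumption, not a recommendation from this thread:

```yaml
# cassandra.yaml -- close and retry streams whose sockets stall,
# instead of letting a dead connection hang the bootstrap forever.
# The one-hour value is an example; tune it for your network.
streaming_socket_timeout_in_ms: 3600000
```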

=Rob


Re: upgrade from 1.0.12 to 1.1.12

2015-03-30 Thread Robert Coli
On Fri, Mar 27, 2015 at 4:01 AM, Jason Wee peich...@gmail.com wrote:

 Rob, the cluster is now upgraded to Cassandra 1.0.12 (default "hd" version
 in Descriptor.java), and I ensured all sstables in the current cluster are
 "hd" version before upgrading to Cassandra 1.1. I have also checked that in
 Cassandra 1.1.12 the sstable version is "hf", so I guess nodetool
 upgradesstables is needed?


Yes, upgradesstables is needed.

As mentioned down-thread, upgradesstables is now optimized to be a NOOP
when the sstables are already of the current version, so you should always
run upgradesstables, even after a minor version upgrade.

=Rob


Re: Cassandra 2.1.3: OOM during bootstrap

2015-03-30 Thread Robert Coli
On Mon, Mar 30, 2015 at 2:08 AM, Nathan Bijnens nat...@nathan.gs wrote:

 We are getting a OOM when adding a new node to an existing cluster. In the
 heapdump we found that this thread caused the OutOfMemory exception:

 SharedPool-Worker-10 daemon prio=5 tid=440 RUNNABLE


This type of post is probably better handled as a JIRA ticket at
http://issues.apache.org

Please let the list know the URL if you do file a JIRA.

=Rob


Re: nodetool cleanup error

2015-03-30 Thread Robert Coli
On Mon, Mar 30, 2015 at 4:21 PM, Amlan Roy amlan@cleartrip.com wrote:

 Thanks for the reply. I have upgraded to 2.0.13. Now I get the following
 error.


If cleanup is still excepting for you on 2.0.13 with some sstables you
have, I would strongly consider :

1) file a JIRA (http://issues.apache.org) and attach / offer the sstables
for debugging
2) let the list know the JIRA id of the ticket

=Rob


Re: nodetool cleanup error

2015-03-30 Thread Amlan Roy
Hi,

Thanks for the reply. I have upgraded to 2.0.13. Now I get the following error.

Regards,
Amlan

Exception in thread main java.lang.AssertionError: 
[SSTableReader(path='/data/1/cassandra/data/xxx/xxx/xxx.db'), 
SSTableReader(path='/data/1/cassandra/data/xxx/xxx/xxx.db')]
at 
org.apache.cassandra.db.ColumnFamilyStore$13.call(ColumnFamilyStore.java:2176)
at 
org.apache.cassandra.db.ColumnFamilyStore$13.call(ColumnFamilyStore.java:2173)
at 
org.apache.cassandra.db.ColumnFamilyStore.runWithCompactionsDisabled(ColumnFamilyStore.java:2155)
at 
org.apache.cassandra.db.ColumnFamilyStore.markAllCompacting(ColumnFamilyStore.java:2186)
at 
org.apache.cassandra.db.compaction.CompactionManager.performAllSSTableOperation(CompactionManager.java:215)
at 
org.apache.cassandra.db.compaction.CompactionManager.performCleanup(CompactionManager.java:272)
at 
org.apache.cassandra.db.ColumnFamilyStore.forceCleanup(ColumnFamilyStore.java:1117)
at 
org.apache.cassandra.service.StorageService.forceKeyspaceCleanup(StorageService.java:2177)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
at 
com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
at 
javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
at 
javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
at 
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
at 
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
at 
javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
at sun.rmi.transport.Transport$1.run(Transport.java:177)
at sun.rmi.transport.Transport$1.run(Transport.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
at 
sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

On 31-Mar-2015, at 1:50 am, Duncan Sands duncan.sa...@gmail.com wrote:

 Hi Amlan,
 
 On 30/03/15 22:12, Amlan Roy wrote:
 Hi,
 
 I have added new nodes to an existing cluster and ran the “nodetool 
 cleanup”. I
 am getting the following error. Wanted to know if there is any solution to 
 it.
 
 Regards,
 Amlan
 
 Error occurred during cleanup
 java.util.concurrent.ExecutionException: java.lang.AssertionError: Memory 
 was freed
 
 this is fixed in 2.0.13.
 
 Best wishes, Duncan.



Re: Why select returns tombstoned results?

2015-03-30 Thread Benyi Wang
Thanks for replying.

In cqlsh, if I change to QUORUM (CONSISTENCY QUORUM), sometimes the select
returns the deleted row and sometimes it does not.

I have two virtual data centers: service (3 nodes) and analytics (4 nodes,
collocated with Hadoop data nodes). The table has 3 replicas in service and 2
in analytics. When I wrote, I wrote into analytics using LOCAL_ONE, so I
guess the data may not have replicated to all nodes yet.

I will try to use strong consistency for writes.
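Prem's suggestion follows from the replica-overlap rule: a read is guaranteed to see the latest write only when read replicas + write replicas exceed the total replica count. A minimal sketch under the assumptions of this thread (RF 3 in service plus RF 2 in analytics, so 5 replicas total; quorum sizing as floor(n/2) + 1):

```python
# Sketch (not Cassandra code): the classic R + W > N overlap rule,
# which guarantees that any read replica set intersects the set of
# replicas that acknowledged the write. Replica counts below are
# assumptions modeled on the thread's setup (RF 3 + RF 2 = 5).

def quorum(n):
    """Cassandra's QUORUM size for n replicas: floor(n/2) + 1."""
    return n // 2 + 1

def read_sees_write(read_replicas, write_replicas, total_replicas):
    """True if every read is guaranteed to touch a written replica."""
    return read_replicas + write_replicas > total_replicas

N = 3 + 2                       # RF in 'service' + RF in 'analytics'
print(quorum(N))                # QUORUM across both DCs -> 3
# LOCAL_ONE write (1 ack) + QUORUM read (3) covers only 4 of 5 replicas:
print(read_sees_write(quorum(N), 1, N))          # -> False
# QUORUM write + QUORUM read always overlaps:
print(read_sees_write(quorum(N), quorum(N), N))  # -> True
```

This is why a QUORUM read after a LOCAL_ONE write can still return a tombstoned (or stale) row: the read quorum may consist entirely of replicas the write never reached.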



On Mon, Mar 30, 2015 at 11:59 AM, Prem Yadav ipremya...@gmail.com wrote:

 Increase the read CL to quorum and you should get correct results.
 How many nodes do you have in the cluster and what is the replication
 factor for the keyspace?

 On Mon, Mar 30, 2015 at 7:41 PM, Benyi Wang bewang.t...@gmail.com wrote:

 Create table tomb_test (
guid text,
content text,
range text,
rank int,
id text,
cnt int,
primary key (guid, content, range, rank)
 )

 Sometime I delete the rows using cassandra java driver using this query

 DELETE FROM tomb_test WHERE guid=? and content=? and range=?

 in Batch statement with UNLOGGED. CONSISTENCE_LEVEL is local_one.

 But if I run

 SELECT * FROM tomb_test WHERE guid='guid-1' and content='content-1' and
 range='week'
 or
 SELECT * FROM tomb_test WHERE guid='guid-1' and content='content-1' and
 range='week' and rank = 1

 The result shows the deleted rows.

 If I run this select, the deleted rows are not shown

 SELECT * FROM tomb_test WHERE guid='guid-1' and content='content-1'

 If I run delete statement in cqlsh, the deleted rows won't show up.

 How can I fix this?





Re: upgrade from 1.0.12 to 1.1.12

2015-03-30 Thread Robert Coli
On Fri, Mar 27, 2015 at 7:20 AM, Jonathan Haddad j...@jonhaddad.com wrote:

 Running upgradesstables is a no-op if the tables don't need to be upgraded. I
 consider the cost of this to be less than the cost of missing an upgrade.


Oh, right! This optimization was added, which means I totally and
completely agree with you.

tl;dr - run upgradesstables after every version upgrade, it can't hurt and
can only help.

=Rob


Re: nodetool cleanup error

2015-03-30 Thread Yuki Morishita
Looks like the issue is https://issues.apache.org/jira/browse/CASSANDRA-9070.

On Mon, Mar 30, 2015 at 6:25 PM, Robert Coli rc...@eventbrite.com wrote:
 On Mon, Mar 30, 2015 at 4:21 PM, Amlan Roy amlan@cleartrip.com wrote:

 Thanks for the reply. I have upgraded to 2.0.13. Now I get the following
 error.


 If cleanup is still excepting for you on 2.0.13 with some sstables you have,
 I would strongly consider :

 1) file a JIRA (http://issues.apache.org) and attach / offer the sstables
 for debugging
 2) let the list know the JIRA id of the ticket

 =Rob




-- 
Yuki Morishita
 t:yukim (http://twitter.com/yukim)


Re: SSTable structure

2015-03-30 Thread Robert Coli
On Mon, Mar 30, 2015 at 1:38 AM, Pierre pierredev...@gmail.com wrote:

 Does anyone know if there is a more complete and up to date documentation
 about the sstable files structure (data, index, stats etc.) than this one :
 http://wiki.apache.org/cassandra/ArchitectureSSTable


No, there isn't. Unfortunately you will have to read the source.


 I'm looking for a full specification, with schema of the structure if
 possible.


It would be nice if such fundamental things were documented, wouldn't it?

=Rob


Re: SSTable structure

2015-03-30 Thread daemeon reiydelle
Why? Then there are two places to maintain, or to get JIRA'ed for a discrepancy.
On Mar 30, 2015 4:46 PM, Robert Coli rc...@eventbrite.com wrote:

 On Mon, Mar 30, 2015 at 1:38 AM, Pierre pierredev...@gmail.com wrote:

 Does anyone know if there is a more complete and up to date documentation
 about the sstable files structure (data, index, stats etc.) than this one :
 http://wiki.apache.org/cassandra/ArchitectureSSTable


 No, there isn't. Unfortunately you will have to read the source.


 I'm looking for a full specification, with schema of the structure if
 possible.


 It would be nice if such fundamental things were documented, wouldn't it?

 =Rob




Re: SSTable structure

2015-03-30 Thread Kirk True
The tricky thing with documenting the SSTables is that there are a lot
of conditionals in the structure, so it makes for twisty reading. Just
for fun, here's a terrible start I made once:

https://github.com/mustardgrain/cassandra-notes/blob/master/SSTables.md


On Mon, Mar 30, 2015, at 05:12 PM, Robert Coli wrote:
 On Mon, Mar 30, 2015 at 5:07 PM, daemeon reiydelle
 daeme...@gmail.com wrote:



 why? Then there are 2 places 2 maintain or get jira'ed for a
 discrepancy.


 If you are asserting that code is capable of documenting itself, we
 will just have to agree to disagree.

 =Rob





Re: SSTable structure

2015-03-30 Thread Robert Coli
On Mon, Mar 30, 2015 at 5:07 PM, daemeon reiydelle daeme...@gmail.com
wrote:

 why? Then there are 2 places 2 maintain or get jira'ed for a discrepancy.

If you are asserting that code is capable of documenting itself, we will
just have to agree to disagree.

=Rob


Re: SSTable structure

2015-03-30 Thread Jacob Rhoden
Yes, updating code and documentation can sometimes be annoying; you would only 
ever maintain both if it were important. It comes down to this: is having the 
format of the data files documented for everyone to understand an important thing? 

__
Sent from iPhone

 On 31 Mar 2015, at 11:07 am, daemeon reiydelle daeme...@gmail.com wrote:
 
 why? Then there are 2 places 2 maintain or get jira'ed for a discrepancy.
 
 On Mar 30, 2015 4:46 PM, Robert Coli rc...@eventbrite.com wrote:
 On Mon, Mar 30, 2015 at 1:38 AM, Pierre pierredev...@gmail.com wrote:
 Does anyone know if there is a more complete and up to date documentation 
 about the sstable files structure (data, index, stats etc.) than this one : 
 http://wiki.apache.org/cassandra/ArchitectureSSTable
 
 No, there isn't. Unfortunately you will have to read the source.
  
 I'm looking for a full specification, with schema of the structure if 
 possible.
 
 It would be nice if such fundamental things were documented, wouldn't it?
 
 =Rob


RE: sstable writer and creating bytebuffers

2015-03-30 Thread Peer, Oded
Thanks Sylvain.
Is there any way to create a composite key with only one column in Cassandra 
when creating a table, or should creating a CompositeType instance with a 
single column be prohibited?


From: Sylvain Lebresne [mailto:sylv...@datastax.com]
Sent: Monday, March 30, 2015 1:57 PM
To: user@cassandra.apache.org
Subject: Re: sstable writer and creating bytebuffers

No, it's not a bug. In a composite, every element starts with a 2-byte short 
indicating the size of the element, plus an extra trailing byte that is used for 
sorting purposes. A little more detail can be found in the CompositeType class 
javadoc if you're interested. It's not the most compact format there is, but 
changing it would break backward compatibility anyway.

On Mon, Mar 30, 2015 at 12:38 PM, Peer, Oded oded.p...@rsa.com wrote:
I am writing code to bulk load data into Cassandra using 
SSTableSimpleUnsortedWriter.
I changed my partition key from a composite key (long, int) to a single-column 
key (long).
For creating the composite key I used a CompositeType, and I kept using it 
after changing the key to a single column.
My code didn't work until I changed the ByteBuffer creation to not use 
CompositeType.

The following code prints ‘false’.
Do you consider this a bug?

  long val = 123L;
  ByteBuffer direct = bytes( val );
  ByteBuffer composite = CompositeType.getInstance( 
LongType.instance ).builder().add( bytes( val ) ).build();
  System.out.println( direct.equals( composite ) );
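Sylvain's description of the composite layout can be illustrated outside Cassandra. The sketch below is a hand-rolled imitation of that format (2-byte big-endian length, element bytes, end-of-component byte), not the actual CompositeType implementation; it shows why the Java snippet prints 'false': wrapping a single long in a composite adds three bytes of framing around the raw 8-byte encoding.

```python
import struct

def raw_long(v):
    """Plain 8-byte big-endian encoding of a long, like LongType."""
    return struct.pack(">q", v)

def composite(*components):
    """Illustrative sketch of the composite wire format described
    above: per component, a 2-byte big-endian length, the component
    bytes, then an end-of-component byte (0). Not Cassandra's code."""
    out = b""
    for c in components:
        out += struct.pack(">H", len(c)) + c + b"\x00"
    return out

direct = raw_long(123)
comp = composite(raw_long(123))
print(len(direct), len(comp))   # 8 11 -- framing adds 3 bytes
print(direct == comp)           # False, matching the thread's result
```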




RE: Saving a file using cassandra

2015-03-30 Thread Peer, Oded
Try this
http://stackoverflow.com/a/17208343/248656
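The linked answer stores a file's bytes in a blob column. A minimal CQL sketch under stated assumptions (the keyspace, table, and column names are invented for illustration; SimpleStrategy with replication_factor 2 gives the two copies asked for, and large files should be split into chunks rather than stored as one blob):

```sql
-- Illustrative sketch; names are made up.
CREATE KEYSPACE files
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 2};

CREATE TABLE files.file_chunks (
  file_id  text,
  chunk_no int,
  data     blob,   -- keep each chunk well under a few MB
  PRIMARY KEY (file_id, chunk_no)
);
```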


From: jean paul [mailto:researche...@gmail.com]
Sent: Wednesday, March 18, 2015 7:06 PM
To: user@cassandra.apache.org
Subject: Saving a file using cassandra

Hello,
Finally, I have created my ring using Cassandra.
I'd like to store a file replicated 2 times in my cluster.
Is that possible? Can you please send me a link to a tutorial?

Thanks a lot.
Best Regards.