Re: Cassandra mad GC

2014-01-20 Thread Dimetrio
I think "Read 1001 live and 1518 tombstoned" is not too many tombstones, and it's normal.







Re: Upgrading 1.0.9 to 2.0

2014-01-20 Thread Or Sher
Thanks.

Can I use sstableloader to load SSTables from a RandomPartitioner cluster
to a Murmur3Partitioner cluster?




On Thu, Jan 16, 2014 at 9:24 PM, Arya Goudarzi gouda...@gmail.com wrote:

 Read the upgrade best practices

 http://www.datastax.com/docs/1.1/install/upgrading#best-practices

 You cannot change the partitioner:


 http://www.datastax.com/documentation/cassandra/1.2/webhelp/cassandra/architecture/architecturePartitionerAbout_c.html


 On Thu, Jan 16, 2014 at 2:04 AM, Or Sher or.sh...@gmail.com wrote:

 Hi,

 In order to upgrade our env from 1.0.9 to 2.0 I thought about the
 following steps:

 - Creating a new 1.0.9 cluster
 - Creating the keyspaces and column families
 (I need to move one keyspace's data to the new cluster, so:)
 - Moving all xKS SSTables from old cluster to every node in the new
 cluster
 - compact & cleanup
 - upgrading to 1.2.13 (all at once)
 -- upgrade sstables?
 - upgrading to 2.0 (all at once)

 1. I'd like to use new features such as the Murmur3Partitioner and vnodes -
 how can I accomplish that?

 2. Are there any other features that would be hard to enable?

 3. What am I missing in the process?

 Thanks in advance,
 --
 Or Sher




 --
 Cheers,
 -Arya




-- 
Or Sher


Re: Cassandra mad GC

2014-01-20 Thread Dimetrio
No triggers, no custom comparators.

I have a data model that creates a lot of tombstones (a user's home timeline
with many inserts and deletes). How can I reduce the tombstone count in this
case?
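
A common way to reduce tombstone pressure for this kind of feed (a sketch with
illustrative names and settings, not from the thread): bucket each user's
timeline by a time window, so old entries are dropped as whole partitions via
TTL or a single partition-level delete, instead of accumulating row tombstones
inside one ever-growing partition.

-- Hypothetical time-bucketed timeline; names and values are illustrative.
CREATE TABLE home_timeline_bucketed (
  user_id uuid,
  day text,             -- time bucket, e.g. '2014-01-20'
  entry_id timeuuid,
  body text,
  PRIMARY KEY ((user_id, day), entry_id)
) WITH CLUSTERING ORDER BY (entry_id DESC)
  AND gc_grace_seconds = 86400;  -- shorter tombstone lifetime; only safe if
                                 -- repair runs well within this window

-- Reads then touch only the recent bucket(s):
SELECT * FROM home_timeline_bucketed
WHERE user_id = 4d6d1d32-81e4-11e3-9470-c3aa8ce77cc4 AND day = '2014-01-20';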


Is this all from Cassandra?

Yes, with multithreaded compaction (6 threads for c3.4) and
compaction_throughput = 128.









Re: Cassandra mad GC

2014-01-20 Thread Dimetrio
BTW, the Cassandra cluster is more stable with multithreaded compaction turned off.

One node has more keys than the other nodes.

Normal node:

Keyspace: Social
Read Count: 65530294
Read Latency: 2.010432367020969 ms.
Write Count: 183948607
Write Latency: 0.04994240148825917 ms.
Pending Tasks: 0
Table: home_timeline
SSTable count: 505
SSTables in each level: [1, 10, 102/100, 392, 0, 0, 0, 0, 0]
Space used (live), bytes: 80346741272
Space used (total), bytes: 80365804136
SSTable Compression Ratio: 0.35779981803726807
Number of keys (estimate): 808064
Memtable cell count: 1209392
Memtable data size, bytes: 270605395
Memtable switch count: 763
Local read count: 65530294
Local read latency: 1.807 ms
Local write count: 183948608
Local write latency: 0.053 ms
Pending tasks: 0
Bloom filter false positives: 12109
Bloom filter false ratio: 0.00633
Bloom filter space used, bytes: 897296
Compacted partition minimum bytes: 73
Compacted partition maximum bytes: 962624926
Compacted partition mean bytes: 629847
Average live cells per slice (last five minutes): 51.0
Average tombstones per slice (last five minutes): 0.0

Node with mad GC:


Keyspace: Social
Read Count: 746862
Read Latency: 144.83174594905083 ms.
Write Count: 6387984
Write Latency: 2.4869556636334718 ms.
Pending Tasks: 0
Table: home_timeline
SSTable count: 625
SSTables in each level: [132/4, 10, 99, 384, 0, 0, 0, 0, 0]
Space used (live), bytes: 109408488817
Space used (total), bytes: 109408549785
SSTable Compression Ratio: 0.3814217358554331
Number of keys (estimate): 5950208
Memtable cell count: 130084
Memtable data size, bytes: 66904752
Memtable switch count: 52
Local read count: 746862
Local read latency: 33.855 ms
Local write count: 6387984
Local write latency: 0.050 ms
Pending tasks: 0
Bloom filter false positives: 3307633
Bloom filter false ratio: 0.07478
Bloom filter space used, bytes: 3575936
Compacted partition minimum bytes: 73
Compacted partition maximum bytes: 2874382626
Compacted partition mean bytes: 546236
Average live cells per slice (last five minutes): 51.0
Average tombstones per slice (last five minutes): 0.0





Re: upgrade from cassandra 1.2.3 -> 1.2.13 + start using SSL

2014-01-20 Thread Cyril Scetbon
Hi, I made some tests, which succeeded.
-- 
Cyril SCETBON

On 19 Jan 2014, at 01:14, Cyril Scetbon cyril.scet...@free.fr wrote:

 So 1.2.2 and 1.2.13 have different file versions (ib vs ic).
 We'll test if repairs are impacted by this change.
 
 Thanks
 
 Cyril Scetbon
 
 On 17 Jan 2014, at 05:07, Aaron Morton aa...@thelastpickle.com wrote:
 
 Can you confirm that, cause we'll add a new DC with version 1.2.13 
 (read-only) and we'll upgrade other DCs to 1.2.13 weeks later. We made some 
 tests and didn't notice anything. But we didn't test a node failure.
 
 Depending on the other version you may not be able to run repair. All nodes 
 have to use the same file version; the file versions are listed here: 
 https://github.com/apache/cassandra/blob/cassandra-1.2/src/java/org/apache/cassandra/io/sstable/Descriptor.java#L52
 
 Cheers
 
 -
 Aaron Morton
 New Zealand
 @aaronmorton
 
 Co-Founder & Principal Consultant
 Apache Cassandra Consulting
 http://www.thelastpickle.com
 
 On 14/01/2014, at 7:30 am, Robert Coli rc...@eventbrite.com wrote:
 
 On Mon, Jan 13, 2014 at 3:38 AM, Cyril Scetbon cyril.scet...@free.fr 
 wrote:
 Can you confirm that, cause we'll add a new DC with version 1.2.13 
 (read-only) and we'll upgrade other DCs to 1.2.13 weeks later. We made some 
 tests and didn't notice anything. But we didn't test a node failure.
 
 In general adding nodes at a new version is not supported, whether a single 
 node or an entire DC of nodes.
 
 =Rob
  
 



Re: upgrade from cassandra 1.2.3 -> 1.2.13 + start using SSL

2014-01-20 Thread Cyril Scetbon
(Forget my last mail.)

Hi, I made some tests, which succeeded with all our operations (repair, 
add/remove nodes, ...). The only thing I'm worried about is that I ran into a 
situation where I had a lot of flushes on some nodes. You can find one of my 
system logs at http://pastebin.com/YZKUQLXz. I'm not sure, as I didn't let it 
run for more than 4 minutes, but it seems that there was an infinite loop 
flushing system column families. A whole restart made this error go away, but 
I'm not sure whether it will come back.

Regards 
 -- 
Cyril SCETBON

On 19 Jan 2014, at 01:14, Cyril Scetbon cyril.scet...@free.fr wrote:

 So 1.2.2 and 1.2.13 have different file versions (ib vs ic).
 We'll test if repairs are impacted by this change.
 
 Thanks
 
 Cyril Scetbon
 



Re: Tracking word frequencies

2014-01-20 Thread David Tinker
I haven't actually tried to use that schema yet; it was just my first idea.
If we use that solution our app would have to read the whole table once a
day or so to find the top 5000-ish words.
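
The "fat row" snapshot idea mentioned below would let that daily top-N read
become a single partition slice. A sketch in CQL; table and column names are
illustrative, not from the thread:

-- Hypothetical snapshot table: one partition per day, clustered by rank.
CREATE TABLE IF NOT EXISTS top_words (
  snapshot_date text,   -- e.g. '2014-01-20'
  rank int,
  word text,
  cnt bigint,
  PRIMARY KEY (snapshot_date, rank)
);

-- A daily job computes the top N from word_count and writes them back:
INSERT INTO top_words (snapshot_date, rank, word, cnt)
VALUES ('2014-01-20', 1, 'cassandra', 98765);

-- Reading the top 5000 is then one slice query:
SELECT word, cnt FROM top_words
WHERE snapshot_date = '2014-01-20' LIMIT 5000;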


On Fri, Jan 17, 2014 at 2:49 PM, Jonathan Lacefield jlacefi...@datastax.com
 wrote:

 Hi David,

   How do you know that you are receiving a seek for each row?  Are you
 querying for a specific word at a time or do the queries span multiple
 words, i.e. what's the query pattern? Also, what is your goal for read
 latency?  Most customers can achieve microsecond partition-key-based query
 reads with Cassandra.  This can be done through tuning, data modeling,
 and/or scaling.  Please post a cfhistograms for this table as well as
 provide some details on the specific queries you are running.

 Thanks,

 Jonathan

 Jonathan Lacefield
 Solutions Architect, DataStax
 (404) 822 3487
  http://www.linkedin.com/in/jlacefield



 http://www.datastax.com/what-we-offer/products-services/training/virtual-training


 On Fri, Jan 17, 2014 at 1:41 AM, David Tinker david.tin...@gmail.com wrote:

 I have an app that stores lots of bits of text in Cassandra. One of
 the things I need to do is keep a global word frequency table.
 Something like this:

 CREATE TABLE IF NOT EXISTS word_count (
   word text,
   count counter,  -- "count value" in the original; a counter (or bigint) is presumably intended
   PRIMARY KEY (word)
 );

 This is slow to read as the rows (100's of thousands of them) each
 need a seek. Is there a better way to model this in Cassandra? I could
 periodically snapshot the rows into a fat row in another table I
 suppose.

 Or should I use Redis or something instead? I would prefer to keep it
 all Cassandra if possible.





-- 
http://qdb.io/ Persistent Message Queues With Replay and #RabbitMQ
Integration


Re: Tracking word frequencies

2014-01-20 Thread Colin
When updating, use a table that uses rows of words and increment the count?
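
A minimal sketch of that counter approach (assuming approximate counts are
acceptable, since counter increments are not idempotent under retries):

CREATE TABLE IF NOT EXISTS word_count (
  word text PRIMARY KEY,
  count counter
);

-- Each occurrence of a word becomes one increment:
UPDATE word_count SET count = count + 1 WHERE word = 'cassandra';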

--
Colin 
+1 320 221 9531

 

 On Jan 20, 2014, at 6:58 AM, David Tinker david.tin...@gmail.com wrote:
 
 I haven't actually tried to use that schema yet, it was just my first idea. 
 If we use that solution our app would have to read the whole table once a day 
 or so to find the top 5000'ish words.
 
 


one or more nodes were unavailable.

2014-01-20 Thread Vivek Mishra
Hi,
Trying the CAS feature of Cassandra 2.x and somehow getting the error below:


cqlsh:sample> insert into User(user_id,first_name) values(
fe08e810-81e4-11e3-9470-c3aa8ce77cc4,'vivek1') if not exists;
Unable to complete request: one or more nodes were unavailable.
cqlsh:training>


cqlsh:sample> insert into User(user_id,first_name) values(
fe08e810-81e4-11e3-9470-c3aa8ce77cc4,'vivek1')

It works fine.

Any idea?

-Vivek


Re: one or more nodes were unavailable.

2014-01-20 Thread sankalp kohli
What consistency level are you using?


On Mon, Jan 20, 2014 at 7:16 AM, Vivek Mishra mishra.v...@gmail.com wrote:

 Hi,
 Trying the CAS feature of Cassandra 2.x and somehow getting the error below:


 cqlsh:sample> insert into User(user_id,first_name) values(
 fe08e810-81e4-11e3-9470-c3aa8ce77cc4,'vivek1') if not exists;
 Unable to complete request: one or more nodes were unavailable.
 cqlsh:training>


 cqlsh:sample> insert into User(user_id,first_name) values(
 fe08e810-81e4-11e3-9470-c3aa8ce77cc4,'vivek1')

 It works fine.

 Any idea?

 -Vivek





Re: one or more nodes were unavailable.

2014-01-20 Thread sankalp kohli
Also, do you have any nodes down? It is possible to reach write
consistency but still not be able to do CAS because some machines are down.


On Mon, Jan 20, 2014 at 12:16 PM, sankalp kohli kohlisank...@gmail.com wrote:

 What consistency level are you using?








HintedHandoff Exception and node holding hints to random tokens

2014-01-20 Thread Allan C
Hi,

I’m hitting a very odd issue with HintedHandoff on 1 node in my 12 node cluster 
running 1.2.13. Somehow it’s holding a large amount of hints for tokens that 
have never been part of the cluster. Pretty sure this is causing a bunch of 
memory pressure somehow that’s causing the node to go down.

I’d like to find out if I can just reset by deleting the hints CF or if there’s 
actually important data in there. I’m tempted to clear the CF and hope that 
fixes it, but a few nodes have been up and down (especially this one) since my 
last repair and I worry that I won’t be able to get through a full repair given 
the problems with the node currently.

Here’s what I see so far:


* listEndpointsPendingHints returns a list of about 20 tokens that are not part 
of the ring and have never been part of it. I’m not using vnodes, fwiw. 
deleteHintsForEndpoint doesn’t work. It tells me that there’s no host for 
the token.


* The hints CF is oddly large:

     Column Family: hints
SSTable count: 260
Space used (live): 124904685
Space used (total): 124904685
SSTable Compression Ratio: 0.394676439667606
Number of Keys (estimate): 66560
Memtable Columns Count: 0
Memtable Data Size: 0
Memtable Switch Count: 14
Read Count: 113
Read Latency: 757.123 ms.
Write Count: 987
Write Latency: 0.044 ms.
Pending Tasks: 0
Bloom Filter False Positives: 10
Bloom Filter False Ratio: 0.00209
Bloom Filter Space Used: 6528
Compacted row minimum size: 36
Compacted row maximum size: 107964792
Compacted row mean size: 787505
Average live cells per slice (last five minutes): 0.0


* I get this assertion in the logs often:

ERROR [CompactionExecutor:81] 2014-01-20 12:31:22,652 CassandraDaemon.java 
(line 191) Exception in thread Thread[CompactionExecutor:81,1,main]
java.lang.AssertionError: originally calculated column size of 71868452 but now 
it is 71869026
        at 
org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:135)
        at 
org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:160)
        at 
org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:162)
        at 
org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
        at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
        at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:58)
        at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
        at 
org.apache.cassandra.db.compaction.CompactionManager$7.runMayThrow(CompactionManager.java:442)
        at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
        at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
ERROR [HintedHandoff:52] 2014-01-20 12:31:22,652 CassandraDaemon.java (line 
191) Exception in thread Thread[HintedHandoff:52,1,main]
java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
java.lang.AssertionError: originally calculated column size of 71868452 but now 
it is 71869026
        at 
org.apache.cassandra.db.HintedHandOffManager.doDeliverHintsToEndpoint(HintedHandOffManager.java:436)
        at 
org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:282)
        at 
org.apache.cassandra.db.HintedHandOffManager.access$300(HintedHandOffManager.java:90)
        at 
org.apache.cassandra.db.HintedHandOffManager$4.run(HintedHandOffManager.java:502)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
Caused by: java.util.concurrent.ExecutionException: java.lang.AssertionError: 
originally calculated column size of 71868452 but now it is 71869026
        at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
        at java.util.concurrent.FutureTask.get(FutureTask.java:83)
        at 
org.apache.cassandra.db.HintedHandOffManager.doDeliverHintsToEndpoint(HintedHandOffManager.java:432)
        ... 6 more
Caused by: java.lang.AssertionError: originally calculated column size of 
71868452 but now it is 71869026
        at 
org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:135)
        at 
org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:160)
        at 
org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:162)
        at 

Re: HintedHandoff Exception and node holding hints to random tokens

2014-01-20 Thread sankalp kohli
Is this happening on one node or all? Did you try to delete the hints via
JMX on other nodes?


On Mon, Jan 20, 2014 at 12:18 PM, Allan C alla...@gmail.com wrote:

 Hi ,

 I’m hitting a very odd issue with HintedHandoff on 1 node in my 12 node
 cluster running 1.2.13. Somehow it’s holding a large amount of hints for
 tokens that have never been part of the cluster. Pretty sure this is
 causing a bunch of memory pressure somehow that’s causing the node to go
 down.

 I’d like to find out if I can just reset by deleting the hints CF or if
 there’s actually important data in there. I’m tempted to clear the CF and
 hope that fixes it, but a few nodes have been up and down (especially this
 one) since my last repair and I worry that I won’t be able to get through a
 full repair given the problems with the node currently.

 Here’s what I see so far:


 * listEndpointsPendingHints returns a list of about 20 tokens that are not
 part of the ring and have never been part of it. I’m not using vnodes,
 fwiw. deleteHintsForEndpoint doesn’t work. It tells me that there’s no
 host for the token.



Re: HintedHandoff Exception and node holding hints to random tokens

2014-01-20 Thread Allan C
There are 3 other nodes that have a mild case. This one node is worse by an 
order of magnitude. deleteHintsForEndpoint fails with the same error on any of 
the affected nodes.

-Allan

On January 20, 2014 at 12:24:33 PM, sankalp kohli (kohlisank...@gmail.com) 
wrote:

Is this happening on one node or all? Did you try to delete the hints via JMX 
on other nodes? 


On Mon, Jan 20, 2014 at 12:18 PM, Allan C alla...@gmail.com wrote:
Hi ,

I’m hitting a very odd issue with HintedHandoff on 1 node in my 12 node cluster 
running 1.2.13. Somehow it’s holding a large amount of hints for tokens that 
have never been part of the cluster. Pretty sure this is causing a bunch of 
memory pressure somehow that’s causing the node to go down.

I’d like to find out if I can just reset by deleting the hints CF or if there’s 
actually important data in there. I’m tempted to clear the CF and hope that 
fixes it, but a few nodes have been up and down (especially this one) since my 
last repair and I worry that I won’t be able to get through a full repair given 
the problems with the node currently.

Here’s what I see so far:


* listEndpointsPendingHints returns a list of about 20 tokens that are not part 
of the ring and have never been part of it. I’m not using vnodes, fwiw. 
deleteHintsForEndpoint doesn’t work. It tells me that there’s no host for 
the token.



Re: HintedHandoff Exception and node holding hints to random tokens

2014-01-20 Thread sankalp kohli
Yes, as per the code you cannot delete hints for endpoints which are not part of
the ring:

if (!StorageService.instance.getTokenMetadata().isMember(endpoint))
    return;


On Mon, Jan 20, 2014 at 12:34 PM, Allan C alla...@gmail.com wrote:

 There are 3 other nodes that have a mild case. This one node is worse
 by an order of magnitude. deleteHintsForEndpoint fails with the same error
 on any of the affected nodes.

 -Allan

 On January 20, 2014 at 12:24:33 PM, sankalp kohli (kohlisank...@gmail.com)
 wrote:

 Is this happening on one node or all? Did you try to delete the hints via
 JMX on other nodes?


 On Mon, Jan 20, 2014 at 12:18 PM, Allan C alla...@gmail.com wrote:

  Hi ,

 I’m hitting a very odd issue with HintedHandoff on 1 node in my 12 node
 cluster running 1.2.13. Somehow it’s holding a large amount of hints for
 tokens that have never been part of the cluster. Pretty sure this is
 causing a bunch of memory pressure somehow that’s causing the node to go
 down.

 I’d like to find out if I can just reset by deleting the hints CF or if
 there’s actually important data in there. I’m tempted to clear the CF and
 hope that fixes it, but a few nodes have been up and down (especially this
 one) since my last repair and I worry that I won’t be able to get through a
 full repair given the problems with the node currently.

 Here’s what I see so far:


 * listEndpointsPendingHints returns a list of about 20 tokens that are
 not part of the ring and have never been part of it. I’m not using vnodes,
 fwiw. deleteHintsForEndpoint doesn’t work. It tells me that there’s no
 host for the token.



Re: HintedHandoff Exception and node holding hints to random tokens

2014-01-20 Thread Allan C
Certainly makes sense to not allow it. Any idea why the node would be holding 
hints for tokens that don’t exist?

-Allan
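
For reference, the reset discussed in this thread (clearing the hints CF, then
repairing) is commonly done from cqlsh as in the sketch below. It assumes
losing the queued hints is acceptable, and is worth verifying against the
exact 1.2.x version in use:

-- On the affected node (pending hints are discarded):
TRUNCATE system.hints;
-- Then run a repair so the data those hints would have delivered is re-synced.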

On January 20, 2014 at 1:09:51 PM, sankalp kohli (kohlisank...@gmail.com) wrote:

Yes, as per the code you cannot delete hints for endpoints which are not part of the 
ring:

if (!StorageService.instance.getTokenMetadata().isMember(endpoint))
    return;


On Mon, Jan 20, 2014 at 12:34 PM, Allan C alla...@gmail.com wrote:
There are 3 other nodes that have a mild case. This one node is worse by an 
order of magnitude. deleteHintsForEndpoint fails with the same error on any of 
the affected nodes.

-Allan

On January 20, 2014 at 12:24:33 PM, sankalp kohli (kohlisank...@gmail.com) 
wrote:

Is this happening on one node or all? Did you try to delete the hints via JMX 
on other nodes? 


On Mon, Jan 20, 2014 at 12:18 PM, Allan C alla...@gmail.com wrote:
Hi ,

I’m hitting a very odd issue with HintedHandoff on 1 node in my 12 node cluster 
running 1.2.13. Somehow it’s holding a large amount of hints for tokens that 
have never been part of the cluster. Pretty sure this is causing a bunch of 
memory pressure somehow that’s causing the node to go down.

I’d like to find out if I can just reset by deleting the hints CF or if there’s 
actually important data in there. I’m tempted to clear the CF and hope that 
fixes it, but a few nodes have been up and down (especially this one) since my 
last repair and I worry that I won’t be able to get through a full repair given 
the problems with the node currently.

Here’s what I see so far:


* listEndpointsPendingHints returns a list of about 20 tokens that are not part 
of the ring and have never been part of it. I’m not using vnodes, fwiw. 
deleteHintsForEndpoint doesn’t work. It tells me that there’s no host for 
the token.



Data modeling users table with CQL

2014-01-20 Thread Drew Kutcharian
Hey Guys,

I’m new to CQL (but have been using C* for a while now). What would be the best 
way to model a users table using CQL/Cassandra 2.0 lightweight transactions 
where we would like to have:
- A unique TimeUUID as the primary key of the user
- A unique email address used for logging in

In the past I would use Zookeeper and/or Astyanax’s “Uniqueness Constraint”, but 
I want to see how this can be handled natively.

Cheers,

Drew
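
One way people handle this natively is a claim table keyed by the e-mail
address plus the main users table, using INSERT ... IF NOT EXISTS. A sketch
with illustrative names and values (not from the thread):

CREATE TABLE users (
  id    timeuuid PRIMARY KEY,
  email text
);

CREATE TABLE users_by_email (
  email text PRIMARY KEY,
  id    timeuuid
);

-- Step 1: claim the address; [applied] comes back false if it is taken.
INSERT INTO users_by_email (email, id)
VALUES ('d...@example.com', 4d6d1d32-81e4-11e3-9470-c3aa8ce77cc4)
IF NOT EXISTS;

-- Step 2: only if applied, write the user row under the same timeuuid.
INSERT INTO users (id, email)
VALUES (4d6d1d32-81e4-11e3-9470-c3aa8ce77cc4, 'd...@example.com');

Note the two inserts are not atomic together: if the second write fails, the
claim row points at a missing user, which the application has to tolerate or
clean up.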



Question about node tool repair

2014-01-20 Thread Logendran, Dharsan (Dharsan)
Hi,

We have a two-node cluster with a replication factor of 2.   The DB has more 
than 2500 column families (tables).   A nodetool repair -pr on an almost empty 
database (one or two tables have a little data) takes about 30 hours to 
complete.   We are using Cassandra version 2.0.4.   Is there any way for us to 
speed this up?

Thanks
Dharsan


Re: Question about node tool repair

2014-01-20 Thread sankalp kohli
Can you share the logs of both machines? The logs will tell why it is taking
so long.

On a side note, you are using 2500 CFs. I think you need to redesign this
schema.

Also, with a 2-node cluster and RF=2, you might want to add a machine if it is
prod.


On Mon, Jan 20, 2014 at 2:47 PM, Logendran, Dharsan (Dharsan) 
dharsan.logend...@alcatel-lucent.com wrote:

  Hi,



 We have a two-node cluster with a replication factor of 2.   The DB has
 more than 2500 column families (tables).   A nodetool repair -pr on an
 almost empty database (one or two tables have a little data) takes about 30
 hours to complete.   We are using Cassandra version 2.0.4.   Is there any
 way for us to speed this up?



 Thanks

 Dharsan




Exception in thread main java.lang.NoClassDefFoundError

2014-01-20 Thread Le Xu
Hello!
I got this error while trying out Cassandra 1.2.13. The error message looks
like:

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/cassandra/service/CassandraDaemon
Caused by: java.lang.ClassNotFoundException:
org.apache.cassandra.service.CassandraDaemon
        at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:323)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:268)
Could not find the main class:
org.apache.cassandra.service.CassandraDaemon. Program will exit.

I checked JAVA_HOME and CASSANDRA_HOME and they are both set but I still
got the error.

However, based on Brian's reply in this thread:
http://mail-archives.apache.org/mod_mbox/cassandra-user/201307.mbox/%3CCAJHHpg3Lf9tyxwgZNEN3cKH=p9xwms0w4rzqbpt8oriaq9r...@mail.gmail.com%3E
I followed the steps and printed out the $CLASSPATH variable and got:
/home/lexu1/scale/apache-cassandra-1.2.13-src//conf:/home/lexu1/scale/apache-cassandra-1.2.13-src//build/classes/main:/home/lexu1/scale/apache-cassandra-1.2.13-src//build/classes/thrift:/home/lexu1/scale/apache-cassandra-1.2.13-src//lib/antlr-3.2.jar:/home/lexu1/scale/apache-cassandra-1.2.13-src//lib/avro-1.4.0-fixes.jar:/home/lexu1/scale/apache-cassandra-1.2.13-src//lib/avro-1.4.0-sources-fixes.jar:/home/lexu1/scale/apache-cassandra-1.2.13-src//lib/commons-cli-1.1.jar:/home/lexu1/scale/apache-cassandra-1.2.13-src//lib/commons-codec-1.2.jar:/home/lexu1/scale/apache-cassandra-1.2.13-src//lib/commons-lang-2.6.jar:/home/lexu1/scale/apache-cassandra-1.2.13-src//lib/compress-lzf-0.8.4.jar:/home/lexu1/scale/apache-cassandra-1.2.13-src//lib/concurrentlinkedhashmap-lru-1.3.jar:/home/lexu1/scale/apache-cassandra-1.2.13-src//lib/guava-13.0.1.jar:/home/lexu1/scale/apache-cassandra-1.2.13-src//lib/high-scale-lib-1.1.2.jar:/home/lexu1/scale/apache-cassandra-1.2.13-src//lib/jackson-core-asl-1.9.2.jar:/home/lexu1/scale/apache-cassandra-1.2.13-src//lib/jackson-mapper-asl-1.9.2.jar:/home/lexu1/scale/apache-cassandra-1.2.13-src//lib/jamm-0.2.5.jar:/home/lexu1/scale/apache-cassandra-1.2.13-src//lib/jbcrypt-0.3m.jar:/home/lexu1/scale/apache-cassandra-1.2.13-src//lib/jline-1.0.jar:/home/lexu1/scale/apache-cassandra-1.2.13-src//lib/json-simple-1.1.jar:/home/lexu1/scale/apache-cassandra-1.2.13-src//lib/libthrift-0.7.0.jar:/home/lexu1/scale/apache-cassandra-1.2.13-src//lib/log4j-1.2.16.jar:/home/lexu1/scale/apache-cassandra-1.2.13-src//lib/lz4-1.1.0.jar:/home/lexu1/scale/apache-cassandra-1.2.13-src//lib/metrics-core-2.2.0.jar:/home/lexu1/scale/apache-cassandra-1.2.13-src//lib/netty-3.6.6.Final.jar:/home/lexu1/scale/apache-cassandra-1.2.13-src//lib/servlet-api-2.5-20081211.jar:/home/lexu1/scale/apache-cassandra-1.2.13-src//lib/slf4j-api-1.7.2.jar:/home/lexu1/scale/apache-cassandra-1.2.13-src//lib/slf4j-log4j12-1.7.2.jar:/home/lexu1/scale/apache-cassandra-1.2.13-src//lib/snakeyaml-1.6.jar:/home/lexu1/scale/apache-cassandra-1.2.13-src//lib/snappy-java-1.0.5.jar:/home/lexu1/scale/apache-cassandra-1.2.13-src//lib/snaptree-0.1.jar

It includes apache-cassandra-1.2.13-src//build/classes/thrift but not
service. Does the location of CassandraDaemon seem to be the problem? If
it is, then how do I fix it?

Thanks!

Le


Re: one or more nodes were unavailable.

2014-01-20 Thread Vivek Mishra
Single node and default consistency. Running via cqlsh.


On Tue, Jan 21, 2014 at 1:47 AM, sankalp kohli kohlisank...@gmail.com wrote:

 Also, do you have any nodes down? It is possible to reach write
 consistency but still not be able to do CAS because some machines are down.


 On Mon, Jan 20, 2014 at 12:16 PM, sankalp kohli kohlisank...@gmail.com wrote:

 What consistency level are you using?









Re: one or more nodes were unavailable.

2014-01-20 Thread Drew Kutcharian
If you are trying this out on a single node, make sure you set the 
replication_factor of the keyspace to one.
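
A sketch of that fix, assuming the keyspace name from the cqlsh prompt in the
thread. With replication_factor > 1 on a one-node cluster, the Paxos round
behind IF NOT EXISTS cannot reach a quorum of replicas, so plain writes at
CL.ONE succeed while the CAS insert reports nodes as unavailable:

ALTER KEYSPACE sample
WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};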


On Jan 20, 2014, at 7:41 PM, Vivek Mishra mishra.v...@gmail.com wrote:

 Single node and default consistency. Running via cqlsh.
 
 



Re: one or more nodes were unavailable.

2014-01-20 Thread Vivek Mishra
I have downloaded Cassandra 2.x and set it up on a single machine. I started
the Cassandra server and am connecting via cqlsh. I created a column family
and inserted a single record into it (via cqlsh).

Wondering why it gives "no node available" even though simple insert queries
(without CAS) work!

-Vivek


On Tue, Jan 21, 2014 at 11:33 AM, Drew Kutcharian d...@venarc.com wrote:

 If you are trying this out on a single node, make sure you set the
 replication_factor of the keyspace to one.


 On Jan 20, 2014, at 7:41 PM, Vivek Mishra mishra.v...@gmail.com wrote:

 Single node and default consistency. Running via cqlsh.



Re: one or more nodes were unavailable.

2014-01-20 Thread Drew Kutcharian
What do you see when you run “desc keyspace;” in cqlsh?
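
If the replication factor is the culprit, the output would contain a
definition like the following (illustrative, not from the thread):

CREATE KEYSPACE sample WITH replication =
  {'class': 'SimpleStrategy', 'replication_factor': '3'};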


On Jan 20, 2014, at 10:10 PM, Vivek Mishra mishra.v...@gmail.com wrote:

 I have downloaded Cassandra 2.x and set it up on a single machine. I started 
 the Cassandra server and am connecting via cqlsh. I created a column family 
 and inserted a single record into it (via cqlsh).
 
 Wondering why it gives "no node available" even though simple insert queries 
 (without CAS) work!
 
 -Vivek
 
 