[jira] [Resolved] (CASSANDRA-7949) LCS compaction low performance, many pending compactions, nodes are almost idle

2014-10-17 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson resolved CASSANDRA-7949.

Resolution: Duplicate

I think this is a problem that CASSANDRA-7409 will solve.

If you have a system where you can test these things, we would love it if you 
could test the patch in that ticket.

 LCS compaction low performance, many pending compactions, nodes are almost 
 idle
 ---

 Key: CASSANDRA-7949
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7949
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: DSE 4.5.1-1, Cassandra 2.0.8
Reporter: Nikolai Grigoriev
 Attachments: iostats.txt, nodetool_compactionstats.txt, 
 nodetool_tpstats.txt, pending compactions 2day.png, system.log.gz, vmstat.txt


 I've been evaluating a new cluster of 15 nodes (32 cores, 6x800GB SSD disks + 
 2x600GB SAS, 128GB RAM, OEL 6.5) and I've built a simulator that creates a 
 load similar to the load in our future product. Before running the simulator 
 I had to pre-generate enough data. This was done using Java code and the 
 DataStax Java driver. Without going too deep into the details, two tables have 
 been generated. Each table currently has about 55M rows and between a few 
 dozen and a few thousand columns in each row.
 This data generation process produced a massive amount of non-overlapping 
 data, so the activity was write-only and highly parallel. This is not the type 
 of traffic the system will ultimately have to deal with; in the future it will 
 be a mix of reads and updates to existing data. This is just to explain the 
 choice of LCS, not to mention the expensive SSD disk space.
 At some point while generating the data I noticed that the compactions 
 started to pile up. I knew that I was overloading the cluster, but I still 
 wanted the generation test to complete; I was expecting to give the cluster 
 enough time to finish the pending compactions and get ready for real traffic.
 However, after the storm of write requests had stopped, I noticed that the 
 number of pending compactions remained constant (and even climbed up a little 
 bit) on all nodes. After trying to tune some parameters (like setting the 
 compaction bandwidth cap to 0) I noticed a strange pattern: the nodes were 
 compacting one of the CFs in a single stream, using virtually no CPU and no 
 disk I/O. This process was taking hours. It would then be followed by a short 
 burst of a few dozen compactions running in parallel (CPU at 2000%, some disk 
 I/O - up to 10-20%) before getting stuck again for many hours doing one 
 compaction at a time. So it looks like this:
 # nodetool compactionstats
 pending tasks: 3351
           compaction type        keyspace           table       completed           total   unit   progress
                Compaction            myks     table_list1     66499295588   1910515889913  bytes      3.48%
 Active compaction remaining time :        n/a
 # df -h
 ...
 /dev/sdb        1.5T  637G  854G  43% /cassandra-data/disk1
 /dev/sdc        1.5T  425G  1.1T  29% /cassandra-data/disk2
 /dev/sdd        1.5T  429G  1.1T  29% /cassandra-data/disk3
 # find . -name '*table_list1*Data*' | grep -v snapshot | wc -l
 1310
 Among these files I see:
 1043 files of 161MB (my sstable size is 160MB)
 9 large files - 3 between 1 and 2GB, 3 of 5-8GB, and ones of 55GB, 70GB and 370GB
 263 files of various sizes - between a few dozen KB and 160MB
 I've been running the heavy load for about 1.5 days, it's been close to 3 
 days since then, and the number of pending compactions does not go down.
 I have applied one of the not-so-obvious recommendations - disabling 
 multithreaded compaction - and that seems to be helping a bit: some nodes have 
 started to have fewer pending compactions, about half of the cluster in fact. 
 But even there I see them sitting idle most of the time, lazily compacting in 
 one stream with CPU at ~140% and occasionally doing bursts of compaction work 
 for a few minutes.
 I am wondering if this is really a bug, or something in the LCS logic that 
 would manifest itself only in such an edge-case scenario where lots of unique 
 data is loaded quickly.
 By the way, I see this pattern for only one of the two tables - the one that 
 has about 4 times more data than the other (space-wise; the number of rows is 
 the same). It looks like all these pending compactions are really only for 
 that larger table.
 I'll be attaching the relevant logs shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-7368) Compaction stops after org.apache.cassandra.io.sstable.CorruptSSTableException

2014-10-17 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson resolved CASSANDRA-7368.

Resolution: Cannot Reproduce

The compaction stopping can have a few causes (that are now fixed): first, 
CASSANDRA-7745, where we wrongly said that there were no more compactions 
to do, and second, the fact that multi-threaded compaction was really shaky 
and is now gone (in 2.0).

I would recommend upgrading to a newer version and trying to reproduce it there.

 Compaction stops after org.apache.cassandra.io.sstable.CorruptSSTableException
 --

 Key: CASSANDRA-7368
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7368
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: OS: RHEL 6.5
 Cassandra version: 1.2.16
Reporter: Francois Richard
Assignee: Marcus Eriksson

 Hi,
 We are seeing a case where compaction stops completely on a node after an 
 exception related to org.apache.cassandra.io.sstable.CorruptSSTableException.
 nodetool compactionstats remains at the same level for hours:
 {code}
 pending tasks: 1451
           compaction type   keyspace             column family     completed       total   unit   progress
                Compaction   SyncCore   ContactPrefixBytesIndex     257799931   376785179   bytes     68.42%
 Active compaction remaining time :        n/a
 {code}
 Here is the exception log:
 {code}
 ERROR [Deserialize 
 SSTableReader(path='/home/y/var/cassandra/data/SyncCore/ContactPrefixBytesIndex/SyncCore-ContactPrefixBytesIndex-ic-116118-Data.db')]
  2014-06-09 06:39:37,570 CassandraDaemon.java (line 191) Exception in thread 
 Thread[Deserialize 
 SSTableReader(path='/home/y/var/cassandra/data/SyncCore/ContactPrefixBytesIndex/SyncCore-ContactPrefixBytesIndex-ic-116118-Data.db'),1,main]
 org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.IOException: 
 dataSize of 7421941880990663551 starting at 257836699 would be larger than 
 file 
 /home/y/var/cassandra/data/SyncCore/ContactPrefixBytesIndex/SyncCore-ContactPrefixBytesIndex-ic-116118-Data.db
  length 376785179
   at 
 org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:167)
   at 
 org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:83)
   at 
 org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:69)
   at 
 org.apache.cassandra.io.sstable.SSTableScanner$KeyScanningIterator.next(SSTableScanner.java:180)
   at 
 org.apache.cassandra.io.sstable.SSTableScanner$KeyScanningIterator.next(SSTableScanner.java:155)
   at 
 org.apache.cassandra.io.sstable.SSTableScanner.next(SSTableScanner.java:142)
   at 
 org.apache.cassandra.io.sstable.SSTableScanner.next(SSTableScanner.java:38)
   at 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategy$LeveledScanner.computeNext(LeveledCompactionStrategy.java:238)
   at 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategy$LeveledScanner.computeNext(LeveledCompactionStrategy.java:207)
   at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
   at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 --
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7966) 1.2.18 -> 2.0.10 upgrade compactions_in_progress: java.lang.IllegalArgumentException

2014-10-17 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174763#comment-14174763
 ] 

Marcus Eriksson commented on CASSANDRA-7966:


Could you post a bit more of the log leading up to that exception?

 1.2.18 -> 2.0.10 upgrade compactions_in_progress: 
 java.lang.IllegalArgumentException
 

 Key: CASSANDRA-7966
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7966
 Project: Cassandra
  Issue Type: Bug
 Environment: JDK 1.7
Reporter: Karl Mueller
Assignee: Marcus Eriksson
Priority: Minor

 This happened on a new node when starting 2.0.10 after 1.2.18, with a complete 
 upgradesstables run:
 {noformat}
  INFO 15:31:11,532 Enqueuing flush of 
 Memtable-compactions_in_progress@1366724594(0/0 serialized/live bytes, 1 ops)
  INFO 15:31:11,532 Writing Memtable-compactions_in_progress@1366724594(0/0 
 serialized/live bytes, 1 ops)
  INFO 15:31:11,547 Completed flushing 
 /data2/data-cassandra/system/compactions_in_progress/system-compactions_in_progress-jb-10-Data.db
  (42 bytes) for commitlog position ReplayPosition(segmentId=1410993002452, 
 position=164409)
 ERROR 15:31:11,550 Exception in thread Thread[CompactionExecutor:36,1,main]
 java.lang.IllegalArgumentException
 at java.nio.Buffer.limit(Buffer.java:267)
 at 
 org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:587)
 at 
 org.apache.cassandra.utils.ByteBufferUtil.readBytesWithShortLength(ByteBufferUtil.java:596)
 at 
 org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:61)
 at 
 org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:36)
 at 
 org.apache.cassandra.db.ArrayBackedSortedColumns.addColumn(ArrayBackedSortedColumns.java:112)
 at 
 org.apache.cassandra.db.ColumnFamily.addColumn(ColumnFamily.java:116)
 at org.apache.cassandra.db.ColumnFamily.addAtom(ColumnFamily.java:150)
 at 
 org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:186)
 at 
 org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:98)
 at 
 org.apache.cassandra.db.compaction.PrecompactedRow.<init>(PrecompactedRow.java:85)
 at 
 org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:196)
 at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:74)
 at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:55)
 at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:115)
 at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:143)
 at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
 at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:724)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8134) cassandra crashes sporadically on windows

2014-10-17 Thread Stefan Gusenbauer (JIRA)
Stefan Gusenbauer created CASSANDRA-8134:


 Summary: cassandra crashes sporadically on windows
 Key: CASSANDRA-8134
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8134
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: Windows Server 2012 R2 , 64 bit Build 9600

CPU:total 2 (2 cores per cpu, 1 threads per core) family 6 model 37 stepping 1, 
cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, sse4.1, sse4.2, popcnt, aes, tsc, 
tscinvbit

Memory: 4k page, physical 8388148k(3802204k free), swap 8388148k(4088948k free)

vm_info: Java HotSpot(TM) 64-Bit Server VM (24.60-b09) for windows-amd64 JRE 
(1.7.0_60-b19), built on May 7 2014 12:55:18 by java_re with unknown MS 
VC++:1600
Reporter: Stefan Gusenbauer
 Attachments: hs_err_pid1180.log, hs_err_pid5732.log

During our test runs Cassandra crashes from time to time with the stack trace 
below.

A similar bug can be found here: 
https://issues.apache.org/jira/browse/CASSANDRA-5256

The operating system is:

--- S Y S T E M ---

OS: Windows Server 2012 R2 , 64 bit Build 9600

CPU:total 2 (2 cores per cpu, 1 threads per core) family 6 model 37 stepping 1, 
cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, sse4.1, sse4.2, popcnt, aes, tsc, 
tscinvbit

Memory: 4k page, physical 8388148k(3802204k free), swap 8388148k(4088948k free)

vm_info: Java HotSpot(TM) 64-Bit Server VM (24.60-b09) for windows-amd64 JRE 
(1.7.0_60-b19), built on May 7 2014 12:55:18 by java_re with unknown MS 
VC++:1600

time: Wed Oct 15 09:32:30 2014 
elapsed time: 16 seconds

Attached are several hs_err files as well.

j org.apache.cassandra.io.util.Memory.getLong(J)J+14 
j 
org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(J)Lorg/apache/cassandra/io/compress/CompressionMetadata$Chunk;+53
 
j org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer()V+9 
j org.apache.cassandra.io.compress.CompressedThrottledReader.reBuffer()V+13 
J 258 C2 org.apache.cassandra.io.util.RandomAccessReader.read()I (128 bytes) @ 
0x0250cbcc [0x0250cae0+0xec] 
J 306 C2 java.io.RandomAccessFile.readUnsignedShort()I (33 bytes) @ 
0x025475e4 [0x02547480+0x164] 
J 307 C2 
org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(Ljava/io/DataInput;)Ljava/nio/ByteBuffer;
 (9 bytes) @ 0x0254c290 [0x0254c140+0x150] 
j 
org.apache.cassandra.io.sstable.SSTableScanner$KeyScanningIterator.next()Lorg/apache/cassandra/db/columniterator/OnDiskAtomIterator;+65
 
j 
org.apache.cassandra.io.sstable.SSTableScanner$KeyScanningIterator.next()Ljava/lang/Object;+1
 
j 
org.apache.cassandra.io.sstable.SSTableScanner.next()Lorg/apache/cassandra/db/columniterator/OnDiskAtomIterator;+41
 
j org.apache.cassandra.io.sstable.SSTableScanner.next()Ljava/lang/Object;+1 
j org.apache.cassandra.utils.MergeIterator$Candidate.advance()Z+19 
j 
org.apache.cassandra.utils.MergeIterator$ManyToOne.init(Ljava/util/List;Ljava/util/Comparator;Lorg/apache/cassandra/utils/MergeIterator$Reducer;)V+71
 
j 
org.apache.cassandra.utils.MergeIterator.get(Ljava/util/List;Ljava/util/Comparator;Lorg/apache/cassandra/utils/MergeIterator$Reducer;)Lorg/apache/cassandra/utils/IMergeIterator;+46
 
j 
org.apache.cassandra.db.compaction.CompactionIterable.iterator()Lorg/apache/cassandra/utils/CloseableIterator;+15
 
j 
org.apache.cassandra.db.compaction.CompactionTask.runWith(Ljava/io/File;)V+319 
j org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow()V+89 
j org.apache.cassandra.utils.WrappedRunnable.run()V+1 
j 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(Lorg/apache/cassandra/db/compaction/CompactionManager$CompactionExecutorStatsCollector;)I+6
 
j 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(Lorg/apache/cassandra/db/compaction/CompactionManager$CompactionExecutorStatsCollector;)I+2
 
j 
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run()V+164
 
j java.util.concurrent.Executors$RunnableAdapter.call()Ljava/lang/Object;+4 
j java.util.concurrent.FutureTask.run()V+42 
j 
java.util.concurrent.ThreadPoolExecutor.runWorker(Ljava/util/concurrent/ThreadPoolExecutor$Worker;)V+95
 
j java.util.concurrent.ThreadPoolExecutor$Worker.run()V+5 
j java.lang.Thread.run()V+11 
v ~StubRoutines::call_stub 
V [jvm.dll+0x1ce043]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8127) Support vertical listing in cqlsh

2014-10-17 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174858#comment-14174858
 ] 

Jens Rantil commented on CASSANDRA-8127:


 This has been supported for ages using EXPAND ON.

Good news! I just found the documentation (for anyone curious): 
http://www.datastax.com/documentation/cql/3.1/cql/cql_reference/expand.html
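
For anyone who wants a quick look, here is a rough sketch of what expanded output looks like in cqlsh, using a table like the testtable example quoted below (exact formatting may vary by version):
{noformat}
cqlsh> EXPAND ON;
cqlsh> SELECT * FROM testtable;

@ Row 1
------+---
 a    | 1
 b    | 2
 c    | 3

@ Row 2
------+---
 a    | 4
 b    | 5
 c    | 6
...
{noformat}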

 Support vertical listing in cqlsh
 -

 Key: CASSANDRA-8127
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8127
 Project: Cassandra
  Issue Type: Wish
  Components: Tools
Reporter: Jens Rantil
Priority: Minor
  Labels: cqlsh

 The MySQL CLI has this neat feature where you can end queries with `\G` and it 
 will display each result row vertically. For tables with many columns, or for 
 users with a vertical screen orientation or a smaller resolution, this is 
 highly useful. Every time I start `cqlsh` I feel this feature would help with 
 some of the tables that have many columns. See the example below:
 {noformat}
 mysql> SELECT * FROM testtable;
 +--+--+--+
 | a| b| c|
 +--+--+--+
 |1 |2 |3 |
 |4 |5 |6 |
 |6 |7 |8 |
 +--+--+--+
 3 rows in set (0.00 sec)
 mysql> SELECT * FROM testtable\G
 *** 1. row ***
 a: 1
 b: 2
 c: 3
 *** 2. row ***
 a: 4
 b: 5
 c: 6
 *** 3. row ***
 a: 6
 b: 7
 c: 8
 3 rows in set (0.00 sec)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8131) Short-circuited query results from collection index query

2014-10-17 Thread Catalin Alexandru Zamfir (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174860#comment-14174860
 ] 

Catalin Alexandru Zamfir commented on CASSANDRA-8131:
-

Here's our version of Cassandra (if it helps):
{noformat}
[cqlsh 5.0.1 | Cassandra 2.1.0 | CQL spec 3.2.0 | Native protocol v3]
Use HELP for help.
cqlsh>
{noformat}

 Short-circuited query results from collection index query
 -

 Key: CASSANDRA-8131
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8131
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Debian Wheezy, Oracle JDK, Cassandra 2.1
Reporter: Catalin Alexandru Zamfir
Assignee: Sylvain Lebresne
  Labels: collections, cql3, cqlsh, query, queryparser, triaged
 Fix For: 2.1.0


 After watching Jonathan's 2014 summit video, I wanted to give collection 
 indexes a try, as they seem to be a fit for a search-by-key/values usage 
 pattern we have in our setup. While doing some test queries that I expect 
 users would run against the table, a short-circuit behavior came up.
 Here's the whole transcript:
 {noformat}
 CREATE TABLE by_sets (id int PRIMARY KEY, datakeys set<text>, datavars 
 set<text>);
 CREATE INDEX by_sets_datakeys ON by_sets (datakeys);
 CREATE INDEX by_sets_datavars ON by_sets (datavars);
 INSERT INTO by_sets (id, datakeys, datavars) VALUES (1, {'a'}, {'b'});
 INSERT INTO by_sets (id, datakeys, datavars) VALUES (2, {'c'}, {'d'});
 INSERT INTO by_sets (id, datakeys, datavars) VALUES (3, {'e'}, {'f'});
 INSERT INTO by_sets (id, datakeys, datavars) VALUES (4, {'a'}, {'z'});
 SELECT * FROM by_sets;
  id | datakeys | datavars
 ----+----------+----------
   1 |    {'a'} |    {'b'}
   2 |    {'c'} |    {'d'}
   4 |    {'a'} |    {'z'}
   3 |    {'e'} |    {'f'}
 {noformat}
 We then tried this query which short-circuited:
 {noformat}
 SELECT * FROM by_sets WHERE datakeys CONTAINS 'a' AND datakeys CONTAINS 'c';
  id | datakeys | datavars
 ----+----------+----------
   1 |    {'a'} |    {'b'}
   4 |    {'a'} |    {'z'}
 (2 rows)
 {noformat}
 Instead of receiving 3 rows, which match datakeys CONTAINS 'a' AND 
 datakeys CONTAINS 'c', we only got the first.
 Doing the same, but with CONTAINS 'c' first, ignores the second AND.
 {noformat}
 SELECT * FROM by_sets WHERE datakeys CONTAINS 'c' AND datakeys CONTAINS 'a' ;
  id | datakeys | datavars
 ----+----------+----------
   2 |    {'c'} |    {'d'}
 (1 rows)
 {noformat}
 Also, as a side note, I have indexes on both datakeys and datavars. But when 
 trying to run a query such as:
 {noformat}
 select * from by_sets WHERE datakeys CONTAINS 'a' AND datavars CONTAINS 'z';
 code=2200 [Invalid query] message="Cannot execute this query as it might 
 involve data filtering and thus may have unpredictable performance. 
 If you want to execute this query despite the performance unpredictability, 
 use ALLOW FILTERING"
 {noformat}
 The second column, after the AND (even if I reverse the order), requires an 
 ALLOW FILTERING clause even though the column is indexed; an in-memory join of 
 the primary keys of these sets on the coordinator could build up the result.
 Could anyone explain the short-circuit behavior?
 And the requirement for ALLOW FILTERING on a second indexed column?
 If these aren't bugs but intended behavior, they - or at least their 
 limitations - should be documented better.
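
 For reference, the workaround the error message suggests is simply to append 
 ALLOW FILTERING (a sketch against the by_sets table above):
 {noformat}
 SELECT * FROM by_sets WHERE datakeys CONTAINS 'a' AND datavars CONTAINS 'z' ALLOW FILTERING;
 {noformat}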



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8131) Short-circuited query results from collection index query

2014-10-17 Thread Catalin Alexandru Zamfir (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174860#comment-14174860
 ] 

Catalin Alexandru Zamfir edited comment on CASSANDRA-8131 at 10/17/14 8:54 AM:
---

Here's our version of Cassandra (if it helps):
{noformat}
# Note our keyspace has NetworkTopologyStrategy and RF: 3.

[cqlsh 5.0.1 | Cassandra 2.1.0 | CQL spec 3.2.0 | Native protocol v3]
Use HELP for help.
cqlsh>
{noformat}


was (Author: antauri):
Here's our version of Cassandra (if it helps):
{noformat}
[cqlsh 5.0.1 | Cassandra 2.1.0 | CQL spec 3.2.0 | Native protocol v3]
Use HELP for help.
cqlsh>
{noformat}

 Short-circuited query results from collection index query
 -

 Key: CASSANDRA-8131
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8131
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Debian Wheezy, Oracle JDK, Cassandra 2.1
Reporter: Catalin Alexandru Zamfir
Assignee: Sylvain Lebresne
  Labels: collections, cql3, cqlsh, query, queryparser, triaged
 Fix For: 2.1.0


 After watching Jonathan's 2014 summit video, I wanted to give collection 
 indexes a try, as they seem to be a fit for a search-by-key/values usage 
 pattern we have in our setup. While doing some test queries that I expect 
 users would run against the table, a short-circuit behavior came up.
 Here's the whole transcript:
 {noformat}
 CREATE TABLE by_sets (id int PRIMARY KEY, datakeys set<text>, datavars 
 set<text>);
 CREATE INDEX by_sets_datakeys ON by_sets (datakeys);
 CREATE INDEX by_sets_datavars ON by_sets (datavars);
 INSERT INTO by_sets (id, datakeys, datavars) VALUES (1, {'a'}, {'b'});
 INSERT INTO by_sets (id, datakeys, datavars) VALUES (2, {'c'}, {'d'});
 INSERT INTO by_sets (id, datakeys, datavars) VALUES (3, {'e'}, {'f'});
 INSERT INTO by_sets (id, datakeys, datavars) VALUES (4, {'a'}, {'z'});
 SELECT * FROM by_sets;
  id | datakeys | datavars
 ----+----------+----------
   1 |    {'a'} |    {'b'}
   2 |    {'c'} |    {'d'}
   4 |    {'a'} |    {'z'}
   3 |    {'e'} |    {'f'}
 {noformat}
 We then tried this query which short-circuited:
 {noformat}
 SELECT * FROM by_sets WHERE datakeys CONTAINS 'a' AND datakeys CONTAINS 'c';
  id | datakeys | datavars
 ----+----------+----------
   1 |    {'a'} |    {'b'}
   4 |    {'a'} |    {'z'}
 (2 rows)
 {noformat}
 Instead of receiving 3 rows, which match datakeys CONTAINS 'a' AND 
 datakeys CONTAINS 'c', we only got the first.
 Doing the same, but with CONTAINS 'c' first, ignores the second AND.
 {noformat}
 SELECT * FROM by_sets WHERE datakeys CONTAINS 'c' AND datakeys CONTAINS 'a' ;
  id | datakeys | datavars
 ----+----------+----------
   2 |    {'c'} |    {'d'}
 (1 rows)
 {noformat}
 Also, as a side note, I have indexes on both datakeys and datavars. But when 
 trying to run a query such as:
 {noformat}
 select * from by_sets WHERE datakeys CONTAINS 'a' AND datavars CONTAINS 'z';
 code=2200 [Invalid query] message="Cannot execute this query as it might 
 involve data filtering and thus may have unpredictable performance. 
 If you want to execute this query despite the performance unpredictability, 
 use ALLOW FILTERING"
 {noformat}
 The second column, after the AND (even if I reverse the order), requires an 
 ALLOW FILTERING clause even though the column is indexed; an in-memory join of 
 the primary keys of these sets on the coordinator could build up the result.
 Could anyone explain the short-circuit behavior?
 And the requirement for ALLOW FILTERING on a second indexed column?
 If these aren't bugs but intended behavior, they - or at least their 
 limitations - should be documented better.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8128) Exception when executing UPSERT

2014-10-17 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174866#comment-14174866
 ] 

Jens Rantil commented on CASSANDRA-8128:


 Can you paste the schema for the keyspace and table to help with 
 reproduction? Obfuscating the names of the keyspace, table, and columns is 
 fine.

Sure,

{noformat}
cqlsh:mykeyspace> DESCRIBE TABLE mytable;

CREATE TABLE mytable (
  col1 uuid,
  col2 uuid,
  col3 uuid,
  col4 double,
  col5 uuid,
  col6 text,
  col7 uuid,
  col8 timestamp,
  col9 text,
  col10 text,
  col11 bigint,
  col12 timestamp,
  col13 double,
  col14 text,
  col15 text,
  col16 text,
  col17 double,
  col18 double,
  col19 uuid,
  col20 text,
  col21 double,
  col22 timestamp,
  col23 text,
  col24 text,
  col25 boolean,
  col62 bigint,
  col27 text,
  col28 boolean,
  col29 boolean,
  col30 boolean,
  col31 boolean,
  col32 boolean,
  PRIMARY KEY ((col1), col2)
) WITH
  bloom_filter_fp_chance=0.10 AND
  caching='KEYS_ONLY' AND
  comment='' AND
  dclocal_read_repair_chance=0.10 AND
  gc_grace_seconds=864000 AND
  index_interval=128 AND
  read_repair_chance=0.00 AND
  replicate_on_write='true' AND
  populate_io_cache_on_flush='false' AND
  default_time_to_live=0 AND
  speculative_retry='99.0PERCENTILE' AND
  memtable_flush_period_in_ms=0 AND
  compaction={'class': 'LeveledCompactionStrategy'} AND
  compression={'sstable_compression': 'LZ4Compressor'};
{noformat}

 What do you mean by UPSERT? We have no such keyword in CQL. Do you mean 
 INSERT? or UPDATE? or INSERT ... IF NOT EXISTS? How many rows in the batch? 
 How are you building it?

Sorry, there's no logical difference between INSERT and UPDATE (right?), but I 
should obviously be more clear. I am using spring-data-cassandra to store lists 
of objects. spring-data-cassandra uses the DataStax Java Driver and generates 
the CQL itself. The exception I am getting on the client end can be found here: 
https://jira.spring.io/browse/DATACASS-161. Based on it, I am doing an INSERT 
(the rows don't previously exist in the database). Batches are usually around 
1000-3000 rows. Like I said, smaller batches work.
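
For context, the driver-generated CQL for such a write presumably amounts to a 
batch of inserts along these lines (a rough sketch only - the real statement is 
produced by spring-data-cassandra, and the column list is abbreviated):

{noformat}
BEGIN BATCH
  INSERT INTO mytable (col1, col2, col3, ...) VALUES (?, ?, ?, ...);
  INSERT INTO mytable (col1, col2, col3, ...) VALUES (?, ?, ?, ...);
  -- ... one INSERT per row, ~1000-3000 in total ...
APPLY BATCH;
{noformat}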

 Exception when executing UPSERT
 ---

 Key: CASSANDRA-8128
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8128
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Jens Rantil
Priority: Critical
  Labels: cql3

 I am putting a bunch of (CQL) rows into DataStax DSE 4.5.1-1. Each upsert is 
 for a single partition key with up to ~3000 clustering keys. I understand that 
 overly large upserts aren't recommended, but I wouldn't expect to be getting 
 the following exception anyway:
 {noformat}
 ERROR [Native-Transport-Requests:4205136] 2014-10-16 12:00:38,668 
 ErrorMessage.java (line 222) Unexpected exception during request
 java.lang.IndexOutOfBoundsException: Index: 1749, Size: 1749
 at java.util.ArrayList.rangeCheck(ArrayList.java:635)
 at java.util.ArrayList.get(ArrayList.java:411)
 at 
 org.apache.cassandra.cql3.Constants$Marker.bindAndGet(Constants.java:278)
 at 
 org.apache.cassandra.cql3.Constants$Setter.execute(Constants.java:307)
 at 
 org.apache.cassandra.cql3.statements.UpdateStatement.addUpdateForKey(UpdateStatement.java:99)
 at 
 org.apache.cassandra.cql3.statements.BatchStatement.addStatementMutations(BatchStatement.java:200)
 at 
 org.apache.cassandra.cql3.statements.BatchStatement.getMutations(BatchStatement.java:145)
 at 
 org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:251)
 at 
 org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:232)
 at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:158)
 at 
 com.datastax.bdp.cassandra.cql3.DseQueryHandler.statementExecution(DseQueryHandler.java:207)
 at 
 com.datastax.bdp.cassandra.cql3.DseQueryHandler.process(DseQueryHandler.java:86)
 at 
 org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:119)
 at 
 org.apache.cassandra.transport.Message$Dispatcher.messageReceived(Message.java:304)
 at 
 org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
 at 
 org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
 at 
 org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
 at 
 org.jboss.netty.handler.execution.ChannelUpstreamEventRunnable.doRun(ChannelUpstreamEventRunnable.java:43)
 at 
 org.jboss.netty.handler.execution.ChannelEventRunnable.run(ChannelEventRunnable.java:67)
 at 
 

git commit: Properly reject token function in DELETE statements

2014-10-17 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 e916dff8b -> 6cabd252b


Properly reject token function in DELETE statements

patch by slebresne; reviewed by thobbs for CASSANDRA-7747
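
For illustration, this is the kind of statement the patch now rejects (a sketch 
using a made-up table t whose partition key is k):

    -- now rejected with an InvalidRequestException:
    DELETE FROM t WHERE token(k) > 0;

    -- token() remains valid for range queries in SELECT:
    SELECT * FROM t WHERE token(k) > 0;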


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6cabd252
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6cabd252
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6cabd252

Branch: refs/heads/cassandra-2.0
Commit: 6cabd252be8fd08faf500bf327f14270411af569
Parents: e916dff
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Oct 17 11:29:34 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Oct 17 11:29:34 2014 +0200

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/cql3/SingleColumnRelation.java| 2 ++
 .../apache/cassandra/cql3/statements/ModificationStatement.java | 5 -
 3 files changed, 7 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6cabd252/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 73aaab0..8c4fc15 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.11:
+ * Properly reject the token function in DELETE (CASSANDRA-7747)
  * Force batchlog replay before decommissioning a node (CASSANDRA-7446)
  * Fix hint replay with many accumulated expired hints (CASSANDRA-6998)
  * Fix duplicate results in DISTINCT queries on static columns with query

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6cabd252/src/java/org/apache/cassandra/cql3/SingleColumnRelation.java
--
diff --git a/src/java/org/apache/cassandra/cql3/SingleColumnRelation.java 
b/src/java/org/apache/cassandra/cql3/SingleColumnRelation.java
index 642be66..ee95da0 100644
--- a/src/java/org/apache/cassandra/cql3/SingleColumnRelation.java
+++ b/src/java/org/apache/cassandra/cql3/SingleColumnRelation.java
@@ -99,6 +99,8 @@ public class SingleColumnRelation extends Relation
     {
         if (relationType == Type.IN)
             return String.format("%s IN %s", entity, inValues);
+        else if (onToken)
+            return String.format("token(%s) %s %s", entity, relationType, value);
         else
             return String.format("%s %s %s", entity, relationType, value);
     }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6cabd252/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
index b214e76..006873d 100644
--- a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
@@ -264,10 +264,13 @@ public abstract class ModificationStatement implements CQLStatement, MeasurableForPreparedCache
             if (!(relation instanceof SingleColumnRelation))
             {
                 throw new InvalidRequestException(
-                        String.format("Multi-column relations cannot be used in WHERE clauses for modification statements: %s", relation));
+                        String.format("Multi-column relations cannot be used in WHERE clauses for UPDATE and DELETE statements: %s", relation));
             }
             SingleColumnRelation rel = (SingleColumnRelation) relation;

+            if (rel.onToken)
+                throw new InvalidRequestException(String.format("The token function cannot be used in WHERE clauses for UPDATE and DELETE statements: %s", relation));
+
             CFDefinition.Name name = cfDef.get(rel.getEntity());
             if (name == null)
                 throw new InvalidRequestException(String.format("Unknown key identifier %s", rel.getEntity()));



[1/2] git commit: Properly reject token function in DELETE statements

2014-10-17 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 440824c1a -> 24e4210a4


Properly reject token function in DELETE statements

patch by slebresne; reviewed by thobbs for CASSANDRA-7747


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6cabd252
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6cabd252
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6cabd252

Branch: refs/heads/cassandra-2.1
Commit: 6cabd252be8fd08faf500bf327f14270411af569
Parents: e916dff
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Oct 17 11:29:34 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Oct 17 11:29:34 2014 +0200

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/cql3/SingleColumnRelation.java| 2 ++
 .../apache/cassandra/cql3/statements/ModificationStatement.java | 5 -
 3 files changed, 7 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6cabd252/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 73aaab0..8c4fc15 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.11:
+ * Properly reject the token function in DELETE (CASSANDRA-7747)
  * Force batchlog replay before decommissioning a node (CASSANDRA-7446)
  * Fix hint replay with many accumulated expired hints (CASSANDRA-6998)
  * Fix duplicate results in DISTINCT queries on static columns with query

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6cabd252/src/java/org/apache/cassandra/cql3/SingleColumnRelation.java
--
diff --git a/src/java/org/apache/cassandra/cql3/SingleColumnRelation.java 
b/src/java/org/apache/cassandra/cql3/SingleColumnRelation.java
index 642be66..ee95da0 100644
--- a/src/java/org/apache/cassandra/cql3/SingleColumnRelation.java
+++ b/src/java/org/apache/cassandra/cql3/SingleColumnRelation.java
@@ -99,6 +99,8 @@ public class SingleColumnRelation extends Relation
     {
         if (relationType == Type.IN)
             return String.format("%s IN %s", entity, inValues);
+        else if (onToken)
+            return String.format("token(%s) %s %s", entity, relationType, value);
         else
             return String.format("%s %s %s", entity, relationType, value);
     }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6cabd252/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
index b214e76..006873d 100644
--- a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
@@ -264,10 +264,13 @@ public abstract class ModificationStatement implements CQLStatement, MeasurableForPreparedCache
             if (!(relation instanceof SingleColumnRelation))
             {
                 throw new InvalidRequestException(
-                        String.format("Multi-column relations cannot be used in WHERE clauses for modification statements: %s", relation));
+                        String.format("Multi-column relations cannot be used in WHERE clauses for UPDATE and DELETE statements: %s", relation));
             }
             SingleColumnRelation rel = (SingleColumnRelation) relation;

+            if (rel.onToken)
+                throw new InvalidRequestException(String.format("The token function cannot be used in WHERE clauses for UPDATE and DELETE statements: %s", relation));
+
             CFDefinition.Name name = cfDef.get(rel.getEntity());
             if (name == null)
                 throw new InvalidRequestException(String.format("Unknown key identifier %s", rel.getEntity()));



[1/3] git commit: Properly reject token function in DELETE statements

2014-10-17 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/trunk 0ca6beb68 -> c183f5506


Properly reject token function in DELETE statements

patch by slebresne; reviewed by thobbs for CASSANDRA-7747


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6cabd252
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6cabd252
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6cabd252

Branch: refs/heads/trunk
Commit: 6cabd252be8fd08faf500bf327f14270411af569
Parents: e916dff
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Oct 17 11:29:34 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Oct 17 11:29:34 2014 +0200

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/cql3/SingleColumnRelation.java| 2 ++
 .../apache/cassandra/cql3/statements/ModificationStatement.java | 5 -
 3 files changed, 7 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6cabd252/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 73aaab0..8c4fc15 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.11:
+ * Properly reject the token function in DELETE (CASSANDRA-7747)
  * Force batchlog replay before decommissioning a node (CASSANDRA-7446)
  * Fix hint replay with many accumulated expired hints (CASSANDRA-6998)
  * Fix duplicate results in DISTINCT queries on static columns with query

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6cabd252/src/java/org/apache/cassandra/cql3/SingleColumnRelation.java
--
diff --git a/src/java/org/apache/cassandra/cql3/SingleColumnRelation.java 
b/src/java/org/apache/cassandra/cql3/SingleColumnRelation.java
index 642be66..ee95da0 100644
--- a/src/java/org/apache/cassandra/cql3/SingleColumnRelation.java
+++ b/src/java/org/apache/cassandra/cql3/SingleColumnRelation.java
@@ -99,6 +99,8 @@ public class SingleColumnRelation extends Relation
     {
         if (relationType == Type.IN)
             return String.format("%s IN %s", entity, inValues);
+        else if (onToken)
+            return String.format("token(%s) %s %s", entity, relationType, value);
         else
             return String.format("%s %s %s", entity, relationType, value);
     }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6cabd252/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
index b214e76..006873d 100644
--- a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
@@ -264,10 +264,13 @@ public abstract class ModificationStatement implements CQLStatement, MeasurableForPreparedCache
             if (!(relation instanceof SingleColumnRelation))
             {
                 throw new InvalidRequestException(
-                        String.format("Multi-column relations cannot be used in WHERE clauses for modification statements: %s", relation));
+                        String.format("Multi-column relations cannot be used in WHERE clauses for UPDATE and DELETE statements: %s", relation));
             }
             SingleColumnRelation rel = (SingleColumnRelation) relation;

+            if (rel.onToken)
+                throw new InvalidRequestException(String.format("The token function cannot be used in WHERE clauses for UPDATE and DELETE statements: %s", relation));
+
             CFDefinition.Name name = cfDef.get(rel.getEntity());
             if (name == null)
                 throw new InvalidRequestException(String.format("Unknown key identifier %s", rel.getEntity()));



[2/3] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-10-17 Thread slebresne
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/24e4210a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/24e4210a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/24e4210a

Branch: refs/heads/trunk
Commit: 24e4210a4f7e6d18346aed6114c39e85a115dc6c
Parents: 440824c 6cabd25
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Oct 17 11:34:03 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Oct 17 11:34:03 2014 +0200

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/cql3/SingleColumnRelation.java| 2 ++
 .../apache/cassandra/cql3/statements/ModificationStatement.java | 5 -
 3 files changed, 7 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/24e4210a/CHANGES.txt
--
diff --cc CHANGES.txt
index d7a8904,8c4fc15..942236c
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,89 -1,5 +1,90 @@@
 -2.0.11:
 +2.1.1
 + * Fix IllegalArgumentException when a list of IN values containing tuples
 +   is passed as a single arg to a prepared statement with the v1 or v2
 +   protocol (CASSANDRA-8062)
 + * Fix ClassCastException in DISTINCT query on static columns with
 +   query paging (CASSANDRA-8108)
 + * Fix NPE on null nested UDT inside a set (CASSANDRA-8105)
 + * Fix exception when querying secondary index on set items or map keys
 +   when some clustering columns are specified (CASSANDRA-8073)
 + * Send proper error response when there is an error during native
 +   protocol message decode (CASSANDRA-8118)
 + * Gossip should ignore generation numbers too far in the future 
(CASSANDRA-8113)
 + * Fix NPE when creating a table with frozen sets, lists (CASSANDRA-8104)
 + * Fix high memory use due to tracking reads on incrementally opened sstable
 +   readers (CASSANDRA-8066)
 + * Fix EXECUTE request with skipMetadata=false returning no metadata
 +   (CASSANDRA-8054)
 + * Allow concurrent use of CQLBulkOutputFormat (CASSANDRA-7776)
 + * Shutdown JVM on OOM (CASSANDRA-7507)
 + * Upgrade netty version and enable epoll event loop (CASSANDRA-7761)
 + * Don't duplicate sstables smaller than split size when using
 +   the sstablesplitter tool (CASSANDRA-7616)
 + * Avoid re-parsing already prepared statements (CASSANDRA-7923)
 + * Fix some Thrift slice deletions and updates of COMPACT STORAGE
 +   tables with some clustering columns omitted (CASSANDRA-7990)
 + * Fix filtering for CONTAINS on sets (CASSANDRA-8033)
 + * Properly track added size (CASSANDRA-7239)
 + * Allow compilation in java 8 (CASSANDRA-7208)
 + * Fix Assertion error on RangeTombstoneList diff (CASSANDRA-8013)
 + * Release references to overlapping sstables during compaction 
(CASSANDRA-7819)
 + * Send notification when opening compaction results early (CASSANDRA-8034)
 + * Make native server start block until properly bound (CASSANDRA-7885)
 + * (cqlsh) Fix IPv6 support (CASSANDRA-7988)
 + * Ignore fat clients when checking for endpoint collision (CASSANDRA-7939)
 + * Make sstablerepairedset take a list of files (CASSANDRA-7995)
 + * (cqlsh) Tab completion for indexes on map keys (CASSANDRA-7972)
 + * (cqlsh) Fix UDT field selection in select clause (CASSANDRA-7891)
 + * Fix resource leak in event of corrupt sstable
 + * (cqlsh) Add command line option for cqlshrc file path (CASSANDRA-7131)
 + * Provide visibility into prepared statements churn (CASSANDRA-7921, 
CASSANDRA-7930)
 + * Invalidate prepared statements when their keyspace or table is
 +   dropped (CASSANDRA-7566)
 + * cassandra-stress: fix support for NetworkTopologyStrategy (CASSANDRA-7945)
 + * Fix saving caches when a table is dropped (CASSANDRA-7784)
 + * Add better error checking of new stress profile (CASSANDRA-7716)
 + * Use ThreadLocalRandom and remove FBUtilities.threadLocalRandom 
(CASSANDRA-7934)
 + * Prevent operator mistakes due to simultaneous bootstrap (CASSANDRA-7069)
 + * cassandra-stress supports whitelist mode for node config (CASSANDRA-7658)
 + * GCInspector more closely tracks GC; cassandra-stress and nodetool report 
it (CASSANDRA-7916)
 + * nodetool won't output bogus ownership info without a keyspace 
(CASSANDRA-7173)
 + * Add human readable option to nodetool commands (CASSANDRA-5433)
 + * Don't try to set repairedAt on old sstables (CASSANDRA-7913)
 + * Add metrics for tracking PreparedStatement use (CASSANDRA-7719)
 + * (cqlsh) tab-completion for triggers (CASSANDRA-7824)
 + * (cqlsh) Support for query paging (CASSANDRA-7514)
 + * (cqlsh) Show progress of COPY operations (CASSANDRA-7789)
 + * Add 

[2/2] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-10-17 Thread slebresne
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/24e4210a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/24e4210a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/24e4210a

Branch: refs/heads/cassandra-2.1
Commit: 24e4210a4f7e6d18346aed6114c39e85a115dc6c
Parents: 440824c 6cabd25
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Oct 17 11:34:03 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Oct 17 11:34:03 2014 +0200

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/cql3/SingleColumnRelation.java| 2 ++
 .../apache/cassandra/cql3/statements/ModificationStatement.java | 5 -
 3 files changed, 7 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/24e4210a/CHANGES.txt
--
diff --cc CHANGES.txt
index d7a8904,8c4fc15..942236c
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,89 -1,5 +1,90 @@@
 -2.0.11:
 +2.1.1
 + * Fix IllegalArgumentException when a list of IN values containing tuples
 +   is passed as a single arg to a prepared statement with the v1 or v2
 +   protocol (CASSANDRA-8062)
 + * Fix ClassCastException in DISTINCT query on static columns with
 +   query paging (CASSANDRA-8108)
 + * Fix NPE on null nested UDT inside a set (CASSANDRA-8105)
 + * Fix exception when querying secondary index on set items or map keys
 +   when some clustering columns are specified (CASSANDRA-8073)
 + * Send proper error response when there is an error during native
 +   protocol message decode (CASSANDRA-8118)
 + * Gossip should ignore generation numbers too far in the future 
(CASSANDRA-8113)
 + * Fix NPE when creating a table with frozen sets, lists (CASSANDRA-8104)
 + * Fix high memory use due to tracking reads on incrementally opened sstable
 +   readers (CASSANDRA-8066)
 + * Fix EXECUTE request with skipMetadata=false returning no metadata
 +   (CASSANDRA-8054)
 + * Allow concurrent use of CQLBulkOutputFormat (CASSANDRA-7776)
 + * Shutdown JVM on OOM (CASSANDRA-7507)
 + * Upgrade netty version and enable epoll event loop (CASSANDRA-7761)
 + * Don't duplicate sstables smaller than split size when using
 +   the sstablesplitter tool (CASSANDRA-7616)
 + * Avoid re-parsing already prepared statements (CASSANDRA-7923)
 + * Fix some Thrift slice deletions and updates of COMPACT STORAGE
 +   tables with some clustering columns omitted (CASSANDRA-7990)
 + * Fix filtering for CONTAINS on sets (CASSANDRA-8033)
 + * Properly track added size (CASSANDRA-7239)
 + * Allow compilation in java 8 (CASSANDRA-7208)
 + * Fix Assertion error on RangeTombstoneList diff (CASSANDRA-8013)
 + * Release references to overlapping sstables during compaction 
(CASSANDRA-7819)
 + * Send notification when opening compaction results early (CASSANDRA-8034)
 + * Make native server start block until properly bound (CASSANDRA-7885)
 + * (cqlsh) Fix IPv6 support (CASSANDRA-7988)
 + * Ignore fat clients when checking for endpoint collision (CASSANDRA-7939)
 + * Make sstablerepairedset take a list of files (CASSANDRA-7995)
 + * (cqlsh) Tab completion for indexes on map keys (CASSANDRA-7972)
 + * (cqlsh) Fix UDT field selection in select clause (CASSANDRA-7891)
 + * Fix resource leak in event of corrupt sstable
 + * (cqlsh) Add command line option for cqlshrc file path (CASSANDRA-7131)
 + * Provide visibility into prepared statements churn (CASSANDRA-7921, 
CASSANDRA-7930)
 + * Invalidate prepared statements when their keyspace or table is
 +   dropped (CASSANDRA-7566)
 + * cassandra-stress: fix support for NetworkTopologyStrategy (CASSANDRA-7945)
 + * Fix saving caches when a table is dropped (CASSANDRA-7784)
 + * Add better error checking of new stress profile (CASSANDRA-7716)
 + * Use ThreadLocalRandom and remove FBUtilities.threadLocalRandom 
(CASSANDRA-7934)
 + * Prevent operator mistakes due to simultaneous bootstrap (CASSANDRA-7069)
 + * cassandra-stress supports whitelist mode for node config (CASSANDRA-7658)
 + * GCInspector more closely tracks GC; cassandra-stress and nodetool report 
it (CASSANDRA-7916)
 + * nodetool won't output bogus ownership info without a keyspace 
(CASSANDRA-7173)
 + * Add human readable option to nodetool commands (CASSANDRA-5433)
 + * Don't try to set repairedAt on old sstables (CASSANDRA-7913)
 + * Add metrics for tracking PreparedStatement use (CASSANDRA-7719)
 + * (cqlsh) tab-completion for triggers (CASSANDRA-7824)
 + * (cqlsh) Support for query paging (CASSANDRA-7514)
 + * (cqlsh) Show progress of COPY operations (CASSANDRA-7789)
 

[jira] [Updated] (CASSANDRA-8132) Save or stream hints to a safe place in node replacement

2014-10-17 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-8132:

Fix Version/s: (was: 2.1.1)
   2.1.2

 Save or stream hints to a safe place in node replacement
 

 Key: CASSANDRA-8132
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8132
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Minh Do
Assignee: Minh Do
 Fix For: 2.1.2


 Often, we need to replace a node with a new instance in a cloud environment 
 where all nodes are still alive. To be safe and avoid losing data, we usually 
 make sure all hints are gone before we do this operation.
 Replacement means we just want to shut down the C* process on a node and bring 
 up another instance to take over that node's token.
 However, if the node to be replaced has a lot of stored hints, its 
 HintedHandoffManager seems very slow to send the hints to other nodes. In our 
 case, we tried to replace a node and had to wait for several days before its 
 stored hints were cleared out. As mentioned above, we need all hints on this 
 node to be cleared out before we can terminate it and replace it with a new 
 instance/machine.
 Since this is not a decommission, I am proposing that we have the same 
 hints-streaming mechanism as in the decommission code. Furthermore, there 
 needs to be a nodetool command to trigger this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7927) Kill daemon on any disk error

2014-10-17 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-7927:

Fix Version/s: (was: 2.1.1)
   2.1.2

 Kill daemon on any disk error
 -

 Key: CASSANDRA-7927
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7927
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
 Environment: aws, stock cassandra or dse
Reporter: John Sumsion
Assignee: John Sumsion
  Labels: bootcamp, lhf
 Fix For: 2.1.2

 Attachments: 7927-v1-die.patch


 We got a disk read error on 1.2.13 that didn't trigger the disk failure 
 policy, and I'm trying to hunt down why. In doing so, I saw that there is no 
 disk_failure_policy option for just killing the daemon.
 If we ever get a corrupt sstable, we want to replace the node anyway, because 
 some AWS instance store disks just go bad.
 I want to use the JVMStabilityInspector from CASSANDRA-7507 to do the killing 
 so that it remains standard, so I will base my patch on CASSANDRA-7507.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-6430) DELETE with IF field=value clause doesn't work properly if more than one row is going to be deleted

2014-10-17 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-6430:

Reviewer: Sylvain Lebresne  (was: Benjamin Lerer)

 DELETE with IF field=value clause doesn't work properly if more than one 
 row is going to be deleted
 

 Key: CASSANDRA-6430
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6430
 Project: Cassandra
  Issue Type: Bug
Reporter: Dmitriy Ukhlov
Assignee: Tyler Hobbs
 Fix For: 2.0.11, 2.1.1

 Attachments: 6430-2.0.txt


 CREATE TABLE test(key int, sub_key int, value text, PRIMARY KEY(key, sub_key) 
 );
 INSERT INTO test(key, sub_key, value) VALUES(1,1, '1.1');
 INSERT INTO test(key, sub_key, value) VALUES(1,2, '1.2');
 INSERT INTO test(key, sub_key, value) VALUES(1,3, '1.3');
 SELECT * from test;
  key | sub_key | value
 -----+---------+-------
    1 |       1 |   1.1
    1 |       2 |   1.2
    1 |       3 |   1.3
 DELETE FROM test WHERE key=1 IF value='1.2';
  [applied]
 -----------
      False   <=== I guess the second row should be removed
 SELECT * from test;
  key | sub_key | value
 -----+---------+-------
    1 |       1 |   1.1
    1 |       2 |   1.2
    1 |       3 |   1.3
 (3 rows)
 DELETE FROM test WHERE key=1;
 SELECT * from test;
 (0 rows)  <=== all rows were removed: OK



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


git commit: Refuse conditions on deletes unless full PK is given

2014-10-17 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 6cabd252b -> 29a8b882d


Refuse conditions on deletes unless full PK is given

patch by thobbs; reviewed by slebresne for CASSANDRA-6430
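
For illustration, using the test table from the CASSANDRA-6430 description 
(key int, sub_key int, value text, PRIMARY KEY (key, sub_key)), the behaviour 
after this change is roughly:

    -- rejected: clustering column sub_key is not restricted
    DELETE FROM test WHERE key = 1 IF value = '1.2';

    -- accepted: the full primary key is given
    DELETE FROM test WHERE key = 1 AND sub_key = 2 IF value = '1.2';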


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/29a8b882
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/29a8b882
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/29a8b882

Branch: refs/heads/cassandra-2.0
Commit: 29a8b882d8f4192588b85b77c41c00942508b8ce
Parents: 6cabd25
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Oct 17 11:43:31 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Oct 17 11:43:31 2014 +0200

--
 CHANGES.txt  |  1 +
 .../cql3/statements/DeleteStatement.java | 19 +++
 .../cql3/statements/ModificationStatement.java   | 14 +-
 3 files changed, 33 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/29a8b882/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 8c4fc15..544cf9a 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.11:
+ * Reject conditions on DELETE unless full PK is given (CASSANDRA-6430)
  * Properly reject the token function in DELETE (CASSANDRA-7747)
  * Force batchlog replay before decommissioning a node (CASSANDRA-7446)
  * Fix hint replay with many accumulated expired hints (CASSANDRA-6998)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/29a8b882/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java
index 902add4..6c1c6ed 100644
--- a/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java
@@ -20,6 +20,8 @@ package org.apache.cassandra.cql3.statements;
 import java.nio.ByteBuffer;
 import java.util.*;
 
+import com.google.common.collect.Iterators;
+
 import org.apache.cassandra.cql3.*;
 import org.apache.cassandra.config.CFMetaData;
 import org.apache.cassandra.db.*;
@@ -92,6 +94,23 @@ public class DeleteStatement extends ModificationStatement
 }
 }
 
+    protected void validateWhereClauseForConditions() throws InvalidRequestException
+    {
+        Iterator<CFDefinition.Name> iterator = Iterators.concat(cfm.getCfDef().partitionKeys().iterator(), cfm.getCfDef().clusteringColumns().iterator());
+        while (iterator.hasNext())
+        {
+            CFDefinition.Name name = iterator.next();
+            Restriction restriction = processedKeys.get(name.name);
+            if (restriction == null || !(restriction.isEQ() || restriction.isIN()))
+            {
+                throw new InvalidRequestException(
+                    String.format("DELETE statements must restrict all PRIMARY KEY columns with equality relations in order " +
+                                  "to use IF conditions, but column '%s' is not restricted", name.name));
+            }
+        }
+
+    }
+
     public static class Parsed extends ModificationStatement.Parsed
     {
         private final List<Operation.RawDeletion> deletions;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/29a8b882/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
index 006873d..adb0084 100644
--- a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
@@ -62,7 +62,7 @@ public abstract class ModificationStatement implements CQLStatement, MeasurableF
     public final CFMetaData cfm;
     public final Attributes attrs;
 
-    private final Map<ColumnIdentifier, Restriction> processedKeys = new HashMap<ColumnIdentifier, Restriction>();
+    protected final Map<ColumnIdentifier, Restriction> processedKeys = new HashMap<>();
     private final List<Operation> columnOperations = new ArrayList<Operation>();
 
     private int boundTerms;
@@ -747,6 +747,16 @@ public abstract class ModificationStatement implements CQLStatement, MeasurableF
         return new UpdateParameters(cfm, variables, getTimestamp(now, variables), getTimeToLive(variables), rows);
     }
 
+    /**
+     * If there are conditions on the statement, this is called after the where clause and conditions have 

[2/2] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-10-17 Thread slebresne
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f4037edb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f4037edb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f4037edb

Branch: refs/heads/cassandra-2.1
Commit: f4037edbfb1e471f104e836e96f61619ae030d42
Parents: 24e4210 29a8b88
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Oct 17 11:54:43 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Oct 17 11:54:43 2014 +0200

--
 CHANGES.txt  |  1 +
 .../cql3/statements/DeleteStatement.java | 19 +++
 .../cql3/statements/ModificationStatement.java   | 14 +-
 3 files changed, 33 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f4037edb/CHANGES.txt
--
diff --cc CHANGES.txt
index 942236c,544cf9a..7cd5154
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,89 -1,5 +1,90 @@@
 -2.0.11:
 +2.1.1
 + * Fix IllegalArgumentException when a list of IN values containing tuples
 +   is passed as a single arg to a prepared statement with the v1 or v2
 +   protocol (CASSANDRA-8062)
 + * Fix ClassCastException in DISTINCT query on static columns with
 +   query paging (CASSANDRA-8108)
 + * Fix NPE on null nested UDT inside a set (CASSANDRA-8105)
 + * Fix exception when querying secondary index on set items or map keys
 +   when some clustering columns are specified (CASSANDRA-8073)
 + * Send proper error response when there is an error during native
 +   protocol message decode (CASSANDRA-8118)
 + * Gossip should ignore generation numbers too far in the future 
(CASSANDRA-8113)
 + * Fix NPE when creating a table with frozen sets, lists (CASSANDRA-8104)
 + * Fix high memory use due to tracking reads on incrementally opened sstable
 +   readers (CASSANDRA-8066)
 + * Fix EXECUTE request with skipMetadata=false returning no metadata
 +   (CASSANDRA-8054)
 + * Allow concurrent use of CQLBulkOutputFormat (CASSANDRA-7776)
 + * Shutdown JVM on OOM (CASSANDRA-7507)
 + * Upgrade netty version and enable epoll event loop (CASSANDRA-7761)
 + * Don't duplicate sstables smaller than split size when using
 +   the sstablesplitter tool (CASSANDRA-7616)
 + * Avoid re-parsing already prepared statements (CASSANDRA-7923)
 + * Fix some Thrift slice deletions and updates of COMPACT STORAGE
 +   tables with some clustering columns omitted (CASSANDRA-7990)
 + * Fix filtering for CONTAINS on sets (CASSANDRA-8033)
 + * Properly track added size (CASSANDRA-7239)
 + * Allow compilation in java 8 (CASSANDRA-7208)
 + * Fix Assertion error on RangeTombstoneList diff (CASSANDRA-8013)
 + * Release references to overlapping sstables during compaction 
(CASSANDRA-7819)
 + * Send notification when opening compaction results early (CASSANDRA-8034)
 + * Make native server start block until properly bound (CASSANDRA-7885)
 + * (cqlsh) Fix IPv6 support (CASSANDRA-7988)
 + * Ignore fat clients when checking for endpoint collision (CASSANDRA-7939)
 + * Make sstablerepairedset take a list of files (CASSANDRA-7995)
 + * (cqlsh) Tab completion for indexes on map keys (CASSANDRA-7972)
 + * (cqlsh) Fix UDT field selection in select clause (CASSANDRA-7891)
 + * Fix resource leak in event of corrupt sstable
 + * (cqlsh) Add command line option for cqlshrc file path (CASSANDRA-7131)
 + * Provide visibility into prepared statements churn (CASSANDRA-7921, 
CASSANDRA-7930)
 + * Invalidate prepared statements when their keyspace or table is
 +   dropped (CASSANDRA-7566)
 + * cassandra-stress: fix support for NetworkTopologyStrategy (CASSANDRA-7945)
 + * Fix saving caches when a table is dropped (CASSANDRA-7784)
 + * Add better error checking of new stress profile (CASSANDRA-7716)
 + * Use ThreadLocalRandom and remove FBUtilities.threadLocalRandom 
(CASSANDRA-7934)
 + * Prevent operator mistakes due to simultaneous bootstrap (CASSANDRA-7069)
 + * cassandra-stress supports whitelist mode for node config (CASSANDRA-7658)
 + * GCInspector more closely tracks GC; cassandra-stress and nodetool report 
it (CASSANDRA-7916)
 + * nodetool won't output bogus ownership info without a keyspace 
(CASSANDRA-7173)
 + * Add human readable option to nodetool commands (CASSANDRA-5433)
 + * Don't try to set repairedAt on old sstables (CASSANDRA-7913)
 + * Add metrics for tracking PreparedStatement use (CASSANDRA-7719)
 + * (cqlsh) tab-completion for triggers (CASSANDRA-7824)
 + * (cqlsh) Support for query paging (CASSANDRA-7514)
 + * (cqlsh) Show progress of COPY operations (CASSANDRA-7789)
 + * Add syntax to remove multiple elements from a map (CASSANDRA-6599)
 + * Support non-equals 

[1/2] git commit: Refuse conditions on deletes unless full PK is given

2014-10-17 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 24e4210a4 -> f4037edbf


Refuse conditions on deletes unless full PK is given

patch by thobbs; reviewed by slebresne for CASSANDRA-6430


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/29a8b882
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/29a8b882
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/29a8b882

Branch: refs/heads/cassandra-2.1
Commit: 29a8b882d8f4192588b85b77c41c00942508b8ce
Parents: 6cabd25
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Oct 17 11:43:31 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Oct 17 11:43:31 2014 +0200

--
 CHANGES.txt  |  1 +
 .../cql3/statements/DeleteStatement.java | 19 +++
 .../cql3/statements/ModificationStatement.java   | 14 +-
 3 files changed, 33 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/29a8b882/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 8c4fc15..544cf9a 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.11:
+ * Reject conditions on DELETE unless full PK is given (CASSANDRA-6430)
  * Properly reject the token function DELETE (CASSANDRA-7747)
  * Force batchlog replay before decommissioning a node (CASSANDRA-7446)
  * Fix hint replay with many accumulated expired hints (CASSANDRA-6998)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/29a8b882/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java
index 902add4..6c1c6ed 100644
--- a/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java
@@ -20,6 +20,8 @@ package org.apache.cassandra.cql3.statements;
 import java.nio.ByteBuffer;
 import java.util.*;
 
+import com.google.common.collect.Iterators;
+
 import org.apache.cassandra.cql3.*;
 import org.apache.cassandra.config.CFMetaData;
 import org.apache.cassandra.db.*;
@@ -92,6 +94,23 @@ public class DeleteStatement extends ModificationStatement
 }
 }
 
+    protected void validateWhereClauseForConditions() throws InvalidRequestException
+    {
+        Iterator<CFDefinition.Name> iterator = Iterators.concat(cfm.getCfDef().partitionKeys().iterator(), cfm.getCfDef().clusteringColumns().iterator());
+        while (iterator.hasNext())
+        {
+            CFDefinition.Name name = iterator.next();
+            Restriction restriction = processedKeys.get(name.name);
+            if (restriction == null || !(restriction.isEQ() || restriction.isIN()))
+            {
+                throw new InvalidRequestException(
+                    String.format("DELETE statements must restrict all PRIMARY KEY columns with equality relations in order " +
+                                  "to use IF conditions, but column '%s' is not restricted", name.name));
+            }
+        }
+
+    }
+
     public static class Parsed extends ModificationStatement.Parsed
     {
         private final List<Operation.RawDeletion> deletions;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/29a8b882/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
index 006873d..adb0084 100644
--- a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
@@ -62,7 +62,7 @@ public abstract class ModificationStatement implements CQLStatement, MeasurableF
     public final CFMetaData cfm;
     public final Attributes attrs;
 
-    private final Map<ColumnIdentifier, Restriction> processedKeys = new HashMap<ColumnIdentifier, Restriction>();
+    protected final Map<ColumnIdentifier, Restriction> processedKeys = new HashMap<>();
     private final List<Operation> columnOperations = new ArrayList<Operation>();
 
     private int boundTerms;
@@ -747,6 +747,16 @@ public abstract class ModificationStatement implements CQLStatement, MeasurableF
         return new UpdateParameters(cfm, variables, getTimestamp(now, variables), getTimeToLive(variables), rows);
     }
 
+    /**
+     * If there are conditions on the statement, this is called after the where clause and conditions have 

[1/3] git commit: Refuse conditions on deletes unless full PK is given

2014-10-17 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/trunk c183f5506 -> fea7d9a03


Refuse conditions on deletes unless full PK is given

patch by thobbs; reviewed by slebresne for CASSANDRA-6430


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/29a8b882
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/29a8b882
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/29a8b882

Branch: refs/heads/trunk
Commit: 29a8b882d8f4192588b85b77c41c00942508b8ce
Parents: 6cabd25
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Oct 17 11:43:31 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Oct 17 11:43:31 2014 +0200

--
 CHANGES.txt  |  1 +
 .../cql3/statements/DeleteStatement.java | 19 +++
 .../cql3/statements/ModificationStatement.java   | 14 +-
 3 files changed, 33 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/29a8b882/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 8c4fc15..544cf9a 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.11:
+ * Reject conditions on DELETE unless full PK is given (CASSANDRA-6430)
  * Properly reject the token function DELETE (CASSANDRA-7747)
  * Force batchlog replay before decommissioning a node (CASSANDRA-7446)
  * Fix hint replay with many accumulated expired hints (CASSANDRA-6998)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/29a8b882/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java
index 902add4..6c1c6ed 100644
--- a/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java
@@ -20,6 +20,8 @@ package org.apache.cassandra.cql3.statements;
 import java.nio.ByteBuffer;
 import java.util.*;
 
+import com.google.common.collect.Iterators;
+
 import org.apache.cassandra.cql3.*;
 import org.apache.cassandra.config.CFMetaData;
 import org.apache.cassandra.db.*;
@@ -92,6 +94,23 @@ public class DeleteStatement extends ModificationStatement
 }
 }
 
+    protected void validateWhereClauseForConditions() throws InvalidRequestException
+    {
+        Iterator<CFDefinition.Name> iterator = Iterators.concat(cfm.getCfDef().partitionKeys().iterator(), cfm.getCfDef().clusteringColumns().iterator());
+        while (iterator.hasNext())
+        {
+            CFDefinition.Name name = iterator.next();
+            Restriction restriction = processedKeys.get(name.name);
+            if (restriction == null || !(restriction.isEQ() || restriction.isIN()))
+            {
+                throw new InvalidRequestException(
+                    String.format("DELETE statements must restrict all PRIMARY KEY columns with equality relations in order " +
+                                  "to use IF conditions, but column '%s' is not restricted", name.name));
+            }
+        }
+
+    }
+
     public static class Parsed extends ModificationStatement.Parsed
     {
         private final List<Operation.RawDeletion> deletions;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/29a8b882/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
index 006873d..adb0084 100644
--- a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
@@ -62,7 +62,7 @@ public abstract class ModificationStatement implements CQLStatement, MeasurableF
     public final CFMetaData cfm;
     public final Attributes attrs;
 
-    private final Map<ColumnIdentifier, Restriction> processedKeys = new HashMap<ColumnIdentifier, Restriction>();
+    protected final Map<ColumnIdentifier, Restriction> processedKeys = new HashMap<>();
     private final List<Operation> columnOperations = new ArrayList<Operation>();
 
     private int boundTerms;
@@ -747,6 +747,16 @@ public abstract class ModificationStatement implements CQLStatement, MeasurableF
         return new UpdateParameters(cfm, variables, getTimestamp(now, variables), getTimeToLive(variables), rows);
     }
 
+    /**
+     * If there are conditions on the statement, this is called after the where clause and conditions have been
+     * 

[3/3] git commit: Merge branch 'cassandra-2.1' into trunk

2014-10-17 Thread slebresne
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fea7d9a0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fea7d9a0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fea7d9a0

Branch: refs/heads/trunk
Commit: fea7d9a0311ad5d52696d8bfaf920acfd191be84
Parents: c183f55 f4037ed
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Oct 17 11:55:26 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Oct 17 11:55:26 2014 +0200

--
 CHANGES.txt  |  1 +
 .../cql3/statements/DeleteStatement.java | 19 +++
 .../cql3/statements/ModificationStatement.java   | 14 +-
 3 files changed, 33 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fea7d9a0/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fea7d9a0/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
--



[jira] [Commented] (CASSANDRA-8054) EXECUTE request with skipMetadata=false gets no metadata in response

2014-10-17 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174901#comment-14174901
 ] 

Sylvain Lebresne commented on CASSANDRA-8054:
-

I don't think that's an issue with 2.0 because there we create a new Metadata 
instance every time we construct a new ResultSet (so there is no sharing of 
anything except for the names, and those are never modified in 2.0).

 EXECUTE request with skipMetadata=false gets no metadata in response
 

 Key: CASSANDRA-8054
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8054
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Olivier Michallat
Assignee: Aleksey Yeschenko
 Fix For: 2.0.11, 2.1.1

 Attachments: 8054-2.1.txt, 8054-fix.txt, 8054-v2.txt


 This has been reported independently with the 
 [Java|https://datastax-oss.atlassian.net/browse/JAVA-482] and 
 [C++|https://datastax-oss.atlassian.net/browse/CPP-174] drivers.
 This happens under heavy load, where multiple client threads prepare and 
 execute statements in parallel. One of them sends an EXECUTE request with 
 skipMetadata=false, but the returned ROWS response has no metadata in it.
 A patch of {{Message.Dispatcher.channelRead0}} confirmed that the flag was 
 incorrectly set on the response:
 {code}
 logger.debug("Received: {}, v={}", request, connection.getVersion());
 boolean skipMetadataOnRequest = false;
 if (request instanceof ExecuteMessage) {
     ExecuteMessage execute = (ExecuteMessage) request;
     skipMetadataOnRequest = execute.options.skipMetadata();
 }
 response = request.execute(qstate);
 if (request instanceof ExecuteMessage) {
     Rows rows = (Rows) response;
     boolean skipMetadataOnResponse = rows.result.metadata.flags.contains(Flag.NO_METADATA);
     if (skipMetadataOnResponse != skipMetadataOnRequest) {
         logger.warn("Inconsistent skipMetadata on streamId {}, was {} in request but {} in response",
                     request.getStreamId(),
                     skipMetadataOnRequest,
                     skipMetadataOnResponse);
     }
 }
 {code}
 We observed the warning with (false, true) during our tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8054) EXECUTE request with skipMetadata=false gets no metadata in response

2014-10-17 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko resolved CASSANDRA-8054.
--
   Resolution: Fixed
Fix Version/s: (was: 2.0.11)
 Assignee: Sylvain Lebresne  (was: Aleksey Yeschenko)

True. Removing 2.0.11 fixver and resolving, then. A cleaner fix can wait so 
long as everything works.

 EXECUTE request with skipMetadata=false gets no metadata in response
 

 Key: CASSANDRA-8054
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8054
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Olivier Michallat
Assignee: Sylvain Lebresne
 Fix For: 2.1.1

 Attachments: 8054-2.1.txt, 8054-fix.txt, 8054-v2.txt


 This has been reported independently with the 
 [Java|https://datastax-oss.atlassian.net/browse/JAVA-482] and 
 [C++|https://datastax-oss.atlassian.net/browse/CPP-174] drivers.
 This happens under heavy load, where multiple client threads prepare and 
 execute statements in parallel. One of them sends an EXECUTE request with 
 skipMetadata=false, but the returned ROWS response has no metadata in it.
 A patch of {{Message.Dispatcher.channelRead0}} confirmed that the flag was 
 incorrectly set on the response:
 {code}
 logger.debug("Received: {}, v={}", request, connection.getVersion());
 boolean skipMetadataOnRequest = false;
 if (request instanceof ExecuteMessage) {
     ExecuteMessage execute = (ExecuteMessage) request;
     skipMetadataOnRequest = execute.options.skipMetadata();
 }
 response = request.execute(qstate);
 if (request instanceof ExecuteMessage) {
     Rows rows = (Rows) response;
     boolean skipMetadataOnResponse = rows.result.metadata.flags.contains(Flag.NO_METADATA);
     if (skipMetadataOnResponse != skipMetadataOnRequest) {
         logger.warn("Inconsistent skipMetadata on streamId {}, was {} in request but {} in response",
                     request.getStreamId(),
                     skipMetadataOnRequest,
                     skipMetadataOnResponse);
     }
 }
 {code}
 We observed the warning with (false, true) during our tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Git Push Summary

2014-10-17 Thread slebresne
Repository: cassandra
Updated Tags:  refs/tags/2.0.11-tentative [created] 3c8a2a766


git commit: Fix DynamicCompositeTypeTest

2014-10-17 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 f4037edbf -> 049ace4c1


Fix DynamicCompositeTypeTest


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/049ace4c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/049ace4c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/049ace4c

Branch: refs/heads/cassandra-2.1
Commit: 049ace4c1847d39af5724538476971cbccce3ea9
Parents: f4037ed
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Oct 17 13:08:09 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Oct 17 13:08:09 2014 +0200

--
 .../cassandra/db/marshal/DynamicCompositeTypeTest.java| 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/049ace4c/test/unit/org/apache/cassandra/db/marshal/DynamicCompositeTypeTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/db/marshal/DynamicCompositeTypeTest.java 
b/test/unit/org/apache/cassandra/db/marshal/DynamicCompositeTypeTest.java
index e248eae..e9c47a9 100644
--- a/test/unit/org/apache/cassandra/db/marshal/DynamicCompositeTypeTest.java
+++ b/test/unit/org/apache/cassandra/db/marshal/DynamicCompositeTypeTest.java
@@ -219,11 +219,11 @@ public class DynamicCompositeTypeTest extends SchemaLoader
 
         Iterator<Cell> iter = cf.getSortedColumns().iterator();
 
-        assert iter.next().name().equals(cname5);
-        assert iter.next().name().equals(cname4);
-        assert iter.next().name().equals(cname1); // null UUID < reversed value
-        assert iter.next().name().equals(cname3);
-        assert iter.next().name().equals(cname2);
+        assert iter.next().name().toByteBuffer().equals(cname5);
+        assert iter.next().name().toByteBuffer().equals(cname4);
+        assert iter.next().name().toByteBuffer().equals(cname1); // null UUID < reversed value
+        assert iter.next().name().toByteBuffer().equals(cname3);
+        assert iter.next().name().toByteBuffer().equals(cname2);
 }
 
 @Test



[jira] [Updated] (CASSANDRA-8131) Short-circuited query results from collection index query

2014-10-17 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-8131:

Assignee: Benjamin Lerer  (was: Sylvain Lebresne)

 Short-circuited query results from collection index query
 -

 Key: CASSANDRA-8131
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8131
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Debian Wheezy, Oracle JDK, Cassandra 2.1
Reporter: Catalin Alexandru Zamfir
Assignee: Benjamin Lerer
  Labels: collections, cql3, cqlsh, query, queryparser, triaged
 Fix For: 2.1.0


 After watching Jonathan's 2014 summit video, I wanted to give collection 
 indexes a try as they seem to be a fit for a search by key/values usage 
 pattern we have in our setup. Doing some test queries that I expect users 
 would do against the table, a short-circuit behavior came up:
 Here's the whole transcript:
 {noformat}
 CREATE TABLE by_sets (id int PRIMARY KEY, datakeys set<text>, datavars set<text>);
 CREATE INDEX by_sets_datakeys ON by_sets (datakeys);
 CREATE INDEX by_sets_datavars ON by_sets (datavars);
 INSERT INTO by_sets (id, datakeys, datavars) VALUES (1, {'a'}, {'b'});
 INSERT INTO by_sets (id, datakeys, datavars) VALUES (2, {'c'}, {'d'});
 INSERT INTO by_sets (id, datakeys, datavars) VALUES (3, {'e'}, {'f'});
 INSERT INTO by_sets (id, datakeys, datavars) VALUES (4, {'a'}, {'z'});
 SELECT * FROM by_sets;
  id | datakeys | datavars
 +--+--
   1 |{'a'} |{'b'}
   2 |{'c'} |{'d'}
   4 |{'a'} |{'z'}
   3 |{'e'} |{'f'}
 {noformat}
 We then tried this query which short-circuited:
 {noformat}
 SELECT * FROM by_sets WHERE datakeys CONTAINS 'a' AND datakeys CONTAINS 'c';
  id | datakeys | datavars
 +--+--
   1 |{'a'} |{'b'}
   4 |{'a'} |{'z'}
 (2 rows)
 {noformat}
 Instead of receiving 3 rows, which match the datakeys CONTAINS 'a' AND
 datakeys CONTAINS 'c', we only got the rows matching the first condition.
 Doing the same, but with CONTAINS 'c' first, ignores the second AND.
 {noformat}
 SELECT * FROM by_sets WHERE datakeys CONTAINS 'c' AND datakeys CONTAINS 'a' ;
  id | datakeys | datavars
 +--+--
   2 |{'c'} |{'d'}
 (1 rows)
 {noformat}
 Also, on a side-note, I have two indexes on both datakeys and datavars. But 
 when trying to run a query such as:
 {noformat}
 select * from by_sets WHERE datakeys CONTAINS 'a' AND datavars CONTAINS 'z';
 code=2200 [Invalid query] message="Cannot execute this query as it might
 involve data filtering and thus may have unpredictable performance.
 If you want to execute this query despite the performance unpredictability,
 use ALLOW FILTERING"
 {noformat}
 The second column, after AND (even if I reverse the order), requires an ALLOW
 FILTERING clause, yet the column is indexed, and an in-memory join of the
 primary keys of these sets on the coordinator could build up the result.
 Could anyone explain the short-circuit behavior?
 And the requirement for ALLOW FILTERING on a second indexed column?
 If they're not bugs but intended, they should be documented better, at least
 regarding their limitations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8131) Short-circuited query results from collection index query

2014-10-17 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174954#comment-14174954
 ] 

Sylvain Lebresne commented on CASSANDRA-8131:
-

I don't really remember which issue fixed that, but it appears to have been 
fixed since 2.1.0 given what [~mshuler] reports (you are wrong in assuming that 
{{ SELECT * FROM by_sets WHERE datakeys CONTAINS 'a' AND datakeys CONTAINS 
'c';}} should return 3 rows. It's an {{AND}}: you're asking for the rows whose 
{{datakeys}} contains both 'a' and 'c', and no row matches that).

That said, there is a small validation bug in that we shouldn't be allowing 
those queries (the ones with 2 {{CONTAINS}}) without {{ALLOW FILTERING}} since 
we do use server side filtering to handle those. [~blerer] can you have a look 
at why that is?

bq. requires an allow filtering

Any 2ndary index query that has more than one restriction will require 
{{ALLOW FILTERING}}, because server side we only ever query a 2ndary index with 
one expression and filter out results if there are more expressions. And that 
is exactly the definition of when {{ALLOW FILTERING}} is required.
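
To make that concrete, the multi-{{CONTAINS}} queries from the transcript above would be accepted in 
this form (a sketch of the expected syntax only, not output from any particular version):

{noformat}
SELECT * FROM by_sets WHERE datakeys CONTAINS 'a' AND datakeys CONTAINS 'c' ALLOW FILTERING;
SELECT * FROM by_sets WHERE datakeys CONTAINS 'a' AND datavars CONTAINS 'z' ALLOW FILTERING;
{noformat}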

 Short-circuited query results from collection index query
 -

 Key: CASSANDRA-8131
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8131
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Debian Wheezy, Oracle JDK, Cassandra 2.1
Reporter: Catalin Alexandru Zamfir
Assignee: Benjamin Lerer
  Labels: collections, cql3, cqlsh, query, queryparser, triaged
 Fix For: 2.1.0


 After watching Jonathan's 2014 summit video, I wanted to give collection 
 indexes a try as they seem to be a fit for a search by key/values usage 
 pattern we have in our setup. Doing some test queries that I expect users 
 would do against the table, a short-circuit behavior came up:
 Here's the whole transcript:
 {noformat}
 CREATE TABLE by_sets (id int PRIMARY KEY, datakeys set<text>, datavars set<text>);
 CREATE INDEX by_sets_datakeys ON by_sets (datakeys);
 CREATE INDEX by_sets_datavars ON by_sets (datavars);
 INSERT INTO by_sets (id, datakeys, datavars) VALUES (1, {'a'}, {'b'});
 INSERT INTO by_sets (id, datakeys, datavars) VALUES (2, {'c'}, {'d'});
 INSERT INTO by_sets (id, datakeys, datavars) VALUES (3, {'e'}, {'f'});
 INSERT INTO by_sets (id, datakeys, datavars) VALUES (4, {'a'}, {'z'});
 SELECT * FROM by_sets;
  id | datakeys | datavars
 +--+--
   1 |{'a'} |{'b'}
   2 |{'c'} |{'d'}
   4 |{'a'} |{'z'}
   3 |{'e'} |{'f'}
 {noformat}
 We then tried this query which short-circuited:
 {noformat}
 SELECT * FROM by_sets WHERE datakeys CONTAINS 'a' AND datakeys CONTAINS 'c';
  id | datakeys | datavars
 +--+--
   1 |{'a'} |{'b'}
   4 |{'a'} |{'z'}
 (2 rows)
 {noformat}
 Instead of receiving 3 rows, which match the datakeys CONTAINS 'a' AND
 datakeys CONTAINS 'c', we only got the rows matching the first condition.
 Doing the same, but with CONTAINS 'c' first, ignores the second AND.
 {noformat}
 SELECT * FROM by_sets WHERE datakeys CONTAINS 'c' AND datakeys CONTAINS 'a' ;
  id | datakeys | datavars
 +--+--
   2 |{'c'} |{'d'}
 (1 rows)
 {noformat}
 Also, on a side-note, I have two indexes on both datakeys and datavars. But 
 when trying to run a query such as:
 {noformat}
 select * from by_sets WHERE datakeys CONTAINS 'a' AND datavars CONTAINS 'z';
 code=2200 [Invalid query] message="Cannot execute this query as it might
 involve data filtering and thus may have unpredictable performance.
 If you want to execute this query despite the performance unpredictability,
 use ALLOW FILTERING"
 {noformat}
 The second column, after AND (even if I reverse the order), requires an ALLOW
 FILTERING clause, yet the column is indexed, and an in-memory join of the
 primary keys of these sets on the coordinator could build up the result.
 Could anyone explain the short-circuit behavior?
 And the requirement for ALLOW FILTERING on a second indexed column?
 If they're not bugs but intended, they should be documented better, at least
 regarding their limitations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/3] git commit: Update news/version for 2.1.1

2014-10-17 Thread slebresne
Update news/version for 2.1.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8fca88e3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8fca88e3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8fca88e3

Branch: refs/heads/cassandra-2.1
Commit: 8fca88e3068f7c1ec8aa36506643d2a044dd59e3
Parents: 049ace4
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Oct 17 13:45:08 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Oct 17 13:45:08 2014 +0200

--
 NEWS.txt | 5 +
 debian/changelog | 6 ++
 2 files changed, 11 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8fca88e3/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 50b9c7e..ecdb47e 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -16,6 +16,11 @@ using the provided 'sstableupgrade' tool.
 2.1.1
 =
 
+Upgrading
+-
+- Nothing specific to this release, but please see 2.1 if you are upgrading
+  from a previous version.
+
 New features
 
- Netty support for epoll on linux is now enabled.  If for some

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8fca88e3/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index f2ecceb..4e240eb 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+cassandra (2.1.1) unstable; urgency=medium
+
+  * New release
+
+ -- Sylvain Lebresne slebre...@apache.org  Fri, 17 Oct 2014 13:43:46 +0200
+
 cassandra (2.1.0) unstable; urgency=medium
 
   * New release



[1/3] git commit: Update versions for 2.0.11

2014-10-17 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 049ace4c1 -> b84d06f4c


Update versions for 2.0.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3c8a2a76
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3c8a2a76
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3c8a2a76

Branch: refs/heads/cassandra-2.1
Commit: 3c8a2a7660f156c41260019965d9e345d934eb01
Parents: 29a8b88
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Oct 17 13:02:29 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Oct 17 13:02:29 2014 +0200

--
 NEWS.txt | 8 
 build.xml| 2 +-
 debian/changelog | 6 ++
 3 files changed, 15 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3c8a2a76/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 102a87b..6f6b795 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -15,14 +15,22 @@ using the provided 'sstableupgrade' tool.
 
 2.0.11
 ==
+
+Upgrading
+-
+- Nothing specific to this release, but refer to previous entries if you
+  are upgrading from a previous version.
+
 New features
 
 - DateTieredCompactionStrategy added, optimized for time series data and 
groups
   data that is written closely in time (CASSANDRA-6602 for details). 
Consider
   this experimental for now.
 
+
 2.0.10
 ==
+
 New features
 
 - CqlPaginRecordReader and CqlPagingInputFormat have both been removed.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3c8a2a76/build.xml
--
diff --git a/build.xml b/build.xml
index 829c873..8c23407 100644
--- a/build.xml
+++ b/build.xml
@@ -25,7 +25,7 @@
     <property name="debuglevel" value="source,lines,vars"/>
 
     <!-- default version and SCM information -->
-    <property name="base.version" value="2.0.10"/>
+    <property name="base.version" value="2.0.11"/>
     <property name="scm.connection" value="scm:git://git.apache.org/cassandra.git"/>
     <property name="scm.developerConnection" value="scm:git://git.apache.org/cassandra.git"/>
     <property name="scm.url" value="http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=tree"/>

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3c8a2a76/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index e0b1eae..39d9520 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+cassandra (2.0.11) unstable; urgency=medium
+
+  * New release
+
+ -- Sylvain Lebresne slebre...@apache.org  Fri, 17 Oct 2014 13:01:02 +0200
+
 cassandra (2.0.10) unstable; urgency=medium
 
   * New release



[3/3] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-10-17 Thread slebresne
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
build.xml
debian/changelog


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b84d06f4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b84d06f4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b84d06f4

Branch: refs/heads/cassandra-2.1
Commit: b84d06f4c77032855e5b9e57c6132a5d2600a933
Parents: 8fca88e 3c8a2a7
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Oct 17 13:45:47 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Oct 17 13:45:47 2014 +0200

--
 NEWS.txt | 8 
 1 file changed, 8 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b84d06f4/NEWS.txt
--
diff --cc NEWS.txt
index ecdb47e,6f6b795..d3d7b76
--- a/NEWS.txt
+++ b/NEWS.txt
@@@ -13,89 -13,14 +13,95 @@@ restore snapshots created with the prev
  'sstableloader' tool. You can upgrade the file format of your snapshots
  using the provided 'sstableupgrade' tool.
  
 +2.1.1
 +=
 +
 +Upgrading
 +-
 +- Nothing specific to this release, but please see 2.1 if you are 
upgrading
 +  from a previous version.
 +
 +New features
 +
 +   - Netty support for epoll on linux is now enabled.  If for some
 + reason you want to disable it, pass the following system property
 + -Dcassandra.native.epoll.enabled=false
 +
 +2.1
 +===
 +
 +New features
 +
 +   - Default data and log locations have changed.  If not set in
 + cassandra.yaml, the data file directory, commitlog directory,
 + and saved caches directory will default to $CASSANDRA_HOME/data/data,
 + $CASSANDRA_HOME/data/commitlog, and $CASSANDRA_HOME/data/saved_caches,
 + respectively.  The log directory now defaults to $CASSANDRA_HOME/logs.
 + If not set, $CASSANDRA_HOME, defaults to the top-level directory of
 + the installation.
 + Note that this should only affect source checkouts and tarballs.
 + Deb and RPM packages will continue to use /var/lib/cassandra and
 + /var/log/cassandra in cassandra.yaml.
 +   - SSTable data directory name is slightly changed. Each directory will
 + have hex string appended after CF name, e.g.
 + ks/cf-5be396077b811e3a3ab9dc4b9ac088d/
 + This hex string part represents unique ColumnFamily ID.
 + Note that existing directories are used as is, so only newly created
 + directories after upgrade have new directory name format.
 +   - Saved key cache files also have ColumnFamily ID in their file name.
 +   - It is now possible to do incremental repairs, sstables that have been
 + repaired are marked with a timestamp and not included in the next
 + repair session. Use nodetool repair -par -inc to use this feature.
 + A tool to manually mark/unmark sstables as repaired is available in
 + tools/bin/sstablerepairedset. This is particularly important when
 + using LCS, or any data not repaired in your first incremental repair
 + will be put back in L0.
 +   - Bootstrapping now ensures that range movements are consistent,
 + meaning the data for the new node is taken from the node that is no 
 + longer responsible for that range of keys.
 + If you want the old behavior (due to a lost node perhaps)
 + you can set the following property 
(-Dcassandra.consistent.rangemovement=false)
 +   - It is now possible to use quoted identifiers in triggers' names. 
 + WARNING: if you previously used triggers with capital letters in their 
 + names, then you must quote them from now on.
 +   - Improved stress tool (http://goo.gl/OTNqiQ)
 +   - New incremental repair option (http://goo.gl/MjohJp, 
http://goo.gl/f8jSme)
 +   - Incremental replacement of compacted SSTables (http://goo.gl/JfDBGW)
 +   - The row cache can now cache only the head of partitions 
(http://goo.gl/6TJPH6)
 +   - Off-heap memtables (http://goo.gl/YT7znJ)
 +   - CQL improvements and additions: User-defined types, tuple types, 2ndary
 + indexing of collections, ... (http://goo.gl/kQl7GW)
 +
 +Upgrading
 +-
 +   - Rolling upgrades from anything pre-2.0.7 is not supported. Furthermore
 + pre-2.0 sstables are not supported. This means that before upgrading
 + a node on 2.1, this node must be started on 2.0 and
 + 'nodetool upgradesstables' must be run (and this even in the case
 + of not-rolling upgrades).
 +   - For size-tiered compaction users, Cassandra now defaults to ignoring
 + the coldest 5% of sstables.  This can be customized with the
 + cold_reads_to_omit compaction option; 0.0 omits nothing (the old
 + behavior) and 1.0 omits everything.
 +   - Multithreaded compaction has been removed.
 + 

Git Push Summary

2014-10-17 Thread slebresne
Repository: cassandra
Updated Tags:  refs/tags/2.1.1-tentative [created] b84d06f4c


[jira] [Updated] (CASSANDRA-8125) nodetool statusgossip doesn't exist

2014-10-17 Thread Jan Karlsson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Karlsson updated CASSANDRA-8125:

Attachment: 8125-2.1.txt

Uploaded a patch for 2.1

 nodetool statusgossip doesn't exist
 ---

 Key: CASSANDRA-8125
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8125
 Project: Cassandra
  Issue Type: Improvement
Reporter: Connor Warrington
Priority: Minor
  Labels: lhf
 Attachments: 8125-2.0.txt, 8125-2.1.txt


 nodetool supports different checks for status on thrift and for binary but 
 does not support a check for gossip. You can get this information from 
 nodetool info.
 The ones that exist are:
 nodetool statusbinary
 nodetool statusthrift
 It would be nice if the following existed:
 nodetool statusgossip
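 A rough sketch of what such a subcommand could look like in NodeTool (an illustration, not the
 attached patch; the gossip-state getter name on NodeProbe is assumed):
 {code}
 @Command(name = "statusgossip", description = "Status of gossip")
 public static class StatusGossip extends NodeToolCmd
 {
     @Override
     public void execute(NodeProbe probe)
     {
         // Assumed getter; 'nodetool info' reports the same gossip state.
         System.out.println(probe.isGossipRunning() ? "running" : "not running");
     }
 }
 {code}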



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8058) local consistency level during bootstrap (may cause a write timeout on each write request)

2014-10-17 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14175028#comment-14175028
 ] 

Aleksey Yeschenko commented on CASSANDRA-8058:
--

bq. About the first comment, concerning the assureSufficientLiveNodes, should I 
create a new issue ?

Sure. Make it a Minor one though.

 local consistency level during bootstrap (may cause a write timeout on each 
 write request)
 -

 Key: CASSANDRA-8058
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8058
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Nicolas DOUILLET
Assignee: Nicolas DOUILLET
 Fix For: 2.0.11, 2.1.1

 Attachments: 
 0001-during-boostrap-block-only-for-local-pending-endpoin.patch.txt, 
 0001-during-boostrap-block-only-for-local-pending-endpoint-v2.patch, 
 0001-during-boostrap-block-only-for-local-pending-endpoints-v2-1.patch


 Hi, 
 During bootstrap, for {{LOCAL_QUORUM}} and {{LOCAL_ONE}} consistencies, the 
 {{DatacenterWriteResponseHandler}} were waiting for pending remote endpoints.
 I think that's a regression, because it seems that it has been correctly 
 implemented in CASSANDRA-833, but removed later.
 It was specifically annoying in the case of {{RF=2}} and {{cl=LOCAL_QUORUM}}, 
 because during a bootstrap of a remote node, all requests ended in 
 {{WriteTimeout}}, because they were waiting for a response that would never 
 happen.
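 A rough worked example of the failure mode, assuming the usual quorum arithmetic:
 {noformat}
 RF=2 per DC, cl=LOCAL_QUORUM  =>  blockFor = 2/2 + 1 = 2 acks
 Counting a pending endpoint in a remote DC raises blockFor to 3, but the
 DatacenterWriteResponseHandler only counts responses from the local DC,
 so at most 2 ever arrive and every write times out.
 {noformat}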



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8131) Short-circuited query results from collection index query

2014-10-17 Thread Catalin Alexandru Zamfir (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14175030#comment-14175030
 ] 

Catalin Alexandru Zamfir commented on CASSANDRA-8131:
-

True. I was thrown off by CONTAINS, which I interpreted as a search, basically 
linking to searches (CONTAINS) in the same result set (at least in my mind). 
Cassandra does not support OR, which is what would be needed to reach the goal 
we're trying to achieve.

Instead, I've tried this, which I guess is already fixed (it should have 
returned one row):
{noformat}
insert into by_sets (id, datakeys, datavars) values (5, {'a', 'c'}, {'q'});
select * from by_sets WHERE datakeys CONTAINS 'a' AND datakeys CONTAINS 'c' ;

 id | datakeys   | datavars
++--
  5 | {'a', 'c'} |{'q'}
  1 |  {'a'} |{'b'}
  4 |  {'a'} |{'z'}

(3 rows)
{noformat}

 Short-circuited query results from collection index query
 -

 Key: CASSANDRA-8131
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8131
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Debian Wheezy, Oracle JDK, Cassandra 2.1
Reporter: Catalin Alexandru Zamfir
Assignee: Benjamin Lerer
  Labels: collections, cql3, cqlsh, query, queryparser, triaged
 Fix For: 2.1.0


 After watching Jonathan's 2014 summit video, I wanted to give collection 
 indexes a try as they seem to be a fit for a search by key/values usage 
 pattern we have in our setup. Doing some test queries that I expect users 
 would do against the table, a short-circuit behavior came up:
 Here's the whole transcript:
 {noformat}
 CREATE TABLE by_sets (id int PRIMARY KEY, datakeys set<text>, datavars set<text>);
 CREATE INDEX by_sets_datakeys ON by_sets (datakeys);
 CREATE INDEX by_sets_datavars ON by_sets (datavars);
 INSERT INTO by_sets (id, datakeys, datavars) VALUES (1, {'a'}, {'b'});
 INSERT INTO by_sets (id, datakeys, datavars) VALUES (2, {'c'}, {'d'});
 INSERT INTO by_sets (id, datakeys, datavars) VALUES (3, {'e'}, {'f'});
 INSERT INTO by_sets (id, datakeys, datavars) VALUES (4, {'a'}, {'z'});
 SELECT * FROM by_sets;
  id | datakeys | datavars
 +--+--
   1 |{'a'} |{'b'}
   2 |{'c'} |{'d'}
   4 |{'a'} |{'z'}
   3 |{'e'} |{'f'}
 {noformat}
 We then tried this query which short-circuited:
 {noformat}
 SELECT * FROM by_sets WHERE datakeys CONTAINS 'a' AND datakeys CONTAINS 'c';
  id | datakeys | datavars
 +--+--
   1 |{'a'} |{'b'}
   4 |{'a'} |{'z'}
 (2 rows)
 {noformat}
 Instead of receiving 3 rows, which match the datakeys CONTAINS 'a' AND
 datakeys CONTAINS 'c', we only got the rows matching the first condition.
 Doing the same, but with CONTAINS 'c' first, ignores the second AND.
 {noformat}
 SELECT * FROM by_sets WHERE datakeys CONTAINS 'c' AND datakeys CONTAINS 'a' ;
  id | datakeys | datavars
 +--+--
   2 |{'c'} |{'d'}
 (1 rows)
 {noformat}
 Also, on a side-note, I have two indexes on both datakeys and datavars. But 
 when trying to run a query such as:
 {noformat}
 select * from by_sets WHERE datakeys CONTAINS 'a' AND datavars CONTAINS 'z';
 code=2200 [Invalid query] message="Cannot execute this query as it might
 involve data filtering and thus may have unpredictable performance.
 If you want to execute this query despite the performance unpredictability,
 use ALLOW FILTERING"
 {noformat}
 The second column, after AND (even if I reverse the order), requires an ALLOW
 FILTERING clause, yet the column is indexed, and an in-memory join of the
 primary keys of these sets on the coordinator could build up the result.
 Could anyone explain the short-circuit behavior?
 And the requirement for ALLOW FILTERING on a second indexed column?
 If they're not bugs but intended, they should be documented better, at least
 regarding their limitations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8027) Assertion error in CompressionParameters

2014-10-17 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-8027:
--
Since Version: 2.0.11  (was: 2.1.1)

 Assertion error in CompressionParameters
 

 Key: CASSANDRA-8027
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8027
 Project: Cassandra
  Issue Type: Bug
Reporter: Carl Yeksigian
Assignee: T Jake Luciani
 Fix For: 2.1.1

 Attachments: 8027.txt


 Compacting a CF with a secondary index throws an assertion error while 
 opening readers for the secondary index components. It is trying to update 
 the CFMD to null because it could not find a CFMD which describes the 
 secondary index. The CompressionParameters are shared between the data and 
 the secondary indices.
 Was introduced in CASSANDRA-7978.
 {noformat}
 java.lang.AssertionError: null
   at 
 org.apache.cassandra.io.compress.CompressionParameters.setLiveMetadata(CompressionParameters.java:108)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getCompressionMetadata(SSTableReader.java:1131)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1878)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableScanner.<init>(SSTableScanner.java:67) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1664)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1676)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:275)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:324)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:151)
  ~[main/:na]
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
  ~[main/:na]
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:75)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:236)
  ~[main/:na]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_67]
   at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_67]
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_67]
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_67]
   at java.lang.Thread.run(Thread.java:745) [na:1.7.0_67]
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8027) Assertion error in CompressionParameters

2014-10-17 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-8027:
--
Fix Version/s: 2.0.11

 Assertion error in CompressionParameters
 

 Key: CASSANDRA-8027
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8027
 Project: Cassandra
  Issue Type: Bug
Reporter: Carl Yeksigian
Assignee: T Jake Luciani
 Fix For: 2.0.11, 2.1.1

 Attachments: 8027.txt


 Compacting a CF with a secondary index throws an assertion error while 
 opening readers for the secondary index components. It is trying to update 
 the CFMD to null because it could not find a CFMD which describes the 
 secondary index. The CompressionParameters are shared between the data and 
 the secondary indices.
 Was introduced in CASSANDRA-7978.
 {noformat}
 java.lang.AssertionError: null
   at 
 org.apache.cassandra.io.compress.CompressionParameters.setLiveMetadata(CompressionParameters.java:108)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getCompressionMetadata(SSTableReader.java:1131)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1878)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableScanner.<init>(SSTableScanner.java:67) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1664)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1676)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:275)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:324)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:151)
  ~[main/:na]
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
  ~[main/:na]
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:75)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:236)
  ~[main/:na]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_67]
   at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_67]
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_67]
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_67]
   at java.lang.Thread.run(Thread.java:745) [na:1.7.0_67]
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7801) A successful INSERT with CAS does not always store data in the DB after a DELETE

2014-10-17 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-7801:

Attachment: (was: 7801.txt)

 A successful INSERT with CAS does not always store data in the DB after a 
 DELETE
 

 Key: CASSANDRA-7801
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7801
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: PC with Windows 7 and on Linux installation.
 Have seen the fault on Cassandra 2.0.9 and Cassandra 2.1.0-rc5 
Reporter: Martin Fransson
Assignee: Sylvain Lebresne
 Fix For: 2.1.2

 Attachments: 7801-2.1.txt, cas.zip


 When I run a loop with CQL statements to DELETE, INSERT with CAS and then a
 GET, the INSERT operation is successful (Applied), but no data is stored in the
 database. I have checked the database manually after the test to verify that
 the DB is empty.
 {code}
 for (int i = 0; i < 1; ++i)
 {
     try
     {
         t.del();
         t.cas();
         t.select();
     }
     catch (Exception e)
     {
         System.err.println("i=" + i);
         e.printStackTrace();
         break;
     }
 }
 myCluster = Cluster.builder().addContactPoint("localhost").withPort(12742).build();
 mySession = myCluster.connect();
 mySession.execute("CREATE KEYSPACE IF NOT EXISTS castest WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };");
 mySession.execute("CREATE TABLE IF NOT EXISTS castest.users (userid text PRIMARY KEY, name text)");
 myInsert = mySession.prepare("INSERT INTO castest.users (userid, name) values ('user1', 'calle') IF NOT EXISTS");
 myDelete = mySession.prepare("DELETE FROM castest.users where userid='user1'");
 myGet = mySession.prepare("SELECT * FROM castest.users where userid='user1'");
 }
 {code}
 I can reproduce the fault with the attached program on a PC with Windows 7.
 You need a Cassandra instance running, and you need to set the port in the program.
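 For reference, one loop iteration corresponds to this CQL sequence (a sketch of the expected
 behaviour, not output captured from the attached program):
 {noformat}
 DELETE FROM castest.users WHERE userid='user1';
 INSERT INTO castest.users (userid, name) VALUES ('user1', 'calle') IF NOT EXISTS;   -- [applied] True
 SELECT * FROM castest.users WHERE userid='user1';   -- expected 1 row; intermittently returns 0 rows
 {noformat}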



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7801) A successful INSERT with CAS does not always store data in the DB after a DELETE

2014-10-17 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-7801:

Attachment: 7801-2.1.txt

 A successful INSERT with CAS does not always store data in the DB after a 
 DELETE
 

 Key: CASSANDRA-7801
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7801
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: PC with Windows 7 and on Linux installation.
 Have seen the fault on Cassandra 2.0.9 and Cassandra 2.1.0-rc5 
Reporter: Martin Fransson
Assignee: Sylvain Lebresne
 Fix For: 2.1.2

 Attachments: 7801-2.1.txt, cas.zip


 When I run a loop with CQL statements to DELETE, INSERT with CAS and then a
 GET, the INSERT operation is successful (Applied), but no data is stored in the
 database. I have checked the database manually after the test to verify that
 the DB is empty.
 {code}
 for (int i = 0; i < 1; ++i)
 {
     try
     {
         t.del();
         t.cas();
         t.select();
     }
     catch (Exception e)
     {
         System.err.println("i=" + i);
         e.printStackTrace();
         break;
     }
 }
 myCluster = Cluster.builder().addContactPoint("localhost").withPort(12742).build();
 mySession = myCluster.connect();
 mySession.execute("CREATE KEYSPACE IF NOT EXISTS castest WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };");
 mySession.execute("CREATE TABLE IF NOT EXISTS castest.users (userid text PRIMARY KEY, name text)");
 myInsert = mySession.prepare("INSERT INTO castest.users (userid, name) values ('user1', 'calle') IF NOT EXISTS");
 myDelete = mySession.prepare("DELETE FROM castest.users where userid='user1'");
 myGet = mySession.prepare("SELECT * FROM castest.users where userid='user1'");
 }
 {code}
 I can reproduce the fault with the attached program on a PC with windows 7.
 You need a cassandra runing and you need to set the port in the program.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7801) A successful INSERT with CAS does not always store data in the DB after a DELETE

2014-10-17 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14175053#comment-14175053
 ] 

Sylvain Lebresne commented on CASSANDRA-7801:
-

Attached rebased version against 2.1.

 A successful INSERT with CAS does not always store data in the DB after a 
 DELETE
 

 Key: CASSANDRA-7801
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7801
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: PC with Windows 7 and on Linux installation.
 Have seen the fault on Cassandra 2.0.9 and Cassandra 2.1.0-rc5 
Reporter: Martin Fransson
Assignee: Sylvain Lebresne
 Fix For: 2.1.2

 Attachments: 7801-2.1.txt, cas.zip


 When I run a loop with CQL statements that DELETE, INSERT with CAS and then 
 GET, the INSERT operation is successful (Applied), but no data is stored in the 
 database. I have checked the database manually after the test to verify that 
 the DB is empty.
 {code}
 for (int i = 0; i < 1; ++i)
 {
     try
     {
         t.del();
         t.cas();
         t.select();
     }
     catch (Exception e)
     {
         System.err.println("i=" + i);
         e.printStackTrace();
         break;
     }
 }
 myCluster = Cluster.builder().addContactPoint("localhost").withPort(12742).build();
 mySession = myCluster.connect();
 mySession.execute("CREATE KEYSPACE IF NOT EXISTS castest WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };");
 mySession.execute("CREATE TABLE IF NOT EXISTS castest.users (userid text PRIMARY KEY, name text)");
 myInsert = mySession.prepare("INSERT INTO castest.users (userid, name) values ('user1', 'calle') IF NOT EXISTS");
 myDelete = mySession.prepare("DELETE FROM castest.users where userid='user1'");
 myGet = mySession.prepare("SELECT * FROM castest.users where userid='user1'");
 }
 {code}
 I can reproduce the fault with the attached program on a PC with Windows 7.
 You need a Cassandra instance running and you need to set the port in the program.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-6492) Have server pick query page size by default

2014-10-17 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-6492:

Assignee: Benjamin Lerer  (was: Sylvain Lebresne)

 Have server pick query page size by default
 ---

 Key: CASSANDRA-6492
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6492
 Project: Cassandra
  Issue Type: New Feature
  Components: API
Reporter: Jonathan Ellis
Assignee: Benjamin Lerer
Priority: Minor

 We're almost always going to do a better job picking a page size based on 
 sstable stats, than users will guesstimating.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-5239) Asynchronous (non-blocking) StorageProxy

2014-10-17 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-5239:

Assignee: (was: Sylvain Lebresne)

 Asynchronous (non-blocking) StorageProxy
 

 Key: CASSANDRA-5239
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5239
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 2.0 beta 1
Reporter: Vijay
  Labels: performance
 Fix For: 3.0


 Problem statement: 
 Currently we have rpc_min_threads/rpc_max_threads and 
 native_transport_min_threads/native_transport_max_threads; all of the 
 threads in the TPE are blocking and take resources, yet they are mostly 
 sleeping, which increases context-switch costs.
 Details: 
 We should change the StorageProxy methods to take a callback that carries 
 the location where the results have to be written. When the response arrives, 
 the StorageProxy callback can write the results directly into the connection. 
 Timeouts can be handled in the same way.
 Fixing the native (Netty) path should be straightforward with some refactoring of 
 StorageProxy (currently it is one method call that sends the request and waits); 
 we need a callback instead.
 Fixing Thrift may be harder because Thrift calls the method and expects a 
 return value. We might need to write a custom codec on Netty for Thrift 
 support, which could potentially do callbacks (such a codec may be similar to 
 http://engineering.twitter.com/2011/04/twitter-search-is-now-3x-faster_1656.html
  but we don't know the details). Another option is to update Thrift to 
 support a callback.
 FYI, the motivation for this ticket comes from another project I am 
 working on with a similar proxy (blocking Netty transport); making it async 
 gave us a 2x throughput improvement.
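To make the proposed shape concrete, here is a minimal sketch of a callback-style read API; all names (ResultConsumer, AsyncReadSketch) are illustrative assumptions, not the actual StorageProxy interfaces.
{code}
// Illustrative only: the callback-shaped read path this ticket proposes,
// contrasted with today's blocking call. None of these names exist in StorageProxy.
public interface ResultConsumer
{
    void onResult(Object rows);   // write the result straight into the client connection
    void onTimeout();             // timeouts handled the same way, without a parked thread
}

public final class AsyncReadSketch
{
    // Blocking style (today): the calling thread waits until replicas answer.
    public Object readBlocking() throws InterruptedException
    {
        // send requests, then block on a future/condition until responses arrive
        return null;
    }

    // Callback style (proposed): register where the result must be written and return immediately.
    public void readAsync(ResultConsumer consumer)
    {
        // send requests; when responses arrive, an I/O thread calls consumer.onResult(...),
        // or consumer.onTimeout() if the deadline passes
    }
}
{code}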



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-6592) IllegalArgumentException when Preparing Statements

2014-10-17 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-6592.
-
Resolution: Fixed

As Jonathan said, please open a separate ticket with details on your exact 
problem, since a patch has already been committed for this.

 IllegalArgumentException when Preparing Statements
 --

 Key: CASSANDRA-6592
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6592
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tyler Hobbs
Assignee: Sylvain Lebresne
Priority: Critical
 Fix For: 2.0.5, 1.2.14

 Attachments: 6592-2.0.txt


 When preparing a lot of statements with the Python native driver, I 
 occasionally get an error response corresponding to the following stack trace 
 in the Cassandra logs:
 {noformat}
 ERROR [Native-Transport-Requests:126] 2014-01-11 13:58:05,503 
 ErrorMessage.java (line 210) Unexpected exception during request
 java.lang.IllegalArgumentException
 at 
 com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap.checkArgument(ConcurrentLinkedHashMap.java:259)
 at 
 com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$BoundedEntryWeigher.weightOf(ConcurrentLinkedHashMap.java:1448)
 at 
 com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap.put(ConcurrentLinkedHashMap.java:764)
 at 
 com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap.put(ConcurrentLinkedHashMap.java:743)
 at 
 org.apache.cassandra.cql3.QueryProcessor.storePreparedStatement(QueryProcessor.java:255)
 at 
 org.apache.cassandra.cql3.QueryProcessor.prepare(QueryProcessor.java:221)
 at 
 org.apache.cassandra.transport.messages.PrepareMessage.execute(PrepareMessage.java:77)
 at 
 org.apache.cassandra.transport.Message$Dispatcher.messageReceived(Message.java:287)
 at 
 org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
 at 
 org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
 at 
 org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
 at 
 org.jboss.netty.handler.execution.ChannelUpstreamEventRunnable.doRun(ChannelUpstreamEventRunnable.java:43)
 at 
 org.jboss.netty.handler.execution.ChannelEventRunnable.run(ChannelEventRunnable.java:67)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
 at java.lang.Thread.run(Thread.java:662)
 {noformat}
 Looking at the CLHM source, this means we're giving the statement a weight 
 that's less than 1.  I'll also note that these errors frequently happen in 
 clumps of 2 or 3 at a time.
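A minimal sketch of the kind of fix the stack trace implies: never hand the cache a weight below 1. The weigher interface below is a stand-in I made up for the sketch, not the actual CLHM type.
{code}
// Stand-in for the cache's weigher: clamp weights so an "empty" prepared statement
// still weighs at least 1, which is what ConcurrentLinkedHashMap requires.
interface StatementWeigher<V>
{
    int weightOf(V value);
}

final class MinWeightOneWeigher<V> implements StatementWeigher<V>
{
    private final StatementWeigher<V> delegate;

    MinWeightOneWeigher(StatementWeigher<V> delegate)
    {
        this.delegate = delegate;
    }

    @Override
    public int weightOf(V value)
    {
        return Math.max(1, delegate.weightOf(value));   // never below 1
    }
}
{code}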



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7538) Truncate of a CF should also delete Paxos CF

2014-10-17 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-7538:

Assignee: Aleksey Yeschenko  (was: Sylvain Lebresne)

 Truncate of a CF should also delete Paxos CF
 

 Key: CASSANDRA-7538
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7538
 Project: Cassandra
  Issue Type: Bug
Reporter: sankalp kohli
Assignee: Aleksey Yeschenko
Priority: Minor

 We don't delete data from Paxos CF during truncate. This will cause data to 
 come back in the next CAS round for incomplete commits. 
 Also I am not sure whether we already do this but should we also not truncate 
 hints for that CF. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-8027) Assertion error in CompressionParameters

2014-10-17 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani reopened CASSANDRA-8027:
---

Forgot this was affecting 2.0 as well. Patch attached with v2.0 test

 Assertion error in CompressionParameters
 

 Key: CASSANDRA-8027
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8027
 Project: Cassandra
  Issue Type: Bug
Reporter: Carl Yeksigian
Assignee: T Jake Luciani
 Fix For: 2.0.11, 2.1.1

 Attachments: 8027.txt


 Compacting a CF with a secondary index throws an assertion error while 
 opening readers for the secondary index components. It is trying to update 
 the CFMD to null because it could not find a CFMD which describes the 
 secondary index. The CompressionParameters are shared between the data and 
 the secondary indices.
 Was introduced in CASSANDRA-7978.
 {noformat}
 java.lang.AssertionError: null
   at 
 org.apache.cassandra.io.compress.CompressionParameters.setLiveMetadata(CompressionParameters.java:108)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getCompressionMetadata(SSTableReader.java:1131)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1878)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableScanner.init(SSTableScanner.java:67) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1664)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1676)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:275)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:324)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:151)
  ~[main/:na]
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
  ~[main/:na]
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:75)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:236)
  ~[main/:na]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_67]
   at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_67]
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_67]
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_67]
   at java.lang.Thread.run(Thread.java:745) [na:1.7.0_67]
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8027) Assertion error in CompressionParameters

2014-10-17 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-8027:
--
Attachment: 8027-2.0.txt

 Assertion error in CompressionParameters
 

 Key: CASSANDRA-8027
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8027
 Project: Cassandra
  Issue Type: Bug
Reporter: Carl Yeksigian
Assignee: T Jake Luciani
 Fix For: 2.0.11, 2.1.1

 Attachments: 8027-2.0.txt, 8027.txt


 Compacting a CF with a secondary index throws an assertion error while 
 opening readers for the secondary index components. It is trying to update 
 the CFMD to null because it could not find a CFMD which describes the 
 secondary index. The CompressionParameters are shared between the data and 
 the secondary indices.
 Was introduced in CASSANDRA-7978.
 {noformat}
 java.lang.AssertionError: null
   at 
 org.apache.cassandra.io.compress.CompressionParameters.setLiveMetadata(CompressionParameters.java:108)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getCompressionMetadata(SSTableReader.java:1131)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1878)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableScanner.init(SSTableScanner.java:67) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1664)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1676)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:275)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:324)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:151)
  ~[main/:na]
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
  ~[main/:na]
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:75)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:236)
  ~[main/:na]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_67]
   at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_67]
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_67]
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_67]
   at java.lang.Thread.run(Thread.java:745) [na:1.7.0_67]
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7886) TombstoneOverwhelmingException should not wait for timeout

2014-10-17 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-7886:

Fix Version/s: (was: 2.1.2)
   3.0

 TombstoneOverwhelmingException should not wait for timeout
 --

 Key: CASSANDRA-7886
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7886
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: Tested with Cassandra 2.0.8
Reporter: Christian Spriegel
Assignee: Christian Spriegel
Priority: Minor
 Fix For: 3.0

 Attachments: 7886_v1.txt


 *Issue*
 When TombstoneOverwhelmingExceptions occur in queries, the query is simply 
 dropped on every data node, but no response is sent back to the coordinator. 
 Instead the coordinator waits for the specified read_request_timeout_in_ms.
 On the application side this can cause memory issues, since the application 
 is waiting for the timeout interval for every request. Therefore, if our 
 application runs into TombstoneOverwhelmingExceptions, then (sooner or later) 
 our entire application cluster goes down :-(
 *Proposed solution*
 I think the data nodes should send an error message to the coordinator when 
 they run into a TombstoneOverwhelmingException. Then the coordinator does not 
 have to wait for the timeout interval.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7886) TombstoneOverwhelmingException should not wait for timeout

2014-10-17 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14175064#comment-14175064
 ] 

Sylvain Lebresne commented on CASSANDRA-7886:
-

bq. The Exceptions I added were internal ones.

What I meant is that you added a new error code. This can't be done in existing 
protocol versions as it will break clients. We'd need to only return this code 
in the upcoming v4 protocol (CASSANDRA-8043), document that change in the 
protocol v4 spec, and return an existing code for other versions of the 
protocol. And for older versions of the protocol and Thrift, I think we should 
return a timeout exception, not an unavailable one, since a timeout is what we 
return now.
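Roughly what that version gating could look like; the enum and return values below are illustrative placeholders, not the actual transport classes or error codes.
{code}
// Illustrative only: surface the new error code on v4 and later,
// fall back to the existing timeout error for older protocol versions and Thrift.
enum NativeProtocolVersion { V2, V3, V4 }

final class TombstoneErrorMapping
{
    static String errorCodeFor(NativeProtocolVersion version)
    {
        if (version.compareTo(NativeProtocolVersion.V4) >= 0)
            return "TOMBSTONE_FAILURE";   // new code, documented only in the v4 spec
        return "READ_TIMEOUT";            // what clients on older protocols already expect
    }
}
{code}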

 TombstoneOverwhelmingException should not wait for timeout
 --

 Key: CASSANDRA-7886
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7886
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: Tested with Cassandra 2.0.8
Reporter: Christian Spriegel
Assignee: Christian Spriegel
Priority: Minor
 Fix For: 3.0

 Attachments: 7886_v1.txt


 *Issue*
 When TombstoneOverwhelmingExceptions occur in queries, the query is simply 
 dropped on every data node, but no response is sent back to the coordinator. 
 Instead the coordinator waits for the specified read_request_timeout_in_ms.
 On the application side this can cause memory issues, since the application 
 is waiting for the timeout interval for every request. Therefore, if our 
 application runs into TombstoneOverwhelmingExceptions, then (sooner or later) 
 our entire application cluster goes down :-(
 *Proposed solution*
 I think the data nodes should send an error message to the coordinator when 
 they run into a TombstoneOverwhelmingException. Then the coordinator does not 
 have to wait for the timeout interval.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-8124) Stopping a node during compaction can make already written files stay around

2014-10-17 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson reassigned CASSANDRA-8124:
--

Assignee: Marcus Eriksson

 Stopping a node during compaction can make already written files stay around
 

 Key: CASSANDRA-8124
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8124
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
  Labels: triaged
 Fix For: 2.1.2


 In leveled compaction we generally create many files during compaction. In 
 2.0 we left the ones we had written as -tmp- files; in 2.1 we close and open 
 the readers, removing the -tmp- markers.
 This means that any ongoing compaction will leave the resulting files around 
 if we restart. Note that stopping the compaction will cause an exception, and 
 that makes us call abort() on the SSTableRewriter, which removes the files.
 A fix could be to keep the -tmp- marker and make -tmplink- files until 
 we are actually done with the compaction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8090) NullPointerException when using prepared statements

2014-10-17 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14175089#comment-14175089
 ] 

Sylvain Lebresne commented on CASSANDRA-8090:
-

bq. I tried both approaches but finally settled on the second one as it made 
the code easier to understand.

Could you try to explain why the first one was less easy to understand? A 
priori I'm not a huge fan of copying selectors every time (and of all those 
factories) and I think I would have naturally gone towards some per-query state 
that would be passed to selectors.
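For illustration, the per-query-state alternative mentioned above could look something like this; the types are invented for the sketch and are not the actual cql3 Selection/Selector classes.
{code}
// Illustrative only: the prepared Selection stays immutable and stateless;
// a fresh state object is created per execution and threaded through the selectors.
final class ExecutionState
{
    Object accumulator;   // whatever a given selector needs to build the current row
}

interface StatelessSelector
{
    void addInput(Object columnValue, ExecutionState state);   // state is per execution, never shared
    Object getOutput(ExecutionState state);
}
{code}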

 NullPointerException when using prepared statements
 ---

 Key: CASSANDRA-8090
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8090
 Project: Cassandra
  Issue Type: Bug
Reporter: Carl Yeksigian
Assignee: Benjamin Lerer
 Fix For: 3.0


 Due to the changes in CASSANDRA-4914, using a prepared statement from 
 multiple threads leads to a race condition where the simple selection may be 
 reset from a different thread, causing the following NPE:
 {noformat}
 java.lang.NullPointerException: null
   at org.apache.cassandra.cql3.ResultSet.addRow(ResultSet.java:63) 
 ~[main/:na]
   at 
 org.apache.cassandra.cql3.statements.Selection$ResultSetBuilder.build(Selection.java:372)
  ~[main/:na]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:1120)
  ~[main/:na]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:283)
  ~[main/:na]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:260)
  ~[main/:na]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:213)
  ~[main/:na]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:63)
  ~[main/:na]
   at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:226)
  ~[main/:na]
   at 
 org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:481)
  ~[main/:na]
   at 
 org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:133)
  ~[main/:na]
   at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:438)
  [main/:na]
   at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:334)
  [main/:na]
   at 
 io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 [na:1.7.0_67]
   at 
 org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:163)
  [main/:na]
   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:103) 
 [main/:na]
   at java.lang.Thread.run(Thread.java:745) [na:1.7.0_67]
 {noformat}
 Reproduced this using the stress tool:
 {noformat}
  ./tools/bin/cassandra-stress user profile=tools/cqlstress-example.yaml 
 ops\(insert=1,simple1=1\)
 {noformat}
 You'll need to change the {noformat}select:{noformat} line to be /1000 to 
 prevent the illegal query exceptions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7886) TombstoneOverwhelmingException should not wait for timeout

2014-10-17 Thread Christian Spriegel (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14175093#comment-14175093
 ] 

Christian Spriegel commented on CASSANDRA-7886:
---

[~slebresne]: I am not sure if we are talking about the same thing :-)

I am pretty sure that I was using the standard CQL client in my test. It showed 
me the new error code I added.

My new exceptions extend RequestExecutionException, which I assume the CQL 
server side is able to handle.

 TombstoneOverwhelmingException should not wait for timeout
 --

 Key: CASSANDRA-7886
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7886
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: Tested with Cassandra 2.0.8
Reporter: Christian Spriegel
Assignee: Christian Spriegel
Priority: Minor
 Fix For: 3.0

 Attachments: 7886_v1.txt


 *Issue*
 When TombstoneOverwhelmingExceptions occur in queries, the query is simply 
 dropped on every data node, but no response is sent back to the coordinator. 
 Instead the coordinator waits for the specified read_request_timeout_in_ms.
 On the application side this can cause memory issues, since the application 
 is waiting for the timeout interval for every request. Therefore, if our 
 application runs into TombstoneOverwhelmingExceptions, then (sooner or later) 
 our entire application cluster goes down :-(
 *Proposed solution*
 I think the data nodes should send an error message to the coordinator when 
 they run into a TombstoneOverwhelmingException. Then the coordinator does not 
 have to wait for the timeout interval.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7713) CommitLogTest failure causes cascading unit test failures

2014-10-17 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14175097#comment-14175097
 ] 

Joshua McKenzie commented on CASSANDRA-7713:


CASSANDRA-7927 removes the logic that sets the directory read-only; instead it 
creates the exception and pushes it into the handler to let it flow through and 
shut things down, so this would no longer be an issue.

 CommitLogTest failure causes cascading unit test failures
 -

 Key: CASSANDRA-7713
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7713
 Project: Cassandra
  Issue Type: Test
Reporter: Michael Shuler
Assignee: Bogdan Kanivets
 Fix For: 2.0.12

 Attachments: CommitLogTest.system.log.txt


 When CommitLogTest.testCommitFailurePolicy_stop fails or times out, 
 {{commitDir.setWritable(true)}} is never reached, so the 
 build/test/cassandra/commitlog directory is left without write permissions, 
 causing cascading failure of all subsequent tests.
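A minimal sketch of the cleanup the test needs (the helper name is invented for the sketch): restore write permission in a finally block, or a JUnit @After method, so a failure or timeout can no longer leave the commitlog directory read-only for later tests.
{code}
import java.io.File;

final class CommitDirGuard
{
    // Run a test body against a read-only commitlog directory and always restore permissions.
    static void withReadOnlyCommitDir(File commitDir, Runnable testBody)
    {
        commitDir.setWritable(false);
        try
        {
            testBody.run();
        }
        finally
        {
            commitDir.setWritable(true);   // reached even when the test fails or times out
        }
    }
}
{code}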



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8132) Save or stream hints to a safe place in node replacement

2014-10-17 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14175121#comment-14175121
 ] 

Brandon Williams commented on CASSANDRA-8132:
-

I see, you want the decom behavior, but only for hints.

 Save or stream hints to a safe place in node replacement
 

 Key: CASSANDRA-8132
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8132
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Minh Do
Assignee: Minh Do
 Fix For: 2.1.2


 Often, we need to replace a node with a new instance in a cloud environment 
 where all nodes are still alive. To be safe and avoid losing data, we 
 usually make sure all hints are gone before we do this operation.
 Replacement means we just want to shut down the C* process on a node and bring 
 up another instance to take over that node's token.
 However, if the node to be replaced has a lot of stored hints, its 
 HintedHandoffManager seems very slow to send the hints to other nodes.  In our 
 case, we tried to replace a node and had to wait several days before its 
 stored hints were cleared out.  As mentioned above, we need all hints on this 
 node to clear out before we can terminate it and replace it with a new 
 instance/machine.
 Since this is not a decommission, I am proposing that we have the same 
 hints-streaming mechanism as in the decommission code.  Furthermore, there 
 needs to be a nodetool command to trigger this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8090) NullPointerException when using prepared statements

2014-10-17 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14175172#comment-14175172
 ] 

Benjamin Lerer commented on CASSANDRA-8090:
---

Because functions can be nested within functions, you end up with a tree 
structure of Selectors where each one of them might or might not need to 
store some state.
The first approach to solving that problem is to use a Map to store the state 
and to pass that map to each selector.
The problem I see with that approach is that we end up doing a lot of map 
lookups (e.g. with 5 columns and 10,000 rows we end up doing 50,000 map 
lookups). To avoid that performance cost I tried to have a state container 
that is somehow iterable in selector order, but as the selectors have a tree 
structure the result looked a bit hacky and always involved a cast each time 
we retrieved the state (which I found quite ugly).
As we were in any case forced to create about the same number of state objects 
as selectors, I chose to keep the state and the method together and to create 
a new set of selectors each time, but for that I needed a factory.
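A minimal sketch of the factory shape described above (all names invented for the sketch): the prepared statement keeps immutable factories, and every execution gets fresh selector instances that own their own state, avoiding both cross-thread sharing and per-row map lookups.
{code}
// Illustrative only: one factory per selection clause, one selector instance per execution.
interface RowSelector
{
    void addInput(Object columnValue);
    Object getOutput();
}

interface RowSelectorFactory
{
    RowSelector newInstance();   // called once per query execution
}

final class SumSelectorFactory implements RowSelectorFactory
{
    @Override
    public RowSelector newInstance()
    {
        return new RowSelector()
        {
            private long sum;   // per-execution state, invisible to other threads

            public void addInput(Object columnValue) { sum += ((Number) columnValue).longValue(); }
            public Object getOutput() { return sum; }
        };
    }
}
{code}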

 NullPointerException when using prepared statements
 ---

 Key: CASSANDRA-8090
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8090
 Project: Cassandra
  Issue Type: Bug
Reporter: Carl Yeksigian
Assignee: Benjamin Lerer
 Fix For: 3.0


 Due to the changes in CASSANDRA-4914, using a prepared statement from 
 multiple threads leads to a race condition where the simple selection may be 
 reset from a different thread, causing the following NPE:
 {noformat}
 java.lang.NullPointerException: null
   at org.apache.cassandra.cql3.ResultSet.addRow(ResultSet.java:63) 
 ~[main/:na]
   at 
 org.apache.cassandra.cql3.statements.Selection$ResultSetBuilder.build(Selection.java:372)
  ~[main/:na]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:1120)
  ~[main/:na]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:283)
  ~[main/:na]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:260)
  ~[main/:na]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:213)
  ~[main/:na]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:63)
  ~[main/:na]
   at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:226)
  ~[main/:na]
   at 
 org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:481)
  ~[main/:na]
   at 
 org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:133)
  ~[main/:na]
   at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:438)
  [main/:na]
   at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:334)
  [main/:na]
   at 
 io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 [na:1.7.0_67]
   at 
 org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:163)
  [main/:na]
   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:103) 
 [main/:na]
   at java.lang.Thread.run(Thread.java:745) [na:1.7.0_67]
 {noformat}
 Reproduced this using the stress tool:
 {noformat}
  ./tools/bin/cassandra-stress user profile=tools/cqlstress-example.yaml 
 ops\(insert=1,simple1=1\)
 {noformat}
 You'll need to change the {noformat}select:{noformat} line to be /1000 to 
 prevent the illegal query exceptions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6226) Slow query log

2014-10-17 Thread Jon Haddad (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14175171#comment-14175171
 ] 

Jon Haddad commented on CASSANDRA-6226:
---

I agree that a slow query log would be very useful.  As an operator, tracing is 
just a guess.  What I personally need is a log where I can say slow_query_time 
= 3s.  Tracing just generates a lot of noise that I have to wade through.  
Additionally, tracing can only identify a general query pattern that is being 
non-performant, rather than an outlier (a partition with a ton of tombstones).

On the mysql side, I've used tools like mysqldumpslow to aggregate slow queries 
and do analysis of my system.  From an operational perspective this would help 
give feedback to developers.
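As a rough illustration of the behaviour being asked for (the threshold name and wiring are invented for the sketch, not an existing Cassandra option), the coordinator would only log statements whose execution time crosses a configured threshold:
{code}
import java.util.concurrent.Callable;
import java.util.concurrent.TimeUnit;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class SlowQueryLog
{
    private static final Logger logger = LoggerFactory.getLogger(SlowQueryLog.class);
    private static final long SLOW_QUERY_THRESHOLD_MS = 3000;   // e.g. "slow_query_time = 3s"

    static <T> T timed(String query, Callable<T> execution) throws Exception
    {
        long start = System.nanoTime();
        try
        {
            return execution.call();
        }
        finally
        {
            long elapsedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
            if (elapsedMs >= SLOW_QUERY_THRESHOLD_MS)
                logger.warn("Slow query ({} ms): {}", elapsedMs, query);
        }
    }
}
{code}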

 Slow query log
 --

 Key: CASSANDRA-6226
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6226
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: sankalp kohli
Priority: Minor

 We need a slow query log for Cassandra similar to the one in MySQL. 
 Tracing does not work because you don't want to enable it for all 
 requests; you want to catch the slow queries. 
 We already have a JIRA to display queries that go over a lot of 
 tombstones. But a query can also be slow because it returns a lot of data. 
 We can store all the slow queries for a day in a system table. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-6226) Slow query log

2014-10-17 Thread Jon Haddad (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14175171#comment-14175171
 ] 

Jon Haddad edited comment on CASSANDRA-6226 at 10/17/14 3:49 PM:
-

I agree that a slow query log would be very useful.  What I personally need is 
a log where I can say slow_query_time = 3s.  Tracing just generates a lot of 
noise that I have to wade through.  Additionally, tracing can only identify a 
general query pattern that is being non-performant, rather than an outlier (a 
partition with a ton of tombstones).

On the mysql side, I've used tools like mysqldumpslow to aggregate slow queries 
and do analysis of my system.  From an operational perspective this would help 
give feedback to developers.


was (Author: rustyrazorblade):
I agree that a slow query log would be very useful.  As an operator, tracing is 
just a guess.  What I personally need is a log where I can say slow_query_time 
= 3s.  Tracing just generates a lot of noise that I have to wade through.  
Additionally, tracing can only identify a general query pattern that is being 
non-performant, rather than an outlier (a partition with a ton of tombstones).

On the mysql side, I've used tools like mysqldumpslow to aggregate slow queries 
and do analysis of my system.  From an operational perspective this would help 
give feedback to developers.

 Slow query log
 --

 Key: CASSANDRA-6226
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6226
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: sankalp kohli
Priority: Minor

 We need a slow query log for Cassandra similar to the one in MySQL. 
 Tracing does not work because you don't want to enable it for all 
 requests; you want to catch the slow queries. 
 We already have a JIRA to display queries that go over a lot of 
 tombstones. But a query can also be slow because it returns a lot of data. 
 We can store all the slow queries for a day in a system table. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8084) GossipFilePropertySnitch and EC2MultiRegionSnitch when used in AWS/GCE clusters doesnt use the PRIVATE IPS for Intra-DC communications - When running nodetool repair

2014-10-17 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-8084:
---
 Reviewer: Joshua McKenzie
Reproduced In: 2.0.10, 2.0.8  (was: 2.0.8, 2.0.10)

 GossipFilePropertySnitch and EC2MultiRegionSnitch when used in AWS/GCE 
 clusters doesnt use the PRIVATE IPS for Intra-DC communications - When 
 running nodetool repair
 -

 Key: CASSANDRA-8084
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8084
 Project: Cassandra
  Issue Type: Bug
  Components: Config
 Environment: Tested this in GCE and AWS clusters. Created multi 
 region and multi dc cluster once in GCE and once in AWS and ran into the same 
 problem. 
 DISTRIB_ID=Ubuntu
 DISTRIB_RELEASE=12.04
 DISTRIB_CODENAME=precise
 DISTRIB_DESCRIPTION=Ubuntu 12.04.3 LTS
 NAME=Ubuntu
 VERSION=12.04.3 LTS, Precise Pangolin
 ID=ubuntu
 ID_LIKE=debian
 PRETTY_NAME=Ubuntu precise (12.04.3 LTS)
 VERSION_ID=12.04
 Tried to install Apache Cassandra version ReleaseVersion: 2.0.10 and also 
 latest DSE version which is 4.5 and which corresponds to 2.0.8.39.
Reporter: Jana
Assignee: Yuki Morishita
  Labels: features
 Fix For: 2.0.12

 Attachments: 8084-2.0-v2.txt, 8084-2.0-v3.txt, 8084-2.0.txt


 Neither of these snitches (GossipFilePropertySnitch and EC2MultiRegionSnitch) 
 used the PRIVATE IPs for communication between INTRA-DC nodes in my 
 multi-region, multi-DC cluster in the cloud (on both AWS and GCE) when I ran 
 nodetool repair -local. It works fine during regular reads.
  Here are the various cluster flavors I tried that failed: 
 AWS + Multi-REGION + Multi-DC + GossipPropertyFileSnitch + (prefer_local=true) 
 in the cassandra-rackdc.properties file. 
 AWS + Multi-REGION + Multi-DC + EC2MultiRegionSnitch + (prefer_local=true) in 
 the cassandra-rackdc.properties file. 
 GCE + Multi-REGION + Multi-DC + GossipPropertyFileSnitch + (prefer_local=true) 
 in the cassandra-rackdc.properties file. 
 GCE + Multi-REGION + Multi-DC + EC2MultiRegionSnitch + (prefer_local=true) in 
 the cassandra-rackdc.properties file. 
 With the above setup I expect all of my nodes in a given DC to communicate via 
 private IPs, since the cloud providers don't charge us for traffic on the 
 private IPs but do charge for the public IPs.
 They can still use PUBLIC IPs for INTER-DC communication, which is working as 
 expected. 
 Here is a snippet from my log files when I ran nodetool repair -local. 
 Node responding to the node running repair: 
 INFO [AntiEntropyStage:1] 2014-10-08 14:47:51,628 Validator.java (line 254) 
 [repair #1439f290-4efa-11e4-bf3a-df845ecf54f8] Sending completed merkle tree 
 to /54.172.118.222 for system_traces/sessions
  INFO [AntiEntropyStage:1] 2014-10-08 14:47:51,741 Validator.java (line 254) 
 [repair #1439f290-4efa-11e4-bf3a-df845ecf54f8] Sending completed merkle tree 
 to /54.172.118.222 for system_traces/events
 Node running repair: 
 INFO [AntiEntropyStage:1] 2014-10-08 14:47:51,927 RepairSession.java (line 
 166) [repair #1439f290-4efa-11e4-bf3a-df845ecf54f8] Received merkle tree for 
 events from /54.172.118.222
 Note: the IPs it is communicating with are all PUBLIC IPs; it should have used 
 the PRIVATE IPs starting with 172.x.x.x
 YAML file values: 
 The listen address is set to: PRIVATE IP
 The broadcast address is set to: PUBLIC IP
 The SEEDS address is set to: PUBLIC IPs from both DCs
 The SNITCHES tried: GPFS and EC2MultiRegionSnitch
 RACK-DC: prefer_local set to true. 
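For reference, prefer_local is read from cassandra-rackdc.properties; a typical file looks roughly like this (the dc/rack values are placeholders):
{noformat}
# cassandra-rackdc.properties
dc=us-east
rack=rack1
# ask the snitch to use the node's private (listen) address for intra-DC traffic
prefer_local=true
{noformat}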



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8134) cassandra crashes sporadically on windows

2014-10-17 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14175203#comment-14175203
 ] 

Philip Thompson commented on CASSANDRA-8134:


You marked the Since version as 1.2.2. What Cassandra version are you running 
right now and having these issues with?

 cassandra crashes sporadically on windows
 -

 Key: CASSANDRA-8134
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8134
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: Windows Server 2012 R2 , 64 bit Build 9600
 CPU:total 2 (2 cores per cpu, 1 threads per core) family 6 model 37 stepping 
 1, cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, sse4.1, sse4.2, popcnt, aes, 
 tsc, tscinvbit
 Memory: 4k page, physical 8388148k(3802204k free), swap 8388148k(4088948k 
 free)
 vm_info: Java HotSpot(TM) 64-Bit Server VM (24.60-b09) for windows-amd64 JRE 
 (1.7.0_60-b19), built on May 7 2014 12:55:18 by java_re with unknown MS 
 VC++:1600
Reporter: Stefan Gusenbauer
 Attachments: hs_err_pid1180.log, hs_err_pid5732.log


 During our test runs cassandra crashes from time to time with the following 
 stacktrace:
 a similar bug can be found here 
 https://issues.apache.org/jira/browse/CASSANDRA-5256
 operating system is
 --- S Y S T E M ---
 OS: Windows Server 2012 R2 , 64 bit Build 9600
 CPU:total 2 (2 cores per cpu, 1 threads per core) family 6 model 37 stepping 
 1, cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, sse4.1, sse4.2, popcnt, aes, 
 tsc, tscinvbit
 Memory: 4k page, physical 8388148k(3802204k free), swap 8388148k(4088948k 
 free)
 vm_info: Java HotSpot(TM) 64-Bit Server VM (24.60-b09) for windows-amd64 JRE 
 (1.7.0_60-b19), built on May 7 2014 12:55:18 by java_re with unknown MS 
 VC++:1600
 time: Wed Oct 15 09:32:30 2014 
 elapsed time: 16 seconds
 attached are several hs_err files too
 {code}
 j org.apache.cassandra.io.util.Memory.getLong(J)J+14 
 j 
 org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(J)Lorg/apache/cassandra/io/compress/CompressionMetadata$Chunk;+53
  
 j org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer()V+9 
 j org.apache.cassandra.io.compress.CompressedThrottledReader.reBuffer()V+13 
 J 258 C2 org.apache.cassandra.io.util.RandomAccessReader.read()I (128 bytes) 
 @ 0x0250cbcc [0x0250cae0+0xec] 
 J 306 C2 java.io.RandomAccessFile.readUnsignedShort()I (33 bytes) @ 
 0x025475e4 [0x02547480+0x164] 
 J 307 C2 
 org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(Ljava/io/DataInput;)Ljava/nio/ByteBuffer;
  (9 bytes) @ 0x0254c290 [0x0254c140+0x150] 
 j 
 org.apache.cassandra.io.sstable.SSTableScanner$KeyScanningIterator.next()Lorg/apache/cassandra/db/columniterator/OnDiskAtomIterator;+65
  
 j 
 org.apache.cassandra.io.sstable.SSTableScanner$KeyScanningIterator.next()Ljava/lang/Object;+1
  
 j 
 org.apache.cassandra.io.sstable.SSTableScanner.next()Lorg/apache/cassandra/db/columniterator/OnDiskAtomIterator;+41
  
 j org.apache.cassandra.io.sstable.SSTableScanner.next()Ljava/lang/Object;+1 
 j org.apache.cassandra.utils.MergeIterator$Candidate.advance()Z+19 
 j 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.init(Ljava/util/List;Ljava/util/Comparator;Lorg/apache/cassandra/utils/MergeIterator$Reducer;)V+71
  
 j 
 org.apache.cassandra.utils.MergeIterator.get(Ljava/util/List;Ljava/util/Comparator;Lorg/apache/cassandra/utils/MergeIterator$Reducer;)Lorg/apache/cassandra/utils/IMergeIterator;+46
  
 j 
 org.apache.cassandra.db.compaction.CompactionIterable.iterator()Lorg/apache/cassandra/utils/CloseableIterator;+15
  
 j 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(Ljava/io/File;)V+319
  
 j org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow()V+89 
 j org.apache.cassandra.utils.WrappedRunnable.run()V+1 
 j 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(Lorg/apache/cassandra/db/compaction/CompactionManager$CompactionExecutorStatsCollector;)I+6
  
 j 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(Lorg/apache/cassandra/db/compaction/CompactionManager$CompactionExecutorStatsCollector;)I+2
  
 j 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run()V+164
  
 j java.util.concurrent.Executors$RunnableAdapter.call()Ljava/lang/Object;+4 
 j java.util.concurrent.FutureTask.run()V+42 
 j 
 java.util.concurrent.ThreadPoolExecutor.runWorker(Ljava/util/concurrent/ThreadPoolExecutor$Worker;)V+95
  
 j java.util.concurrent.ThreadPoolExecutor$Worker.run()V+5 
 j java.lang.Thread.run()V+11 {code}
 v ~StubRoutines::call_stub 
 V [jvm.dll+0x1ce043]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8134) cassandra crashes sporadically on windows

2014-10-17 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8134:
---
Description: 
During our test runs cassandra crashes from time to time with the following 
stacktrace:

a similar bug can be found here 
https://issues.apache.org/jira/browse/CASSANDRA-5256

operating system is

--- S Y S T E M ---

OS: Windows Server 2012 R2 , 64 bit Build 9600

CPU:total 2 (2 cores per cpu, 1 threads per core) family 6 model 37 stepping 1, 
cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, sse4.1, sse4.2, popcnt, aes, tsc, 
tscinvbit

Memory: 4k page, physical 8388148k(3802204k free), swap 8388148k(4088948k free)

vm_info: Java HotSpot(TM) 64-Bit Server VM (24.60-b09) for windows-amd64 JRE 
(1.7.0_60-b19), built on May 7 2014 12:55:18 by java_re with unknown MS 
VC++:1600

time: Wed Oct 15 09:32:30 2014 
elapsed time: 16 seconds

attached are several hs_err files too
{code}
j org.apache.cassandra.io.util.Memory.getLong(J)J+14 
j 
org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(J)Lorg/apache/cassandra/io/compress/CompressionMetadata$Chunk;+53
 
j org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer()V+9 
j org.apache.cassandra.io.compress.CompressedThrottledReader.reBuffer()V+13 
J 258 C2 org.apache.cassandra.io.util.RandomAccessReader.read()I (128 bytes) @ 
0x0250cbcc [0x0250cae0+0xec] 
J 306 C2 java.io.RandomAccessFile.readUnsignedShort()I (33 bytes) @ 
0x025475e4 [0x02547480+0x164] 
J 307 C2 
org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(Ljava/io/DataInput;)Ljava/nio/ByteBuffer;
 (9 bytes) @ 0x0254c290 [0x0254c140+0x150] 
j 
org.apache.cassandra.io.sstable.SSTableScanner$KeyScanningIterator.next()Lorg/apache/cassandra/db/columniterator/OnDiskAtomIterator;+65
 
j 
org.apache.cassandra.io.sstable.SSTableScanner$KeyScanningIterator.next()Ljava/lang/Object;+1
 
j 
org.apache.cassandra.io.sstable.SSTableScanner.next()Lorg/apache/cassandra/db/columniterator/OnDiskAtomIterator;+41
 
j org.apache.cassandra.io.sstable.SSTableScanner.next()Ljava/lang/Object;+1 
j org.apache.cassandra.utils.MergeIterator$Candidate.advance()Z+19 
j 
org.apache.cassandra.utils.MergeIterator$ManyToOne.init(Ljava/util/List;Ljava/util/Comparator;Lorg/apache/cassandra/utils/MergeIterator$Reducer;)V+71
 
j 
org.apache.cassandra.utils.MergeIterator.get(Ljava/util/List;Ljava/util/Comparator;Lorg/apache/cassandra/utils/MergeIterator$Reducer;)Lorg/apache/cassandra/utils/IMergeIterator;+46
 
j 
org.apache.cassandra.db.compaction.CompactionIterable.iterator()Lorg/apache/cassandra/utils/CloseableIterator;+15
 
j 
org.apache.cassandra.db.compaction.CompactionTask.runWith(Ljava/io/File;)V+319 
j org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow()V+89 
j org.apache.cassandra.utils.WrappedRunnable.run()V+1 
j 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(Lorg/apache/cassandra/db/compaction/CompactionManager$CompactionExecutorStatsCollector;)I+6
 
j 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(Lorg/apache/cassandra/db/compaction/CompactionManager$CompactionExecutorStatsCollector;)I+2
 
j 
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run()V+164
 
j java.util.concurrent.Executors$RunnableAdapter.call()Ljava/lang/Object;+4 
j java.util.concurrent.FutureTask.run()V+42 
j 
java.util.concurrent.ThreadPoolExecutor.runWorker(Ljava/util/concurrent/ThreadPoolExecutor$Worker;)V+95
 
j java.util.concurrent.ThreadPoolExecutor$Worker.run()V+5 
j java.lang.Thread.run()V+11 {code}
v ~StubRoutines::call_stub 
V [jvm.dll+0x1ce043]

  was:
During our test runs cassandra crashes from time to time with the following 
stacktrace:

a similar bug can be found here 
https://issues.apache.org/jira/browse/CASSANDRA-5256

operating system is

--- S Y S T E M ---

OS: Windows Server 2012 R2 , 64 bit Build 9600

CPU:total 2 (2 cores per cpu, 1 threads per core) family 6 model 37 stepping 1, 
cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, sse4.1, sse4.2, popcnt, aes, tsc, 
tscinvbit

Memory: 4k page, physical 8388148k(3802204k free), swap 8388148k(4088948k free)

vm_info: Java HotSpot(TM) 64-Bit Server VM (24.60-b09) for windows-amd64 JRE 
(1.7.0_60-b19), built on May 7 2014 12:55:18 by java_re with unknown MS 
VC++:1600

time: Wed Oct 15 09:32:30 2014 
elapsed time: 16 seconds

attached are several hs_err files too

j org.apache.cassandra.io.util.Memory.getLong(J)J+14 
j 
org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(J)Lorg/apache/cassandra/io/compress/CompressionMetadata$Chunk;+53
 
j org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer()V+9 
j org.apache.cassandra.io.compress.CompressedThrottledReader.reBuffer()V+13 
J 258 C2 org.apache.cassandra.io.util.RandomAccessReader.read()I (128 bytes) 

[jira] [Commented] (CASSANDRA-8084) GossipFilePropertySnitch and EC2MultiRegionSnitch when used in AWS/GCE clusters doesnt use the PRIVATE IPS for Intra-DC communications - When running nodetool repai

2014-10-17 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14175219#comment-14175219
 ] 

Joshua McKenzie commented on CASSANDRA-8084:


Thoughts:
* I don't like persisting both the peer and the connecting address all the way 
down the abstraction stack through StreamPlan to StreamSession.  Both 
StreamPlan and StreamSession incur a pretty big burden just to be able 
to print the peer address alongside the private address we're streaming to.
* The name 'preferred' used in StreamPlan implies that it will fall back to the 
'from' option if it can't hit preferred, but that doesn't appear to be the case. 
 Maybe a rename to 'connecting' in this context as well would be appropriate?
* (nit) reseted should probably be resetted in OutboundTcpConnectionPool.java 
(predates this ticket, but while we're in the neighborhood...)

Functionally looks sound to me. Only other thing I'd recommend is testing 
sstableloader as suggested previously.

 GossipFilePropertySnitch and EC2MultiRegionSnitch when used in AWS/GCE 
 clusters doesnt use the PRIVATE IPS for Intra-DC communications - When 
 running nodetool repair
 -

 Key: CASSANDRA-8084
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8084
 Project: Cassandra
  Issue Type: Bug
  Components: Config
 Environment: Tested this in GCE and AWS clusters. Created multi 
 region and multi dc cluster once in GCE and once in AWS and ran into the same 
 problem. 
 DISTRIB_ID=Ubuntu
 DISTRIB_RELEASE=12.04
 DISTRIB_CODENAME=precise
 DISTRIB_DESCRIPTION=Ubuntu 12.04.3 LTS
 NAME=Ubuntu
 VERSION=12.04.3 LTS, Precise Pangolin
 ID=ubuntu
 ID_LIKE=debian
 PRETTY_NAME=Ubuntu precise (12.04.3 LTS)
 VERSION_ID=12.04
 Tried to install Apache Cassandra version ReleaseVersion: 2.0.10 and also 
 latest DSE version which is 4.5 and which corresponds to 2.0.8.39.
Reporter: Jana
Assignee: Yuki Morishita
  Labels: features
 Fix For: 2.0.12

 Attachments: 8084-2.0-v2.txt, 8084-2.0-v3.txt, 8084-2.0.txt


 Neither of these snitches (GossipFilePropertySnitch and EC2MultiRegionSnitch) 
 used the PRIVATE IPs for communication between INTRA-DC nodes in my 
 multi-region, multi-DC cluster in the cloud (on both AWS and GCE) when I ran 
 nodetool repair -local. It works fine during regular reads.
  Here are the various cluster flavors I tried that failed: 
 AWS + Multi-REGION + Multi-DC + GossipPropertyFileSnitch + (prefer_local=true) 
 in the cassandra-rackdc.properties file. 
 AWS + Multi-REGION + Multi-DC + EC2MultiRegionSnitch + (prefer_local=true) in 
 the cassandra-rackdc.properties file. 
 GCE + Multi-REGION + Multi-DC + GossipPropertyFileSnitch + (prefer_local=true) 
 in the cassandra-rackdc.properties file. 
 GCE + Multi-REGION + Multi-DC + EC2MultiRegionSnitch + (prefer_local=true) in 
 the cassandra-rackdc.properties file. 
 With the above setup I expect all of my nodes in a given DC to communicate via 
 private IPs, since the cloud providers don't charge us for traffic on the 
 private IPs but do charge for the public IPs.
 They can still use PUBLIC IPs for INTER-DC communication, which is working as 
 expected. 
 Here is a snippet from my log files when I ran nodetool repair -local. 
 Node responding to the node running repair: 
 INFO [AntiEntropyStage:1] 2014-10-08 14:47:51,628 Validator.java (line 254) 
 [repair #1439f290-4efa-11e4-bf3a-df845ecf54f8] Sending completed merkle tree 
 to /54.172.118.222 for system_traces/sessions
  INFO [AntiEntropyStage:1] 2014-10-08 14:47:51,741 Validator.java (line 254) 
 [repair #1439f290-4efa-11e4-bf3a-df845ecf54f8] Sending completed merkle tree 
 to /54.172.118.222 for system_traces/events
 Node running repair: 
 INFO [AntiEntropyStage:1] 2014-10-08 14:47:51,927 RepairSession.java (line 
 166) [repair #1439f290-4efa-11e4-bf3a-df845ecf54f8] Received merkle tree for 
 events from /54.172.118.222
 Note: the IPs it is communicating with are all PUBLIC IPs; it should have used 
 the PRIVATE IPs starting with 172.x.x.x
 YAML file values: 
 The listen address is set to: PRIVATE IP
 The broadcast address is set to: PUBLIC IP
 The SEEDS address is set to: PUBLIC IPs from both DCs
 The SNITCHES tried: GPFS and EC2MultiRegionSnitch
 RACK-DC: prefer_local set to true. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8084) GossipFilePropertySnitch and EC2MultiRegionSnitch when used in AWS/GCE clusters doesnt use the PRIVATE IPS for Intra-DC communications - When running nodetool repai

2014-10-17 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14175223#comment-14175223
 ] 

Yuki Morishita commented on CASSANDRA-8084:
---

bq. I don't like persisting both the peer and the connecting address all the 
way down the abstraction stack

That's what I wanted to avoid, but one reason we need both is that 'convict'ing 
from gossip does not work with private IPs, and I don't want to introduce a 
system table lookup from inside streaming.

I'll work on the other renaming issues.

 GossipFilePropertySnitch and EC2MultiRegionSnitch when used in AWS/GCE 
 clusters doesnt use the PRIVATE IPS for Intra-DC communications - When 
 running nodetool repair
 -

 Key: CASSANDRA-8084
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8084
 Project: Cassandra
  Issue Type: Bug
  Components: Config
 Environment: Tested this in GCE and AWS clusters. Created multi 
 region and multi dc cluster once in GCE and once in AWS and ran into the same 
 problem. 
 DISTRIB_ID=Ubuntu
 DISTRIB_RELEASE=12.04
 DISTRIB_CODENAME=precise
 DISTRIB_DESCRIPTION=Ubuntu 12.04.3 LTS
 NAME=Ubuntu
 VERSION=12.04.3 LTS, Precise Pangolin
 ID=ubuntu
 ID_LIKE=debian
 PRETTY_NAME=Ubuntu precise (12.04.3 LTS)
 VERSION_ID=12.04
 Tried to install Apache Cassandra version ReleaseVersion: 2.0.10 and also 
 latest DSE version which is 4.5 and which corresponds to 2.0.8.39.
Reporter: Jana
Assignee: Yuki Morishita
  Labels: features
 Fix For: 2.0.12

 Attachments: 8084-2.0-v2.txt, 8084-2.0-v3.txt, 8084-2.0.txt


 Neither of these snitches (GossipFilePropertySnitch and EC2MultiRegionSnitch) 
 used the PRIVATE IPs for communication between INTRA-DC nodes in my 
 multi-region, multi-DC clusters in the cloud (on both AWS and GCE) when I ran 
 nodetool repair -local. It works fine during regular reads.
 Here are the various cluster flavors I tried that failed: 
 AWS + Multi-REGION + Multi-DC + GossipPropertyFileSnitch + 
 (Prefer_local=true) in rackdc-properties file. 
 AWS + Multi-REGION + Multi-DC + EC2MultiRegionSnitch + (Prefer_local=true) in 
 rackdc-properties file. 
 GCE + Multi-REGION + Multi-DC + GossipPropertyFileSnitch + 
 (Prefer_local=true) in rackdc-properties file. 
 GCE + Multi-REGION + Multi-DC + EC2MultiRegionSnitch + (Prefer_local=true) in 
 rackdc-properties file. 
 With the above setup I expect all of my nodes in a given DC to communicate via 
 private IPs, since the cloud providers don't charge us for traffic over the 
 private IPs but do charge for the public IPs.
 They can still use PUBLIC IPs for INTER-DC communication, which is working as 
 expected. 
 Here is a snippet from my log files when I ran the nodetool repair -local - 
 Node responding to 'node running repair' 
 INFO [AntiEntropyStage:1] 2014-10-08 14:47:51,628 Validator.java (line 254) 
 [repair #1439f290-4efa-11e4-bf3a-df845ecf54f8] Sending completed merkle tree 
 to /54.172.118.222 for system_traces/sessions
  INFO [AntiEntropyStage:1] 2014-10-08 14:47:51,741 Validator.java (line 254) 
 [repair #1439f290-4efa-11e4-bf3a-df845ecf54f8] Sending completed merkle tree 
 to /54.172.118.222 for system_traces/events
 Node running repair - 
 INFO [AntiEntropyStage:1] 2014-10-08 14:47:51,927 RepairSession.java (line 
 166) [repair #1439f290-4efa-11e4-bf3a-df845ecf54f8] Received merkle tree for 
 events from /54.172.118.222
 Note: The IPs it is communicating with are all PUBLIC IPs; it should have used 
 the PRIVATE IPs starting with 172.x.x.x
 YAML file values : 
 The listen address is set to: PRIVATE IP
 The broadcast address is set to: PUBLIC IP
 The SEEDs address is set to: PUBLIC IPs from both DCs
 The SNITCHES tried: GPFS and EC2MultiRegionSnitch
 RACK-DC: Had prefer_local set to true. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8084) GossipFilePropertySnitch and EC2MultiRegionSnitch when used in AWS/GCE clusters doesnt use the PRIVATE IPS for Intra-DC communications - When running nodetool repai

2014-10-17 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14175242#comment-14175242
 ] 

Joshua McKenzie commented on CASSANDRA-8084:


Fair point on the compromise.  Better to persist that data locally than add the 
system table lookups into the streaming process and add an external dependency 
in that way.  Consistency on naming should help with that part a bit.

One last thing - it looks like there are some unused method signatures in 
StreamPlan we could take out and also normalize (updating testRequestEmpty(), 
for instance) to help clear up some of the clutter and duplication in there.

 GossipFilePropertySnitch and EC2MultiRegionSnitch when used in AWS/GCE 
 clusters doesnt use the PRIVATE IPS for Intra-DC communications - When 
 running nodetool repair
 -

 Key: CASSANDRA-8084
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8084
 Project: Cassandra
  Issue Type: Bug
  Components: Config
 Environment: Tested this in GCE and AWS clusters. Created multi 
 region and multi dc cluster once in GCE and once in AWS and ran into the same 
 problem. 
 DISTRIB_ID=Ubuntu
 DISTRIB_RELEASE=12.04
 DISTRIB_CODENAME=precise
 DISTRIB_DESCRIPTION=Ubuntu 12.04.3 LTS
 NAME=Ubuntu
 VERSION=12.04.3 LTS, Precise Pangolin
 ID=ubuntu
 ID_LIKE=debian
 PRETTY_NAME=Ubuntu precise (12.04.3 LTS)
 VERSION_ID=12.04
 Tried to install Apache Cassandra version ReleaseVersion: 2.0.10 and also 
 latest DSE version which is 4.5 and which corresponds to 2.0.8.39.
Reporter: Jana
Assignee: Yuki Morishita
  Labels: features
 Fix For: 2.0.12

 Attachments: 8084-2.0-v2.txt, 8084-2.0-v3.txt, 8084-2.0.txt


 Neither of these snitches (GossipFilePropertySnitch and EC2MultiRegionSnitch) 
 used the PRIVATE IPs for communication between INTRA-DC nodes in my 
 multi-region, multi-DC clusters in the cloud (on both AWS and GCE) when I ran 
 nodetool repair -local. It works fine during regular reads.
 Here are the various cluster flavors I tried that failed: 
 AWS + Multi-REGION + Multi-DC + GossipPropertyFileSnitch + 
 (Prefer_local=true) in rackdc-properties file. 
 AWS + Multi-REGION + Multi-DC + EC2MultiRegionSnitch + (Prefer_local=true) in 
 rackdc-properties file. 
 GCE + Multi-REGION + Multi-DC + GossipPropertyFileSnitch + 
 (Prefer_local=true) in rackdc-properties file. 
 GCE + Multi-REGION + Multi-DC + EC2MultiRegionSnitch + (Prefer_local=true) in 
 rackdc-properties file. 
 With the above setup I expect all of my nodes in a given DC to communicate via 
 private IPs, since the cloud providers don't charge us for traffic over the 
 private IPs but do charge for the public IPs.
 They can still use PUBLIC IPs for INTER-DC communication, which is working as 
 expected. 
 Here is a snippet from my log files when I ran the nodetool repair -local - 
 Node responding to 'node running repair' 
 INFO [AntiEntropyStage:1] 2014-10-08 14:47:51,628 Validator.java (line 254) 
 [repair #1439f290-4efa-11e4-bf3a-df845ecf54f8] Sending completed merkle tree 
 to /54.172.118.222 for system_traces/sessions
  INFO [AntiEntropyStage:1] 2014-10-08 14:47:51,741 Validator.java (line 254) 
 [repair #1439f290-4efa-11e4-bf3a-df845ecf54f8] Sending completed merkle tree 
 to /54.172.118.222 for system_traces/events
 Node running repair - 
 INFO [AntiEntropyStage:1] 2014-10-08 14:47:51,927 RepairSession.java (line 
 166) [repair #1439f290-4efa-11e4-bf3a-df845ecf54f8] Received merkle tree for 
 events from /54.172.118.222
 Note: The IPs it is communicating with are all PUBLIC IPs; it should have used 
 the PRIVATE IPs starting with 172.x.x.x
 YAML file values : 
 The listen address is set to: PRIVATE IP
 The broadcast address is set to: PUBLIC IP
 The SEEDs address is set to: PUBLIC IPs from both DCs
 The SNITCHES tried: GPFS and EC2MultiRegionSnitch
 RACK-DC: Had prefer_local set to true. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-8113) Gossip should ignore generation numbers too far in the future

2014-10-17 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams reopened CASSANDRA-8113:
-

It turns out we probably also need to ignore this check when the localgen is 
zero:

{noformat}
WARN  [GossipStage:1] 2014-10-17 05:51:42,554 Gossiper.java:993 - received an 
invalid gossip generation for peer /127.0.0.3; local 
generation = 0, received generation = 1413524483.
{noformat}
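
A minimal sketch of the guard being discussed, assuming the localGeneration / 
remoteGeneration ints and the MAX_GENERATION_DIFFERENCE threshold that Gossiper 
already uses (this is an illustration, not the committed patch):

{code}
public class GenerationCheck
{
    // A local generation of 0 means the endpoint was loaded from saved state and has not
    // gossiped yet, so the first real generation heard should not be rejected as bogus.
    static boolean unbelievable(int localGeneration, int remoteGeneration, int maxDifference)
    {
        return localGeneration != 0 && remoteGeneration > localGeneration + maxDifference;
    }

    public static void main(String[] args)
    {
        int maxDifference = 86400 * 365; // sample threshold for illustration only
        System.out.println(unbelievable(0, 1413524483, maxDifference));          // false: saved endpoint, accept it
        System.out.println(unbelievable(1413524483,
                                        1413524483 + 2 * maxDifference,
                                        maxDifference));                          // true: reject the bogus jump
    }
}
{code}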

 Gossip should ignore generation numbers too far in the future
 -

 Key: CASSANDRA-8113
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8113
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Richard Low
Assignee: Jason Brown
 Fix For: 2.1.1

 Attachments: 8113-v1.txt, 8113-v2.txt, 8113-v3.txt, 8113-v4.txt


 If a node sends corrupted gossip, it could set the generation numbers for 
 other nodes to arbitrarily large values. This is dangerous since one bad node 
 (e.g. with bad memory) could in theory bring down the cluster. Nodes should 
 refuse to accept generation numbers that are too far in the future.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8113) Gossip should ignore generation numbers too far in the future

2014-10-17 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-8113:

Attachment: 8133-fix.txt

Trivial patch attached for posterity; in testing now.

 Gossip should ignore generation numbers too far in the future
 -

 Key: CASSANDRA-8113
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8113
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Richard Low
Assignee: Jason Brown
 Fix For: 2.1.1

 Attachments: 8113-v1.txt, 8113-v2.txt, 8113-v3.txt, 8113-v4.txt, 
 8133-fix.txt


 If a node sends corrupted gossip, it could set the generation numbers for 
 other nodes to arbitrarily large values. This is dangerous since one bad node 
 (e.g. with bad memory) could in theory bring down the cluster. Nodes should 
 refuse to accept generation numbers that are too far in the future.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8113) Gossip should ignore generation numbers too far in the future

2014-10-17 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14175330#comment-14175330
 ] 

Brandon Williams edited comment on CASSANDRA-8113 at 10/17/14 6:12 PM:
---

To expound, we load saved endpoints with a generation of zero so it can be 
overridden on the first gossip round.  Trivial patch attached for posterity; in 
testing now.


was (Author: brandon.williams):
Trivial patch attached for posterity; in testing now.

 Gossip should ignore generation numbers too far in the future
 -

 Key: CASSANDRA-8113
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8113
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Richard Low
Assignee: Jason Brown
 Fix For: 2.1.1

 Attachments: 8113-v1.txt, 8113-v2.txt, 8113-v3.txt, 8113-v4.txt, 
 8133-fix.txt


 If a node sends corrupted gossip, it could set the generation numbers for 
 other nodes to arbitrarily large values. This is dangerous since one bad node 
 (e.g. with bad memory) could in theory bring down the cluster. Nodes should 
 refuse to accept generation numbers that are too far in the future.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8135) documentation missing for CONTAINS keyword

2014-10-17 Thread Jon Haddad (JIRA)
Jon Haddad created CASSANDRA-8135:
-

 Summary: documentation missing for CONTAINS keyword
 Key: CASSANDRA-8135
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8135
 Project: Cassandra
  Issue Type: New Feature
Reporter: Jon Haddad


the contains keyword was covered in this blog entry 
http://www.datastax.com/dev/blog/cql-in-2-1 but is missing from the 
documentation https://cassandra.apache.org/doc/cql3/CQL.html#collections



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8115) Windows install scripts fail to set logdir and datadir

2014-10-17 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-8115:
---
Attachment: 8115_v2.txt

Attached v2 that disallows unknown arguments.  The limitations of checked 
params in PowerShell meant I had to roll my own argument validation using $args, 
which was a little more involved, so I'd appreciate one more quick round of 
testing/review if it's not too much trouble, [~philipthompson].

 Windows install scripts fail to set logdir and datadir
 --

 Key: CASSANDRA-8115
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8115
 Project: Cassandra
  Issue Type: Bug
Reporter: Joshua McKenzie
Assignee: Joshua McKenzie
  Labels: Windows
 Fix For: 2.1.2

 Attachments: 8115_v1.txt, 8115_v2.txt


 After CASSANDRA-7136, the install scripts to run Cassandra as a service fail 
 on both the legacy and the powershell paths.  Looks like they need to have
 {code}
 ++JvmOptions=-Dcassandra.logdir=%CASSANDRA_HOME%\logs ^
 ++JvmOptions=-Dcassandra.storagedir=%CASSANDRA_HOME%\data
 {code}
 added to function correctly.
 We should take this opportunity to make sure the source of the java options 
 is uniform for both running and installation to prevent mismatches like this 
 in the future.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8135) documentation missing for CONTAINS keyword

2014-10-17 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8135:
---
Component/s: Documentation & website

 documentation missing for CONTAINS keyword
 --

 Key: CASSANDRA-8135
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8135
 Project: Cassandra
  Issue Type: New Feature
  Components: Documentation & website
Reporter: Jon Haddad

 the contains keyword was covered in this blog entry 
 http://www.datastax.com/dev/blog/cql-in-2-1 but is missing from the 
 documentation https://cassandra.apache.org/doc/cql3/CQL.html#collections



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8136) Windows Service never finishes shutting down

2014-10-17 Thread Joshua McKenzie (JIRA)
Joshua McKenzie created CASSANDRA-8136:
--

 Summary: Windows Service never finishes shutting down
 Key: CASSANDRA-8136
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8136
 Project: Cassandra
  Issue Type: Bug
Reporter: Joshua McKenzie
Assignee: Joshua McKenzie
Priority: Minor
 Fix For: 2.1.2


When using procrun with the -install option on Windows and starting 
Cassandra via services.msc, stopping the service never completes and gets stuck 
in the "stopping" status forever.  Probably related to:

{code}
public void stop()
{
    // this doesn't entirely shut down Cassandra, just the RPC server.
    // jsvc takes care of taking the rest down
    logger.info("Cassandra shutting down...");
    thriftServer.stop();
    nativeServer.stop();
}
{code}

procrun calls CassandraDaemon.stop as its StopMethod, so we may need to either 
a) augment what procrun is doing or b) add a more comprehensive stop to be 
called on Windows shutdown.
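
A rough, hypothetical sketch of option b) - none of these helper names exist in 
CassandraDaemon; the point is only that whatever procrun invokes as the 
StopMethod must leave no non-daemon threads running (or exit the JVM outright), 
otherwise the service sits in "stopping" forever:

{code}
public class WindowsServiceStopSketch
{
    // Invented placeholder for what a Windows-aware stop might do.
    static void stopForWindowsService()
    {
        System.out.println("Cassandra shutting down...");
        stopRpcServers();   // today's stop(): thriftServer.stop(); nativeServer.stop();
        drainAndHalt();     // additionally: flush, stop gossip/messaging, then exit
    }

    static void stopRpcServers()
    {
        // placeholder for thriftServer.stop() / nativeServer.stop()
    }

    static void drainAndHalt()
    {
        // placeholder for a full drain; exiting makes procrun see the process terminate
        System.exit(0);
    }

    public static void main(String[] args)
    {
        stopForWindowsService();
    }
}
{code}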



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7546) AtomicSortedColumns.addAllWithSizeDelta has a spin loop that allocates memory

2014-10-17 Thread graham sanderson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14175425#comment-14175425
 ] 

graham sanderson commented on CASSANDRA-7546:
-

Thanks [~yukim] ... note I just noticed that in CHANGES.txt this is recorded in 
the merge from 2.0: section

 AtomicSortedColumns.addAllWithSizeDelta has a spin loop that allocates memory
 -

 Key: CASSANDRA-7546
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7546
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: graham sanderson
Assignee: graham sanderson
 Fix For: 2.1.1

 Attachments: 7546.20.txt, 7546.20_2.txt, 7546.20_3.txt, 
 7546.20_4.txt, 7546.20_5.txt, 7546.20_6.txt, 7546.20_7.txt, 7546.20_7b.txt, 
 7546.20_alt.txt, 7546.20_async.txt, 7546.21_v1.txt, 
 cassandra-2.1-7546-v2.txt, cassandra-2.1-7546-v3.txt, cassandra-2.1-7546.txt, 
 graph2_7546.png, graph3_7546.png, graph4_7546.png, graphs1.png, 
 hint_spikes.png, suggestion1.txt, suggestion1_21.txt, young_gen_gc.png


 In order to preserve atomicity, this code attempts to read, clone/update, 
 then CAS the state of the partition.
 Under heavy contention for updating a single partition this can cause some 
 fairly staggering memory growth (the more cores on your machine, the worse it 
 gets).
 Whilst many usage patterns don't do highly concurrent updates to the same 
 partition, hinting today does, and in this case wild (order(s) of magnitude 
 more than expected) memory allocation rates can be seen (especially when the 
 updates being hinted are small updates to different partitions, which can 
 happen very fast on their own) - see CASSANDRA-7545
 It would be best to eliminate/reduce/limit the spinning memory allocation 
 whilst not slowing down the very common un-contended case.
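
For readers unfamiliar with the pattern, here is a small generic illustration 
(plain java.util.concurrent, not Cassandra's AtomicSortedColumns) of why a 
read / clone-and-update / CAS loop allocates on every failed attempt under 
contention - each losing thread builds a fresh copy that immediately becomes 
garbage:

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

public class CasSpinDemo
{
    static final AtomicReference<List<Integer>> state = new AtomicReference<>(new ArrayList<>());

    static void add(int value)
    {
        while (true)
        {
            List<Integer> current = state.get();
            List<Integer> updated = new ArrayList<>(current); // clone: one allocation per attempt
            updated.add(value);
            if (state.compareAndSet(current, updated))
                return;                                       // losers loop and re-clone
        }
    }

    public static void main(String[] args) throws InterruptedException
    {
        Thread[] threads = new Thread[8];
        for (int t = 0; t < threads.length; t++)
        {
            threads[t] = new Thread(() -> { for (int i = 0; i < 1_000; i++) add(i); });
            threads[t].start();
        }
        for (Thread thread : threads)
            thread.join();
        System.out.println("size = " + state.get().size()); // 8000, but far more copies were created
    }
}
{code}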



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8084) GossipFilePropertySnitch and EC2MultiRegionSnitch when used in AWS/GCE clusters doesnt use the PRIVATE IPS for Intra-DC communications - When running nodetool repair

2014-10-17 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-8084:
--
Attachment: 8084-2.0-v4.txt

Updated based on Josh's review.

[~jblangs...@datastax.com] can you confirm that sstableloader works with a 
node with a private IP (on AWS)?

 GossipFilePropertySnitch and EC2MultiRegionSnitch when used in AWS/GCE 
 clusters doesnt use the PRIVATE IPS for Intra-DC communications - When 
 running nodetool repair
 -

 Key: CASSANDRA-8084
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8084
 Project: Cassandra
  Issue Type: Bug
  Components: Config
 Environment: Tested this in GCE and AWS clusters. Created multi 
 region and multi dc cluster once in GCE and once in AWS and ran into the same 
 problem. 
 DISTRIB_ID=Ubuntu
 DISTRIB_RELEASE=12.04
 DISTRIB_CODENAME=precise
 DISTRIB_DESCRIPTION=Ubuntu 12.04.3 LTS
 NAME=Ubuntu
 VERSION=12.04.3 LTS, Precise Pangolin
 ID=ubuntu
 ID_LIKE=debian
 PRETTY_NAME=Ubuntu precise (12.04.3 LTS)
 VERSION_ID=12.04
 Tried to install Apache Cassandra version ReleaseVersion: 2.0.10 and also 
 latest DSE version which is 4.5 and which corresponds to 2.0.8.39.
Reporter: Jana
Assignee: Yuki Morishita
  Labels: features
 Fix For: 2.0.12

 Attachments: 8084-2.0-v2.txt, 8084-2.0-v3.txt, 8084-2.0-v4.txt, 
 8084-2.0.txt


 Neither of these snitches (GossipFilePropertySnitch and EC2MultiRegionSnitch) 
 used the PRIVATE IPs for communication between INTRA-DC nodes in my 
 multi-region, multi-DC clusters in the cloud (on both AWS and GCE) when I ran 
 nodetool repair -local. It works fine during regular reads.
 Here are the various cluster flavors I tried that failed: 
 AWS + Multi-REGION + Multi-DC + GossipPropertyFileSnitch + 
 (Prefer_local=true) in rackdc-properties file. 
 AWS + Multi-REGION + Multi-DC + EC2MultiRegionSnitch + (Prefer_local=true) in 
 rackdc-properties file. 
 GCE + Multi-REGION + Multi-DC + GossipPropertyFileSnitch + 
 (Prefer_local=true) in rackdc-properties file. 
 GCE + Multi-REGION + Multi-DC + EC2MultiRegionSnitch + (Prefer_local=true) in 
 rackdc-properties file. 
 With the above setup I expect all of my nodes in a given DC to communicate via 
 private IPs, since the cloud providers don't charge us for traffic over the 
 private IPs but do charge for the public IPs.
 They can still use PUBLIC IPs for INTER-DC communication, which is working as 
 expected. 
 Here is a snippet from my log files when I ran the nodetool repair -local - 
 Node responding to 'node running repair' 
 INFO [AntiEntropyStage:1] 2014-10-08 14:47:51,628 Validator.java (line 254) 
 [repair #1439f290-4efa-11e4-bf3a-df845ecf54f8] Sending completed merkle tree 
 to /54.172.118.222 for system_traces/sessions
  INFO [AntiEntropyStage:1] 2014-10-08 14:47:51,741 Validator.java (line 254) 
 [repair #1439f290-4efa-11e4-bf3a-df845ecf54f8] Sending completed merkle tree 
 to /54.172.118.222 for system_traces/events
 Node running repair - 
 INFO [AntiEntropyStage:1] 2014-10-08 14:47:51,927 RepairSession.java (line 
 166) [repair #1439f290-4efa-11e4-bf3a-df845ecf54f8] Received merkle tree for 
 events from /54.172.118.222
 Note: The IPs it is communicating with are all PUBLIC IPs; it should have used 
 the PRIVATE IPs starting with 172.x.x.x
 YAML file values : 
 The listen address is set to: PRIVATE IP
 The broadcast address is set to: PUBLIC IP
 The SEEDs address is set to: PUBLIC IPs from both DCs
 The SNITCHES tried: GPFS and EC2MultiRegionSnitch
 RACK-DC: Had prefer_local set to true. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8113) Gossip should ignore generation numbers too far in the future

2014-10-17 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14175487#comment-14175487
 ] 

Jason Brown commented on CASSANDRA-8113:


+1 to @driftx's fix-it patch 

 Gossip should ignore generation numbers too far in the future
 -

 Key: CASSANDRA-8113
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8113
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Richard Low
Assignee: Jason Brown
 Fix For: 2.1.1

 Attachments: 8113-v1.txt, 8113-v2.txt, 8113-v3.txt, 8113-v4.txt, 
 8133-fix.txt


 If a node sends corrupted gossip, it could set the generation numbers for 
 other nodes to arbitrarily large values. This is dangerous since one bad node 
 (e.g. with bad memory) could in theory bring down the cluster. Nodes should 
 refuse to accept generation numbers that are too far in the future.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[3/7] git commit: Update news/version for 2.1.1

2014-10-17 Thread brandonwilliams
Update news/version for 2.1.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8fca88e3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8fca88e3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8fca88e3

Branch: refs/heads/trunk
Commit: 8fca88e3068f7c1ec8aa36506643d2a044dd59e3
Parents: 049ace4
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Oct 17 13:45:08 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Oct 17 13:45:08 2014 +0200

--
 NEWS.txt | 5 +
 debian/changelog | 6 ++
 2 files changed, 11 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8fca88e3/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 50b9c7e..ecdb47e 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -16,6 +16,11 @@ using the provided 'sstableupgrade' tool.
 2.1.1
 =
 
+Upgrading
+-
+- Nothing specific to this release, but please see 2.1 if you are upgrading
+  from a previous version.
+
 New features
 
- Netty support for epoll on linux is now enabled.  If for some

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8fca88e3/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index f2ecceb..4e240eb 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+cassandra (2.1.1) unstable; urgency=medium
+
+  * New release
+
+ -- Sylvain Lebresne slebre...@apache.org  Fri, 17 Oct 2014 13:43:46 +0200
+
 cassandra (2.1.0) unstable; urgency=medium
 
   * New release



[4/7] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-10-17 Thread brandonwilliams
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
build.xml
debian/changelog


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b84d06f4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b84d06f4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b84d06f4

Branch: refs/heads/trunk
Commit: b84d06f4c77032855e5b9e57c6132a5d2600a933
Parents: 8fca88e 3c8a2a7
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Oct 17 13:45:47 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Oct 17 13:45:47 2014 +0200

--
 NEWS.txt | 8 
 1 file changed, 8 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b84d06f4/NEWS.txt
--
diff --cc NEWS.txt
index ecdb47e,6f6b795..d3d7b76
--- a/NEWS.txt
+++ b/NEWS.txt
@@@ -13,89 -13,14 +13,95 @@@ restore snapshots created with the prev
  'sstableloader' tool. You can upgrade the file format of your snapshots
  using the provided 'sstableupgrade' tool.
  
 +2.1.1
 +=
 +
 +Upgrading
 +-
 +- Nothing specific to this release, but please see 2.1 if you are 
upgrading
 +  from a previous version.
 +
 +New features
 +
 +   - Netty support for epoll on linux is now enabled.  If for some
 + reason you want to disable it, pass the following system property
 + -Dcassandra.native.epoll.enabled=false
 +
 +2.1
 +===
 +
 +New features
 +
 +   - Default data and log locations have changed.  If not set in
 + cassandra.yaml, the data file directory, commitlog directory,
 + and saved caches directory will default to $CASSANDRA_HOME/data/data,
 + $CASSANDRA_HOME/data/commitlog, and $CASSANDRA_HOME/data/saved_caches,
 + respectively.  The log directory now defaults to $CASSANDRA_HOME/logs.
 + If not set, $CASSANDRA_HOME, defaults to the top-level directory of
 + the installation.
 + Note that this should only affect source checkouts and tarballs.
 + Deb and RPM packages will continue to use /var/lib/cassandra and
 + /var/log/cassandra in cassandra.yaml.
 +   - SSTable data directory name is slightly changed. Each directory will
 + have hex string appended after CF name, e.g.
 + ks/cf-5be396077b811e3a3ab9dc4b9ac088d/
 + This hex string part represents unique ColumnFamily ID.
 + Note that existing directories are used as is, so only newly created
 + directories after upgrade have new directory name format.
 +   - Saved key cache files also have ColumnFamily ID in their file name.
 +   - It is now possible to do incremental repairs, sstables that have been
 + repaired are marked with a timestamp and not included in the next
 + repair session. Use nodetool repair -par -inc to use this feature.
 + A tool to manually mark/unmark sstables as repaired is available in
 + tools/bin/sstablerepairedset. This is particularly important when
 + using LCS, or any data not repaired in your first incremental repair
 + will be put back in L0.
 +   - Bootstrapping now ensures that range movements are consistent,
 + meaning the data for the new node is taken from the node that is no 
 + longer a responsible for that range of keys.
 + If you want the old behavior (due to a lost node perhaps)
 + you can set the following property 
(-Dcassandra.consistent.rangemovement=false)
 +   - It is now possible to use quoted identifiers in triggers' names. 
 + WARNING: if you previously used triggers with capital letters in their 
 + names, then you must quote them from now on.
 +   - Improved stress tool (http://goo.gl/OTNqiQ)
 +   - New incremental repair option (http://goo.gl/MjohJp, 
http://goo.gl/f8jSme)
 +   - Incremental replacement of compacted SSTables (http://goo.gl/JfDBGW)
 +   - The row cache can now cache only the head of partitions 
(http://goo.gl/6TJPH6)
 +   - Off-heap memtables (http://goo.gl/YT7znJ)
 +   - CQL improvements and additions: User-defined types, tuple types, 2ndary
 + indexing of collections, ... (http://goo.gl/kQl7GW)
 +
 +Upgrading
 +-
 +   - Rolling upgrades from anything pre-2.0.7 is not supported. Furthermore
 + pre-2.0 sstables are not supported. This means that before upgrading
 + a node on 2.1, this node must be started on 2.0 and
 + 'nodetool upgradesstables' must be run (and this even in the case
 + of not-rolling upgrades).
 +   - For size-tiered compaction users, Cassandra now defaults to ignoring
 + the coldest 5% of sstables.  This can be customized with the
 + cold_reads_to_omit compaction option; 0.0 omits nothing (the old
 + behavior) and 1.0 omits everything.
 +   - Multithreaded compaction has been removed.
 +   - 

[2/7] git commit: Fix DynamicCompositeTypeTest

2014-10-17 Thread brandonwilliams
Fix DynamicCompositeTypeTest


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/049ace4c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/049ace4c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/049ace4c

Branch: refs/heads/trunk
Commit: 049ace4c1847d39af5724538476971cbccce3ea9
Parents: f4037ed
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Oct 17 13:08:09 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Oct 17 13:08:09 2014 +0200

--
 .../cassandra/db/marshal/DynamicCompositeTypeTest.java| 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/049ace4c/test/unit/org/apache/cassandra/db/marshal/DynamicCompositeTypeTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/db/marshal/DynamicCompositeTypeTest.java 
b/test/unit/org/apache/cassandra/db/marshal/DynamicCompositeTypeTest.java
index e248eae..e9c47a9 100644
--- a/test/unit/org/apache/cassandra/db/marshal/DynamicCompositeTypeTest.java
+++ b/test/unit/org/apache/cassandra/db/marshal/DynamicCompositeTypeTest.java
@@ -219,11 +219,11 @@ public class DynamicCompositeTypeTest extends SchemaLoader
 
         Iterator<Cell> iter = cf.getSortedColumns().iterator();
 
-        assert iter.next().name().equals(cname5);
-        assert iter.next().name().equals(cname4);
-        assert iter.next().name().equals(cname1); // null UUID  reversed value
-        assert iter.next().name().equals(cname3);
-        assert iter.next().name().equals(cname2);
+        assert iter.next().name().toByteBuffer().equals(cname5);
+        assert iter.next().name().toByteBuffer().equals(cname4);
+        assert iter.next().name().toByteBuffer().equals(cname1); // null UUID  reversed value
+        assert iter.next().name().toByteBuffer().equals(cname3);
+        assert iter.next().name().toByteBuffer().equals(cname2);
 }
 
 @Test



[6/7] git commit: Don't do generation safety check when the local gen is zero

2014-10-17 Thread brandonwilliams
Don't do generation safety check when the local gen is zero

Patch by brandonwilliams, reviewed by jasobrown for CASSANDRA-8113


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/42f85904
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/42f85904
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/42f85904

Branch: refs/heads/cassandra-2.1
Commit: 42f85904221aeaf0181f75af2fa5d469b8cbcee7
Parents: b84d06f
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Oct 17 15:13:26 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Oct 17 15:13:26 2014 -0500

--
 src/java/org/apache/cassandra/gms/Gossiper.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/42f85904/src/java/org/apache/cassandra/gms/Gossiper.java
--
diff --git a/src/java/org/apache/cassandra/gms/Gossiper.java 
b/src/java/org/apache/cassandra/gms/Gossiper.java
index 5f0e576..3fdee88 100644
--- a/src/java/org/apache/cassandra/gms/Gossiper.java
+++ b/src/java/org/apache/cassandra/gms/Gossiper.java
@@ -987,7 +987,7 @@ public class Gossiper implements IFailureDetectionEventListener, GossiperMBean
             if (logger.isTraceEnabled())
                 logger.trace(ep + " local generation " + localGeneration + ", remote generation " + remoteGeneration);
 
-            if (remoteGeneration > localGeneration + MAX_GENERATION_DIFFERENCE)
+            if (localGeneration != 0 && remoteGeneration > localGeneration + MAX_GENERATION_DIFFERENCE)
             {
                 // assume some peer has corrupted memory and is broadcasting an unbelievable generation about another peer (or itself)
                 logger.warn("received an invalid gossip generation for peer {}; local generation = {}, received generation = {}", ep, localGeneration, remoteGeneration);



[5/7] git commit: Don't do generation safety check when the local gen is zero

2014-10-17 Thread brandonwilliams
Don't do generation safety check when the local gen is zero

Patch by brandonwilliams, reviewed by jasobrown for CASSANDRA-8113


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/42f85904
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/42f85904
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/42f85904

Branch: refs/heads/trunk
Commit: 42f85904221aeaf0181f75af2fa5d469b8cbcee7
Parents: b84d06f
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Oct 17 15:13:26 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Oct 17 15:13:26 2014 -0500

--
 src/java/org/apache/cassandra/gms/Gossiper.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/42f85904/src/java/org/apache/cassandra/gms/Gossiper.java
--
diff --git a/src/java/org/apache/cassandra/gms/Gossiper.java 
b/src/java/org/apache/cassandra/gms/Gossiper.java
index 5f0e576..3fdee88 100644
--- a/src/java/org/apache/cassandra/gms/Gossiper.java
+++ b/src/java/org/apache/cassandra/gms/Gossiper.java
@@ -987,7 +987,7 @@ public class Gossiper implements IFailureDetectionEventListener, GossiperMBean
             if (logger.isTraceEnabled())
                 logger.trace(ep + " local generation " + localGeneration + ", remote generation " + remoteGeneration);
 
-            if (remoteGeneration > localGeneration + MAX_GENERATION_DIFFERENCE)
+            if (localGeneration != 0 && remoteGeneration > localGeneration + MAX_GENERATION_DIFFERENCE)
            {
                 // assume some peer has corrupted memory and is broadcasting an unbelievable generation about another peer (or itself)
                 logger.warn("received an invalid gossip generation for peer {}; local generation = {}, received generation = {}", ep, localGeneration, remoteGeneration);



[7/7] git commit: Merge branch 'cassandra-2.1' into trunk

2014-10-17 Thread brandonwilliams
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b6b08f28
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b6b08f28
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b6b08f28

Branch: refs/heads/trunk
Commit: b6b08f281bc763a7d7a16d950593c7f8466d9328
Parents: fea7d9a 42f8590
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Oct 17 15:14:21 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Oct 17 15:14:21 2014 -0500

--
 NEWS.txt| 13 +
 debian/changelog|  6 ++
 src/java/org/apache/cassandra/gms/Gossiper.java |  2 +-
 3 files changed, 20 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b6b08f28/NEWS.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b6b08f28/src/java/org/apache/cassandra/gms/Gossiper.java
--
diff --cc src/java/org/apache/cassandra/gms/Gossiper.java
index ebe05d2,3fdee88..e698adf
--- a/src/java/org/apache/cassandra/gms/Gossiper.java
+++ b/src/java/org/apache/cassandra/gms/Gossiper.java
@@@ -993,9 -985,9 +993,9 @@@ public class Gossiper implements IFailu
              int localGeneration = localEpStatePtr.getHeartBeatState().getGeneration();
              int remoteGeneration = remoteState.getHeartBeatState().getGeneration();
              if (logger.isTraceEnabled())
 -                logger.trace(ep + " local generation " + localGeneration + ", remote generation " + remoteGeneration);
 +                logger.trace("{} local generation {}, remote generation {}", ep, localGeneration, remoteGeneration);
  
-             if (remoteGeneration > localGeneration + MAX_GENERATION_DIFFERENCE)
+             if (localGeneration != 0 && remoteGeneration > localGeneration + MAX_GENERATION_DIFFERENCE)
              {
                  // assume some peer has corrupted memory and is broadcasting an unbelievable generation about another peer (or itself)
                  logger.warn("received an invalid gossip generation for peer {}; local generation = {}, received generation = {}", ep, localGeneration, remoteGeneration);



[1/7] git commit: Update versions for 2.0.11

2014-10-17 Thread brandonwilliams
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 b84d06f4c -> 42f859042
  refs/heads/trunk fea7d9a03 -> b6b08f281


Update versions for 2.0.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3c8a2a76
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3c8a2a76
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3c8a2a76

Branch: refs/heads/trunk
Commit: 3c8a2a7660f156c41260019965d9e345d934eb01
Parents: 29a8b88
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Oct 17 13:02:29 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Oct 17 13:02:29 2014 +0200

--
 NEWS.txt | 8 
 build.xml| 2 +-
 debian/changelog | 6 ++
 3 files changed, 15 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3c8a2a76/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 102a87b..6f6b795 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -15,14 +15,22 @@ using the provided 'sstableupgrade' tool.
 
 2.0.11
 ==
+
+Upgrading
+-
+- Nothing specific to this release, but refer to previous entries if you
+  are upgrading from a previous version.
+
 New features
 
 - DateTieredCompactionStrategy added, optimized for time series data and 
groups
   data that is written closely in time (CASSANDRA-6602 for details). 
Consider
   this experimental for now.
 
+
 2.0.10
 ==
+
 New features
 
 - CqlPagingRecordReader and CqlPagingInputFormat have both been removed.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3c8a2a76/build.xml
--
diff --git a/build.xml b/build.xml
index 829c873..8c23407 100644
--- a/build.xml
+++ b/build.xml
@@ -25,7 +25,7 @@
     <property name="debuglevel" value="source,lines,vars"/>
 
     <!-- default version and SCM information -->
-    <property name="base.version" value="2.0.10"/>
+    <property name="base.version" value="2.0.11"/>
     <property name="scm.connection" value="scm:git://git.apache.org/cassandra.git"/>
     <property name="scm.developerConnection" value="scm:git://git.apache.org/cassandra.git"/>
     <property name="scm.url" value="http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=tree"/>

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3c8a2a76/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index e0b1eae..39d9520 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+cassandra (2.0.11) unstable; urgency=medium
+
+  * New release
+
+ -- Sylvain Lebresne slebre...@apache.org  Fri, 17 Oct 2014 13:01:02 +0200
+
 cassandra (2.0.10) unstable; urgency=medium
 
   * New release



[jira] [Resolved] (CASSANDRA-8113) Gossip should ignore generation numbers too far in the future

2014-10-17 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams resolved CASSANDRA-8113.
-
Resolution: Fixed

Committed.

 Gossip should ignore generation numbers too far in the future
 -

 Key: CASSANDRA-8113
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8113
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Richard Low
Assignee: Jason Brown
 Fix For: 2.1.1

 Attachments: 8113-v1.txt, 8113-v2.txt, 8113-v3.txt, 8113-v4.txt, 
 8133-fix.txt


 If a node sends corrupted gossip, it could set the generation numbers for 
 other nodes to arbitrarily large values. This is dangerous since one bad node 
 (e.g. with bad memory) could in theory bring down the cluster. Nodes should 
 refuse to accept generation numbers that are too far in the future.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8116) HSHA fails with default rpc_max_threads setting

2014-10-17 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14175498#comment-14175498
 ] 

Tyler Hobbs commented on CASSANDRA-8116:


Is it guaranteed to OOM?  Can't we just check for that combination and provide 
a sensible error instead of OOMing and letting the user figure it out?
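
For illustration, a check of roughly this shape at startup would catch the 
combination; the class, method, and message below are invented, not the actual 
yaml validation code:

{code}
public class RpcSettingsCheck
{
    // Reject the known-bad combination up front instead of letting the node OOM later.
    static void validate(String rpcServerType, Integer rpcMaxThreads)
    {
        boolean unlimited = rpcMaxThreads == null || rpcMaxThreads == Integer.MAX_VALUE;
        if ("hsha".equalsIgnoreCase(rpcServerType) && unlimited)
            throw new IllegalArgumentException(
                "rpc_server_type: hsha requires a finite rpc_max_threads in cassandra.yaml; " +
                "leaving it unlimited will exhaust the heap");
    }

    public static void main(String[] args)
    {
        validate("sync", null);   // fine: sync server tolerates the default
        validate("hsha", 2048);   // fine: bounded thread pool
        validate("hsha", null);   // throws with a clear configuration error
    }
}
{code}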

 HSHA fails with default rpc_max_threads setting
 ---

 Key: CASSANDRA-8116
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8116
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Mike Adamson
Assignee: Mike Adamson
Priority: Minor
 Fix For: 2.0.11, 2.1.1

 Attachments: 8116.txt


 The HSHA server fails with an 'Out of heap space' error if rpc_max_threads 
 is left at its default setting (unlimited) in cassandra.yaml.
 I'm not proposing any code change for this but have submitted a patch for a 
 comment change in cassandra.yaml to indicate that rpc_max_threads needs to be 
 changed if you use HSHA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-8116) HSHA fails with default rpc_max_threads setting

2014-10-17 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs reopened CASSANDRA-8116:


 HSHA fails with default rpc_max_threads setting
 ---

 Key: CASSANDRA-8116
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8116
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Mike Adamson
Assignee: Mike Adamson
Priority: Minor
 Fix For: 2.0.11, 2.1.1

 Attachments: 8116.txt


 The HSHA server fails with an 'Out of heap space' error if rpc_max_threads 
 is left at its default setting (unlimited) in cassandra.yaml.
 I'm not proposing any code change for this but have submitted a patch for a 
 comment change in cassandra.yaml to indicate that rpc_max_threads needs to be 
 changed if you use HSHA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8027) Assertion error in CompressionParameters

2014-10-17 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14175502#comment-14175502
 ] 

Michael Shuler commented on CASSANDRA-8027:
---

Tested out 2.0 branch with the patch, and it looks good to me.
{noformat}
[junit] Testsuite: org.apache.cassandra.cql3.SSTableMetadataTrackingTest
[junit] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
14.036 sec
{noformat}

 Assertion error in CompressionParameters
 

 Key: CASSANDRA-8027
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8027
 Project: Cassandra
  Issue Type: Bug
Reporter: Carl Yeksigian
Assignee: T Jake Luciani
 Fix For: 2.0.11, 2.1.1

 Attachments: 8027-2.0.txt, 8027.txt


 Compacting a CF with a secondary index throws an assertion error while 
 opening readers for the secondary index components. It is trying to update 
 the CFMD to null because it could not find a CFMD which describes the 
 secondary index. The CompressionParameters are shared between the data and 
 the secondary indices.
 Was introduced in CASSANDRA-7978.
 {noformat}
 java.lang.AssertionError: null
   at 
 org.apache.cassandra.io.compress.CompressionParameters.setLiveMetadata(CompressionParameters.java:108)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getCompressionMetadata(SSTableReader.java:1131)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1878)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableScanner.<init>(SSTableScanner.java:67) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1664)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1676)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:275)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:324)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:151)
  ~[main/:na]
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
  ~[main/:na]
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:75)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:236)
  ~[main/:na]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_67]
   at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_67]
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_67]
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_67]
   at java.lang.Thread.run(Thread.java:745) [na:1.7.0_67]
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8137) Prepared statement size overflow error

2014-10-17 Thread Kishan Karunaratne (JIRA)
Kishan Karunaratne created CASSANDRA-8137:
-

 Summary: Prepared statement size overflow error
 Key: CASSANDRA-8137
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8137
 Project: Cassandra
  Issue Type: Bug
 Environment: Linux Mint 64 | C* 2.1.0 | Ruby-driver master
Reporter: Kishan Karunaratne
 Fix For: 2.1.0


When using C* 2.1.0 and Ruby-driver master, I get the following error:

{noformat}
Prepared statement of size 4423336 bytes is larger than allowed maximum of 
2027520 bytes.

Unfortunately I don't have a stacktrace as the error isn't recorded in the 
system log. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8137) Prepared statement size overflow error

2014-10-17 Thread Kishan Karunaratne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishan Karunaratne updated CASSANDRA-8137:
--
Description: 
When using C* 2.1.0 and Ruby-driver master, I get the following error when 
running the Ruby duration test:

{noformat}
Prepared statement of size 4451848 bytes is larger than allowed maximum of 
2027520 bytes.
Prepared statement of size 4434568 bytes is larger than allowed maximum of 
2027520 bytes.
{noformat}

They usually occur in batches of 1, but sometimes in multiples as seen above.  
It happens occasionally, around 20% of the time when running the code.  
Unfortunately I don't have a stacktrace as the error isn't recorded in the 
system log. 
This is my schema, and the offending prepare statement:

{noformat}
@session.execute("CREATE TABLE duration_test.ints (
key INT,
copy INT,
value INT,
PRIMARY KEY (key, copy))
")
{noformat}

{noformat}
select = @session.prepare("SELECT * FROM ints WHERE key=?")
{noformat}

Now, I notice that if I explicitly specify the keyspace in the prepare, I don't 
get the error.

  was:
When using C* 2.1.0 and Ruby-driver master, I get the following error:

{noformat}
Prepared statement of size 4423336 bytes is larger than allowed maximum of 
2027520 bytes.

Unfortunately I don't have a stacktrace as the error isn't recorded in the 
system log. 


 Prepared statement size overflow error
 --

 Key: CASSANDRA-8137
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8137
 Project: Cassandra
  Issue Type: Bug
 Environment: Linux Mint 64 | C* 2.1.0 | Ruby-driver master
Reporter: Kishan Karunaratne
 Fix For: 2.1.0


 When using C* 2.1.0 and Ruby-driver master, I get the following error when 
 running the Ruby duration test:
 {noformat}
 Prepared statement of size 4451848 bytes is larger than allowed maximum of 
 2027520 bytes.
 Prepared statement of size 4434568 bytes is larger than allowed maximum of 
 2027520 bytes.
 {noformat}
 They usually occur in batches of 1, but sometimes in multiples as seen above. 
  It happens occasionally, around 20% of the time when running the code.  
 Unfortunately I don't have a stacktrace as the error isn't recorded in the 
 system log. 
 This is my schema, and the offending prepare statement:
 {noformat}
 @session.execute("CREATE TABLE duration_test.ints (
 key INT,
 copy INT,
 value INT,
 PRIMARY KEY (key, copy))
 ")
 {noformat}
 {noformat}
 select = @session.prepare("SELECT * FROM ints WHERE key=?")
 {noformat}
 Now, I notice that if I explicitly specify the keyspace in the prepare, I 
 don't get the error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8137) Prepared statement size overflow error

2014-10-17 Thread Kishan Karunaratne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishan Karunaratne updated CASSANDRA-8137:
--
Fix Version/s: (was: 2.1.0)
   2.1.1

 Prepared statement size overflow error
 --

 Key: CASSANDRA-8137
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8137
 Project: Cassandra
  Issue Type: Bug
 Environment: Linux Mint 64 | C* 2.1.0 | Ruby-driver master
Reporter: Kishan Karunaratne
 Fix For: 2.1.1


 When using C* 2.1.0 and Ruby-driver master, I get the following error when 
 running the Ruby duration test:
 {noformat}
 Prepared statement of size 4451848 bytes is larger than allowed maximum of 
 2027520 bytes.
 Prepared statement of size 4434568 bytes is larger than allowed maximum of 
 2027520 bytes.
 {noformat}
 They usually occur in batches of 1, but sometimes in multiples as seen above. 
  It happens occasionally, around 20% of the time when running the code.  
 Unfortunately I don't have a stacktrace as the error isn't recorded in the 
 system log. 
 This is my schema, and the offending prepare statement:
 {noformat}
 @session.execute("CREATE TABLE duration_test.ints (
 key INT,
 copy INT,
 value INT,
 PRIMARY KEY (key, copy))
 ")
 {noformat}
 {noformat}
 select = @session.prepare("SELECT * FROM ints WHERE key=?")
 {noformat}
 Now, I notice that if I explicitly specify the keyspace in the prepare, I 
 don't get the error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8137) Prepared statement size overflow error

2014-10-17 Thread Kishan Karunaratne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishan Karunaratne updated CASSANDRA-8137:
--
Description: 
When using C* 2.1.0 and Ruby-driver master, I get the following error when 
running the Ruby duration test (which prepares a lot of statements, in many 
threads):

{noformat}
Prepared statement of size 4451848 bytes is larger than allowed maximum of 
2027520 bytes.
Prepared statement of size 4434568 bytes is larger than allowed maximum of 
2027520 bytes.
{noformat}

They usually occur in batches of 1, but sometimes in multiples as seen above.  
It happens occasionally, around 20% of the time when running the code.  
Unfortunately I don't have a stacktrace as the error isn't recorded in the 
system log. 
This is my schema, and the offending prepare statement:

{noformat}
@session.execute("CREATE TABLE duration_test.ints (
key INT,
copy INT,
value INT,
PRIMARY KEY (key, copy))
")
{noformat}

{noformat}
select = @session.prepare("SELECT * FROM ints WHERE key=?")
{noformat}

Now, I notice that if I explicitly specify the keyspace in the prepare, I don't 
get the error.

  was:
When using C* 2.1.0 and Ruby-driver master, I get the following error when 
running the Ruby duration test:

{noformat}
Prepared statement of size 4451848 bytes is larger than allowed maximum of 
2027520 bytes.
Prepared statement of size 4434568 bytes is larger than allowed maximum of 
2027520 bytes.
{noformat}

They usually occur in batches of 1, but sometimes in multiples as seen above.  
It happens occasionally, around 20% of the time when running the code.  
Unfortunately I don't have a stacktrace as the error isn't recorded in the 
system log. 
This is my schema, and the offending prepare statement:

{noformat}
@session.execute("CREATE TABLE duration_test.ints (
key INT,
copy INT,
value INT,
PRIMARY KEY (key, copy))
")
{noformat}

{noformat}
select = @session.prepare("SELECT * FROM ints WHERE key=?")
{noformat}

Now, I notice that if I explicitly specify the keyspace in the prepare, I don't 
get the error.


 Prepared statement size overflow error
 --

 Key: CASSANDRA-8137
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8137
 Project: Cassandra
  Issue Type: Bug
 Environment: Linux Mint 64 | C* 2.1.0 | Ruby-driver master
Reporter: Kishan Karunaratne
 Fix For: 2.1.1


 When using C* 2.1.0 and Ruby-driver master, I get the following error when 
 running the Ruby duration test (which prepares a lot of statements, in many 
 threads):
 {noformat}
 Prepared statement of size 4451848 bytes is larger than allowed maximum of 
 2027520 bytes.
 Prepared statement of size 4434568 bytes is larger than allowed maximum of 
 2027520 bytes.
 {noformat}
 They usually occur in batches of 1, but sometimes in multiples as seen above. 
  It happens occasionally, around 20% of the time when running the code.  
 Unfortunately I don't have a stacktrace as the error isn't recorded in the 
 system log. 
 This is my schema, and the offending prepare statement:
 {noformat}
 @session.execute("CREATE TABLE duration_test.ints (
 key INT,
 copy INT,
 value INT,
 PRIMARY KEY (key, copy))
 ")
 {noformat}
 {noformat}
 select = @session.prepare("SELECT * FROM ints WHERE key=?")
 {noformat}
 Now, I notice that if I explicitly specify the keyspace in the prepare, I 
 don't get the error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8084) GossipFilePropertySnitch and EC2MultiRegionSnitch when used in AWS/GCE clusters doesnt use the PRIVATE IPS for Intra-DC communications - When running nodetool repai

2014-10-17 Thread J.B. Langston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14175588#comment-14175588
 ] 

J.B. Langston commented on CASSANDRA-8084:
--

I don't think sstableloader is working right. Here is the output for 
sstableloader itself:

{code}
automaton@ip-172-31-7-50:~/Keyspace1/Standard1$ sstableloader -d localhost `pwd`
Established connection to initial hosts
Opening sstables and calculating sections to stream
Streaming relevant part of 
/home/automaton/Keyspace1/Standard1/Keyspace1-Standard1-jb-320-Data.db 
/home/automaton/Keyspace1/Standard1/Keyspace1-Standard1-jb-326-Data.db 
/home/automaton/Keyspace1/Standard1/Keyspace1-Standard1-jb-325-Data.db 
/home/automaton/Keyspace1/Standard1/Keyspace1-Standard1-jb-283-Data.db 
/home/automaton/Keyspace1/Standard1/Keyspace1-Standard1-jb-267-Data.db 
/home/automaton/Keyspace1/Standard1/Keyspace1-Standard1-jb-211-Data.db 
/home/automaton/Keyspace1/Standard1/Keyspace1-Standard1-jb-301-Data.db 
/home/automaton/Keyspace1/Standard1/Keyspace1-Standard1-jb-316-Data.db to 
[/54.183.192.248, /54.215.139.161, /54.165.222.3, /54.172.118.222]
Streaming session ID: ac5dd440-5645-11e4-a813-3d13c3d3c540
progress: [/54.172.118.222 8/8 (100%)] [/54.183.192.248 8/8 (100%)] 
[/54.165.222.3 8/8 (100%)] [/54.215.139.161 8/8 (100%)] [total: 100% - 
2147483647MB/s (avg: 30MB/s)
{code}

Here is netstats on the node where it is running:

{code}
Responses   n/a 0812
automaton@ip-172-31-7-50:~$ nodetool netstats
Mode: NORMAL
Bulk Load ac5dd440-5645-11e4-a813-3d13c3d3c540
/172.31.7.50 (using /54.183.192.248)
Receiving 8 files, 1059673728 bytes total

/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-tmp-jb-10-Data.db
 56468194/164372226 bytes(34%) received from /172.31.7.50

/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-tmp-jb-4-Data.db
 27800/27800 bytes(100%) received from /172.31.7.50

/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-tmp-jb-3-Data.db
 50674396/50674396 bytes(100%) received from /172.31.7.50

/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-tmp-jb-5-Data.db
 68597334/68597334 bytes(100%) received from /172.31.7.50

/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-tmp-jb-7-Data.db
 139068110/139068110 bytes(100%) received from /172.31.7.50

/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-tmp-jb-6-Data.db
 12682638/12682638 bytes(100%) received from /172.31.7.50

/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-tmp-jb-9-Data.db
 27800/27800 bytes(100%) received from /172.31.7.50

/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-tmp-jb-8-Data.db
 68279024/68279024 bytes(100%) received from /172.31.7.50
Read Repair Statistics:
Attempted: 0
Mismatch (Blocking): 0
Mismatch (Background): 0
Pool NameActive   Pending  Completed
Commandsn/a 0  0
Responses   n/a 0970
{code}

Here's netstats on the other node in the same DC:

{code}
automaton@ip-172-31-40-169:~$ nodetool netstats
Mode: NORMAL
Bulk Load ac5dd440-5645-11e4-a813-3d13c3d3c540
/172.31.7.50 (using /54.183.192.248)
Receiving 8 files, 1059673728 bytes total

/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-tmp-jb-239-Data.db
 68279024/68279024 bytes(100%) received from /172.31.7.50

/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-tmp-jb-245-Data.db
 27800/27800 bytes(100%) received from /172.31.7.50

/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-tmp-jb-246-Data.db
 43078602/50674396 bytes(85%) received from /172.31.7.50

/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-tmp-jb-240-Data.db
 27800/27800 bytes(100%) received from /172.31.7.50

/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-tmp-jb-241-Data.db
 12682638/12682638 bytes(100%) received from /172.31.7.50

/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-tmp-jb-243-Data.db
 139068110/139068110 bytes(100%) received from /172.31.7.50

/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-tmp-jb-242-Data.db
 164372226/164372226 bytes(100%) received from /172.31.7.50

/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-tmp-jb-244-Data.db
 68597334/68597334 bytes(100%) received from /172.31.7.50
Read Repair Statistics:
Attempted: 0
Mismatch (Blocking): 0
Mismatch (Background): 0
Pool Name                    Active   Pending      Completed
Commands                        n/a         0         249589
Responses
{code}

[jira] [Comment Edited] (CASSANDRA-8084) GossipFilePropertySnitch and EC2MultiRegionSnitch when used in AWS/GCE clusters doesnt use the PRIVATE IPS for Intra-DC communications - When running nodetool repair

2014-10-17 Thread J.B. Langston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14175588#comment-14175588
 ] 

J.B. Langston edited comment on CASSANDRA-8084 at 10/17/14 9:43 PM:


I don't think sstableloader is working correctly. Here is the output from 
sstableloader itself:

{code}
automaton@ip-172-31-7-50:~/Keyspace1/Standard1$ sstableloader -d localhost `pwd`
Established connection to initial hosts
Opening sstables and calculating sections to stream
Streaming relevant part of 
/home/automaton/Keyspace1/Standard1/Keyspace1-Standard1-jb-320-Data.db 
/home/automaton/Keyspace1/Standard1/Keyspace1-Standard1-jb-326-Data.db 
/home/automaton/Keyspace1/Standard1/Keyspace1-Standard1-jb-325-Data.db 
/home/automaton/Keyspace1/Standard1/Keyspace1-Standard1-jb-283-Data.db 
/home/automaton/Keyspace1/Standard1/Keyspace1-Standard1-jb-267-Data.db 
/home/automaton/Keyspace1/Standard1/Keyspace1-Standard1-jb-211-Data.db 
/home/automaton/Keyspace1/Standard1/Keyspace1-Standard1-jb-301-Data.db 
/home/automaton/Keyspace1/Standard1/Keyspace1-Standard1-jb-316-Data.db to 
[/54.183.192.248, /54.215.139.161, /54.165.222.3, /54.172.118.222]
Streaming session ID: ac5dd440-5645-11e4-a813-3d13c3d3c540
progress: [/54.172.118.222 8/8 (100%)] [/54.183.192.248 8/8 (100%)] 
[/54.165.222.3 8/8 (100%)] [/54.215.139.161 8/8 (100%)] [total: 100% - 
2147483647MB/s (avg: 30MB/s)]
{code}

Here is netstats on the node where it is running (54.183.192.248):

{code}
automaton@ip-172-31-7-50:~$ nodetool netstats
Mode: NORMAL
Bulk Load ac5dd440-5645-11e4-a813-3d13c3d3c540
/172.31.7.50 (using /54.183.192.248)
Receiving 8 files, 1059673728 bytes total

/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-tmp-jb-10-Data.db
 56468194/164372226 bytes(34%) received from /172.31.7.50

/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-tmp-jb-4-Data.db
 27800/27800 bytes(100%) received from /172.31.7.50

/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-tmp-jb-3-Data.db
 50674396/50674396 bytes(100%) received from /172.31.7.50

/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-tmp-jb-5-Data.db
 68597334/68597334 bytes(100%) received from /172.31.7.50

/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-tmp-jb-7-Data.db
 139068110/139068110 bytes(100%) received from /172.31.7.50

/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-tmp-jb-6-Data.db
 12682638/12682638 bytes(100%) received from /172.31.7.50

/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-tmp-jb-9-Data.db
 27800/27800 bytes(100%) received from /172.31.7.50

/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-tmp-jb-8-Data.db
 68279024/68279024 bytes(100%) received from /172.31.7.50
Read Repair Statistics:
Attempted: 0
Mismatch (Blocking): 0
Mismatch (Background): 0
Pool Name                    Active   Pending      Completed
Commands                        n/a         0              0
Responses                       n/a         0            970
{code}

Here's netstats on the other node in the same DC (54.215.139.161):

{code}
automaton@ip-172-31-40-169:~$ nodetool netstats
Mode: NORMAL
Bulk Load ac5dd440-5645-11e4-a813-3d13c3d3c540
/172.31.7.50 (using /54.183.192.248)
Receiving 8 files, 1059673728 bytes total

/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-tmp-jb-239-Data.db
 68279024/68279024 bytes(100%) received from /172.31.7.50

/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-tmp-jb-245-Data.db
 27800/27800 bytes(100%) received from /172.31.7.50

/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-tmp-jb-246-Data.db
 43078602/50674396 bytes(85%) received from /172.31.7.50

/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-tmp-jb-240-Data.db
 27800/27800 bytes(100%) received from /172.31.7.50

/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-tmp-jb-241-Data.db
 12682638/12682638 bytes(100%) received from /172.31.7.50

/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-tmp-jb-243-Data.db
 139068110/139068110 bytes(100%) received from /172.31.7.50

/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-tmp-jb-242-Data.db
 164372226/164372226 bytes(100%) received from /172.31.7.50

/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-tmp-jb-244-Data.db
 68597334/68597334 bytes(100%) received from /172.31.7.50
Read Repair Statistics:
Attempted: 0
Mismatch (Blocking): 0
Mismatch (Background): 0
Pool Name                    Active   Pending      Completed
{code}

[jira] [Commented] (CASSANDRA-8084) GossipFilePropertySnitch and EC2MultiRegionSnitch when used in AWS/GCE clusters doesnt use the PRIVATE IPS for Intra-DC communications - When running nodetool repair

2014-10-17 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14175762#comment-14175762
 ] 

Yuki Morishita commented on CASSANDRA-8084:
---

bq. /172.31.7.50 (using /54.183.192.248)

I think this is because 'localhost' resolved to /172.31.7.50 on the sstableloader 
node. It is the same behavior as before (except for the 'using ...' part), I think: 
the connection is actually made to the broadcast address, which is what the 
'using ...' part is showing.

sstableloader cannot determine whether the nodes it streams to are in the same DC 
or not. To do that, we would have to provide the topology manually, either in a 
file or through a command-line option.

I can go further and solve that problem, but maybe in a different JIRA?
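
To make the idea concrete, here is a minimal sketch of what such a manual topology 
file could look like, borrowing the cassandra-topology.properties syntax that 
PropertyFileSnitch already reads. This is not an existing sstableloader option; the 
DC and rack names are placeholders, and the public IPs are the ones from the 
streaming output above:

{code}
# Hypothetical topology input for sstableloader (assumed format: the
# cassandra-topology.properties syntax <broadcast_address>=<datacenter>:<rack>).
# DC1/DC2 and RAC1 are placeholder names, not this cluster's real topology.
54.183.192.248=DC1:RAC1
54.215.139.161=DC1:RAC1
54.165.222.3=DC2:RAC1
54.172.118.222=DC2:RAC1

# Fallback for any node not listed above
default=DC1:RAC1
{code}

Given a mapping like this (or an equivalent command-line option), sstableloader 
could in principle prefer the private address for targets in its own DC, which is 
the behavior this ticket is after.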

 GossipFilePropertySnitch and EC2MultiRegionSnitch when used in AWS/GCE 
 clusters doesnt use the PRIVATE IPS for Intra-DC communications - When 
 running nodetool repair
 -

 Key: CASSANDRA-8084
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8084
 Project: Cassandra
  Issue Type: Bug
  Components: Config
 Environment: Tested this in GCE and AWS clusters. Created a multi-region, 
 multi-DC cluster once in GCE and once in AWS and ran into the same 
 problem. 
 DISTRIB_ID=Ubuntu
 DISTRIB_RELEASE=12.04
 DISTRIB_CODENAME=precise
 DISTRIB_DESCRIPTION=Ubuntu 12.04.3 LTS
 NAME=Ubuntu
 VERSION=12.04.3 LTS, Precise Pangolin
 ID=ubuntu
 ID_LIKE=debian
 PRETTY_NAME=Ubuntu precise (12.04.3 LTS)
 VERSION_ID=12.04
 Tried to install Apache Cassandra ReleaseVersion 2.0.10 and also the latest DSE 
 version, which is 4.5 and corresponds to Cassandra 2.0.8.39.
Reporter: Jana
Assignee: Yuki Morishita
  Labels: features
 Fix For: 2.0.12

 Attachments: 8084-2.0-v2.txt, 8084-2.0-v3.txt, 8084-2.0-v4.txt, 
 8084-2.0.txt


 Neither of these snitches (GossipFilePropertySnitch and EC2MultiRegionSnitch) 
 used the PRIVATE IPs for communication between intra-DC nodes in my 
 multi-region, multi-DC cluster in the cloud (on both AWS and GCE) when I ran 
 nodetool repair -local. It works fine during regular reads.
 Here are the various cluster flavors I tried, all of which failed: 
 AWS + Multi-REGION + Multi-DC + GossipPropertyFileSnitch + 
 (prefer_local=true) in the cassandra-rackdc.properties file. 
 AWS + Multi-REGION + Multi-DC + EC2MultiRegionSnitch + (prefer_local=true) in 
 the cassandra-rackdc.properties file. 
 GCE + Multi-REGION + Multi-DC + GossipPropertyFileSnitch + 
 (prefer_local=true) in the cassandra-rackdc.properties file. 
 GCE + Multi-REGION + Multi-DC + EC2MultiRegionSnitch + (prefer_local=true) in 
 the cassandra-rackdc.properties file. 
 With the above setup I expect all of the nodes in a given DC to communicate 
 via private IPs, since the cloud providers do not charge for traffic over 
 private IPs but do charge for traffic over public IPs.
 They can still use PUBLIC IPs for inter-DC communication, which is working as 
 expected. 
 Here is a snippet from my log files when I ran nodetool repair -local. 
 Node responding to the node running repair: 
 INFO [AntiEntropyStage:1] 2014-10-08 14:47:51,628 Validator.java (line 254) 
 [repair #1439f290-4efa-11e4-bf3a-df845ecf54f8] Sending completed merkle tree 
 to /54.172.118.222 for system_traces/sessions
 INFO [AntiEntropyStage:1] 2014-10-08 14:47:51,741 Validator.java (line 254) 
 [repair #1439f290-4efa-11e4-bf3a-df845ecf54f8] Sending completed merkle tree 
 to /54.172.118.222 for system_traces/events
 Node running repair: 
 INFO [AntiEntropyStage:1] 2014-10-08 14:47:51,927 RepairSession.java (line 
 166) [repair #1439f290-4efa-11e4-bf3a-df845ecf54f8] Received merkle tree for 
 events from /54.172.118.222
 Note: The IPs it is communicating with are all PUBLIC IPs; it should have 
 used the PRIVATE IPs starting with 172.x.x.x.
 YAML file values (a sketch of this configuration follows below): 
 The listen address is set to: PRIVATE IP
 The broadcast address is set to: PUBLIC IP
 The SEEDS address is set to: PUBLIC IPs from both DCs
 The SNITCHES tried: GPFS and EC2MultiRegionSnitch
 RACK-DC: prefer_local set to true. 
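
For reference, a minimal sketch of the configuration described above, using the 
property names that GossipingPropertyFileSnitch and cassandra.yaml actually read. 
The DC/rack names are placeholders, and the addresses are the private/public pair 
and public seed IPs quoted in the comments above:

{code}
# conf/cassandra-rackdc.properties (read by GossipingPropertyFileSnitch;
# dc/rack values below are placeholders)
dc=DC1
rack=RAC1
# reconnect over the private (listen) address for nodes in the same DC
prefer_local=true
{code}

{code}
# conf/cassandra.yaml (excerpt; addresses taken from the comments above)
endpoint_snitch: GossipingPropertyFileSnitch
listen_address: 172.31.7.50          # private IP
broadcast_address: 54.183.192.248    # public IP
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "54.183.192.248,54.172.118.222"   # public IPs from both DCs
{code}

With prefer_local=true the snitch is expected to switch to the private listen 
address for intra-DC traffic; the report here is that this happens for regular 
reads but not for nodetool repair (and, per the comments above, not for 
sstableloader streaming either).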



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8084) GossipFilePropertySnitch and EC2MultiRegionSnitch when used in AWS/GCE clusters doesnt use the PRIVATE IPS for Intra-DC communications - When running nodetool repair

2014-10-17 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14175804#comment-14175804
 ] 

Jeremiah Jordan commented on CASSANDRA-8084:


+1 to fixing the sstableloader private-IP handling in a new JIRA, as long as it 
still works normally in the meantime (just not using the private IPs).

 GossipFilePropertySnitch and EC2MultiRegionSnitch when used in AWS/GCE 
 clusters doesnt use the PRIVATE IPS for Intra-DC communications - When 
 running nodetool repair
 -

 Key: CASSANDRA-8084
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8084
 Project: Cassandra
  Issue Type: Bug
  Components: Config
 Environment: Tested this in GCE and AWS clusters. Created a multi-region, 
 multi-DC cluster once in GCE and once in AWS and ran into the same 
 problem. 
 DISTRIB_ID=Ubuntu
 DISTRIB_RELEASE=12.04
 DISTRIB_CODENAME=precise
 DISTRIB_DESCRIPTION=Ubuntu 12.04.3 LTS
 NAME=Ubuntu
 VERSION=12.04.3 LTS, Precise Pangolin
 ID=ubuntu
 ID_LIKE=debian
 PRETTY_NAME=Ubuntu precise (12.04.3 LTS)
 VERSION_ID=12.04
 Tried to install Apache Cassandra ReleaseVersion 2.0.10 and also the latest DSE 
 version, which is 4.5 and corresponds to Cassandra 2.0.8.39.
Reporter: Jana
Assignee: Yuki Morishita
  Labels: features
 Fix For: 2.0.12

 Attachments: 8084-2.0-v2.txt, 8084-2.0-v3.txt, 8084-2.0-v4.txt, 
 8084-2.0.txt


 Neither of these snitches (GossipFilePropertySnitch and EC2MultiRegionSnitch) 
 used the PRIVATE IPs for communication between intra-DC nodes in my 
 multi-region, multi-DC cluster in the cloud (on both AWS and GCE) when I ran 
 nodetool repair -local. It works fine during regular reads.
 Here are the various cluster flavors I tried, all of which failed: 
 AWS + Multi-REGION + Multi-DC + GossipPropertyFileSnitch + 
 (prefer_local=true) in the cassandra-rackdc.properties file. 
 AWS + Multi-REGION + Multi-DC + EC2MultiRegionSnitch + (prefer_local=true) in 
 the cassandra-rackdc.properties file. 
 GCE + Multi-REGION + Multi-DC + GossipPropertyFileSnitch + 
 (prefer_local=true) in the cassandra-rackdc.properties file. 
 GCE + Multi-REGION + Multi-DC + EC2MultiRegionSnitch + (prefer_local=true) in 
 the cassandra-rackdc.properties file. 
 With the above setup I expect all of the nodes in a given DC to communicate 
 via private IPs, since the cloud providers do not charge for traffic over 
 private IPs but do charge for traffic over public IPs.
 They can still use PUBLIC IPs for inter-DC communication, which is working as 
 expected. 
 Here is a snippet from my log files when I ran nodetool repair -local. 
 Node responding to the node running repair: 
 INFO [AntiEntropyStage:1] 2014-10-08 14:47:51,628 Validator.java (line 254) 
 [repair #1439f290-4efa-11e4-bf3a-df845ecf54f8] Sending completed merkle tree 
 to /54.172.118.222 for system_traces/sessions
 INFO [AntiEntropyStage:1] 2014-10-08 14:47:51,741 Validator.java (line 254) 
 [repair #1439f290-4efa-11e4-bf3a-df845ecf54f8] Sending completed merkle tree 
 to /54.172.118.222 for system_traces/events
 Node running repair: 
 INFO [AntiEntropyStage:1] 2014-10-08 14:47:51,927 RepairSession.java (line 
 166) [repair #1439f290-4efa-11e4-bf3a-df845ecf54f8] Received merkle tree for 
 events from /54.172.118.222
 Note: The IPs it is communicating with are all PUBLIC IPs; it should have 
 used the PRIVATE IPs starting with 172.x.x.x.
 YAML file values: 
 The listen address is set to: PRIVATE IP
 The broadcast address is set to: PUBLIC IP
 The SEEDS address is set to: PUBLIC IPs from both DCs
 The SNITCHES tried: GPFS and EC2MultiRegionSnitch
 RACK-DC: prefer_local set to true. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)