[jira] [Commented] (CASSANDRA-2827) Thrift error

2013-10-03 Thread sumit thakur (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13784881#comment-13784881
 ] 

sumit thakur commented on CASSANDRA-2827:
-

In Apache Cassandra 1.1.5:



ERROR [Thrift:786] 2013-10-01 12:23:38,628 CustomTThreadPoolServer.java (line 
200) Thrift error occurred during processing of message.

org.apache.thrift.TException: Negative length: -2147418111

at 
org.apache.thrift.protocol.TBinaryProtocol.checkReadLength(TBinaryProtocol.java:388)

at 
org.apache.thrift.protocol.TBinaryProtocol.readBinary(TBinaryProtocol.java:363)

at 
org.apache.cassandra.thrift.Cassandra$batch_mutate_args.read(Cassandra.java:19724)

at 
org.apache.thrift.ProcessFunction.process(ProcessFunction.java:21)

at 
org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)

at 
org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:186)

at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)

at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)

at java.lang.Thread.run(Thread.java:662)



Hector is being used as the client to connect to Cassandra.
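
For context, -2147418111 is 0x80010001, i.e. the Thrift binary-protocol VERSION_1 header combined with a CALL message type. Seeing it reported as a "negative length" usually means the client and server disagree about framed vs. unframed transport (or something other than a Thrift client hit the RPC port), so a protocol header gets read where a byte-array length is expected. Below is a minimal sketch, not taken from this report, of a correctly framed raw Thrift connection against a local node on the default rpc_port 9160; Hector normally sets this up for you, so the first things to check are the client's framing configuration and thrift_framed_transport_size_in_mb on the server.

{code}
import org.apache.cassandra.thrift.Cassandra;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class ThriftFramingCheck
{
    public static void main(String[] args) throws Exception
    {
        // Cassandra expects a framed transport on port 9160 by default; dropping the
        // TFramedTransport wrapper (or mixing framed and unframed peers) is a common way
        // to end up with "Negative length: -2147418111" on the server side.
        TTransport transport = new TFramedTransport(new TSocket("127.0.0.1", 9160));
        Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(transport));
        transport.open();
        System.out.println("connected to cluster: " + client.describe_cluster_name());
        transport.close();
    }
}
{code}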

 Thrift error
 

 Key: CASSANDRA-2827
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2827
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.7.4
 Environment: 2 nodes with 0.7.4 on linux
Reporter: Olivier Smadja

 This exception occurred on a non-seed node.
 ERROR [pool-1-thread-9] 2011-06-25 17:41:37,723 CustomTThreadPoolServer.java 
 (line 218) Thrift error occurred during processing of message.
 org.apache.thrift.TException: Negative length: -2147418111
   at 
 org.apache.thrift.protocol.TBinaryProtocol.checkReadLength(TBinaryProtocol.java:388)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readBinary(TBinaryProtocol.java:363)
   at 
 org.apache.cassandra.thrift.Cassandra$batch_mutate_args.read(Cassandra.java:15964)
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.process(Cassandra.java:3023)
   at 
 org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2555)
   at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:206)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
   at java.lang.Thread.run(Thread.java:619)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6136) CQL should not allow an empty string as column identifier

2013-10-03 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13784934#comment-13784934
 ] 

Sylvain Lebresne commented on CASSANDRA-6136:
-

Concerning IndexInfo, this is really a bug in cqlsh's describe command. We 
internally use an empty column name to represent COMPACT tables that have no 
column outside the PK (which is generally allowed), so cqlsh should not 
display that empty column.

Now, the fact that we use it internally for that purpose is probably a good 
reason to refuse it otherwise (the fact that it's not allowed for table and 
keyspace identifiers matters less, since those are already a lot more 
restricted than column names).

But if we do it, I'd rather do it directly at the grammar level and disallow 
empty quoted names altogether (again, that's *not* a problem for IndexInfo).
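
For illustration only (this is not the attached patch), here is the kind of check a grammar-level fix could apply when a quoted identifier is parsed; the class and method names below are made up:

{code}
// Illustrative sketch: reject the empty quoted identifier "" as soon as it is parsed,
// instead of letting CREATE TABLE succeed with an unusable column name.
public final class QuotedIdentifierSketch
{
    private QuotedIdentifierSketch() {}

    /** Strips the surrounding double quotes, unescapes "" pairs, and rejects an empty result. */
    public static String parse(String quoted)
    {
        String name = quoted.substring(1, quoted.length() - 1).replace("\"\"", "\"");
        if (name.isEmpty())
            throw new IllegalArgumentException("Empty quoted identifiers are not allowed");
        return name;
    }

    public static void main(String[] args)
    {
        System.out.println(parse("\"myColumn\"")); // prints: myColumn
        parse("\"\"");                             // throws IllegalArgumentException
    }
}
{code}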

 CQL should not allow an empty string as column identifier
 -

 Key: CASSANDRA-6136
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6136
 Project: Cassandra
  Issue Type: Bug
Reporter: Michaël Figuière
Assignee: Dave Brosius
Priority: Minor
 Attachments: 6136.txt, 6136_v2.txt


 CQL currently allows users to create a table with an empty string as column 
 identifier:
 {code}
 CREATE TABLE t (k int primary key, "" int);
 {code}
 Which results in the following table:
 {code}
 CREATE TABLE t (
   k int,
   "" int,
   PRIMARY KEY (k)
 ) WITH
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   index_interval=128 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   default_time_to_live=0 AND
   speculative_retry='NONE' AND
   memtable_flush_period_in_ms=0 AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 {code}
 Empty strings are not allowed for keyspace and table identifiers though.
 I guess it's just a case that we haven't covered. Of course making it illegal 
 in a future version would be a breaking change, but nobody serious would 
 manually have chosen such an identifier...



--
This message was sent by Atlassian JIRA
(v6.1#6144)


git commit: Fix skipping columns with multiple slices

2013-10-03 Thread slebresne
Updated Branches:
  refs/heads/cassandra-1.2 c8f0e3a9f -> 20a805023


Fix skipping columns with multiple slices

patch by frousseau & slebresne; reviewed by frousseau & slebresne for CASSANDRA-6119


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/20a80502
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/20a80502
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/20a80502

Branch: refs/heads/cassandra-1.2
Commit: 20a805023fa26ab1b2f70b574b35357df9652cd3
Parents: c8f0e3a
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Thu Oct 3 11:06:12 2013 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Thu Oct 3 11:06:12 2013 +0200

--
 CHANGES.txt |   1 +
 .../db/columniterator/IndexedSliceReader.java   |  17 +-
 .../cassandra/db/ColumnFamilyStoreTest.java | 198 +++
 3 files changed, 213 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/20a80502/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index cc04eca..c1d1991 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -12,6 +12,7 @@
  * Add tombstone debug threshold and histogram (CASSANDRA-6042, 6057)
  * Fix fat client schema pull NPE (CASSANDRA-6089)
  * Fix memtable flushing for indexed tables (CASSANDRA-6112)
+ * Fix skipping columns with multiple slices (CASSANDRA-6119)
 
 
 1.2.10

http://git-wip-us.apache.org/repos/asf/cassandra/blob/20a80502/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
--
diff --git 
a/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java 
b/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
index 472ecfc..df916b2 100644
--- a/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
+++ b/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
@@ -391,7 +391,7 @@ class IndexedSliceReader extends AbstractIterator<OnDiskAtom> implements OnDiskA
 
 // scan from index start
 OnDiskAtom column = null;
-while (file.bytesPastMark(mark) < currentIndex.width)
+while (file.bytesPastMark(mark) < currentIndex.width || column != null)
 {
 // Only fetch a new column if we haven't dealt with the 
previous one.
 if (column == null)
@@ -467,20 +467,31 @@ class IndexedSliceReader extends AbstractIterator<OnDiskAtom> implements OnDiskA
 OnDiskAtom.Serializer atomSerializer = 
emptyColumnFamily.getOnDiskSerializer();
 int columns = file.readInt();
 
-for (int i = 0; i < columns; i++)
+OnDiskAtom column = null;
+int i = 0;
+while (i < columns || column != null)
 {
-OnDiskAtom column = 
atomSerializer.deserializeFromSSTable(file, sstable.descriptor.version);
+// Only fetch a new column if we haven't dealt with the 
previous one.
+if (column == null)
+{
+column = atomSerializer.deserializeFromSSTable(file, 
sstable.descriptor.version);
+i++;
+}
 
 // col is before slice
 // (If in slice, don't bother checking that until we change 
slice)
if (!inSlice && isColumnBeforeSliceStart(column))
+{
+column = null;
 continue;
+}
 
 // col is within slice
 if (isColumnBeforeSliceFinish(column))
 {
 inSlice = true;
 addColumn(column);
+column = null;
 }
 // col is after slice. more slices?
 else

http://git-wip-us.apache.org/repos/asf/cassandra/blob/20a80502/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java 
b/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
index abe3f05..cd30297 100644
--- a/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
+++ b/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
@@ -1160,6 +1160,204 @@ public class ColumnFamilyStoreTest extends SchemaLoader
 
 @SuppressWarnings("unchecked")
 @Test
+public void testMultiRangeSomeEmptyNoIndex() throws Throwable
+{
+// in order not to change thrift interfaces at this stage we build 
SliceQueryFilter
+// directly instead of using QueryFilter to build it for us
+

[3/3] git commit: Merge branch 'cassandra-2.0' into trunk

2013-10-03 Thread slebresne
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/12c4734b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/12c4734b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/12c4734b

Branch: refs/heads/trunk
Commit: 12c4734b5d657fec128db6cb284d03c92b2f1882
Parents: c7af304 4106f56
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Thu Oct 3 11:10:49 2013 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Thu Oct 3 11:10:49 2013 +0200

--
 CHANGES.txt |   1 +
 .../db/columniterator/IndexedSliceReader.java   |  13 +-
 .../cassandra/db/ColumnFamilyStoreTest.java | 198 +++
 3 files changed, 209 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/12c4734b/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/12c4734b/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
--



[2/2] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-10-03 Thread slebresne
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4106f569
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4106f569
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4106f569

Branch: refs/heads/cassandra-2.0
Commit: 4106f56945a8bc82762338ff1737d387abe0060a
Parents: 4bd8626 20a8050
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Thu Oct 3 11:10:35 2013 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Thu Oct 3 11:10:35 2013 +0200

--
 CHANGES.txt |   1 +
 .../db/columniterator/IndexedSliceReader.java   |  13 +-
 .../cassandra/db/ColumnFamilyStoreTest.java | 198 +++
 3 files changed, 209 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4106f569/CHANGES.txt
--
diff --cc CHANGES.txt
index c1023f6,c1d1991..994e8c3
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -27,46 -10,12 +27,47 @@@ Merged from 1.2
   * Do not open non-ssl storage port if encryption option is all 
(CASSANDRA-3916)
   * Move batchlog replay to its own executor (CASSANDRA-6079)
   * Add tombstone debug threshold and histogram (CASSANDRA-6042, 6057)
 + * Enable tcp keepalive on incoming connections (CASSANDRA-4053)
   * Fix fat client schema pull NPE (CASSANDRA-6089)
   * Fix memtable flushing for indexed tables (CASSANDRA-6112)
+  * Fix skipping columns with multiple slices (CASSANDRA-6119)
  
  
 -1.2.10
 +2.0.1
 + * Fix bug that could allow reading deleted data temporarily (CASSANDRA-6025)
 + * Improve memory use defaults (CASSANDRA-5069)
 + * Make ThriftServer more easlly extensible (CASSANDRA-6058)
 + * Remove Hadoop dependency from ITransportFactory (CASSANDRA-6062)
 + * add file_cache_size_in_mb setting (CASSANDRA-5661)
 + * Improve error message when yaml contains invalid properties 
(CASSANDRA-5958)
 + * Improve leveled compaction's ability to find non-overlapping L0 compactions
 +   to work on concurrently (CASSANDRA-5921)
 + * Notify indexer of columns shadowed by range tombstones (CASSANDRA-5614)
 + * Log Merkle tree stats (CASSANDRA-2698)
 + * Switch from crc32 to adler32 for compressed sstable checksums 
(CASSANDRA-5862)
 + * Improve offheap memcpy performance (CASSANDRA-5884)
 + * Use a range aware scanner for cleanup (CASSANDRA-2524)
 + * Cleanup doesn't need to inspect sstables that contain only local data
 +   (CASSANDRA-5722)
 + * Add ability for CQL3 to list partition keys (CASSANDRA-4536)
 + * Improve native protocol serialization (CASSANDRA-5664)
 + * Upgrade Thrift to 0.9.1 (CASSANDRA-5923)
 + * Require superuser status for adding triggers (CASSANDRA-5963)
 + * Make standalone scrubber handle old and new style leveled manifest
 +   (CASSANDRA-6005)
 + * Fix paxos bugs (CASSANDRA-6012, 6013, 6023)
 + * Fix paged ranges with multiple replicas (CASSANDRA-6004)
 + * Fix potential AssertionError during tracing (CASSANDRA-6041)
 + * Fix NPE in sstablesplit (CASSANDRA-6027)
 + * Migrate pre-2.0 key/value/column aliases to system.schema_columns
 +   (CASSANDRA-6009)
 + * Paging filter empty rows too agressively (CASSANDRA-6040)
 + * Support variadic parameters for IN clauses (CASSANDRA-4210)
 + * cqlsh: return the result of CAS writes (CASSANDRA-5796)
 + * Fix validation of IN clauses with 2ndary indexes (CASSANDRA-6050)
 + * Support named bind variables in CQL (CASSANDRA-6033)
 +Merged from 1.2:
 + * Allow cache-keys-to-save to be set at runtime (CASSANDRA-5980)
   * Avoid second-guessing out-of-space state (CASSANDRA-5605)
   * Tuning knobs for dealing with large blobs and many CFs (CASSANDRA-5982)
   * (Hadoop) Fix CQLRW for thrift tables (CASSANDRA-6002)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4106f569/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
--
diff --cc 
src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
index 27d307a,df916b2..036d0cf
--- a/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
+++ b/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
@@@ -445,11 -464,19 +445,14 @@@ class IndexedSliceReader extends Abstra
  // We remenber when we are whithin a slice to avoid some 
comparison
  boolean inSlice = false;
  
 -OnDiskAtom.Serializer atomSerializer = 
emptyColumnFamily.getOnDiskSerializer();
 -int columns = file.readInt();
 -
 +int columnCount = 
sstable.descriptor.version.hasRowSizeAndColumnCount ? file.readInt() : 
Integer.MAX_VALUE;
 

[1/2] git commit: Fix skipping columns with multiple slices

2013-10-03 Thread slebresne
Updated Branches:
  refs/heads/cassandra-2.0 4bd862618 -> 4106f5694


Fix skipping columns with multiple slices

patch by frousseau & slebresne; reviewed by frousseau & slebresne for CASSANDRA-6119


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/20a80502
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/20a80502
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/20a80502

Branch: refs/heads/cassandra-2.0
Commit: 20a805023fa26ab1b2f70b574b35357df9652cd3
Parents: c8f0e3a
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Thu Oct 3 11:06:12 2013 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Thu Oct 3 11:06:12 2013 +0200

--
 CHANGES.txt |   1 +
 .../db/columniterator/IndexedSliceReader.java   |  17 +-
 .../cassandra/db/ColumnFamilyStoreTest.java | 198 +++
 3 files changed, 213 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/20a80502/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index cc04eca..c1d1991 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -12,6 +12,7 @@
  * Add tombstone debug threshold and histogram (CASSANDRA-6042, 6057)
  * Fix fat client schema pull NPE (CASSANDRA-6089)
  * Fix memtable flushing for indexed tables (CASSANDRA-6112)
+ * Fix skipping columns with multiple slices (CASSANDRA-6119)
 
 
 1.2.10

http://git-wip-us.apache.org/repos/asf/cassandra/blob/20a80502/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
--
diff --git 
a/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java 
b/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
index 472ecfc..df916b2 100644
--- a/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
+++ b/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
@@ -391,7 +391,7 @@ class IndexedSliceReader extends AbstractIterator<OnDiskAtom> implements OnDiskA
 
 // scan from index start
 OnDiskAtom column = null;
-while (file.bytesPastMark(mark) < currentIndex.width)
+while (file.bytesPastMark(mark) < currentIndex.width || column != null)
 {
 // Only fetch a new column if we haven't dealt with the 
previous one.
 if (column == null)
@@ -467,20 +467,31 @@ class IndexedSliceReader extends AbstractIterator<OnDiskAtom> implements OnDiskA
 OnDiskAtom.Serializer atomSerializer = 
emptyColumnFamily.getOnDiskSerializer();
 int columns = file.readInt();
 
-for (int i = 0; i < columns; i++)
+OnDiskAtom column = null;
+int i = 0;
+while (i < columns || column != null)
 {
-OnDiskAtom column = 
atomSerializer.deserializeFromSSTable(file, sstable.descriptor.version);
+// Only fetch a new column if we haven't dealt with the 
previous one.
+if (column == null)
+{
+column = atomSerializer.deserializeFromSSTable(file, 
sstable.descriptor.version);
+i++;
+}
 
 // col is before slice
 // (If in slice, don't bother checking that until we change 
slice)
if (!inSlice && isColumnBeforeSliceStart(column))
+{
+column = null;
 continue;
+}
 
 // col is within slice
 if (isColumnBeforeSliceFinish(column))
 {
 inSlice = true;
 addColumn(column);
+column = null;
 }
 // col is after slice. more slices?
 else

http://git-wip-us.apache.org/repos/asf/cassandra/blob/20a80502/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java 
b/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
index abe3f05..cd30297 100644
--- a/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
+++ b/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
@@ -1160,6 +1160,204 @@ public class ColumnFamilyStoreTest extends SchemaLoader
 
 @SuppressWarnings("unchecked")
 @Test
+public void testMultiRangeSomeEmptyNoIndex() throws Throwable
+{
+// in order not to change thrift interfaces at this stage we build 
SliceQueryFilter
+// directly instead of using QueryFilter to build it for us
+

[2/3] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-10-03 Thread slebresne
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4106f569
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4106f569
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4106f569

Branch: refs/heads/trunk
Commit: 4106f56945a8bc82762338ff1737d387abe0060a
Parents: 4bd8626 20a8050
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Thu Oct 3 11:10:35 2013 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Thu Oct 3 11:10:35 2013 +0200

--
 CHANGES.txt |   1 +
 .../db/columniterator/IndexedSliceReader.java   |  13 +-
 .../cassandra/db/ColumnFamilyStoreTest.java | 198 +++
 3 files changed, 209 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4106f569/CHANGES.txt
--
diff --cc CHANGES.txt
index c1023f6,c1d1991..994e8c3
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -27,46 -10,12 +27,47 @@@ Merged from 1.2
   * Do not open non-ssl storage port if encryption option is all 
(CASSANDRA-3916)
   * Move batchlog replay to its own executor (CASSANDRA-6079)
   * Add tombstone debug threshold and histogram (CASSANDRA-6042, 6057)
 + * Enable tcp keepalive on incoming connections (CASSANDRA-4053)
   * Fix fat client schema pull NPE (CASSANDRA-6089)
   * Fix memtable flushing for indexed tables (CASSANDRA-6112)
+  * Fix skipping columns with multiple slices (CASSANDRA-6119)
  
  
 -1.2.10
 +2.0.1
 + * Fix bug that could allow reading deleted data temporarily (CASSANDRA-6025)
 + * Improve memory use defaults (CASSANDRA-5069)
 + * Make ThriftServer more easlly extensible (CASSANDRA-6058)
 + * Remove Hadoop dependency from ITransportFactory (CASSANDRA-6062)
 + * add file_cache_size_in_mb setting (CASSANDRA-5661)
 + * Improve error message when yaml contains invalid properties 
(CASSANDRA-5958)
 + * Improve leveled compaction's ability to find non-overlapping L0 compactions
 +   to work on concurrently (CASSANDRA-5921)
 + * Notify indexer of columns shadowed by range tombstones (CASSANDRA-5614)
 + * Log Merkle tree stats (CASSANDRA-2698)
 + * Switch from crc32 to adler32 for compressed sstable checksums 
(CASSANDRA-5862)
 + * Improve offheap memcpy performance (CASSANDRA-5884)
 + * Use a range aware scanner for cleanup (CASSANDRA-2524)
 + * Cleanup doesn't need to inspect sstables that contain only local data
 +   (CASSANDRA-5722)
 + * Add ability for CQL3 to list partition keys (CASSANDRA-4536)
 + * Improve native protocol serialization (CASSANDRA-5664)
 + * Upgrade Thrift to 0.9.1 (CASSANDRA-5923)
 + * Require superuser status for adding triggers (CASSANDRA-5963)
 + * Make standalone scrubber handle old and new style leveled manifest
 +   (CASSANDRA-6005)
 + * Fix paxos bugs (CASSANDRA-6012, 6013, 6023)
 + * Fix paged ranges with multiple replicas (CASSANDRA-6004)
 + * Fix potential AssertionError during tracing (CASSANDRA-6041)
 + * Fix NPE in sstablesplit (CASSANDRA-6027)
 + * Migrate pre-2.0 key/value/column aliases to system.schema_columns
 +   (CASSANDRA-6009)
 + * Paging filter empty rows too agressively (CASSANDRA-6040)
 + * Support variadic parameters for IN clauses (CASSANDRA-4210)
 + * cqlsh: return the result of CAS writes (CASSANDRA-5796)
 + * Fix validation of IN clauses with 2ndary indexes (CASSANDRA-6050)
 + * Support named bind variables in CQL (CASSANDRA-6033)
 +Merged from 1.2:
 + * Allow cache-keys-to-save to be set at runtime (CASSANDRA-5980)
   * Avoid second-guessing out-of-space state (CASSANDRA-5605)
   * Tuning knobs for dealing with large blobs and many CFs (CASSANDRA-5982)
   * (Hadoop) Fix CQLRW for thrift tables (CASSANDRA-6002)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4106f569/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
--
diff --cc 
src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
index 27d307a,df916b2..036d0cf
--- a/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
+++ b/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
@@@ -445,11 -464,19 +445,14 @@@ class IndexedSliceReader extends Abstra
  // We remenber when we are whithin a slice to avoid some 
comparison
  boolean inSlice = false;
  
 -OnDiskAtom.Serializer atomSerializer = 
emptyColumnFamily.getOnDiskSerializer();
 -int columns = file.readInt();
 -
 +int columnCount = 
sstable.descriptor.version.hasRowSizeAndColumnCount ? file.readInt() : 
Integer.MAX_VALUE;
 +   

[1/3] git commit: Fix skipping columns with multiple slices

2013-10-03 Thread slebresne
Updated Branches:
  refs/heads/trunk c7af3040c -> 12c4734b5


Fix skipping columns with multiple slices

patch by frousseau & slebresne; reviewed by frousseau & slebresne for CASSANDRA-6119


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/20a80502
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/20a80502
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/20a80502

Branch: refs/heads/trunk
Commit: 20a805023fa26ab1b2f70b574b35357df9652cd3
Parents: c8f0e3a
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Thu Oct 3 11:06:12 2013 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Thu Oct 3 11:06:12 2013 +0200

--
 CHANGES.txt |   1 +
 .../db/columniterator/IndexedSliceReader.java   |  17 +-
 .../cassandra/db/ColumnFamilyStoreTest.java | 198 +++
 3 files changed, 213 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/20a80502/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index cc04eca..c1d1991 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -12,6 +12,7 @@
  * Add tombstone debug threshold and histogram (CASSANDRA-6042, 6057)
  * Fix fat client schema pull NPE (CASSANDRA-6089)
  * Fix memtable flushing for indexed tables (CASSANDRA-6112)
+ * Fix skipping columns with multiple slices (CASSANDRA-6119)
 
 
 1.2.10

http://git-wip-us.apache.org/repos/asf/cassandra/blob/20a80502/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
--
diff --git 
a/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java 
b/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
index 472ecfc..df916b2 100644
--- a/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
+++ b/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
@@ -391,7 +391,7 @@ class IndexedSliceReader extends AbstractIterator<OnDiskAtom> implements OnDiskA
 
 // scan from index start
 OnDiskAtom column = null;
-while (file.bytesPastMark(mark) < currentIndex.width)
+while (file.bytesPastMark(mark) < currentIndex.width || column != null)
 {
 // Only fetch a new column if we haven't dealt with the 
previous one.
 if (column == null)
@@ -467,20 +467,31 @@ class IndexedSliceReader extends AbstractIterator<OnDiskAtom> implements OnDiskA
 OnDiskAtom.Serializer atomSerializer = 
emptyColumnFamily.getOnDiskSerializer();
 int columns = file.readInt();
 
-for (int i = 0; i < columns; i++)
+OnDiskAtom column = null;
+int i = 0;
+while (i < columns || column != null)
 {
-OnDiskAtom column = 
atomSerializer.deserializeFromSSTable(file, sstable.descriptor.version);
+// Only fetch a new column if we haven't dealt with the 
previous one.
+if (column == null)
+{
+column = atomSerializer.deserializeFromSSTable(file, 
sstable.descriptor.version);
+i++;
+}
 
 // col is before slice
 // (If in slice, don't bother checking that until we change 
slice)
if (!inSlice && isColumnBeforeSliceStart(column))
+{
+column = null;
 continue;
+}
 
 // col is within slice
 if (isColumnBeforeSliceFinish(column))
 {
 inSlice = true;
 addColumn(column);
+column = null;
 }
 // col is after slice. more slices?
 else

http://git-wip-us.apache.org/repos/asf/cassandra/blob/20a80502/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java 
b/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
index abe3f05..cd30297 100644
--- a/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
+++ b/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
@@ -1160,6 +1160,204 @@ public class ColumnFamilyStoreTest extends SchemaLoader
 
 @SuppressWarnings("unchecked")
 @Test
+public void testMultiRangeSomeEmptyNoIndex() throws Throwable
+{
+// in order not to change thrift interfaces at this stage we build 
SliceQueryFilter
+// directly instead of using QueryFilter to build it for us
+ColumnSlice[] 

[jira] [Resolved] (CASSANDRA-6119) IndexedSliceReader can skip columns when fetching multiple contiguous slices

2013-10-03 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-6119.
-

   Resolution: Fixed
Fix Version/s: 1.2.11

Committed, thanks

 IndexedSliceReader can skip columns when fetching multiple contiguous slices
 

 Key: CASSANDRA-6119
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6119
 Project: Cassandra
  Issue Type: Bug
Reporter: Fabien Rousseau
Assignee: Fabien Rousseau
 Fix For: 1.2.11

 Attachments: 6119.patch, 6119-v2.txt


 This was observed using SliceQueryFilter with multiple slices.
 Let's have a row "a" with the following column list: colA, colB, colC, colD.
 Then select 2 ranges: [colA, colB], [colC, colD].
 The expected result is the four columns, but only 3 are returned (colA, colB, colD).
 To reproduce the above scenario in the unit tests, you can modify the test 
 ColumnFamilyStoreTest.testMultiRangeIndexed by replacing the original line:
 String[] letters = new String[] { "a", "b", "c", "d", "e", "f", "g", "h", "i" };
 with this one (the "f" letter has been removed):
 String[] letters = new String[] { "a", "b", "c", "d", "e", "g", "h", "i" };
 Anyway, a patch is attached which adds more unit tests, and modifies 
 IndexedSliceReader.IndexedBlockFetcher & IndexedSliceReader.SimpleBlockFetcher
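
The committed change (see the IndexedSliceReader diff in the commit messages above) boils down to not consuming a new on-disk column until the previous one has been matched against a slice, so a column that falls just past the current slice is retried against the next slice instead of being dropped. The standalone sketch below, with made-up names and plain strings instead of Cassandra types, illustrates that pattern:

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Standalone illustration (not Cassandra code) of the fix's pattern: hold on to the
// current column until it has been handled, otherwise the first column of the next
// slice is lost when we advance slices.
public class MultiSliceSketch
{
    static List<String> collect(List<String> sortedColumns, String[][] slices)
    {
        List<String> result = new ArrayList<String>();
        Iterator<String> it = sortedColumns.iterator();
        int s = 0;
        String column = null;
        while ((it.hasNext() || column != null) && s < slices.length)
        {
            if (column == null)
                column = it.next();                  // only fetch once the previous column is handled
            if (column.compareTo(slices[s][0]) < 0)
                column = null;                       // before the slice: skip it
            else if (column.compareTo(slices[s][1]) <= 0)
            {
                result.add(column);                  // inside the slice
                column = null;
            }
            else
                s++;                                 // after the slice: keep the column, try the next slice
        }
        return result;
    }

    public static void main(String[] args)
    {
        // The scenario from the ticket: two ranges [a,b] and [c,d] must return all four columns.
        List<String> row = Arrays.asList("a", "b", "c", "d", "e", "g", "h", "i");
        System.out.println(collect(row, new String[][]{ {"a", "b"}, {"c", "d"} })); // [a, b, c, d]
    }
}
{code}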



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[1/2] git commit: Fix test build

2013-10-03 Thread slebresne
Updated Branches:
  refs/heads/trunk 12c4734b5 -> ec308e66b


Fix test build


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b3647d9e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b3647d9e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b3647d9e

Branch: refs/heads/trunk
Commit: b3647d9e7e30668370b701bb0b3724a111e24795
Parents: 4106f56
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Thu Oct 3 11:21:18 2013 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Thu Oct 3 11:21:18 2013 +0200

--
 test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b3647d9e/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java 
b/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
index 5a86ff4..4e6c87f 100644
--- a/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
+++ b/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
@@ -1322,7 +1322,7 @@ public class ColumnFamilyStoreTest extends SchemaLoader
 
String tableName = "Keyspace1";
String cfName = "Standard1";
-Table table = Table.open(tableName);
+Keyspace table = Keyspace.open(tableName);
 ColumnFamilyStore cfs = table.getColumnFamilyStore(cfName);
 cfs.clearUnsafe();
 
@@ -1371,7 +1371,7 @@ public class ColumnFamilyStoreTest extends SchemaLoader
 
String tableName = "Keyspace1";
String cfName = "Standard1";
-Table table = Table.open(tableName);
+Keyspace table = Keyspace.open(tableName);
 ColumnFamilyStore cfs = table.getColumnFamilyStore(cfName);
 cfs.clearUnsafe();
 
@@ -1420,7 +1420,7 @@ public class ColumnFamilyStoreTest extends SchemaLoader
 
String tableName = "Keyspace1";
String cfName = "Standard1";
-Table table = Table.open(tableName);
+Keyspace table = Keyspace.open(tableName);
 ColumnFamilyStore cfs = table.getColumnFamilyStore(cfName);
 cfs.clearUnsafe();
 
@@ -1470,7 +1470,7 @@ public class ColumnFamilyStoreTest extends SchemaLoader
 
String tableName = "Keyspace1";
String cfName = "Standard1";
-Table table = Table.open(tableName);
+Keyspace table = Keyspace.open(tableName);
 ColumnFamilyStore cfs = table.getColumnFamilyStore(cfName);
 cfs.clearUnsafe();
 



git commit: Fix test build

2013-10-03 Thread slebresne
Updated Branches:
  refs/heads/cassandra-2.0 4106f5694 -> b3647d9e7


Fix test build


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b3647d9e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b3647d9e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b3647d9e

Branch: refs/heads/cassandra-2.0
Commit: b3647d9e7e30668370b701bb0b3724a111e24795
Parents: 4106f56
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Thu Oct 3 11:21:18 2013 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Thu Oct 3 11:21:18 2013 +0200

--
 test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b3647d9e/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java 
b/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
index 5a86ff4..4e6c87f 100644
--- a/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
+++ b/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
@@ -1322,7 +1322,7 @@ public class ColumnFamilyStoreTest extends SchemaLoader
 
String tableName = "Keyspace1";
String cfName = "Standard1";
-Table table = Table.open(tableName);
+Keyspace table = Keyspace.open(tableName);
 ColumnFamilyStore cfs = table.getColumnFamilyStore(cfName);
 cfs.clearUnsafe();
 
@@ -1371,7 +1371,7 @@ public class ColumnFamilyStoreTest extends SchemaLoader
 
String tableName = "Keyspace1";
String cfName = "Standard1";
-Table table = Table.open(tableName);
+Keyspace table = Keyspace.open(tableName);
 ColumnFamilyStore cfs = table.getColumnFamilyStore(cfName);
 cfs.clearUnsafe();
 
@@ -1420,7 +1420,7 @@ public class ColumnFamilyStoreTest extends SchemaLoader
 
String tableName = "Keyspace1";
String cfName = "Standard1";
-Table table = Table.open(tableName);
+Keyspace table = Keyspace.open(tableName);
 ColumnFamilyStore cfs = table.getColumnFamilyStore(cfName);
 cfs.clearUnsafe();
 
@@ -1470,7 +1470,7 @@ public class ColumnFamilyStoreTest extends SchemaLoader
 
String tableName = "Keyspace1";
String cfName = "Standard1";
-Table table = Table.open(tableName);
+Keyspace table = Keyspace.open(tableName);
 ColumnFamilyStore cfs = table.getColumnFamilyStore(cfName);
 cfs.clearUnsafe();
 



[2/2] git commit: Merge branch 'cassandra-2.0' into trunk

2013-10-03 Thread slebresne
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ec308e66
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ec308e66
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ec308e66

Branch: refs/heads/trunk
Commit: ec308e66b34fc6946aa6c1c1b8967ba3aa963a74
Parents: 12c4734 b3647d9
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Thu Oct 3 11:21:32 2013 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Thu Oct 3 11:21:32 2013 +0200

--
 test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ec308e66/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
--



[jira] [Reopened] (CASSANDRA-6129) get java.util.ConcurrentModificationException while bulkloading from sstable for widerow table

2013-10-03 Thread koray sariteke (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

koray sariteke reopened CASSANDRA-6129:
---

Reproduced In: 2.0.1, 2.0.0  (was: 2.0.0, 2.0.1)
   Tester: koray sariteke

I have tested against 2.0.1 and got the problem. It seems that a concurrent set is 
needed for progressByHost.

I made some more changes and attached two git diff files (ProgressInfo.diff for 
ProgressInfo.java and BulkLoader.diff for BulkLoader.java).

Waiting for your comment.

Thanks
Koray 
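
For reference, the stack traces show a plain HashMap being iterated inside BulkLoader's ProgressIndicator while stream threads update it concurrently, which is exactly what raises ConcurrentModificationException. The sketch below (made-up names, not the attached BulkLoader.diff/ProgressInfo.diff) shows the concurrent-collection shape of the fix being suggested:

{code}
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch only: progress entries are added by stream threads and iterated by the progress
// printer, so both levels of the structure use concurrent collections. Iteration over
// ConcurrentHashMap views is weakly consistent and never throws ConcurrentModificationException.
public class ProgressIndicatorSketch
{
    private final ConcurrentMap<String, Set<String>> progressByHost =
            new ConcurrentHashMap<String, Set<String>>();

    public void handleStreamEvent(String host, String fileProgress)
    {
        Set<String> files = progressByHost.get(host);
        if (files == null)
        {
            Set<String> created = Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());
            Set<String> existing = progressByHost.putIfAbsent(host, created);
            files = existing == null ? created : existing;
        }
        files.add(fileProgress);
    }

    public void print()
    {
        for (Map.Entry<String, Set<String>> entry : progressByHost.entrySet())
            System.out.println(entry.getKey() + " " + entry.getValue());
    }

    public static void main(String[] args)
    {
        ProgressIndicatorSketch sketch = new ProgressIndicatorSketch();
        sketch.handleStreamEvent("/192.168.103.3", "0/39 (0%)");
        sketch.handleStreamEvent("/192.168.103.3", "1/39 (2%)");
        sketch.print();
    }
}
{code}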

 get java.util.ConcurrentModificationException while bulkloading from sstable 
 for widerow table
 --

 Key: CASSANDRA-6129
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6129
 Project: Cassandra
  Issue Type: Bug
  Components: Core, Tools
 Environment: three cassandra 2.0.1 node
 jdk 7
 linux - ubuntu
Reporter: koray sariteke
Assignee: Jonathan Ellis
 Fix For: 2.0.2

 Attachments: BulkLoader.diff, ProgressInfo.diff


 I haven't faced that problem with Cassandra 1.2.6.
 I have created widerow sstables with SSTableSimpleUnsortedWriter. When I 
 tried to load the sstables with sstableloader, I got 
 java.util.ConcurrentModificationException after a while (not at the beginning 
 of the streaming).
 The exception is:
 progress: [/192.168.103.5 0/39 (0%)] [/192.168.103.3 0/39 (0%)] 
 [/192.168.103.1 0/39 (0%)] [total: 0% - 15MB/s (avg: 0MB/s)] INFO 
 00:45:23,542 [Stream #c0f53e00-2ae2-11e3-ab6b-99a3e9e32246] Session with 
 /192.168.103.3 is complete
 progress: [/192.168.103.5 0/39 (0%)] [/192.168.103.3 0/39 (0%)] 
 [/192.168.103.1 0/39 (0%)] [total: 0% - 3MB/s (avg: 1MB/s)]Exception in 
 thread STREAM-OUT-/192.168.103.3 java.util.ConcurrentModificationException
   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:894)
   at java.util.HashMap$EntryIterator.next(HashMap.java:934)
   at java.util.HashMap$EntryIterator.next(HashMap.java:932)
   at 
 org.apache.cassandra.tools.BulkLoader$ProgressIndicator.handleStreamEvent(BulkLoader.java:129)
   at 
 org.apache.cassandra.streaming.StreamResultFuture.fireStreamEvent(StreamResultFuture.java:198)
   at 
 org.apache.cassandra.streaming.StreamResultFuture.handleProgress(StreamResultFuture.java:191)
   at 
 org.apache.cassandra.streaming.StreamSession.progress(StreamSession.java:474)
   at 
 org.apache.cassandra.streaming.StreamWriter.write(StreamWriter.java:105)
   at 
 org.apache.cassandra.streaming.messages.FileMessage$1.serialize(FileMessage.java:73)
   at 
 org.apache.cassandra.streaming.messages.FileMessage$1.serialize(FileMessage.java:45)
   at 
 org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:44)
   at 
 org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:384)
   at 
 org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:357)
   at java.lang.Thread.run(Thread.java:781)
 progress: [/192.168.103.5 0/39 (3%)] [/192.168.103.3 0/39 (0%)] 
 [/192.168.103.1 0/39 (2%)] [total: 1% - 2147483647MB/s (avg: 
 12MB/s)]Exception in thread STREAM-OUT-/192.168.103.1 
 java.util.ConcurrentModificationException
   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:894)
   at java.util.HashMap$KeyIterator.next(HashMap.java:928)
 progress: [/192.168.103.5 0/39 (3%)] [/192.168.103.3 0/39 (0%)] 
 [/192.168.103.1 0/39 (2%)] [total: 1% - 2147483647MB/s (avg: 12MB/s)]
   at 
 org.apache.cassandra.streaming.StreamResultFuture.fireStreamEvent(StreamResultFuture.java:198)
   at 
 org.apache.cassandra.streaming.StreamResultFuture.handleProgress(StreamResultFuture.java:191)
   at 
 org.apache.cassandra.streaming.StreamSession.progress(StreamSession.java:474)
   at 
 org.apache.cassandra.streaming.StreamWriter.write(StreamWriter.java:105)
   at 
 org.apache.cassandra.streaming.messages.FileMessage$1.serialize(FileMessage.java:73)
   at 
 org.apache.cassandra.streaming.messages.FileMessage$1.serialize(FileMessage.java:45)
   at 
 org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:44)
   at 
 org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:384)
   at 
 org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:357)
   at java.lang.Thread.run(Thread.java:781)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6129) get java.util.ConcurrentModificationException while bulkloading from sstable for widerow table

2013-10-03 Thread koray sariteke (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

koray sariteke updated CASSANDRA-6129:
--

Attachment: ProgressInfo.diff
BulkLoader.diff

 get java.util.ConcurrentModificationException while bulkloading from sstable 
 for widerow table
 --

 Key: CASSANDRA-6129
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6129
 Project: Cassandra
  Issue Type: Bug
  Components: Core, Tools
 Environment: three cassandra 2.0.1 node
 jdk 7
 linux - ubuntu
Reporter: koray sariteke
Assignee: Jonathan Ellis
 Fix For: 2.0.2

 Attachments: BulkLoader.diff, ProgressInfo.diff


 I haven't faced that problem with Cassandra 1.2.6.
 I have created widerow sstables with SSTableSimpleUnsortedWriter. When I 
 tried to load the sstables with sstableloader, I got 
 java.util.ConcurrentModificationException after a while (not at the beginning 
 of the streaming).
 The exception is:
 progress: [/192.168.103.5 0/39 (0%)] [/192.168.103.3 0/39 (0%)] 
 [/192.168.103.1 0/39 (0%)] [total: 0% - 15MB/s (avg: 0MB/s)] INFO 
 00:45:23,542 [Stream #c0f53e00-2ae2-11e3-ab6b-99a3e9e32246] Session with 
 /192.168.103.3 is complete
 progress: [/192.168.103.5 0/39 (0%)] [/192.168.103.3 0/39 (0%)] 
 [/192.168.103.1 0/39 (0%)] [total: 0% - 3MB/s (avg: 1MB/s)]Exception in 
 thread STREAM-OUT-/192.168.103.3 java.util.ConcurrentModificationException
   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:894)
   at java.util.HashMap$EntryIterator.next(HashMap.java:934)
   at java.util.HashMap$EntryIterator.next(HashMap.java:932)
   at 
 org.apache.cassandra.tools.BulkLoader$ProgressIndicator.handleStreamEvent(BulkLoader.java:129)
   at 
 org.apache.cassandra.streaming.StreamResultFuture.fireStreamEvent(StreamResultFuture.java:198)
   at 
 org.apache.cassandra.streaming.StreamResultFuture.handleProgress(StreamResultFuture.java:191)
   at 
 org.apache.cassandra.streaming.StreamSession.progress(StreamSession.java:474)
   at 
 org.apache.cassandra.streaming.StreamWriter.write(StreamWriter.java:105)
   at 
 org.apache.cassandra.streaming.messages.FileMessage$1.serialize(FileMessage.java:73)
   at 
 org.apache.cassandra.streaming.messages.FileMessage$1.serialize(FileMessage.java:45)
   at 
 org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:44)
   at 
 org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:384)
   at 
 org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:357)
   at java.lang.Thread.run(Thread.java:781)
 progress: [/192.168.103.5 0/39 (3%)] [/192.168.103.3 0/39 (0%)] 
 [/192.168.103.1 0/39 (2%)] [total: 1% - 2147483647MB/s (avg: 
 12MB/s)]Exception in thread STREAM-OUT-/192.168.103.1 
 java.util.ConcurrentModificationException
   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:894)
   at java.util.HashMap$KeyIterator.next(HashMap.java:928)
 progress: [/192.168.103.5 0/39 (3%)] [/192.168.103.3 0/39 (0%)] 
 [/192.168.103.1 0/39 (2%)] [total: 1% - 2147483647MB/s (avg: 12MB/s)]
   at 
 org.apache.cassandra.streaming.StreamResultFuture.fireStreamEvent(StreamResultFuture.java:198)
   at 
 org.apache.cassandra.streaming.StreamResultFuture.handleProgress(StreamResultFuture.java:191)
   at 
 org.apache.cassandra.streaming.StreamSession.progress(StreamSession.java:474)
   at 
 org.apache.cassandra.streaming.StreamWriter.write(StreamWriter.java:105)
   at 
 org.apache.cassandra.streaming.messages.FileMessage$1.serialize(FileMessage.java:73)
   at 
 org.apache.cassandra.streaming.messages.FileMessage$1.serialize(FileMessage.java:45)
   at 
 org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:44)
   at 
 org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:384)
   at 
 org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:357)
   at java.lang.Thread.run(Thread.java:781)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6053) system.peers table not updated after decommissioning nodes in C* 2.0

2013-10-03 Thread Jeremy Hanna (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13785003#comment-13785003
 ] 

Jeremy Hanna commented on CASSANDRA-6053:
-

The load_ring_state=false directive should probably also clear out the peers 
table because otherwise, polluted gossip is still persisted there.

 system.peers table not updated after decommissioning nodes in C* 2.0
 

 Key: CASSANDRA-6053
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6053
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Datastax AMI running EC2 m1.xlarge instances
Reporter: Guyon Moree
Assignee: Brandon Williams
 Attachments: peers


 After decommissioning my cluster from 20 to 9 nodes using opscenter, I found 
 all but one of the nodes had incorrect system.peers tables.
 This became a problem (afaik) when using the python-driver, since it queries 
 the peers table to set up its connection pool, resulting in very slow startup 
 times because of timeouts.
 The output of nodetool didn't seem to be affected. After removing the 
 incorrect entries from the peers tables, the connection issues seem to have 
 disappeared for us. 
 I would like some feedback on whether this was the right way to handle the issue 
 or whether I'm still left with a broken cluster.
 Attached is the output of nodetool status, which shows the correct 9 nodes. 
 Below that the output of the system.peers tables on the individual nodes.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6135) Add beforeChange Notification to Gossiper State.

2013-10-03 Thread Sergio Bossa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Bossa updated CASSANDRA-6135:


Attachment: 0002-CASSANDRA-6135.diff

Attached patch for latest cassandra-1.2 branch.

 Add beforeChange Notification to Gossiper State.
 

 Key: CASSANDRA-6135
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6135
 Project: Cassandra
  Issue Type: New Feature
Reporter: Benjamin Coverston
 Attachments: 
 0001-New-Gossiper-notification-to-IEndpointStateChangeSub.patch, 
 0002-CASSANDRA-6135.diff


 We would like an internal notification to be fired before state changes 
 happen so we can intercept them, and in some cases defer them.
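
A minimal sketch, assuming nothing about the attached patch beyond its title (judging from the attachment name it touches IEndpointStateChangeSubscriber), of what a before-change hook can look like; all names below are made up for illustration:

{code}
// Illustrative only: a callback consulted *before* a gossip state change is applied, so a
// subscriber can observe the pending change and ask for it to be deferred, in addition to
// the usual after-the-fact onChange-style notification.
public class BeforeChangeSketch
{
    interface BeforeChangeSubscriber
    {
        /** @return true to apply the change now, false to defer it */
        boolean beforeChange(String endpoint, String stateKey, String currentValue, String newValue);
    }

    public static void main(String[] args)
    {
        BeforeChangeSubscriber subscriber = new BeforeChangeSubscriber()
        {
            public boolean beforeChange(String endpoint, String stateKey, String currentValue, String newValue)
            {
                System.out.println("about to change " + stateKey + " on " + endpoint
                                   + ": " + currentValue + " -> " + newValue);
                return !"STATUS".equals(stateKey); // e.g. defer STATUS changes
            }
        };

        boolean apply = subscriber.beforeChange("/10.0.0.1", "STATUS", "NORMAL", "LEAVING");
        System.out.println(apply ? "applied" : "deferred");
    }
}
{code}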



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Comment Edited] (CASSANDRA-6053) system.peers table not updated after decommissioning nodes in C* 2.0

2013-10-03 Thread Jeremy Hanna (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13785003#comment-13785003
 ] 

Jeremy Hanna edited comment on CASSANDRA-6053 at 10/3/13 11:27 AM:
---

The load_ring_state=false directive should probably also clear out the peers 
table because otherwise, the state that you're trying to get rid of is still 
persisted there.


was (Author: jeromatron):
The load_ring_state=false directive should probably also clear out the peers 
table because otherwise, polluted gossip is still persisted there.

 system.peers table not updated after decommissioning nodes in C* 2.0
 

 Key: CASSANDRA-6053
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6053
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Datastax AMI running EC2 m1.xlarge instances
Reporter: Guyon Moree
Assignee: Brandon Williams
 Attachments: peers


 After decommissioning my cluster from 20 to 9 nodes using opscenter, I found 
 all but one of the nodes had incorrect system.peers tables.
 This became a problem (afaik) when using the python-driver, since it queries 
 the peers table to set up its connection pool, resulting in very slow startup 
 times because of timeouts.
 The output of nodetool didn't seem to be affected. After removing the 
 incorrect entries from the peers tables, the connection issues seem to have 
 disappeared for us. 
 I would like some feedback on whether this was the right way to handle the issue 
 or whether I'm still left with a broken cluster.
 Attached is the output of nodetool status, which shows the correct 9 nodes. 
 Below that the output of the system.peers tables on the individual nodes.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6136) CQL should not allow an empty string as column identifier

2013-10-03 Thread Dave Brosius (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Brosius updated CASSANDRA-6136:


Attachment: 6136_v2.txt

Version 2 disallows it at the parse layer.

 CQL should not allow an empty string as column identifier
 -

 Key: CASSANDRA-6136
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6136
 Project: Cassandra
  Issue Type: Bug
Reporter: Michaël Figuière
Assignee: Dave Brosius
Priority: Minor
 Attachments: 6136.txt, 6136_v2.txt


 CQL currently allows users to create a table with an empty string as column 
 identifier:
 {code}
 CREATE TABLE t (k int primary key, "" int);
 {code}
 Which results in the following table:
 {code}
 CREATE TABLE t (
   k int,
   "" int,
   PRIMARY KEY (k)
 ) WITH
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   index_interval=128 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   default_time_to_live=0 AND
   speculative_retry='NONE' AND
   memtable_flush_period_in_ms=0 AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 {code}
 Empty strings are not allowed for keyspace and table identifiers though.
 I guess it's just a case that we haven't covered. Of course making it illegal 
 in a future version would be a breaking change, but nobody serious would 
 manually have chosen such an identifier...



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6136) CQL should not allow an empty string as column identifier

2013-10-03 Thread Dave Brosius (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Brosius updated CASSANDRA-6136:


Attachment: (was: 6136_v2.txt)

 CQL should not allow an empty string as column identifier
 -

 Key: CASSANDRA-6136
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6136
 Project: Cassandra
  Issue Type: Bug
Reporter: Michaël Figuière
Assignee: Dave Brosius
Priority: Minor
 Attachments: 6136.txt, 6136_v2.txt


 CQL currently allows users to create a table with an empty string as column 
 identifier:
 {code}
 CREATE TABLE t (k int primary key, "" int);
 {code}
 Which results in the following table:
 {code}
 CREATE TABLE t (
   k int,
   "" int,
   PRIMARY KEY (k)
 ) WITH
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   index_interval=128 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   default_time_to_live=0 AND
   speculative_retry='NONE' AND
   memtable_flush_period_in_ms=0 AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 {code}
 Empty strings are not allowed for keyspace and table identifiers though.
 I guess it's just a case that we haven't covered. Of course making it illegal 
 in a future version would be a breaking change, but nobody serious would 
 manually have chosen such an identifier...



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Comment Edited] (CASSANDRA-6136) CQL should not allow an empty string as column identifier

2013-10-03 Thread Dave Brosius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13785028#comment-13785028
 ] 

Dave Brosius edited comment on CASSANDRA-6136 at 10/3/13 12:10 PM:
---

Version 3 disallows it at the parse layer.


was (Author: dbrosius):
Version 2 disallows it at the parse layer.

 CQL should not allow an empty string as column identifier
 -

 Key: CASSANDRA-6136
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6136
 Project: Cassandra
  Issue Type: Bug
Reporter: Michaël Figuière
Assignee: Dave Brosius
Priority: Minor
 Attachments: 6136.txt, 6136_v2.txt, 6136_v3.txt


 CQL currently allows users to create a table with an empty string as column 
 identifier:
 {code}
 CREATE TABLE t (k int primary key, "" int);
 {code}
 Which results in the following table:
 {code}
 CREATE TABLE t (
   k int,
   "" int,
   PRIMARY KEY (k)
 ) WITH
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   index_interval=128 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   default_time_to_live=0 AND
   speculative_retry='NONE' AND
   memtable_flush_period_in_ms=0 AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 {code}
 Empty strings are not allowed for keyspace and table identifiers though.
 I guess it's just a case that we haven't covered. Of course making it illegal 
 in a future version would be a breaking change, but nobody serious would 
 manually have chosen such an identifier...



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6136) CQL should not allow an empty string as column identifier

2013-10-03 Thread Dave Brosius (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Brosius updated CASSANDRA-6136:


Attachment: 6136_v3.txt

 CQL should not allow an empty string as column identifier
 -

 Key: CASSANDRA-6136
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6136
 Project: Cassandra
  Issue Type: Bug
Reporter: Michaël Figuière
Assignee: Dave Brosius
Priority: Minor
 Attachments: 6136.txt, 6136_v2.txt, 6136_v3.txt


 CQL currently allows users to create a table with an empty string as column 
 identifier:
 {code}
 CREATE TABLE t (k int primary key, "" int);
 {code}
 Which results in the following table:
 {code}
 CREATE TABLE t (
   k int,
   "" int,
   PRIMARY KEY (k)
 ) WITH
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   index_interval=128 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   default_time_to_live=0 AND
   speculative_retry='NONE' AND
   memtable_flush_period_in_ms=0 AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 {code}
 Empty strings are not allowed for keyspace and table identifiers though.
 I guess it's just a case that we haven't covered. Of course making it illegal 
 in a future version would be a breaking change, but nobody serious would 
 manually have chosen such an identifier...



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-5383) Windows 7 deleting/renaming files problem

2013-10-03 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-5383:
---

Attachment: 5383-v3.patch

Rebased against trunk; it both starts using the Java 7 APIs and removes a file 
before moving another file on top of it on Windows.
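
Not the attached patch, but a minimal sketch of the Java 7 NIO.2 calls the comment refers to: on Windows a rename on top of an existing file can fail, so the target is deleted first and the move is then performed, with Files.move/Files.delete raising a descriptive IOException instead of the old boolean-returning java.io.File methods:

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

// Sketch only (not the 5383 patch): Java 7 file APIs give a real exception on failure,
// and deleting the destination first avoids the Windows "can't rename over an existing file" case.
public class RenameOverExistingSketch
{
    public static void renameOverExisting(Path from, Path to) throws IOException
    {
        Files.deleteIfExists(to);                              // Windows: clear the target first
        Files.move(from, to, StandardCopyOption.ATOMIC_MOVE);  // then rename atomically
    }

    public static void main(String[] args) throws IOException
    {
        Path tmp = Files.createTempFile("Statistics", ".db-tmp");
        Path dst = Paths.get(tmp.toString().replace(".db-tmp", ".db"));
        renameOverExisting(tmp, dst);
        System.out.println("renamed to " + dst);
        Files.deleteIfExists(dst);
    }
}
{code}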

 Windows 7 deleting/renaming files problem
 -

 Key: CASSANDRA-5383
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5383
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Affects Versions: 2.0 beta 1
Reporter: Ryan McGuire
Assignee: Marcus Eriksson
 Fix For: 2.0.2

 Attachments: 
 0001-CASSANDRA-5383-cant-move-a-file-on-top-of-another-fi.patch, 
 0001-CASSANDRA-5383-v2.patch, 
 0001-use-Java7-apis-for-deleting-and-moving-files-and-cre.patch, 
 5383_patch_v2_system.log, 5383-v3.patch, cant_move_file_patch.log, 
 test_log.5383.patch_v2.log.txt, v2+cant_move_file_patch.log


 Two unit tests are failing on Windows 7 due to errors in renaming/deleting 
 files:
 org.apache.cassandra.db.ColumnFamilyStoreTest: 
 {code}
 [junit] Testsuite: org.apache.cassandra.db.ColumnFamilyStoreTest
 [junit] Tests run: 27, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 
 13.904 sec
 [junit] 
 [junit] - Standard Error -
 [junit] ERROR 13:06:46,058 Unable to delete 
 build\test\cassandra\data\Keyspace1\Indexed2\Keyspace1-Indexed2.birthdate_index-ja-1-Data.db
  (it will be removed on server restart; we'll also retry after GC)
 [junit] ERROR 13:06:48,508 Fatal exception in thread 
 Thread[NonPeriodicTasks:1,5,main]
 [junit] java.lang.RuntimeException: Tried to hard link to file that does 
 not exist 
 build\test\cassandra\data\Keyspace1\Standard1\Keyspace1-Standard1-ja-7-Statistics.db
 [junit]   at 
 org.apache.cassandra.io.util.FileUtils.createHardLink(FileUtils.java:72)
 [junit]   at 
 org.apache.cassandra.io.sstable.SSTableReader.createLinks(SSTableReader.java:1057)
 [junit]   at 
 org.apache.cassandra.db.DataTracker$1.run(DataTracker.java:168)
 [junit]   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
 [junit]   at 
 java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 [junit]   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 [junit]   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
 [junit]   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
 [junit]   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
 [junit]   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
 [junit]   at java.lang.Thread.run(Thread.java:662)
 [junit] -  ---
 [junit] Testcase: 
 testSliceByNamesCommandOldMetatada(org.apache.cassandra.db.ColumnFamilyStoreTest):
   Caused an ERROR
 [junit] Failed to rename 
 build\test\cassandra\data\Keyspace1\Standard1\Keyspace1-Standard1-ja-6-Statistics.db-tmp
  to 
 build\test\cassandra\data\Keyspace1\Standard1\Keyspace1-Standard1-ja-6-Statistics.db
 [junit] java.lang.RuntimeException: Failed to rename 
 build\test\cassandra\data\Keyspace1\Standard1\Keyspace1-Standard1-ja-6-Statistics.db-tmp
  to 
 build\test\cassandra\data\Keyspace1\Standard1\Keyspace1-Standard1-ja-6-Statistics.db
 [junit]   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:133)
 [junit]   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:122)
 [junit]   at 
 org.apache.cassandra.db.compaction.LeveledManifest.mutateLevel(LeveledManifest.java:575)
 [junit]   at 
 org.apache.cassandra.db.ColumnFamilyStore.loadNewSSTables(ColumnFamilyStore.java:589)
 [junit]   at 
 org.apache.cassandra.db.ColumnFamilyStoreTest.testSliceByNamesCommandOldMetatada(ColumnFamilyStoreTest.java:885)
 [junit] 
 [junit] 
 [junit] Testcase: 
 testRemoveUnifinishedCompactionLeftovers(org.apache.cassandra.db.ColumnFamilyStoreTest):
 Caused an ERROR
 [junit] java.io.IOException: Failed to delete 
 c:\Users\Ryan\git\cassandra\build\test\cassandra\data\Keyspace1\Standard3\Keyspace1-Standard3-ja-2-Data.db
 [junit] FSWriteError in 
 build\test\cassandra\data\Keyspace1\Standard3\Keyspace1-Standard3-ja-2-Data.db
 [junit]   at 
 org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:112)
 [junit]   at 
 org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:103)
 [junit]   at 
 org.apache.cassandra.io.sstable.SSTable.delete(SSTable.java:139)
 [junit]   at 
 

[jira] [Commented] (CASSANDRA-6101) Debian init script broken

2013-10-03 Thread Pieter Callewaert (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13785036#comment-13785036
 ] 

Pieter Callewaert commented on CASSANDRA-6101:
--

Not really important, but status isn't working with this patch:

root ~ # service cassandra status
 * could not access pidfile for Cassandra

 Debian init script broken
 -

 Key: CASSANDRA-6101
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6101
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Anton Winter
Assignee: Eric Evans
Priority: Minor
 Attachments: 6101-classpath.patch, 6101.txt


 The debian init script released in 2.0.1 contains 2 issues:
 # The pidfile directory is not created if it doesn't already exist.
 # Classpath not exported to the start-stop-daemon.
 These lead to the init script not picking up jna.jar, or anything from the 
 debian EXTRA_CLASSPATH environment variable, and the init script not being 
 able to stop/restart Cassandra.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6129) get java.util.ConcurrentModificationException while bulkloading from sstable for widerow table

2013-10-03 Thread koray sariteke (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

koray sariteke updated CASSANDRA-6129:
--

Attachment: BulkLoader.diff

 get java.util.ConcurrentModificationException while bulkloading from sstable 
 for widerow table
 --

 Key: CASSANDRA-6129
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6129
 Project: Cassandra
  Issue Type: Bug
  Components: Core, Tools
 Environment: three cassandra 2.0.1 node
 jdk 7
 linux - ubuntu
Reporter: koray sariteke
Assignee: Jonathan Ellis
 Fix For: 2.0.2

 Attachments: BulkLoader.diff


 I haven't faced this problem with Cassandra 1.2.6.
 I have created widerow sstables with SSTableSimpleUnsortedWriter. When I 
 tried to load the sstables with sstableloader, I got 
 java.util.ConcurrentModificationException after a while (not at the beginning 
 of the streaming).
 Exception is :
 progress: [/192.168.103.5 0/39 (0%)] [/192.168.103.3 0/39 (0%)] 
 [/192.168.103.1 0/39 (0%)] [total: 0% - 15MB/s (avg: 0MB/s)] INFO 
 00:45:23,542 [Stream #c0f53e00-2ae2-11e3-ab6b-99a3e9e32246] Session with 
 /192.168.103.3 is complete
 progress: [/192.168.103.5 0/39 (0%)] [/192.168.103.3 0/39 (0%)] 
 [/192.168.103.1 0/39 (0%)] [total: 0% - 3MB/s (avg: 1MB/s)]Exception in 
 thread STREAM-OUT-/192.168.103.3 java.util.ConcurrentModificationException
   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:894)
   at java.util.HashMap$EntryIterator.next(HashMap.java:934)
   at java.util.HashMap$EntryIterator.next(HashMap.java:932)
   at 
 org.apache.cassandra.tools.BulkLoader$ProgressIndicator.handleStreamEvent(BulkLoader.java:129)
   at 
 org.apache.cassandra.streaming.StreamResultFuture.fireStreamEvent(StreamResultFuture.java:198)
   at 
 org.apache.cassandra.streaming.StreamResultFuture.handleProgress(StreamResultFuture.java:191)
   at 
 org.apache.cassandra.streaming.StreamSession.progress(StreamSession.java:474)
   at 
 org.apache.cassandra.streaming.StreamWriter.write(StreamWriter.java:105)
   at 
 org.apache.cassandra.streaming.messages.FileMessage$1.serialize(FileMessage.java:73)
   at 
 org.apache.cassandra.streaming.messages.FileMessage$1.serialize(FileMessage.java:45)
   at 
 org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:44)
   at 
 org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:384)
   at 
 org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:357)
   at java.lang.Thread.run(Thread.java:781)
 progress: [/192.168.103.5 0/39 (3%)] [/192.168.103.3 0/39 (0%)] 
 [/192.168.103.1 0/39 (2%)] [total: 1% - 2147483647MB/s (avg: 
 12MB/s)]Exception in thread STREAM-OUT-/192.168.103.1 
 java.util.ConcurrentModificationException
   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:894)
   at java.util.HashMap$KeyIterator.next(HashMap.java:928)
 progress: [/192.168.103.5 0/39 (3%)] [/192.168.103.3 0/39 (0%)] 
 [/192.168.103.1 0/39 (2%)] [total: 1% - 2147483647MB/s (avg: 12MB/s)]
   at 
 org.apache.cassandra.streaming.StreamResultFuture.fireStreamEvent(StreamResultFuture.java:198)
   at 
 org.apache.cassandra.streaming.StreamResultFuture.handleProgress(StreamResultFuture.java:191)
   at 
 org.apache.cassandra.streaming.StreamSession.progress(StreamSession.java:474)
   at 
 org.apache.cassandra.streaming.StreamWriter.write(StreamWriter.java:105)
   at 
 org.apache.cassandra.streaming.messages.FileMessage$1.serialize(FileMessage.java:73)
   at 
 org.apache.cassandra.streaming.messages.FileMessage$1.serialize(FileMessage.java:45)
   at 
 org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:44)
   at 
 org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:384)
   at 
 org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:357)
   at java.lang.Thread.run(Thread.java:781)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6129) get java.util.ConcurrentModificationException while bulkloading from sstable for widerow table

2013-10-03 Thread koray sariteke (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

koray sariteke updated CASSANDRA-6129:
--

Attachment: (was: BulkLoader.diff)

 get java.util.ConcurrentModificationException while bulkloading from sstable 
 for widerow table
 --

 Key: CASSANDRA-6129
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6129
 Project: Cassandra
  Issue Type: Bug
  Components: Core, Tools
 Environment: three cassandra 2.0.1 node
 jdk 7
 linux - ubuntu
Reporter: koray sariteke
Assignee: Jonathan Ellis
 Fix For: 2.0.2

 Attachments: BulkLoader.diff


 I haven't faced this problem with Cassandra 1.2.6.
 I have created widerow sstables with SSTableSimpleUnsortedWriter. When I 
 tried to load the sstables with sstableloader, I got 
 java.util.ConcurrentModificationException after a while (not at the beginning 
 of the streaming).
 Exception is :
 progress: [/192.168.103.5 0/39 (0%)] [/192.168.103.3 0/39 (0%)] 
 [/192.168.103.1 0/39 (0%)] [total: 0% - 15MB/s (avg: 0MB/s)] INFO 
 00:45:23,542 [Stream #c0f53e00-2ae2-11e3-ab6b-99a3e9e32246] Session with 
 /192.168.103.3 is complete
 progress: [/192.168.103.5 0/39 (0%)] [/192.168.103.3 0/39 (0%)] 
 [/192.168.103.1 0/39 (0%)] [total: 0% - 3MB/s (avg: 1MB/s)]Exception in 
 thread STREAM-OUT-/192.168.103.3 java.util.ConcurrentModificationException
   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:894)
   at java.util.HashMap$EntryIterator.next(HashMap.java:934)
   at java.util.HashMap$EntryIterator.next(HashMap.java:932)
   at 
 org.apache.cassandra.tools.BulkLoader$ProgressIndicator.handleStreamEvent(BulkLoader.java:129)
   at 
 org.apache.cassandra.streaming.StreamResultFuture.fireStreamEvent(StreamResultFuture.java:198)
   at 
 org.apache.cassandra.streaming.StreamResultFuture.handleProgress(StreamResultFuture.java:191)
   at 
 org.apache.cassandra.streaming.StreamSession.progress(StreamSession.java:474)
   at 
 org.apache.cassandra.streaming.StreamWriter.write(StreamWriter.java:105)
   at 
 org.apache.cassandra.streaming.messages.FileMessage$1.serialize(FileMessage.java:73)
   at 
 org.apache.cassandra.streaming.messages.FileMessage$1.serialize(FileMessage.java:45)
   at 
 org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:44)
   at 
 org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:384)
   at 
 org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:357)
   at java.lang.Thread.run(Thread.java:781)
 progress: [/192.168.103.5 0/39 (3%)] [/192.168.103.3 0/39 (0%)] 
 [/192.168.103.1 0/39 (2%)] [total: 1% - 2147483647MB/s (avg: 
 12MB/s)]Exception in thread STREAM-OUT-/192.168.103.1 
 java.util.ConcurrentModificationException
   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:894)
   at java.util.HashMap$KeyIterator.next(HashMap.java:928)
 progress: [/192.168.103.5 0/39 (3%)] [/192.168.103.3 0/39 (0%)] 
 [/192.168.103.1 0/39 (2%)] [total: 1% - 2147483647MB/s (avg: 12MB/s)]
   at 
 org.apache.cassandra.streaming.StreamResultFuture.fireStreamEvent(StreamResultFuture.java:198)
   at 
 org.apache.cassandra.streaming.StreamResultFuture.handleProgress(StreamResultFuture.java:191)
   at 
 org.apache.cassandra.streaming.StreamSession.progress(StreamSession.java:474)
   at 
 org.apache.cassandra.streaming.StreamWriter.write(StreamWriter.java:105)
   at 
 org.apache.cassandra.streaming.messages.FileMessage$1.serialize(FileMessage.java:73)
   at 
 org.apache.cassandra.streaming.messages.FileMessage$1.serialize(FileMessage.java:45)
   at 
 org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:44)
   at 
 org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:384)
   at 
 org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:357)
   at java.lang.Thread.run(Thread.java:781)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Comment Edited] (CASSANDRA-6129) get java.util.ConcurrentModificationException while bulkloading from sstable for widerow table

2013-10-03 Thread koray sariteke (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13784963#comment-13784963
 ] 

koray sariteke edited comment on CASSANDRA-6129 at 10/3/13 12:38 PM:
-

I have tested against 2.0.1 and hit the problem. It seems that a concurrent set 
is needed for progressByHost.

I made some more changes and attached a git diff file (BulkLoader.diff for 
BulkLoader.java).

Waiting for your comments.

Thanks
Koray 


was (Author: ksaritek):
I have tested against 2.0.1 and got the problem. Seems that concurrent set is 
needed for progressByHost. 

I made some more changes and attached two git diff file (ProgressInfo.diff for 
ProgressInfo.java and BulkLoader.diff for BulkLoader.java)

wait for your comment.

Thanks
Koray 

 get java.util.ConcurrentModificationException while bulkloading from sstable 
 for widerow table
 --

 Key: CASSANDRA-6129
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6129
 Project: Cassandra
  Issue Type: Bug
  Components: Core, Tools
 Environment: three cassandra 2.0.1 node
 jdk 7
 linux - ubuntu
Reporter: koray sariteke
Assignee: Jonathan Ellis
 Fix For: 2.0.2

 Attachments: BulkLoader.diff


 I haven't faced this problem with Cassandra 1.2.6.
 I have created widerow sstables with SSTableSimpleUnsortedWriter. When I 
 tried to load the sstables with sstableloader, I got 
 java.util.ConcurrentModificationException after a while (not at the beginning 
 of the streaming).
 Exception is :
 progress: [/192.168.103.5 0/39 (0%)] [/192.168.103.3 0/39 (0%)] 
 [/192.168.103.1 0/39 (0%)] [total: 0% - 15MB/s (avg: 0MB/s)] INFO 
 00:45:23,542 [Stream #c0f53e00-2ae2-11e3-ab6b-99a3e9e32246] Session with 
 /192.168.103.3 is complete
 progress: [/192.168.103.5 0/39 (0%)] [/192.168.103.3 0/39 (0%)] 
 [/192.168.103.1 0/39 (0%)] [total: 0% - 3MB/s (avg: 1MB/s)]Exception in 
 thread STREAM-OUT-/192.168.103.3 java.util.ConcurrentModificationException
   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:894)
   at java.util.HashMap$EntryIterator.next(HashMap.java:934)
   at java.util.HashMap$EntryIterator.next(HashMap.java:932)
   at 
 org.apache.cassandra.tools.BulkLoader$ProgressIndicator.handleStreamEvent(BulkLoader.java:129)
   at 
 org.apache.cassandra.streaming.StreamResultFuture.fireStreamEvent(StreamResultFuture.java:198)
   at 
 org.apache.cassandra.streaming.StreamResultFuture.handleProgress(StreamResultFuture.java:191)
   at 
 org.apache.cassandra.streaming.StreamSession.progress(StreamSession.java:474)
   at 
 org.apache.cassandra.streaming.StreamWriter.write(StreamWriter.java:105)
   at 
 org.apache.cassandra.streaming.messages.FileMessage$1.serialize(FileMessage.java:73)
   at 
 org.apache.cassandra.streaming.messages.FileMessage$1.serialize(FileMessage.java:45)
   at 
 org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:44)
   at 
 org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:384)
   at 
 org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:357)
   at java.lang.Thread.run(Thread.java:781)
 progress: [/192.168.103.5 0/39 (3%)] [/192.168.103.3 0/39 (0%)] 
 [/192.168.103.1 0/39 (2%)] [total: 1% - 2147483647MB/s (avg: 
 12MB/s)]Exception in thread STREAM-OUT-/192.168.103.1 
 java.util.ConcurrentModificationException
   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:894)
   at java.util.HashMap$KeyIterator.next(HashMap.java:928)
 progress: [/192.168.103.5 0/39 (3%)] [/192.168.103.3 0/39 (0%)] 
 [/192.168.103.1 0/39 (2%)] [total: 1% - 2147483647MB/s (avg: 12MB/s)]
   at 
 org.apache.cassandra.streaming.StreamResultFuture.fireStreamEvent(StreamResultFuture.java:198)
   at 
 org.apache.cassandra.streaming.StreamResultFuture.handleProgress(StreamResultFuture.java:191)
   at 
 org.apache.cassandra.streaming.StreamSession.progress(StreamSession.java:474)
   at 
 org.apache.cassandra.streaming.StreamWriter.write(StreamWriter.java:105)
   at 
 org.apache.cassandra.streaming.messages.FileMessage$1.serialize(FileMessage.java:73)
   at 
 org.apache.cassandra.streaming.messages.FileMessage$1.serialize(FileMessage.java:45)
   at 
 org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:44)
   at 
 org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:384)
   at 
 org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:357)
   at java.lang.Thread.run(Thread.java:781)



--
This message was sent by Atlassian JIRA
(v6.1#6144)

[jira] [Updated] (CASSANDRA-6129) get java.util.ConcurrentModificationException while bulkloading from sstable for widerow table

2013-10-03 Thread koray sariteke (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

koray sariteke updated CASSANDRA-6129:
--

Attachment: (was: ProgressInfo.diff)

 get java.util.ConcurrentModificationException while bulkloading from sstable 
 for widerow table
 --

 Key: CASSANDRA-6129
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6129
 Project: Cassandra
  Issue Type: Bug
  Components: Core, Tools
 Environment: three cassandra 2.0.1 node
 jdk 7
 linux - ubuntu
Reporter: koray sariteke
Assignee: Jonathan Ellis
 Fix For: 2.0.2

 Attachments: BulkLoader.diff


 I haven't faced this problem with Cassandra 1.2.6.
 I have created widerow sstables with SSTableSimpleUnsortedWriter. When I 
 tried to load the sstables with sstableloader, I got 
 java.util.ConcurrentModificationException after a while (not at the beginning 
 of the streaming).
 Exception is :
 progress: [/192.168.103.5 0/39 (0%)] [/192.168.103.3 0/39 (0%)] 
 [/192.168.103.1 0/39 (0%)] [total: 0% - 15MB/s (avg: 0MB/s)] INFO 
 00:45:23,542 [Stream #c0f53e00-2ae2-11e3-ab6b-99a3e9e32246] Session with 
 /192.168.103.3 is complete
 progress: [/192.168.103.5 0/39 (0%)] [/192.168.103.3 0/39 (0%)] 
 [/192.168.103.1 0/39 (0%)] [total: 0% - 3MB/s (avg: 1MB/s)]Exception in 
 thread STREAM-OUT-/192.168.103.3 java.util.ConcurrentModificationException
   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:894)
   at java.util.HashMap$EntryIterator.next(HashMap.java:934)
   at java.util.HashMap$EntryIterator.next(HashMap.java:932)
   at 
 org.apache.cassandra.tools.BulkLoader$ProgressIndicator.handleStreamEvent(BulkLoader.java:129)
   at 
 org.apache.cassandra.streaming.StreamResultFuture.fireStreamEvent(StreamResultFuture.java:198)
   at 
 org.apache.cassandra.streaming.StreamResultFuture.handleProgress(StreamResultFuture.java:191)
   at 
 org.apache.cassandra.streaming.StreamSession.progress(StreamSession.java:474)
   at 
 org.apache.cassandra.streaming.StreamWriter.write(StreamWriter.java:105)
   at 
 org.apache.cassandra.streaming.messages.FileMessage$1.serialize(FileMessage.java:73)
   at 
 org.apache.cassandra.streaming.messages.FileMessage$1.serialize(FileMessage.java:45)
   at 
 org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:44)
   at 
 org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:384)
   at 
 org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:357)
   at java.lang.Thread.run(Thread.java:781)
 progress: [/192.168.103.5 0/39 (3%)] [/192.168.103.3 0/39 (0%)] 
 [/192.168.103.1 0/39 (2%)] [total: 1% - 2147483647MB/s (avg: 
 12MB/s)]Exception in thread STREAM-OUT-/192.168.103.1 
 java.util.ConcurrentModificationException
   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:894)
   at java.util.HashMap$KeyIterator.next(HashMap.java:928)
 progress: [/192.168.103.5 0/39 (3%)] [/192.168.103.3 0/39 (0%)] 
 [/192.168.103.1 0/39 (2%)] [total: 1% - 2147483647MB/s (avg: 12MB/s)]
   at 
 org.apache.cassandra.streaming.StreamResultFuture.fireStreamEvent(StreamResultFuture.java:198)
   at 
 org.apache.cassandra.streaming.StreamResultFuture.handleProgress(StreamResultFuture.java:191)
   at 
 org.apache.cassandra.streaming.StreamSession.progress(StreamSession.java:474)
   at 
 org.apache.cassandra.streaming.StreamWriter.write(StreamWriter.java:105)
   at 
 org.apache.cassandra.streaming.messages.FileMessage$1.serialize(FileMessage.java:73)
   at 
 org.apache.cassandra.streaming.messages.FileMessage$1.serialize(FileMessage.java:45)
   at 
 org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:44)
   at 
 org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:384)
   at 
 org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:357)
   at java.lang.Thread.run(Thread.java:781)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6106) QueryState.getTimestamp() & FBUtilities.timestampMicros() reads current timestamp with System.currentTimeMillis() * 1000 instead of System.nanoTime() / 1000

2013-10-03 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13785058#comment-13785058
 ] 

Jonathan Ellis commented on CASSANDRA-6106:
---

I'd be okay with using gettimeofday when we find it in libc and leaving Windows 
with the status quo.
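
For reference, a hedged sketch of one way to get sub-millisecond timestamps without native calls (illustration only, not a proposed patch; the class and method names are invented):

{code}
// Anchor System.nanoTime() to a wall-clock base taken once, so calls made
// within the same millisecond still produce distinct microsecond values.
public final class MicrosecondClock
{
    private static final long BASE_MICROS = System.currentTimeMillis() * 1000;
    private static final long BASE_NANOS = System.nanoTime();

    public static long timestampMicros()
    {
        return BASE_MICROS + (System.nanoTime() - BASE_NANOS) / 1000;
    }

    public static void main(String[] args)
    {
        // Two quick calls usually differ at microsecond granularity.
        System.out.println(timestampMicros());
        System.out.println(timestampMicros());
    }
}
{code}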

 QueryState.getTimestamp() & FBUtilities.timestampMicros() reads current 
 timestamp with System.currentTimeMillis() * 1000 instead of System.nanoTime() 
 / 1000
 

 Key: CASSANDRA-6106
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6106
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: DSE Cassandra 3.1, but also HEAD
Reporter: Christopher Smith
Priority: Minor
  Labels: collision, conflict, timestamp
 Attachments: microtimstamp.patch, microtimstamp_random.patch, 
 microtimstamp_random_rev2.patch


 I noticed this blog post: http://aphyr.com/posts/294-call-me-maybe-cassandra 
 mentioned issues with millisecond rounding in timestamps, and I was able to 
 reproduce the issue. If I specify a timestamp in a mutating query, I get 
 microsecond precision, but if I don't, I get timestamps rounded to the 
 nearest millisecond, at least for my first query on a given connection, which 
 substantially increases the possibilities of collision.
 I believe I found the offending code, though I am by no means sure this is 
 comprehensive. I think we probably need a fairly comprehensive replacement of 
 all uses of System.currentTimeMillis() with System.nanoTime().



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-5078) save compaction merge counts in a system table

2013-10-03 Thread lantao yan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13785079#comment-13785079
 ] 

lantao yan commented on CASSANDRA-5078:
---

@Yuki Morishita, I checked that code, and I think I was right that each thread 
runs against one column family store, so two threads working on the same column 
family with the same timestamp will never happen.

If two compaction threads can work on the same column family store (directory 
and files), we may have a race condition.
[~jbellis], correct me if I am wrong.
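
Just to illustrate the counting side, a hedged sketch of a merge-count accumulator that would stay correct even if two compaction threads ever touched the same store (class and field names are made up, not from the attached patches):

{code}
import java.util.concurrent.atomic.AtomicLongArray;

public final class MergeCounts
{
    // index i = number of sstables merged together in a single compaction
    private final AtomicLongArray counts = new AtomicLongArray(32);

    public void record(int sstablesMerged)
    {
        if (sstablesMerged > 0 && sstablesMerged < counts.length())
            counts.incrementAndGet(sstablesMerged);
    }

    public long get(int sstablesMerged)
    {
        return counts.get(sstablesMerged);
    }
}
{code}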

 save compaction merge counts in a system table
 --

 Key: CASSANDRA-5078
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5078
 Project: Cassandra
  Issue Type: Improvement
Reporter: Matthew F. Dennis
Assignee: lantao yan
Priority: Minor
  Labels: lhf
 Attachments: 5078-v3.txt, 5078-v4.txt, patch1.patch


 we should save the compaction merge stats from CASSANDRA-4894 in the system 
 table and probably expose them via JMX (and nodetool)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Comment Edited] (CASSANDRA-5078) save compaction merge counts in a system table

2013-10-03 Thread lantao yan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13785079#comment-13785079
 ] 

lantao yan edited comment on CASSANDRA-5078 at 10/3/13 1:37 PM:


@Yuki Morishita, I checked that code, and I think I was right that each thread 
runs against one column family store, so two threads working on the same column 
family with the same timestamp will never happen.

CompactionTask.java
protected void runWith(File sstableDirectory) throws Exception // it runs 
against an sstableDirectory, which I guess is a cfs directory.

If two compaction threads can work on the same column family store (directory 
and files), we may have a race condition.
[~jbellis], correct me if I am wrong.


was (Author: yanlantao):
@Yuki Morishita, I checked that code, and I think I was right that each thread 
runs against one column family store, so two threads working on the same column 
family with the same timestamp will never happen.

If two compaction threads can work on the same column family store (directory 
and files), we may have a race condition.
[~jbellis], correct me if I am wrong.

 save compaction merge counts in a system table
 --

 Key: CASSANDRA-5078
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5078
 Project: Cassandra
  Issue Type: Improvement
Reporter: Matthew F. Dennis
Assignee: lantao yan
Priority: Minor
  Labels: lhf
 Attachments: 5078-v3.txt, 5078-v4.txt, patch1.patch


 we should save the compaction merge stats from CASSANDRA-4894 in the system 
 table and probably expose them via JMX (and nodetool)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6136) CQL should not allow an empty string as column identifier

2013-10-03 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13785230#comment-13785230
 ] 

Sylvain Lebresne commented on CASSANDRA-6136:
-

Last patch lgtm, +1. I've also created CASSANDRA-6139 to fix cqlsh DESC command.

 CQL should not allow an empty string as column identifier
 -

 Key: CASSANDRA-6136
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6136
 Project: Cassandra
  Issue Type: Bug
Reporter: Michaël Figuière
Assignee: Dave Brosius
Priority: Minor
 Attachments: 6136.txt, 6136_v2.txt, 6136_v3.txt


 CQL currently allows users to create a table with an empty string as column 
 identifier:
 {code}
 CREATE TABLE t (k int primary key,  int);
 {code}
 Which results in the following table:
 {code}
 CREATE TABLE t (
   k int,
int,
   PRIMARY KEY (k)
 ) WITH
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   index_interval=128 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   default_time_to_live=0 AND
   speculative_retry='NONE' AND
   memtable_flush_period_in_ms=0 AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 {code}
 Empty strings are not allowed for keyspace and table identifiers though.
 I guess it's just a case that we haven't covered. Of course making it illegal 
 in a future version would be a breaking change, but nobody serious would 
 manually have chosen such an identifier...



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6139) Cqlsh shouldn't display empty value alias

2013-10-03 Thread Sylvain Lebresne (JIRA)
Sylvain Lebresne created CASSANDRA-6139:
---

 Summary: Cqlsh shouldn't display empty value alias
 Key: CASSANDRA-6139
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6139
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Aleksey Yeschenko
Priority: Minor


When someone creates:
{noformat}
CREATE TABLE foo (
   k int,
   v int,
   PRIMARY KEY (k, v)
) WITH COMPACT STORAGE
{noformat}
then we internally create a value alias (1.2)/compact value definition 
(2.0) with an empty name. It seems that cqlsh doesn't recognize that fact and 
displays it as:
{noformat}
cqlsh:ks> DESC TABLE foo;

CREATE TABLE foo (
  k int,
  v int,
   blob,
  PRIMARY KEY (k, v)
) WITH COMPACT STORAGE AND ...
{noformat}
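
For what it's worth, a toy sketch of the intended fix (cqlsh itself is Python; the class and field names below are invented just to show the empty value alias being skipped when the DDL is rebuilt):

{code}
import java.util.*;

public class DescribeSketch
{
    static final class Col
    {
        final String name, type;
        Col(String name, String type) { this.name = name; this.type = type; }
    }

    public static void main(String[] args)
    {
        List<Col> cols = Arrays.asList(new Col("k", "int"), new Col("v", "int"), new Col("", "blob"));
        StringBuilder ddl = new StringBuilder("CREATE TABLE foo (\n");
        for (Col c : cols)
        {
            if (c.name.isEmpty())   // internal value alias of a COMPACT STORAGE table
                continue;
            ddl.append("  ").append(c.name).append(' ').append(c.type).append(",\n");
        }
        ddl.append("  PRIMARY KEY (k, v)\n) WITH COMPACT STORAGE;");
        System.out.println(ddl);
    }
}
{code}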



--
This message was sent by Atlassian JIRA
(v6.1#6144)


git commit: Disallow empty column names in cql patch by dbrosius reviewed by slebresne for cassandra-6136

2013-10-03 Thread dbrosius
Updated Branches:
  refs/heads/cassandra-2.0 b3647d9e7 -> 27f4ea2bf


Disallow empty column names in cql
patch by dbrosius reviewed by slebresne for cassandra-6136


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/27f4ea2b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/27f4ea2b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/27f4ea2b

Branch: refs/heads/cassandra-2.0
Commit: 27f4ea2bfd8831ee147ee1ed7a59be9c3308a558
Parents: b3647d9
Author: Dave Brosius dbros...@apache.org
Authored: Thu Oct 3 10:29:04 2013 -0400
Committer: Dave Brosius dbros...@apache.org
Committed: Thu Oct 3 10:29:04 2013 -0400

--
 CHANGES.txt  | 1 +
 src/java/org/apache/cassandra/cql3/Cql.g | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/27f4ea2b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 994e8c3..e08e5b6 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -15,6 +15,7 @@
  * Fix potential NPE on composite 2ndary indexes (CASSANDRA-6098)
  * Delete can potentially be skipped in batch (CASSANDRA-6115)
  * Allow alter keyspace on system_traces (CASSANDRA-6016)
+ * Disallow empty column names in cql (CASSANDRA-6136)
 Merged from 1.2:
  * Never return WriteTimeout for CL.ANY (CASSANDRA-6032)
  * Tracing should log write failure rather than raw exceptions (CASSANDRA-6133)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/27f4ea2b/src/java/org/apache/cassandra/cql3/Cql.g
--
diff --git a/src/java/org/apache/cassandra/cql3/Cql.g 
b/src/java/org/apache/cassandra/cql3/Cql.g
index 17afb00..ed950da 100644
--- a/src/java/org/apache/cassandra/cql3/Cql.g
+++ b/src/java/org/apache/cassandra/cql3/Cql.g
@@ -1096,7 +1096,7 @@ STRING_LITERAL
 QUOTED_NAME
 @init{ StringBuilder b = new StringBuilder(); }
 @after{ setText(b.toString()); }
-: '\"' (c=~('\"') { b.appendCodePoint(c); } | '\"' '\"' { b.appendCodePoint('\"'); })* '\"'
+: '\"' (c=~('\"') { b.appendCodePoint(c); } | '\"' '\"' { b.appendCodePoint('\"'); })+ '\"'
 ;
 
 fragment DIGIT
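
For reference, a toy Java stand-in for the rule (a regex, not the grammar itself) showing what the one-character change from * to + does: an empty quoted identifier no longer matches, while ordinary quoted names still do.

{code}
import java.util.regex.Pattern;

public class QuotedNameCheck
{
    // "" inside the quotes stands for an escaped double quote, as in the rule above.
    private static final Pattern OLD_RULE = Pattern.compile("\"(?:[^\"]|\"\")*\"");
    private static final Pattern NEW_RULE = Pattern.compile("\"(?:[^\"]|\"\")+\"");

    public static void main(String[] args)
    {
        System.out.println(OLD_RULE.matcher("\"\"").matches());    // true  -> "" was accepted
        System.out.println(NEW_RULE.matcher("\"\"").matches());    // false -> "" now rejected
        System.out.println(NEW_RULE.matcher("\"a b\"").matches()); // true  -> normal quoted names still fine
    }
}
{code}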



[2/4] git commit: add concurrent sets to progressByHost (CASSANDRA-6129)

2013-10-03 Thread jbellis
add concurrent sets to progressByHost (CASSANDRA-6129)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/31a9a2fd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/31a9a2fd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/31a9a2fd

Branch: refs/heads/trunk
Commit: 31a9a2fd1e93e41632b729b43c0894c0e8e0c4e9
Parents: 27f4ea2
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu Oct 3 09:30:50 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Thu Oct 3 09:30:50 2013 -0500

--
 src/java/org/apache/cassandra/tools/BulkLoader.java | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/31a9a2fd/src/java/org/apache/cassandra/tools/BulkLoader.java
--
diff --git a/src/java/org/apache/cassandra/tools/BulkLoader.java 
b/src/java/org/apache/cassandra/tools/BulkLoader.java
index cd3c7e1..c89bb83 100644
--- a/src/java/org/apache/cassandra/tools/BulkLoader.java
+++ b/src/java/org/apache/cassandra/tools/BulkLoader.java
@@ -24,6 +24,7 @@ import java.util.*;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.TimeUnit;
 
+import com.google.common.collect.Sets;
 import org.apache.commons.cli.*;
 import org.apache.thrift.protocol.TBinaryProtocol;
 import org.apache.thrift.protocol.TProtocol;
@@ -115,7 +116,7 @@ public class BulkLoader
 Set<ProgressInfo> progresses = progressByHost.get(progressInfo.peer);
 if (progresses == null)
 {
-progresses = new HashSet<>();
+progresses = Sets.newSetFromMap(new ConcurrentHashMap<ProgressInfo, Boolean>());
 progressByHost.put(progressInfo.peer, progresses);
 }
 if (progresses.contains(progressInfo))
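
For anyone curious why this helps, a standalone toy reproduction (plain JDK, no project code): with a HashSet the iterating thread typically throws ConcurrentModificationException, while a set backed by ConcurrentHashMap can be iterated while another thread adds to it.

{code}
import java.util.*;
import java.util.concurrent.*;

public class ProgressSetDemo
{
    public static void main(String[] args) throws Exception
    {
        // Swap in `new HashSet<Integer>()` here to see the CME under load.
        final Set<Integer> progresses =
                Collections.newSetFromMap(new ConcurrentHashMap<Integer, Boolean>());

        Thread writer = new Thread(new Runnable()
        {
            public void run()
            {
                for (int i = 0; i < 1000000; i++)
                    progresses.add(i);
            }
        });
        writer.start();

        long sum = 0;
        while (writer.isAlive())
            for (int p : progresses)      // safe to iterate while the writer adds
                sum += p;
        writer.join();
        System.out.println("iterated concurrently without CME, sum=" + sum);
    }
}
{code}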



[4/4] git commit: Merge branch 'cassandra-2.0' into trunk

2013-10-03 Thread jbellis
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/db829493
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/db829493
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/db829493

Branch: refs/heads/trunk
Commit: db8294932d0dfaa3cc354794fee61c5531fbb101
Parents: ec308e6 31a9a2f
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu Oct 3 09:31:16 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Thu Oct 3 09:31:16 2013 -0500

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/cql3/Cql.g| 2 +-
 src/java/org/apache/cassandra/tools/BulkLoader.java | 3 ++-
 3 files changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/db829493/CHANGES.txt
--



[3/4] git commit: add concurrent sets to progressByHost (CASSANDRA-6129)

2013-10-03 Thread jbellis
add concurrent sets to progressByHost (CASSANDRA-6129)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/31a9a2fd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/31a9a2fd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/31a9a2fd

Branch: refs/heads/cassandra-2.0
Commit: 31a9a2fd1e93e41632b729b43c0894c0e8e0c4e9
Parents: 27f4ea2
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu Oct 3 09:30:50 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Thu Oct 3 09:30:50 2013 -0500

--
 src/java/org/apache/cassandra/tools/BulkLoader.java | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/31a9a2fd/src/java/org/apache/cassandra/tools/BulkLoader.java
--
diff --git a/src/java/org/apache/cassandra/tools/BulkLoader.java 
b/src/java/org/apache/cassandra/tools/BulkLoader.java
index cd3c7e1..c89bb83 100644
--- a/src/java/org/apache/cassandra/tools/BulkLoader.java
+++ b/src/java/org/apache/cassandra/tools/BulkLoader.java
@@ -24,6 +24,7 @@ import java.util.*;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.TimeUnit;
 
+import com.google.common.collect.Sets;
 import org.apache.commons.cli.*;
 import org.apache.thrift.protocol.TBinaryProtocol;
 import org.apache.thrift.protocol.TProtocol;
@@ -115,7 +116,7 @@ public class BulkLoader
 Set<ProgressInfo> progresses = progressByHost.get(progressInfo.peer);
 if (progresses == null)
 {
-progresses = new HashSet<>();
+progresses = Sets.newSetFromMap(new ConcurrentHashMap<ProgressInfo, Boolean>());
 progressByHost.put(progressInfo.peer, progresses);
 }
 if (progresses.contains(progressInfo))



[1/4] git commit: Disallow empty column names in cql patch by dbrosius reviewed by slebresne for cassandra-6136

2013-10-03 Thread jbellis
Updated Branches:
  refs/heads/cassandra-2.0 27f4ea2bf -> 31a9a2fd1
  refs/heads/trunk ec308e66b -> db8294932


Disallow empty column names in cql
patch by dbrosius reviewed by slebresne for cassandra-6136


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/27f4ea2b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/27f4ea2b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/27f4ea2b

Branch: refs/heads/trunk
Commit: 27f4ea2bfd8831ee147ee1ed7a59be9c3308a558
Parents: b3647d9
Author: Dave Brosius dbros...@apache.org
Authored: Thu Oct 3 10:29:04 2013 -0400
Committer: Dave Brosius dbros...@apache.org
Committed: Thu Oct 3 10:29:04 2013 -0400

--
 CHANGES.txt  | 1 +
 src/java/org/apache/cassandra/cql3/Cql.g | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/27f4ea2b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 994e8c3..e08e5b6 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -15,6 +15,7 @@
  * Fix potential NPE on composite 2ndary indexes (CASSANDRA-6098)
  * Delete can potentially be skipped in batch (CASSANDRA-6115)
  * Allow alter keyspace on system_traces (CASSANDRA-6016)
+ * Disallow empty column names in cql (CASSANDRA-6136)
 Merged from 1.2:
  * Never return WriteTimeout for CL.ANY (CASSANDRA-6032)
  * Tracing should log write failure rather than raw exceptions (CASSANDRA-6133)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/27f4ea2b/src/java/org/apache/cassandra/cql3/Cql.g
--
diff --git a/src/java/org/apache/cassandra/cql3/Cql.g 
b/src/java/org/apache/cassandra/cql3/Cql.g
index 17afb00..ed950da 100644
--- a/src/java/org/apache/cassandra/cql3/Cql.g
+++ b/src/java/org/apache/cassandra/cql3/Cql.g
@@ -1096,7 +1096,7 @@ STRING_LITERAL
 QUOTED_NAME
 @init{ StringBuilder b = new StringBuilder(); }
 @after{ setText(b.toString()); }
-: '\"' (c=~('\"') { b.appendCodePoint(c); } | '\"' '\"' { b.appendCodePoint('\"'); })* '\"'
+: '\"' (c=~('\"') { b.appendCodePoint(c); } | '\"' '\"' { b.appendCodePoint('\"'); })+ '\"'
 ;
 
 fragment DIGIT



[jira] [Resolved] (CASSANDRA-6136) CQL should not allow an empty string as column identifier

2013-10-03 Thread Dave Brosius (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Brosius resolved CASSANDRA-6136.
-

   Resolution: Fixed
Fix Version/s: 2.0.2

committed to cassandra-2.0 as commit 27f4ea2bfd8831ee147ee1ed7a59be9c3308a558

 CQL should not allow an empty string as column identifier
 -

 Key: CASSANDRA-6136
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6136
 Project: Cassandra
  Issue Type: Bug
Reporter: Michaël Figuière
Assignee: Dave Brosius
Priority: Minor
 Fix For: 2.0.2

 Attachments: 6136.txt, 6136_v2.txt, 6136_v3.txt


 CQL currently allows users to create a table with an empty string as column 
 identifier:
 {code}
 CREATE TABLE t (k int primary key,  int);
 {code}
 Which results in the following table:
 {code}
 CREATE TABLE t (
   k int,
int,
   PRIMARY KEY (k)
 ) WITH
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   index_interval=128 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   default_time_to_live=0 AND
   speculative_retry='NONE' AND
   memtable_flush_period_in_ms=0 AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 {code}
 Empty strings are not allowed for keyspace and table identifiers though.
 I guess it's just a case that we haven't covered. Of course making it illegal 
 in a future version would be a breaking change, but nobody serious would 
 manually have chosen such an identifier...



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (CASSANDRA-6129) get java.util.ConcurrentModificationException while bulkloading from sstable for widerow table

2013-10-03 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-6129.
---

   Resolution: Fixed
Reproduced In: 2.0.1, 2.0.0  (was: 2.0.0, 2.0.1)

done in 6ca9b4842942db6ff7a978f1054bb619f07a60ad

 get java.util.ConcurrentModificationException while bulkloading from sstable 
 for widerow table
 --

 Key: CASSANDRA-6129
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6129
 Project: Cassandra
  Issue Type: Bug
  Components: Core, Tools
 Environment: three cassandra 2.0.1 node
 jdk 7
 linux - ubuntu
Reporter: koray sariteke
Assignee: Jonathan Ellis
 Fix For: 2.0.2

 Attachments: BulkLoader.diff


 I haven't faced this problem with Cassandra 1.2.6.
 I have created widerow sstables with SSTableSimpleUnsortedWriter. When I 
 tried to load the sstables with sstableloader, I got 
 java.util.ConcurrentModificationException after a while (not at the beginning 
 of the streaming).
 Exception is :
 progress: [/192.168.103.5 0/39 (0%)] [/192.168.103.3 0/39 (0%)] 
 [/192.168.103.1 0/39 (0%)] [total: 0% - 15MB/s (avg: 0MB/s)] INFO 
 00:45:23,542 [Stream #c0f53e00-2ae2-11e3-ab6b-99a3e9e32246] Session with 
 /192.168.103.3 is complete
 progress: [/192.168.103.5 0/39 (0%)] [/192.168.103.3 0/39 (0%)] 
 [/192.168.103.1 0/39 (0%)] [total: 0% - 3MB/s (avg: 1MB/s)]Exception in 
 thread STREAM-OUT-/192.168.103.3 java.util.ConcurrentModificationException
   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:894)
   at java.util.HashMap$EntryIterator.next(HashMap.java:934)
   at java.util.HashMap$EntryIterator.next(HashMap.java:932)
   at 
 org.apache.cassandra.tools.BulkLoader$ProgressIndicator.handleStreamEvent(BulkLoader.java:129)
   at 
 org.apache.cassandra.streaming.StreamResultFuture.fireStreamEvent(StreamResultFuture.java:198)
   at 
 org.apache.cassandra.streaming.StreamResultFuture.handleProgress(StreamResultFuture.java:191)
   at 
 org.apache.cassandra.streaming.StreamSession.progress(StreamSession.java:474)
   at 
 org.apache.cassandra.streaming.StreamWriter.write(StreamWriter.java:105)
   at 
 org.apache.cassandra.streaming.messages.FileMessage$1.serialize(FileMessage.java:73)
   at 
 org.apache.cassandra.streaming.messages.FileMessage$1.serialize(FileMessage.java:45)
   at 
 org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:44)
   at 
 org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:384)
   at 
 org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:357)
   at java.lang.Thread.run(Thread.java:781)
 progress: [/192.168.103.5 0/39 (3%)] [/192.168.103.3 0/39 (0%)] 
 [/192.168.103.1 0/39 (2%)] [total: 1% - 2147483647MB/s (avg: 
 12MB/s)]Exception in thread STREAM-OUT-/192.168.103.1 
 java.util.ConcurrentModificationException
   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:894)
   at java.util.HashMap$KeyIterator.next(HashMap.java:928)
 progress: [/192.168.103.5 0/39 (3%)] [/192.168.103.3 0/39 (0%)] 
 [/192.168.103.1 0/39 (2%)] [total: 1% - 2147483647MB/s (avg: 12MB/s)]
   at 
 org.apache.cassandra.streaming.StreamResultFuture.fireStreamEvent(StreamResultFuture.java:198)
   at 
 org.apache.cassandra.streaming.StreamResultFuture.handleProgress(StreamResultFuture.java:191)
   at 
 org.apache.cassandra.streaming.StreamSession.progress(StreamSession.java:474)
   at 
 org.apache.cassandra.streaming.StreamWriter.write(StreamWriter.java:105)
   at 
 org.apache.cassandra.streaming.messages.FileMessage$1.serialize(FileMessage.java:73)
   at 
 org.apache.cassandra.streaming.messages.FileMessage$1.serialize(FileMessage.java:45)
   at 
 org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:44)
   at 
 org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:384)
   at 
 org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:357)
   at java.lang.Thread.run(Thread.java:781)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6107) CQL3 Batch statement memory leak

2013-10-03 Thread Lyuben Todorov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lyuben Todorov updated CASSANDRA-6107:
--

Attachment: 6107_v2.patch

Changed the MemoryMeter to measure the full map rather than enforcing 
restrictions on individual prepared statements. The hardcoded maximum is 100MB 
for the thrift cache and 100MB for the CQL cache. 
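
A rough sketch of the idea (not the attached patch; the class below is invented, and it assumes jamm's MemoryMeter, which needs the jamm -javaagent on the JVM command line):

{code}
import java.util.concurrent.ConcurrentHashMap;
import org.github.jamm.MemoryMeter;

public final class BoundedStatementCache<K, V>
{
    private static final long MAX_CACHE_BYTES = 100L * 1024 * 1024; // 100MB cap

    private final ConcurrentHashMap<K, V> cache = new ConcurrentHashMap<>();
    private final MemoryMeter meter = new MemoryMeter();

    public boolean put(K id, V statement)
    {
        // Measure the whole map rather than each statement individually.
        if (meter.measureDeep(cache) >= MAX_CACHE_BYTES)
            return false;            // cache full; the client has to re-prepare later
        cache.put(id, statement);
        return true;
    }

    public V get(K id)
    {
        return cache.get(id);
    }
}
{code}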



 CQL3 Batch statement memory leak
 

 Key: CASSANDRA-6107
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6107
 Project: Cassandra
  Issue Type: Bug
  Components: API, Core
 Environment: - CASS version: 1.2.8 or 2.0.1, same issue seen in both
 - Running on OSX MacbookPro
 - Sun JVM 1.7
 - Single local cassandra node
 - both CMS and G1 GC used
 - we are using the cass-JDBC driver to submit our batches
Reporter: Constance Eustace
Assignee: Lyuben Todorov
Priority: Minor
 Fix For: 1.2.11

 Attachments: 6107.patch, 6107_v2.patch


 We are doing large volume insert/update tests on a CASS via CQL3. 
 Using a 4GB heap, after roughly 750,000 updates creating/updating 75,000 row keys, 
 we run out of heap, and it never dissipates, and we begin getting this 
 infamous error which many people seem to be encountering:
 WARN [ScheduledTasks:1] 2013-09-26 16:17:10,752 GCInspector.java (line 142) 
 Heap is 0.9383457210434385 full.  You may need to reduce memtable and/or 
 cache sizes.  Cassandra will now flush up to the two largest memtables to 
 free up memory.  Adjust flush_largest_memtables_at threshold in 
 cassandra.yaml if you don't want Cassandra to do this automatically
  INFO [ScheduledTasks:1] 2013-09-26 16:17:10,753 StorageService.java (line 
 3614) Unable to reduce heap usage since there are no dirty column families
 8 and 12 GB heaps appear to delay the problem by roughly proportionate 
 amounts of 75,000 - 100,000 rowkeys per 4GB. Each run of 50,000 row key 
 creations sees the heap grow and never shrink again. 
 We have attempted the following, to no effect:
 - removing all secondary indexes to see if that alleviates overuse of bloom 
 filters 
 - adjusted parameters for compaction throughput
 - adjusted memtable flush thresholds and other parameters 
 By examining heapdumps, it seems apparent that the problem is perpetual 
 retention of CQL3 BATCH statements. We have even tried dropping the keyspaces 
 after the updates and the CQL3 statements are still visible in the heapdump, 
 and after many many many CMS GC runs. G1 also showed this issue.
 The 750,000 statements are broken into batches of roughly 200 statements.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


git commit: eliminate redundant classpath creation

2013-10-03 Thread eevans
Updated Branches:
  refs/heads/cassandra-2.0 31a9a2fd1 -> 723abe2fc


eliminate redundant classpath creation

Patch by eevans; reviewed by Anton Winter for CASSANDRA-6101


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/723abe2f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/723abe2f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/723abe2f

Branch: refs/heads/cassandra-2.0
Commit: 723abe2fc2cbdd89db552e6dd225efe1d4086ebe
Parents: 31a9a2f
Author: Eric Evans eev...@apache.org
Authored: Thu Oct 3 09:40:39 2013 -0500
Committer: Eric Evans eev...@apache.org
Committed: Thu Oct 3 09:42:03 2013 -0500

--
 debian/cassandra.in.sh |  2 ++
 debian/init| 23 ---
 2 files changed, 6 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/723abe2f/debian/cassandra.in.sh
--
diff --git a/debian/cassandra.in.sh b/debian/cassandra.in.sh
index f618895..13005e2 100644
--- a/debian/cassandra.in.sh
+++ b/debian/cassandra.in.sh
@@ -18,3 +18,5 @@ done
 for jar in /usr/share/cassandra/*.jar; do
 CLASSPATH=$CLASSPATH:$jar
 done
+
+CLASSPATH=$CLASSPATH:$EXTRA_CLASSPATH
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/cassandra/blob/723abe2f/debian/init
--
diff --git a/debian/init b/debian/init
index ce929a1..17901e8 100644
--- a/debian/init
+++ b/debian/init
@@ -77,26 +77,9 @@ fi
 # Depend on lsb-base (>= 3.0-6) to ensure that this file is present.
 . /lib/lsb/init-functions
 
+# If JNA is installed, add it to EXTRA_CLASSPATH
 #
-# Function that returns the applications classpath
-#
-classpath()
-{
-cp=$EXTRA_CLASSPATH
-for j in /usr/share/$NAME/lib/*.jar; do
-[ x$cp = x ] && cp=$j || cp=$cp:$j
-done
-for j in /usr/share/$NAME/*.jar; do
-[ x$cp = x ] && cp=$j || cp=$cp:$j
-done
-
-# use JNA if installed in standard location
-[ -r /usr/share/java/jna.jar ] && cp=$cp:/usr/share/java/jna.jar
-
-# Include the conf directory for purposes of log4j-server.properties, and
-# commons-daemon in support of the daemonization class.
-printf $cp:$CONFDIR:/usr/share/java/commons-daemon.jar
-}
+EXTRA_CLASSPATH=/usr/share/java/jna.jar:$EXTRA_CLASSPATH
 
 #
 # Function that returns 0 if process is running, or nonzero if not.
@@ -136,6 +119,8 @@ do_start()
 [ -e `dirname $PIDFILE` ] || \
 install -d -ocassandra -gcassandra -m750 `dirname $PIDFILE`
 
+export EXTRA_CLASSPATH
+
 start-stop-daemon -S -c cassandra -a /usr/sbin/cassandra -q -p $PIDFILE 
-t >/dev/null || return 1
 
 start-stop-daemon -S -c cassandra -a /usr/sbin/cassandra -b -p $PIDFILE 
-- \



[jira] [Commented] (CASSANDRA-6101) Debian init script broken

2013-10-03 Thread Eric Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13785246#comment-13785246
 ] 

Eric Evans commented on CASSANDRA-6101:
---

bq. Yes, that works as well.

Thanks Anton; Committed

 Debian init script broken
 -

 Key: CASSANDRA-6101
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6101
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Anton Winter
Assignee: Eric Evans
Priority: Minor
 Attachments: 6101-classpath.patch, 6101.txt


 The debian init script released in 2.0.1 contains 2 issues:
 # The pidfile directory is not created if it doesn't already exist.
 # Classpath not exported to the start-stop-daemon.
 These lead to the init script not picking up jna.jar, or anything from the 
 debian EXTRA_CLASSPATH environment variable, and the init script not being 
 able to stop/restart Cassandra.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[2/2] git commit: Merge cassandra-2.0 into trunk

2013-10-03 Thread eevans
Merge cassandra-2.0 into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f5476420
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f5476420
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f5476420

Branch: refs/heads/trunk
Commit: f547642072bc10be61a03a0232603a3101d7c867
Parents: db82949 723abe2
Author: Eric Evans eev...@apache.org
Authored: Thu Oct 3 09:42:52 2013 -0500
Committer: Eric Evans eev...@apache.org
Committed: Thu Oct 3 09:42:52 2013 -0500

--
 debian/cassandra.in.sh |  2 ++
 debian/init| 23 ---
 2 files changed, 6 insertions(+), 19 deletions(-)
--




[1/2] git commit: eliminate redundant classpath creation

2013-10-03 Thread eevans
Updated Branches:
  refs/heads/trunk db8294932 -> f54764207


eliminate redundant classpath creation

Patch by eevans; reviewed by Anton Winter for CASSANDRA-6101


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/723abe2f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/723abe2f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/723abe2f

Branch: refs/heads/trunk
Commit: 723abe2fc2cbdd89db552e6dd225efe1d4086ebe
Parents: 31a9a2f
Author: Eric Evans eev...@apache.org
Authored: Thu Oct 3 09:40:39 2013 -0500
Committer: Eric Evans eev...@apache.org
Committed: Thu Oct 3 09:42:03 2013 -0500

--
 debian/cassandra.in.sh |  2 ++
 debian/init| 23 ---
 2 files changed, 6 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/723abe2f/debian/cassandra.in.sh
--
diff --git a/debian/cassandra.in.sh b/debian/cassandra.in.sh
index f618895..13005e2 100644
--- a/debian/cassandra.in.sh
+++ b/debian/cassandra.in.sh
@@ -18,3 +18,5 @@ done
 for jar in /usr/share/cassandra/*.jar; do
 CLASSPATH=$CLASSPATH:$jar
 done
+
+CLASSPATH=$CLASSPATH:$EXTRA_CLASSPATH
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/cassandra/blob/723abe2f/debian/init
--
diff --git a/debian/init b/debian/init
index ce929a1..17901e8 100644
--- a/debian/init
+++ b/debian/init
@@ -77,26 +77,9 @@ fi
 # Depend on lsb-base (>= 3.0-6) to ensure that this file is present.
 . /lib/lsb/init-functions
 
+# If JNA is installed, add it to EXTRA_CLASSPATH
 #
-# Function that returns the applications classpath
-#
-classpath()
-{
-cp=$EXTRA_CLASSPATH
-for j in /usr/share/$NAME/lib/*.jar; do
-[ x$cp = x ] && cp=$j || cp=$cp:$j
-done
-for j in /usr/share/$NAME/*.jar; do
-[ x$cp = x ] && cp=$j || cp=$cp:$j
-done
-
-# use JNA if installed in standard location
-[ -r /usr/share/java/jna.jar ] && cp=$cp:/usr/share/java/jna.jar
-
-# Include the conf directory for purposes of log4j-server.properties, and
-# commons-daemon in support of the daemonization class.
-printf $cp:$CONFDIR:/usr/share/java/commons-daemon.jar
-}
+EXTRA_CLASSPATH=/usr/share/java/jna.jar:$EXTRA_CLASSPATH
 
 #
 # Function that returns 0 if process is running, or nonzero if not.
@@ -136,6 +119,8 @@ do_start()
 [ -e `dirname $PIDFILE` ] || \
 install -d -ocassandra -gcassandra -m750 `dirname $PIDFILE`
 
+export EXTRA_CLASSPATH
+
 start-stop-daemon -S -c cassandra -a /usr/sbin/cassandra -q -p $PIDFILE 
-t >/dev/null || return 1
 
 start-stop-daemon -S -c cassandra -a /usr/sbin/cassandra -b -p $PIDFILE 
-- \



[jira] [Created] (CASSANDRA-6140) Cassandra-cli backward compatibility issue with Cassandra 2.0.1

2013-10-03 Thread DOAN DuyHai (JIRA)
DOAN DuyHai created CASSANDRA-6140:
--

 Summary: Cassandra-cli backward compatibility issue with Cassandra 
2.0.1
 Key: CASSANDRA-6140
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6140
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: Linux Ubuntu, Cassandra 2.0.0
Reporter: DOAN DuyHai


Currently we are using Cassandra 1.2.6 and we want to migrate to 2.0.1.

 Currently we still use Thrift for some column families (migration to CQL3 is 
not done yet for them). We have cassandra-cli script to drop/create fresh 
keyspace, re-create column families and populate referential data:

*Schema creation script*
{code}
drop keyspace xxx;
create keyspace xxx with placement_strategy ...

create column family offers with 
key_validation_class = UTF8Type and
comparator = 'CompositeType(UTF8Type)'  and 
default_validation_class = UTF8Type;
{code}

*Data insertion script*:
{code}
set offers['OFFER1']['PRODUCT1']='test_product';
...
{code}

 When executing the data insertion script with Cassandra 2.0.1, we have the 
following stack trace:
{code}
Invalid cell for CQL3 table offers. The CQL3 column component (COL1) does not 
correspond to a defined CQL3 column
InvalidRequestException(why:Invalid cell for CQL3 table offers. The CQL3 column 
component (COL1) does not correspond to a defined CQL3 column)
at 
org.apache.cassandra.thrift.Cassandra$insert_result$insert_resultStandardScheme.read(Cassandra.java:21447)
at 
org.apache.cassandra.thrift.Cassandra$insert_result$insert_resultStandardScheme.read(Cassandra.java:21433)
at 
org.apache.cassandra.thrift.Cassandra$insert_result.read(Cassandra.java:21367)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at 
org.apache.cassandra.thrift.Cassandra$Client.recv_insert(Cassandra.java:898)
at 
org.apache.cassandra.thrift.Cassandra$Client.insert(Cassandra.java:882)
at org.apache.cassandra.cli.CliClient.executeSet(CliClient.java:987)
at 
org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:231)
at 
org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:201)
at org.apache.cassandra.cli.CliMain.main(CliMain.java:327)
{code}

 This data insertion script works perfectly with Cassandra 1.2.6.

 We face the same issue with Cassandra 2.0.0. It looks like the cassandra-cli 
commands no longer work with Cassandra 2.0.0...

  





--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6140) Cassandra-cli backward compatibility issue with Cassandra 2.0.1

2013-10-03 Thread DOAN DuyHai (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DOAN DuyHai updated CASSANDRA-6140:
---

Description: 
Currently we are using Cassandra 1.2.6 and we want to migrate to 2.0.1.

 We still use Thrift for some column families (migration to CQL3 is not done 
yet for them). We have cassandra-cli script to drop/create fresh keyspace, 
re-create column families and populate referential data:

*Schema creation script*
{code}
drop keyspace xxx;
create keyspace xxx with placement_strategy ...

create column family offers with 
key_validation_class = UTF8Type and
comparator = 'CompositeType(UTF8Type)'  and 
default_validation_class = UTF8Type;
{code}

*Data insertion script*:
{code}
set offers['OFFER1']['PRODUCT1']='test_product';
...
{code}

 When executing the data insertion script with Cassandra 2.0.1, we have the 
following stack trace:
{code}
Invalid cell for CQL3 table offers. The CQL3 column component (COL1) does not 
correspond to a defined CQL3 column
InvalidRequestException(why:Invalid cell for CQL3 table offers. The CQL3 column 
component (COL1) does not correspond to a defined CQL3 column)
at 
org.apache.cassandra.thrift.Cassandra$insert_result$insert_resultStandardScheme.read(Cassandra.java:21447)
at 
org.apache.cassandra.thrift.Cassandra$insert_result$insert_resultStandardScheme.read(Cassandra.java:21433)
at 
org.apache.cassandra.thrift.Cassandra$insert_result.read(Cassandra.java:21367)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at 
org.apache.cassandra.thrift.Cassandra$Client.recv_insert(Cassandra.java:898)
at 
org.apache.cassandra.thrift.Cassandra$Client.insert(Cassandra.java:882)
at org.apache.cassandra.cli.CliClient.executeSet(CliClient.java:987)
at 
org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:231)
at 
org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:201)
at org.apache.cassandra.cli.CliMain.main(CliMain.java:327)
{code}

 This data insertion script works perfectly with Cassandra 1.2.6.

 We face the same issue with Cassandra 2.0.0. It looks like the cassandra-cli 
commands no longer work with Cassandra 2.0.0...

  



  was:
Currently we are using Cassandra 1.2.6 and we want to migrate to 2.0.1.

 Currently we still use Thrift for some column families (migration to CQL3 is 
not done yet for them). We have cassandra-cli script to drop/create fresh 
keyspace, re-create column families and populate referential data:

*Schema creation script*
{code}
drop keyspace xxx;
create keyspace xxx with placement_strategy ...

create column family offers with 
key_validation_class = UTF8Type and
comparator = 'CompositeType(UTF8Type)'  and 
default_validation_class = UTF8Type;
{code}

*Data insertion script*:
{code}
set offers['OFFER1']['PRODUCT1']='test_product';
...
{code}

 When executing the data insertion script with Cassandra 2.0.1, we have the 
following stack trace:
{code}
Invalid cell for CQL3 table offers. The CQL3 column component (COL1) does not 
correspond to a defined CQL3 column
InvalidRequestException(why:Invalid cell for CQL3 table offers. The CQL3 column 
component (COL1) does not correspond to a defined CQL3 column)
at 
org.apache.cassandra.thrift.Cassandra$insert_result$insert_resultStandardScheme.read(Cassandra.java:21447)
at 
org.apache.cassandra.thrift.Cassandra$insert_result$insert_resultStandardScheme.read(Cassandra.java:21433)
at 
org.apache.cassandra.thrift.Cassandra$insert_result.read(Cassandra.java:21367)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at 
org.apache.cassandra.thrift.Cassandra$Client.recv_insert(Cassandra.java:898)
at 
org.apache.cassandra.thrift.Cassandra$Client.insert(Cassandra.java:882)
at org.apache.cassandra.cli.CliClient.executeSet(CliClient.java:987)
at 
org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:231)
at 
org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:201)
at org.apache.cassandra.cli.CliMain.main(CliMain.java:327)
{code}

 This data insertion script works perfectly with Cassandra 1.2.6.

 We face the same issue with Cassandra 2.0.0. It looks like the cassandra-cli 
commands no longer work with Cassandra 2.0.0...

  




 Cassandra-cli backward compatibility issue with Cassandra 2.0.1
 ---

 Key: CASSANDRA-6140
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6140
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: Linux Ubuntu, Cassandra 2.0.0
Reporter: DOAN DuyHai

 Currently we are using Cassandra 1.2.6 and we want to migrate to 2.0.1.
  We still use Thrift for some 

[jira] [Commented] (CASSANDRA-6101) Debian init script broken

2013-10-03 Thread Eric Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13785250#comment-13785250
 ] 

Eric Evans commented on CASSANDRA-6101:
---

bq. Not really important, but status isn't working with this patch..

I'm not seeing that [~pieterc]; which patch are you referring to, 
[6101.txt|https://issues.apache.org/jira/secure/attachment/12605201/6101.txt] 
or 
[6101-classpath.patch|https://issues.apache.org/jira/secure/attachment/12605920/6101-classpath.patch]?

I wouldn't be surprised at this point to find there are more bugs with this, 
but only 
[6101.txt|https://issues.apache.org/jira/secure/attachment/12605201/6101.txt] 
should have had any impact on this, and it is definitely a change for the 
better.

When you see this error, does a PID file exist at 
{{/var/run/cassandra/cassandra.pid}}?  If so, what are the contents of the 
file?  Is Cassandra running, and if so, what is its PID (hint: try {{pgrep -f 
CassandraDaemon}})?

Could you attach the output of {{sh -x /etc/init.d/cassandra status}}?

 Debian init script broken
 -

 Key: CASSANDRA-6101
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6101
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Anton Winter
Assignee: Eric Evans
Priority: Minor
 Attachments: 6101-classpath.patch, 6101.txt


 The debian init script released in 2.0.1 contains 2 issues:
 # The pidfile directory is not created if it doesn't already exist.
 # Classpath not exported to the start-stop-daemon.
 These lead to the init script not picking up jna.jar, or anything from the 
 debian EXTRA_CLASSPATH environment variable, and the init script not being 
 able to stop/restart Cassandra.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6107) CQL3 Batch statement memory leak

2013-10-03 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13785251#comment-13785251
 ] 

Jonathan Ellis commented on CASSANDRA-6107:
---

Use the cache weigher/weightedCapacity API instead of re-measuring the entire 
cache each time.  Then the cache will take care of evicting old entries to make 
room as needed.

Suggest making the capacity 1/256 of the heap size.

Should probably have a separate setting for maximum single-statement size.  If 
a single statement is under this threshold but larger than the cache, execute 
it but do not cache it.

Finally, statementid size should be negligible; I'd leave that out.
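
Roughly this shape (a minimal sketch assuming Guava's CacheBuilder; the value type and the measuredSize() estimate are placeholders, not the attached patch):
{code}
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.Weigher;

public class StatementCacheSketch
{
    // Cap total cached statement weight at roughly 1/256th of the max heap.
    private static final long MAX_WEIGHT = Runtime.getRuntime().maxMemory() / 256;

    private final Cache<Integer, Object> statements = CacheBuilder.newBuilder()
            .maximumWeight(MAX_WEIGHT)
            .weigher(new Weigher<Integer, Object>()
            {
                public int weigh(Integer id, Object statement)
                {
                    // weighed once, when the entry is inserted
                    return measuredSize(statement);
                }
            })
            .build();

    public void put(int id, Object statement)
    {
        // eviction of older entries to stay under MAX_WEIGHT is handled by the cache
        statements.put(id, statement);
    }

    private static int measuredSize(Object statement)
    {
        return 1024; // stand-in for whatever per-statement size estimate is used
    }
}
{code}
With a weigher, each entry is measured once at insertion time and the cache itself evicts old entries to stay under the weight cap.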


 CQL3 Batch statement memory leak
 

 Key: CASSANDRA-6107
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6107
 Project: Cassandra
  Issue Type: Bug
  Components: API, Core
 Environment: - CASS version: 1.2.8 or 2.0.1, same issue seen in both
 - Running on OSX MacbookPro
 - Sun JVM 1.7
 - Single local cassandra node
 - both CMS and G1 GC used
 - we are using the cass-JDBC driver to submit our batches
Reporter: Constance Eustace
Assignee: Lyuben Todorov
Priority: Minor
 Fix For: 1.2.11

 Attachments: 6107.patch, 6107_v2.patch


 We are doing large volume insert/update tests on a CASS via CQL3. 
 Using a 4GB heap, after roughly 750,000 updates that create/update 75,000 row keys, 
 we run out of heap, and it never dissipates, and we begin getting this 
 infamous error which many people seem to be encountering:
 WARN [ScheduledTasks:1] 2013-09-26 16:17:10,752 GCInspector.java (line 142) 
 Heap is 0.9383457210434385 full.  You may need to reduce memtable and/or 
 cache sizes.  Cassandra will now flush up to the two largest memtables to 
 free up memory.  Adjust flush_largest_memtables_at threshold in 
 cassandra.yaml if you don't want Cassandra to do this automatically
  INFO [ScheduledTasks:1] 2013-09-26 16:17:10,753 StorageService.java (line 
 3614) Unable to reduce heap usage since there are no dirty column families
 8 and 12 GB heaps appear to delay the problem by roughly proportionate 
 amounts of 75,000 - 100,000 rowkeys per 4GB. Each run of 50,000 row key 
 creations sees the heap grow and never shrink again. 
 We have attempted to no effect:
 - removing all secondary indexes to see if that alleviates overuse of bloom 
 filters 
 - adjusted parameters for compaction throughput
 - adjusted memtable flush thresholds and other parameters 
 By examining heapdumps, it seems apparent that the problem is perpetual 
 retention of CQL3 BATCH statements. We have even tried dropping the keyspaces 
 after the updates and the CQL3 statement are still visible in the heapdump, 
 and after many many many CMS GC runs. G1 also showed this issue.
 The 750,000 statements are broken into batches of roughly 200 statements.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6107) CQL3 Batch statement memory leak

2013-10-03 Thread Lyuben Todorov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lyuben Todorov updated CASSANDRA-6107:
--

Attachment: Screen Shot 2013-10-03 at 17.59.37.png

Memory Usage graph for batches of 'insert' statements with varying numbers of 
columns.  

 CQL3 Batch statement memory leak
 

 Key: CASSANDRA-6107
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6107
 Project: Cassandra
  Issue Type: Bug
  Components: API, Core
 Environment: - CASS version: 1.2.8 or 2.0.1, same issue seen in both
 - Running on OSX MacbookPro
 - Sun JVM 1.7
 - Single local cassandra node
 - both CMS and G1 GC used
 - we are using the cass-JDBC driver to submit our batches
Reporter: Constance Eustace
Assignee: Lyuben Todorov
Priority: Minor
 Fix For: 1.2.11

 Attachments: 6107.patch, 6107_v2.patch, Screen Shot 2013-10-03 at 
 17.59.37.png


 We are doing large volume insert/update tests on a CASS via CQL3. 
 Using a 4GB heap, after roughly 750,000 updates that create/update 75,000 row keys, 
 we run out of heap, and it never dissipates, and we begin getting this 
 infamous error which many people seem to be encountering:
 WARN [ScheduledTasks:1] 2013-09-26 16:17:10,752 GCInspector.java (line 142) 
 Heap is 0.9383457210434385 full.  You may need to reduce memtable and/or 
 cache sizes.  Cassandra will now flush up to the two largest memtables to 
 free up memory.  Adjust flush_largest_memtables_at threshold in 
 cassandra.yaml if you don't want Cassandra to do this automatically
  INFO [ScheduledTasks:1] 2013-09-26 16:17:10,753 StorageService.java (line 
 3614) Unable to reduce heap usage since there are no dirty column families
 8 and 12 GB heaps appear to delay the problem by roughly proportionate 
 amounts of 75,000 - 100,000 rowkeys per 4GB. Each run of 50,000 row key 
 creations sees the heap grow and never shrink again. 
 We have attempted to no effect:
 - removing all secondary indexes to see if that alleviates overuse of bloom 
 filters 
 - adjusted parameters for compaction throughput
 - adjusted memtable flush thresholds and other parameters 
 By examining heapdumps, it seems apparent that the problem is perpetual 
 retention of CQL3 BATCH statements. We have even tried dropping the keyspaces 
 after the updates and the CQL3 statement are still visible in the heapdump, 
 and after many many many CMS GC runs. G1 also showed this issue.
 The 750,000 statements are broken into batches of roughly 200 statements.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6116) /etc/init.d/cassandra stop and service don't work

2013-10-03 Thread Eric Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13785255#comment-13785255
 ] 

Eric Evans commented on CASSANDRA-6116:
---

This is probably a duplicate of CASSANDRA-6090 and/or CASSANDRA-6101; Could you 
try this again and let me know the result?

 /etc/init.d/cassandra stop and service don't work
 -

 Key: CASSANDRA-6116
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6116
 Project: Cassandra
  Issue Type: Bug
  Components: Packaging
Reporter: Cathy Daw
Assignee: Eric Evans
Priority: Minor

 These used to work in 2.0.0; the problem appears to have been introduced in 2.0.1
 Test Scenario
 {noformat}
 # Start Server
 automaton@ip-10-171-39-230:~$ sudo service cassandra start
 xss =  -ea -javaagent:/usr/share/cassandra/lib/jamm-0.2.5.jar 
 -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms1862M -Xmx1862M 
 -Xmn200M -XX:+HeapDumpOnOutOfMemoryError -Xss256k
 # Check Status
 automaton@ip-10-171-39-230:~$ nodetool status
 Datacenter: datacenter1
 ===
 Status=Up/Down
 |/ State=Normal/Leaving/Joining/Moving
 --  AddressLoad   Tokens  Owns   Host ID  
  Rack
 UN  127.0.0.1  81.72 KB   256 100.0%  
 e40ef77c-9cf7-4e27-b651-ede3b7269019  rack1
 # Check Status of service
 automaton@ip-10-171-39-230:~$ sudo service cassandra status
 xss =  -ea -javaagent:/usr/share/cassandra/lib/jamm-0.2.5.jar 
 -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms1862M -Xmx1862M 
 -Xmn200M -XX:+HeapDumpOnOutOfMemoryError -Xss256k
  * Cassandra is not running
 # Stop Server
 automaton@ip-10-171-39-230:~$ sudo service cassandra stop
 xss =  -ea -javaagent:/usr/share/cassandra/lib/jamm-0.2.5.jar 
 -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms1862M -Xmx1862M 
 -Xmn200M -XX:+HeapDumpOnOutOfMemoryError -Xss256k
 # Verify Server is no longer up
 automaton@ip-10-171-39-230:~$ nodetool status
 Datacenter: datacenter1
 ===
 Status=Up/Down
 |/ State=Normal/Leaving/Joining/Moving
 --  AddressLoad   Tokens  Owns   Host ID  
  Rack
 UN  127.0.0.1  81.72 KB   256 100.0%  
 e40ef77c-9cf7-4e27-b651-ede3b7269019  rack1
 {noformat}
 Installation Instructions
 {noformat}
 wget http://people.apache.org/~slebresne/cassandra_2.0.1_all.deb
 sudo dpkg -i cassandra_2.0.1_all.deb # Error about dependencies
 sudo apt-get -f install
 sudo dpkg -i cassandra_2.0.1_all.deb
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6141) Update nodetool to use o.a.c.metrics package instead of deprecated mbeans

2013-10-03 Thread Jonathan Ellis (JIRA)
Jonathan Ellis created CASSANDRA-6141:
-

 Summary: Update nodetool to use o.a.c.metrics package instead of 
deprecated mbeans
 Key: CASSANDRA-6141
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6141
 Project: Cassandra
  Issue Type: Sub-task
  Components: Tools
Reporter: Jonathan Ellis
Assignee: Lyuben Todorov
 Fix For: 2.1






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Comment Edited] (CASSANDRA-6116) /etc/init.d/cassandra stop and service don't work

2013-10-03 Thread Eric Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13785255#comment-13785255
 ] 

Eric Evans edited comment on CASSANDRA-6116 at 10/3/13 3:23 PM:


This is probably a duplicate of CASSANDRA-6090 and/or CASSANDRA-6101; Could you 
try this again and let me know the result?

Edit: Oh, and trying it again will require building a new package from the 
{{cassandra-2.0}} branch; let me know if you need help with this, or web access 
to a packaged snapshot.


was (Author: urandom):
This is probably a duplicate of CASSANDRA-6090 and/or CASSANDRA-6101; Could you 
try this again and let me know the result?

 /etc/init.d/cassandra stop and service don't work
 -

 Key: CASSANDRA-6116
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6116
 Project: Cassandra
  Issue Type: Bug
  Components: Packaging
Reporter: Cathy Daw
Assignee: Eric Evans
Priority: Minor

 These used to work in 2.0.0; the problem appears to have been introduced in 2.0.1
 Test Scenario
 {noformat}
 # Start Server
 automaton@ip-10-171-39-230:~$ sudo service cassandra start
 xss =  -ea -javaagent:/usr/share/cassandra/lib/jamm-0.2.5.jar 
 -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms1862M -Xmx1862M 
 -Xmn200M -XX:+HeapDumpOnOutOfMemoryError -Xss256k
 # Check Status
 automaton@ip-10-171-39-230:~$ nodetool status
 Datacenter: datacenter1
 ===
 Status=Up/Down
 |/ State=Normal/Leaving/Joining/Moving
 --  AddressLoad   Tokens  Owns   Host ID  
  Rack
 UN  127.0.0.1  81.72 KB   256 100.0%  
 e40ef77c-9cf7-4e27-b651-ede3b7269019  rack1
 # Check Status of service
 automaton@ip-10-171-39-230:~$ sudo service cassandra status
 xss =  -ea -javaagent:/usr/share/cassandra/lib/jamm-0.2.5.jar 
 -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms1862M -Xmx1862M 
 -Xmn200M -XX:+HeapDumpOnOutOfMemoryError -Xss256k
  * Cassandra is not running
 # Stop Server
 automaton@ip-10-171-39-230:~$ sudo service cassandra stop
 xss =  -ea -javaagent:/usr/share/cassandra/lib/jamm-0.2.5.jar 
 -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms1862M -Xmx1862M 
 -Xmn200M -XX:+HeapDumpOnOutOfMemoryError -Xss256k
 # Verify Server is no longer up
 automaton@ip-10-171-39-230:~$ nodetool status
 Datacenter: datacenter1
 ===
 Status=Up/Down
 |/ State=Normal/Leaving/Joining/Moving
 --  AddressLoad   Tokens  Owns   Host ID  
  Rack
 UN  127.0.0.1  81.72 KB   256 100.0%  
 e40ef77c-9cf7-4e27-b651-ede3b7269019  rack1
 {noformat}
 Installation Instructions
 {noformat}
 wget http://people.apache.org/~slebresne/cassandra_2.0.1_all.deb
 sudo dpkg -i cassandra_2.0.1_all.deb # Error about dependencies
 sudo apt-get -f install
 sudo dpkg -i cassandra_2.0.1_all.deb
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6131) JAVA_HOME on cassandra-env.sh is ignored on Debian packages

2013-10-03 Thread Eric Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13785278#comment-13785278
 ] 

Eric Evans commented on CASSANDRA-6131:
---

Could you expound on this a bit?  Are you trying to _set_ {{JAVA_HOME}} from 
within {{cassandra-env.sh}}?

 JAVA_HOME on cassandra-env.sh is ignored on Debian packages
 ---

 Key: CASSANDRA-6131
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6131
 Project: Cassandra
  Issue Type: Bug
  Components: Packaging
 Environment: I've just upgraded to the 2.0.1 package from the apache 
 repositories using apt. I had the JAVA_HOME environment variable set in 
 /etc/cassandra/cassandra-env.sh but after the upgrade it only worked by 
 setting it in the /usr/sbin/cassandra script. I can't configure java 7 system 
 wide, only for cassandra.
 Off-topic: Thanks for getting rid of the jsvc mess.
Reporter: Sebastián Lacuesta
Assignee: Eric Evans





--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6142) Remove multithreaded compaction

2013-10-03 Thread Jonathan Ellis (JIRA)
Jonathan Ellis created CASSANDRA-6142:
-

 Summary: Remove multithreaded compaction
 Key: CASSANDRA-6142
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6142
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 2.1


There is at best a very small sweet spot for multithreaded compaction 
(ParallelCompactionIterable).  For large rows, we stall the pipeline and fall 
back to a single LCR pass.  For small rows, the overhead of the coordination 
outweighs the benefits of parallelization (45s to compact 2x1M stress rows with 
multithreading enabled, vs 35 with it disabled).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6142) Remove multithreaded compaction

2013-10-03 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13785287#comment-13785287
 ] 

Jonathan Ellis commented on CASSANDRA-6142:
---

I tried parallelizing at the OnDiskAtomIterator level instead 
(thread-per-iterator-per-partition, buffering into a queue) and for small 
partitions the performance is ridiculously bad, easily 100x worse than single 
threaded mode.

Any better ideas [~krummas] [~yukim] [~iamaleksey] [~slebresne]?  If not I will 
post a patch to rip out PCI.

 Remove multithreaded compaction
 ---

 Key: CASSANDRA-6142
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6142
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 2.1


 There is at best a very small sweet spot for multithreaded compaction 
 (ParallelCompactionIterable).  For large rows, we stall the pipeline and fall 
 back to a single LCR pass.  For small rows, the overhead of the coordination 
 outweighs the benefits of parallelization (45s to compact 2x1M stress rows 
 with multithreading enabled, vs 35 with it disabled).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-5383) Windows 7 deleting/renaming files problem

2013-10-03 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13785290#comment-13785290
 ] 

Jonathan Ellis commented on CASSANDRA-5383:
---

+1

 Windows 7 deleting/renaming files problem
 -

 Key: CASSANDRA-5383
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5383
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Affects Versions: 2.0 beta 1
Reporter: Ryan McGuire
Assignee: Marcus Eriksson
 Fix For: 2.0.2

 Attachments: 
 0001-CASSANDRA-5383-cant-move-a-file-on-top-of-another-fi.patch, 
 0001-CASSANDRA-5383-v2.patch, 
 0001-use-Java7-apis-for-deleting-and-moving-files-and-cre.patch, 
 5383_patch_v2_system.log, 5383-v3.patch, cant_move_file_patch.log, 
 test_log.5383.patch_v2.log.txt, v2+cant_move_file_patch.log


 Two unit tests are failing on Windows 7 due to errors in renaming/deleting 
 files:
 org.apache.cassandra.db.ColumnFamilyStoreTest: 
 {code}
 [junit] Testsuite: org.apache.cassandra.db.ColumnFamilyStoreTest
 [junit] Tests run: 27, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 
 13.904 sec
 [junit] 
 [junit] - Standard Error -
 [junit] ERROR 13:06:46,058 Unable to delete 
 build\test\cassandra\data\Keyspace1\Indexed2\Keyspace1-Indexed2.birthdate_index-ja-1-Data.db
  (it will be removed on server restart; we'll also retry after GC)
 [junit] ERROR 13:06:48,508 Fatal exception in thread 
 Thread[NonPeriodicTasks:1,5,main]
 [junit] java.lang.RuntimeException: Tried to hard link to file that does 
 not exist 
 build\test\cassandra\data\Keyspace1\Standard1\Keyspace1-Standard1-ja-7-Statistics.db
 [junit]   at 
 org.apache.cassandra.io.util.FileUtils.createHardLink(FileUtils.java:72)
 [junit]   at 
 org.apache.cassandra.io.sstable.SSTableReader.createLinks(SSTableReader.java:1057)
 [junit]   at 
 org.apache.cassandra.db.DataTracker$1.run(DataTracker.java:168)
 [junit]   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
 [junit]   at 
 java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 [junit]   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 [junit]   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
 [junit]   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
 [junit]   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
 [junit]   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
 [junit]   at java.lang.Thread.run(Thread.java:662)
 [junit] -  ---
 [junit] Testcase: 
 testSliceByNamesCommandOldMetatada(org.apache.cassandra.db.ColumnFamilyStoreTest):
   Caused an ERROR
 [junit] Failed to rename 
 build\test\cassandra\data\Keyspace1\Standard1\Keyspace1-Standard1-ja-6-Statistics.db-tmp
  to 
 build\test\cassandra\data\Keyspace1\Standard1\Keyspace1-Standard1-ja-6-Statistics.db
 [junit] java.lang.RuntimeException: Failed to rename 
 build\test\cassandra\data\Keyspace1\Standard1\Keyspace1-Standard1-ja-6-Statistics.db-tmp
  to 
 build\test\cassandra\data\Keyspace1\Standard1\Keyspace1-Standard1-ja-6-Statistics.db
 [junit]   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:133)
 [junit]   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:122)
 [junit]   at 
 org.apache.cassandra.db.compaction.LeveledManifest.mutateLevel(LeveledManifest.java:575)
 [junit]   at 
 org.apache.cassandra.db.ColumnFamilyStore.loadNewSSTables(ColumnFamilyStore.java:589)
 [junit]   at 
 org.apache.cassandra.db.ColumnFamilyStoreTest.testSliceByNamesCommandOldMetatada(ColumnFamilyStoreTest.java:885)
 [junit] 
 [junit] 
 [junit] Testcase: 
 testRemoveUnifinishedCompactionLeftovers(org.apache.cassandra.db.ColumnFamilyStoreTest):
 Caused an ERROR
 [junit] java.io.IOException: Failed to delete 
 c:\Users\Ryan\git\cassandra\build\test\cassandra\data\Keyspace1\Standard3\Keyspace1-Standard3-ja-2-Data.db
 [junit] FSWriteError in 
 build\test\cassandra\data\Keyspace1\Standard3\Keyspace1-Standard3-ja-2-Data.db
 [junit]   at 
 org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:112)
 [junit]   at 
 org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:103)
 [junit]   at 
 org.apache.cassandra.io.sstable.SSTable.delete(SSTable.java:139)
 [junit]   at 
 org.apache.cassandra.db.ColumnFamilyStore.removeUnfinishedCompactionLeftovers(ColumnFamilyStore.java:507)
 [junit]   at 
 

[jira] [Commented] (CASSANDRA-6001) Add a Tracing line for index/seq scan queries

2013-10-03 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13785302#comment-13785302
 ] 

Jonathan Ellis commented on CASSANDRA-6001:
---

How about this, to leave things as objects until we're ready to turn it into a 
string for tracing?
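
Something along these lines (an illustrative sketch only, not the attached 6001-v2.txt; the tracingEnabled flag stands in for the session's tracing check):
{code}
import java.util.Map;
import java.util.TreeMap;

public class IndexScanTraceSketch
{
    // Stand-in for "is this request being traced?"
    private static boolean tracingEnabled = true;

    // Keep the cardinalities as a Map and only pay for formatting/toString
    // when a trace is actually being recorded.
    static void traceIndexChoice(Map<String, Long> meanCardinalities, String chosenIndex)
    {
        if (!tracingEnabled)
            return;
        System.out.println(String.format("Index mean cardinalities are %s. Scanning with %s.",
                                         meanCardinalities, chosenIndex));
    }

    public static void main(String[] args)
    {
        Map<String, Long> cardinalities = new TreeMap<String, Long>();
        cardinalities.put("users_by_lastname", 100L);
        cardinalities.put("users_by_state", 2000L);
        cardinalities.put("users_by_birthdate", 1000L);
        traceIndexChoice(cardinalities, "users_by_lastname");
    }
}
{code}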

 Add a Tracing line for index/seq scan queries
 -

 Key: CASSANDRA-6001
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6001
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jonathan Ellis
Assignee: Lyuben Todorov
 Fix For: 1.2.11

 Attachments: 6001.patch, 6001-v2.txt


 Tracing should show the user why a specific index was selected.  Something 
 like
 {noformat}
 Index mean cardinality is {users_by_lastname: 100, users_by_state: 2000, 
 users_by_birthdate: 1000}.  Scanning with users_by_lastname.
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6001) Add a Tracing line for index/seq scan queries

2013-10-03 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6001:
--

Attachment: 6001-v2.txt

 Add a Tracing line for index/seq scan queries
 -

 Key: CASSANDRA-6001
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6001
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jonathan Ellis
Assignee: Lyuben Todorov
 Fix For: 1.2.11

 Attachments: 6001.patch, 6001-v2.txt


 Tracing should show the user why a specific index was selected.  Something 
 like
 {noformat}
 Index mean cardinality is {users_by_lastname: 100, users_by_state: 2000, 
 users_by_birthdate: 1000}.  Scanning with users_by_lastname.
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (CASSANDRA-5943) Add sstablesplit dtest

2013-10-03 Thread Daniel Meyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Meyer resolved CASSANDRA-5943.
-

Resolution: Fixed

 Add sstablesplit dtest
 --

 Key: CASSANDRA-5943
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5943
 Project: Cassandra
  Issue Type: Test
Reporter: Brandon Williams
Assignee: Daniel Meyer
Priority: Minor

 Now that we're shipping sstablesplit, we should add a dtest to make sure it 
 works correctly.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6143) shuffle stress test

2013-10-03 Thread Daniel Meyer (JIRA)
Daniel Meyer created CASSANDRA-6143:
---

 Summary: shuffle stress test
 Key: CASSANDRA-6143
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6143
 Project: Cassandra
  Issue Type: Test
Reporter: Daniel Meyer
Assignee: Daniel Meyer


Determine scalability curves for shuffle as data size and cluster size vary.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Issue Comment Deleted] (CASSANDRA-5078) save compaction merge counts in a system table

2013-10-03 Thread lantao yan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lantao yan updated CASSANDRA-5078:
--

Comment: was deleted

(was: . )

 save compaction merge counts in a system table
 --

 Key: CASSANDRA-5078
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5078
 Project: Cassandra
  Issue Type: Improvement
Reporter: Matthew F. Dennis
Assignee: lantao yan
Priority: Minor
  Labels: lhf
 Attachments: 5078-v3.txt, 5078-v4.txt, patch1.patch


 we should save the compaction merge stats from CASSANDRA-4894 in the system 
 table and probably expose them via JMX (and nodetool)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Comment Edited] (CASSANDRA-5078) save compaction merge counts in a system table

2013-10-03 Thread lantao yan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13785079#comment-13785079
 ] 

lantao yan edited comment on CASSANDRA-5078 at 10/3/13 4:16 PM:


. 


was (Author: yanlantao):
@Yuki Morishita, I checked that code, and I think I was right that each thread 
runs against one column family store,
so two threads working on the same column family with the same timestamp 
will never happen.

CompactionTask.java
protected void runWith(File sstableDirectory) throws Exception // it is 
running against a sstableDirectory, which is a cfs directory I guess.

If two compaction threads can work on the same column family 
store (directory and files), we may have a race condition.
[~jbellis], correct me if I am wrong.

 save compaction merge counts in a system table
 --

 Key: CASSANDRA-5078
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5078
 Project: Cassandra
  Issue Type: Improvement
Reporter: Matthew F. Dennis
Assignee: lantao yan
Priority: Minor
  Labels: lhf
 Attachments: 5078-v3.txt, 5078-v4.txt, patch1.patch


 we should save the compaction merge stats from CASSANDRA-4894 in the system 
 table and probably expose them via JMX (and nodetool)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6144) Schema torture test

2013-10-03 Thread Daniel Meyer (JIRA)
Daniel Meyer created CASSANDRA-6144:
---

 Summary: Schema torture test
 Key: CASSANDRA-6144
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6144
 Project: Cassandra
  Issue Type: Test
Reporter: Daniel Meyer
Assignee: Daniel Meyer


Develop schema torture tests to run both in dtest and against a 
distributed cluster.

The torture test will include repeated cycles of create, alter, drop, truncate, etc.
Run the torture test against variable-size clusters in case schema disagreements 
occur more frequently with more nodes.
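
For illustration, one possible shape of the cycle (a rough sketch assuming the DataStax Java driver; keyspace/table names, thread count and iteration count are arbitrary):
{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SchemaTortureSketch
{
    public static void main(String[] args) throws InterruptedException
    {
        final Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        final Session session = cluster.connect();
        session.execute("CREATE KEYSPACE IF NOT EXISTS torture WITH replication = "
                        + "{'class': 'SimpleStrategy', 'replication_factor': 1}");

        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int t = 0; t < 4; t++)
        {
            final int id = t;
            pool.submit(new Runnable()
            {
                public void run()
                {
                    for (int i = 0; i < 100; i++)
                    {
                        String table = "torture.t_" + id + "_" + i;
                        try
                        {
                            // one torture cycle: create, alter, truncate, drop
                            session.execute("CREATE TABLE " + table + " (k int PRIMARY KEY, v text)");
                            session.execute("ALTER TABLE " + table + " ADD v2 int");
                            session.execute("TRUNCATE " + table);
                            session.execute("DROP TABLE " + table);
                        }
                        catch (RuntimeException e)
                        {
                            // schema disagreement or timeout: record and keep going
                            System.err.println(table + ": " + e);
                        }
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
        cluster.close();
    }
}
{code}
Failures from concurrent DDL are recorded rather than aborting the run, so the test can report how often disagreements actually occur as cluster size grows.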



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-5019) Still too much object allocation on reads

2013-10-03 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13785314#comment-13785314
 ] 

Jonathan Ellis commented on CASSANDRA-5019:
---

Possibly useful: https://github.com/RichardWarburton/slab
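
The idea there, boiled down (an illustrative sketch only, not Cassandra code): keep many values in one buffer and address them by offset instead of allocating an object per cell:
{code}
import java.nio.ByteBuffer;

public class SlabSketch
{
    // One backing buffer holds many fixed-size entries; readers address entries
    // by index instead of materialising a separate object for each one.
    private final ByteBuffer slab;
    private final int entrySize;

    public SlabSketch(int entries, int entrySize)
    {
        this.slab = ByteBuffer.allocateDirect(entries * entrySize);
        this.entrySize = entrySize;
    }

    public void putLong(int index, long value)
    {
        slab.putLong(index * entrySize, value);
    }

    public long getLong(int index)
    {
        return slab.getLong(index * entrySize); // no per-entry allocation on the read path
    }
}
{code}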

 Still too much object allocation on reads
 -

 Key: CASSANDRA-5019
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5019
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Assignee: Yuki Morishita
  Labels: performance
 Fix For: 2.1


 ArrayBackedSortedColumns was a step in the right direction but it's still 
 relatively heavyweight thanks to allocating individual Columns.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6142) Remove multithreaded compaction

2013-10-03 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13785321#comment-13785321
 ] 

Marcus Eriksson commented on CASSANDRA-6142:


I tried improving it a while back as well and got basically the same results; yes, 
we should remove it.

I concluded that the best way to improve the speed was to do more compactions in 
parallel (CASSANDRA-5936 - I should finish that up..)

 Remove multithreaded compaction
 ---

 Key: CASSANDRA-6142
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6142
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 2.1


 There is at best a very small sweet spot for multithreaded compaction 
 (ParallelCompactionIterable).  For large rows, we stall the pipeline and fall 
 back to a single LCR pass.  For small rows, the overhead of the coordination 
 outweighs the benefits of parallelization (45s to compact 2x1M stress rows 
 with multithreading enabled, vs 35 with it disabled).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6131) JAVA_HOME on cassandra-env.sh is ignored on Debian packages

2013-10-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13785322#comment-13785322
 ] 

Sebastián Lacuesta commented on CASSANDRA-6131:
---

Yes, I thought that was the way I should do this; in fact (due to time
constraints) I've just set it in /usr/sbin/cassandra (I know it's really
_awful_). I used to set JAVA_HOME in /etc/cassandra/cassandra-env.sh and
it worked until 2.0.0. I need to use java 6 as the default jvm on my
development machine, but jvm 7 just for cassandra. If I'm really wrong about
how I'm proceeding, I'd be glad to know the right way to set it up.
Thanks a lot!


2013/10/3 Eric Evans (JIRA) j...@apache.org



 JAVA_HOME on cassandra-env.sh is ignored on Debian packages
 ---

 Key: CASSANDRA-6131
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6131
 Project: Cassandra
  Issue Type: Bug
  Components: Packaging
 Environment: I've just upgraded to the 2.0.1 package from the apache 
 repositories using apt. I had the JAVA_HOME environment variable set in 
 /etc/cassandra/cassandra-env.sh but after the upgrade it only worked by 
 setting it in the /usr/sbin/cassandra script. I can't configure java 7 system 
 wide, only for cassandra.
 Off-topic: Thanks for getting rid of the jsvc mess.
Reporter: Sebastián Lacuesta
Assignee: Eric Evans





--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6097) nodetool repair randomly hangs.

2013-10-03 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-6097:
--

Attachment: 6097-1.2.txt

JB is right. We need to handle cases where the connection is lost.
Attaching a patch to listen for JMXConnectionNotification.

There are 2 cases handled. One is notifications being lost: in this case we 
cannot tell if the repair finished, so we log a message telling the user to check the 
server log and go on to the next keyspace to repair. The other is the connection 
being closed: in this case we throw an exception to stop further processing of the repair.
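
The shape of the change, roughly (an illustrative sketch, not the attached 6097-1.2.txt):
{code}
import java.io.IOException;
import java.util.concurrent.CountDownLatch;
import javax.management.Notification;
import javax.management.NotificationListener;
import javax.management.remote.JMXConnectionNotification;
import javax.management.remote.JMXConnector;

// Listen for connection-level JMX notifications so that a lost notification or a
// dropped connection wakes up the thread waiting for "repair finished".
public class RepairConnectionWatchSketch implements NotificationListener
{
    private final CountDownLatch done = new CountDownLatch(1);
    private volatile boolean connectionLost = false;

    public void handleNotification(Notification n, Object handback)
    {
        String type = n.getType();
        if (JMXConnectionNotification.NOTIFS_LOST.equals(type))
        {
            // We may have missed the completion notification: stop waiting,
            // tell the user to check the server log, and move on to the next keyspace.
            done.countDown();
        }
        else if (JMXConnectionNotification.CLOSED.equals(type) || JMXConnectionNotification.FAILED.equals(type))
        {
            // The connection itself is gone: stop waiting and abort further repairs.
            connectionLost = true;
            done.countDown();
        }
    }

    public void watch(JMXConnector connector) throws Exception
    {
        connector.addConnectionNotificationListener(this, null, null);
        done.await(); // in the real runner this races with the normal completion path
        if (connectionLost)
            throw new IOException("JMX connection closed while waiting for repair to finish");
    }
}
{code}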

 nodetool repair randomly hangs.
 ---

 Key: CASSANDRA-6097
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6097
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: DataStax AMI
Reporter: J.B. Langston
Assignee: Yuki Morishita
Priority: Minor
 Attachments: 6097-1.2.txt, dse.stack, nodetool.stack


 nodetool repair randomly hangs. This is not the same issue where repair hangs 
 if a stream is disrupted. This can be reproduced on a single-node cluster 
 where no streaming takes place, so I think this may be a JMX connection or 
 timeout issue. Thread dumps show that nodetool is waiting on a JMX response 
 and there are no repair-related threads running in Cassandra. Nodetool main 
 thread waiting for JMX response:
 {code}
 main prio=5 tid=7ffa4b001800 nid=0x10aedf000 in Object.wait() [10aede000]
java.lang.Thread.State: WAITING (on object monitor)
   at java.lang.Object.wait(Native Method)
   - waiting on 7f90d62e8 (a org.apache.cassandra.utils.SimpleCondition)
   at java.lang.Object.wait(Object.java:485)
   at 
 org.apache.cassandra.utils.SimpleCondition.await(SimpleCondition.java:34)
   - locked 7f90d62e8 (a org.apache.cassandra.utils.SimpleCondition)
   at 
 org.apache.cassandra.tools.RepairRunner.repairAndWait(NodeProbe.java:976)
   at 
 org.apache.cassandra.tools.NodeProbe.forceRepairAsync(NodeProbe.java:221)
   at 
 org.apache.cassandra.tools.NodeCmd.optionalKSandCFs(NodeCmd.java:1444)
   at org.apache.cassandra.tools.NodeCmd.main(NodeCmd.java:1213)
 {code}
 When nodetool hangs, it does not print out the following message:
 Starting repair command #XX, repairing 1 ranges for keyspace XXX
 However, Cassandra logs that repair in system.log:
 1380033480.95  INFO [Thread-154] 10:38:00,882 Starting repair command #X, 
 repairing X ranges for keyspace XXX
 This suggests that the repair command was received by Cassandra but the 
 connection then failed and nodetool didn't receive a response.
 Obviously, running repair on a single-node cluster is pointless but it's the 
 easiest way to demonstrate this problem. The customer who reported this has 
 also seen the issue on his real multi-node cluster.
 Steps to reproduce:
 Note: I reproduced this once on the official DataStax AMI with DSE 3.1.3 
 (Cassandra 1.2.6+patches).  I was unable to reproduce on my Mac using the 
 same version, and subsequent attempts to reproduce it on the AMI were 
 unsuccessful. The customer says he is able to reliably reproduce on 
 his Mac using DSE 3.1.3 and occasionally reproduce it on his real cluster. 
 1) Deploy an AMI using the DataStax AMI at 
 https://aws.amazon.com/amis/datastax-auto-clustering-ami-2-2
 2) Create a test keyspace
 {code}
 create keyspace test WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor': 1};
 {code}
 3) Run an endless loop that runs nodetool repair repeatedly:
 {code}
 while true; do nodetool repair -pr test; done
 {code}
 4) Wait until repair hangs. It may take many tries; the behavior is random.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-5078) save compaction merge counts in a system table

2013-10-03 Thread lantao yan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13785336#comment-13785336
 ] 

lantao yan commented on CASSANDRA-5078:
---

[~yukim], [~jbellis], I checked that code; although two threads can work on 
the same column family, if one thread is compacting, would another thread 
fail? I was worried that if two threads compact the same set of files, we 
could have a race condition? Given that, this problem is not actually changing 
the primary key.

ColumnFamilyStore.java
assert data.getCompacting().isEmpty() : data.getCompacting();

Correct me if I am wrong. I will keep reading the code. Thanks.

 save compaction merge counts in a system table
 --

 Key: CASSANDRA-5078
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5078
 Project: Cassandra
  Issue Type: Improvement
Reporter: Matthew F. Dennis
Assignee: lantao yan
Priority: Minor
  Labels: lhf
 Attachments: 5078-v3.txt, 5078-v4.txt, patch1.patch


 we should save the compaction merge stats from CASSANDRA-4894 in the system 
 table and probably expose them via JMX (and nodetool)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6144) Schema torture test

2013-10-03 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6144:
--

Description: 
Develop schema torture tests to run both in dtest and against a 
distributed cluster.

The torture test will include repeated cycles of *concurrent* create, alter, drop, 
truncate, etc.
Run the torture test against variable-size clusters in case schema disagreements 
occur more frequently with more nodes.

  was:
Develop schema torture tests to run both in dtest and against a 
distributed cluster.

The torture test will include repeated cycles of create, alter, drop, truncate, etc.
Run the torture test against variable-size clusters in case schema disagreements 
occur more frequently with more nodes.


 Schema torture test
 ---

 Key: CASSANDRA-6144
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6144
 Project: Cassandra
  Issue Type: Test
Reporter: Daniel Meyer
Assignee: Daniel Meyer

 Develop schema torture tests to run both in dtest and against a 
 distributed cluster.
 The torture test will include repeated cycles of *concurrent* create, alter, drop, 
 truncate, etc.
 Run the torture test against variable-size clusters in case schema disagreements 
 occur more frequently with more nodes.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-5078) save compaction merge counts in a system table

2013-10-03 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13785358#comment-13785358
 ] 

Yuki Morishita commented on CASSANDRA-5078:
---

Size-tiered/leveled compaction chooses files that aren't compacting when 
determining candidates, and tries to mark the acquired candidates as compacting. If 
marking fails, that compaction is skipped (and it goes on to find the next 
candidates).
So two or more threads can grab different sets of SSTables inside the same 
ColumnFamily.

See, for example 
https://github.com/apache/cassandra/blob/cassandra-2.0/src/java/org/apache/cassandra/db/compaction/SizeTieredCompactionStrategy.java#L57
 and 
https://github.com/apache/cassandra/blob/cassandra-2.0/src/java/org/apache/cassandra/db/DataTracker.java#L188.
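
Boiled down, the marking step looks roughly like this (a simplified sketch of the idea above, not the actual DataTracker code; see the links for the real implementation):
{code}
import java.util.HashSet;
import java.util.Set;

public class CompactingMarkSketch<T>
{
    private final Set<T> compacting = new HashSet<T>();

    // Try to claim the whole candidate set atomically; if any file is already
    // being compacted, the caller skips this compaction and looks for other candidates.
    public synchronized boolean markCompacting(Set<T> candidates)
    {
        for (T sstable : candidates)
            if (compacting.contains(sstable))
                return false;
        compacting.addAll(candidates);
        return true;
    }

    public synchronized void unmarkCompacting(Set<T> candidates)
    {
        compacting.removeAll(candidates);
    }
}
{code}
Because a claim either takes the whole candidate set or nothing, concurrent compaction threads always end up working on disjoint sets of SSTables within the same column family.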

 save compaction merge counts in a system table
 --

 Key: CASSANDRA-5078
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5078
 Project: Cassandra
  Issue Type: Improvement
Reporter: Matthew F. Dennis
Assignee: lantao yan
Priority: Minor
  Labels: lhf
 Attachments: 5078-v3.txt, 5078-v4.txt, patch1.patch


 we should save the compaction merge stats from CASSANDRA-4894 in the system 
 table and probably expose them via JMX (and nodetool)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[01/11] git commit: cleanup

2013-10-03 Thread jbellis
Updated Branches:
  refs/heads/cassandra-1.2 20a805023 -> 92b3622dc
  refs/heads/cassandra-2.0 723abe2fc -> 7d8a56df3
  refs/heads/trunk f54764207 -> 9b06e48bb


cleanup


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/55a55000
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/55a55000
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/55a55000

Branch: refs/heads/trunk
Commit: 55a55000526bc24313bbc7afc08f758d35dda03a
Parents: 31a9a2f
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu Oct 3 09:53:00 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Thu Oct 3 09:53:00 2013 -0500

--
 src/java/org/apache/cassandra/tools/NodeCmd.java | 14 +++---
 1 file changed, 7 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/55a55000/src/java/org/apache/cassandra/tools/NodeCmd.java
--
diff --git a/src/java/org/apache/cassandra/tools/NodeCmd.java 
b/src/java/org/apache/cassandra/tools/NodeCmd.java
index 00375ab..657d7f2 100644
--- a/src/java/org/apache/cassandra/tools/NodeCmd.java
+++ b/src/java/org/apache/cassandra/tools/NodeCmd.java
@@ -880,17 +880,17 @@ public class NodeCmd
                 outs.println("\t\tMemtable cell count: " + cfstore.getMemtableColumnsCount());
                 outs.println("\t\tMemtable data size, bytes: " + cfstore.getMemtableDataSize());
                 outs.println("\t\tMemtable switch count: " + cfstore.getMemtableSwitchCount());
-                outs.println("\t\tRead count: " + cfstore.getReadCount());
-                outs.println("\t\tRead latency, micros: " + String.format("%01.3f", cfstore.getRecentReadLatencyMicros() / 1000) + " ms.");
-                outs.println("\t\tWrite count: " + cfstore.getWriteCount());
-                outs.println("\t\tWrite latency, micros: " + String.format("%01.3f", cfstore.getRecentWriteLatencyMicros() / 1000) + " ms.");
+                outs.println("\t\tLocal read count: " + cfstore.getReadCount());
+                outs.printf("\t\tLocal read latency: %01.3f ms%n", cfstore.getRecentReadLatencyMicros() / 1000);
+                outs.println("\t\tLocal write count: " + cfstore.getWriteCount());
+                outs.printf("\t\tLocal write latency: %01.3f ms%n", cfstore.getRecentWriteLatencyMicros() / 1000);
                 outs.println("\t\tPending tasks: " + cfstore.getPendingTasks());
                 outs.println("\t\tBloom filter false positives: " + cfstore.getBloomFilterFalsePositives());
                 outs.println("\t\tBloom filter false ratio: " + String.format("%01.5f", cfstore.getRecentBloomFilterFalseRatio()));
                 outs.println("\t\tBloom filter space used, bytes: " + cfstore.getBloomFilterDiskSpaceUsed());
-                outs.println("\t\tCompacted partition minimum size, bytes: " + cfstore.getMinRowSize());
-                outs.println("\t\tCompacted partition maximum size, bytes: " + cfstore.getMaxRowSize());
-                outs.println("\t\tCompacted partition mean size, bytes: " + cfstore.getMeanRowSize());
+                outs.println("\t\tCompacted partition minimum bytes: " + cfstore.getMinRowSize());
+                outs.println("\t\tCompacted partition maximum bytes: " + cfstore.getMaxRowSize());
+                outs.println("\t\tCompacted partition mean bytes: " + cfstore.getMeanRowSize());
 
                 outs.println();
             }



[02/11] git commit: Merge branch 'cassandra-2.0' into trunk

2013-10-03 Thread jbellis
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ecfb403d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ecfb403d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ecfb403d

Branch: refs/heads/trunk
Commit: ecfb403da94efe161348ad85de45166b766673fb
Parents: db82949 55a5500
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu Oct 3 09:53:14 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Thu Oct 3 09:53:14 2013 -0500

--
 src/java/org/apache/cassandra/tools/NodeCmd.java | 14 +++---
 1 file changed, 7 insertions(+), 7 deletions(-)
--




[11/11] git commit: Merge branch 'cassandra-2.0' into trunk

2013-10-03 Thread jbellis
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9b06e48b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9b06e48b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9b06e48b

Branch: refs/heads/trunk
Commit: 9b06e48bb7fc4ce0cb95778e2aa961cccaea5d1c
Parents: c8d0f11 7d8a56d
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu Oct 3 12:12:43 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Thu Oct 3 12:12:43 2013 -0500

--
 src/java/org/apache/cassandra/service/StorageProxy.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9b06e48b/src/java/org/apache/cassandra/service/StorageProxy.java
--



[10/11] git commit: Merge remote-tracking branch 'origin/trunk' into trunk

2013-10-03 Thread jbellis
Merge remote-tracking branch 'origin/trunk' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c8d0f11c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c8d0f11c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c8d0f11c

Branch: refs/heads/trunk
Commit: c8d0f11c0b1acad2c5fc1d9948510efb5f9597d3
Parents: ecfb403 f547642
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu Oct 3 12:12:38 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Thu Oct 3 12:12:38 2013 -0500

--
 debian/cassandra.in.sh |  2 ++
 debian/init| 23 ---
 2 files changed, 6 insertions(+), 19 deletions(-)
--




[06/11] git commit: cleanup

2013-10-03 Thread jbellis
cleanup


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/71c89128
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/71c89128
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/71c89128

Branch: refs/heads/cassandra-2.0
Commit: 71c89128279af4ef9f1a990b022f77e05f5aafb8
Parents: 723abe2
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu Oct 3 09:53:00 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Thu Oct 3 12:03:24 2013 -0500

--
 src/java/org/apache/cassandra/tools/NodeCmd.java | 14 +++---
 1 file changed, 7 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/71c89128/src/java/org/apache/cassandra/tools/NodeCmd.java
--
diff --git a/src/java/org/apache/cassandra/tools/NodeCmd.java 
b/src/java/org/apache/cassandra/tools/NodeCmd.java
index 00375ab..657d7f2 100644
--- a/src/java/org/apache/cassandra/tools/NodeCmd.java
+++ b/src/java/org/apache/cassandra/tools/NodeCmd.java
@@ -880,17 +880,17 @@ public class NodeCmd
                 outs.println("\t\tMemtable cell count: " + cfstore.getMemtableColumnsCount());
                 outs.println("\t\tMemtable data size, bytes: " + cfstore.getMemtableDataSize());
                 outs.println("\t\tMemtable switch count: " + cfstore.getMemtableSwitchCount());
-                outs.println("\t\tRead count: " + cfstore.getReadCount());
-                outs.println("\t\tRead latency, micros: " + String.format("%01.3f", cfstore.getRecentReadLatencyMicros() / 1000) + " ms.");
-                outs.println("\t\tWrite count: " + cfstore.getWriteCount());
-                outs.println("\t\tWrite latency, micros: " + String.format("%01.3f", cfstore.getRecentWriteLatencyMicros() / 1000) + " ms.");
+                outs.println("\t\tLocal read count: " + cfstore.getReadCount());
+                outs.printf("\t\tLocal read latency: %01.3f ms%n", cfstore.getRecentReadLatencyMicros() / 1000);
+                outs.println("\t\tLocal write count: " + cfstore.getWriteCount());
+                outs.printf("\t\tLocal write latency: %01.3f ms%n", cfstore.getRecentWriteLatencyMicros() / 1000);
                 outs.println("\t\tPending tasks: " + cfstore.getPendingTasks());
                 outs.println("\t\tBloom filter false positives: " + cfstore.getBloomFilterFalsePositives());
                 outs.println("\t\tBloom filter false ratio: " + String.format("%01.5f", cfstore.getRecentBloomFilterFalseRatio()));
                 outs.println("\t\tBloom filter space used, bytes: " + cfstore.getBloomFilterDiskSpaceUsed());
-                outs.println("\t\tCompacted partition minimum size, bytes: " + cfstore.getMinRowSize());
-                outs.println("\t\tCompacted partition maximum size, bytes: " + cfstore.getMaxRowSize());
-                outs.println("\t\tCompacted partition mean size, bytes: " + cfstore.getMeanRowSize());
+                outs.println("\t\tCompacted partition minimum bytes: " + cfstore.getMinRowSize());
+                outs.println("\t\tCompacted partition maximum bytes: " + cfstore.getMaxRowSize());
+                outs.println("\t\tCompacted partition mean bytes: " + cfstore.getMeanRowSize());
 
                 outs.println();
             }



[07/11] git commit: cleanup

2013-10-03 Thread jbellis
cleanup


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/71c89128
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/71c89128
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/71c89128

Branch: refs/heads/trunk
Commit: 71c89128279af4ef9f1a990b022f77e05f5aafb8
Parents: 723abe2
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu Oct 3 09:53:00 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Thu Oct 3 12:03:24 2013 -0500

--
 src/java/org/apache/cassandra/tools/NodeCmd.java | 14 +++---
 1 file changed, 7 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/71c89128/src/java/org/apache/cassandra/tools/NodeCmd.java
--
diff --git a/src/java/org/apache/cassandra/tools/NodeCmd.java 
b/src/java/org/apache/cassandra/tools/NodeCmd.java
index 00375ab..657d7f2 100644
--- a/src/java/org/apache/cassandra/tools/NodeCmd.java
+++ b/src/java/org/apache/cassandra/tools/NodeCmd.java
@@ -880,17 +880,17 @@ public class NodeCmd
                 outs.println("\t\tMemtable cell count: " + cfstore.getMemtableColumnsCount());
                 outs.println("\t\tMemtable data size, bytes: " + cfstore.getMemtableDataSize());
                 outs.println("\t\tMemtable switch count: " + cfstore.getMemtableSwitchCount());
-                outs.println("\t\tRead count: " + cfstore.getReadCount());
-                outs.println("\t\tRead latency, micros: " + String.format("%01.3f", cfstore.getRecentReadLatencyMicros() / 1000) + " ms.");
-                outs.println("\t\tWrite count: " + cfstore.getWriteCount());
-                outs.println("\t\tWrite latency, micros: " + String.format("%01.3f", cfstore.getRecentWriteLatencyMicros() / 1000) + " ms.");
+                outs.println("\t\tLocal read count: " + cfstore.getReadCount());
+                outs.printf("\t\tLocal read latency: %01.3f ms%n", cfstore.getRecentReadLatencyMicros() / 1000);
+                outs.println("\t\tLocal write count: " + cfstore.getWriteCount());
+                outs.printf("\t\tLocal write latency: %01.3f ms%n", cfstore.getRecentWriteLatencyMicros() / 1000);
                 outs.println("\t\tPending tasks: " + cfstore.getPendingTasks());
                 outs.println("\t\tBloom filter false positives: " + cfstore.getBloomFilterFalsePositives());
                 outs.println("\t\tBloom filter false ratio: " + String.format("%01.5f", cfstore.getRecentBloomFilterFalseRatio()));
                 outs.println("\t\tBloom filter space used, bytes: " + cfstore.getBloomFilterDiskSpaceUsed());
-                outs.println("\t\tCompacted partition minimum size, bytes: " + cfstore.getMinRowSize());
-                outs.println("\t\tCompacted partition maximum size, bytes: " + cfstore.getMaxRowSize());
-                outs.println("\t\tCompacted partition mean size, bytes: " + cfstore.getMeanRowSize());
+                outs.println("\t\tCompacted partition minimum bytes: " + cfstore.getMinRowSize());
+                outs.println("\t\tCompacted partition maximum bytes: " + cfstore.getMaxRowSize());
+                outs.println("\t\tCompacted partition mean bytes: " + cfstore.getMeanRowSize());
 
                 outs.println();
             }



[03/11] git commit: fix assertionerror from #6132

2013-10-03 Thread jbellis
fix assertionerror from #6132


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/92b3622d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/92b3622d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/92b3622d

Branch: refs/heads/cassandra-1.2
Commit: 92b3622dc219798b3bacce6f37eb1d5885bafeb4
Parents: 20a8050
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu Oct 3 12:03:02 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Thu Oct 3 12:03:02 2013 -0500

--
 src/java/org/apache/cassandra/config/DatabaseDescriptor.java | 3 +++
 src/java/org/apache/cassandra/dht/BootStrapper.java  | 8 
 src/java/org/apache/cassandra/net/MessagingService.java  | 8 +---
 src/java/org/apache/cassandra/service/StorageProxy.java  | 2 +-
 4 files changed, 13 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/92b3622d/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 633ea9a..218f719 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -38,6 +38,7 @@ import 
org.apache.cassandra.config.EncryptionOptions.ServerEncryptionOptions;
 import org.apache.cassandra.db.ColumnFamilyStore;
 import org.apache.cassandra.db.DefsTable;
 import org.apache.cassandra.db.SystemTable;
+import org.apache.cassandra.dht.BootStrapper;
 import org.apache.cassandra.dht.IPartitioner;
 import org.apache.cassandra.exceptions.ConfigurationException;
 import org.apache.cassandra.io.FSWriteError;
@@ -839,6 +840,8 @@ public class DatabaseDescriptor
 case READ_REPAIR:
 case MUTATION:
 return getWriteRpcTimeout();
+case BOOTSTRAP_TOKEN:
+return BootStrapper.BOOTSTRAP_TIMEOUT;
 default:
 return getRpcTimeout();
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/92b3622d/src/java/org/apache/cassandra/dht/BootStrapper.java
--
diff --git a/src/java/org/apache/cassandra/dht/BootStrapper.java 
b/src/java/org/apache/cassandra/dht/BootStrapper.java
index ff76534..2e79562 100644
--- a/src/java/org/apache/cassandra/dht/BootStrapper.java
+++ b/src/java/org/apache/cassandra/dht/BootStrapper.java
@@ -48,6 +48,8 @@ import org.apache.cassandra.net.*;
 
 public class BootStrapper
 {
+    public static final long BOOTSTRAP_TIMEOUT = 30000; // default bootstrap timeout of 30s
+
     private static final Logger logger = LoggerFactory.getLogger(BootStrapper.class);
 
     /* endpoint that needs to be bootstrapped */
@@ -55,7 +57,6 @@ public class BootStrapper
     /* token of the node being bootstrapped. */
     protected final Collection<Token> tokens;
     protected final TokenMetadata tokenMetadata;
-    private static final long BOOTSTRAP_TIMEOUT = 30000; // default bootstrap timeout of 30s
 
     public BootStrapper(InetAddress address, Collection<Token> tokens, TokenMetadata tmd)
     {
@@ -187,13 +188,12 @@ public class BootStrapper
     {
         MessageOut message = new MessageOut(MessagingService.Verb.BOOTSTRAP_TOKEN);
         int retries = 5;
-        long timeout = Math.max(DatabaseDescriptor.getRpcTimeout(), BOOTSTRAP_TIMEOUT);
 
         while (retries > 0)
         {
             BootstrapTokenCallback btc = new BootstrapTokenCallback();
-            MessagingService.instance().sendRR(message, maxEndpoint, btc, timeout);
-            Token token = btc.getToken(timeout);
+            MessagingService.instance().sendRR(message, maxEndpoint, btc);
+            Token token = btc.getToken(BOOTSTRAP_TIMEOUT);
             if (token != null)
                 return token;
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/92b3622d/src/java/org/apache/cassandra/net/MessagingService.java
--
diff --git a/src/java/org/apache/cassandra/net/MessagingService.java 
b/src/java/org/apache/cassandra/net/MessagingService.java
index dd02ca6..c9b0047 100644
--- a/src/java/org/apache/cassandra/net/MessagingService.java
+++ b/src/java/org/apache/cassandra/net/MessagingService.java
@@ -550,7 +550,7 @@ public final class MessagingService implements 
MessagingServiceMBean
  */
 public String sendRR(MessageOut message, InetAddress to, IMessageCallback 
cb)
 {
-return sendRR(message, to, cb, message.getTimeout());
+return sendRR(message, to, cb, message.getTimeout(), null);
 }
 
 /**
@@ 

[04/11] git commit: fix assertionerror from #6132

2013-10-03 Thread jbellis
fix assertionerror from #6132


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/92b3622d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/92b3622d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/92b3622d

Branch: refs/heads/cassandra-2.0
Commit: 92b3622dc219798b3bacce6f37eb1d5885bafeb4
Parents: 20a8050
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu Oct 3 12:03:02 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Thu Oct 3 12:03:02 2013 -0500

--
 src/java/org/apache/cassandra/config/DatabaseDescriptor.java | 3 +++
 src/java/org/apache/cassandra/dht/BootStrapper.java  | 8 
 src/java/org/apache/cassandra/net/MessagingService.java  | 8 +---
 src/java/org/apache/cassandra/service/StorageProxy.java  | 2 +-
 4 files changed, 13 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/92b3622d/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 633ea9a..218f719 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -38,6 +38,7 @@ import 
org.apache.cassandra.config.EncryptionOptions.ServerEncryptionOptions;
 import org.apache.cassandra.db.ColumnFamilyStore;
 import org.apache.cassandra.db.DefsTable;
 import org.apache.cassandra.db.SystemTable;
+import org.apache.cassandra.dht.BootStrapper;
 import org.apache.cassandra.dht.IPartitioner;
 import org.apache.cassandra.exceptions.ConfigurationException;
 import org.apache.cassandra.io.FSWriteError;
@@ -839,6 +840,8 @@ public class DatabaseDescriptor
 case READ_REPAIR:
 case MUTATION:
 return getWriteRpcTimeout();
+case BOOTSTRAP_TOKEN:
+return BootStrapper.BOOTSTRAP_TIMEOUT;
 default:
 return getRpcTimeout();
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/92b3622d/src/java/org/apache/cassandra/dht/BootStrapper.java
--
diff --git a/src/java/org/apache/cassandra/dht/BootStrapper.java 
b/src/java/org/apache/cassandra/dht/BootStrapper.java
index ff76534..2e79562 100644
--- a/src/java/org/apache/cassandra/dht/BootStrapper.java
+++ b/src/java/org/apache/cassandra/dht/BootStrapper.java
@@ -48,6 +48,8 @@ import org.apache.cassandra.net.*;
 
 public class BootStrapper
 {
+    public static final long BOOTSTRAP_TIMEOUT = 30000; // default bootstrap timeout of 30s
+
     private static final Logger logger = LoggerFactory.getLogger(BootStrapper.class);
 
     /* endpoint that needs to be bootstrapped */
@@ -55,7 +57,6 @@ public class BootStrapper
     /* token of the node being bootstrapped. */
     protected final Collection<Token> tokens;
     protected final TokenMetadata tokenMetadata;
-    private static final long BOOTSTRAP_TIMEOUT = 30000; // default bootstrap timeout of 30s
 
     public BootStrapper(InetAddress address, Collection<Token> tokens, TokenMetadata tmd)
     {
@@ -187,13 +188,12 @@ public class BootStrapper
     {
         MessageOut message = new MessageOut(MessagingService.Verb.BOOTSTRAP_TOKEN);
         int retries = 5;
-        long timeout = Math.max(DatabaseDescriptor.getRpcTimeout(), BOOTSTRAP_TIMEOUT);
 
         while (retries > 0)
        {
             BootstrapTokenCallback btc = new BootstrapTokenCallback();
-            MessagingService.instance().sendRR(message, maxEndpoint, btc, timeout);
-            Token token = btc.getToken(timeout);
+            MessagingService.instance().sendRR(message, maxEndpoint, btc);
+            Token token = btc.getToken(BOOTSTRAP_TIMEOUT);
             if (token != null)
                 return token;
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/92b3622d/src/java/org/apache/cassandra/net/MessagingService.java
--
diff --git a/src/java/org/apache/cassandra/net/MessagingService.java 
b/src/java/org/apache/cassandra/net/MessagingService.java
index dd02ca6..c9b0047 100644
--- a/src/java/org/apache/cassandra/net/MessagingService.java
+++ b/src/java/org/apache/cassandra/net/MessagingService.java
@@ -550,7 +550,7 @@ public final class MessagingService implements 
MessagingServiceMBean
  */
 public String sendRR(MessageOut message, InetAddress to, IMessageCallback 
cb)
 {
-return sendRR(message, to, cb, message.getTimeout());
+return sendRR(message, to, cb, message.getTimeout(), null);
 }
 
 /**
@@ 

[05/11] git commit: fix assertionerror from #6132

2013-10-03 Thread jbellis
fix assertionerror from #6132


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/92b3622d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/92b3622d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/92b3622d

Branch: refs/heads/trunk
Commit: 92b3622dc219798b3bacce6f37eb1d5885bafeb4
Parents: 20a8050
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu Oct 3 12:03:02 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Thu Oct 3 12:03:02 2013 -0500

--
 src/java/org/apache/cassandra/config/DatabaseDescriptor.java | 3 +++
 src/java/org/apache/cassandra/dht/BootStrapper.java  | 8 
 src/java/org/apache/cassandra/net/MessagingService.java  | 8 +---
 src/java/org/apache/cassandra/service/StorageProxy.java  | 2 +-
 4 files changed, 13 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/92b3622d/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 633ea9a..218f719 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -38,6 +38,7 @@ import 
org.apache.cassandra.config.EncryptionOptions.ServerEncryptionOptions;
 import org.apache.cassandra.db.ColumnFamilyStore;
 import org.apache.cassandra.db.DefsTable;
 import org.apache.cassandra.db.SystemTable;
+import org.apache.cassandra.dht.BootStrapper;
 import org.apache.cassandra.dht.IPartitioner;
 import org.apache.cassandra.exceptions.ConfigurationException;
 import org.apache.cassandra.io.FSWriteError;
@@ -839,6 +840,8 @@ public class DatabaseDescriptor
 case READ_REPAIR:
 case MUTATION:
 return getWriteRpcTimeout();
+case BOOTSTRAP_TOKEN:
+return BootStrapper.BOOTSTRAP_TIMEOUT;
 default:
 return getRpcTimeout();
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/92b3622d/src/java/org/apache/cassandra/dht/BootStrapper.java
--
diff --git a/src/java/org/apache/cassandra/dht/BootStrapper.java 
b/src/java/org/apache/cassandra/dht/BootStrapper.java
index ff76534..2e79562 100644
--- a/src/java/org/apache/cassandra/dht/BootStrapper.java
+++ b/src/java/org/apache/cassandra/dht/BootStrapper.java
@@ -48,6 +48,8 @@ import org.apache.cassandra.net.*;
 
 public class BootStrapper
 {
+    public static final long BOOTSTRAP_TIMEOUT = 30000; // default bootstrap timeout of 30s
+
     private static final Logger logger = LoggerFactory.getLogger(BootStrapper.class);
 
     /* endpoint that needs to be bootstrapped */
@@ -55,7 +57,6 @@ public class BootStrapper
     /* token of the node being bootstrapped. */
     protected final Collection<Token> tokens;
     protected final TokenMetadata tokenMetadata;
-    private static final long BOOTSTRAP_TIMEOUT = 30000; // default bootstrap timeout of 30s
 
     public BootStrapper(InetAddress address, Collection<Token> tokens, TokenMetadata tmd)
     {
@@ -187,13 +188,12 @@ public class BootStrapper
     {
         MessageOut message = new MessageOut(MessagingService.Verb.BOOTSTRAP_TOKEN);
         int retries = 5;
-        long timeout = Math.max(DatabaseDescriptor.getRpcTimeout(), BOOTSTRAP_TIMEOUT);
 
         while (retries > 0)
         {
             BootstrapTokenCallback btc = new BootstrapTokenCallback();
-            MessagingService.instance().sendRR(message, maxEndpoint, btc, timeout);
-            Token token = btc.getToken(timeout);
+            MessagingService.instance().sendRR(message, maxEndpoint, btc);
+            Token token = btc.getToken(BOOTSTRAP_TIMEOUT);
             if (token != null)
                 return token;
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/92b3622d/src/java/org/apache/cassandra/net/MessagingService.java
--
diff --git a/src/java/org/apache/cassandra/net/MessagingService.java 
b/src/java/org/apache/cassandra/net/MessagingService.java
index dd02ca6..c9b0047 100644
--- a/src/java/org/apache/cassandra/net/MessagingService.java
+++ b/src/java/org/apache/cassandra/net/MessagingService.java
@@ -550,7 +550,7 @@ public final class MessagingService implements 
MessagingServiceMBean
  */
 public String sendRR(MessageOut message, InetAddress to, IMessageCallback 
cb)
 {
-return sendRR(message, to, cb, message.getTimeout());
+return sendRR(message, to, cb, message.getTimeout(), null);
 }
 
 /**
@@ -567,9 

[08/11] git commit: merge from 1.2

2013-10-03 Thread jbellis
merge from 1.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7d8a56df
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7d8a56df
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7d8a56df

Branch: refs/heads/trunk
Commit: 7d8a56df3df26be7537f2a5158469629c34b911c
Parents: 71c8912 92b3622
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu Oct 3 12:12:11 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Thu Oct 3 12:12:11 2013 -0500

--
 src/java/org/apache/cassandra/service/StorageProxy.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7d8a56df/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --cc src/java/org/apache/cassandra/service/StorageProxy.java
index 6c8e636,9b559e5..d16bef9
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@@ -963,29 -637,35 +963,29 @@@ public class StorageProxy implements St
  Iterator<InetAddress> iter = targets.iterator();
  InetAddress target = iter.next();
  
 -// direct writes to local DC or old Cassandra versions
 -if (localDC || MessagingService.instance().getVersion(target) < MessagingService.VERSION_11)
 +// Add the other destinations of the same message as a FORWARD_HEADER 
entry
 +DataOutputBuffer out = new DataOutputBuffer();
 +try
  {
 -// yes, the loop and non-loop code here are the same; this is 
clunky but we want to avoid
 -// creating a second iterator since we already have a perfectly 
good one
 -MessagingService.instance().sendRR(message, target, handler, 
message.getTimeout(), handler.consistencyLevel);
 +out.writeInt(targets.size() - 1);
  while (iter.hasNext())
  {
 -target = iter.next();
 -MessagingService.instance().sendRR(message, target, handler);
 +InetAddress destination = iter.next();
 +CompactEndpointSerializationHelper.serialize(destination, 
out);
- int id = MessagingService.instance().addCallback(handler, 
message, destination, message.getTimeout());
++int id = MessagingService.instance().addCallback(handler, 
message, destination, message.getTimeout(), handler.consistencyLevel);
 +out.writeInt(id);
 +logger.trace("Adding FWD message to {}@{}", id, destination);
  }
 -return;
 +message = message.withParameter(RowMutation.FORWARD_TO, 
out.getData());
 +// send the combined message + forward headers
 +int id = MessagingService.instance().sendRR(message, target, 
handler);
 +logger.trace("Sending message to {}@{}", id, target);
  }
 -
 -// Add all the other destinations of the same message as a 
FORWARD_HEADER entry
 -FastByteArrayOutputStream bos = new FastByteArrayOutputStream();
 -DataOutputStream dos = new DataOutputStream(bos);
 -dos.writeInt(targets.size() - 1);
 -while (iter.hasNext())
 +catch (IOException e)
  {
 -InetAddress destination = iter.next();
 -CompactEndpointSerializationHelper.serialize(destination, dos);
 -String id = MessagingService.instance().addCallback(handler, 
message, destination, message.getTimeout());
 -dos.writeUTF(id);
 +// DataOutputBuffer is in-memory, doesn't throw IOException
 +throw new AssertionError(e);
  }
 -message = message.withParameter(RowMutation.FORWARD_TO, 
bos.toByteArray());
 -// send the combined message + forward headers
 -Tracing.trace("Enqueuing message to {}", target);
 -MessagingService.instance().sendRR(message, target, handler);
  }
  
  private static void insertLocal(final RowMutation rm, final 
AbstractWriteResponseHandler responseHandler)



[jira] [Commented] (CASSANDRA-6137) CQL3 SELECT IN CLAUSE inconsistent

2013-10-03 Thread Constance Eustace (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13785363#comment-13785363
 ] 

Constance Eustace commented on CASSANDRA-6137:
--

Or possibly we crammed in multiple column-key updates at once.

 CQL3 SELECT IN CLAUSE inconsistent
 --

 Key: CASSANDRA-6137
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6137
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Ubuntu AWS Cassandra 2.0.1
Reporter: Constance Eustace
 Fix For: 2.0.1


 We are encountering inconsistent results from CQL3 queries with column keys 
 using IN clause in WHERE. This has been reproduced in cqlsh.
 Rowkey is e_entid
 Column key is p_prop
 This returns roughly 21 rows for 21 column keys that match p_prop.
 cqlsh SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB';
 These three queries each return one row for the requested single column key 
 in the IN clause:
 SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
 ('urn:bby:pcm:job:ingest:content:complete:count');
 SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
 ('urn:bby:pcm:job:ingest:content:all:count');
 SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
 ('urn:bby:pcm:job:ingest:content:fail:count');
 This query returns ONLY ONE ROW (one column key), not three as I would expect 
 from the three-column-key IN clause:
 cqlsh SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
 ('urn:bby:pcm:job:ingest:content:complete:count','urn:bby:pcm:job:ingest:content:all:count','urn:bby:pcm:job:ingest:content:fail:count');
 This query does return two rows however for the requested two column keys:
 cqlsh SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in (  
   
 'urn:bby:pcm:job:ingest:content:all:count','urn:bby:pcm:job:ingest:content:fail:count');
 cqlsh describe table internal_submission.entity_job;
 CREATE TABLE entity_job (
   e_entid text,
   p_prop text,
   describes text,
   dndcondition text,
   e_entlinks text,
   e_entname text,
   e_enttype text,
   ingeststatus text,
   ingeststatusdetail text,
   p_flags text,
   p_propid text,
   p_proplinks text,
   p_storage text,
   p_subents text,
   p_val text,
   p_vallang text,
   p_vallinks text,
   p_valtype text,
   p_valunit text,
   p_vars text,
   partnerid text,
   referenceid text,
   size int,
   sourceip text,
   submitdate bigint,
   submitevent text,
   userid text,
   version text,
   PRIMARY KEY (e_entid, p_prop)
 ) WITH
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   index_interval=128 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   default_time_to_live=0 AND
   speculative_retry='NONE' AND
   memtable_flush_period_in_ms=0 AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'LZ4Compressor'};
 CREATE INDEX internal_submission__JobDescribesIDX ON entity_job (describes);
 CREATE INDEX internal_submission__JobDNDConditionIDX ON entity_job 
 (dndcondition);
 CREATE INDEX internal_submission__JobIngestStatusIDX ON entity_job 
 (ingeststatus);
 CREATE INDEX internal_submission__JobIngestStatusDetailIDX ON entity_job 
 (ingeststatusdetail);
 CREATE INDEX internal_submission__JobReferenceIDIDX ON entity_job 
 (referenceid);
 CREATE INDEX internal_submission__JobUserIDX ON entity_job (userid);
 CREATE INDEX internal_submission__JobVersionIDX ON entity_job (version);
 ---
 My suspicion is that the three-column-key IN Clause is translated (improperly 
 or not) to a two-column key range with 

[jira] [Commented] (CASSANDRA-6137) CQL3 SELECT IN CLAUSE inconsistent

2013-10-03 Thread Constance Eustace (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13785361#comment-13785361
 ] 

Constance Eustace commented on CASSANDRA-6137:
--

This may be related to us inserting a null or empty column key. We'll try to 
investigate the data at a lower level.

 CQL3 SELECT IN CLAUSE inconsistent
 --

 Key: CASSANDRA-6137
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6137
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Ubuntu AWS Cassandra 2.0.1
Reporter: Constance Eustace
 Fix For: 2.0.1


 We are encountering inconsistent results from CQL3 queries with column keys 
 using IN clause in WHERE. This has been reproduced in cqlsh.
 Rowkey is e_entid
 Column key is p_prop
 This returns roughly 21 rows for 21 column keys that match p_prop.
 cqlsh SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB';
 These three queries each return one row for the requested single column key 
 in the IN clause:
 SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
 ('urn:bby:pcm:job:ingest:content:complete:count');
 SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
 ('urn:bby:pcm:job:ingest:content:all:count');
 SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
 ('urn:bby:pcm:job:ingest:content:fail:count');
 This query returns ONLY ONE ROW (one column key), not three as I would expect 
 from the three-column-key IN clause:
 cqlsh SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
 ('urn:bby:pcm:job:ingest:content:complete:count','urn:bby:pcm:job:ingest:content:all:count','urn:bby:pcm:job:ingest:content:fail:count');
 This query does return two rows however for the requested two column keys:
 cqlsh SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in (  
   
 'urn:bby:pcm:job:ingest:content:all:count','urn:bby:pcm:job:ingest:content:fail:count');
 cqlsh describe table internal_submission.entity_job;
 CREATE TABLE entity_job (
   e_entid text,
   p_prop text,
   describes text,
   dndcondition text,
   e_entlinks text,
   e_entname text,
   e_enttype text,
   ingeststatus text,
   ingeststatusdetail text,
   p_flags text,
   p_propid text,
   p_proplinks text,
   p_storage text,
   p_subents text,
   p_val text,
   p_vallang text,
   p_vallinks text,
   p_valtype text,
   p_valunit text,
   p_vars text,
   partnerid text,
   referenceid text,
   size int,
   sourceip text,
   submitdate bigint,
   submitevent text,
   userid text,
   version text,
   PRIMARY KEY (e_entid, p_prop)
 ) WITH
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   index_interval=128 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   default_time_to_live=0 AND
   speculative_retry='NONE' AND
   memtable_flush_period_in_ms=0 AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'LZ4Compressor'};
 CREATE INDEX internal_submission__JobDescribesIDX ON entity_job (describes);
 CREATE INDEX internal_submission__JobDNDConditionIDX ON entity_job 
 (dndcondition);
 CREATE INDEX internal_submission__JobIngestStatusIDX ON entity_job 
 (ingeststatus);
 CREATE INDEX internal_submission__JobIngestStatusDetailIDX ON entity_job 
 (ingeststatusdetail);
 CREATE INDEX internal_submission__JobReferenceIDIDX ON entity_job 
 (referenceid);
 CREATE INDEX internal_submission__JobUserIDX ON entity_job (userid);
 CREATE INDEX internal_submission__JobVersionIDX ON entity_job (version);
 ---
 My suspicion is that the three-column-key IN Clause is 

[jira] [Commented] (CASSANDRA-6137) CQL3 SELECT IN CLAUSE inconsistent

2013-10-03 Thread Constance Eustace (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13785372#comment-13785372
 ] 

Constance Eustace commented on CASSANDRA-6137:
--

We exported the data from the 2.0.1 Cassandra instance down to a 1.2.9 instance 
and loaded it there. The reproduction test did not find anything.

Currently we are testing a set of about 1000 rows by (a sketch of the checker is below):

  For each row key:
- get the list of column keys with SELECT columnkey FROM table WHERE rowkey = ?
- generate several 2-, 3- and 4-column-key WHERE IN clauses of random column keys in random order
- execute SELECT columnkey, columnkey_othercol FROM table WHERE rowkey = ? AND columnkey IN [$columnkeyset]
- check for missing columns

On our corrupted database the program finds frequent failures.
Exporting that data (SELECT * FROM table to JSON, then loading the JSON into the 
same schema in 1.2.9) and re-running the test found no problems.
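
For reference, a minimal sketch of that checker, assuming the DataStax Java driver 
2.0 and the entity_job schema from this ticket; the class name, contact point and 
LIMIT are illustrative only, not our actual test harness:

{noformat}
import java.util.*;
import com.datastax.driver.core.*;

public class InClauseChecker
{
    public static void main(String[] args)
    {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("internal_submission");
        Random random = new Random();

        // Sample some partition keys to test.
        for (Row keyRow : session.execute("SELECT DISTINCT e_entid FROM entity_job LIMIT 1000"))
        {
            String rowKey = keyRow.getString("e_entid");

            // 1. Get the list of column keys for this row key.
            List<String> props = new ArrayList<String>();
            for (Row r : session.execute("SELECT p_prop FROM entity_job WHERE e_entid = '" + rowKey + "'"))
                props.add(r.getString("p_prop"));

            // 2. Build random 2-, 3- and 4-element IN clauses and re-query with them.
            for (int size = 2; size <= 4 && size <= props.size(); size++)
            {
                List<String> subset = new ArrayList<String>(props);
                Collections.shuffle(subset, random);
                subset = subset.subList(0, size);

                StringBuilder in = new StringBuilder();
                for (String prop : subset)
                    in.append(in.length() == 0 ? "'" : ", '").append(prop).append("'");

                Set<String> returned = new HashSet<String>();
                for (Row r : session.execute("SELECT p_prop FROM entity_job WHERE e_entid = '"
                                             + rowKey + "' AND p_prop IN (" + in + ")"))
                    returned.add(r.getString("p_prop"));

                // 3. Any requested column key missing from the result set is a failure.
                for (String prop : subset)
                    if (!returned.contains(prop))
                        System.out.println("MISSING: " + rowKey + " / " + prop);
            }
        }
        cluster.close(); // or shutdown(), depending on driver version
    }
}
{noformat}

The IN list is built as a literal string so the sketch does not depend on 
driver-side bind support for IN markers; the values in this schema contain no 
quotes, so that is safe here.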

 CQL3 SELECT IN CLAUSE inconsistent
 --

 Key: CASSANDRA-6137
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6137
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Ubuntu AWS Cassandra 2.0.1
Reporter: Constance Eustace
 Fix For: 2.0.1


 We are encountering inconsistent results from CQL3 queries with column keys 
 using IN clause in WHERE. This has been reproduced in cqlsh.
 Rowkey is e_entid
 Column key is p_prop
 This returns roughly 21 rows for 21 column keys that match p_prop.
 cqlsh SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB';
 These three queries each return one row for the requested single column key 
 in the IN clause:
 SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
 ('urn:bby:pcm:job:ingest:content:complete:count');
 SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
 ('urn:bby:pcm:job:ingest:content:all:count');
 SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
 ('urn:bby:pcm:job:ingest:content:fail:count');
 This query returns ONLY ONE ROW (one column key), not three as I would expect 
 from the three-column-key IN clause:
 cqlsh SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
 ('urn:bby:pcm:job:ingest:content:complete:count','urn:bby:pcm:job:ingest:content:all:count','urn:bby:pcm:job:ingest:content:fail:count');
 This query does return two rows however for the requested two column keys:
 cqlsh SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in (  
   
 'urn:bby:pcm:job:ingest:content:all:count','urn:bby:pcm:job:ingest:content:fail:count');
 cqlsh describe table internal_submission.entity_job;
 CREATE TABLE entity_job (
   e_entid text,
   p_prop text,
   describes text,
   dndcondition text,
   e_entlinks text,
   e_entname text,
   e_enttype text,
   ingeststatus text,
   ingeststatusdetail text,
   p_flags text,
   p_propid text,
   p_proplinks text,
   p_storage text,
   p_subents text,
   p_val text,
   p_vallang text,
   p_vallinks text,
   p_valtype text,
   p_valunit text,
   p_vars text,
   partnerid text,
   referenceid text,
   size int,
   sourceip text,
   submitdate bigint,
   submitevent text,
   userid text,
   version text,
   PRIMARY KEY (e_entid, p_prop)
 ) WITH
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   index_interval=128 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   default_time_to_live=0 AND
   speculative_retry='NONE' AND
   memtable_flush_period_in_ms=0 AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'LZ4Compressor'};
 CREATE INDEX internal_submission__JobDescribesIDX ON 

[jira] [Commented] (CASSANDRA-6137) CQL3 SELECT IN CLAUSE inconsistent

2013-10-03 Thread Constance Eustace (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13785375#comment-13785375
 ] 

Constance Eustace commented on CASSANDRA-6137:
--

The queries in the corrupted instance at least consistently report the wrong 
query results...

 CQL3 SELECT IN CLAUSE inconsistent
 --

 Key: CASSANDRA-6137
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6137
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Ubuntu AWS Cassandra 2.0.1 SINGLE NODE
Reporter: Constance Eustace
 Fix For: 2.0.1


 We are encountering inconsistent results from CQL3 queries with column keys 
 using IN clause in WHERE. This has been reproduced in cqlsh.
 Rowkey is e_entid
 Column key is p_prop
 This returns roughly 21 rows for 21 column keys that match p_prop.
 cqlsh SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB';
 These three queries each return one row for the requested single column key 
 in the IN clause:
 SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
 ('urn:bby:pcm:job:ingest:content:complete:count');
 SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
 ('urn:bby:pcm:job:ingest:content:all:count');
 SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
 ('urn:bby:pcm:job:ingest:content:fail:count');
 This query returns ONLY ONE ROW (one column key), not three as I would expect 
 from the three-column-key IN clause:
 cqlsh SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
 ('urn:bby:pcm:job:ingest:content:complete:count','urn:bby:pcm:job:ingest:content:all:count','urn:bby:pcm:job:ingest:content:fail:count');
 This query does return two rows however for the requested two column keys:
 cqlsh SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in (  
   
 'urn:bby:pcm:job:ingest:content:all:count','urn:bby:pcm:job:ingest:content:fail:count');
 cqlsh describe table internal_submission.entity_job;
 CREATE TABLE entity_job (
   e_entid text,
   p_prop text,
   describes text,
   dndcondition text,
   e_entlinks text,
   e_entname text,
   e_enttype text,
   ingeststatus text,
   ingeststatusdetail text,
   p_flags text,
   p_propid text,
   p_proplinks text,
   p_storage text,
   p_subents text,
   p_val text,
   p_vallang text,
   p_vallinks text,
   p_valtype text,
   p_valunit text,
   p_vars text,
   partnerid text,
   referenceid text,
   size int,
   sourceip text,
   submitdate bigint,
   submitevent text,
   userid text,
   version text,
   PRIMARY KEY (e_entid, p_prop)
 ) WITH
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   index_interval=128 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   default_time_to_live=0 AND
   speculative_retry='NONE' AND
   memtable_flush_period_in_ms=0 AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'LZ4Compressor'};
 CREATE INDEX internal_submission__JobDescribesIDX ON entity_job (describes);
 CREATE INDEX internal_submission__JobDNDConditionIDX ON entity_job 
 (dndcondition);
 CREATE INDEX internal_submission__JobIngestStatusIDX ON entity_job 
 (ingeststatus);
 CREATE INDEX internal_submission__JobIngestStatusDetailIDX ON entity_job 
 (ingeststatusdetail);
 CREATE INDEX internal_submission__JobReferenceIDIDX ON entity_job 
 (referenceid);
 CREATE INDEX internal_submission__JobUserIDX ON entity_job (userid);
 CREATE INDEX internal_submission__JobVersionIDX ON entity_job (version);
 ---
 My suspicion is that the three-column-key IN Clause is translated 

[jira] [Updated] (CASSANDRA-6137) CQL3 SELECT IN CLAUSE inconsistent

2013-10-03 Thread Constance Eustace (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Constance Eustace updated CASSANDRA-6137:
-

Environment: Ubuntu AWS Cassandra 2.0.1 SINGLE NODE  (was: Ubuntu AWS 
Cassandra 2.0.1)

 CQL3 SELECT IN CLAUSE inconsistent
 --

 Key: CASSANDRA-6137
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6137
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Ubuntu AWS Cassandra 2.0.1 SINGLE NODE
Reporter: Constance Eustace
 Fix For: 2.0.1


 We are encountering inconsistent results from CQL3 queries with column keys 
 using IN clause in WHERE. This has been reproduced in cqlsh.
 Rowkey is e_entid
 Column key is p_prop
 This returns roughly 21 rows for 21 column keys that match p_prop.
 cqlsh SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB';
 These three queries each return one row for the requested single column key 
 in the IN clause:
 SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
 ('urn:bby:pcm:job:ingest:content:complete:count');
 SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
 ('urn:bby:pcm:job:ingest:content:all:count');
 SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
 ('urn:bby:pcm:job:ingest:content:fail:count');
 This query returns ONLY ONE ROW (one column key), not three as I would expect 
 from the three-column-key IN clause:
 cqlsh SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
 ('urn:bby:pcm:job:ingest:content:complete:count','urn:bby:pcm:job:ingest:content:all:count','urn:bby:pcm:job:ingest:content:fail:count');
 This query does return two rows however for the requested two column keys:
 cqlsh SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in (  
   
 'urn:bby:pcm:job:ingest:content:all:count','urn:bby:pcm:job:ingest:content:fail:count');
 cqlsh describe table internal_submission.entity_job;
 CREATE TABLE entity_job (
   e_entid text,
   p_prop text,
   describes text,
   dndcondition text,
   e_entlinks text,
   e_entname text,
   e_enttype text,
   ingeststatus text,
   ingeststatusdetail text,
   p_flags text,
   p_propid text,
   p_proplinks text,
   p_storage text,
   p_subents text,
   p_val text,
   p_vallang text,
   p_vallinks text,
   p_valtype text,
   p_valunit text,
   p_vars text,
   partnerid text,
   referenceid text,
   size int,
   sourceip text,
   submitdate bigint,
   submitevent text,
   userid text,
   version text,
   PRIMARY KEY (e_entid, p_prop)
 ) WITH
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   index_interval=128 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   default_time_to_live=0 AND
   speculative_retry='NONE' AND
   memtable_flush_period_in_ms=0 AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'LZ4Compressor'};
 CREATE INDEX internal_submission__JobDescribesIDX ON entity_job (describes);
 CREATE INDEX internal_submission__JobDNDConditionIDX ON entity_job 
 (dndcondition);
 CREATE INDEX internal_submission__JobIngestStatusIDX ON entity_job 
 (ingeststatus);
 CREATE INDEX internal_submission__JobIngestStatusDetailIDX ON entity_job 
 (ingeststatusdetail);
 CREATE INDEX internal_submission__JobReferenceIDIDX ON entity_job 
 (referenceid);
 CREATE INDEX internal_submission__JobUserIDX ON entity_job (userid);
 CREATE INDEX internal_submission__JobVersionIDX ON entity_job (version);
 ---
 My suspicion is that the three-column-key IN Clause is translated (improperly 
 or not) to a two-column key range with the assumption 

[09/11] git commit: merge from 1.2

2013-10-03 Thread jbellis
merge from 1.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7d8a56df
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7d8a56df
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7d8a56df

Branch: refs/heads/cassandra-2.0
Commit: 7d8a56df3df26be7537f2a5158469629c34b911c
Parents: 71c8912 92b3622
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu Oct 3 12:12:11 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Thu Oct 3 12:12:11 2013 -0500

--
 src/java/org/apache/cassandra/service/StorageProxy.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7d8a56df/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --cc src/java/org/apache/cassandra/service/StorageProxy.java
index 6c8e636,9b559e5..d16bef9
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@@ -963,29 -637,35 +963,29 @@@ public class StorageProxy implements St
  Iterator<InetAddress> iter = targets.iterator();
  InetAddress target = iter.next();
  
 -// direct writes to local DC or old Cassandra versions
 -if (localDC || MessagingService.instance().getVersion(target) < MessagingService.VERSION_11)
 +// Add the other destinations of the same message as a FORWARD_HEADER 
entry
 +DataOutputBuffer out = new DataOutputBuffer();
 +try
  {
 -// yes, the loop and non-loop code here are the same; this is 
clunky but we want to avoid
 -// creating a second iterator since we already have a perfectly 
good one
 -MessagingService.instance().sendRR(message, target, handler, 
message.getTimeout(), handler.consistencyLevel);
 +out.writeInt(targets.size() - 1);
  while (iter.hasNext())
  {
 -target = iter.next();
 -MessagingService.instance().sendRR(message, target, handler);
 +InetAddress destination = iter.next();
 +CompactEndpointSerializationHelper.serialize(destination, 
out);
- int id = MessagingService.instance().addCallback(handler, 
message, destination, message.getTimeout());
++int id = MessagingService.instance().addCallback(handler, 
message, destination, message.getTimeout(), handler.consistencyLevel);
 +out.writeInt(id);
 +logger.trace("Adding FWD message to {}@{}", id, destination);
  }
 -return;
 +message = message.withParameter(RowMutation.FORWARD_TO, 
out.getData());
 +// send the combined message + forward headers
 +int id = MessagingService.instance().sendRR(message, target, 
handler);
 +logger.trace("Sending message to {}@{}", id, target);
  }
 -
 -// Add all the other destinations of the same message as a 
FORWARD_HEADER entry
 -FastByteArrayOutputStream bos = new FastByteArrayOutputStream();
 -DataOutputStream dos = new DataOutputStream(bos);
 -dos.writeInt(targets.size() - 1);
 -while (iter.hasNext())
 +catch (IOException e)
  {
 -InetAddress destination = iter.next();
 -CompactEndpointSerializationHelper.serialize(destination, dos);
 -String id = MessagingService.instance().addCallback(handler, 
message, destination, message.getTimeout());
 -dos.writeUTF(id);
 +// DataOutputBuffer is in-memory, doesn't throw IOException
 +throw new AssertionError(e);
  }
 -message = message.withParameter(RowMutation.FORWARD_TO, 
bos.toByteArray());
 -// send the combined message + forward headers
 -Tracing.trace("Enqueuing message to {}", target);
 -MessagingService.instance().sendRR(message, target, handler);
  }
  
  private static void insertLocal(final RowMutation rm, final 
AbstractWriteResponseHandler responseHandler)



[jira] [Commented] (CASSANDRA-6109) Consider coldness in STCS compaction

2013-10-03 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13785422#comment-13785422
 ] 

Tyler Hobbs commented on CASSANDRA-6109:


There's a problem with prioritizing the compaction manager queue: for normal 
compactions, we enqueue a task that will actually pick the sstables to compact 
at the last moment, right before the task is run.  I think we have three 
options:
# Don't try to prioritize the compaction manager queue
# Pick the sstables upfront (maybe only for STCS and not LCS? This behavior was 
added for CASSANDRA-4310, which is primarily concerned with LCS) and 
potentially compact a less-than-optimal set of sstables
# Prioritize the task when it's submitted by picking an initial bucket of 
sstables; finalize the bucket, adding sstables if necessary, just before the 
task is executed

I would lean towards #3, although it's the most complex.  I just wanted to hear 
your thoughts before writing that up.
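
For what it's worth, a rough sketch of what #3 could look like (class names and 
the hotness score here are hypothetical, not the actual CompactionManager code): 
the task is scored from a provisional bucket when it is submitted, the executor 
queue orders on that score instead of FIFO, and the real sstable set is still 
only finalized when the task runs:

{noformat}
import java.util.*;
import java.util.concurrent.*;

class PrioritizedCompactionTask implements Runnable, Comparable<PrioritizedCompactionTask>
{
    private final String cfName;
    private final double hotness;                       // scored from a provisional bucket at submit time
    private final Callable<List<String>> bucketPicker;  // re-picks the sstables at execution time

    PrioritizedCompactionTask(String cfName, double hotness, Callable<List<String>> bucketPicker)
    {
        this.cfName = cfName;
        this.hotness = hotness;
        this.bucketPicker = bucketPicker;
    }

    public int compareTo(PrioritizedCompactionTask other)
    {
        return Double.compare(other.hotness, hotness);  // hotter column families dequeue first
    }

    public void run()
    {
        try
        {
            // Finalize the bucket only now, so sstables flushed while the task was queued can join it.
            List<String> sstables = bucketPicker.call();
            System.out.println("compacting " + cfName + ": " + sstables);
        }
        catch (Exception e)
        {
            throw new RuntimeException(e);
        }
    }
}

class PrioritizedCompactionExecutor
{
    // Priority queue instead of FIFO; tasks go through execute() so they are not
    // wrapped in a (non-Comparable) FutureTask.
    private final ThreadPoolExecutor executor =
        new ThreadPoolExecutor(1, 1, 60, TimeUnit.SECONDS, new PriorityBlockingQueue<Runnable>());

    void submit(PrioritizedCompactionTask task)
    {
        executor.execute(task);
    }
}
{noformat}

The score would only affect ordering; correctness still comes from the final pick 
at execution time, just as with the current behavior.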

 Consider coldness in STCS compaction
 

 Key: CASSANDRA-6109
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6109
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Jonathan Ellis
Assignee: Tyler Hobbs
 Fix For: 2.0.2


 I see two options:
 # Don't compact cold sstables at all
 # Compact cold sstables only if there is nothing more important to compact
 The latter is better if you have cold data that may become hot again...  but 
 it's confusing if you have a workload such that you can't keep up with *all* 
 compaction, but you can keep up with hot sstable.  (Compaction backlog stat 
 becomes useless since we fall increasingly behind.)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6109) Consider coldness in STCS compaction

2013-10-03 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13785445#comment-13785445
 ] 

Jonathan Ellis commented on CASSANDRA-6109:
---

We're mostly talking about prioritizing across different CFs, right?

What if we just made the compaction manager queue a priority queue instead of 
FIFO?

 Consider coldness in STCS compaction
 

 Key: CASSANDRA-6109
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6109
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Jonathan Ellis
Assignee: Tyler Hobbs
 Fix For: 2.0.2


 I see two options:
 # Don't compact cold sstables at all
 # Compact cold sstables only if there is nothing more important to compact
 The latter is better if you have cold data that may become hot again...  but 
 it's confusing if you have a workload such that you can't keep up with *all* 
 compaction, but you can keep up with hot sstable.  (Compaction backlog stat 
 becomes useless since we fall increasingly behind.)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6109) Consider coldness in STCS compaction

2013-10-03 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13785448#comment-13785448
 ] 

Tyler Hobbs commented on CASSANDRA-6109:


bq. We're mostly talking about prioritizing across different CFs, right?

Correct.

bq. What if we just made the compaction manager queue a priority queue instead 
of FIFO?

Yeah, that's what I'm trying to do; it's just that with the current behavior we 
can't determine the priority when inserting the tasks into the queue, because 
the sstables aren't picked until tasks are *removed* from the queue.

 Consider coldness in STCS compaction
 

 Key: CASSANDRA-6109
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6109
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Jonathan Ellis
Assignee: Tyler Hobbs
 Fix For: 2.0.2


 I see two options:
 # Don't compact cold sstables at all
 # Compact cold sstables only if there is nothing more important to compact
 The latter is better if you have cold data that may become hot again...  but 
 it's confusing if you have a workload such that you can't keep up with *all* 
 compaction, but you can keep up with hot sstable.  (Compaction backlog stat 
 becomes useless since we fall increasingly behind.)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6131) JAVA_HOME on cassandra-env.sh is ignored on Debian packages

2013-10-03 Thread Eric Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Evans updated CASSANDRA-6131:
--

Attachment: 6131.patch

 JAVA_HOME on cassandra-env.sh is ignored on Debian packages
 ---

 Key: CASSANDRA-6131
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6131
 Project: Cassandra
  Issue Type: Bug
  Components: Packaging
 Environment: I've just upgraded to the 2.0.1 package from the Apache 
 repositories using apt. I had the JAVA_HOME environment variable set in 
 /etc/cassandra/cassandra-env.sh, but after the upgrade it only worked by 
 setting it in the /usr/sbin/cassandra script. I can't configure Java 7 
 system-wide, only for Cassandra.
 Off-topic: Thanks for getting rid of the jsvc mess.
Reporter: Sebastián Lacuesta
Assignee: Eric Evans
  Labels: debian
 Fix For: 2.0.2

 Attachments: 6131.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (CASSANDRA-5858) Add a shuffle dtest

2013-10-03 Thread Daniel Meyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Meyer resolved CASSANDRA-5858.
-

Resolution: Fixed

A shuffle step has been added to upgrade_through_versions_test when vnodes are 
enabled.

Closing this, as further testing work on shuffle will be done in CASSANDRA-6143.

 Add a shuffle dtest
 ---

 Key: CASSANDRA-5858
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5858
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Jonathan Ellis
Assignee: Daniel Meyer
Priority: Minor





--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6145) Include batch files for recently added shell commands (and some older ones)

2013-10-03 Thread Sven Delmas (JIRA)
Sven Delmas created CASSANDRA-6145:
--

 Summary: Include batch files for recently added shell commands 
(and some older ones)
 Key: CASSANDRA-6145
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6145
 Project: Cassandra
  Issue Type: Improvement
  Components: Packaging
 Environment: Windows
Reporter: Sven Delmas
Priority: Trivial


- cassandra-shuffle.bat seems very different from the ones already there. As 
far as I can tell the differences are cosmetic, so I consolidated it.
- cassandra-stress.bat
- cassandra-stressd.bat
- cqlsh.bat
- debug-cql.bat
- sstableloader.bat
- sstablemetadata.bat
- sstablescrub.bat
- sstablesplit.bat
- sstableupgrade.bat

Not all files apply to all branches, but I figured I had better include all the 
ones I have.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-5591) Windows failure renaming LCS json.

2013-10-03 Thread Sven Delmas (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sven Delmas updated CASSANDRA-5591:
---

Attachment: 6145.patch

This adds the files mentioned in the Jira and tweaks cassandra-shuffle.bat.

 Windows failure renaming LCS json.
 --

 Key: CASSANDRA-5591
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5591
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.4
 Environment: Windows
Reporter: Jeremiah Jordan

 Had someone report that on Windows, under load, the LCS json file sometimes 
 fails to be renamed.
 {noformat}
 ERROR [CompactionExecutor:1] 2013-05-23 14:43:55,848 CassandraDaemon.java 
 (line 174) Exception in thread Thread[CompactionExecutor:1,1,main]
  java.lang.RuntimeException: Failed to rename C:\development\tools\DataStax 
 Community\data\data\zzz\zzz\zzz.json to C:\development\tools\DataStax 
 Community\data\data\zzz\zzz\zzz-old.json
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:133)
   at 
 org.apache.cassandra.db.compaction.LeveledManifest.serialize(LeveledManifest.java:617)
   at 
 org.apache.cassandra.db.compaction.LeveledManifest.promote(LeveledManifest.java:229)
   at 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategy.handleNotification(LeveledCompactionStrategy.java:155)
   at 
 org.apache.cassandra.db.DataTracker.notifySSTablesChanged(DataTracker.java:410)
   at 
 org.apache.cassandra.db.DataTracker.replaceCompactedSSTables(DataTracker.java:223)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.replaceCompactedSSTables(ColumnFamilyStore.java:991)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:230)
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:58)
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:188)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
   at java.lang.Thread.run(Thread.java:662)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-5591) Windows failure renaming LCS json.

2013-10-03 Thread Sven Delmas (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sven Delmas updated CASSANDRA-5591:
---

Attachment: (was: 6145.patch)

 Windows failure renaming LCS json.
 --

 Key: CASSANDRA-5591
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5591
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.4
 Environment: Windows
Reporter: Jeremiah Jordan

 Had someone report that on Windows, under load, the LCS json file sometimes 
 fails to be renamed.
 {noformat}
 ERROR [CompactionExecutor:1] 2013-05-23 14:43:55,848 CassandraDaemon.java 
 (line 174) Exception in thread Thread[CompactionExecutor:1,1,main]
  java.lang.RuntimeException: Failed to rename C:\development\tools\DataStax 
 Community\data\data\zzz\zzz\zzz.json to C:\development\tools\DataStax 
 Community\data\data\zzz\zzz\zzz-old.json
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:133)
   at 
 org.apache.cassandra.db.compaction.LeveledManifest.serialize(LeveledManifest.java:617)
   at 
 org.apache.cassandra.db.compaction.LeveledManifest.promote(LeveledManifest.java:229)
   at 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategy.handleNotification(LeveledCompactionStrategy.java:155)
   at 
 org.apache.cassandra.db.DataTracker.notifySSTablesChanged(DataTracker.java:410)
   at 
 org.apache.cassandra.db.DataTracker.replaceCompactedSSTables(DataTracker.java:223)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.replaceCompactedSSTables(ColumnFamilyStore.java:991)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:230)
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:58)
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:188)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
   at java.lang.Thread.run(Thread.java:662)
 {noformat}
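
A minimal, purely illustrative sketch of the kind of retry-on-rename workaround 
sometimes applied on Windows, where a briefly held file handle (anti-virus, a 
concurrent reader) can make a single renameTo() attempt fail; the class and 
helper names below are hypothetical and are not Cassandra's FileUtils API:

{noformat}
import java.io.File;
import java.io.IOException;

public class RetryingRename
{
    // Hypothetical helper: retry the rename a few times with a short backoff
    // before giving up, instead of failing on the first unsuccessful attempt.
    static void renameWithRetry(File from, File to, int attempts) throws IOException
    {
        for (int i = 0; i < attempts; i++)
        {
            if (from.renameTo(to))
                return;
            try
            {
                Thread.sleep(100); // give whoever holds the handle a moment to release it
            }
            catch (InterruptedException e)
            {
                Thread.currentThread().interrupt();
                break;
            }
        }
        throw new IOException("Failed to rename " + from + " to " + to);
    }

    public static void main(String[] args) throws IOException
    {
        renameWithRetry(new File("zzz.json"), new File("zzz-old.json"), 5);
    }
}
{noformat}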



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6131) JAVA_HOME on cassandra-env.sh is ignored on Debian packages

2013-10-03 Thread Eric Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13785471#comment-13785471
 ] 

Eric Evans commented on CASSANDRA-6131:
---

If setting {{JAVA_HOME}} from {{cassandra-env.sh}} ever worked before (from the 
Debian package), it was probably by accident, but there is no reason we can't 
support it going forward.

For what it's worth, I'd probably recommend using {{/etc/default/cassandra}} 
for Debian/Ubuntu, but it will work with either.

[~sebastianlacuesta], could you test the attached patch and let me know if this 
solves it for you?
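
As a sketch of the {{/etc/default/cassandra}} route (the JDK path below is just 
an example and will differ per machine; this assumes the attached patch makes 
the startup scripts source this file):

{noformat}
# /etc/default/cassandra -- assumed to be sourced by the Debian startup scripts
JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
{noformat}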

 JAVA_HOME on cassandra-env.sh is ignored on Debian packages
 ---

 Key: CASSANDRA-6131
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6131
 Project: Cassandra
  Issue Type: Bug
  Components: Packaging
 Environment: I've just upgraded to the 2.0.1 package from the Apache 
 repositories using apt. I had the JAVA_HOME environment variable set in 
 /etc/cassandra/cassandra-env.sh, but after the upgrade it only worked by 
 setting it in the /usr/sbin/cassandra script. I can't configure Java 7 
 system-wide, only for Cassandra.
 Off-topic: thanks for getting rid of the jsvc mess.
Reporter: Sebastián Lacuesta
Assignee: Eric Evans
  Labels: debian
 Fix For: 2.0.2

 Attachments: 6131.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6145) Include windows batch files for recently added shell commands (and some older ones)

2013-10-03 Thread Sven Delmas (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sven Delmas updated CASSANDRA-6145:
---

Summary: Include windows batch files for recently added shell commands (and 
some older ones)  (was: Include batch files for recently added shell commands 
(and some older ones))

 Include windows batch files for recently added shell commands (and some older 
 ones)
 ---

 Key: CASSANDRA-6145
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6145
 Project: Cassandra
  Issue Type: Improvement
  Components: Packaging
 Environment: Windows
Reporter: Sven Delmas
Priority: Trivial
 Attachments: 6145.patch


 - cassandra-shuffle.bat seems very different from the ones already there. As 
 far as I can tell the differences are cosmetic, so I consolidated it.
 - cassandra-stress.bat
 - cassandra-stressd.bat
 - cqlsh.bat
 - debug-cql.bat
 - sstableloader.bat
 - sstablemetadata.bat
 - sstablescrub.bat
 - sstablesplit.bat
 - sstableupgrade.bat
 Not all files apply to all branches, but I figure I better include all the 
 ones I have.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6145) Include windows batch files for recently added shell commands (and some older ones)

2013-10-03 Thread Sven Delmas (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sven Delmas updated CASSANDRA-6145:
---

Attachment: 6145.patch

This adds the batch files and also syncs up cassandra-shuffle.bat.

 Include windows batch files for recently added shell commands (and some older 
 ones)
 ---

 Key: CASSANDRA-6145
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6145
 Project: Cassandra
  Issue Type: Improvement
  Components: Packaging
 Environment: Windows
Reporter: Sven Delmas
Priority: Trivial
 Attachments: 6145.patch


 - cassandra-shuffle.bat seems very different from the ones already there. As 
 far as I can tell the differences are cosmetic, so I consolidated it.
 - cassandra-stress.bat
 - cassandra-stressd.bat
 - cqlsh.bat
 - debug-cql.bat
 - sstableloader.bat
 - sstablemetadata.bat
 - sstablescrub.bat
 - sstablesplit.bat
 - sstableupgrade.bat
 Not all files apply to all branches, but I figure I better include all the 
 ones I have.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-5078) save compaction merge counts in a system table

2013-10-03 Thread lantao yan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13785524#comment-13785524
 ] 

lantao yan commented on CASSANDRA-5078:
---

Many thanks for sharing! Yes, I was wrong; now I have a better understanding. 
As you said, two threads can work on collections of sstables inside the same 
ks.cf, so I have no doubt now. One more question: is 
UUIDGen.getTimeUUID().toString() a correct way to generate the timeuuid you 
suggest?
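
For reference, a minimal sketch of generating a TimeUUID this way; it assumes 
UUIDGen's decompose helper is present in this version, and keeps the value as a 
UUID/ByteBuffer rather than converting it to a String:

{noformat}
import java.nio.ByteBuffer;
import java.util.UUID;

import org.apache.cassandra.utils.UUIDGen;

public class TimeUUIDSketch
{
    public static void main(String[] args)
    {
        // getTimeUUID() returns a version-1 (time-based) java.util.UUID
        UUID id = UUIDGen.getTimeUUID();

        // For a timeuuid column the UUID can be stored as-is; decompose()
        // gives the ByteBuffer form used internally, no toString() needed.
        ByteBuffer serialized = UUIDGen.decompose(id);

        System.out.println(id + " -> " + serialized.remaining() + " bytes");
    }
}
{noformat}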

 save compaction merge counts in a system table
 --

 Key: CASSANDRA-5078
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5078
 Project: Cassandra
  Issue Type: Improvement
Reporter: Matthew F. Dennis
Assignee: lantao yan
Priority: Minor
  Labels: lhf
 Attachments: 5078-v3.txt, 5078-v4.txt, patch1.patch


 we should save the compaction merge stats from CASSANDRA-4894 in the system 
 table and probably expose them via JMX (and nodetool)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-5078) save compaction merge counts in a system table

2013-10-03 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13785529#comment-13785529
 ] 

Jonathan Ellis commented on CASSANDRA-5078:
---

Yes, but why turn it into a String?

 save compaction merge counts in a system table
 --

 Key: CASSANDRA-5078
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5078
 Project: Cassandra
  Issue Type: Improvement
Reporter: Matthew F. Dennis
Assignee: lantao yan
Priority: Minor
  Labels: lhf
 Attachments: 5078-v3.txt, 5078-v4.txt, patch1.patch


 we should save the compaction merge stats from CASSANDRA-4894 in the system 
 table and probably expose them via JMX (and nodetool)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-5078) save compaction merge counts in a system table

2013-10-03 Thread lantao yan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13785545#comment-13785545
 ] 

lantao yan commented on CASSANDRA-5078:
---

Do you mean we can use UUID as the column type?

 save compaction merge counts in a system table
 --

 Key: CASSANDRA-5078
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5078
 Project: Cassandra
  Issue Type: Improvement
Reporter: Matthew F. Dennis
Assignee: lantao yan
Priority: Minor
  Labels: lhf
 Attachments: 5078-v3.txt, 5078-v4.txt, patch1.patch


 we should save the compaction merge stats from CASSANDRA-4894 in the system 
 table and probably expose them via JMX (and nodetool)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


  1   2   >