[jira] [Created] (CASSANDRA-8985) java.lang.AssertionError: Added column does not sort as the last column

2015-03-18 Thread Maxim (JIRA)
Maxim created CASSANDRA-8985:


 Summary: java.lang.AssertionError: Added column does not sort as 
the last column
 Key: CASSANDRA-8985
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8985
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.0.13
OracleJDK1.7
Debian 7.8
Reporter: Maxim


After upgrading Cassandra from 2.0.12 to 2.0.13 I began to receive this error:
ERROR [ReadStage:1823] 2015-03-18 09:03:27,091 CassandraDaemon.java (line 199) Exception in thread Thread[ReadStage:1823,5,main]
java.lang.AssertionError: Added column does not sort as the last column
        at org.apache.cassandra.db.ArrayBackedSortedColumns.addColumn(ArrayBackedSortedColumns.java:116)
        at org.apache.cassandra.db.ColumnFamily.addColumn(ColumnFamily.java:121)
        at org.apache.cassandra.db.ColumnFamily.addIfRelevant(ColumnFamily.java:115)
        at org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:211)
        at org.apache.cassandra.db.filter.ExtendedFilter$WithClauses.prune(ExtendedFilter.java:290)
        at org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1792)
        at org.apache.cassandra.db.index.keys.KeysSearcher.search(KeysSearcher.java:54)
        at org.apache.cassandra.db.index.SecondaryIndexManager.search(SecondaryIndexManager.java:551)
        at org.apache.cassandra.db.ColumnFamilyStore.search(ColumnFamilyStore.java:1755)
        at org.apache.cassandra.db.RangeSliceCommand.executeLocally(RangeSliceCommand.java:135)
        at org.apache.cassandra.service.RangeSliceVerbHandler.doVerb(RangeSliceVerbHandler.java:39)
        at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:62)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)






[3/4] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-03-18 Thread benedict
Merge branch 'cassandra-2.1' into trunk

Conflicts:
src/java/org/apache/cassandra/config/Config.java
src/java/org/apache/cassandra/config/DatabaseDescriptor.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0e5e7d93
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0e5e7d93
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0e5e7d93

Branch: refs/heads/trunk
Commit: 0e5e7d93be1a8b9a91db387b8208daa90dc4a664
Parents: 21bdf87 8284964
Author: Benedict Elliott Smith bened...@apache.org
Authored: Wed Mar 18 10:39:32 2015 +
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Wed Mar 18 10:39:32 2015 +

--
 CHANGES.txt |   1 +
 .../org/apache/cassandra/config/Config.java |  22 +
 .../cassandra/config/DatabaseDescriptor.java|  10 +
 .../cassandra/net/IncomingTcpConnection.java|   5 +-
 .../cassandra/net/OutboundTcpConnection.java| 117 +++-
 .../cassandra/utils/CoalescingStrategies.java   | 544 +++
 .../utils/NanoTimeToCurrentTimeMillis.java  |  88 +++
 .../utils/CoalescingStrategiesTest.java | 445 +++
 .../utils/NanoTimeToCurrentTimeMillisTest.java  |  52 ++
 9 files changed, 1254 insertions(+), 30 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0e5e7d93/CHANGES.txt
--
diff --cc CHANGES.txt
index e090647,68df77e..2661723
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,77 -1,5 +1,78 @@@
 +3.0
 + * Add nodetool command to validate all sstables in a node (CASSANDRA-5791)
 + * Add WriteFailureException to native protocol, notify coordinator of
 +   write failures (CASSANDRA-8592)
 + * Convert SequentialWriter to nio (CASSANDRA-8709)
 + * Add role based access control (CASSANDRA-7653, 8650, 7216, 8760, 8849, 
8761, 8850)
 + * Record client ip address in tracing sessions (CASSANDRA-8162)
 + * Indicate partition key columns in response metadata for prepared
 +   statements (CASSANDRA-7660)
 + * Merge UUIDType and TimeUUIDType parse logic (CASSANDRA-8759)
 + * Avoid memory allocation when searching index summary (CASSANDRA-8793)
 + * Optimise (Time)?UUIDType Comparisons (CASSANDRA-8730)
 + * Make CRC32Ex into a separate maven dependency (CASSANDRA-8836)
 + * Use preloaded jemalloc w/ Unsafe (CASSANDRA-8714)
 + * Avoid accessing partitioner through StorageProxy (CASSANDRA-8244, 8268)
 + * Upgrade Metrics library and remove deprecated metrics (CASSANDRA-5657)
 + * Serializing Row cache alternative, fully off heap (CASSANDRA-7438)
 + * Duplicate rows returned when in clause has repeated values (CASSANDRA-6707)
 + * Make CassandraException unchecked, extend RuntimeException (CASSANDRA-8560)
 + * Support direct buffer decompression for reads (CASSANDRA-8464)
 + * DirectByteBuffer compatible LZ4 methods (CASSANDRA-7039)
 + * Group sstables for anticompaction correctly (CASSANDRA-8578)
 + * Add ReadFailureException to native protocol, respond
 +   immediately when replicas encounter errors while handling
 +   a read request (CASSANDRA-7886)
 + * Switch CommitLogSegment from RandomAccessFile to nio (CASSANDRA-8308)
 + * Allow mixing token and partition key restrictions (CASSANDRA-7016)
 + * Support index key/value entries on map collections (CASSANDRA-8473)
 + * Modernize schema tables (CASSANDRA-8261)
 + * Support for user-defined aggregation functions (CASSANDRA-8053)
 + * Fix NPE in SelectStatement with empty IN values (CASSANDRA-8419)
 + * Refactor SelectStatement, return IN results in natural order instead
 +   of IN value list order and ignore duplicate values in partition key IN 
restrictions (CASSANDRA-7981)
 + * Support UDTs, tuples, and collections in user-defined
 +   functions (CASSANDRA-7563)
 + * Fix aggregate fn results on empty selection, result column name,
 +   and cqlsh parsing (CASSANDRA-8229)
 + * Mark sstables as repaired after full repair (CASSANDRA-7586)
 + * Extend Descriptor to include a format value and refactor reader/writer
 +   APIs (CASSANDRA-7443)
 + * Integrate JMH for microbenchmarks (CASSANDRA-8151)
 + * Keep sstable levels when bootstrapping (CASSANDRA-7460)
 + * Add Sigar library and perform basic OS settings check on startup 
(CASSANDRA-7838)
 + * Support for aggregation functions (CASSANDRA-4914)
 + * Remove cassandra-cli (CASSANDRA-7920)
 + * Accept dollar quoted strings in CQL (CASSANDRA-7769)
 + * Make assassinate a first class command (CASSANDRA-7935)
 + * Support IN clause on any partition key column (CASSANDRA-7855)
 + * Support IN clause on any clustering column (CASSANDRA-4762)
 + * Improve compaction logging (CASSANDRA-7818)
 + * Remove YamlFileNetworkTopologySnitch (CASSANDRA-7917)
 + * Do anticompaction in groups (CASSANDRA-6851)
 + * 

[1/4] cassandra git commit: Introduce intra-cluster message coalescing

2015-03-18 Thread benedict
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 db1a741ea -> 828496492
  refs/heads/trunk 21bdf8700 -> 144644bbf


Introduce intra-cluster message coalescing

patch by ariel; reviewed by benedict for CASSANDRA-8692


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/82849649
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/82849649
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/82849649

Branch: refs/heads/cassandra-2.1
Commit: 828496492c51d7437b690999205ecc941f41a0a9
Parents: db1a741
Author: Ariel Weisberg ariel.weisb...@datastax.com
Authored: Wed Mar 18 10:37:28 2015 +
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Wed Mar 18 10:38:04 2015 +

--
 CHANGES.txt |   1 +
 .../org/apache/cassandra/config/Config.java |  22 +
 .../cassandra/config/DatabaseDescriptor.java|  10 +
 .../cassandra/net/IncomingTcpConnection.java|   5 +-
 .../cassandra/net/OutboundTcpConnection.java| 117 +++-
 .../cassandra/utils/CoalescingStrategies.java   | 544 +++
 .../utils/NanoTimeToCurrentTimeMillis.java  |  88 +++
 .../utils/CoalescingStrategiesTest.java | 445 +++
 .../utils/NanoTimeToCurrentTimeMillisTest.java  |  52 ++
 9 files changed, 1254 insertions(+), 30 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/82849649/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 56a7164..68df77e 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.4
+ * Introduce intra-cluster message coalescing (CASSANDRA-8692)
  * DatabaseDescriptor throws NPE when rpc_interface is used (CASSANDRA-8839)
  * Don't check if an sstable is live for offline compactions (CASSANDRA-8841)
  * Don't set clientMode in SSTableLoader (CASSANDRA-8238)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/82849649/src/java/org/apache/cassandra/config/Config.java
--
diff --git a/src/java/org/apache/cassandra/config/Config.java 
b/src/java/org/apache/cassandra/config/Config.java
index fbbd1dd..378a1ad 100644
--- a/src/java/org/apache/cassandra/config/Config.java
+++ b/src/java/org/apache/cassandra/config/Config.java
@@ -39,6 +39,12 @@ import org.apache.cassandra.utils.FBUtilities;
  */
 public class Config
 {
+/*
+ * Prefix for Java properties for internal Cassandra configuration options
+ */
+public static final String PROPERTY_PREFIX = "cassandra.";
+
+
 public String cluster_name = "Test Cluster";
 public String authenticator;
 public String authorizer;
@@ -223,6 +229,22 @@ public class Config
 private static final CsvPreference STANDARD_SURROUNDING_SPACES_NEED_QUOTES 
= new CsvPreference.Builder(CsvPreference.STANDARD_PREFERENCE)

   .surroundingSpacesNeedQuotes(true).build();
 
+/*
+ * Strategy to use for coalescing messages in OutboundTcpConnection.
+ * Can be fixed, movingaverage, timehorizon, disabled. Setting is case and leading/trailing
+ * whitespace insensitive. You can also specify a subclass of CoalescingStrategies.CoalescingStrategy by name.
+ */
+public String otc_coalescing_strategy = "DISABLED";
+
+/*
+ * How many microseconds to wait for coalescing. For the fixed strategy this is the amount of time after the first
+ * message is received before it will be sent with any accompanying messages. For moving average this is the
+ * maximum amount of time that will be waited as well as the interval at which messages must arrive on average
+ * for coalescing to be enabled.
+ */
+public static final int otc_coalescing_window_us_default = 200;
+public int otc_coalescing_window_us = otc_coalescing_window_us_default;
+
 public static boolean getOutboundBindAny()
 {
 return outboundBindAny;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/82849649/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 65cec9c..d0db9f4 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -1682,4 +1682,14 @@ public class DatabaseDescriptor
 String arch = System.getProperty("os.arch");
 return arch.contains("64") || arch.contains("sparcv9");
 }
+
+public static String getOtcCoalescingStrategy()
+{
+return 

[2/4] cassandra git commit: Introduce intra-cluster message coalescing

2015-03-18 Thread benedict
Introduce intra-cluster message coalescing

patch by ariel; reviewed by benedict for CASSANDRA-8692


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/82849649
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/82849649
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/82849649

Branch: refs/heads/trunk
Commit: 828496492c51d7437b690999205ecc941f41a0a9
Parents: db1a741
Author: Ariel Weisberg ariel.weisb...@datastax.com
Authored: Wed Mar 18 10:37:28 2015 +
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Wed Mar 18 10:38:04 2015 +

--
 CHANGES.txt |   1 +
 .../org/apache/cassandra/config/Config.java |  22 +
 .../cassandra/config/DatabaseDescriptor.java|  10 +
 .../cassandra/net/IncomingTcpConnection.java|   5 +-
 .../cassandra/net/OutboundTcpConnection.java| 117 +++-
 .../cassandra/utils/CoalescingStrategies.java   | 544 +++
 .../utils/NanoTimeToCurrentTimeMillis.java  |  88 +++
 .../utils/CoalescingStrategiesTest.java | 445 +++
 .../utils/NanoTimeToCurrentTimeMillisTest.java  |  52 ++
 9 files changed, 1254 insertions(+), 30 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/82849649/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 56a7164..68df77e 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.4
+ * Introduce intra-cluster message coalescing (CASSANDRA-8692)
  * DatabaseDescriptor throws NPE when rpc_interface is used (CASSANDRA-8839)
  * Don't check if an sstable is live for offline compactions (CASSANDRA-8841)
  * Don't set clientMode in SSTableLoader (CASSANDRA-8238)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/82849649/src/java/org/apache/cassandra/config/Config.java
--
diff --git a/src/java/org/apache/cassandra/config/Config.java 
b/src/java/org/apache/cassandra/config/Config.java
index fbbd1dd..378a1ad 100644
--- a/src/java/org/apache/cassandra/config/Config.java
+++ b/src/java/org/apache/cassandra/config/Config.java
@@ -39,6 +39,12 @@ import org.apache.cassandra.utils.FBUtilities;
  */
 public class Config
 {
+/*
+ * Prefix for Java properties for internal Cassandra configuration options
+ */
+public static final String PROPERTY_PREFIX = "cassandra.";
+
+
 public String cluster_name = "Test Cluster";
 public String authenticator;
 public String authorizer;
@@ -223,6 +229,22 @@ public class Config
 private static final CsvPreference STANDARD_SURROUNDING_SPACES_NEED_QUOTES 
= new CsvPreference.Builder(CsvPreference.STANDARD_PREFERENCE)

   .surroundingSpacesNeedQuotes(true).build();
 
+/*
+ * Strategy to use for coalescing messages in OutboundTcpConnection.
+ * Can be fixed, movingaverage, timehorizon, disabled. Setting is case and leading/trailing
+ * whitespace insensitive. You can also specify a subclass of CoalescingStrategies.CoalescingStrategy by name.
+ */
+public String otc_coalescing_strategy = "DISABLED";
+
+/*
+ * How many microseconds to wait for coalescing. For the fixed strategy this is the amount of time after the first
+ * message is received before it will be sent with any accompanying messages. For moving average this is the
+ * maximum amount of time that will be waited as well as the interval at which messages must arrive on average
+ * for coalescing to be enabled.
+ */
+public static final int otc_coalescing_window_us_default = 200;
+public int otc_coalescing_window_us = otc_coalescing_window_us_default;
+
 public static boolean getOutboundBindAny()
 {
 return outboundBindAny;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/82849649/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 65cec9c..d0db9f4 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -1682,4 +1682,14 @@ public class DatabaseDescriptor
 String arch = System.getProperty("os.arch");
 return arch.contains("64") || arch.contains("sparcv9");
 }
+
+public static String getOtcCoalescingStrategy()
+{
+return conf.otc_coalescing_strategy;
+}
+
+public static int getOtcCoalescingWindow()
+{
+return conf.otc_coalescing_window_us;
+  
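
The comment block added to Config.java above describes the fixed coalescing window: after the first message arrives, the sender waits up to otc_coalescing_window_us microseconds for more messages before flushing the batch. A minimal standalone sketch of that idea (class and method names are illustrative; this is not the CoalescingStrategies code in the patch):

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Fixed coalescing window: block for the first message, then keep draining for up to
// windowMicros before handing the whole batch to the flush step.
public class FixedWindowCoalescer<M>
{
    private final BlockingQueue<M> queue = new LinkedBlockingQueue<>();
    private final long windowMicros;

    public FixedWindowCoalescer(long windowMicros)
    {
        this.windowMicros = windowMicros;   // e.g. otc_coalescing_window_us = 200
    }

    public void submit(M message)
    {
        queue.add(message);
    }

    /** Blocks until at least one message is available, then coalesces for one window. */
    public List<M> nextBatch() throws InterruptedException
    {
        List<M> batch = new ArrayList<>();
        batch.add(queue.take());                            // wait for the first message
        long deadline = System.nanoTime() + TimeUnit.MICROSECONDS.toNanos(windowMicros);
        long remaining;
        while ((remaining = deadline - System.nanoTime()) > 0)
        {
            M next = queue.poll(remaining, TimeUnit.NANOSECONDS);
            if (next == null)
                break;                                      // window elapsed with nothing new
            batch.add(next);                                // accompanies the first message
        }
        return batch;                                       // caller serializes and flushes
    }
}
{code}

With otc_coalescing_strategy left at "DISABLED" no window is applied; the movingaverage and timehorizon strategies described in the comment adjust whether and how long to wait based on the observed message arrival interval.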

[4/4] cassandra git commit: Partition intra-cluster message streams by size, not type

2015-03-18 Thread benedict
Partition intra-cluster message streams by size, not type

patch by ariel; reviewed by benedict for CASSANDRA-8789


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/144644bb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/144644bb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/144644bb

Branch: refs/heads/trunk
Commit: 144644bbf77a546c45db384e2dbc18e13f65c9ce
Parents: 0e5e7d9
Author: Ariel Weisberg ariel.weisb...@datastax.com
Authored: Wed Mar 18 10:44:22 2015 +
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Wed Mar 18 10:44:22 2015 +

--
 CHANGES.txt |  1 +
 .../cassandra/metrics/ConnectionMetrics.java| 61 
 .../org/apache/cassandra/net/MessageOut.java| 36 +++-
 .../apache/cassandra/net/MessagingService.java  | 34 ++-
 .../cassandra/net/MessagingServiceMBean.java| 25 
 .../net/OutboundTcpConnectionPool.java  | 44 +++---
 .../org/apache/cassandra/tools/NodeTool.java| 32 +-
 .../apache/cassandra/utils/StatusLogger.java| 14 ++---
 8 files changed, 152 insertions(+), 95 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/144644bb/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 2661723..ae98f56 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0
+ * Partition intra-cluster message streams by size, not type (CASSANDRA-8789)
  * Add nodetool command to validate all sstables in a node (CASSANDRA-5791)
  * Add WriteFailureException to native protocol, notify coordinator of
write failures (CASSANDRA-8592)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/144644bb/src/java/org/apache/cassandra/metrics/ConnectionMetrics.java
--
diff --git a/src/java/org/apache/cassandra/metrics/ConnectionMetrics.java 
b/src/java/org/apache/cassandra/metrics/ConnectionMetrics.java
index 60020b3..73dd0bd 100644
--- a/src/java/org/apache/cassandra/metrics/ConnectionMetrics.java
+++ b/src/java/org/apache/cassandra/metrics/ConnectionMetrics.java
@@ -38,16 +38,19 @@ public class ConnectionMetrics
 public static final Meter totalTimeouts = Metrics.meter(DefaultNameFactory.createMetricName(TYPE_NAME, "TotalTimeouts", null));
 
 public final String address;
-/** Pending tasks for Command(Mutations, Read etc) TCP Connections */
-public final Gauge<Integer> commandPendingTasks;
-/** Completed tasks for Command(Mutations, Read etc) TCP Connections */
-public final Gauge<Long> commandCompletedTasks;
-/** Dropped tasks for Command(Mutations, Read etc) TCP Connections */
-public final Gauge<Long> commandDroppedTasks;
-/** Pending tasks for Response(GOSSIP & RESPONSE) TCP Connections */
-public final Gauge<Integer> responsePendingTasks;
-/** Completed tasks for Response(GOSSIP & RESPONSE) TCP Connections */
-public final Gauge<Long> responseCompletedTasks;
+/** Pending tasks for large message TCP Connections */
+public final Gauge<Integer> largeMessagePendingTasks;
+/** Completed tasks for large message TCP Connections */
+public final Gauge<Long> largeMessageCompletedTasks;
+/** Dropped tasks for large message TCP Connections */
+public final Gauge<Long> largeMessageDroppedTasks;
+/** Pending tasks for small message TCP Connections */
+public final Gauge<Integer> smallMessagePendingTasks;
+/** Completed tasks for small message TCP Connections */
+public final Gauge<Long> smallMessageCompletedTasks;
+/** Dropped tasks for small message TCP Connections */
+public final Gauge<Long> smallMessageDroppedTasks;
+
 /** Number of timeouts for specific IP */
 public final Meter timeouts;
 
@@ -66,39 +69,46 @@ public class ConnectionMetrics
 
 factory = new DefaultNameFactory("Connection", address);

-commandPendingTasks = Metrics.register(factory.createMetricName("CommandPendingTasks"), new Gauge<Integer>()
+largeMessagePendingTasks = Metrics.register(factory.createMetricName("LargeMessagePendingTasks"), new Gauge<Integer>()
 {
 public Integer getValue()
 {
-return connectionPool.cmdCon.getPendingMessages();
+return connectionPool.largeMessages.getPendingMessages();
 }
 });
-commandCompletedTasks = Metrics.register(factory.createMetricName("CommandCompletedTasks"), new Gauge<Long>()
+largeMessageCompletedTasks = Metrics.register(factory.createMetricName("LargeMessageCompletedTasks"), new Gauge<Long>()
 {
 public Long getValue()
 {
-return 

[2/4] cassandra git commit: Use correct bounds for page cache eviction of compressed files

2015-03-18 Thread benedict
Use correct bounds for page cache eviction of compressed files

patch by benedict; reviewed by marcus for CASSANDRA-8746


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/521b3631
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/521b3631
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/521b3631

Branch: refs/heads/trunk
Commit: 521b36311ad23f3defd6abf36becda61388add9c
Parents: 572ef50
Author: Benedict Elliott Smith bened...@apache.org
Authored: Wed Mar 18 11:08:08 2015 +
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Wed Mar 18 11:08:08 2015 +

--
 CHANGES.txt |  1 +
 .../cassandra/io/sstable/SSTableReader.java | 40 +++-
 .../io/util/CompressedPoolingSegmentedFile.java |  7 
 .../io/util/CompressedSegmentedFile.java|  7 
 .../apache/cassandra/io/util/SegmentedFile.java |  6 +++
 5 files changed, 27 insertions(+), 34 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/521b3631/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 36bdb39..3f96330 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.4
+ * Use correct bounds for page cache eviction of compressed files 
(CASSANDRA-8746)
  * SSTableScanner enforces its bounds (CASSANDRA-8946)
  * Cleanup cell equality (CASSANDRA-8947)
  * Introduce intra-cluster message coalescing (CASSANDRA-8692)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/521b3631/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
index 41e4adb..f42bfc7 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
@@ -896,8 +896,8 @@ public class SSTableReader extends SSTable implements 
SelfRefCountedSSTableRead
 {
 public void run()
 {
-CLibrary.trySkipCache(dfile.path, 0, dataStart);
-CLibrary.trySkipCache(ifile.path, 0, indexStart);
+dfile.dropPageCache(dataStart);
+ifile.dropPageCache(indexStart);
 if (runOnClose != null)
 runOnClose.run();
 }
@@ -920,8 +920,8 @@ public class SSTableReader extends SSTable implements 
SelfRefCountedSSTableRead
 {
 public void run()
 {
-CLibrary.trySkipCache(dfile.path, 0, 0);
-CLibrary.trySkipCache(ifile.path, 0, 0);
+dfile.dropPageCache(0);
+ifile.dropPageCache(0);
 runOnClose.run();
 }
 };
@@ -2181,8 +2181,8 @@ public class SSTableReader extends SSTable implements 
SelfRefCountedSSTableRead
 if (isCompacted.get())
 SystemKeyspace.clearSSTableReadMeter(desc.ksname, desc.cfname, 
desc.generation);
 // don't ideally want to dropPageCache for the file until all 
instances have been released
-dropPageCache(desc.filenameFor(Component.DATA));
-dropPageCache(desc.filenameFor(Component.PRIMARY_INDEX));
+CLibrary.trySkipCache(desc.filenameFor(Component.DATA), 0, 0);
+CLibrary.trySkipCache(desc.filenameFor(Component.PRIMARY_INDEX), 
0, 0);
 }
 
 public String name()
@@ -2204,32 +2204,4 @@ public class SSTableReader extends SSTable implements 
SelfRefCountedSSTableRead
 return refc;
 }
 }
-
-private static void dropPageCache(String filePath)
-{
-RandomAccessFile file = null;
-
-try
-{
-file = new RandomAccessFile(filePath, "r");
-
-int fd = CLibrary.getfd(file.getFD());
-
-if (fd > 0)
-{
-if (logger.isDebugEnabled())
-logger.debug(String.format("Dropping page cache of file %s.", filePath));
-
-CLibrary.trySkipCache(fd, 0, 0);
-}
-}
-catch (IOException e)
-{
-// we don't care if cache cleanup fails
-}
-finally
-{
-FileUtils.closeQuietly(file);
-}
-}
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/521b3631/src/java/org/apache/cassandra/io/util/CompressedPoolingSegmentedFile.java
--
diff --git 

[4/4] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-03-18 Thread benedict
Merge branch 'cassandra-2.1' into trunk

Conflicts:
src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/abd528a0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/abd528a0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/abd528a0

Branch: refs/heads/trunk
Commit: abd528a0f5ea979ebfc485f03f64085f6affca41
Parents: 7aefd91 521b363
Author: Benedict Elliott Smith bened...@apache.org
Authored: Wed Mar 18 11:08:54 2015 +
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Wed Mar 18 11:08:54 2015 +

--
 CHANGES.txt |  1 +
 .../io/sstable/format/SSTableReader.java| 40 +++-
 .../io/util/CompressedPoolingSegmentedFile.java |  7 
 .../io/util/CompressedSegmentedFile.java|  7 
 .../apache/cassandra/io/util/SegmentedFile.java |  6 +++
 5 files changed, 27 insertions(+), 34 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/abd528a0/CHANGES.txt
--
diff --cc CHANGES.txt
index 62b1079,3f96330..c07599a
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,78 -1,5 +1,79 @@@
 +3.0
 + * Partition intra-cluster message streams by size, not type (CASSANDRA-8789)
 + * Add nodetool command to validate all sstables in a node (CASSANDRA-5791)
 + * Add WriteFailureException to native protocol, notify coordinator of
 +   write failures (CASSANDRA-8592)
 + * Convert SequentialWriter to nio (CASSANDRA-8709)
 + * Add role based access control (CASSANDRA-7653, 8650, 7216, 8760, 8849, 
8761, 8850)
 + * Record client ip address in tracing sessions (CASSANDRA-8162)
 + * Indicate partition key columns in response metadata for prepared
 +   statements (CASSANDRA-7660)
 + * Merge UUIDType and TimeUUIDType parse logic (CASSANDRA-8759)
 + * Avoid memory allocation when searching index summary (CASSANDRA-8793)
 + * Optimise (Time)?UUIDType Comparisons (CASSANDRA-8730)
 + * Make CRC32Ex into a separate maven dependency (CASSANDRA-8836)
 + * Use preloaded jemalloc w/ Unsafe (CASSANDRA-8714)
 + * Avoid accessing partitioner through StorageProxy (CASSANDRA-8244, 8268)
 + * Upgrade Metrics library and remove deprecated metrics (CASSANDRA-5657)
 + * Serializing Row cache alternative, fully off heap (CASSANDRA-7438)
 + * Duplicate rows returned when in clause has repeated values (CASSANDRA-6707)
 + * Make CassandraException unchecked, extend RuntimeException (CASSANDRA-8560)
 + * Support direct buffer decompression for reads (CASSANDRA-8464)
 + * DirectByteBuffer compatible LZ4 methods (CASSANDRA-7039)
 + * Group sstables for anticompaction correctly (CASSANDRA-8578)
 + * Add ReadFailureException to native protocol, respond
 +   immediately when replicas encounter errors while handling
 +   a read request (CASSANDRA-7886)
 + * Switch CommitLogSegment from RandomAccessFile to nio (CASSANDRA-8308)
 + * Allow mixing token and partition key restrictions (CASSANDRA-7016)
 + * Support index key/value entries on map collections (CASSANDRA-8473)
 + * Modernize schema tables (CASSANDRA-8261)
 + * Support for user-defined aggregation functions (CASSANDRA-8053)
 + * Fix NPE in SelectStatement with empty IN values (CASSANDRA-8419)
 + * Refactor SelectStatement, return IN results in natural order instead
 +   of IN value list order and ignore duplicate values in partition key IN 
restrictions (CASSANDRA-7981)
 + * Support UDTs, tuples, and collections in user-defined
 +   functions (CASSANDRA-7563)
 + * Fix aggregate fn results on empty selection, result column name,
 +   and cqlsh parsing (CASSANDRA-8229)
 + * Mark sstables as repaired after full repair (CASSANDRA-7586)
 + * Extend Descriptor to include a format value and refactor reader/writer
 +   APIs (CASSANDRA-7443)
 + * Integrate JMH for microbenchmarks (CASSANDRA-8151)
 + * Keep sstable levels when bootstrapping (CASSANDRA-7460)
 + * Add Sigar library and perform basic OS settings check on startup 
(CASSANDRA-7838)
 + * Support for aggregation functions (CASSANDRA-4914)
 + * Remove cassandra-cli (CASSANDRA-7920)
 + * Accept dollar quoted strings in CQL (CASSANDRA-7769)
 + * Make assassinate a first class command (CASSANDRA-7935)
 + * Support IN clause on any partition key column (CASSANDRA-7855)
 + * Support IN clause on any clustering column (CASSANDRA-4762)
 + * Improve compaction logging (CASSANDRA-7818)
 + * Remove YamlFileNetworkTopologySnitch (CASSANDRA-7917)
 + * Do anticompaction in groups (CASSANDRA-6851)
 + * Support user-defined functions (CASSANDRA-7395, 7526, 7562, 7740, 7781, 
7929,
 +   7924, 7812, 8063, 7813, 7708)
 + * Permit configurable timestamps with cassandra-stress (CASSANDRA-7416)
 + * Move sstable 

[jira] [Assigned] (CASSANDRA-3852) use LIFO queueing policy when queue size exceeds thresholds

2015-03-18 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict reassigned CASSANDRA-3852:
---

Assignee: Benedict

 use LIFO queueing policy when queue size exceeds thresholds
 ---

 Key: CASSANDRA-3852
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3852
 Project: Cassandra
  Issue Type: Improvement
Reporter: Peter Schuller
Assignee: Benedict
  Labels: performance
 Fix For: 3.1


 A strict FIFO policy for queueing (between stages) is detrimental to latency 
 and forward progress. Whenever a node is saturated beyond incoming request 
 rate, *all* requests become slow. If it is consistently saturated, you start 
 effectively timing out on *all* requests.
 A much better strategy from the point of view of latency is to serve a subset 
 of requests quickly and let some time out, rather than letting all of them 
 either time out or be slow.
 Care must be taken such that:
 * We still guarantee that requests are processed reasonably timely (we 
 couldn't go strict LIFO for example as that would result in requests getting 
 stuck potentially forever on a loaded node).
 * Maybe, depending on the previous point's solution, ensure that some 
 requests bypass the policy and get prioritized (e.g., schema migrations, or 
 anything internal to a node).
 A possible implementation is to go LIFO whenever there are requests in the 
 queue that are older than N milliseconds (or when the queue exceeds a certain 
 size, etc); a minimal sketch of this idea follows the benefits list below.
 Benefits:
 * In all cases where the client is directly, or indirectly through other 
 layers, driving a system with limited concurrency (e.g., a thread pool of size 
 X serving some incoming request rate), it is *much* better for a few requests 
 to time out while most are serviced quickly than for all requests to become 
 slow, as it doesn't explode concurrency. Think any random non-super-advanced 
 PHP app, Ruby web app, Java servlet based app, etc. Essentially, it optimizes 
 very heavily for improved average latencies.
 * Systems with strict p95/p99/p999 requirements on latencies should greatly 
 benefit from such a policy. For example, suppose you have a system at 85% of 
 capacity, and it takes a write spike (or has a hiccup like GC pause, blocking 
 on a commit log write, etc). Suppose the hiccup racks up 500 ms worth of 
 requests. At 15% margin at steady state, that takes 500ms * 100/15 = 3.2 
 seconds to recover. Instead of *all* requests for an entire 3.2 second window 
 being slow, we'd serve requests quickly for 2.7 of those seconds, with the 
 incoming requests during that 500 ms interval being the ones primarily 
 affected. The flip side though is that once you're at the point where more 
 than N percent of requests end up having to wait for others to take LIFO 
 priority, the p(100-N) latencies will actually be *worse* than without this 
 change (but at this point you have to consider what the root reason for those 
 pXX requirements are).
 * In the case of complete saturation, it allows forward progress. Suppose 
 you're taking 25% more traffic than you are able to handle. Instead of 
 getting backed up and ending up essentially timing out *every single 
 request*, you will succeed in processing up to 75% of them (I say up to 
 because it depends; for example on a {{QUORUM}} request you need at least two 
 of the requests from the co-ordinator to succeed so the percentage is brought 
 down) and allowing clients to make forward progress and get work done, rather 
 than being stuck.
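
A minimal, self-contained sketch of the policy proposed above: FIFO in the normal case, LIFO only while the head of the queue has waited longer than a threshold. Class and field names are illustrative only, not a proposed patch:

{code}
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.TimeUnit;

// Sketch of an adaptive FIFO/LIFO queue: behave as FIFO normally, but switch to LIFO
// while the oldest queued task has waited longer than maxQueueTimeMillis, so fresh
// requests are still served quickly under overload while stale ones are shed by timeout.
public class AdaptiveLifoQueue<T>
{
    private static final class Entry<T>
    {
        final T task;
        final long enqueuedAtNanos = System.nanoTime();
        Entry(T task) { this.task = task; }
    }

    private final Deque<Entry<T>> deque = new ArrayDeque<>();
    private final long maxQueueTimeNanos;

    public AdaptiveLifoQueue(long maxQueueTimeMillis)
    {
        this.maxQueueTimeNanos = TimeUnit.MILLISECONDS.toNanos(maxQueueTimeMillis);
    }

    public synchronized void offer(T task)
    {
        deque.addLast(new Entry<>(task));
    }

    public synchronized T poll()
    {
        if (deque.isEmpty())
            return null;
        // Backlogged: serve the newest request (LIFO) so it can still meet its deadline;
        // otherwise serve in arrival order (FIFO).
        boolean backlogged = System.nanoTime() - deque.peekFirst().enqueuedAtNanos > maxQueueTimeNanos;
        return (backlogged ? deque.pollLast() : deque.pollFirst()).task;
    }
}
{code}

Requests that sit past their client timeout would additionally need to be dropped, and prioritized internal traffic (e.g. schema migrations) allowed to bypass the policy, per the caveats above.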





[jira] [Assigned] (CASSANDRA-8940) Inconsistent select count and select distinct

2015-03-18 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer reassigned CASSANDRA-8940:
-

Assignee: Benjamin Lerer

 Inconsistent select count and select distinct
 -

 Key: CASSANDRA-8940
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8940
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 2.1.2
Reporter: Frens Jan Rumph
Assignee: Benjamin Lerer

 When performing {{select count( * ) from ...}} I expect the results to be 
 consistent over multiple query executions if the table at hand is not written 
 to / deleted from in the meantime. However, in my set-up it is not. The 
 counts returned vary considerably (several percent). The same holds for 
 {{select distinct partition-key-columns from ...}}.
 I have a table in a keyspace with replication_factor = 1 which is something 
 like:
 {code}
 CREATE TABLE tbl (
 id frozen<id_type>,
 bucket bigint,
 offset int,
 value double,
 PRIMARY KEY ((id, bucket), offset)
 )
 {code}
 The frozen udt is:
 {code}
 CREATE TYPE id_type (
 tags map<text, text>
 );
 {code}
 The table contains around 35k rows (I'm not trying to be funny here ...). The 
 consistency level for the queries was ONE.
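
A small reproduction sketch of the check described above, using the DataStax Java driver purely for illustration (contact point and keyspace name are assumptions; the driver's default consistency level of ONE matches the report): run the same count repeatedly against the idle table and flag any change.

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class CountStability
{
    public static void main(String[] args)
    {
        // Assumed contact point and keyspace; the table name is taken from the schema above.
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("my_keyspace"))
        {
            long previous = -1;
            for (int i = 0; i < 10; i++)
            {
                long count = session.execute("SELECT count(*) FROM tbl").one().getLong(0);
                if (previous != -1 && count != previous)
                    System.out.printf("Inconsistent count: %d (previous run saw %d)%n", count, previous);
                previous = count;
            }
        }
    }
}
{code}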





[1/3] cassandra git commit: Cleanup cell equality

2015-03-18 Thread benedict
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 828496492 -> 69ffd1fa0
  refs/heads/trunk 7bef6f93a -> c2dc31c1f


Cleanup cell equality

patch by benedict; reviewed by sylvain for CASSANDRA-8947


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/69ffd1fa
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/69ffd1fa
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/69ffd1fa

Branch: refs/heads/cassandra-2.1
Commit: 69ffd1fa01dd9a5b7118cbcaf63dd2dffc1fa508
Parents: 8284964
Author: Benedict Elliott Smith bened...@apache.org
Authored: Wed Mar 18 11:00:29 2015 +
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Wed Mar 18 11:00:29 2015 +

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/db/AbstractCell.java   |  3 +-
 .../apache/cassandra/db/BufferCounterCell.java  |  7 +---
 .../apache/cassandra/db/BufferDeletedCell.java  |  5 ---
 .../apache/cassandra/db/BufferExpiringCell.java | 13 ++
 .../apache/cassandra/db/NativeCounterCell.java  |  8 +---
 .../apache/cassandra/db/NativeDeletedCell.java  |  6 ---
 .../apache/cassandra/db/NativeExpiringCell.java | 13 ++
 test/unit/org/apache/cassandra/db/CellTest.java | 44 
 9 files changed, 58 insertions(+), 42 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/69ffd1fa/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 68df77e..2af8df6 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.4
+ * Cleanup cell equality (CASSANDRA-8947)
  * Introduce intra-cluster message coalescing (CASSANDRA-8692)
  * DatabaseDescriptor throws NPE when rpc_interface is used (CASSANDRA-8839)
  * Don't check if an sstable is live for offline compactions (CASSANDRA-8841)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/69ffd1fa/src/java/org/apache/cassandra/db/AbstractCell.java
--
diff --git a/src/java/org/apache/cassandra/db/AbstractCell.java 
b/src/java/org/apache/cassandra/db/AbstractCell.java
index f27871f..37d483f 100644
--- a/src/java/org/apache/cassandra/db/AbstractCell.java
+++ b/src/java/org/apache/cassandra/db/AbstractCell.java
@@ -136,7 +136,8 @@ public abstract class AbstractCell implements Cell
 
 public boolean equals(Cell cell)
 {
-return timestamp() == cell.timestamp() && name().equals(cell.name()) && value().equals(cell.value());
+return timestamp() == cell.timestamp() && name().equals(cell.name()) && value().equals(cell.value())
+       && serializationFlags() == cell.serializationFlags();
 }
 
 public int hashCode()

http://git-wip-us.apache.org/repos/asf/cassandra/blob/69ffd1fa/src/java/org/apache/cassandra/db/BufferCounterCell.java
--
diff --git a/src/java/org/apache/cassandra/db/BufferCounterCell.java 
b/src/java/org/apache/cassandra/db/BufferCounterCell.java
index bdd97a7..827182a 100644
--- a/src/java/org/apache/cassandra/db/BufferCounterCell.java
+++ b/src/java/org/apache/cassandra/db/BufferCounterCell.java
@@ -171,11 +171,6 @@ public class BufferCounterCell extends BufferCell 
implements CounterCell
 @Override
 public boolean equals(Cell cell)
 {
-return cell instanceof CounterCell && equals((CounterCell) cell);
-}
-
-public boolean equals(CounterCell cell)
-{
-return super.equals(cell) && timestampOfLastDelete == cell.timestampOfLastDelete();
+return super.equals(cell) && timestampOfLastDelete == ((CounterCell) cell).timestampOfLastDelete();
 }
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/69ffd1fa/src/java/org/apache/cassandra/db/BufferDeletedCell.java
--
diff --git a/src/java/org/apache/cassandra/db/BufferDeletedCell.java 
b/src/java/org/apache/cassandra/db/BufferDeletedCell.java
index bcc170f..a38f322 100644
--- a/src/java/org/apache/cassandra/db/BufferDeletedCell.java
+++ b/src/java/org/apache/cassandra/db/BufferDeletedCell.java
@@ -107,11 +107,6 @@ public class BufferDeletedCell extends BufferCell 
implements DeletedCell
 throw new MarshalException("The local deletion time should not be negative");
 }
 
-public boolean equals(Cell cell)
-{
-return timestamp() == cell.timestamp() && getLocalDeletionTime() == cell.getLocalDeletionTime() && name().equals(cell.name());
-}
-
 @Override
 public void updateDigest(MessageDigest digest)
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/69ffd1fa/src/java/org/apache/cassandra/db/BufferExpiringCell.java

[2/3] cassandra git commit: Cleanup cell equality

2015-03-18 Thread benedict
Cleanup cell equality

patch by benedict; reviewed by sylvain for CASSANDRA-8947


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/69ffd1fa
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/69ffd1fa
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/69ffd1fa

Branch: refs/heads/trunk
Commit: 69ffd1fa01dd9a5b7118cbcaf63dd2dffc1fa508
Parents: 8284964
Author: Benedict Elliott Smith bened...@apache.org
Authored: Wed Mar 18 11:00:29 2015 +
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Wed Mar 18 11:00:29 2015 +

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/db/AbstractCell.java   |  3 +-
 .../apache/cassandra/db/BufferCounterCell.java  |  7 +---
 .../apache/cassandra/db/BufferDeletedCell.java  |  5 ---
 .../apache/cassandra/db/BufferExpiringCell.java | 13 ++
 .../apache/cassandra/db/NativeCounterCell.java  |  8 +---
 .../apache/cassandra/db/NativeDeletedCell.java  |  6 ---
 .../apache/cassandra/db/NativeExpiringCell.java | 13 ++
 test/unit/org/apache/cassandra/db/CellTest.java | 44 
 9 files changed, 58 insertions(+), 42 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/69ffd1fa/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 68df77e..2af8df6 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.4
+ * Cleanup cell equality (CASSANDRA-8947)
  * Introduce intra-cluster message coalescing (CASSANDRA-8692)
  * DatabaseDescriptor throws NPE when rpc_interface is used (CASSANDRA-8839)
  * Don't check if an sstable is live for offline compactions (CASSANDRA-8841)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/69ffd1fa/src/java/org/apache/cassandra/db/AbstractCell.java
--
diff --git a/src/java/org/apache/cassandra/db/AbstractCell.java 
b/src/java/org/apache/cassandra/db/AbstractCell.java
index f27871f..37d483f 100644
--- a/src/java/org/apache/cassandra/db/AbstractCell.java
+++ b/src/java/org/apache/cassandra/db/AbstractCell.java
@@ -136,7 +136,8 @@ public abstract class AbstractCell implements Cell
 
 public boolean equals(Cell cell)
 {
-return timestamp() == cell.timestamp() && name().equals(cell.name()) && value().equals(cell.value());
+return timestamp() == cell.timestamp() && name().equals(cell.name()) && value().equals(cell.value())
+       && serializationFlags() == cell.serializationFlags();
 }
 
 public int hashCode()

http://git-wip-us.apache.org/repos/asf/cassandra/blob/69ffd1fa/src/java/org/apache/cassandra/db/BufferCounterCell.java
--
diff --git a/src/java/org/apache/cassandra/db/BufferCounterCell.java 
b/src/java/org/apache/cassandra/db/BufferCounterCell.java
index bdd97a7..827182a 100644
--- a/src/java/org/apache/cassandra/db/BufferCounterCell.java
+++ b/src/java/org/apache/cassandra/db/BufferCounterCell.java
@@ -171,11 +171,6 @@ public class BufferCounterCell extends BufferCell 
implements CounterCell
 @Override
 public boolean equals(Cell cell)
 {
-return cell instanceof CounterCell && equals((CounterCell) cell);
-}
-
-public boolean equals(CounterCell cell)
-{
-return super.equals(cell) && timestampOfLastDelete == cell.timestampOfLastDelete();
+return super.equals(cell) && timestampOfLastDelete == ((CounterCell) cell).timestampOfLastDelete();
 }
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/69ffd1fa/src/java/org/apache/cassandra/db/BufferDeletedCell.java
--
diff --git a/src/java/org/apache/cassandra/db/BufferDeletedCell.java 
b/src/java/org/apache/cassandra/db/BufferDeletedCell.java
index bcc170f..a38f322 100644
--- a/src/java/org/apache/cassandra/db/BufferDeletedCell.java
+++ b/src/java/org/apache/cassandra/db/BufferDeletedCell.java
@@ -107,11 +107,6 @@ public class BufferDeletedCell extends BufferCell 
implements DeletedCell
 throw new MarshalException("The local deletion time should not be negative");
 }
 
-public boolean equals(Cell cell)
-{
-return timestamp() == cell.timestamp() && getLocalDeletionTime() == cell.getLocalDeletionTime() && name().equals(cell.name());
-}
-
 @Override
 public void updateDigest(MessageDigest digest)
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/69ffd1fa/src/java/org/apache/cassandra/db/BufferExpiringCell.java
--
diff --git a/src/java/org/apache/cassandra/db/BufferExpiringCell.java 

[3/3] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-03-18 Thread benedict
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c2dc31c1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c2dc31c1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c2dc31c1

Branch: refs/heads/trunk
Commit: c2dc31c1f0f6b7564ef955b3815d9d89bdc58df2
Parents: 7bef6f9 69ffd1f
Author: Benedict Elliott Smith bened...@apache.org
Authored: Wed Mar 18 11:00:59 2015 +
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Wed Mar 18 11:00:59 2015 +

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/db/AbstractCell.java   |  3 +-
 .../apache/cassandra/db/BufferCounterCell.java  |  7 +---
 .../apache/cassandra/db/BufferDeletedCell.java  |  5 ---
 .../apache/cassandra/db/BufferExpiringCell.java | 13 ++
 .../apache/cassandra/db/NativeCounterCell.java  |  8 +---
 .../apache/cassandra/db/NativeDeletedCell.java  |  6 ---
 .../apache/cassandra/db/NativeExpiringCell.java | 13 ++
 test/unit/org/apache/cassandra/db/CellTest.java | 44 
 9 files changed, 58 insertions(+), 42 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c2dc31c1/CHANGES.txt
--
diff --cc CHANGES.txt
index ae98f56,2af8df6..997bf04
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,78 -1,5 +1,79 @@@
 +3.0
 + * Partition intra-cluster message streams by size, not type (CASSANDRA-8789)
 + * Add nodetool command to validate all sstables in a node (CASSANDRA-5791)
 + * Add WriteFailureException to native protocol, notify coordinator of
 +   write failures (CASSANDRA-8592)
 + * Convert SequentialWriter to nio (CASSANDRA-8709)
 + * Add role based access control (CASSANDRA-7653, 8650, 7216, 8760, 8849, 
8761, 8850)
 + * Record client ip address in tracing sessions (CASSANDRA-8162)
 + * Indicate partition key columns in response metadata for prepared
 +   statements (CASSANDRA-7660)
 + * Merge UUIDType and TimeUUIDType parse logic (CASSANDRA-8759)
 + * Avoid memory allocation when searching index summary (CASSANDRA-8793)
 + * Optimise (Time)?UUIDType Comparisons (CASSANDRA-8730)
 + * Make CRC32Ex into a separate maven dependency (CASSANDRA-8836)
 + * Use preloaded jemalloc w/ Unsafe (CASSANDRA-8714)
 + * Avoid accessing partitioner through StorageProxy (CASSANDRA-8244, 8268)
 + * Upgrade Metrics library and remove deprecated metrics (CASSANDRA-5657)
 + * Serializing Row cache alternative, fully off heap (CASSANDRA-7438)
 + * Duplicate rows returned when in clause has repeated values (CASSANDRA-6707)
 + * Make CassandraException unchecked, extend RuntimeException (CASSANDRA-8560)
 + * Support direct buffer decompression for reads (CASSANDRA-8464)
 + * DirectByteBuffer compatible LZ4 methods (CASSANDRA-7039)
 + * Group sstables for anticompaction correctly (CASSANDRA-8578)
 + * Add ReadFailureException to native protocol, respond
 +   immediately when replicas encounter errors while handling
 +   a read request (CASSANDRA-7886)
 + * Switch CommitLogSegment from RandomAccessFile to nio (CASSANDRA-8308)
 + * Allow mixing token and partition key restrictions (CASSANDRA-7016)
 + * Support index key/value entries on map collections (CASSANDRA-8473)
 + * Modernize schema tables (CASSANDRA-8261)
 + * Support for user-defined aggregation functions (CASSANDRA-8053)
 + * Fix NPE in SelectStatement with empty IN values (CASSANDRA-8419)
 + * Refactor SelectStatement, return IN results in natural order instead
 +   of IN value list order and ignore duplicate values in partition key IN 
restrictions (CASSANDRA-7981)
 + * Support UDTs, tuples, and collections in user-defined
 +   functions (CASSANDRA-7563)
 + * Fix aggregate fn results on empty selection, result column name,
 +   and cqlsh parsing (CASSANDRA-8229)
 + * Mark sstables as repaired after full repair (CASSANDRA-7586)
 + * Extend Descriptor to include a format value and refactor reader/writer
 +   APIs (CASSANDRA-7443)
 + * Integrate JMH for microbenchmarks (CASSANDRA-8151)
 + * Keep sstable levels when bootstrapping (CASSANDRA-7460)
 + * Add Sigar library and perform basic OS settings check on startup 
(CASSANDRA-7838)
 + * Support for aggregation functions (CASSANDRA-4914)
 + * Remove cassandra-cli (CASSANDRA-7920)
 + * Accept dollar quoted strings in CQL (CASSANDRA-7769)
 + * Make assassinate a first class command (CASSANDRA-7935)
 + * Support IN clause on any partition key column (CASSANDRA-7855)
 + * Support IN clause on any clustering column (CASSANDRA-4762)
 + * Improve compaction logging (CASSANDRA-7818)
 + * Remove YamlFileNetworkTopologySnitch (CASSANDRA-7917)
 + * Do anticompaction in groups (CASSANDRA-6851)
 + * Support user-defined functions (CASSANDRA-7395, 7526, 7562, 7740, 

[jira] [Updated] (CASSANDRA-8981) IndexSummaryManagerTest.testCompactionsRace intermittently timing out on trunk

2015-03-18 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-8981:

Attachment: 8981.txt

This looks to be a manifestation of CASSANDRA-8805 along with a slightly buggy 
test. The background executor was still running, so redistribution of summaries 
was happening, and cancelling compactions doesn't affect them due to 
CASSANDRA-8805, so the test was timing out. Attached a patch to fix the test to 
avoid this (and to actually increase the race window, as after this change it 
terminates very quickly).

 IndexSummaryManagerTest.testCompactionsRace intermittently timing out on trunk
 --

 Key: CASSANDRA-8981
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8981
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Joshua McKenzie
Assignee: Benedict
Priority: Minor
 Fix For: 3.0

 Attachments: 8981.txt


 Keep running it repeatedly w/showoutput=yes in build.xml on junit and 
 you'll see it time out with:
 {noformat}
 [junit] WARN  17:02:56 Unable to cancel in-progress compactions for 
 StandardRace.  Perhaps there is an unusually large row in progress somewhere, 
 or the system is simply overloaded.
 [junit] WARN  17:02:56 Unable to cancel in-progress compactions for 
 StandardRace.  Perhaps there is an unusually large row in progress somewhere, 
 or the system is simply overloaded.
 [junit] WARN  17:02:57 Unable to cancel in-progress compactions for 
 StandardRace.  Perhaps there is an unusually large row in progress somewhere, 
 or the system is simply overloaded.
 [junit] WARN  17:02:57 Unable to cancel in-progress compactions for 
 StandardRace.  Perhaps there is an unusually large row in progress somewhere, 
 or the system is simply overloaded.
 {noformat}
 I originally thought this was a Windows specific problem (CASSANDRA-8962) but 
 can reproduce on linux by just repeatedly running the test.





[jira] [Commented] (CASSANDRA-8851) Uncaught exception on thread Thread[SharedPool-Worker-16,5,main] after upgrade to 2.1.3

2015-03-18 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14366934#comment-14366934
 ] 

Benedict commented on CASSANDRA-8851:
-

[~Hachmann]: it doesn't look like this log goes back far enough? Do you have an 
earlier log? It also looks like your server is suffering severe GC pauses. 
Would it be possible to get a heap dump uploaded somewhere, and also the output 
of running {{find .}} in the root of your data directory, so we can rule out 
potential causes ASAP (we're planning to get to the bottom of this before 
releasing 2.1.4)?

 Uncaught exception on thread Thread[SharedPool-Worker-16,5,main] after 
 upgrade to 2.1.3
 ---

 Key: CASSANDRA-8851
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8851
 Project: Cassandra
  Issue Type: Bug
 Environment: ubuntu 
Reporter: Tobias Schlottke
Assignee: Benedict
Priority: Critical
 Fix For: 2.1.4

 Attachments: schema.txt, system.log.gz


 Hi there,
 after upgrading to 2.1.3 we've got the following error every few seconds:
 {code}
 WARN  [SharedPool-Worker-16] 2015-02-23 10:20:36,392 
 AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
 Thread[SharedPool-Worker-16,5,main]: {}
 java.lang.AssertionError: null
   at org.apache.cassandra.io.util.Memory.size(Memory.java:307) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.utils.obs.OffHeapBitSet.capacity(OffHeapBitSet.java:61) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at org.apache.cassandra.utils.BloomFilter.indexes(BloomFilter.java:74) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.utils.BloomFilter.isPresent(BloomFilter.java:98) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:1366)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:1350)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
  org.apache.cassandra.db.columniterator.SSTableSliceIterator.<init>(SSTableSliceIterator.java:41)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:185)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:273)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:62)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1915)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1748)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:342) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:57)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:47) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:62) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_45]
   at 
 org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
 [apache-cassandra-2.1.3.jar:2.1.3]
   at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
 {code}
 This seems to crash the compactions and pushes up server load and piles up 
 compactions.
 Any idea / possible workaround?
 Best,
 Tobias





[jira] [Commented] (CASSANDRA-8584) Add strerror output on failed trySkipCache calls

2015-03-18 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14366954#comment-14366954
 ] 

Benedict commented on CASSANDRA-8584:
-

Sorry for the delay, missed this in my work queue. LGTM. One nit to consider on 
commit is whether we should increase the throttling window - an error doing 
something like this probably doesn't need to be reported every second, and 
probably not even every minute. Probably 10m+ is more like it IMO.

 Add strerror output on failed trySkipCache calls
 

 Key: CASSANDRA-8584
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8584
 Project: Cassandra
  Issue Type: Improvement
Reporter: Joshua McKenzie
Assignee: Joshua McKenzie
Priority: Trivial
 Fix For: 2.1.4

 Attachments: 8584_v1.txt, nospamlogger.txt


 Since trySkipCache returns an errno, rather than returning -1 and setting 
 errno like our other CLibrary calls, it's thread-safe and we could print out 
 more helpful information if we fail to prompt the kernel to skip the page 
 cache. That system call should always succeed unless we have an invalid fd, 
 as it's free to ignore us.
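
A standalone sketch combining the two suggestions in this thread: include strerror-style text for a non-zero errno from a trySkipCache-like call, and throttle the warning to roughly one report per long window. The wrapper name, the errno-to-text table, and the 10-minute window are assumptions taken from the comment above, not the attached patch:

{code}
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Report a failed page-cache-skip call with its errno text, at most once per throttle window.
public class ThrottledErrnoReporter
{
    private static final long THROTTLE_NANOS = TimeUnit.MINUTES.toNanos(10);
    private final AtomicLong lastReportNanos = new AtomicLong(System.nanoTime() - THROTTLE_NANOS);

    /** @param errno value returned by the native call; 0 means success. */
    public void maybeReport(String path, int errno)
    {
        if (errno == 0)
            return;
        long now = System.nanoTime();
        long last = lastReportNanos.get();
        // Only one caller per window wins the CAS, so a persistent failure cannot spam the log.
        if (now - last >= THROTTLE_NANOS && lastReportNanos.compareAndSet(last, now))
            System.err.printf("Failed to skip page cache for %s: errno %d (%s)%n",
                              path, errno, describe(errno));
    }

    /** Minimal errno-to-text mapping for illustration; a real patch would use strerror(3). */
    private static String describe(int errno)
    {
        switch (errno)
        {
            case 9:  return "EBADF: bad file descriptor";
            case 22: return "EINVAL: invalid argument";
            default: return "unknown error " + errno;
        }
    }
}
{code}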





[2/3] cassandra git commit: SSTableScanner enforces its bounds

2015-03-18 Thread benedict
SSTableScanner enforces its bounds

patch by benedict; reviewed by sylvain for CASSANDRA-8946


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/572ef50d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/572ef50d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/572ef50d

Branch: refs/heads/trunk
Commit: 572ef50dd11fcb501ebe46f1dde6656e42cb96bb
Parents: 69ffd1f
Author: Benedict Elliott Smith bened...@apache.org
Authored: Wed Mar 18 11:02:35 2015 +
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Wed Mar 18 11:02:35 2015 +

--
 CHANGES.txt |   1 +
 .../apache/cassandra/dht/AbstractBounds.java|  74 ++
 src/java/org/apache/cassandra/dht/Bounds.java   |  10 ++
 .../apache/cassandra/dht/ExcludingBounds.java   |  10 ++
 .../cassandra/dht/IncludingExcludingBounds.java |  10 ++
 src/java/org/apache/cassandra/dht/Range.java|  10 ++
 .../cassandra/io/sstable/SSTableScanner.java|  42 --
 .../io/sstable/SSTableScannerTest.java  | 143 ---
 8 files changed, 270 insertions(+), 30 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/572ef50d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 2af8df6..36bdb39 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.4
+ * SSTableScanner enforces its bounds (CASSANDRA-8946)
  * Cleanup cell equality (CASSANDRA-8947)
  * Introduce intra-cluster message coalescing (CASSANDRA-8692)
  * DatabaseDescriptor throws NPE when rpc_interface is used (CASSANDRA-8839)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/572ef50d/src/java/org/apache/cassandra/dht/AbstractBounds.java
--
diff --git a/src/java/org/apache/cassandra/dht/AbstractBounds.java 
b/src/java/org/apache/cassandra/dht/AbstractBounds.java
index 90eb6b5..6d2ee43 100644
--- a/src/java/org/apache/cassandra/dht/AbstractBounds.java
+++ b/src/java/org/apache/cassandra/dht/AbstractBounds.java
@@ -68,6 +68,8 @@ public abstract class AbstractBounds<T extends RingPosition<T>> implements Serializable
      * instead.
      */
     public abstract Pair<AbstractBounds<T>, AbstractBounds<T>> split(T position);
+    public abstract boolean inclusiveLeft();
+    public abstract boolean inclusiveRight();
 
     @Override
     public int hashCode()
@@ -193,4 +195,76 @@ public abstract class AbstractBounds<T extends RingPosition<T>> implements Serializable
             return size;
         }
     }
+
+    public static <T extends RingPosition<T>> AbstractBounds<T> bounds(Boundary<T> min, Boundary<T> max)
+    {
+        return bounds(min.boundary, min.inclusive, max.boundary, max.inclusive);
+    }
+    public static <T extends RingPosition<T>> AbstractBounds<T> bounds(T min, boolean inclusiveMin, T max, boolean inclusiveMax)
+    {
+        if (inclusiveMin && inclusiveMax)
+            return new Bounds<T>(min, max);
+        else if (inclusiveMax)
+            return new Range<T>(min, max);
+        else if (inclusiveMin)
+            return new IncludingExcludingBounds<T>(min, max);
+        else
+            return new ExcludingBounds<T>(min, max);
+    }
+
+    // represents one side of a bounds (which side is not encoded)
+    public static class Boundary<T extends RingPosition<T>>
+    {
+        public final T boundary;
+        public final boolean inclusive;
+        public Boundary(T boundary, boolean inclusive)
+        {
+            this.boundary = boundary;
+            this.inclusive = inclusive;
+        }
+    }
+
+    public Boundary<T> leftBoundary()
+    {
+        return new Boundary<>(left, inclusiveLeft());
+    }
+
+    public Boundary<T> rightBoundary()
+    {
+        return new Boundary<>(right, inclusiveRight());
+    }
+
+    public static <T extends RingPosition<T>> boolean isEmpty(Boundary<T> left, Boundary<T> right)
+    {
+        int c = left.boundary.compareTo(right.boundary);
+        return c > 0 || (c == 0 && !(left.inclusive && right.inclusive));
+    }
+
+    public static <T extends RingPosition<T>> Boundary<T> minRight(Boundary<T> right1, T right2, boolean isInclusiveRight2)
+    {
+        return minRight(right1, new Boundary<T>(right2, isInclusiveRight2));
+    }
+
+    public static <T extends RingPosition<T>> Boundary<T> minRight(Boundary<T> right1, Boundary<T> right2)
+    {
+        int c = right1.boundary.compareTo(right2.boundary);
+        if (c != 0)
+            return c < 0 ? right1 : right2;
+        // return the exclusive version, if either
+        return right2.inclusive ? right1 : right2;
+    }
+
+    public static <T extends RingPosition<T>> Boundary<T> maxLeft(Boundary<T> left1, T left2, boolean isInclusiveLeft2)
+    {
+        return maxLeft(left1, new 

[jira] [Updated] (CASSANDRA-5791) A nodetool command to validate all sstables in a node

2015-03-18 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-5791:

Fix Version/s: 3.0

 A nodetool command to validate all sstables in a node
 -

 Key: CASSANDRA-5791
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5791
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: sankalp kohli
Assignee: Jeff Jirsa
Priority: Minor
 Fix For: 3.0

 Attachments: cassandra-5791-patch-3.diff, cassandra-5791.patch-2


 Currently there is no nodetool command to validate all sstables on disk. The 
 only way to do this is to run a repair and see if it succeeds. But we cannot 
 repair the system keyspace. 
 Also we can run upgradesstables, but that rewrites all the sstables. 
 This command should check the hash of all sstables and return whether all 
 data is readable or not. This should NOT care about consistency. 
 The compressed sstables do not have a hash, so not sure how it will work there.
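
For illustration, the core of such a check is just re-reading each data file and comparing a running checksum against the stored digest. A minimal sketch, assuming an Adler32 digest file that stores the expected value as a decimal string; names are invented and this is not the committed Verifier:
{code}
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.zip.Adler32;

final class DigestCheck
{
    // true if the Adler32 of the data file matches the value stored in the digest file
    static boolean matches(String dataPath, String digestPath) throws IOException
    {
        long expected = Long.parseLong(
                new String(Files.readAllBytes(Paths.get(digestPath)), StandardCharsets.UTF_8).trim());

        Adler32 checksum = new Adler32();
        byte[] buffer = new byte[64 * 1024];
        try (BufferedInputStream in = new BufferedInputStream(new FileInputStream(dataPath)))
        {
            int read;
            while ((read = in.read(buffer)) != -1)
                checksum.update(buffer, 0, read);
        }
        return checksum.getValue() == expected;
    }
}
{code}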



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: fix bad merge

2015-03-18 Thread benedict
Repository: cassandra
Updated Branches:
  refs/heads/trunk 144644bbf - 7bef6f93a


fix bad merge


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7bef6f93
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7bef6f93
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7bef6f93

Branch: refs/heads/trunk
Commit: 7bef6f93aea3a6897b53e909688f5948c018ccdf
Parents: 144644b
Author: Benedict Elliott Smith bened...@apache.org
Authored: Wed Mar 18 10:54:05 2015 +
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Wed Mar 18 10:54:05 2015 +

--
 src/java/org/apache/cassandra/io/util/DataIntegrityMetadata.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7bef6f93/src/java/org/apache/cassandra/io/util/DataIntegrityMetadata.java
--
diff --git a/src/java/org/apache/cassandra/io/util/DataIntegrityMetadata.java 
b/src/java/org/apache/cassandra/io/util/DataIntegrityMetadata.java
index 4cf3517..2f9550e 100644
--- a/src/java/org/apache/cassandra/io/util/DataIntegrityMetadata.java
+++ b/src/java/org/apache/cassandra/io/util/DataIntegrityMetadata.java
@@ -104,7 +104,7 @@ public class DataIntegrityMetadata
 public FileDigestValidator(Descriptor descriptor) throws IOException
 {
 this.descriptor = descriptor;
-checksum = descriptor.version.hasAllAdlerChecksums() ? new 
Adler32() : new PureJavaCrc32();
+checksum = descriptor.version.hasAllAdlerChecksums() ? new 
Adler32() : CRC32Factory.instance.create();
 digestReader = RandomAccessReader.open(new 
File(descriptor.filenameFor(Component.DIGEST)));
 dataReader = RandomAccessReader.open(new 
File(descriptor.filenameFor(Component.DATA)));
 try



[jira] [Updated] (CASSANDRA-8559) OOM caused by large tombstone warning.

2015-03-18 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-8559:
---
Reviewer: Sam Tunnicliffe

 OOM caused by large tombstone warning.
 --

 Key: CASSANDRA-8559
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8559
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 2.0.11 / 2.1
Reporter: Dominic Letz
Assignee: Aleksey Yeschenko
  Labels: tombstone
 Fix For: 2.0.14

 Attachments: 8559.txt, Selection_048.png, cassandra-2.0.11-8559.txt, 
 stacktrace.log


 When running with a high number of tombstones, the error message generation from 
 CASSANDRA-6117 can lead to an out-of-memory situation with the default setting.
 Attached is a heap dump viewed in VisualVM showing how this construct created two 
 777 MB strings to print the error message for a read query and then crashed 
 OOM.
 {code}
 if (respectTombstoneThresholds() && columnCounter.ignored() > DatabaseDescriptor.getTombstoneWarnThreshold())
 {
     StringBuilder sb = new StringBuilder();
     CellNameType type = container.metadata().comparator;
     for (ColumnSlice sl : slices)
     {
         assert sl != null;
         sb.append('[');
         sb.append(type.getString(sl.start));
         sb.append('-');
         sb.append(type.getString(sl.finish));
         sb.append(']');
     }
     logger.warn("Read {} live and {} tombstoned cells in {}.{} (see tombstone_warn_threshold). {} columns was requested, slices={}, delInfo={}",
                 columnCounter.live(), columnCounter.ignored(), container.metadata().ksName, container.metadata().cfName, count, sb, container.deletionInfo());
 }
 {code}
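
One way to avoid the unbounded allocation above is to cap the size of the slice description before logging; an illustrative sketch only (the limit and helper name are invented, and this is not necessarily the attached fix):
{code}
// Cap the human-readable slice listing so a pathological query cannot allocate
// hundreds of megabytes just to build a warning message.
private static final int MAX_SLICE_DESCRIPTION_CHARS = 1024;

private static String describeSlices(ColumnSlice[] slices, CellNameType type)
{
    StringBuilder sb = new StringBuilder();
    for (ColumnSlice sl : slices)
    {
        if (sb.length() >= MAX_SLICE_DESCRIPTION_CHARS)
        {
            sb.append("... (truncated)");
            break;
        }
        sb.append('[').append(type.getString(sl.start))
          .append('-').append(type.getString(sl.finish)).append(']');
    }
    return sb.toString();
}
{code}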



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-3852) use LIFO queueing policy when queue size exceeds thresholds

2015-03-18 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14366872#comment-14366872
 ] 

Benedict commented on CASSANDRA-3852:
-

I think the main benefit here is to reduce the _number_ of high latency 
queries, and accept that those in this cohort may be significantly worse.

However, I'm not disputing that this is useful (it absolutely is): my 
concern is that this may translate into *worse* overload, as we accumulate 
queries we are likely to drop. We need to have some strategy for ensuring this 
doesn't overload the server. Perhaps tracking and limiting of in-flight query 
sizes, perhaps simply dropping if a query passes into LIFO order and we aren't 
making progress on the FIFO queue fast enough? But in either case we probably 
want to report to the coordinator that it won't be receiving a response so that 
it too can drop its state and report a failure to the client.
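
To make the mechanism concrete, a toy sketch of the adaptive FIFO/LIFO queue described in this ticket (the age threshold, names, and locking are assumptions, and it deliberately ignores the load-shedding concern raised above):
{code}
import java.util.ArrayDeque;
import java.util.Deque;

// Serve FIFO while the oldest queued task is fresh; switch to LIFO once the head
// of the queue is older than the threshold, so recent requests can still complete
// inside their timeout instead of every request becoming slow.
final class AdaptiveLifoQueue<T>
{
    private static final long AGE_THRESHOLD_NANOS = 50_000_000L; // 50ms, an assumed value

    private static final class Entry<T>
    {
        final T task;
        final long enqueuedAt = System.nanoTime();
        Entry(T task) { this.task = task; }
    }

    private final Deque<Entry<T>> deque = new ArrayDeque<>();

    synchronized void add(T task)
    {
        deque.addLast(new Entry<>(task));
    }

    synchronized T poll()
    {
        Entry<T> head = deque.peekFirst();
        if (head == null)
            return null;
        boolean backlogged = System.nanoTime() - head.enqueuedAt > AGE_THRESHOLD_NANOS;
        return (backlogged ? deque.pollLast() : deque.pollFirst()).task;
    }
}
{code}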

 use LIFO queueing policy when queue size exceeds thresholds
 ---

 Key: CASSANDRA-3852
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3852
 Project: Cassandra
  Issue Type: Improvement
Reporter: Peter Schuller
Assignee: Benedict
  Labels: performance
 Fix For: 3.1


 A strict FIFO policy for queueing (between stages) is detrimental to latency 
 and forward progress. Whenever a node is saturated beyond incoming request 
 rate, *all* requests become slow. If it is consistently saturated, you start 
 effectively timing out on *all* requests.
 A much better strategy from the point of view of latency is to serve a subset 
 of requests quickly, letting some time out, rather than letting all either 
 time out or be slow.
 Care must be taken such that:
 * We still guarantee that requests are processed reasonably timely (we 
 couldn't go strict LIFO for example as that would result in requests getting 
 stuck potentially forever on a loaded node).
 * Maybe, depending on the previous point's solution, ensure that some 
 requests bypass the policy and get prioritized (e.g., schema migrations, or 
 anything internal to a node).
 A possible implementation is to go LIFO whenever there are requests in the 
 queue that are older than N milliseconds (or a certain queue size, etc).
 Benefits:
 * All cases where the client is directly, or is indirectly affecting through 
 other layers, a system which has limited concurrency (e.g., thread pool size 
 of X to serve some incoming request rate), it is *much* better for a few 
 requests to time out while most are serviced quickly, than for all requests 
 to become slow, as it doesn't explode concurrency. Think any random 
 non-super-advanced php app, ruby web app, java servlet based app, etc. 
 Essentially, it optimizes very heavily for improved average latencies.
 * Systems with strict p95/p99/p999 requirements on latencies should greatly 
 benefit from such a policy. For example, suppose you have a system at 85% of 
 capacity, and it takes a write spike (or has a hiccup like GC pause, blocking 
 on a commit log write, etc). Suppose the hiccup racks up 500 ms worth of 
 requests. At 15% margin at steady state, that takes 500ms * 100/15 = 3.2 
 seconds to recover. Instead of *all* requests for an entire 3.2 second window 
 being slow, we'd serve requests quickly for 2.7 of those seconds, with the 
 incoming requests during that 500 ms interval being the ones primarily 
 affected. The flip side though is that once you're at the point where more 
 than N percent of requests end up having to wait for others to take LIFO 
 priority, the p(100-N) latencies will actually be *worse* than without this 
 change (but at this point you have to consider what the root reason for those 
 pXX requirements are).
 * In the case of complete saturation, it allows forward progress. Suppose 
 you're taking 25% more traffic than you are able to handle. Instead of 
 getting backed up and ending up essentially timing out *every single 
 request*, you will succeed in processing up to 75% of them (I say up to 
 because it depends; for example on a {{QUORUM}} request you need at least two 
 of the requests from the co-ordinator to succeed so the percentage is brought 
 down) and allowing clients to make forward progress and get work done, rather 
 than being stuck.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-5791) A nodetool command to validate all sstables in a node

2015-03-18 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14366959#comment-14366959
 ] 

Benedict commented on CASSANDRA-5791:
-

Looks like this referenced PureJavaCRC32, which is no longer found on trunk. 
Could you have a look to confirm that the patch otherwise applied cleanly?

 A nodetool command to validate all sstables in a node
 -

 Key: CASSANDRA-5791
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5791
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: sankalp kohli
Assignee: Jeff Jirsa
Priority: Minor
 Fix For: 3.0

 Attachments: cassandra-5791-patch-3.diff, cassandra-5791.patch-2


 Currently there is no nodetool command to validate all sstables on disk. The 
 only way to do this is to run a repair and see if it succeeds. But we cannot 
 repair the system keyspace. 
 Also we can run upgradesstables, but that rewrites all the sstables. 
 This command should check the hash of all sstables and return whether all 
 data is readable or not. This should NOT care about consistency. 
 The compressed sstables do not have a hash, so not sure how it will work there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8559) OOM caused by large tombstone warning.

2015-03-18 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14366881#comment-14366881
 ] 

Sam Tunnicliffe commented on CASSANDRA-8559:


+1 LGTM

 OOM caused by large tombstone warning.
 --

 Key: CASSANDRA-8559
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8559
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 2.0.11 / 2.1
Reporter: Dominic Letz
Assignee: Aleksey Yeschenko
  Labels: tombstone
 Fix For: 2.0.14

 Attachments: 8559.txt, Selection_048.png, cassandra-2.0.11-8559.txt, 
 stacktrace.log


 When running with a high number of tombstones, the error message generation from 
 CASSANDRA-6117 can lead to an out-of-memory situation with the default setting.
 Attached is a heap dump viewed in VisualVM showing how this construct created two 
 777 MB strings to print the error message for a read query and then crashed 
 OOM.
 {code}
 if (respectTombstoneThresholds() && columnCounter.ignored() > DatabaseDescriptor.getTombstoneWarnThreshold())
 {
     StringBuilder sb = new StringBuilder();
     CellNameType type = container.metadata().comparator;
     for (ColumnSlice sl : slices)
     {
         assert sl != null;
         sb.append('[');
         sb.append(type.getString(sl.start));
         sb.append('-');
         sb.append(type.getString(sl.finish));
         sb.append(']');
     }
     logger.warn("Read {} live and {} tombstoned cells in {}.{} (see tombstone_warn_threshold). {} columns was requested, slices={}, delInfo={}",
                 columnCounter.live(), columnCounter.ignored(), container.metadata().ksName, container.metadata().cfName, count, sb, container.deletionInfo());
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7212) Allow to switch user within CQLSH session

2015-03-18 Thread Sachin Janani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sachin Janani updated CASSANDRA-7212:
-
Attachment: 7212_1.patch

Patch for CASSANDRA-7212.

 Allow to switch user within CQLSH session
 -

 Key: CASSANDRA-7212
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7212
 Project: Cassandra
  Issue Type: Improvement
  Components: API
 Environment: [cqlsh 4.1.1 | Cassandra 2.0.7.31 | CQL spec 3.1.1 | 
 Thrift protocol 19.39.0]
Reporter: Jose Martinez Poblete
  Labels: cqlsh
 Attachments: 7212_1.patch


 Once a user is logged into CQLSH, it is not possible to switch to another 
 user without exiting and relaunching.
 This is a feature offered in postgres and probably other databases:
 http://secure.encivasolutions.com/knowledgebase.php?action=displayarticleid=1126
  
 Perhaps this could be implemented on CQLSH as part of the USE directive:
 USE Keyspace [USER] [password] 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[3/3] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-03-18 Thread benedict
Merge branch 'cassandra-2.1' into trunk

Conflicts:
src/java/org/apache/cassandra/io/sstable/format/big/BigTableScanner.java
test/unit/org/apache/cassandra/io/sstable/SSTableScannerTest.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7aefd914
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7aefd914
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7aefd914

Branch: refs/heads/trunk
Commit: 7aefd914c79a7a5d652abcc1531fec89d80c0adc
Parents: c2dc31c 572ef50
Author: Benedict Elliott Smith bened...@apache.org
Authored: Wed Mar 18 11:04:48 2015 +
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Wed Mar 18 11:04:48 2015 +

--
 CHANGES.txt |   1 +
 .../apache/cassandra/dht/AbstractBounds.java|  74 ++
 src/java/org/apache/cassandra/dht/Bounds.java   |  10 ++
 .../apache/cassandra/dht/ExcludingBounds.java   |  10 ++
 .../cassandra/dht/IncludingExcludingBounds.java |  10 ++
 src/java/org/apache/cassandra/dht/Range.java|  10 ++
 .../io/sstable/format/big/BigTableScanner.java  |  42 --
 .../io/sstable/SSTableScannerTest.java  | 143 ---
 8 files changed, 270 insertions(+), 30 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7aefd914/CHANGES.txt
--
diff --cc CHANGES.txt
index 997bf04,36bdb39..62b1079
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,78 -1,5 +1,79 @@@
 +3.0
 + * Partition intra-cluster message streams by size, not type (CASSANDRA-8789)
 + * Add nodetool command to validate all sstables in a node (CASSANDRA-5791)
 + * Add WriteFailureException to native protocol, notify coordinator of
 +   write failures (CASSANDRA-8592)
 + * Convert SequentialWriter to nio (CASSANDRA-8709)
 + * Add role based access control (CASSANDRA-7653, 8650, 7216, 8760, 8849, 
8761, 8850)
 + * Record client ip address in tracing sessions (CASSANDRA-8162)
 + * Indicate partition key columns in response metadata for prepared
 +   statements (CASSANDRA-7660)
 + * Merge UUIDType and TimeUUIDType parse logic (CASSANDRA-8759)
 + * Avoid memory allocation when searching index summary (CASSANDRA-8793)
 + * Optimise (Time)?UUIDType Comparisons (CASSANDRA-8730)
 + * Make CRC32Ex into a separate maven dependency (CASSANDRA-8836)
 + * Use preloaded jemalloc w/ Unsafe (CASSANDRA-8714)
 + * Avoid accessing partitioner through StorageProxy (CASSANDRA-8244, 8268)
 + * Upgrade Metrics library and remove depricated metrics (CASSANDRA-5657)
 + * Serializing Row cache alternative, fully off heap (CASSANDRA-7438)
 + * Duplicate rows returned when in clause has repeated values (CASSANDRA-6707)
 + * Make CassandraException unchecked, extend RuntimeException (CASSANDRA-8560)
 + * Support direct buffer decompression for reads (CASSANDRA-8464)
 + * DirectByteBuffer compatible LZ4 methods (CASSANDRA-7039)
 + * Group sstables for anticompaction correctly (CASSANDRA-8578)
 + * Add ReadFailureException to native protocol, respond
 +   immediately when replicas encounter errors while handling
 +   a read request (CASSANDRA-7886)
 + * Switch CommitLogSegment from RandomAccessFile to nio (CASSANDRA-8308)
 + * Allow mixing token and partition key restrictions (CASSANDRA-7016)
 + * Support index key/value entries on map collections (CASSANDRA-8473)
 + * Modernize schema tables (CASSANDRA-8261)
 + * Support for user-defined aggregation functions (CASSANDRA-8053)
 + * Fix NPE in SelectStatement with empty IN values (CASSANDRA-8419)
 + * Refactor SelectStatement, return IN results in natural order instead
 +   of IN value list order and ignore duplicate values in partition key IN 
restrictions (CASSANDRA-7981)
 + * Support UDTs, tuples, and collections in user-defined
 +   functions (CASSANDRA-7563)
 + * Fix aggregate fn results on empty selection, result column name,
 +   and cqlsh parsing (CASSANDRA-8229)
 + * Mark sstables as repaired after full repair (CASSANDRA-7586)
 + * Extend Descriptor to include a format value and refactor reader/writer
 +   APIs (CASSANDRA-7443)
 + * Integrate JMH for microbenchmarks (CASSANDRA-8151)
 + * Keep sstable levels when bootstrapping (CASSANDRA-7460)
 + * Add Sigar library and perform basic OS settings check on startup 
(CASSANDRA-7838)
 + * Support for aggregation functions (CASSANDRA-4914)
 + * Remove cassandra-cli (CASSANDRA-7920)
 + * Accept dollar quoted strings in CQL (CASSANDRA-7769)
 + * Make assassinate a first class command (CASSANDRA-7935)
 + * Support IN clause on any partition key column (CASSANDRA-7855)
 + * Support IN clause on any clustering column (CASSANDRA-4762)
 + * Improve compaction logging (CASSANDRA-7818)
 + * Remove YamlFileNetworkTopologySnitch (CASSANDRA-7917)
 + * 

[1/3] cassandra git commit: SSTableScanner enforces its bounds

2015-03-18 Thread benedict
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 69ffd1fa0 - 572ef50dd
  refs/heads/trunk c2dc31c1f - 7aefd914c


SSTableScanner enforces its bounds

patch by benedict; reviewed by sylvain for CASSANDRA-8946


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/572ef50d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/572ef50d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/572ef50d

Branch: refs/heads/cassandra-2.1
Commit: 572ef50dd11fcb501ebe46f1dde6656e42cb96bb
Parents: 69ffd1f
Author: Benedict Elliott Smith bened...@apache.org
Authored: Wed Mar 18 11:02:35 2015 +
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Wed Mar 18 11:02:35 2015 +

--
 CHANGES.txt |   1 +
 .../apache/cassandra/dht/AbstractBounds.java|  74 ++
 src/java/org/apache/cassandra/dht/Bounds.java   |  10 ++
 .../apache/cassandra/dht/ExcludingBounds.java   |  10 ++
 .../cassandra/dht/IncludingExcludingBounds.java |  10 ++
 src/java/org/apache/cassandra/dht/Range.java|  10 ++
 .../cassandra/io/sstable/SSTableScanner.java|  42 --
 .../io/sstable/SSTableScannerTest.java  | 143 ---
 8 files changed, 270 insertions(+), 30 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/572ef50d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 2af8df6..36bdb39 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.4
+ * SSTableScanner enforces its bounds (CASSANDRA-8946)
  * Cleanup cell equality (CASSANDRA-8947)
  * Introduce intra-cluster message coalescing (CASSANDRA-8692)
  * DatabaseDescriptor throws NPE when rpc_interface is used (CASSANDRA-8839)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/572ef50d/src/java/org/apache/cassandra/dht/AbstractBounds.java
--
diff --git a/src/java/org/apache/cassandra/dht/AbstractBounds.java 
b/src/java/org/apache/cassandra/dht/AbstractBounds.java
index 90eb6b5..6d2ee43 100644
--- a/src/java/org/apache/cassandra/dht/AbstractBounds.java
+++ b/src/java/org/apache/cassandra/dht/AbstractBounds.java
@@ -68,6 +68,8 @@ public abstract class AbstractBounds<T extends RingPosition<T>> implements Serializable
      * instead.
      */
     public abstract Pair<AbstractBounds<T>, AbstractBounds<T>> split(T position);
+    public abstract boolean inclusiveLeft();
+    public abstract boolean inclusiveRight();
 
     @Override
     public int hashCode()
@@ -193,4 +195,76 @@ public abstract class AbstractBounds<T extends RingPosition<T>> implements Serializable
             return size;
         }
     }
+
+    public static <T extends RingPosition<T>> AbstractBounds<T> bounds(Boundary<T> min, Boundary<T> max)
+    {
+        return bounds(min.boundary, min.inclusive, max.boundary, max.inclusive);
+    }
+    public static <T extends RingPosition<T>> AbstractBounds<T> bounds(T min, boolean inclusiveMin, T max, boolean inclusiveMax)
+    {
+        if (inclusiveMin && inclusiveMax)
+            return new Bounds<T>(min, max);
+        else if (inclusiveMax)
+            return new Range<T>(min, max);
+        else if (inclusiveMin)
+            return new IncludingExcludingBounds<T>(min, max);
+        else
+            return new ExcludingBounds<T>(min, max);
+    }
+
+    // represents one side of a bounds (which side is not encoded)
+    public static class Boundary<T extends RingPosition<T>>
+    {
+        public final T boundary;
+        public final boolean inclusive;
+        public Boundary(T boundary, boolean inclusive)
+        {
+            this.boundary = boundary;
+            this.inclusive = inclusive;
+        }
+    }
+
+    public Boundary<T> leftBoundary()
+    {
+        return new Boundary<>(left, inclusiveLeft());
+    }
+
+    public Boundary<T> rightBoundary()
+    {
+        return new Boundary<>(right, inclusiveRight());
+    }
+
+    public static <T extends RingPosition<T>> boolean isEmpty(Boundary<T> left, Boundary<T> right)
+    {
+        int c = left.boundary.compareTo(right.boundary);
+        return c > 0 || (c == 0 && !(left.inclusive && right.inclusive));
+    }
+
+    public static <T extends RingPosition<T>> Boundary<T> minRight(Boundary<T> right1, T right2, boolean isInclusiveRight2)
+    {
+        return minRight(right1, new Boundary<T>(right2, isInclusiveRight2));
+    }
+
+    public static <T extends RingPosition<T>> Boundary<T> minRight(Boundary<T> right1, Boundary<T> right2)
+    {
+        int c = right1.boundary.compareTo(right2.boundary);
+        if (c != 0)
+            return c < 0 ? right1 : right2;
+        // return the exclusive version, if either
+        return right2.inclusive ? right1 : right2;
+    }
+
+    public static <T 

[jira] [Commented] (CASSANDRA-8746) SSTableReader.cloneWithNewStart can drop too much page cache for compressed files

2015-03-18 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14366850#comment-14366850
 ] 

Marcus Eriksson commented on CASSANDRA-8746:


+1

 SSTableReader.cloneWithNewStart can drop too much page cache for compressed 
 files
 -

 Key: CASSANDRA-8746
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8746
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Trivial
 Fix For: 2.1.4






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: nodetool command to validate all sstables in a node

2015-03-18 Thread benedict
Repository: cassandra
Updated Branches:
  refs/heads/trunk da8268f15 - 21bdf8700


nodetool command to validate all sstables in a node

patch by jeff jirsa; reviewed by branimir for CASSANDRA-5791


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/21bdf870
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/21bdf870
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/21bdf870

Branch: refs/heads/trunk
Commit: 21bdf8700601f8150e8c13e0b4f71e061822c802
Parents: da8268f
Author: Jeff Jirsa j...@jeffjirsa.net
Authored: Wed Mar 18 10:10:31 2015 +
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Wed Mar 18 10:10:31 2015 +

--
 CHANGES.txt |   1 +
 bin/sstableverify   |  55 +++
 bin/sstableverify.bat   |  41 ++
 .../apache/cassandra/db/ColumnFamilyStore.java  |   5 +
 .../db/compaction/CompactionManager.java|  36 ++
 .../cassandra/db/compaction/OperationType.java  |   3 +-
 .../cassandra/db/compaction/Verifier.java   | 280 
 .../apache/cassandra/io/sstable/Component.java  |   4 +-
 .../io/util/DataIntegrityMetadata.java  |  53 +++
 .../cassandra/service/StorageService.java   |  12 +
 .../cassandra/service/StorageServiceMBean.java  |   8 +
 .../org/apache/cassandra/tools/NodeProbe.java   |  19 +-
 .../org/apache/cassandra/tools/NodeTool.java|  35 +-
 .../cassandra/tools/StandaloneVerifier.java | 222 ++
 .../org/apache/cassandra/db/VerifyTest.java | 428 +++
 15 files changed, 1195 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/21bdf870/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 8eafcd2..e090647 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0
+ * Add nodetool command to validate all sstables in a node (CASSANDRA-5791)
  * Add WriteFailureException to native protocol, notify coordinator of
write failures (CASSANDRA-8592)
  * Convert SequentialWriter to nio (CASSANDRA-8709)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/21bdf870/bin/sstableverify
--
diff --git a/bin/sstableverify b/bin/sstableverify
new file mode 100644
index 000..c3e40c7
--- /dev/null
+++ b/bin/sstableverify
@@ -0,0 +1,55 @@
+#!/bin/sh
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+if [ "x$CASSANDRA_INCLUDE" = "x" ]; then
+    for include in /usr/share/cassandra/cassandra.in.sh \
+                   /usr/local/share/cassandra/cassandra.in.sh \
+                   /opt/cassandra/cassandra.in.sh \
+                   ~/.cassandra.in.sh \
+                   "`dirname "$0"`/cassandra.in.sh"; do
+        if [ -r "$include" ]; then
+            . "$include"
+            break
+        fi
+    done
+elif [ -r "$CASSANDRA_INCLUDE" ]; then
+    . "$CASSANDRA_INCLUDE"
+fi
+
+# Use JAVA_HOME if set, otherwise look for java in PATH
+if [ -x "$JAVA_HOME/bin/java" ]; then
+    JAVA="$JAVA_HOME/bin/java"
+else
+    JAVA=`which java`
+fi
+
+if [ -z "$CLASSPATH" ]; then
+    echo "You must set the CLASSPATH var" >&2
+    exit 1
+fi
+
+if [ "x$MAX_HEAP_SIZE" = "x" ]; then
+    MAX_HEAP_SIZE="256M"
+fi
+
+"$JAVA" $JAVA_AGENT -ea -cp "$CLASSPATH" -Xmx$MAX_HEAP_SIZE \
+        -Dcassandra.storagedir="$cassandra_storagedir" \
+        -Dlogback.configurationFile=logback-tools.xml \
+        org.apache.cassandra.tools.StandaloneVerifier "$@"
+
+# vi:ai sw=4 ts=4 tw=0 et

http://git-wip-us.apache.org/repos/asf/cassandra/blob/21bdf870/bin/sstableverify.bat
--
diff --git a/bin/sstableverify.bat b/bin/sstableverify.bat
new file mode 100644
index 000..aa08826
--- /dev/null
+++ b/bin/sstableverify.bat
@@ -0,0 +1,41 @@
+@REM
+@REM  Licensed to the Apache Software Foundation (ASF) under one or more
+@REM  contributor license agreements.  See the NOTICE file distributed with
+@REM  

[jira] [Updated] (CASSANDRA-8851) Uncaught exception on thread Thread[SharedPool-Worker-16,5,main] after upgrade to 2.1.3

2015-03-18 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Björn Hachmann updated CASSANDRA-8851:
--
Attachment: system.log.gz
schema.txt

Hi @Benedict,

please find the schema information and the system log attached. Thank you very 
much for looking into this!

Kind regards
Björn

 Uncaught exception on thread Thread[SharedPool-Worker-16,5,main] after 
 upgrade to 2.1.3
 ---

 Key: CASSANDRA-8851
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8851
 Project: Cassandra
  Issue Type: Bug
 Environment: ubuntu 
Reporter: Tobias Schlottke
Assignee: Benedict
Priority: Critical
 Fix For: 2.1.4

 Attachments: schema.txt, system.log.gz


 Hi there,
 after upgrading to 2.1.3 we've got the following error every few seconds:
 {code}
 WARN  [SharedPool-Worker-16] 2015-02-23 10:20:36,392 
 AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
 Thread[SharedPool-Worker-16,5,main]: {}
 java.lang.AssertionError: null
   at org.apache.cassandra.io.util.Memory.size(Memory.java:307) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.utils.obs.OffHeapBitSet.capacity(OffHeapBitSet.java:61) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at org.apache.cassandra.utils.BloomFilter.indexes(BloomFilter.java:74) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.utils.BloomFilter.isPresent(BloomFilter.java:98) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:1366)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:1350)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.columniterator.SSTableSliceIterator.<init>(SSTableSliceIterator.java:41)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:185)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:273)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:62)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1915)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1748)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:342) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:57)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:47) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:62) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_45]
   at 
 org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
 [apache-cassandra-2.1.3.jar:2.1.3]
   at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
 {code}
 This seems to crash compactions, push up server load, and pile up pending 
 compactions.
 Any idea / possible workaround?
 Best,
 Tobias



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6809) Compressed Commit Log

2015-03-18 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14366930#comment-14366930
 ] 

Benedict commented on CASSANDRA-6809:
-

I wouldn't hold up commit over that, no. [~aboudreault] are you planning to do 
some further testing on this before commit, or should I go ahead and commit 
this to trunk?

 Compressed Commit Log
 -

 Key: CASSANDRA-6809
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6809
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Branimir Lambov
Priority: Minor
  Labels: docs-impacting, performance
 Fix For: 3.0

 Attachments: ComitLogStress.java, logtest.txt


 It seems an unnecessary oversight that we don't compress the commit log. 
 Doing so should improve throughput, but some care will need to be taken to 
 ensure we use as much of a segment as possible. I propose decoupling the 
 writing of the records from the segments. Basically write into a (queue of) 
 DirectByteBuffer, and have the sync thread compress, say, ~64K chunks every X 
 MB written to the CL (where X is ordinarily CLS size), and then pack as many 
 of the compressed chunks into a CLS as possible.
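
As a rough illustration of the chunking idea in the description (using the JDK's Deflater purely for the sketch; the real patch may use a different compressor, and the chunk size and names are assumptions):
{code}
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.util.zip.Deflater;

final class ChunkCompressor
{
    private static final int CHUNK_SIZE = 64 * 1024; // ~64K chunks, as proposed

    // Compresses the readable bytes of 'input' in CHUNK_SIZE pieces and returns
    // the concatenated compressed chunks (framing/length prefixes omitted).
    static byte[] compress(ByteBuffer input)
    {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] chunk = new byte[CHUNK_SIZE];
        byte[] compressed = new byte[CHUNK_SIZE * 2];
        Deflater deflater = new Deflater(Deflater.BEST_SPEED);
        while (input.hasRemaining())
        {
            int length = Math.min(CHUNK_SIZE, input.remaining());
            input.get(chunk, 0, length);
            deflater.reset();
            deflater.setInput(chunk, 0, length);
            deflater.finish();
            while (!deflater.finished())
            {
                int written = deflater.deflate(compressed);
                out.write(compressed, 0, written);
            }
        }
        deflater.end();
        return out.toByteArray();
    }
}
{code}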



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[3/4] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-03-18 Thread benedict
http://git-wip-us.apache.org/repos/asf/cassandra/blob/abd528a0/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
--
diff --cc src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
index 65d539f,000..53bb42e
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
+++ b/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
@@@ -1,2058 -1,0 +1,2030 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.cassandra.io.sstable.format;
 +
 +import java.io.*;
 +import java.nio.ByteBuffer;
 +import java.util.*;
 +import java.util.concurrent.*;
 +import java.util.concurrent.atomic.AtomicBoolean;
 +import java.util.concurrent.atomic.AtomicLong;
 +
 +import com.google.common.annotations.VisibleForTesting;
 +import com.google.common.base.Predicate;
 +import com.google.common.collect.Iterators;
 +import com.google.common.collect.Ordering;
 +import com.google.common.primitives.Longs;
 +import com.google.common.util.concurrent.RateLimiter;
 +
 +import com.clearspring.analytics.stream.cardinality.CardinalityMergeException;
 +import com.clearspring.analytics.stream.cardinality.HyperLogLogPlus;
 +import com.clearspring.analytics.stream.cardinality.ICardinality;
 +import org.apache.cassandra.cache.CachingOptions;
 +import org.apache.cassandra.cache.InstrumentingCache;
 +import org.apache.cassandra.cache.KeyCacheKey;
 +import org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor;
 +import org.apache.cassandra.concurrent.ScheduledExecutors;
 +import org.apache.cassandra.config.*;
 +import org.apache.cassandra.db.*;
 +import org.apache.cassandra.db.columniterator.OnDiskAtomIterator;
 +import org.apache.cassandra.db.commitlog.ReplayPosition;
 +import org.apache.cassandra.db.composites.CellName;
 +import org.apache.cassandra.db.filter.ColumnSlice;
 +import org.apache.cassandra.db.index.SecondaryIndex;
 +import org.apache.cassandra.dht.*;
 +import org.apache.cassandra.io.compress.CompressionMetadata;
 +import org.apache.cassandra.io.sstable.*;
 +import org.apache.cassandra.io.sstable.metadata.*;
 +import org.apache.cassandra.io.util.*;
 +import org.apache.cassandra.metrics.RestorableMeter;
 +import org.apache.cassandra.metrics.StorageMetrics;
 +import org.apache.cassandra.service.ActiveRepairService;
 +import org.apache.cassandra.service.CacheService;
 +import org.apache.cassandra.service.StorageService;
 +import org.apache.cassandra.utils.*;
 +import org.apache.cassandra.utils.concurrent.OpOrder;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +import org.apache.cassandra.utils.concurrent.Ref;
 +import org.apache.cassandra.utils.concurrent.RefCounted;
 +import org.apache.cassandra.utils.concurrent.SelfRefCounted;
 +
 +import static 
org.apache.cassandra.db.Directories.SECONDARY_INDEX_NAME_SEPARATOR;
 +
 +/**
 + * An SSTableReader can be constructed in a number of places, but typically 
is either
 + * read from disk at startup, or constructed from a flushed memtable, or 
after compaction
 + * to replace some existing sstables. However once created, an sstablereader 
may also be modified.
 + *
 + * A reader's OpenReason describes its current stage in its lifecycle, as 
follows:
 + *
 + * NORMAL
 + * From:       None        => Reader has been read from disk, either at startup or from a flushed memtable
 + *             EARLY       => Reader is the final result of a compaction
 + *             MOVED_START => Reader WAS being compacted, but this failed and it has been restored to NORMAL status
 + *
 + * EARLY
 + * From:       None        => Reader is a compaction replacement that is either incomplete and has been opened
 + *                            to represent its partial result status, or has been finished but the compaction
 + *                            it is a part of has not yet completed fully
 + *             EARLY       => Same as from None, only it is not the first time it has been
 + *
 + * MOVED_START
 + * From:       NORMAL      => Reader is being compacted. This compaction has not finished, but the compaction result
 + *                            is 

[1/4] cassandra git commit: Use correct bounds for page cache eviction of compressed files

2015-03-18 Thread benedict
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 572ef50dd - 521b36311
  refs/heads/trunk 7aefd914c - abd528a0f


Use correct bounds for page cache eviction of compressed files

patch by benedict; reviewed by marcus for CASSANDRA-8746


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/521b3631
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/521b3631
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/521b3631

Branch: refs/heads/cassandra-2.1
Commit: 521b36311ad23f3defd6abf36becda61388add9c
Parents: 572ef50
Author: Benedict Elliott Smith bened...@apache.org
Authored: Wed Mar 18 11:08:08 2015 +
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Wed Mar 18 11:08:08 2015 +

--
 CHANGES.txt |  1 +
 .../cassandra/io/sstable/SSTableReader.java | 40 +++-
 .../io/util/CompressedPoolingSegmentedFile.java |  7 
 .../io/util/CompressedSegmentedFile.java|  7 
 .../apache/cassandra/io/util/SegmentedFile.java |  6 +++
 5 files changed, 27 insertions(+), 34 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/521b3631/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 36bdb39..3f96330 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.4
+ * Use correct bounds for page cache eviction of compressed files 
(CASSANDRA-8746)
  * SSTableScanner enforces its bounds (CASSANDRA-8946)
  * Cleanup cell equality (CASSANDRA-8947)
  * Introduce intra-cluster message coalescing (CASSANDRA-8692)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/521b3631/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
index 41e4adb..f42bfc7 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
@@ -896,8 +896,8 @@ public class SSTableReader extends SSTable implements SelfRefCounted<SSTableReader>
 {
 public void run()
 {
-CLibrary.trySkipCache(dfile.path, 0, dataStart);
-CLibrary.trySkipCache(ifile.path, 0, indexStart);
+dfile.dropPageCache(dataStart);
+ifile.dropPageCache(indexStart);
 if (runOnClose != null)
 runOnClose.run();
 }
@@ -920,8 +920,8 @@ public class SSTableReader extends SSTable implements SelfRefCounted<SSTableReader>
 {
 public void run()
 {
-CLibrary.trySkipCache(dfile.path, 0, 0);
-CLibrary.trySkipCache(ifile.path, 0, 0);
+dfile.dropPageCache(0);
+ifile.dropPageCache(0);
 runOnClose.run();
 }
 };
@@ -2181,8 +2181,8 @@ public class SSTableReader extends SSTable implements SelfRefCounted<SSTableReader>
 if (isCompacted.get())
 SystemKeyspace.clearSSTableReadMeter(desc.ksname, desc.cfname, 
desc.generation);
 // don't ideally want to dropPageCache for the file until all 
instances have been released
-dropPageCache(desc.filenameFor(Component.DATA));
-dropPageCache(desc.filenameFor(Component.PRIMARY_INDEX));
+CLibrary.trySkipCache(desc.filenameFor(Component.DATA), 0, 0);
+CLibrary.trySkipCache(desc.filenameFor(Component.PRIMARY_INDEX), 
0, 0);
 }
 
 public String name()
@@ -2204,32 +2204,4 @@ public class SSTableReader extends SSTable implements SelfRefCounted<SSTableReader>
             return refc;
         }
     }
-
-    private static void dropPageCache(String filePath)
-    {
-        RandomAccessFile file = null;
-
-        try
-        {
-            file = new RandomAccessFile(filePath, "r");
-
-            int fd = CLibrary.getfd(file.getFD());
-
-            if (fd > 0)
-            {
-                if (logger.isDebugEnabled())
-                    logger.debug(String.format("Dropping page cache of file %s.", filePath));
-
-                CLibrary.trySkipCache(fd, 0, 0);
-            }
-        }
-        catch (IOException e)
-        {
-            // we don't care if cache cleanup fails
-        }
-        finally
-        {
-            FileUtils.closeQuietly(file);
-        }
-    }
 }


[jira] [Created] (CASSANDRA-8987) cassandra-stress should support a more complex client model

2015-03-18 Thread Benedict (JIRA)
Benedict created CASSANDRA-8987:
---

 Summary: cassandra-stress should support a more complex client 
model
 Key: CASSANDRA-8987
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8987
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Benedict
Assignee: Benedict


Orthogonal to CASSANDRA-8986, but still very important, is stress' simulation 
of clients: currently we assume a fixed number of clients performing infinite 
synchronous work, whereas, as I 
[argued|https://groups.google.com/forum/#!topic/mechanical-sympathy/icNZJejUHfE%5B101-125%5D]
 on the mechanical sympathy mailing list, the correct model is to have a new 
client arrival distribution and a distinct client model. Ideally, however, I 
would like to expand this to support client models that can simulate 
multi-table transactions, with both synchronous and asynchronous steps. So, 
let's say we have three tables T1, T2, T3, we could say something like:

A client performs:
* a registration by insert to T1 (and/or perhaps lookup in T1), multiple 
inserts to T2 and T2, in parallel
* followed by a number of queries on T3

Probably the best way to achieve this is with a tiered transaction definition 
that can be composed, so that any single query or insert is a transaction 
that itself may be sequentially or in parallel composed with any other to 
compose a new macro transaction. This would then be combined with a client 
arrival rate distribution to produce a total cluster workload.

At least one remaining question is if we want the operations to be data 
dependent, in which case this may well interact with CASSANDRA-8986, and 
probably requires a little thought. [~jshook] [~jeromatron] [~mstump] 
[~tupshin] thoughts on this?
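
A toy sketch of what such composable transaction steps could look like (illustrative only; Step, of, sequence and parallel are invented names, not a proposed API):
{code}
import java.util.concurrent.CompletableFuture;
import java.util.function.Supplier;

// Each step is asynchronous; steps can be chained sequentially or run in parallel
// to build a macro transaction, e.g. sequence(parallel(insertT1, insertT2), queryT3),
// where insertT1/insertT2/queryT3 are hypothetical steps.
interface Step
{
    CompletableFuture<Void> run();

    static Step of(Supplier<CompletableFuture<Void>> op)
    {
        return op::get;
    }

    static Step sequence(Step first, Step second)
    {
        return () -> first.run().thenCompose(ignored -> second.run());
    }

    static Step parallel(Step a, Step b)
    {
        return () -> CompletableFuture.allOf(a.run(), b.run());
    }
}
{code}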



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8986) Major cassandra-stress refactor

2015-03-18 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14367120#comment-14367120
 ] 

T Jake Luciani commented on CASSANDRA-8986:
---

I think we should have *ONE* way to use the tool; right now there are old-old 
legacy, legacy, yaml, yaml + cli flags, and cli-flags only.

I think we should remove all forms of input other than yaml and some very light 
cli options. My reasoning is that we have the best chance of documenting and 
capturing a reproducible profile, vs "I used this magic incantation of flags to 
get it to work... I think".



 Major cassandra-stress refactor
 ---

 Key: CASSANDRA-8986
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8986
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Benedict
Assignee: Benedict

 We need a tool for both stressing _and_ validating more complex workloads 
 than stress currently supports. Stress needs a raft of changes, and I think 
 it would be easier to deliver many of these as a single major endeavour which 
 I think is justifiable given its audience. The rough behaviours I want stress 
 to support are:
 * Ability to know exactly how many rows it will produce, for any clustering 
 prefix, without generating those prefixes
 * Ability to generate an amount of data proportional to the amount it will 
 produce to the server (or consume from the server), rather than proportional 
 to the variation in clustering columns
 * Ability to reliably produce near identical behaviour each run
 * Ability to understand complex overlays of operation types (LWT, Delete, 
 Expiry, although perhaps not all implemented immediately, the framework for 
 supporting them easily)
 * Ability to (with minimal internal state) understand the complete cluster 
 state through overlays of multiple procedural generations
 * Ability to understand the in-flight state of in-progress operations (i.e. 
 if we're applying a delete, understand that the delete may have been applied, 
 and may not have been, for potentially multiple conflicting in flight 
 operations)
 I think the necessary changes to support this would give us the _functional_ 
 base to support all the functionality I can currently envisage stress 
 needing. Before embarking on this (which I may attempt very soon), it would 
 be helpful to get input from others as to features missing from stress that I 
 haven't covered here that we will certainly want in the future, so that they 
 can be factored in to the overall design and hopefully avoid another refactor 
 one year from now, as its complexity is scaling each time, and each time it 
 is a higher sunk cost. [~jbellis] [~iamaleksey] [~slebresne] [~tjake] 
 [~enigmacurry] [~aweisberg] [~blambov] [~jshook] ... and @everyone else :) 
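
To illustrate the first two bullets above (knowing row counts per prefix, and reproducibility): the essential trick is to derive everything as a pure function of a run seed. A toy sketch with invented names:
{code}
import java.util.SplittableRandom;

// Everything is a pure function of (run seed, partition key), so the same seed
// always reproduces the same data, and row counts are known without materialising
// any clustering prefixes.
final class DeterministicRows
{
    private final long runSeed;

    DeterministicRows(long runSeed) { this.runSeed = runSeed; }

    private long seedFor(long partitionKey)
    {
        // cheap, stable mix of the run seed and the key
        return new SplittableRandom(runSeed ^ (partitionKey * 0x9E3779B97F4A7C15L)).nextLong();
    }

    int rowCount(long partitionKey, int maxRows)
    {
        return 1 + new SplittableRandom(seedFor(partitionKey)).nextInt(maxRows);
    }
}
{code}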



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8986) Major cassandra-stress refactor

2015-03-18 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14367134#comment-14367134
 ] 

Benedict edited comment on CASSANDRA-8986 at 3/18/15 1:43 PM:
--

I agree, but that is also independent of this goal. I plan to do that refactor 
first (as a separate ticket; I think I have a few related ones filed). I do 
intend to retain a simple mode, though, since the old mode is still used 
widely, but will transparently create a StressProfile to perform it.

edit: ... actually, we may disagree a little. I want to ensure the profile can 
specify everything, but the cli is still a very useful way to override a number 
of properties, especially for scripting. Forcing users to write a separate yaml 
for every possible test is really ugly IMO.


was (Author: benedict):
I agree, but that is also independent of this goal. I plan to do that refactor 
first (as a separate ticket; I think I have a few related ones filed). I do 
intend to retain a simple mode, though, since the old mode is still used 
widely, but will transparently create a StressProfile to perform it.

 Major cassandra-stress refactor
 ---

 Key: CASSANDRA-8986
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8986
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Benedict
Assignee: Benedict

 We need a tool for both stressing _and_ validating more complex workloads 
 than stress currently supports. Stress needs a raft of changes, and I think 
 it would be easier to deliver many of these as a single major endeavour which 
 I think is justifiable given its audience. The rough behaviours I want stress 
 to support are:
 * Ability to know exactly how many rows it will produce, for any clustering 
 prefix, without generating those prefixes
 * Ability to generate an amount of data proportional to the amount it will 
 produce to the server (or consume from the server), rather than proportional 
 to the variation in clustering columns
 * Ability to reliably produce near identical behaviour each run
 * Ability to understand complex overlays of operation types (LWT, Delete, 
 Expiry, although perhaps not all implemented immediately, the framework for 
 supporting them easily)
 * Ability to (with minimal internal state) understand the complete cluster 
 state through overlays of multiple procedural generations
 * Ability to understand the in-flight state of in-progress operations (i.e. 
 if we're applying a delete, understand that the delete may have been applied, 
 and may not have been, for potentially multiple conflicting in flight 
 operations)
 I think the necessary changes to support this would give us the _functional_ 
 base to support all the functionality I can currently envisage stress 
 needing. Before embarking on this (which I may attempt very soon), it would 
 be helpful to get input from others as to features missing from stress that I 
 haven't covered here that we will certainly want in the future, so that they 
 can be factored in to the overall design and hopefully avoid another refactor 
 one year from now, as its complexity is scaling each time, and each time it 
 is a higher sunk cost. [~jbellis] [~iamaleksey] [~slebresne] [~tjake] 
 [~enigmacurry] [~aweisberg] [~blambov] [~jshook] ... and @everyone else :) 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8987) cassandra-stress should support a more complex client model

2015-03-18 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-8987:

Description: 
Orthogonal to CASSANDRA-8986, but still very important, is stress' simulation 
of clients: currently we assume a fixed number of clients performing infinite 
synchronous work, whereas, as I 
[argued|https://groups.google.com/forum/#!topic/mechanical-sympathy/icNZJejUHfE%5B101-125%5D]
 on the mechanical sympathy mailing list, the correct model is to have a new 
client arrival distribution and a distinct client model. Ideally, however, I 
would like to expand this to support client models that can simulate 
multi-table transactions, with both synchronous and asynchronous steps. So, 
let's say we have three tables T1, T2, T3, we could say something like:

A client performs:
* a registration by insert to T1 (and/or perhaps lookup in T1), multiple 
inserts to T2 and T2, in parallel
* followed by a number of queries on T3

Probably the best way to achieve this is with a tiered transaction definition 
that can be composed, so that any single query or insert is a transaction 
that itself may be sequentially or in parallel composed with any other to 
compose a new macro transaction. This would then be combined with a client 
arrival rate distribution to produce a total cluster workload.

At least one remaining question is if we want the operations to be data 
dependent, in which case this may well interact with CASSANDRA-8986, and 
probably requires a little thought. [~jshook] [~jeromatron] [~mstump] 
[~tupshin] [~jlacefie] thoughts on this?

  was:
Orthogonal to CASSANDRA-8986, but still very important, is stress' simulation 
of clients: currently we assume a fixed number of clients performing infinite 
synchronous work, whereas, as I 
[argued|https://groups.google.com/forum/#!topic/mechanical-sympathy/icNZJejUHfE%5B101-125%5D]
 on the mechanical sympathy mailing list, the correct model is to have a new 
client arrival distribution and a distinct client model. Ideally, however, I 
would like to expand this to support client models that can simulate 
multi-table transactions, with both synchronous and asynchronous steps. So, 
let's say we have three tables T1, T2, T3, we could say something like:

A client performs:
* a registration by insert to T1 (and/or perhaps lookup in T1), multiple 
inserts to T2 and T2, in parallel
* followed by a number of queries on T3

Probably the best way to achieve this is with a tiered transaction definition 
that can be composed, so that any single query or insert is a transaction 
that itself may be sequentially or in parallel composed with any other to 
compose a new macro transaction. This would then be combined with a client 
arrival rate distribution to produce a total cluster workload.

At least one remaining question is if we want the operations to be data 
dependent, in which case this may well interact with CASSANDRA-8986, and 
probably requires a little thought. [~jshook] [~jeromatron] [~mstump] 
[~tupshin] thoughts on this?


 cassandra-stress should support a more complex client model
 ---

 Key: CASSANDRA-8987
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8987
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Benedict
Assignee: Benedict

 Orthogonal to CASSANDRA-8986, but still very important, is stress' simulation 
 of clients: currently we assume a fixed number of clients performing infinite 
 synchronous work, whereas, as I 
 [argued|https://groups.google.com/forum/#!topic/mechanical-sympathy/icNZJejUHfE%5B101-125%5D]
  on the mechanical sympathy mailing list, the correct model is to have a new 
 client arrival distribution and a distinct client model. Ideally, however, I 
 would like to expand this to support client models that can simulate 
 multi-table transactions, with both synchronous and asynchronous steps. So, 
 let's say we have three tables T1, T2, T3, we could say something like:
 A client performs:
 * a registration by insert to T1 (and/or perhaps lookup in T1), multiple 
 inserts to T2 and T2, in parallel
 * followed by a number of queries on T3
 Probably the best way to achieve this is with a tiered transaction 
 definition that can be composed, so that any single query or insert is a 
 transaction that itself may be sequentially or in parallel composed with 
 any other to compose a new macro transaction. This would then be combined 
 with a client arrival rate distribution to produce a total cluster workload.
 At least one remaining question is if we want the operations to be data 
 dependent, in which case this may well interact with CASSANDRA-8986, and 
 probably requires a little thought. [~jshook] [~jeromatron] [~mstump] 
 [~tupshin] [~jlacefie] thoughts on this?




[jira] [Commented] (CASSANDRA-8826) Distributed aggregates

2015-03-18 Thread Cristian O (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367118#comment-14367118
 ] 

Cristian O commented on CASSANDRA-8826:
---

Hi Benedict,

Very nicely put :) Let's see if reason can eventually prevail...

I'm not sure if you're aware of Vertica. It's the pioneering columnar
store, originally created by M. Stonebraker
(highly recommended to look him up and read his papers if you're not
already aware of him).

Vertica is probably one of the best available analytics databases; however,
it's commercial and quite expensive.

There's a paper on Vertica describing its architecture here:

http://www.vldb.org/pvldb/vol7/p1259-gupta.pdf

You'll see that its distribution model and even parts of the storage
engine design are remarkably similar to Cassandra's. This is not accidental,
as they are both shared-nothing architectures.

Cassandra is quite well suited to implement some of the main analytical use
cases with probably minimal effort, and there would
be a lot of interest in this market if it succeeds.

As I mentioned yesterday, a very interesting use case is to do simple
aggregations over large numbers of data points (mainly time series) very
fast (under 5 secs) for a large number of users (many concurrent requests).

Spark/MR do not have the right architecture for this; in the OSS world, a
direct competitor would be Impala (almost shared nothing), and perhaps HBase,
which I hear is trying to position itself in this direction.


Cheers,
Cristian








 Distributed aggregates
 --

 Key: CASSANDRA-8826
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8826
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Robert Stupp
Priority: Minor

 Aggregations have been implemented in CASSANDRA-4914.
 All calculation is performed on the coordinator. This means that all data is 
 pulled by the coordinator and processed there.
 This ticket is about distributing aggregates to make them more efficient. 
 Some related tickets (esp. CASSANDRA-8099) are currently in 
 progress - we should wait for them to land before talking about 
 implementation.
 Another playground (not covered by this ticket) that might be related is 
 _distributed filtering_.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8986) Major cassandra-stress refactor

2015-03-18 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367134#comment-14367134
 ] 

Benedict commented on CASSANDRA-8986:
-

I agree, but that is also independent of this goal. I plan to do that refactor 
first (as a separate ticket; I think I have a few related ones filed). I do 
intend to retain a simple mode, though, since the old mode is still used 
widely, but will transparently create a StressProfile to perform it.

 Major cassandra-stress refactor
 ---

 Key: CASSANDRA-8986
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8986
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Benedict
Assignee: Benedict

 We need a tool for both stressing _and_ validating more complex workloads 
 than stress currently supports. Stress needs a raft of changes, and I think 
 it would be easier to deliver many of these as a single major endeavour which 
 I think is justifiable given its audience. The rough behaviours I want stress 
 to support are:
 * Ability to know exactly how many rows it will produce, for any clustering 
 prefix, without generating those prefixes
 * Ability to generate an amount of data proportional to the amount it will 
 produce to the server (or consume from the server), rather than proportional 
 to the variation in clustering columns
 * Ability to reliably produce near identical behaviour each run
 * Ability to understand complex overlays of operation types (LWT, Delete, 
 Expiry, although perhaps not all implemented immediately, the framework for 
 supporting them easily)
 * Ability to (with minimal internal state) understand the complete cluster 
 state through overlays of multiple procedural generations
 * Ability to understand the in-flight state of in-progress operations (i.e. 
 if we're applying a delete, understand that the delete may have been applied, 
 and may not have been, for potentially multiple conflicting in flight 
 operations)
 I think the necessary changes to support this would give us the _functional_ 
 base to support all the functionality I can currently envisage stress 
 needing. Before embarking on this (which I may attempt very soon), it would 
 be helpful to get input from others as to features missing from stress that I 
 haven't covered here that we will certainly want in the future, so that they 
 can be factored in to the overall design and hopefully avoid another refactor 
 one year from now, as its complexity is scaling each time, and each time it 
 is a higher sunk cost. [~jbellis] [~iamaleksey] [~slebresne] [~tjake] 
 [~enigmacurry] [~aweisberg] [~blambov] [~jshook] ... and @everyone else :) 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8987) cassandra-stress should support a more complex client model

2015-03-18 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367055#comment-14367055
 ] 

Benedict commented on CASSANDRA-8987:
-

This is particularly problematic for latency modelling, although we can improve 
that with a simpler model, and in doing so help users modelling their workloads

 cassandra-stress should support a more complex client model
 ---

 Key: CASSANDRA-8987
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8987
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Benedict
Assignee: Benedict

 Orthogonal to CASSANDRA-8986, but still very important, is stress' simulation 
 of clients: currently we assume a fixed number of clients performing infinite 
 synchronous work, whereas, as I 
 [argued|https://groups.google.com/forum/#!topic/mechanical-sympathy/icNZJejUHfE%5B101-125%5D]
  on the mechanical sympathy mailing list, the correct model is to have a new 
 client arrival distribution and a distinct client model. Ideally, however, I 
 would like to expand this to support client models that can simulate 
 multi-table transactions, with both synchronous and asynchronous steps. So, 
 let's say we have three tables T1, T2, T3, we could say something like:
 A client performs:
 * a registration by insert to T1 (and/or perhaps lookup in T1), multiple 
 inserts to T2 and T2, in parallel
 * followed by a number of queries on T3
 Probably the best way to achieve this is with a tiered transaction 
 definition that can be composed, so that any single query or insert is a 
 transaction that itself may be sequentially or in parallel composed with 
 any other to compose a new macro transaction. This would then be combined 
 with a client arrival rate distribution to produce a total cluster workload.
 At least one remaining question is if we want the operations to be data 
 dependent, in which case this may well interact with CASSANDRA-8986, and 
 probably requires a little thought. [~jshook] [~jeromatron] [~mstump] 
 [~tupshin] thoughts on this?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8986) Major cassandra-stress refactor

2015-03-18 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367091#comment-14367091
 ] 

Benedict commented on CASSANDRA-8986:
-

Well, my goal is to make the tool itself only a little more complex, but to 
support much more complex behaviours, which is why it needs a major overhaul 
(to introduce these behaviours with the current design would be prohibitively 
complex). 

We do need to think about the API, though, yes. Perhaps we should actually 
reduce the number of knobs: we could simply offer a distribution of the total 
number of CQL rows in a partition, an optional ratio for each clustering column 
(defining where the row fan-out occurs on average), and a category for how 
those rows are distributed: uniform, normal and extreme are likely 
sufficient (without any tweaking parameters).
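As a rough illustration only (hypothetical names, not a proposal for the actual option syntax), those three knobs could boil down to something like:

{code:java}
import java.util.Random;

// Hypothetical sketch of the reduced knob set discussed above (none of these names
// exist in cassandra-stress): a rows-per-partition distribution plus a coarse
// "spread" category deciding how those rows land in the clustering space.
public class PartitionShapeSketch
{
    enum Spread { UNIFORM, NORMAL, EXTREME }

    // sample how many CQL rows the partition with this seed should contain
    static int rowsFor(long seed, int maxRows)
    {
        Random rnd = new Random(seed);
        return 1 + rnd.nextInt(maxRows);
    }

    // map a row's ordinal to a position in [0,1) of the clustering space
    static double position(long seed, int row, Spread spread)
    {
        Random rnd = new Random(seed * 31 + row);
        switch (spread)
        {
            case UNIFORM: return rnd.nextDouble();
            case NORMAL:  return Math.min(0.999, Math.max(0.0, 0.5 + rnd.nextGaussian() / 6));
            case EXTREME: return Math.pow(rnd.nextDouble(), 8); // heavily skewed towards the start
        }
        throw new AssertionError();
    }

    public static void main(String[] args)
    {
        long seed = 42;
        int rows = rowsFor(seed, 10_000);
        System.out.println(rows + " rows, first position: " + position(seed, 0, Spread.EXTREME));
    }
}
{code}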

Any other API pieces we should consider? I've considered whether we should 
support Nashorn for value generation so that users can define their own 
arbitrary JavaScript, but this could have some performance implications. This 
is also orthogonal to this change.
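A rough sketch of the Nashorn idea via the standard javax.script API on Java 8 (the function name and wiring are made up for illustration; the per-call indirection through the script engine is where the performance concern would come from):

{code:java}
import javax.script.Invocable;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

// Illustration only: let a user supply a JavaScript function and call it from the
// value-generation path via the standard javax.script API (Nashorn ships with Java 8).
public class JsValueGenerator
{
    public static void main(String[] args) throws ScriptException, NoSuchMethodException
    {
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");
        engine.eval("function value(seed, row) { return 'user-' + seed + '-' + (row * row); }");

        Invocable js = (Invocable) engine;
        // every generated value goes through the script engine, hence the performance question
        Object v = js.invokeFunction("value", 42L, 7);
        System.out.println(v);
    }
}
{code}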

Almost all of CASSANDRA-8957 seems subsumed by this to me. Timeseries workloads 
are orthogonal to these changes, though, AFAICT, as they're basically just a 
matter of shifting the value domain based on seed.

 Major cassandra-stress refactor
 ---

 Key: CASSANDRA-8986
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8986
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Benedict
Assignee: Benedict

 We need a tool for both stressing _and_ validating more complex workloads 
 than stress currently supports. Stress needs a raft of changes, and I think 
 it would be easier to deliver many of these as a single major endeavour which 
 I think is justifiable given its audience. The rough behaviours I want stress 
 to support are:
 * Ability to know exactly how many rows it will produce, for any clustering 
 prefix, without generating those prefixes
 * Ability to generate an amount of data proportional to the amount it will 
 produce to the server (or consume from the server), rather than proportional 
 to the variation in clustering columns
 * Ability to reliably produce near identical behaviour each run
 * Ability to understand complex overlays of operation types (LWT, Delete, 
 Expiry, although perhaps not all implemented immediately, the framework for 
 supporting them easily)
 * Ability to (with minimal internal state) understand the complete cluster 
 state through overlays of multiple procedural generations
 * Ability to understand the in-flight state of in-progress operations (i.e. 
 if we're applying a delete, understand that the delete may have been applied, 
 and may not have been, for potentially multiple conflicting in flight 
 operations)
 I think the necessary changes to support this would give us the _functional_ 
 base to support all the functionality I can currently envisage stress 
 needing. Before embarking on this (which I may attempt very soon), it would 
 be helpful to get input from others as to features missing from stress that I 
 haven't covered here that we will certainly want in the future, so that they 
 can be factored in to the overall design and hopefully avoid another refactor 
 one year from now, as its complexity is scaling each time, and each time it 
 is a higher sunk cost. [~jbellis] [~iamaleksey] [~slebresne] [~tjake] 
 [~enigmacurry] [~aweisberg] [~blambov] [~jshook] ... and @everyone else :) 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8986) Major cassandra-stress refactor

2015-03-18 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367069#comment-14367069
 ] 

T Jake Luciani commented on CASSANDRA-8986:
---

See CASSANDRA-8597

My concern is that this will add more complexity to an already complex tool. 
Can we agree on the user API/interactions before jumping into the 
implementation?

 Major cassandra-stress refactor
 ---

 Key: CASSANDRA-8986
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8986
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Benedict
Assignee: Benedict

 We need a tool for both stressing _and_ validating more complex workloads 
 than stress currently supports. Stress needs a raft of changes, and I think 
 it would be easier to deliver many of these as a single major endeavour which 
 I think is justifiable given its audience. The rough behaviours I want stress 
 to support are:
 * Ability to know exactly how many rows it will produce, for any clustering 
 prefix, without generating those prefixes
 * Ability to generate an amount of data proportional to the amount it will 
 produce to the server (or consume from the server), rather than proportional 
 to the variation in clustering columns
 * Ability to reliably produce near identical behaviour each run
 * Ability to understand complex overlays of operation types (LWT, Delete, 
 Expiry, although perhaps not all implemented immediately, the framework for 
 supporting them easily)
 * Ability to (with minimal internal state) understand the complete cluster 
 state through overlays of multiple procedural generations
 * Ability to understand the in-flight state of in-progress operations (i.e. 
 if we're applying a delete, understand that the delete may have been applied, 
 and may not have been, for potentially multiple conflicting in flight 
 operations)
 I think the necessary changes to support this would give us the _functional_ 
 base to support all the functionality I can currently envisage stress 
 needing. Before embarking on this (which I may attempt very soon), it would 
 be helpful to get input from others as to features missing from stress that I 
 haven't covered here that we will certainly want in the future, so that they 
 can be factored in to the overall design and hopefully avoid another refactor 
 one year from now, as its complexity is scaling each time, and each time it 
 is a higher sunk cost. [~jbellis] [~iamaleksey] [~slebresne] [~tjake] 
 [~enigmacurry] [~aweisberg] [~blambov] [~jshook] ... and @everyone else :) 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8986) Major cassandra-stress refactor

2015-03-18 Thread Benedict (JIRA)
Benedict created CASSANDRA-8986:
---

 Summary: Major cassandra-stress refactor
 Key: CASSANDRA-8986
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8986
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Benedict
Assignee: Benedict


We need a tool for both stressing _and_ validating more complex workloads than 
stress currently supports. Stress needs a raft of changes, and I think it would 
be easier to deliver many of these as a single major endeavour which I think is 
justifiable given its audience. The rough behaviours I want stress to support 
are:

* Ability to know exactly how many rows it will produce, for any clustering 
prefix, without generating those prefixes
* Ability to generate an amount of data proportional to the amount it will 
produce to the server (or consume from the server), rather than proportional to 
the variation in clustering columns
* Ability to reliably produce near identical behaviour each run
* Ability to understand complex overlays of operation types (LWT, Delete, 
Expiry, although perhaps not all implemented immediately, the framework for 
supporting them easily)
* Ability to (with minimal internal state) understand the complete cluster 
state through overlays of multiple procedural generations
* Ability to understand the in-flight state of in-progress operations (i.e. if 
we're applying a delete, understand that the delete may have been applied, and 
may not have been, for potentially multiple conflicting in flight operations)

I think the necessary changes to support this would give us the _functional_ 
base to support all the functionality I can currently envisage stress needing. 
Before embarking on this (which I may attempt very soon), it would be helpful 
to get input from others as to features missing from stress that I haven't 
covered here that we will certainly want in the future, so that they can be 
factored in to the overall design and hopefully avoid another refactor one year 
from now, as its complexity is scaling each time, and each time it is a higher 
sunk cost. [~jbellis] [~iamaleksey] [~slebresne] [~tjake] [~enigmacurry] 
[~aweisberg] [~blambov] [~jshook] ... and @everyone else :) 
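To make the first two abilities listed above a little more concrete, here is a minimal sketch (my illustration, not the actual design) of purely seed-driven generation, where the row count for any clustering prefix is a pure function of the run seed and the prefix and can therefore be known without producing any rows:

{code:java}
import java.util.SplittableRandom;

// Illustration only, not the stress design: the count for any clustering prefix is a
// pure function of (run seed, prefix), so it can be recomputed at any time, by any
// client, without materializing the rows themselves.
public class DeterministicCounts
{
    static long rowsUnder(long runSeed, String clusteringPrefix, long maxRows)
    {
        long mixed = runSeed ^ (clusteringPrefix.hashCode() * 0x9E3779B97F4A7C15L);
        return new SplittableRandom(mixed).nextLong(1, maxRows + 1);
    }

    public static void main(String[] args)
    {
        // same answer every run, no rows generated
        System.out.println(rowsUnder(42L, "pk=17/c1=3", 1 << 10));
        System.out.println(rowsUnder(42L, "pk=17/c1=3", 1 << 10));
    }
}
{code}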



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8920) Remove IntervalTree from maxPurgeableTimestamp calculation

2015-03-18 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367059#comment-14367059
 ] 

Marcus Eriksson commented on CASSANDRA-8920:


(adding comment here after discussion on irc)

This would probably be quite a bit slower for LCS, since overlappingSSTables 
contains the sstables that overlap the currently compacting ones but are not 
themselves being compacted. This means that for LCS this would contain all 
other sstables on the node when doing an L0 -> L1 compaction.

For STCS this would probably work very well, since we would almost always return 
all sstables from the interval tree. Perhaps we should let the compaction 
strategy decide whether to use the interval tree or not.
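A minimal sketch of that last suggestion, with entirely hypothetical names (no such hook exists in Cassandra today): STCS could skip the IntervalTree, while LCS keeps it so an L0 -> L1 compaction does not have to consider every other sstable on the node.

{code:java}
import java.util.Arrays;
import java.util.Collection;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Hypothetical sketch of "let the strategy decide"; not an existing Cassandra API.
public class PurgeCandidateSketch
{
    interface StrategyHint
    {
        boolean useIntervalTreeForPurge();
    }

    static <S> Collection<S> candidatesFor(StrategyHint strategy,
                                           Collection<S> overlapping,
                                           Predicate<S> intervalTreeMatch)
    {
        if (!strategy.useIntervalTreeForPurge())
            return overlapping; // conservative: consider every overlapping sstable
        return overlapping.stream().filter(intervalTreeMatch).collect(Collectors.toList());
    }

    public static void main(String[] args)
    {
        StrategyHint stcsLike = () -> false; // skip the tree: it returns nearly everything anyway
        StrategyHint lcsLike  = () -> true;  // narrow via the tree to avoid scanning the whole node
        Collection<String> overlapping = Arrays.asList("sstable-1", "sstable-2", "sstable-3");
        System.out.println(candidatesFor(stcsLike, overlapping, s -> s.endsWith("2")));
        System.out.println(candidatesFor(lcsLike,  overlapping, s -> s.endsWith("2")));
    }
}
{code}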

 Remove IntervalTree from maxPurgeableTimestamp calculation
 --

 Key: CASSANDRA-8920
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8920
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 2.1.4

 Attachments: 8920.txt


 The IntervalTree only maps partition keys. Since a majority of users deploy a 
 hashed partitioner the work is mostly wasted, since they will be evenly 
 distributed across the full token range owned by the node - and in some cases 
 it is a significant amount of work. We can perform a corroboration against 
 the file bounds if we get a BF match as a sanity check if we like, but 
 performing an IntervalTree search is significantly more expensive (esp. once 
 murmur hash calculation memoization goes mainstream).
 In LCS, the keys are bounded, so it might appear that it would help, but in 
 this scenario we only compact against like bounds, so again it is not helpful.
 With a ByteOrderedPartitioner it could potentially be of use, but this is 
 sufficiently rare to not optimise for IMO.
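A rough sketch of the cheaper check described there, with made-up names (not the actual patch): consult the per-sstable bloom filter first and, on a positive answer, corroborate against the sstable's own first/last token bounds, skipping the IntervalTree search entirely.

{code:java}
import java.util.function.LongPredicate;

// Illustration only: bloom filter first, then a cheap bounds sanity check; no IntervalTree.
public class BoundsCheckSketch
{
    final long firstToken, lastToken;          // the sstable's covered token range
    final LongPredicate bloomFilterMayContain; // stand-in for the real bloom filter

    BoundsCheckSketch(long firstToken, long lastToken, LongPredicate bloomFilterMayContain)
    {
        this.firstToken = firstToken;
        this.lastToken = lastToken;
        this.bloomFilterMayContain = bloomFilterMayContain;
    }

    boolean mayContain(long keyToken)
    {
        return bloomFilterMayContain.test(keyToken)                // cheap, probabilistic
               && firstToken <= keyToken && keyToken <= lastToken; // sanity check against file bounds
    }

    public static void main(String[] args)
    {
        BoundsCheckSketch sstable = new BoundsCheckSketch(-1000, 1000, token -> token % 2 == 0);
        System.out.println(sstable.mayContain(42));   // true: BF hit and within bounds
        System.out.println(sstable.mayContain(5000)); // false: outside the file bounds
    }
}
{code}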



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8987) cassandra-stress should support a more complex client model

2015-03-18 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367055#comment-14367055
 ] 

Benedict edited comment on CASSANDRA-8987 at 3/18/15 12:26 PM:
---

This is particularly problematic for latency modelling, although we can improve 
that with a simpler model. This suggestion is more complex, but it also helps 
users model their workloads, and helps us understand the latency 
characteristics of complex actions (in which latency issues can be compounded).


was (Author: benedict):
This is particularly problematic for latency modelling, although we can improve 
that with a simpler model, and in doing so help users modelling their workloads

 cassandra-stress should support a more complex client model
 ---

 Key: CASSANDRA-8987
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8987
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Benedict
Assignee: Benedict

 Orthogonal to CASSANDRA-8986, but still very important, is stress' simulation 
 of clients: currently we assume a fixed number of clients performing infinite 
 synchronous work, whereas, as I 
 [argued|https://groups.google.com/forum/#!topic/mechanical-sympathy/icNZJejUHfE%5B101-125%5D]
  on the mechanical sympathy mailing list, the correct model is to have a new 
 client arrival distribution and a distinct client model. Ideally, however, I 
 would like to expand this to support client models that can simulate 
 multi-table transactions, with both synchronous and asynchronous steps. So, 
 let's say we have three tables T1, T2, T3, we could say something like:
 A client performs:
 * a registration by insert to T1 (and/or perhaps lookup in T1), multiple 
 inserts to T2 and T2, in parallel
 * followed by a number of queries on T3
 Probably the best way to achieve this is with a tiered transaction 
 definition that can be composed, so that any single query or insert is a 
 transaction that itself may be sequentially or in parallel composed with 
 any other to compose a new macro transaction. This would then be combined 
 with a client arrival rate distribution to produce a total cluster workload.
 At least one remaining question is if we want the operations to be data 
 dependent, in which case this may well interact with CASSANDRA-8986, and 
 probably requires a little thought. [~jshook] [~jeromatron] [~mstump] 
 [~tupshin] thoughts on this?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7212) Allow to switch user within CQLSH session

2015-03-18 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-7212:
---
Reviewer: Tyler Hobbs

 Allow to switch user within CQLSH session
 -

 Key: CASSANDRA-7212
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7212
 Project: Cassandra
  Issue Type: Improvement
  Components: API
 Environment: [cqlsh 4.1.1 | Cassandra 2.0.7.31 | CQL spec 3.1.1 | 
 Thrift protocol 19.39.0]
Reporter: Jose Martinez Poblete
  Labels: cqlsh
 Attachments: 7212_1.patch


 Once a user is logged into CQLSH, it is not possible to switch to another 
 user without exiting and relaunching.
 This is a feature offered in Postgres and probably other databases:
 http://secure.encivasolutions.com/knowledgebase.php?action=displayarticleid=1126
  
 Perhaps this could be implemented on CQLSH as part of the USE directive:
 USE Keyspace [USER] [password] 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8986) Major cassandra-stress refactor

2015-03-18 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367216#comment-14367216
 ] 

Ryan McGuire edited comment on CASSANDRA-8986 at 3/18/15 2:47 PM:
--

Biggest thing I want is standardization on how we distribute stress across 
multiple clients (CASSANDRA-8469). Blindly running multiple clients is likely 
to not be compatible with validation - it probably needs a decent amount of 
coordination between clients. 
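A minimal illustration (my sketch, not CASSANDRA-8469's design) of the kind of deterministic split that would make multiple clients' work disjoint and reproducible enough for validation:

{code:java}
// Illustration only: if every client derives its work deterministically from
// (runSeed, clientId, clientCount), the clients cover disjoint, reproducible
// slices of the seed space, which is the minimum coordination validation needs.
public class ClientPartitioning
{
    // client i of n handles every n-th seed, offset by i
    static long seedForOperation(long runSeed, int clientId, int clientCount, long opIndex)
    {
        return runSeed + opIndex * clientCount + clientId;
    }

    public static void main(String[] args)
    {
        int clientCount = 3;
        for (int client = 0; client < clientCount; client++)
            System.out.println("client " + client + " first seeds: "
                               + seedForOperation(42, client, clientCount, 0) + ", "
                               + seedForOperation(42, client, clientCount, 1));
    }
}
{code}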


was (Author: enigmacurry):
Biggest thing I want is standardization on is how we distribute stress across 
multiple clients (CASSANDRA-8469). Blindly running multiple clients is likely 
to not be compatible with validation - it probably needs a decent amount of 
coordination between clients. 

 Major cassandra-stress refactor
 ---

 Key: CASSANDRA-8986
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8986
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Benedict
Assignee: Benedict

 We need a tool for both stressing _and_ validating more complex workloads 
 than stress currently supports. Stress needs a raft of changes, and I think 
 it would be easier to deliver many of these as a single major endeavour which 
 I think is justifiable given its audience. The rough behaviours I want stress 
 to support are:
 * Ability to know exactly how many rows it will produce, for any clustering 
 prefix, without generating those prefixes
 * Ability to generate an amount of data proportional to the amount it will 
 produce to the server (or consume from the server), rather than proportional 
 to the variation in clustering columns
 * Ability to reliably produce near identical behaviour each run
 * Ability to understand complex overlays of operation types (LWT, Delete, 
 Expiry, although perhaps not all implemented immediately, the framework for 
 supporting them easily)
 * Ability to (with minimal internal state) understand the complete cluster 
 state through overlays of multiple procedural generations
 * Ability to understand the in-flight state of in-progress operations (i.e. 
 if we're applying a delete, understand that the delete may have been applied, 
 and may not have been, for potentially multiple conflicting in flight 
 operations)
 I think the necessary changes to support this would give us the _functional_ 
 base to support all the functionality I can currently envisage stress 
 needing. Before embarking on this (which I may attempt very soon), it would 
 be helpful to get input from others as to features missing from stress that I 
 haven't covered here that we will certainly want in the future, so that they 
 can be factored in to the overall design and hopefully avoid another refactor 
 one year from now, as its complexity is scaling each time, and each time it 
 is a higher sunk cost. [~jbellis] [~iamaleksey] [~slebresne] [~tjake] 
 [~enigmacurry] [~aweisberg] [~blambov] [~jshook] ... and @everyone else :) 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8981) IndexSummaryManagerTest.testCompactionsRace intermittently timing out on trunk

2015-03-18 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367229#comment-14367229
 ] 

T Jake Luciani commented on CASSANDRA-8981:
---

+1

 IndexSummaryManagerTest.testCompactionsRace intermittently timing out on trunk
 --

 Key: CASSANDRA-8981
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8981
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Joshua McKenzie
Assignee: Benedict
Priority: Minor
 Fix For: 2.1.4

 Attachments: 8981.txt


 Keep running it repeatedly w/showoutput=yes in build.xml on junit and 
 you'll see it time out with:
 {noformat}
 [junit] WARN  17:02:56 Unable to cancel in-progress compactions for 
 StandardRace.  Perhaps there is an unusually large row in progress somewhere, 
 or the system is simply overloaded.
 [junit] WARN  17:02:56 Unable to cancel in-progress compactions for 
 StandardRace.  Perhaps there is an unusually large row in progress somewhere, 
 or the system is simply overloaded.
 [junit] WARN  17:02:57 Unable to cancel in-progress compactions for 
 StandardRace.  Perhaps there is an unusually large row in progress somewhere, 
 or the system is simply overloaded.
 [junit] WARN  17:02:57 Unable to cancel in-progress compactions for 
 StandardRace.  Perhaps there is an unusually large row in progress somewhere, 
 or the system is simply overloaded.
 {noformat}
 I originally thought this was a Windows specific problem (CASSANDRA-8962) but 
 can reproduce on linux by just repeatedly running the test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8981) IndexSummaryManagerTest.testCompactionsRace intermittently timing out on trunk

2015-03-18 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367229#comment-14367229
 ] 

T Jake Luciani edited comment on CASSANDRA-8981 at 3/18/15 2:55 PM:


+1 this goes in 2.1


was (Author: tjake):
+1

 IndexSummaryManagerTest.testCompactionsRace intermittently timing out on trunk
 --

 Key: CASSANDRA-8981
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8981
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Joshua McKenzie
Assignee: Benedict
Priority: Minor
 Fix For: 2.1.4

 Attachments: 8981.txt


 Keep running it repeatedly w/showoutput=yes in build.xml on junit and 
 you'll see it time out with:
 {noformat}
 [junit] WARN  17:02:56 Unable to cancel in-progress compactions for 
 StandardRace.  Perhaps there is an unusually large row in progress somewhere, 
 or the system is simply overloaded.
 [junit] WARN  17:02:56 Unable to cancel in-progress compactions for 
 StandardRace.  Perhaps there is an unusually large row in progress somewhere, 
 or the system is simply overloaded.
 [junit] WARN  17:02:57 Unable to cancel in-progress compactions for 
 StandardRace.  Perhaps there is an unusually large row in progress somewhere, 
 or the system is simply overloaded.
 [junit] WARN  17:02:57 Unable to cancel in-progress compactions for 
 StandardRace.  Perhaps there is an unusually large row in progress somewhere, 
 or the system is simply overloaded.
 {noformat}
 I originally thought this was a Windows specific problem (CASSANDRA-8962) but 
 can reproduce on linux by just repeatedly running the test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/3] cassandra git commit: ninja fix IndexSummaryManagerTest.testCompactionsRace

2015-03-18 Thread benedict
ninja fix IndexSummaryManagerTest.testCompactionsRace

patch by benedict; reviewed by tjake for CASSANDRA-8981


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/accbfa7a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/accbfa7a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/accbfa7a

Branch: refs/heads/trunk
Commit: accbfa7ac1843cdb47d5816772e8a7a3348097db
Parents: 521b363
Author: Benedict Elliott Smith bened...@apache.org
Authored: Wed Mar 18 15:04:10 2015 +
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Wed Mar 18 15:04:10 2015 +

--
 .../cassandra/io/sstable/IndexSummaryManagerTest.java  | 13 -
 1 file changed, 8 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/accbfa7a/test/unit/org/apache/cassandra/io/sstable/IndexSummaryManagerTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/io/sstable/IndexSummaryManagerTest.java 
b/test/unit/org/apache/cassandra/io/sstable/IndexSummaryManagerTest.java
index dec7705..877b6e6 100644
--- a/test/unit/org/apache/cassandra/io/sstable/IndexSummaryManagerTest.java
+++ b/test/unit/org/apache/cassandra/io/sstable/IndexSummaryManagerTest.java
@@ -508,8 +508,8 @@ public class IndexSummaryManagerTest extends SchemaLoader
String cfname = "StandardRace"; // index interval of 8, no key caching
 Keyspace keyspace = Keyspace.open(ksname);
 ColumnFamilyStore cfs = keyspace.getColumnFamilyStore(cfname);
-int numSSTables = 20;
-int numRows = 28;
+int numSSTables = 50;
+int numRows = 1 << 10;
 createSSTables(ksname, cfname, numSSTables, numRows);
 
List<SSTableReader> sstables = new ArrayList<>(cfs.getSSTables());
@@ -530,7 +530,8 @@ public class IndexSummaryManagerTest extends SchemaLoader
 try
 {
 
IndexSummaryManager.instance.redistributeSummaries();
-} catch (Throwable e)
+}
+catch (Throwable e)
 {
 failed.set(true);
 }
@@ -544,14 +545,14 @@ public class IndexSummaryManagerTest extends SchemaLoader
 
 try
 {
-Assert.assertFalse(failed.get());
+Assert.assertFalse(failed.getAndSet(true));
 
 for (SSTableReader sstable : sstables)
 {
 Assert.assertEquals(true, sstable.isMarkedCompacted());
 }
 
-Assert.assertEquals(20, sstables.size());
+Assert.assertEquals(numSSTables, sstables.size());
 
 try
 {
@@ -567,5 +568,7 @@ public class IndexSummaryManagerTest extends SchemaLoader
 tp.shutdownNow();
 CompactionManager.instance.finishCompactionsAndShutdown(10, 
TimeUnit.SECONDS);
 }
+
+cfs.truncateBlocking();
 }
 }



[jira] [Comment Edited] (CASSANDRA-8826) Distributed aggregates

2015-03-18 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367210#comment-14367210
 ] 

Jonathan Ellis edited comment on CASSANDRA-8826 at 3/18/15 2:45 PM:


Arguing that we should be competing with Vertica isn't going to win you many 
points here. :)

(Edit: in response to a now-deleted comment.)


was (Author: jbellis):
Arguing that we should be competing with Vertica isn't going to win you many 
points here. :)

 Distributed aggregates
 --

 Key: CASSANDRA-8826
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8826
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Robert Stupp
Priority: Minor

 Aggregations have been implemented in CASSANDRA-4914.
 All calculation is performed on the coordinator. This means that all data is 
 pulled by the coordinator and processed there.
 This ticket is about distributing aggregates to make them more efficient. 
 Some related tickets (esp. CASSANDRA-8099) are currently in 
 progress - we should wait for them to land before talking about 
 implementation.
 Another playground (not covered by this ticket) that might be related is 
 _distributed filtering_.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8826) Distributed aggregates

2015-03-18 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367210#comment-14367210
 ] 

Jonathan Ellis commented on CASSANDRA-8826:
---

Arguing that we should be competing with Vertica isn't going to win you many 
points here. :)

 Distributed aggregates
 --

 Key: CASSANDRA-8826
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8826
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Robert Stupp
Priority: Minor

 Aggregations have been implemented in CASSANDRA-4914.
 All calculation is performed on the coordinator. This means that all data is 
 pulled by the coordinator and processed there.
 This ticket is about distributing aggregates to make them more efficient. 
 Some related tickets (esp. CASSANDRA-8099) are currently in 
 progress - we should wait for them to land before talking about 
 implementation.
 Another playground (not covered by this ticket) that might be related is 
 _distributed filtering_.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8986) Major cassandra-stress refactor

2015-03-18 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367219#comment-14367219
 ] 

Benedict commented on CASSANDRA-8986:
-

CASSANDRA-8469 is another orthogonal issue, yeah. We really need to find a way 
for my evenings to not be the bottleneck on all of these stress features. 
There's months of development work to be done here.

 Major cassandra-stress refactor
 ---

 Key: CASSANDRA-8986
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8986
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Benedict
Assignee: Benedict

 We need a tool for both stressing _and_ validating more complex workloads 
 than stress currently supports. Stress needs a raft of changes, and I think 
 it would be easier to deliver many of these as a single major endeavour which 
 I think is justifiable given its audience. The rough behaviours I want stress 
 to support are:
 * Ability to know exactly how many rows it will produce, for any clustering 
 prefix, without generating those prefixes
 * Ability to generate an amount of data proportional to the amount it will 
 produce to the server (or consume from the server), rather than proportional 
 to the variation in clustering columns
 * Ability to reliably produce near identical behaviour each run
 * Ability to understand complex overlays of operation types (LWT, Delete, 
 Expiry, although perhaps not all implemented immediately, the framework for 
 supporting them easily)
 * Ability to (with minimal internal state) understand the complete cluster 
 state through overlays of multiple procedural generations
 * Ability to understand the in-flight state of in-progress operations (i.e. 
 if we're applying a delete, understand that the delete may have been applied, 
 and may not have been, for potentially multiple conflicting in flight 
 operations)
 I think the necessary changes to support this would give us the _functional_ 
 base to support all the functionality I can currently envisage stress 
 needing. Before embarking on this (which I may attempt very soon), it would 
 be helpful to get input from others as to features missing from stress that I 
 haven't covered here that we will certainly want in the future, so that they 
 can be factored in to the overall design and hopefully avoid another refactor 
 one year from now, as its complexity is scaling each time, and each time it 
 is a higher sunk cost. [~jbellis] [~iamaleksey] [~slebresne] [~tjake] 
 [~enigmacurry] [~aweisberg] [~blambov] [~jshook] ... and @everyone else :) 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8690) Ensure all error handling code that may itself throw an exception utilises addSuppressed

2015-03-18 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict resolved CASSANDRA-8690.
-
Resolution: Duplicate

 Ensure all error handling code that may itself throw an exception utilises 
 addSuppressed
 

 Key: CASSANDRA-8690
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8690
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 2.1.4






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8986) Major cassandra-stress refactor

2015-03-18 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367236#comment-14367236
 ] 

T Jake Luciani commented on CASSANDRA-8986:
---

bq.  We really need to find a way for my evenings to not be the bottleneck on 
all of these stress features. 

Or others can help you :)

 Major cassandra-stress refactor
 ---

 Key: CASSANDRA-8986
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8986
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Benedict
Assignee: Benedict

 We need a tool for both stressing _and_ validating more complex workloads 
 than stress currently supports. Stress needs a raft of changes, and I think 
 it would be easier to deliver many of these as a single major endeavour which 
 I think is justifiable given its audience. The rough behaviours I want stress 
 to support are:
 * Ability to know exactly how many rows it will produce, for any clustering 
 prefix, without generating those prefixes
 * Ability to generate an amount of data proportional to the amount it will 
 produce to the server (or consume from the server), rather than proportional 
 to the variation in clustering columns
 * Ability to reliably produce near identical behaviour each run
 * Ability to understand complex overlays of operation types (LWT, Delete, 
 Expiry, although perhaps not all implemented immediately, the framework for 
 supporting them easily)
 * Ability to (with minimal internal state) understand the complete cluster 
 state through overlays of multiple procedural generations
 * Ability to understand the in-flight state of in-progress operations (i.e. 
 if we're applying a delete, understand that the delete may have been applied, 
 and may not have been, for potentially multiple conflicting in flight 
 operations)
 I think the necessary changes to support this would give us the _functional_ 
 base to support all the functionality I can currently envisage stress 
 needing. Before embarking on this (which I may attempt very soon), it would 
 be helpful to get input from others as to features missing from stress that I 
 haven't covered here that we will certainly want in the future, so that they 
 can be factored in to the overall design and hopefully avoid another refactor 
 one year from now, as its complexity is scaling each time, and each time it 
 is a higher sunk cost. [~jbellis] [~iamaleksey] [~slebresne] [~tjake] 
 [~enigmacurry] [~aweisberg] [~blambov] [~jshook] ... and @everyone else :) 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8660) Cassandra makes Java 1.7+ SEGFAULT

2015-03-18 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict resolved CASSANDRA-8660.
-
Resolution: Cannot Reproduce

Closing as it's been a while, there's been no movement and we could not 
reproduce. Please do file again if you find it recurring.

 Cassandra makes Java 1.7+ SEGFAULT
 --

 Key: CASSANDRA-8660
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8660
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Linux Redhat 6.5
Reporter: Pierre Belanger
Assignee: Benedict
Priority: Critical
 Attachments: 8660, hs_err_pid-patch1.log, hs_err_pid12004.log


 We upgraded a 3-node cluster from Cassandra 2.0.11-5f54285e9e to 
 2.1-7e6d9eb842.
 With Cassandra 2.1, within ~10 min Cassandra manages to make Java 
 1.7.0_72-b14 segfault. We also tested with Java 1.7.0_60-b19 and 
 1.8.0_25-b17 and saw the same behavior.
 We were able to use the same data, downgrade back to 2.0, and things ran for ~3-4 
 days with no hiccup. Upgrading back to Cassandra 2.1, Java segfaults 
 again.
 See attached hs_err file.
 Pierre



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/3] cassandra git commit: ninja fix IndexSummaryManagerTest.testCompactionsRace

2015-03-18 Thread benedict
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 521b36311 -> accbfa7ac
  refs/heads/trunk abd528a0f -> 07cad8e8d


ninja fix IndexSummaryManagerTest.testCompactionsRace

patch by benedict; reviewed by tjake for CASSANDRA-8981


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/accbfa7a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/accbfa7a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/accbfa7a

Branch: refs/heads/cassandra-2.1
Commit: accbfa7ac1843cdb47d5816772e8a7a3348097db
Parents: 521b363
Author: Benedict Elliott Smith bened...@apache.org
Authored: Wed Mar 18 15:04:10 2015 +
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Wed Mar 18 15:04:10 2015 +

--
 .../cassandra/io/sstable/IndexSummaryManagerTest.java  | 13 -
 1 file changed, 8 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/accbfa7a/test/unit/org/apache/cassandra/io/sstable/IndexSummaryManagerTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/io/sstable/IndexSummaryManagerTest.java 
b/test/unit/org/apache/cassandra/io/sstable/IndexSummaryManagerTest.java
index dec7705..877b6e6 100644
--- a/test/unit/org/apache/cassandra/io/sstable/IndexSummaryManagerTest.java
+++ b/test/unit/org/apache/cassandra/io/sstable/IndexSummaryManagerTest.java
@@ -508,8 +508,8 @@ public class IndexSummaryManagerTest extends SchemaLoader
String cfname = "StandardRace"; // index interval of 8, no key caching
 Keyspace keyspace = Keyspace.open(ksname);
 ColumnFamilyStore cfs = keyspace.getColumnFamilyStore(cfname);
-int numSSTables = 20;
-int numRows = 28;
+int numSSTables = 50;
+int numRows = 1 << 10;
 createSSTables(ksname, cfname, numSSTables, numRows);
 
List<SSTableReader> sstables = new ArrayList<>(cfs.getSSTables());
@@ -530,7 +530,8 @@ public class IndexSummaryManagerTest extends SchemaLoader
 try
 {
 
IndexSummaryManager.instance.redistributeSummaries();
-} catch (Throwable e)
+}
+catch (Throwable e)
 {
 failed.set(true);
 }
@@ -544,14 +545,14 @@ public class IndexSummaryManagerTest extends SchemaLoader
 
 try
 {
-Assert.assertFalse(failed.get());
+Assert.assertFalse(failed.getAndSet(true));
 
 for (SSTableReader sstable : sstables)
 {
 Assert.assertEquals(true, sstable.isMarkedCompacted());
 }
 
-Assert.assertEquals(20, sstables.size());
+Assert.assertEquals(numSSTables, sstables.size());
 
 try
 {
@@ -567,5 +568,7 @@ public class IndexSummaryManagerTest extends SchemaLoader
 tp.shutdownNow();
 CompactionManager.instance.finishCompactionsAndShutdown(10, 
TimeUnit.SECONDS);
 }
+
+cfs.truncateBlocking();
 }
 }



[3/3] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-03-18 Thread benedict
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/07cad8e8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/07cad8e8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/07cad8e8

Branch: refs/heads/trunk
Commit: 07cad8e8d4f7896cd209e50dc9b8e176a2b433ff
Parents: abd528a accbfa7
Author: Benedict Elliott Smith bened...@apache.org
Authored: Wed Mar 18 15:04:42 2015 +
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Wed Mar 18 15:04:42 2015 +

--
 .../cassandra/io/sstable/IndexSummaryManagerTest.java  | 13 -
 1 file changed, 8 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/07cad8e8/test/unit/org/apache/cassandra/io/sstable/IndexSummaryManagerTest.java
--
diff --cc test/unit/org/apache/cassandra/io/sstable/IndexSummaryManagerTest.java
index 87486ca,877b6e6..7a00983
--- a/test/unit/org/apache/cassandra/io/sstable/IndexSummaryManagerTest.java
+++ b/test/unit/org/apache/cassandra/io/sstable/IndexSummaryManagerTest.java
@@@ -533,12 -504,12 +533,12 @@@ public class IndexSummaryManagerTes
  @Test
  public void testCompactionRace() throws InterruptedException, 
ExecutionException
  {
 -String ksname = "Keyspace1";
 -String cfname = "StandardRace"; // index interval of 8, no key caching
 +String ksname = KEYSPACE1;
 +String cfname = CF_STANDARDRACE; // index interval of 8, no key 
caching
  Keyspace keyspace = Keyspace.open(ksname);
  ColumnFamilyStore cfs = keyspace.getColumnFamilyStore(cfname);
- int numSSTables = 20;
- int numRows = 28;
+ int numSSTables = 50;
+ int numRows = 1 << 10;
  createSSTables(ksname, cfname, numSSTables, numRows);
  
 List<SSTableReader> sstables = new ArrayList<>(cfs.getSSTables());



[jira] [Commented] (CASSANDRA-8696) nodetool repair on cassandra 2.1.2 keyspaces return java.lang.RuntimeException: Could not create snapshot

2015-03-18 Thread Andrew Vant (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367205#comment-14367205
 ] 

Andrew Vant commented on CASSANDRA-8696:


I'm also having this issue on 2.1.3. Although I'm not sure if it's related, I 
can't do anything with consistency:all without timeouts, either.

 nodetool repair on cassandra 2.1.2 keyspaces return 
 java.lang.RuntimeException: Could not create snapshot
 -

 Key: CASSANDRA-8696
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8696
 Project: Cassandra
  Issue Type: Bug
Reporter: Jeff Liu

 When trying to run nodetool repair -pr on a Cassandra node (2.1.2), Cassandra 
 throws Java exceptions: cannot create snapshot. 
 The error log from system.log:
 {noformat}
 INFO  [STREAM-IN-/10.97.9.110] 2015-01-28 02:07:28,815 
 StreamResultFuture.java:166 - [Stream #692c1450-a692-11e4-9973-070e938df227 
 ID#0] Prepare completed. Receiving 2 files(221187 bytes), sending 5 
 files(632105 bytes)
 INFO  [STREAM-IN-/10.97.9.110] 2015-01-28 02:07:29,046 
 StreamResultFuture.java:180 - [Stream #692c1450-a692-11e4-9973-070e938df227] 
 Session with /10.97.9.110 is complete
 INFO  [STREAM-IN-/10.97.9.110] 2015-01-28 02:07:29,046 
 StreamResultFuture.java:212 - [Stream #692c1450-a692-11e4-9973-070e938df227] 
 All sessions completed
 INFO  [STREAM-IN-/10.97.9.110] 2015-01-28 02:07:29,047 
 StreamingRepairTask.java:96 - [repair #685e3d00-a692-11e4-9973-070e938df227] 
 streaming task succeed, returning response to /10.98.194.68
 INFO  [RepairJobTask:1] 2015-01-28 02:07:29,065 StreamResultFuture.java:86 - 
 [Stream #692c6270-a692-11e4-9973-070e938df227] Executing streaming plan for 
 Repair
 INFO  [StreamConnectionEstablisher:4] 2015-01-28 02:07:29,065 
 StreamSession.java:213 - [Stream #692c6270-a692-11e4-9973-070e938df227] 
 Starting streaming to /10.66.187.201
 INFO  [StreamConnectionEstablisher:4] 2015-01-28 02:07:29,070 
 StreamCoordinator.java:209 - [Stream #692c6270-a692-11e4-9973-070e938df227, 
 ID#0] Beginning stream session with /10.66.187.201
 INFO  [STREAM-IN-/10.66.187.201] 2015-01-28 02:07:29,465 
 StreamResultFuture.java:166 - [Stream #692c6270-a692-11e4-9973-070e938df227 
 ID#0] Prepare completed. Receiving 5 files(627994 bytes), sending 5 
 files(632105 bytes)
 INFO  [StreamReceiveTask:22] 2015-01-28 02:07:31,971 
 StreamResultFuture.java:180 - [Stream #692c6270-a692-11e4-9973-070e938df227] 
 Session with /10.66.187.201 is complete
 INFO  [StreamReceiveTask:22] 2015-01-28 02:07:31,972 
 StreamResultFuture.java:212 - [Stream #692c6270-a692-11e4-9973-070e938df227] 
 All sessions completed
 INFO  [StreamReceiveTask:22] 2015-01-28 02:07:31,972 
 StreamingRepairTask.java:96 - [repair #685e3d00-a692-11e4-9973-070e938df227] 
 streaming task succeed, returning response to /10.98.194.68
 ERROR [RepairJobTask:1] 2015-01-28 02:07:39,444 RepairJob.java:127 - Error 
 occurred during snapshot phase
 java.lang.RuntimeException: Could not create snapshot at /10.97.9.110
 at 
 org.apache.cassandra.repair.SnapshotTask$SnapshotCallback.onFailure(SnapshotTask.java:77)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.net.MessagingService$5$1.run(MessagingService.java:347) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_45]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_45]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  [na:1.7.0_45]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_45]
 at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
 INFO  [AntiEntropySessions:6] 2015-01-28 02:07:39,445 RepairSession.java:260 
 - [repair #6f85e740-a692-11e4-9973-070e938df227] new session: will sync 
 /10.98.194.68, /10.66.187.201, /10.226.218.135 on range 
 (12817179804668051873746972069086
 2638799,12863540308359254031520865977436165] for events.[bigint0text, 
 bigint0boolean, bigint0int, dataset_catalog, column_categories, 
 bigint0double, bigint0bigint]
 ERROR [AntiEntropySessions:5] 2015-01-28 02:07:39,445 RepairSession.java:303 
 - [repair #685e3d00-a692-11e4-9973-070e938df227] session completed with the 
 following error
 java.io.IOException: Failed during snapshot creation.
 at 
 org.apache.cassandra.repair.RepairSession.failedSnapshot(RepairSession.java:344)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.repair.RepairJob$2.onFailure(RepairJob.java:128) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at com.google.common.util.concurrent.Futures$4.run(Futures.java:1172) 
 ~[guava-16.0.jar:na]
 at 
 

[jira] [Updated] (CASSANDRA-8981) IndexSummaryManagerTest.testCompactionsRace intermittently timing out on trunk

2015-03-18 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-8981:
--
Fix Version/s: (was: 3.0)
   2.1.4

 IndexSummaryManagerTest.testCompactionsRace intermittently timing out on trunk
 --

 Key: CASSANDRA-8981
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8981
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Joshua McKenzie
Assignee: Benedict
Priority: Minor
 Fix For: 2.1.4

 Attachments: 8981.txt


 Keep running it repeatedly w/showoutput=yes in build.xml on junit and 
 you'll see it time out with:
 {noformat}
 [junit] WARN  17:02:56 Unable to cancel in-progress compactions for 
 StandardRace.  Perhaps there is an unusually large row in progress somewhere, 
 or the system is simply overloaded.
 [junit] WARN  17:02:56 Unable to cancel in-progress compactions for 
 StandardRace.  Perhaps there is an unusually large row in progress somewhere, 
 or the system is simply overloaded.
 [junit] WARN  17:02:57 Unable to cancel in-progress compactions for 
 StandardRace.  Perhaps there is an unusually large row in progress somewhere, 
 or the system is simply overloaded.
 [junit] WARN  17:02:57 Unable to cancel in-progress compactions for 
 StandardRace.  Perhaps there is an unusually large row in progress somewhere, 
 or the system is simply overloaded.
 {noformat}
 I originally thought this was a Windows specific problem (CASSANDRA-8962) but 
 can reproduce on linux by just repeatedly running the test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8986) Major cassandra-stress refactor

2015-03-18 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367238#comment-14367238
 ] 

Benedict commented on CASSANDRA-8986:
-

That's my hope, yes :)

 Major cassandra-stress refactor
 ---

 Key: CASSANDRA-8986
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8986
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Benedict
Assignee: Benedict

 We need a tool for both stressing _and_ validating more complex workloads 
 than stress currently supports. Stress needs a raft of changes, and I think 
 it would be easier to deliver many of these as a single major endeavour which 
 I think is justifiable given its audience. The rough behaviours I want stress 
 to support are:
 * Ability to know exactly how many rows it will produce, for any clustering 
 prefix, without generating those prefixes
 * Ability to generate an amount of data proportional to the amount it will 
 produce to the server (or consume from the server), rather than proportional 
 to the variation in clustering columns
 * Ability to reliably produce near identical behaviour each run
 * Ability to understand complex overlays of operation types (LWT, Delete, 
 Expiry, although perhaps not all implemented immediately, the framework for 
 supporting them easily)
 * Ability to (with minimal internal state) understand the complete cluster 
 state through overlays of multiple procedural generations
 * Ability to understand the in-flight state of in-progress operations (i.e. 
 if we're applying a delete, understand that the delete may have been applied, 
 and may not have been, for potentially multiple conflicting in flight 
 operations)
 I think the necessary changes to support this would give us the _functional_ 
 base to support all the functionality I can currently envisage stress 
 needing. Before embarking on this (which I may attempt very soon), it would 
 be helpful to get input from others as to features missing from stress that I 
 haven't covered here that we will certainly want in the future, so that they 
 can be factored in to the overall design and hopefully avoid another refactor 
 one year from now, as its complexity is scaling each time, and each time it 
 is a higher sunk cost. [~jbellis] [~iamaleksey] [~slebresne] [~tjake] 
 [~enigmacurry] [~aweisberg] [~blambov] [~jshook] ... and @everyone else :) 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8981) IndexSummaryManagerTest.testCompactionsRace intermittently timing out on trunk

2015-03-18 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-8981:
--
Reviewer: T Jake Luciani

 IndexSummaryManagerTest.testCompactionsRace intermittently timing out on trunk
 --

 Key: CASSANDRA-8981
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8981
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Joshua McKenzie
Assignee: Benedict
Priority: Minor
 Fix For: 3.0

 Attachments: 8981.txt


 Keep running it repeatedly w/showoutput=yes in build.xml on junit and 
 you'll see it time out with:
 {noformat}
 [junit] WARN  17:02:56 Unable to cancel in-progress compactions for 
 StandardRace.  Perhaps there is an unusually large row in progress somewhere, 
 or the system is simply overloaded.
 [junit] WARN  17:02:56 Unable to cancel in-progress compactions for 
 StandardRace.  Perhaps there is an unusually large row in progress somewhere, 
 or the system is simply overloaded.
 [junit] WARN  17:02:57 Unable to cancel in-progress compactions for 
 StandardRace.  Perhaps there is an unusually large row in progress somewhere, 
 or the system is simply overloaded.
 [junit] WARN  17:02:57 Unable to cancel in-progress compactions for 
 StandardRace.  Perhaps there is an unusually large row in progress somewhere, 
 or the system is simply overloaded.
 {noformat}
 I originally thought this was a Windows specific problem (CASSANDRA-8962) but 
 can reproduce on linux by just repeatedly running the test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8986) Major cassandra-stress refactor

2015-03-18 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367216#comment-14367216
 ] 

Ryan McGuire commented on CASSANDRA-8986:
-

The biggest thing I want is standardization on how we distribute stress across 
multiple clients (CASSANDRA-8469). Blindly running multiple clients is unlikely 
to be compatible with validation - it probably needs a decent amount of 
coordination between clients. 

 Major cassandra-stress refactor
 ---

 Key: CASSANDRA-8986
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8986
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Benedict
Assignee: Benedict

 We need a tool for both stressing _and_ validating more complex workloads 
 than stress currently supports. Stress needs a raft of changes, and I think 
 it would be easier to deliver many of these as a single major endeavour which 
 I think is justifiable given its audience. The rough behaviours I want stress 
 to support are:
 * Ability to know exactly how many rows it will produce, for any clustering 
 prefix, without generating those prefixes
 * Ability to generate an amount of data proportional to the amount it will 
 produce to the server (or consume from the server), rather than proportional 
 to the variation in clustering columns
 * Ability to reliably produce near identical behaviour each run
 * Ability to understand complex overlays of operation types (LWT, Delete, 
 Expiry, although perhaps not all implemented immediately, the framework for 
 supporting them easily)
 * Ability to (with minimal internal state) understand the complete cluster 
 state through overlays of multiple procedural generations
 * Ability to understand the in-flight state of in-progress operations (i.e. 
 if we're applying a delete, understand that the delete may have been applied, 
 and may not have been, for potentially multiple conflicting in flight 
 operations)
 I think the necessary changes to support this would give us the _functional_ 
 base to support all the functionality I can currently envisage stress 
 needing. Before embarking on this (which I may attempt very soon), it would 
 be helpful to get input from others as to features missing from stress that I 
 haven't covered here that we will certainly want in the future, so that they 
 can be factored in to the overall design and hopefully avoid another refactor 
 one year from now, as its complexity is scaling each time, and each time it 
 is a higher sunk cost. [~jbellis] [~iamaleksey] [~slebresne] [~tjake] 
 [~enigmacurry] [~aweisberg] [~blambov] [~jshook] ... and @everyone else :) 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8826) Distributed aggregates

2015-03-18 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367281#comment-14367281
 ] 

Sylvain Lebresne commented on CASSANDRA-8826:
-

Maybe I can be a bit more precise, because I'm not even sure we fundamentally 
disagree. If you're talking about optimizing aggregates over a single 
partition, even a reasonably large one, then I'm fine with that in principle.  
But to me, distributed aggregates refers to distributing aggregates over 
large quantities of data across many nodes _à la_ map-reduce. That's not 
particularly real time in my book btw, and I maintain that imo that's exactly 
what Spark/Hadoop are about and there is no point in reinventing that wheel.

Now, if we are talking about single partition aggregates, then the only 
relation with this ticket I can see is to push the aggregate to replicas to 
save cross-node traffic. We know it's not that easy for CL > CL.ONE, and 
for CL.ONE, I think it's fine to assume that clients do token aware routing, at 
which point we already do not transfer data over the wire (and CASSANDRA-7168 
will indeed help improve higher CL quite a bit, even without any change to the 
current implementation). And I'm just not sure it's worth putting too much 
effort short term into optimizing the CL.ONE-but-no-token-aware-routing case.


 Distributed aggregates
 --

 Key: CASSANDRA-8826
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8826
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Robert Stupp
Priority: Minor

 Aggregations have been implemented in CASSANDRA-4914.
 All calculation is performed on the coordinator. This means that all data is 
 pulled by the coordinator and processed there.
 This ticket is about distributing aggregates to make them more efficient. 
 Some related tickets (esp. CASSANDRA-8099) are currently in progress - we 
 should wait for them to land before talking about implementation.
 Another playground (not covered by this ticket) that might be related is 
 _distributed filtering_.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (CASSANDRA-8826) Distributed aggregates

2015-03-18 Thread Cristian O (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cristian O updated CASSANDRA-8826:
--
Comment: was deleted

(was: Hi Benedict,

Very nicely put :) Let's see if reason can eventually prevail...

I'm not sure if you're aware of Vertica. It's the pioneering columnar
store, originally created by M. Stonebraker
(highly recommended to look into this guy and find his papers if you're not
aware of him)

Vertica is probably one of the best available analytics databases; however,
it's commercial and quite expensive.

There's a paper on Vertica describing its architecture here:

http://www.vldb.org/pvldb/vol7/p1259-gupta.pdf

You'll see that its distribution model and even parts of the storage
engine design are remarkably similar to Cassandra's. This is not accidental,
as they are both shared-nothing architectures.

Cassandra is quite well suited to implement some of the main analytical use
cases with probably minimal effort, and there would
be a lot of interest in this market if it succeeds.

As I mentioned yesterday, a very interesting use case is to do simple
aggregations over large amounts of data points (mainly time series) very
fast (under 5 secs) for a large number of users (many concurrent requests).

Spark/MR do not have the right architecture for this; in the OSS world a
direct competitor would be Impala (almost shared nothing), and perhaps HBase,
which I hear is trying to position itself towards this.


Cheers,
Cristian

)

 Distributed aggregates
 --

 Key: CASSANDRA-8826
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8826
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Robert Stupp
Priority: Minor

 Aggregations have been implemented in CASSANDRA-4914.
 All calculation is performed on the coordinator. This means that all data is 
 pulled by the coordinator and processed there.
 This ticket is about distributing aggregates to make them more efficient. 
 Some related tickets (esp. CASSANDRA-8099) are currently in progress - we 
 should wait for them to land before talking about implementation.
 Another playground (not covered by this ticket) that might be related is 
 _distributed filtering_.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8988) Optimise IntervalTree

2015-03-18 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-8988:

Attachment: 8988.txt

Attaching a trivial patch that more than halves the expected number of comparisons.

 Optimise IntervalTree
 -

 Key: CASSANDRA-8988
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8988
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Trivial
 Fix For: 2.1.4

 Attachments: 8988.txt


 We perform a lot of unnecessary comparisons in 
 IntervalTree.IntervalNode.searchInternal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8826) Distributed aggregates

2015-03-18 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367340#comment-14367340
 ] 

Benedict commented on CASSANDRA-8826:
-

I don't think we _fundamentally_ disagree. I guess I should outline what I am 
thinking of.

Initially, for single partition queries, but expanding to multiple partition 
queries, I would like our abstraction for aggregations to support partial 
results (continuations, effectively) that can be shipped around along with 
digests, and composed on the coordinator (or repaired). A different result 
would be returned for the repaired and the unrepaired portions from each owner, 
and combined on the coordinator. This permits us to answer these queries 
quickly in the common case where there is agreement, permits quick repair, and 
allows us to expand support to aggregations over multiple partitions without 
really tremendous difficulty, by resolving each partition independently into its 
own partial computation, which is then combined with each of the other partial 
computations.

I don't pretend this is _simple_, but nor do I think it is prohibitively 
complex nor out of scope. It seems a good solution to all of the above 
problems, and permits us to easily push the construction of each _partial_ 
computation much lower into the stack when we have the time, so that this (the 
main body of work) can be done much more efficiently, and with network traffic 
proportional to the size of the result, not the domain.

The same abstraction can be used to implement sampled or exact, single- or 
multi-partition aggregations. Most crucially, it supports them with repaired data, 
which we cannot do with any of our map/reduce connectors, and supports them 
in real time.
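
To make the partial-result (continuation) idea above concrete, a minimal sketch 
of a mergeable partial state might look like the following. None of these names 
are Cassandra classes; it is just an assumed shape for a partial computation 
that replicas (or partitions) could produce and the coordinator could compose 
in any order.
{code}
import java.util.Arrays;
import java.util.List;

// Sketch only: a mergeable "partial" aggregate state. Each replica (or each
// partition) produces one of these; the coordinator composes them, in any
// order, into the final answer without ever seeing the raw rows.
public class PartialAggregateSketch
{
    static final class AvgPartial
    {
        final long sum;
        final long count;

        AvgPartial(long sum, long count) { this.sum = sum; this.count = count; }

        // Merging is associative and commutative, so repaired/unrepaired
        // pieces or per-partition pieces can be combined in any order.
        AvgPartial merge(AvgPartial other)
        {
            return new AvgPartial(sum + other.sum, count + other.count);
        }

        double finish() { return count == 0 ? 0.0 : (double) sum / count; }
    }

    public static void main(String[] args)
    {
        List<AvgPartial> partials = Arrays.asList(
                new AvgPartial(100, 10),  // e.g. replica 1, repaired portion
                new AvgPartial(7, 1),     // e.g. replica 1, unrepaired portion
                new AvgPartial(107, 11)); // e.g. replica 2

        AvgPartial total = partials.stream()
                                   .reduce(new AvgPartial(0, 0), AvgPartial::merge);
        System.out.println("avg = " + total.finish());
    }
}
{code}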



 Distributed aggregates
 --

 Key: CASSANDRA-8826
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8826
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Robert Stupp
Priority: Minor

 Aggregations have been implemented in CASSANDRA-4914.
 All calculation is performed on the coordinator. This means, that all data is 
 pulled by the coordinator and processed there.
 This ticket's about to distribute aggregates to make them more efficient. 
 Currently some related tickets (esp. CASSANDRA-8099) are currently in 
 progress - we should wait for them to land before talking about 
 implementation.
 Another playgrounds (not covered by this ticket), that might be related is 
 about _distributed filtering_.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8985) java.lang.AssertionError: Added column does not sort as the last column

2015-03-18 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8985:
---
Fix Version/s: 2.0.14

 java.lang.AssertionError: Added column does not sort as the last column
 ---

 Key: CASSANDRA-8985
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8985
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.0.13
 OracleJDK1.7
 Debian 7.8
Reporter: Maxim
 Fix For: 2.0.14


 After upgrade Cassandra from 2.0.12 to 2.0.13 I begin to receive an error:
 {code}ERROR [ReadStage:1823] 2015-03-18 09:03:27,091 CassandraDaemon.java 
 (line 199) Exception in thread Thread[ReadStage:1823,5,main]
 java.lang.AssertionError: Added column does not sort as the last column
   at 
 org.apache.cassandra.db.ArrayBackedSortedColumns.addColumn(ArrayBackedSortedColumns.java:116)
   at org.apache.cassandra.db.ColumnFamily.addColumn(ColumnFamily.java:121)
   at 
 org.apache.cassandra.db.ColumnFamily.addIfRelevant(ColumnFamily.java:115)
   at 
 org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:211)
   at 
 org.apache.cassandra.db.filter.ExtendedFilter$WithClauses.prune(ExtendedFilter.java:290)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1792)
   at 
 org.apache.cassandra.db.index.keys.KeysSearcher.search(KeysSearcher.java:54)
   at 
 org.apache.cassandra.db.index.SecondaryIndexManager.search(SecondaryIndexManager.java:551)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.search(ColumnFamilyStore.java:1755)
   at 
 org.apache.cassandra.db.RangeSliceCommand.executeLocally(RangeSliceCommand.java:135)
   at 
 org.apache.cassandra.service.RangeSliceVerbHandler.doVerb(RangeSliceVerbHandler.java:39)
   at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:62)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:745){code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8094) Heavy writes in RangeSlice read requests

2015-03-18 Thread Minh Do (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367387#comment-14367387
 ] 

Minh Do commented on CASSANDRA-8094:


Will have some time in the next couple of weeks to check it in. Thanks.

 Heavy writes in RangeSlice read  requests 
 --

 Key: CASSANDRA-8094
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8094
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Minh Do
Assignee: Minh Do
 Fix For: 2.0.14


 RangeSlice requests always do a scheduled read repair when coordinators try 
 to resolve replicas' responses, whether read_repair_chance is set or not.
 Because of this, in low-write, high-read clusters there is a very high volume 
 of write requests going on between nodes.  
 We should have an option to turn this off, and it can be different from 
 read_repair_chance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8979) MerkleTree mismatch for deleted and non-existing rows

2015-03-18 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367434#comment-14367434
 ] 

Yuki Morishita commented on CASSANDRA-8979:
---

For 2.0 and PrecompactedRow, the patch works as described.
Though I think we need to do the same for LazilyCompactedRow as well. (For 2.1+ 
we don't have PrecompactedRow anymore.)

LazilyCompactedRow does not have a null ColumnFamily even when all its cells are 
removed. So when we have wide rows, we still get a hash mismatch between empty 
rows and removed rows.


 MerkleTree mismatch for deleted and non-existing rows
 -

 Key: CASSANDRA-8979
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8979
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Stefan Podkowinski
Assignee: Yuki Morishita
 Attachments: cassandra-2.0-8979-test.txt, 
 cassandra-2.0-8979-validator_patch.txt


 Validation compaction will currently create different hashes for rows that 
 have been deleted compared to nodes that have not seen the rows at all or 
 have already compacted them away. 
 In case this sounds familiar to you, see CASSANDRA-4905 which was supposed to 
 prevent hashing of expired tombstones. This still seems to be in place, but 
 does not address the issue completely. Or there was a change in 2.0 that 
 rendered the patch ineffective. 
 The problem is that rowHash() in the Validator will return a new hash in any 
 case, whether the PrecompactedRow did actually update the digest or not. This 
 will lead to the case that a purged, PrecompactedRow will not change the 
 digest, but we end up with a different tree compared to not having rowHash 
 called at all (such as in case the row already doesn't exist).
 As an implication, repair jobs will constantly detect mismatches between 
 older sstables containing purgeable rows and nodes that have already compacted 
 these rows. After transferring the reported ranges, the newly created sstables 
 will immediately get deleted again during the following compaction. This will 
 happen for each repair run over again until the sstable with the purgeable row 
 finally gets compacted. 
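
 A standalone sketch of the behaviour being discussed, i.e. only folding a row 
 into the validation hash when it actually contributed digest bytes so that a 
 fully purged row hashes the same as an absent row, might look like the 
 following. This is an illustration under assumed names, not the attached patch.
 {code}
 import java.security.MessageDigest;
 import java.security.NoSuchAlgorithmException;
 import java.util.Map;
 import java.util.TreeMap;

 // Sketch only: fold a row into the validation tree only if it actually
 // contributed bytes to the digest, so that a fully purged row hashes
 // identically to a row that was never there.
 public class ValidatorSketch
 {
     // Stand-in for a MerkleTree range: key -> row hash.
     private final Map<String, byte[]> tree = new TreeMap<>();

     // cells is empty when every cell/tombstone was purged during validation.
     public void rowHash(String key, byte[][] cells) throws NoSuchAlgorithmException
     {
         MessageDigest digest = MessageDigest.getInstance("MD5");
         long bytesWritten = 0;
         for (byte[] cell : cells)
         {
             digest.update(cell);
             bytesWritten += cell.length;
         }
         if (bytesWritten == 0)
             return; // nothing survived purging: treat exactly like an absent row
         tree.put(key, digest.digest());
     }

     public static void main(String[] args) throws NoSuchAlgorithmException
     {
         ValidatorSketch withPurgedRow = new ValidatorSketch();
         withPurgedRow.rowHash("k1", new byte[][]{ "v1".getBytes() });
         withPurgedRow.rowHash("k2", new byte[0][]); // fully purged row

         ValidatorSketch withoutRow = new ValidatorSketch();
         withoutRow.rowHash("k1", new byte[][]{ "v1".getBytes() });

         // Both trees contain only k1, so they compare equal.
         System.out.println(withPurgedRow.tree.keySet() + " vs " + withoutRow.tree.keySet());
     }
 }
 {code}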



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8985) java.lang.AssertionError: Added column does not sort as the last column

2015-03-18 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367342#comment-14367342
 ] 

Philip Thompson commented on CASSANDRA-8985:


Do you know what query you are performing that causes this error server side? 
Could you paste the output of {{DESCRIBE}} for your schema? Feel free to 
obfuscate the column names, but that will help us narrow down the bug.

 java.lang.AssertionError: Added column does not sort as the last column
 ---

 Key: CASSANDRA-8985
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8985
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.0.13
 OracleJDK1.7
 Debian 7.8
Reporter: Maxim
 Fix For: 2.0.14


 After upgrade Cassandra from 2.0.12 to 2.0.13 I begin to receive an error:
 {code}ERROR [ReadStage:1823] 2015-03-18 09:03:27,091 CassandraDaemon.java 
 (line 199) Exception in thread Thread[ReadStage:1823,5,main]
 java.lang.AssertionError: Added column does not sort as the last column
   at 
 org.apache.cassandra.db.ArrayBackedSortedColumns.addColumn(ArrayBackedSortedColumns.java:116)
   at org.apache.cassandra.db.ColumnFamily.addColumn(ColumnFamily.java:121)
   at 
 org.apache.cassandra.db.ColumnFamily.addIfRelevant(ColumnFamily.java:115)
   at 
 org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:211)
   at 
 org.apache.cassandra.db.filter.ExtendedFilter$WithClauses.prune(ExtendedFilter.java:290)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1792)
   at 
 org.apache.cassandra.db.index.keys.KeysSearcher.search(KeysSearcher.java:54)
   at 
 org.apache.cassandra.db.index.SecondaryIndexManager.search(SecondaryIndexManager.java:551)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.search(ColumnFamilyStore.java:1755)
   at 
 org.apache.cassandra.db.RangeSliceCommand.executeLocally(RangeSliceCommand.java:135)
   at 
 org.apache.cassandra.service.RangeSliceVerbHandler.doVerb(RangeSliceVerbHandler.java:39)
   at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:62)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:745){code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8985) java.lang.AssertionError: Added column does not sort as the last column

2015-03-18 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8985:
---
Description: 
After upgrade Cassandra from 2.0.12 to 2.0.13 I begin to receive an error:
{code}ERROR [ReadStage:1823] 2015-03-18 09:03:27,091 CassandraDaemon.java (line 
199) Exception in thread Thread[ReadStage:1823,5,main]
java.lang.AssertionError: Added column does not sort as the last column
at 
org.apache.cassandra.db.ArrayBackedSortedColumns.addColumn(ArrayBackedSortedColumns.java:116)
at org.apache.cassandra.db.ColumnFamily.addColumn(ColumnFamily.java:121)
at 
org.apache.cassandra.db.ColumnFamily.addIfRelevant(ColumnFamily.java:115)
at 
org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:211)
at 
org.apache.cassandra.db.filter.ExtendedFilter$WithClauses.prune(ExtendedFilter.java:290)
at 
org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1792)
at 
org.apache.cassandra.db.index.keys.KeysSearcher.search(KeysSearcher.java:54)
at 
org.apache.cassandra.db.index.SecondaryIndexManager.search(SecondaryIndexManager.java:551)
at 
org.apache.cassandra.db.ColumnFamilyStore.search(ColumnFamilyStore.java:1755)
at 
org.apache.cassandra.db.RangeSliceCommand.executeLocally(RangeSliceCommand.java:135)
at 
org.apache.cassandra.service.RangeSliceVerbHandler.doVerb(RangeSliceVerbHandler.java:39)
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:62)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745){code}


  was:
After upgrade Cassandra from 2.0.12 to 2.0.13 I begin to receive an error:
ERROR [ReadStage:1823] 2015-03-18 09:03:27,091 CassandraDaemon.java (line 199) 
Exception in thread Thread[ReadStage:1823,5,main]
java.lang.AssertionError: Added column does not sort as the last column
at 
org.apache.cassandra.db.ArrayBackedSortedColumns.addColumn(ArrayBackedSortedColumns.java:116)
at org.apache.cassandra.db.ColumnFamily.addColumn(ColumnFamily.java:121)
at 
org.apache.cassandra.db.ColumnFamily.addIfRelevant(ColumnFamily.java:115)
at 
org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:211)
at 
org.apache.cassandra.db.filter.ExtendedFilter$WithClauses.prune(ExtendedFilter.java:290)
at 
org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1792)
at 
org.apache.cassandra.db.index.keys.KeysSearcher.search(KeysSearcher.java:54)
at 
org.apache.cassandra.db.index.SecondaryIndexManager.search(SecondaryIndexManager.java:551)
at 
org.apache.cassandra.db.ColumnFamilyStore.search(ColumnFamilyStore.java:1755)
at 
org.apache.cassandra.db.RangeSliceCommand.executeLocally(RangeSliceCommand.java:135)
at 
org.apache.cassandra.service.RangeSliceVerbHandler.doVerb(RangeSliceVerbHandler.java:39)
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:62)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)



 java.lang.AssertionError: Added column does not sort as the last column
 ---

 Key: CASSANDRA-8985
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8985
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.0.13
 OracleJDK1.7
 Debian 7.8
Reporter: Maxim
 Fix For: 2.0.14


 After upgrade Cassandra from 2.0.12 to 2.0.13 I begin to receive an error:
 {code}ERROR [ReadStage:1823] 2015-03-18 09:03:27,091 CassandraDaemon.java 
 (line 199) Exception in thread Thread[ReadStage:1823,5,main]
 java.lang.AssertionError: Added column does not sort as the last column
   at 
 org.apache.cassandra.db.ArrayBackedSortedColumns.addColumn(ArrayBackedSortedColumns.java:116)
   at org.apache.cassandra.db.ColumnFamily.addColumn(ColumnFamily.java:121)
   at 
 org.apache.cassandra.db.ColumnFamily.addIfRelevant(ColumnFamily.java:115)
   at 
 org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:211)
   at 
 org.apache.cassandra.db.filter.ExtendedFilter$WithClauses.prune(ExtendedFilter.java:290)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1792)
   at 
 org.apache.cassandra.db.index.keys.KeysSearcher.search(KeysSearcher.java:54)
   at 
 

[jira] [Created] (CASSANDRA-8988) Optimise IntervalTree

2015-03-18 Thread Benedict (JIRA)
Benedict created CASSANDRA-8988:
---

 Summary: Optimise IntervalTree
 Key: CASSANDRA-8988
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8988
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Trivial
 Fix For: 2.1.4


We perform a lot of unnecessary comparisons in 
IntervalTree.IntervalNode.searchInternal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8884) Opening a non-system keyspace before first accessing the system keyspace results in deadlock

2015-03-18 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367392#comment-14367392
 ] 

Piotr Kołaczkowski commented on CASSANDRA-8884:
---

I'm running my code *outside* of a Cassandra instance.
I'll create a simple program to demonstrate the issue.

 Opening a non-system keyspace before first accessing the system keyspace 
 results in deadlock
 

 Key: CASSANDRA-8884
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8884
 Project: Cassandra
  Issue Type: Bug
Reporter: Piotr Kołaczkowski
Assignee: Benjamin Lerer
 Attachments: bulk.jstack


 I created a writer like this:
 {code}
 val writer = CQLSSTableWriter.builder()
   .forTable(tableDef.cql)
   .using(insertStatement)
   .withPartitioner(partitioner)
   .inDirectory(outputDirectory)
   .withBufferSizeInMB(bufferSizeInMB)
   .build()
 {code}
 Then I'm trying to write a row with {{addRow}} and it blocks forever.
 Everything related to {{CQLSSTableWriter}}, including its creation, is 
 happening in only one thread.
 {noformat}
 SSTableBatchOpen:3 daemon prio=10 tid=0x7f4b399d7000 nid=0x4778 waiting 
 for monitor entry [0x7f4b240a7000]
java.lang.Thread.State: BLOCKED (on object monitor)
   at org.apache.cassandra.db.Keyspace.open(Keyspace.java:118)
   - waiting to lock 0xe35fd6d0 (a java.lang.Class for 
 org.apache.cassandra.db.Keyspace)
   at org.apache.cassandra.db.Keyspace.open(Keyspace.java:99)
   at 
 org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepare(SelectStatement.java:1464)
   at 
 org.apache.cassandra.cql3.QueryProcessor.getStatement(QueryProcessor.java:517)
   at 
 org.apache.cassandra.cql3.QueryProcessor.parseStatement(QueryProcessor.java:265)
   at 
 org.apache.cassandra.cql3.QueryProcessor.prepareInternal(QueryProcessor.java:306)
   at 
 org.apache.cassandra.cql3.QueryProcessor.executeInternal(QueryProcessor.java:316)
   at 
 org.apache.cassandra.db.SystemKeyspace.getSSTableReadMeter(SystemKeyspace.java:910)
   at 
 org.apache.cassandra.io.sstable.SSTableReader.init(SSTableReader.java:561)
   at 
 org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:433)
   at 
 org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:343)
   at 
 org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:480)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:745)
 SSTableBatchOpen:2 daemon prio=10 tid=0x7f4b399e7800 nid=0x4777 waiting 
 for monitor entry [0x7f4b23ca3000]
java.lang.Thread.State: BLOCKED (on object monitor)
   at org.apache.cassandra.db.Keyspace.open(Keyspace.java:118)
   - waiting to lock 0xe35fd6d0 (a java.lang.Class for 
 org.apache.cassandra.db.Keyspace)
   at org.apache.cassandra.db.Keyspace.open(Keyspace.java:99)
   at 
 org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepare(SelectStatement.java:1464)
   at 
 org.apache.cassandra.cql3.QueryProcessor.getStatement(QueryProcessor.java:517)
   at 
 org.apache.cassandra.cql3.QueryProcessor.parseStatement(QueryProcessor.java:265)
   at 
 org.apache.cassandra.cql3.QueryProcessor.prepareInternal(QueryProcessor.java:306)
   at 
 org.apache.cassandra.cql3.QueryProcessor.executeInternal(QueryProcessor.java:316)
   at 
 org.apache.cassandra.db.SystemKeyspace.getSSTableReadMeter(SystemKeyspace.java:910)
   at 
 org.apache.cassandra.io.sstable.SSTableReader.init(SSTableReader.java:561)
   at 
 org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:433)
   at 
 org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:343)
   at 
 org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:480)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:745)
 SSTableBatchOpen:1 daemon prio=10 tid=0x7f4b399e7000 nid=0x4776 waiting 
 for monitor entry [0x7f4b2359d000]
java.lang.Thread.State: BLOCKED (on object monitor)
   at 

[jira] [Created] (CASSANDRA-8989) Reading from table which contains collection type using token function and with CL > ONE causes overwhelming writes to replicas

2015-03-18 Thread Miroslaw Partyka (JIRA)
Miroslaw Partyka created CASSANDRA-8989:
---

 Summary: Reading from table which contains collection type using 
token function and with CL > ONE causes overwhelming writes to replicas
 Key: CASSANDRA-8989
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8989
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Miroslaw Partyka


When reading from a table under the aforementioned conditions, each read from a 
replica also causes a write to the replica. 

Confirmed in versions 2.0.12 & 2.0.13; version 2.1.3 seems OK.

To reproduce:

CREATE KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', 
'DC1': 2};
USE test;
CREATE TABLE bug(id int PRIMARY KEY, val map<int,int>);
INSERT INTO bug(id, val) VALUES (1, {2: 3});
CONSISTENCY LOCAL_QUORUM
TRACING ON
SELECT * FROM bug WHERE token(id) = 0;

trace contains twice:
Appending to commitlog
Adding to bug memtable




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8989) Reading from table which contains collection type using token function and with CL > ONE causes overwhelming writes to replicas

2015-03-18 Thread Miroslaw Partyka (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miroslaw Partyka updated CASSANDRA-8989:

Attachment: trace.txt

 Reading from table which contains collection type using token function and 
 with CL > ONE causes overwhelming writes to replicas
 ---

 Key: CASSANDRA-8989
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8989
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Miroslaw Partyka
 Attachments: trace.txt


 When reading from a table under the aforementioned conditions, each read from a 
 replica also causes a write to the replica. 
 Confirmed in versions 2.0.12 & 2.0.13; version 2.1.3 seems OK.
 To reproduce:
 CREATE KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', 
 'DC1': 2};
 USE test;
 CREATE TABLE bug(id int PRIMARY KEY, val map<int,int>);
 INSERT INTO bug(id, val) VALUES (1, {2: 3});
 CONSISTENCY LOCAL_QUORUM
 TRACING ON
 SELECT * FROM bug WHERE token(id) = 0;
 trace contains twice:
 Appending to commitlog
 Adding to bug memtable



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: ninja fix bad merge

2015-03-18 Thread benedict
Repository: cassandra
Updated Branches:
  refs/heads/trunk 07cad8e8d - f36fe9fb1


ninja fix bad merge


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f36fe9fb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f36fe9fb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f36fe9fb

Branch: refs/heads/trunk
Commit: f36fe9fb18b37656dbffd4f136edb31dc3abc45e
Parents: 07cad8e
Author: Benedict Elliott Smith bened...@apache.org
Authored: Wed Mar 18 15:58:44 2015 +
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Wed Mar 18 15:58:44 2015 +

--
 .../org/apache/cassandra/io/sstable/SSTableScannerTest.java| 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f36fe9fb/test/unit/org/apache/cassandra/io/sstable/SSTableScannerTest.java
--
diff --git a/test/unit/org/apache/cassandra/io/sstable/SSTableScannerTest.java 
b/test/unit/org/apache/cassandra/io/sstable/SSTableScannerTest.java
index 22eb5a0..0538a11 100644
--- a/test/unit/org/apache/cassandra/io/sstable/SSTableScannerTest.java
+++ b/test/unit/org/apache/cassandra/io/sstable/SSTableScannerTest.java
@@ -114,7 +114,7 @@ public class SSTableScannerTest
 
 private static Token token(int key)
 {
-return key == Integer.MIN_VALUE ? ByteOrderedPartitioner.MINIMUM : new 
BytesToken(toKey(key).getBytes());
+return key == Integer.MIN_VALUE ? ByteOrderedPartitioner.MINIMUM : new 
ByteOrderedPartitioner.BytesToken(toKey(key).getBytes());
 }
 
 private static RowPosition min(int key)
@@ -134,8 +134,8 @@ public class SSTableScannerTest
 
private static Range<Token> rangeFor(int start, int end)
 {
-return new Range<Token>(new BytesToken(toKey(start).getBytes()),
-end == Integer.MIN_VALUE ? 
ByteOrderedPartitioner.MINIMUM : new BytesToken(toKey(end).getBytes()));
+return new Range<Token>(new 
ByteOrderedPartitioner.BytesToken(toKey(start).getBytes()),
+end == Integer.MIN_VALUE ? 
ByteOrderedPartitioner.MINIMUM : new 
ByteOrderedPartitioner.BytesToken(toKey(end).getBytes()));
 }
 
private static Collection<Range<Token>> makeRanges(int ... keys)



[jira] [Commented] (CASSANDRA-8094) Heavy writes in RangeSlice read requests

2015-03-18 Thread Mateusz Gajewski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367370#comment-14367370
 ] 

Mateusz Gajewski commented on CASSANDRA-8094:
-

Is there a chance that this bug will be fixed? We are using Cassandra with 
Spark to build some aggregates, and during reads our cluster gets a really high 
number of writes, which makes it unusable.

 Heavy writes in RangeSlice read  requests 
 --

 Key: CASSANDRA-8094
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8094
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Minh Do
Assignee: Minh Do
 Fix For: 2.0.14


 RangeSlice requests always do a scheduled read repair when coordinators try 
 to resolve replicas' responses, whether read_repair_chance is set or not.
 Because of this, in low-write, high-read clusters there is a very high volume 
 of write requests going on between nodes.  
 We should have an option to turn this off, and it can be different from 
 read_repair_chance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8989) Reading from table which contains collection type using token function and with CL > ONE causes overwhelming writes to replicas

2015-03-18 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367408#comment-14367408
 ] 

Brandon Williams commented on CASSANDRA-8989:
-

Can you add the actual trace output?

 Reading from table which contains collection type using token function and 
 with CL > ONE causes overwhelming writes to replicas
 ---

 Key: CASSANDRA-8989
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8989
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Miroslaw Partyka

 When reading from a table under the aforementioned conditions, each read from a 
 replica also causes a write to the replica. 
 Confirmed in versions 2.0.12 & 2.0.13; version 2.1.3 seems OK.
 To reproduce:
 CREATE KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', 
 'DC1': 2};
 USE test;
 CREATE TABLE bug(id int PRIMARY KEY, val map<int,int>);
 INSERT INTO bug(id, val) VALUES (1, {2: 3});
 CONSISTENCY LOCAL_QUORUM
 TRACING ON
 SELECT * FROM bug WHERE token(id) = 0;
 trace contains twice:
 Appending to commitlog
 Adding to bug memtable



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8988) Optimise IntervalTree

2015-03-18 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-8988:
--
Reviewer: Branimir Lambov

 Optimise IntervalTree
 -

 Key: CASSANDRA-8988
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8988
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Trivial
 Fix For: 2.1.4

 Attachments: 8988.txt


 We perform a lot of unnecessary comparisons in 
 IntervalTree.IntervalNode.searchInternal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8920) Remove IntervalTree from maxPurgeableTimestamp calculation

2015-03-18 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367573#comment-14367573
 ] 

Benedict edited comment on CASSANDRA-8920 at 3/18/15 6:06 PM:
--

OK, so following our discussion this morning I thought this afternoon I'd have 
a quick crack at just making it faster. It's actually possible to make the 
amortized cost constant, and I've introduced a new class to do this. Mostly 
it's boilerplate; the interesting code is just a few lines.

Basically we sort the intervals in ascending order of both min _and_ max. Each 
time we visit a key, we search forwards from the last key in the min-ordered 
collection, inserting every item that is now behind us. We then search forwards 
from the last key in the max-ordered collection _removing_ everything that is 
now before us. We maintain this collection in a HashSet, and iterate the 
contents of the hashset, performing our BF lookups on the contents of this set. 
The result is behaviour that is algorithmically optimal for both LCS and STCS.

I've wired up some randomised testing that ensures the results are identical to 
performing an interval tree search, which helpfully also corroborates that this 
collection is behaving correctly.

The only caveat is that algorithmic performance is O(max(N,S)/N) where S is the 
number of sstables, and N the number of keys we're compacting. So if somehow S 
is much larger than N, performance will not be constant. But this would be 
indicative of significantly larger problems.

I've uploaded my patch 
[here|https://github.com/belliottsmith/cassandra/tree/8920]


was (Author: benedict):
OK, so following our discussion this morning I thought this afternoon I'd have 
a quick crack at just making it faster. It's actually possible to make the 
amortized cost constant, and I've introduced a new class to do this. Mostly 
it's boilerplate; the interesting code is just a few lines.

Basically we sort the intervals in ascending order of both min _and_ max. Each 
time we visit a key, we search forwards from the last key in the min-ordered 
collection, inserting every item that is now behind us. We then search forwards 
from the last key in the max-ordered collection _removing_ everything that is 
now before us. We maintain this collection in a HashSet, and iterate the 
contents of the hashset, performing our BF lookups on the contents of this set. 
The result is behaviour that is algorithmically optimal for both LCS and STCS.

I've wired up some randomised testing that ensures the results are identical to 
performing an interval tree search, which helpfully also corroborates that this 
collection is behaving correctly.

The only caveat is that algorithmic performance is O(max(N,S)/N) where S is the 
number of sstables, and N the number of keys we're compacting. So if somehow S 
is much larger than N, performance will not be constant. But this would be 
indicative of significantly larger problems.

 Remove IntervalTree from maxPurgeableTimestamp calculation
 --

 Key: CASSANDRA-8920
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8920
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 2.1.4

 Attachments: 8920.txt


 The IntervalTree only maps partition keys. Since a majority of users deploy a 
 hashed partitioner the work is mostly wasted, since they will be evenly 
 distributed across the full token range owned by the node - and in some cases 
 it is a significant amount of work. We can perform a corroboration against 
 the file bounds if we get a BF match as a sanity check if we like, but 
 performing an IntervalTree search is significantly more expensive (esp. once 
 murmur hash calculation memoization goes mainstream).
 In LCS, the keys are bounded, so it might appear that it would help, but in 
 this scenario we only compact against like bounds, so again it is not helpful.
 With a ByteOrderedPartitioner it could potentially be of use, but this is 
 sufficiently rare to not optimise for IMO.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8851) Uncaught exception on thread Thread[SharedPool-Worker-16,5,main] after upgrade to 2.1.3

2015-03-18 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367613#comment-14367613
 ] 

T Jake Luciani commented on CASSANDRA-8851:
---

bq. The inexplicable thing

Could downsampling have run?  The key was dropped?


 Uncaught exception on thread Thread[SharedPool-Worker-16,5,main] after 
 upgrade to 2.1.3
 ---

 Key: CASSANDRA-8851
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8851
 Project: Cassandra
  Issue Type: Bug
 Environment: ubuntu 
Reporter: Tobias Schlottke
Assignee: Benedict
Priority: Critical
 Fix For: 2.1.4

 Attachments: schema.txt, system.log.gz


 Hi there,
 after upgrading to 2.1.3 we've got the following error every few seconds:
 {code}
 WARN  [SharedPool-Worker-16] 2015-02-23 10:20:36,392 
 AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
 Thread[SharedPool-Worker-16,5,main]: {}
 java.lang.AssertionError: null
   at org.apache.cassandra.io.util.Memory.size(Memory.java:307) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.utils.obs.OffHeapBitSet.capacity(OffHeapBitSet.java:61) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at org.apache.cassandra.utils.BloomFilter.indexes(BloomFilter.java:74) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.utils.BloomFilter.isPresent(BloomFilter.java:98) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:1366)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:1350)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.columniterator.SSTableSliceIterator.init(SSTableSliceIterator.java:41)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:185)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:273)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:62)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1915)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1748)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:342) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:57)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:47) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:62) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_45]
   at 
 org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
 [apache-cassandra-2.1.3.jar:2.1.3]
   at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
 {code}
 This seems to crash the compactions and pushes up server load and piles up 
 compactions.
 Any idea / possible workaround?
 Best,
 Tobias



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8838) Resumable bootstrap streaming

2015-03-18 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367511#comment-14367511
 ] 

Yuki Morishita commented on CASSANDRA-8838:
---

Committed, thanks for review!

I also made a pull request for cassandra-dtest here 
(https://github.com/riptano/cassandra-dtest/pull/199).

 Resumable bootstrap streaming
 -

 Key: CASSANDRA-8838
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8838
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Yuki Morishita
Assignee: Yuki Morishita
Priority: Minor
  Labels: dense-storage
 Fix For: 3.0


 This allows the bootstrapping node to avoid being streamed data it has already 
 received. The bootstrapping node records the received keyspace/ranges as each 
 stream session completes. When some sessions with other nodes fail, bootstrapping 
 fails as well, but the next time it re-bootstraps, the already received 
 keyspace/ranges are skipped rather than streamed again.
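
 A toy sketch of the record-and-skip idea described above might look like the 
 following; it is a deliberate simplification with assumed names, not the 
 committed code.
 {code}
 import java.util.Arrays;
 import java.util.Collections;
 import java.util.HashMap;
 import java.util.HashSet;
 import java.util.Map;
 import java.util.Set;

 // Sketch only: remember which (keyspace, range) pairs have completed
 // streaming, and filter them out of the request when bootstrap is retried.
 public class ResumableBootstrapSketch
 {
     // keyspace -> ranges whose stream sessions completed successfully
     private final Map<String, Set<String>> received = new HashMap<>();

     // Called as each stream session for a keyspace/range completes.
     public void markReceived(String keyspace, String range)
     {
         received.computeIfAbsent(keyspace, k -> new HashSet<>()).add(range);
     }

     // On re-bootstrap, only the ranges not already received are requested.
     public Set<String> rangesStillNeeded(String keyspace, Set<String> allRanges)
     {
         Set<String> remaining = new HashSet<>(allRanges);
         remaining.removeAll(received.getOrDefault(keyspace, Collections.emptySet()));
         return remaining;
     }

     public static void main(String[] args)
     {
         ResumableBootstrapSketch s = new ResumableBootstrapSketch();
         s.markReceived("ks1", "(0,100]");
         Set<String> all = new HashSet<>(Arrays.asList("(0,100]", "(100,200]"));
         System.out.println(s.rangesStillNeeded("ks1", all)); // prints [(100,200]]
     }
 }
 {code}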



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8920) Remove IntervalTree from maxPurgeableTimestamp calculation

2015-03-18 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367576#comment-14367576
 ] 

Benedict commented on CASSANDRA-8920:
-

I think for LCS we can still probably do better than we currently do. The 
IntervalTree is actually quite an expensive structure to query, and in this 
scenario I think we can do something much simpler: if we sort in ascending 
order by first _and_ ascending order by last,

I've uploaded my patch 
[here|https://github.com/belliottsmith/cassandra/tree/8920]

 Remove IntervalTree from maxPurgeableTimestamp calculation
 --

 Key: CASSANDRA-8920
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8920
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 2.1.4

 Attachments: 8920.txt


 The IntervalTree only maps partition keys. Since a majority of users deploy a 
 hashed partitioner the work is mostly wasted, since they will be evenly 
 distributed across the full token range owned by the node - and in some cases 
 it is a significant amount of work. We can perform a corroboration against 
 the file bounds if we get a BF match as a sanity check if we like, but 
 performing an IntervalTree search is significantly more expensive (esp. once 
 murmur hash calculation memoization goes mainstream).
 In LCS, the keys are bounded, so it might appear that it would help, but in 
 this scenario we only compact against like bounds, so again it is not helpful.
 With a ByteOrderedPartitioner it could potentially be of use, but this is 
 sufficiently rare to not optimise for IMO.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8920) Remove IntervalTree from maxPurgeableTimestamp calculation

2015-03-18 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367573#comment-14367573
 ] 

Benedict commented on CASSANDRA-8920:
-

OK, so following our discussion this morning I thought this afternoon I'd have 
a quick crack at just making it faster. It's actually possible to make the 
amortized cost constant, and I've introduced a new class to do this. Mostly 
it's boilerplate; the interesting code is just a few lines.

Basically we sort the intervals in ascending order of both min _and_ max. Each 
time we visit a key, we search forwards from the last key in the min-ordered 
collection, inserting every item that is now behind us. We then search forwards 
from the last key in the max-ordered collection _removing_ everything that is 
now before us. We maintain this collection in a HashSet, and iterate the 
contents of the hashset, performing our BF lookups on the contents of this set. 
The result is behaviour that is algorithmically optimal for both LCS and STCS.

I've wired up some randomised testing that ensures the results are identical to 
performing an interval tree search, which helpfully also corroborates that this 
collection is behaving correctly.

The only caveat is that algorithmic performance is O(max(N,S)/N) where S is the 
number of sstables, and N the number of keys we're compacting. So if somehow S 
is much larger than N, performance will not be constant. But this would be 
indicative of significantly larger problems.
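
A minimal standalone sketch of the sweep described above, using plain longs for 
keys and simple [min, max] pairs for sstable bounds, might look like the 
following. It only illustrates the amortised-constant active-set idea and is 
not the attached patch.
{code}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch only: keys are visited in ascending order (as compaction does), and
// the set of intervals overlapping the current key is maintained incrementally
// instead of being recomputed with an interval-tree search per key.
public class OverlapSweepSketch
{
    static final class Interval
    {
        final long min, max; // e.g. an sstable's first/last key
        Interval(long min, long max) { this.min = min; this.max = max; }
    }

    private final List<Interval> byMin; // ascending by min
    private final List<Interval> byMax; // ascending by max
    private int minIdx = 0, maxIdx = 0;
    private final Set<Interval> active = new HashSet<>();

    OverlapSweepSketch(List<Interval> intervals)
    {
        byMin = new ArrayList<>(intervals);
        byMin.sort(Comparator.comparingLong((Interval i) -> i.min));
        byMax = new ArrayList<>(intervals);
        byMax.sort(Comparator.comparingLong((Interval i) -> i.max));
    }

    // Must be called with keys in ascending order.
    Set<Interval> overlapping(long key)
    {
        while (minIdx < byMin.size() && byMin.get(minIdx).min <= key)
            active.add(byMin.get(minIdx++));    // interval now starts at or behind the key
        while (maxIdx < byMax.size() && byMax.get(maxIdx).max < key)
            active.remove(byMax.get(maxIdx++)); // interval now ends entirely before the key
        return active;                          // the candidates for the BF lookups
    }
}
{code}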

 Remove IntervalTree from maxPurgeableTimestamp calculation
 --

 Key: CASSANDRA-8920
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8920
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 2.1.4

 Attachments: 8920.txt


 The IntervalTree only maps partition keys. Since a majority of users deploy a 
 hashed partitioner the work is mostly wasted, since they will be evenly 
 distributed across the full token range owned by the node - and in some cases 
 it is a significant amount of work. We can perform a corroboration against 
 the file bounds if we get a BF match as a sanity check if we like, but 
 performing an IntervalTree search is significantly more expensive (esp. once 
 murmur hash calculation memoization goes mainstream).
 In LCS, the keys are bounded, so it might appear that it would help, but in 
 this scenario we only compact against like bounds, so again it is not helpful.
 With a ByteOrderedPartitioner it could potentially be of use, but this is 
 sufficiently rare to not optimise for IMO.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8851) Uncaught exception on thread Thread[SharedPool-Worker-16,5,main] after upgrade to 2.1.3

2015-03-18 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367608#comment-14367608
 ] 

Björn Hachmann commented on CASSANDRA-8851:
---

Hi @Benedikt,

I have gathered the information you asked for. It can be downloaded from our 
server:

A list of all files in our data directory:
http://clients.metrigo.com/data_files.txt

The complete logs, assembled in correct order, for the whole of March:
http://clients.metrigo.com/system.log.gz

A heap dump, created with jmap:
http://clients.metrigo.com/heapdump.bin.gz

Kind regards
Björn

 Uncaught exception on thread Thread[SharedPool-Worker-16,5,main] after 
 upgrade to 2.1.3
 ---

 Key: CASSANDRA-8851
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8851
 Project: Cassandra
  Issue Type: Bug
 Environment: ubuntu 
Reporter: Tobias Schlottke
Assignee: Benedict
Priority: Critical
 Fix For: 2.1.4

 Attachments: schema.txt, system.log.gz


 Hi there,
 after upgrading to 2.1.3 we've got the following error every few seconds:
 {code}
 WARN  [SharedPool-Worker-16] 2015-02-23 10:20:36,392 
 AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
 Thread[SharedPool-Worker-16,5,main]: {}
 java.lang.AssertionError: null
   at org.apache.cassandra.io.util.Memory.size(Memory.java:307) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.utils.obs.OffHeapBitSet.capacity(OffHeapBitSet.java:61) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at org.apache.cassandra.utils.BloomFilter.indexes(BloomFilter.java:74) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.utils.BloomFilter.isPresent(BloomFilter.java:98) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:1366)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:1350)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.columniterator.SSTableSliceIterator.init(SSTableSliceIterator.java:41)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:185)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:273)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:62)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1915)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1748)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:342) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:57)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:47) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:62) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_45]
   at 
 org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
 [apache-cassandra-2.1.3.jar:2.1.3]
   at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
 {code}
 This seems to crash the compactions and pushes up server load and piles up 
 compactions.
 Any idea / possible workaround?
 Best,
 Tobias



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8696) nodetool repair on cassandra 2.1.2 keyspaces return java.lang.RuntimeException: Could not create snapshot

2015-03-18 Thread Jeff Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367470#comment-14367470
 ] 

Jeff Liu commented on CASSANDRA-8696:
-

[~benedict] and team, there have been several people reporting the same issue 
since I created the ticket. Any progress on reproducing and identifying the 
root cause? Thanks.

 nodetool repair on cassandra 2.1.2 keyspaces return 
 java.lang.RuntimeException: Could not create snapshot
 -

 Key: CASSANDRA-8696
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8696
 Project: Cassandra
  Issue Type: Bug
Reporter: Jeff Liu

 When trying to run nodetool repair -pr on a cassandra node (2.1.2), cassandra 
 throws java exceptions: cannot create snapshot. 
 the error log from system.log:
 {noformat}
 INFO  [STREAM-IN-/10.97.9.110] 2015-01-28 02:07:28,815 
 StreamResultFuture.java:166 - [Stream #692c1450-a692-11e4-9973-070e938df227 
 ID#0] Prepare completed. Receiving 2 files(221187 bytes), sending 5 
 files(632105 bytes)
 INFO  [STREAM-IN-/10.97.9.110] 2015-01-28 02:07:29,046 
 StreamResultFuture.java:180 - [Stream #692c1450-a692-11e4-9973-070e938df227] 
 Session with /10.97.9.110 is complete
 INFO  [STREAM-IN-/10.97.9.110] 2015-01-28 02:07:29,046 
 StreamResultFuture.java:212 - [Stream #692c1450-a692-11e4-9973-070e938df227] 
 All sessions completed
 INFO  [STREAM-IN-/10.97.9.110] 2015-01-28 02:07:29,047 
 StreamingRepairTask.java:96 - [repair #685e3d00-a692-11e4-9973-070e938df227] 
 streaming task succeed, returning response to /10.98.194.68
 INFO  [RepairJobTask:1] 2015-01-28 02:07:29,065 StreamResultFuture.java:86 - 
 [Stream #692c6270-a692-11e4-9973-070e938df227] Executing streaming plan for 
 Repair
 INFO  [StreamConnectionEstablisher:4] 2015-01-28 02:07:29,065 
 StreamSession.java:213 - [Stream #692c6270-a692-11e4-9973-070e938df227] 
 Starting streaming to /10.66.187.201
 INFO  [StreamConnectionEstablisher:4] 2015-01-28 02:07:29,070 
 StreamCoordinator.java:209 - [Stream #692c6270-a692-11e4-9973-070e938df227, 
 ID#0] Beginning stream session with /10.66.187.201
 INFO  [STREAM-IN-/10.66.187.201] 2015-01-28 02:07:29,465 
 StreamResultFuture.java:166 - [Stream #692c6270-a692-11e4-9973-070e938df227 
 ID#0] Prepare completed. Receiving 5 files(627994 bytes), sending 5 
 files(632105 bytes)
 INFO  [StreamReceiveTask:22] 2015-01-28 02:07:31,971 
 StreamResultFuture.java:180 - [Stream #692c6270-a692-11e4-9973-070e938df227] 
 Session with /10.66.187.201 is complete
 INFO  [StreamReceiveTask:22] 2015-01-28 02:07:31,972 
 StreamResultFuture.java:212 - [Stream #692c6270-a692-11e4-9973-070e938df227] 
 All sessions completed
 INFO  [StreamReceiveTask:22] 2015-01-28 02:07:31,972 
 StreamingRepairTask.java:96 - [repair #685e3d00-a692-11e4-9973-070e938df227] 
 streaming task succeed, returning response to /10.98.194.68
 ERROR [RepairJobTask:1] 2015-01-28 02:07:39,444 RepairJob.java:127 - Error 
 occurred during snapshot phase
 java.lang.RuntimeException: Could not create snapshot at /10.97.9.110
 at 
 org.apache.cassandra.repair.SnapshotTask$SnapshotCallback.onFailure(SnapshotTask.java:77)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.net.MessagingService$5$1.run(MessagingService.java:347) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_45]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_45]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  [na:1.7.0_45]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_45]
 at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
 INFO  [AntiEntropySessions:6] 2015-01-28 02:07:39,445 RepairSession.java:260 
 - [repair #6f85e740-a692-11e4-9973-070e938df227] new session: will sync 
 /10.98.194.68, /10.66.187.201, /10.226.218.135 on range 
 (12817179804668051873746972069086
 2638799,12863540308359254031520865977436165] for events.[bigint0text, 
 bigint0boolean, bigint0int, dataset_catalog, column_categories, 
 bigint0double, bigint0bigint]
 ERROR [AntiEntropySessions:5] 2015-01-28 02:07:39,445 RepairSession.java:303 
 - [repair #685e3d00-a692-11e4-9973-070e938df227] session completed with the 
 following error
 java.io.IOException: Failed during snapshot creation.
 at 
 org.apache.cassandra.repair.RepairSession.failedSnapshot(RepairSession.java:344)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.repair.RepairJob$2.onFailure(RepairJob.java:128) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at com.google.common.util.concurrent.Futures$4.run(Futures.java:1172) 
 

[jira] [Commented] (CASSANDRA-8989) Reading from table which contains collection type using token function and with CL > ONE causes overwhelming writes to replicas

2015-03-18 Thread Miroslaw Partyka (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367478#comment-14367478
 ] 

Miroslaw Partyka commented on CASSANDRA-8989:
-

I added an attachment with the trace. It is an extremely simplified case.
In fact we had tens of millions of writes to the replicas while reading using 
Spark.

 Reading from table which contains collection type using token function and 
 with CL > ONE causes overwhelming writes to replicas
 ---

 Key: CASSANDRA-8989
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8989
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Miroslaw Partyka
 Attachments: trace.txt


 When reading from a table under the aforementioned conditions, each read from 
 a replica also causes a write to the replica. 
 Confirmed in versions 2.0.12 & 2.0.13; version 2.1.3 seems ok.
 To reproduce:
 CREATE KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', 
 'DC1': 2};
 USE test;
 CREATE TABLE bug(id int PRIMARY KEY, val map<int,int>);
 INSERT INTO bug(id, val) VALUES (1, {2: 3});
 CONSISTENCY LOCAL_QUORUM
 TRACING ON
 SELECT * FROM bug WHERE token(id) >= 0;
 trace contains twice:
 Appending to commitlog
 Adding to bug memtable
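
If it helps anyone else checking for this behaviour, below is a minimal sketch that counts the commitlog appends a single traced read produces. It assumes the DataStax Java driver 2.x, a node reachable on 127.0.0.1 and the test.bug table from the reproduction above; the class name and contact point are just placeholders.

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.QueryTrace;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

// Sketch only: assumes the DataStax Java driver 2.x and the test.bug table above.
public class TraceCheck
{
    public static void main(String[] args)
    {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();
        try
        {
            SimpleStatement stmt = new SimpleStatement("SELECT * FROM test.bug WHERE token(id) >= 0");
            stmt.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
            stmt.enableTracing();

            ResultSet rs = session.execute(stmt);
            QueryTrace trace = rs.getExecutionInfo().getQueryTrace();

            int commitlogAppends = 0;
            for (QueryTrace.Event event : trace.getEvents())
                if (event.getDescription().startsWith("Appending to commitlog"))
                    commitlogAppends++;

            // A plain SELECT should not append to the commitlog at all.
            System.out.println("commitlog appends during read: " + commitlogAppends);
        }
        finally
        {
            cluster.close();
        }
    }
}
{code}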



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Resumable bootstrap streaming

2015-03-18 Thread yukim
Repository: cassandra
Updated Branches:
  refs/heads/trunk f36fe9fb1 -> 690fbf3ba


Resumable bootstrap streaming

patch by yukim; reviewed by Stefania Alborghetti for CASSANDRA-8838


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/690fbf3b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/690fbf3b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/690fbf3b

Branch: refs/heads/trunk
Commit: 690fbf3ba90cee726eb58ed1f69700d178993f75
Parents: f36fe9f
Author: Yuki Morishita yu...@apache.org
Authored: Wed Mar 18 12:18:28 2015 -0500
Committer: Yuki Morishita yu...@apache.org
Committed: Wed Mar 18 12:18:28 2015 -0500

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/db/SystemKeyspace.java | 83 -
 .../org/apache/cassandra/dht/BootStrapper.java  | 23 +++--
 .../org/apache/cassandra/dht/RangeStreamer.java | 95 +---
 .../apache/cassandra/dht/StreamStateStore.java  | 82 +
 .../cassandra/service/StorageService.java   | 53 +++
 .../apache/cassandra/streaming/StreamEvent.java |  5 ++
 .../cassandra/streaming/StreamSession.java  |  4 +-
 .../apache/cassandra/db/SystemKeyspaceTest.java | 24 ++---
 .../apache/cassandra/dht/BootStrapperTest.java  |  5 +-
 .../cassandra/dht/StreamStateStoreTest.java | 76 
 11 files changed, 372 insertions(+), 79 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/690fbf3b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index c07599a..955d8e3 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -71,6 +71,7 @@
  * Select optimal CRC32 implementation at runtime (CASSANDRA-8614)
  * Evaluate MurmurHash of Token once per query (CASSANDRA-7096)
  * Generalize progress reporting (CASSANDRA-8901)
+ * Resumable bootstrap streaming (CASSANDRA-8838)
 
 2.1.4
  * Use correct bounds for page cache eviction of compressed files 
(CASSANDRA-8746)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/690fbf3b/src/java/org/apache/cassandra/db/SystemKeyspace.java
--
diff --git a/src/java/org/apache/cassandra/db/SystemKeyspace.java 
b/src/java/org/apache/cassandra/db/SystemKeyspace.java
index dcd0e55..9fa3c6b 100644
--- a/src/java/org/apache/cassandra/db/SystemKeyspace.java
+++ b/src/java/org/apache/cassandra/db/SystemKeyspace.java
@@ -18,6 +18,7 @@
 package org.apache.cassandra.db;
 
 import java.io.DataInputStream;
+import java.io.IOError;
 import java.io.IOException;
 import java.net.InetAddress;
 import java.nio.ByteBuffer;
@@ -27,6 +28,7 @@ import javax.management.openmbean.*;
 
 import com.google.common.base.Function;
 import com.google.common.collect.*;
+import com.google.common.io.ByteStreams;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -35,12 +37,13 @@ import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.config.KSMetaData;
 import org.apache.cassandra.cql3.QueryProcessor;
 import org.apache.cassandra.cql3.UntypedResultSet;
-import org.apache.cassandra.db.compaction.CompactionHistoryTabularData;
 import org.apache.cassandra.db.commitlog.ReplayPosition;
+import org.apache.cassandra.db.compaction.CompactionHistoryTabularData;
 import org.apache.cassandra.db.compaction.LeveledCompactionStrategy;
 import org.apache.cassandra.db.composites.Composite;
 import org.apache.cassandra.db.filter.QueryFilter;
 import org.apache.cassandra.db.marshal.*;
+import org.apache.cassandra.dht.IPartitioner;
 import org.apache.cassandra.dht.Range;
 import org.apache.cassandra.dht.Token;
 import org.apache.cassandra.exceptions.ConfigurationException;
@@ -49,6 +52,7 @@ import org.apache.cassandra.io.util.DataOutputBuffer;
 import org.apache.cassandra.locator.IEndpointSnitch;
 import org.apache.cassandra.locator.LocalStrategy;
 import org.apache.cassandra.metrics.RestorableMeter;
+import org.apache.cassandra.net.MessagingService;
 import org.apache.cassandra.schema.LegacySchemaTables;
 import org.apache.cassandra.service.StorageService;
 import org.apache.cassandra.service.paxos.Commit;
@@ -78,6 +82,7 @@ public final class SystemKeyspace
 public static final String COMPACTION_HISTORY = "compaction_history";
 public static final String SSTABLE_ACTIVITY = "sstable_activity";
 public static final String SIZE_ESTIMATES = "size_estimates";
+public static final String AVAILABLE_RANGES = "available_ranges";
 
 public static final CFMetaData Hints =
 compile(HINTS,
@@ -218,7 +223,7 @@ public final class SystemKeyspace
 private static final CFMetaData SizeEstimates =
 compile(SIZE_ESTIMATES,
 "per-table primary range size estimates",
- 

[jira] [Commented] (CASSANDRA-8851) Uncaught exception on thread Thread[SharedPool-Worker-16,5,main] after upgrade to 2.1.3

2015-03-18 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367596#comment-14367596
 ] 

Benedict commented on CASSANDRA-8851:
-

The inexplicable thing that requires more information is that it's passed a 
value that has just been fetched by firstKeyBeyond(), which should be 
guaranteed to return an existing key.

 Uncaught exception on thread Thread[SharedPool-Worker-16,5,main] after 
 upgrade to 2.1.3
 ---

 Key: CASSANDRA-8851
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8851
 Project: Cassandra
  Issue Type: Bug
 Environment: ubuntu 
Reporter: Tobias Schlottke
Assignee: Benedict
Priority: Critical
 Fix For: 2.1.4

 Attachments: schema.txt, system.log.gz


 Hi there,
 after upgrading to 2.1.3 we've got the following error every few seconds:
 {code}
 WARN  [SharedPool-Worker-16] 2015-02-23 10:20:36,392 
 AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
 Thread[SharedPool-Worker-16,5,main]: {}
 java.lang.AssertionError: null
   at org.apache.cassandra.io.util.Memory.size(Memory.java:307) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.utils.obs.OffHeapBitSet.capacity(OffHeapBitSet.java:61) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at org.apache.cassandra.utils.BloomFilter.indexes(BloomFilter.java:74) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.utils.BloomFilter.isPresent(BloomFilter.java:98) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:1366)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:1350)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.columniterator.SSTableSliceIterator.<init>(SSTableSliceIterator.java:41)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:185)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:273)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:62)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1915)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1748)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:342) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:57)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:47) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:62) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_45]
   at 
 org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
 [apache-cassandra-2.1.3.jar:2.1.3]
   at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
 {code}
 This seems to crash the compactions and pushes up server load and piles up 
 compactions.
 Any idea / possible workaround?
 Best,
 Tobias



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8989) Reading from table which contains collection type using token function and with CL > ONE causes overwhelming writes to replicas

2015-03-18 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8989:
---
Description: 
When reading from a table under the aforementioned conditions, each read from 
a replica also causes a write to the replica. 

Confirmed in versions 2.0.12 & 2.0.13; version 2.1.3 seems ok.

To reproduce:

{code}CREATE KEYSPACE test WITH replication = {'class': 
'NetworkTopologyStrategy', 'DC1': 2};
USE test;
CREATE TABLE bug(id int PRIMARY KEY, val map<int,int>);
INSERT INTO bug(id, val) VALUES (1, {2: 3});
CONSISTENCY LOCAL_QUORUM
TRACING ON
SELECT * FROM bug WHERE token(id) >= 0;{code}

trace contains twice:
Appending to commitlog
Adding to bug memtable


  was:
When reading from a table under the aforementioned conditions, each read from 
a replica also causes a write to the replica. 

Confirmed in versions 2.0.12 & 2.0.13; version 2.1.3 seems ok.

To reproduce:

CREATE KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', 
'DC1': 2};
USE test;
CREATE TABLE bug(id int PRIMARY KEY, val map<int,int>);
INSERT INTO bug(id, val) VALUES (1, {2: 3});
CONSISTENCY LOCAL_QUORUM
TRACING ON
SELECT * FROM bug WHERE token(id) >= 0;

trace contains twice:
Appending to commitlog
Adding to bug memtable



 Reading from table which contains collection type using token function and 
 with CL > ONE causes overwhelming writes to replicas
 ---

 Key: CASSANDRA-8989
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8989
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Miroslaw Partyka
 Attachments: trace.txt


 When reading from a table under the aforementioned conditions, each read from 
 a replica also causes a write to the replica. 
 Confirmed in versions 2.0.12 & 2.0.13; version 2.1.3 seems ok.
 To reproduce:
 {code}CREATE KEYSPACE test WITH replication = {'class': 
 'NetworkTopologyStrategy', 'DC1': 2};
 USE test;
 CREATE TABLE bug(id int PRIMARY KEY, val map<int,int>);
 INSERT INTO bug(id, val) VALUES (1, {2: 3});
 CONSISTENCY LOCAL_QUORUM
 TRACING ON
 SELECT * FROM bug WHERE token(id) >= 0;{code}
 trace contains twice:
 Appending to commitlog
 Adding to bug memtable



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8696) nodetool repair on cassandra 2.1.2 keyspaces return java.lang.RuntimeException: Could not create snapshot

2015-03-18 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367482#comment-14367482
 ] 

Benedict commented on CASSANDRA-8696:
-

Hi Jeff. I'd love to help out, but this is not a part of the codebase I'm 
sufficiently familiar with to have useful input, at least in an acceptable 
timeframe. I agree it should be addressed.  Have you tried enabling DEBUG 
logging as [~yukim] suggested?
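
(For reference, since Cassandra 2.1 uses logback, turning the relevant packages up to DEBUG is a small change. The snippet below is only a sketch of what that amounts to programmatically, assuming the logback classic backend and the org.apache.cassandra.repair / org.apache.cassandra.streaming packages seen in the stack traces above; in practice you would normally do it through conf/logback.xml or nodetool instead.)

{code}
import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import org.slf4j.LoggerFactory;

// Sketch only: assumes the logback backend that Cassandra 2.1 ships with.
public class EnableRepairDebug
{
    public static void main(String[] args)
    {
        // Raise just the repair and streaming packages to DEBUG to get more detail
        // around the failed snapshot request without flooding the whole log.
        ((Logger) LoggerFactory.getLogger("org.apache.cassandra.repair")).setLevel(Level.DEBUG);
        ((Logger) LoggerFactory.getLogger("org.apache.cassandra.streaming")).setLevel(Level.DEBUG);
    }
}
{code}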

 nodetool repair on cassandra 2.1.2 keyspaces return 
 java.lang.RuntimeException: Could not create snapshot
 -

 Key: CASSANDRA-8696
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8696
 Project: Cassandra
  Issue Type: Bug
Reporter: Jeff Liu

 When trying to run nodetool repair -pr on a cassandra node (2.1.2), cassandra 
 throws java exceptions: cannot create snapshot. 
 the error log from system.log:
 {noformat}
 INFO  [STREAM-IN-/10.97.9.110] 2015-01-28 02:07:28,815 
 StreamResultFuture.java:166 - [Stream #692c1450-a692-11e4-9973-070e938df227 
 ID#0] Prepare completed. Receiving 2 files(221187 bytes), sending 5 
 files(632105 bytes)
 INFO  [STREAM-IN-/10.97.9.110] 2015-01-28 02:07:29,046 
 StreamResultFuture.java:180 - [Stream #692c1450-a692-11e4-9973-070e938df227] 
 Session with /10.97.9.110 is complete
 INFO  [STREAM-IN-/10.97.9.110] 2015-01-28 02:07:29,046 
 StreamResultFuture.java:212 - [Stream #692c1450-a692-11e4-9973-070e938df227] 
 All sessions completed
 INFO  [STREAM-IN-/10.97.9.110] 2015-01-28 02:07:29,047 
 StreamingRepairTask.java:96 - [repair #685e3d00-a692-11e4-9973-070e938df227] 
 streaming task succeed, returning response to /10.98.194.68
 INFO  [RepairJobTask:1] 2015-01-28 02:07:29,065 StreamResultFuture.java:86 - 
 [Stream #692c6270-a692-11e4-9973-070e938df227] Executing streaming plan for 
 Repair
 INFO  [StreamConnectionEstablisher:4] 2015-01-28 02:07:29,065 
 StreamSession.java:213 - [Stream #692c6270-a692-11e4-9973-070e938df227] 
 Starting streaming to /10.66.187.201
 INFO  [StreamConnectionEstablisher:4] 2015-01-28 02:07:29,070 
 StreamCoordinator.java:209 - [Stream #692c6270-a692-11e4-9973-070e938df227, 
 ID#0] Beginning stream session with /10.66.187.201
 INFO  [STREAM-IN-/10.66.187.201] 2015-01-28 02:07:29,465 
 StreamResultFuture.java:166 - [Stream #692c6270-a692-11e4-9973-070e938df227 
 ID#0] Prepare completed. Receiving 5 files(627994 bytes), sending 5 
 files(632105 bytes)
 INFO  [StreamReceiveTask:22] 2015-01-28 02:07:31,971 
 StreamResultFuture.java:180 - [Stream #692c6270-a692-11e4-9973-070e938df227] 
 Session with /10.66.187.201 is complete
 INFO  [StreamReceiveTask:22] 2015-01-28 02:07:31,972 
 StreamResultFuture.java:212 - [Stream #692c6270-a692-11e4-9973-070e938df227] 
 All sessions completed
 INFO  [StreamReceiveTask:22] 2015-01-28 02:07:31,972 
 StreamingRepairTask.java:96 - [repair #685e3d00-a692-11e4-9973-070e938df227] 
 streaming task succeed, returning response to /10.98.194.68
 ERROR [RepairJobTask:1] 2015-01-28 02:07:39,444 RepairJob.java:127 - Error 
 occurred during snapshot phase
 java.lang.RuntimeException: Could not create snapshot at /10.97.9.110
 at 
 org.apache.cassandra.repair.SnapshotTask$SnapshotCallback.onFailure(SnapshotTask.java:77)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.net.MessagingService$5$1.run(MessagingService.java:347) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_45]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_45]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  [na:1.7.0_45]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_45]
 at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
 INFO  [AntiEntropySessions:6] 2015-01-28 02:07:39,445 RepairSession.java:260 
 - [repair #6f85e740-a692-11e4-9973-070e938df227] new session: will sync 
 /10.98.194.68, /10.66.187.201, /10.226.218.135 on range 
 (12817179804668051873746972069086
 2638799,12863540308359254031520865977436165] for events.[bigint0text, 
 bigint0boolean, bigint0int, dataset_catalog, column_categories, 
 bigint0double, bigint0bigint]
 ERROR [AntiEntropySessions:5] 2015-01-28 02:07:39,445 RepairSession.java:303 
 - [repair #685e3d00-a692-11e4-9973-070e938df227] session completed with the 
 following error
 java.io.IOException: Failed during snapshot creation.
 at 
 org.apache.cassandra.repair.RepairSession.failedSnapshot(RepairSession.java:344)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.repair.RepairJob$2.onFailure(RepairJob.java:128) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 

[jira] [Updated] (CASSANDRA-6458) nodetool getendpoints doesn't validate key arity

2015-03-18 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-6458:
---
   Attachment: 6458.txt
   6458-2.1.txt
Fix Version/s: 2.1.4
   3.0
 Assignee: Philip Thompson

 nodetool getendpoints doesn't validate key arity 
 -

 Key: CASSANDRA-6458
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6458
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Daneel Yaitskov
Assignee: Philip Thompson
Priority: Trivial
  Labels: lhf
 Fix For: 3.0, 2.1.4

 Attachments: 6458-2.1.txt, 6458.txt


 I have a complex row key.
 $ create table b (x int, s text, ((x,s)) primary key);
 In cqlsh I cannot fill row key partially:
 {noformat}
 $ insert into b (x) values(4);
 Bad Request: Missing mandatory PRIMARY KEY part s
 {noformat}
 But nodetool can find hosts by incomplete key
 {noformat}
 $ nodetool -h cas3 getendpoints anti_portal b 12
 192.168.4.4
 192.168.4.5
 192.168.4.6
 {noformat}
 No error is reported.
 I found that columns are separated by ':'.
 And if I pass too many elements then the error happens.
 {noformat}
 $ nodetool -h cas3 getendpoints anit_portal b 12:dd:dd
 Exception in thread main org.apache.cassandra.serializers.MarshalException: 
 unable to make int from '12:dd:dd'
 at org.apache.cassandra.db.marshal.Int32Type.fromString(Int32Type.java:69)
 at 
 org.apache.cassandra.service.StorageService.getNaturalEndpoints(StorageService.java:2495)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
 at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
 at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
 at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
 at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
 at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
 at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
 at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
 at sun.rmi.transport.Transport$1.run(Transport.java:177)
 at sun.rmi.transport.Transport$1.run(Transport.java:174)
 at java.security.AccessController.doPrivileged(Native Method)
 at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
 at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
 Caused by: java.lang.NumberFormatException: For input string: 12:dd:dd
 at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Integer.parseInt(Integer.java:492)
 at java.lang.Integer.parseInt(Integer.java:527)
 at org.apache.cassandra.db.marshal.Int32Type.fromString(Int32Type.java:65)
 ... 36 more
 {noformat}
 I think showing huge stack trace is 
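
For what it's worth, the validation this ticket asks for amounts to checking the number of colon-separated components against the table's partition key before resolving endpoints. A minimal illustrative sketch is below; it is a hypothetical helper, not the attached 6458.txt / 6458-2.1.txt patches.

{code}
// Illustrative sketch only; hypothetical helper, not the attached patches.
public final class CompositeKeyArity
{
    private CompositeKeyArity() {}

    /**
     * Split the colon-separated key that nodetool getendpoints accepts and fail
     * fast with a readable message when the component count does not match the
     * table's partition key, instead of letting fromString() blow up later.
     */
    public static String[] validate(String rawKey, int expectedComponents)
    {
        String[] parts = rawKey.split(":", -1);
        if (parts.length != expectedComponents)
            throw new IllegalArgumentException(String.format(
                "Key '%s' has %d component(s), but the partition key expects %d",
                rawKey, parts.length, expectedComponents));
        return parts;
    }
}
{code}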

[jira] [Commented] (CASSANDRA-8988) Optimise IntervalTree

2015-03-18 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367824#comment-14367824
 ] 

Branimir Lambov commented on CASSANDRA-8988:


Have you pushed these optimizations? [The latest commit I'm 
seeing|https://github.com/apache/cassandra/commit/1f8dfbb90de126737304224f7ce1cef4424ca4b3]
 in this branch is the same as the attached patch.

 Optimise IntervalTree
 -

 Key: CASSANDRA-8988
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8988
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Trivial
 Fix For: 2.1.4

 Attachments: 8988.txt


 We perform a lot of unnecessary comparisons in 
 IntervalTree.IntervalNode.searchInternal.
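
For context, the kind of comparison-saving usually available here is the classic centred interval tree trick: the intervals held at a node are sorted by both endpoints, so a search can stop scanning at the first non-matching interval instead of comparing against every interval at the node. The sketch below is only an illustration of that idea with hypothetical types; it is not the attached 8988.txt patch.

{code}
import java.util.List;

// Rough sketch with hypothetical types; not the attached 8988.txt patch.
final class Node
{
    final int center;            // split point of this node
    final List<int[]> byMin;     // intervals {min, max} crossing center, sorted by min ascending
    final List<int[]> byMax;     // the same intervals, sorted by max ascending
    final Node left, right;      // subtrees entirely below / above center

    Node(int center, List<int[]> byMin, List<int[]> byMax, Node left, Node right)
    {
        this.center = center;
        this.byMin = byMin;
        this.byMax = byMax;
        this.left = left;
        this.right = right;
    }

    void searchPoint(int point, List<int[]> results)
    {
        if (point < center)
        {
            // Only intervals starting at or before the query can match; stop at the
            // first interval whose min is past the point instead of scanning them all.
            for (int[] iv : byMin)
            {
                if (iv[0] > point)
                    break;
                results.add(iv);
            }
            if (left != null)
                left.searchPoint(point, results);
        }
        else
        {
            // Symmetric case: scan from the largest max downwards and stop at the
            // first interval that ends before the query point.
            for (int i = byMax.size() - 1; i >= 0; i--)
            {
                int[] iv = byMax.get(i);
                if (iv[1] < point)
                    break;
                results.add(iv);
            }
            if (right != null)
                right.searchPoint(point, results);
        }
    }
}
{code}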



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8988) Optimise IntervalTree

2015-03-18 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367832#comment-14367832
 ] 

Benedict commented on CASSANDRA-8988:
-

Whoops. Some git mistyping. Force pushed an update.

 Optimise IntervalTree
 -

 Key: CASSANDRA-8988
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8988
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Trivial
 Fix For: 2.1.4

 Attachments: 8988.txt


 We perform a lot of unnecessary comparisons in 
 IntervalTree.IntervalNode.searchInternal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-6574) key cache shrinks on restart

2015-03-18 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson resolved CASSANDRA-6574.

Resolution: Cannot Reproduce

Closing as Cannot Reproduce for now, due to > 1 month response time from the 
reporter. If this is still an issue, please re-open.

 key cache shrinks on restart
 

 Key: CASSANDRA-6574
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6574
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 1.2.12 + patches
Reporter: Chris Burroughs
 Attachments: key-cache-entries.png, key-cache-rate.png, key-cache.png


 During a rolling restart the number of entries in the key cache is shrinking. 
 That is, far fewer entries are loaded than are saved. This has obvious bad 
 consequences for post-restart performance.
 {noformat}
 key_cache_size_in_mb: 48
 key_cache_save_period: 900
 # Number of keys from the key cache to save
 # Disabled by default, meaning all keys are going to be saved
 # key_cache_keys_to_save: 100
 row_cache_size_in_mb: 256
 row_cache_save_period: 300
 row_cache_keys_to_save: 5
 row_cache_provider: SerializingCacheProvider
 saved_caches_directory: /home/cassandra/shared/saved_caches
 {noformat}
 Some log lines:
 {noformat}
  INFO [CompactionExecutor:24543] 2014-01-11 11:35:47,783 AutoSavingCache.java 
 (line 289) Saved KeyCache (398028 items) in 1252 ms
 *** RESTART ***
  INFO [main] 2014-01-11 11:44:59,608 AutoSavingCache.java (line 140) reading 
 saved cache /home/cassandra/shared/saved_caches/ks-cf1-KeyCache-b.db
  INFO [main] 2014-01-11 11:45:00,509 AutoSavingCache.java (line 140) reading 
 saved cache /home/cassandra/shared/saved_caches/ks-cf2-RowCache-b.db
  INFO [main] 2014-01-11 12:02:48,675 ColumnFamilyStore.java (line 452) 
 completed loading (1068166 ms; 5 keys) row cache for ks.cf2
  INFO [main] 2014-01-11 12:02:48,769 CassandraDaemon.java (line 291) 
 completed pre-loading (67760 keys) key cache.
  INFO [main] 2014-01-11 12:02:48,769 CassandraDaemon.java (line 294) 
 completed pre-loading (5 keys) row cache.
  INFO [CompactionExecutor:1] 2014-01-11 12:02:49,133 AutoSavingCache.java 
 (line 289) Saved RowCache (5 items) in 266 ms
  INFO [CompactionExecutor:2] 2014-01-11 12:02:49,575 AutoSavingCache.java 
 (line 289) Saved KeyCache (67760 items) in 707 ms
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-6500) SSTableSimpleWriters are not writing Summary.db

2015-03-18 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-6500:
---
Assignee: Yuki Morishita

 SSTableSimpleWriters are not writing Summary.db
 ---

 Key: CASSANDRA-6500
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6500
 Project: Cassandra
  Issue Type: Bug
Reporter: Yuki Morishita
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 2.0.14


 I noticed an ERROR from one of the tests in ColumnFamilyStoreTest reporting 
 that Summary.db is missing:
 
 ERROR 10:08:15,122 Missing component: 
 build/test/cassandra/data/Keyspace1/Standard3/Keyspace1-Standard3-jb-1-Summary.db
 
 Looks like this is due to the change in CASSANDRA-5894.
 SSTableSimpleWriter#close changed to call SSTableWriter#close instead of 
 SSTW#closeAndOpenReader, which does not call SSTableReader.saveSummary 
 anymore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7048) Cannot get comparator 2 in CompositeType

2015-03-18 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367851#comment-14367851
 ] 

Philip Thompson commented on CASSANDRA-7048:


[~0x6e6562], is this an issue that you're still having or able to reproduce on 
a newer version?

 Cannot get comparator 2 in CompositeType
 

 Key: CASSANDRA-7048
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7048
 Project: Cassandra
  Issue Type: Bug
 Environment: Archlinux, AWS m1.large
Reporter: Ben Hood
 Attachments: cassandra.log.zip


 I've left a Cassandra instance in limbo for the last days, meaning that it 
 has been happily serving read requests, but I've cut off the data ingress, 
 but I was doing some read-only development.
 After not writing anything to Cassandra for a few days, I got the following 
 error for the first write to Cassandra:
 Caused by: java.lang.RuntimeException: Cannot get comparator 2 in 
 org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.TimestampType,org.apache.cassandra.db.marshal.UTF8Type).
  This might due to a mismatch between the schema and the data read
 at 
 org.apache.cassandra.db.marshal.CompositeType.getComparator(CompositeType.java:133)
 at 
 org.apache.cassandra.db.marshal.AbstractCompositeType.split(AbstractCompositeType.java:137)
 at 
 org.apache.cassandra.db.filter.ColumnCounter$GroupByPrefix.count(ColumnCounter.java:115)
 at 
 org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:192)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
 at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
 at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1551)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1380)
 at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:327)
 at 
 org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
 at 
 org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1341)
 at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1896)
 ... 3 more
 Caused by: java.lang.IndexOutOfBoundsException: index (2) must be less than 
 size (2)
 at 
 com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:306)
 at 
 com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:285)
 at 
 com.google.common.collect.RegularImmutableList.get(RegularImmutableList.java:65)
 at 
 org.apache.cassandra.db.marshal.CompositeType.getComparator(CompositeType.java:124)
 ... 17 more
 I'm not sure whether this is the root cause, so I'm attaching the server log 
 file.
 I'm going to try to investigate a bit further, to see what changes, if any 
 the application driver introduced.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7048) Cannot get comparator 2 in CompositeType

2015-03-18 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-7048:
---
Description: 
I've left a Cassandra instance in limbo for the last days, meaning that it has 
been happily serving read requests, but I've cut off the data ingress, but I 
was doing some read-only development.

After not writing anything to Cassandra for a few days, I got the following 
error for the first write to Cassandra:
{code}
Caused by: java.lang.RuntimeException: Cannot get comparator 2 in 
org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.TimestampType,org.apache.cassandra.db.marshal.UTF8Type).
 This might due to a mismatch between the schema and the data read
at 
org.apache.cassandra.db.marshal.CompositeType.getComparator(CompositeType.java:133)
at 
org.apache.cassandra.db.marshal.AbstractCompositeType.split(AbstractCompositeType.java:137)
at 
org.apache.cassandra.db.filter.ColumnCounter$GroupByPrefix.count(ColumnCounter.java:115)
at 
org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:192)
at 
org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
at 
org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
at 
org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
at 
org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
at 
org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
at 
org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1551)
at 
org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1380)
at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:327)
at 
org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
at 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1341)
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1896)
... 3 more
Caused by: java.lang.IndexOutOfBoundsException: index (2) must be less than 
size (2)
at 
com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:306)
at 
com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:285)
at 
com.google.common.collect.RegularImmutableList.get(RegularImmutableList.java:65)
at 
org.apache.cassandra.db.marshal.CompositeType.getComparator(CompositeType.java:124)
... 17 more{code}


I'm not sure whether this is the root cause, so I'm attaching the server log 
file.

I'm going to try to investigate a bit further, to see what changes, if any the 
application driver introduced.

  was:
I've left a Cassandra instance in limbo for the last days, meaning that it has 
been happily serving read requests, but I've cut off the data ingress, but I 
was doing some read-only development.

After not writing anything to Cassandra for a few days, I got the following 
error for the first write to Cassandra:

Caused by: java.lang.RuntimeException: Cannot get comparator 2 in 
org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.TimestampType,org.apache.cassandra.db.marshal.UTF8Type).
 This might due to a mismatch between the schema and the data read
at 
org.apache.cassandra.db.marshal.CompositeType.getComparator(CompositeType.java:133)
at 
org.apache.cassandra.db.marshal.AbstractCompositeType.split(AbstractCompositeType.java:137)
at 
org.apache.cassandra.db.filter.ColumnCounter$GroupByPrefix.count(ColumnCounter.java:115)
at 
org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:192)
at 
org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
at 
org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
at 
org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
at 
org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
at 
org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
at 
org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1551)
at 
org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1380)
at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:327)
at 
org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
at 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1341)
at 

[jira] [Commented] (CASSANDRA-7280) Hadoop support not respecting cassandra.input.split.size

2015-03-18 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367860#comment-14367860
 ] 

Philip Thompson commented on CASSANDRA-7280:


Just to follow up, this property is not being respected ever? Or just in ALLOW 
FILTERING queries? If ever, I think that's no longer the case in trunk, but I'm 
not sure what interaction it might have with filtering.

 Hadoop support not respecting cassandra.input.split.size
 

 Key: CASSANDRA-7280
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7280
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: Jeremy Hanna

 Long ago (0.7), I tried to set the cassandra.input.split.size property and 
 never really got it to respect that property.  However the batch size was 
 useful for what I needed to affect the timeouts.
 Now with the cql record reader and the native paging, users can specify 
 queries potentially using allow filtering clauses.  The input split size is 
 more important because the server may have to scan through many many records 
 to get matching records.  If the user can effectively set the input split 
 size, then that gives a hard limit on how many records it will traverse.
 Currently it appears to be overriding the property, perhaps in the 
 client.describe_splits_ex method on the server side.
 It can be argued that users shouldn't be using allow filtering clauses in 
 their cql in the first place.  However it is still a bug that the input split 
 size is not honored.
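
For reference, the property is normally set on the Hadoop job configuration before the Cassandra input format is used; whether the server-side split generation then honours it is exactly what this ticket disputes. A minimal sketch, assuming a plain Hadoop Configuration (the class name and the value of 1000 are just placeholders):

{code}
import org.apache.hadoop.conf.Configuration;

// Minimal sketch: set the split size on the job configuration before submitting
// the Hadoop job. The property name is the one from this ticket's summary; the
// value is (roughly) how many rows the client asks the server to put in one split.
public class SplitSizeConfig
{
    public static Configuration withSplitSize(Configuration conf, int rowsPerSplit)
    {
        conf.setInt("cassandra.input.split.size", rowsPerSplit);
        // org.apache.cassandra.hadoop.ConfigHelper has historically exposed a
        // setInputSplitSize(...) helper that writes the same property.
        return conf;
    }

    public static void main(String[] args)
    {
        Configuration conf = withSplitSize(new Configuration(), 1000);
        System.out.println(conf.get("cassandra.input.split.size"));
    }
}
{code}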



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8880) Add metrics to monitor the amount of tombstones created

2015-03-18 Thread Chris Lohfink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367865#comment-14367865
 ] 

Chris Lohfink commented on CASSANDRA-8880:
--

This won't capture top-level row/range tombstones:
{code}
create keyspace test WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': 1}
create TABLE test.blarg (key text, value text, PRIMARY KEY (key));
create table test.cf ( key text, col text, val text, PRIMARY KEY (key, col));

INSERT INTO test.blarg (key, value) VALUES ('1', '2');
DELETE FROM test.blarg WHERE key = '1';

INSERT INTO test.cf (key, col, val) VALUES ('1', '2', '3');
delete from test.cf WHERE key = '1' AND col = '2';

{code}
for example won't be counted at all, since the ColumnFamily iterable would be 
empty. 

It might be hard to even know how many tombstones the range/row deletions will 
cause on reads without doing one, so I think the tombstones-scanned metric will 
still be pretty important.
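
To make the gap concrete: a counter that only walks the cells of an incoming write never sees a partition- or range-level deletion, because such a deletion carries no cells at all. The sketch below uses hypothetical shapes purely for illustration; it is not Cassandra's internal write path.

{code}
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical shapes for illustration; not Cassandra's internal write path.
final class TombstoneWriteCounters
{
    final AtomicLong writtenCells = new AtomicLong();
    final AtomicLong writtenTombstoneCells = new AtomicLong();
    final AtomicLong writtenTopLevelDeletions = new AtomicLong();

    static final class Cell
    {
        final boolean tombstone;   // single-cell delete (or expired TTL)
        Cell(boolean tombstone) { this.tombstone = tombstone; }
    }

    /**
     * Count one incoming write. A DELETE of a whole partition or of a clustering
     * range arrives with an empty cell list, which is why a cells-only
     * writtenTombstoneCells metric would report zero for the example above.
     */
    void onWrite(List<Cell> cells, boolean hasTopLevelDeletion)
    {
        for (Cell c : cells)
        {
            writtenCells.incrementAndGet();
            if (c.tombstone)
                writtenTombstoneCells.incrementAndGet();
        }
        if (hasTopLevelDeletion)
            writtenTopLevelDeletions.incrementAndGet();
    }
}
{code}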

 Add metrics to monitor the amount of tombstones created
 ---

 Key: CASSANDRA-8880
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8880
 Project: Cassandra
  Issue Type: Improvement
Reporter: Michaël Figuière
Assignee: Lyuben Todorov
Priority: Minor
  Labels: metrics
 Fix For: 2.1.4

 Attachments: cassandra-2.1-8880.patch


 AFAIK there's currently no way to monitor the amount of tombstones created on 
 a Cassandra node. CASSANDRA-6057 has made it possible for users to figure out 
 how many tombstones are scanned at read time, but in write mostly workloads, 
 it may not be possible to realize if some inappropriate queries are 
 generating too many tombstones.
 Therefore the following additional metrics should be added:
 * {{writtenCells}}: amount of cells that have been written
 * {{writtenTombstoneCells}}: amount of tombstone cells that have been written
 Alternatively these could be exposed as a single gauge such as 
 {{writtenTombstoneCellsRatio}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8851) Uncaught exception on thread Thread[SharedPool-Worker-16,5,main] after upgrade to 2.1.3

2015-03-18 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367870#comment-14367870
 ] 

Björn Hachmann commented on CASSANDRA-8851:
---

Hi [~benedict],

you can download the data-files 171591 here:
http://clients.metrigo.com/data_file_171591.tar.gz.gpg

I have encrypted the archive using OpenPGP and will provide you the passphrase 
in a separate e-mail.

It is interesting that you mention these files, as their timestamps are from the 
day we upgraded from 2.0 to 2.1. Unfortunately the error message does not 
contain a file name, so we haven't connected the error with those files yet. 
Furthermore there are obviously many more NPEs than occurrences of those file 
names in the log files. But hopefully a clue!


 Uncaught exception on thread Thread[SharedPool-Worker-16,5,main] after 
 upgrade to 2.1.3
 ---

 Key: CASSANDRA-8851
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8851
 Project: Cassandra
  Issue Type: Bug
 Environment: ubuntu 
Reporter: Tobias Schlottke
Assignee: Benedict
Priority: Critical
 Fix For: 2.1.4

 Attachments: schema.txt, system.log.gz


 Hi there,
 after upgrading to 2.1.3 we've got the following error every few seconds:
 {code}
 WARN  [SharedPool-Worker-16] 2015-02-23 10:20:36,392 
 AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
 Thread[SharedPool-Worker-16,5,main]: {}
 java.lang.AssertionError: null
   at org.apache.cassandra.io.util.Memory.size(Memory.java:307) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.utils.obs.OffHeapBitSet.capacity(OffHeapBitSet.java:61) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at org.apache.cassandra.utils.BloomFilter.indexes(BloomFilter.java:74) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.utils.BloomFilter.isPresent(BloomFilter.java:98) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:1366)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:1350)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.columniterator.SSTableSliceIterator.<init>(SSTableSliceIterator.java:41)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:185)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:273)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:62)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1915)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1748)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:342) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:57)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:47) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:62) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_45]
   at 
 org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
 [apache-cassandra-2.1.3.jar:2.1.3]
   at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
 {code}
 This seems to crash the compactions and pushes up server load and piles up 
 compactions.
 Any idea / possible workaround?
 Best,
 Tobias



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

