[jira] [Created] (CASSANDRA-8586) support millions of sstables by lazily acquiring/caching/dropping filehandles

2015-01-08 Thread Tupshin Harper (JIRA)
Tupshin Harper created CASSANDRA-8586:
-

 Summary: support millions of sstables by lazily 
acquiring/caching/dropping filehandles
 Key: CASSANDRA-8586
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8586
 Project: Cassandra
  Issue Type: New Feature
Reporter: Tupshin Harper
Assignee: Aleksey Yeschenko


This might turn into a meta ticket if other obstacles are found in the goal of 
supporting a huge number of sstables.

Technically, the only gap I know of that prevents us from supporting absurd 
numbers of sstables is the fact that we hold on to an open filehandle for every 
single sstable. 

For use cases that are willing to take a hit to read performance in order to 
achieve high densities and low write amplification, a mechanism for only 
retaining file handles for recently read sstables could be very valuable.

This would allow for alternate compaction strategies, and compaction strategy 
tuning, that don't try to optimize for read performance as aggressively.
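The mechanism described above can be sketched as a bounded LRU cache of file handles. This is an illustrative sketch only (class and method names are hypothetical, not Cassandra code): handles are opened lazily on read and the least-recently-used handle is closed once a cap is reached, instead of holding one open filehandle per sstable forever.

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.io.UncheckedIOException;
import java.util.LinkedHashMap;
import java.util.Map;

public class FileHandleCache {
    private final int maxOpen;
    private final LinkedHashMap<String, RandomAccessFile> handles;

    public FileHandleCache(int maxOpen) {
        this.maxOpen = maxOpen;
        // accessOrder=true makes iteration order least- to most-recently used
        this.handles = new LinkedHashMap<String, RandomAccessFile>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, RandomAccessFile> eldest) {
                if (size() <= FileHandleCache.this.maxOpen)
                    return false;
                try {
                    eldest.getValue().close(); // drop the coldest handle
                } catch (IOException ignored) {}
                return true;
            }
        };
    }

    // Lazily acquire: reopen the file only if its handle was dropped earlier.
    public synchronized RandomAccessFile acquire(String path) {
        RandomAccessFile raf = handles.get(path);
        if (raf == null) {
            try {
                raf = new RandomAccessFile(path, "r");
            } catch (FileNotFoundException e) {
                throw new UncheckedIOException(e);
            }
            handles.put(path, raf);
        }
        return raf;
    }

    public synchronized int openCount() { return handles.size(); }
}
```

A recently read sstable stays cheap to re-read (its handle is still cached), while cold sstables cost an extra open() on the next access, which is exactly the read-performance trade-off the ticket accepts.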




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8586) support millions of sstables by lazily acquiring/caching/dropping filehandles

2015-01-08 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-8586:
-
Fix Version/s: 3.1



[jira] [Updated] (CASSANDRA-8194) Reading from Auth table should not be in the request path

2015-01-08 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-8194:
---
Attachment: 8194-V5.txt

 Reading from Auth table should not be in the request path
 -

 Key: CASSANDRA-8194
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8194
 Project: Cassandra
  Issue Type: Improvement
Reporter: Vishy Kasar
Assignee: Aleksey Yeschenko
Priority: Minor
 Fix For: 2.0.12, 3.0

 Attachments: 8194-V2.patch, 8194-V3.txt, 8194-V4.txt, 8194-V5.txt, 
 8194.patch, CacheTest2.java


 We use PasswordAuthenticator and PasswordAuthorizer. The system_auth keyspace 
 has an RF of 10 per DC over 2 DCs, and permissions_validity_in_ms is 5 minutes. 
 We still have a few thousand requests failing each day with the trace below. 
 The cause is the read-cache request realizing that the cached entry has 
 expired and issuing a blocking request to refresh the cache. 
 The cache should be refreshed periodically in the background only; the user 
 request should simply look at the cache and not try to refresh it. 
 com.google.common.util.concurrent.UncheckedExecutionException: 
 java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - 
 received only 0 responses.
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2258)
   at com.google.common.cache.LocalCache.get(LocalCache.java:3990)
   at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3994)
   at 
 com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4878)
   at 
 org.apache.cassandra.service.ClientState.authorize(ClientState.java:292)
   at 
 org.apache.cassandra.service.ClientState.ensureHasPermission(ClientState.java:172)
   at 
 org.apache.cassandra.service.ClientState.hasAccess(ClientState.java:165)
   at 
 org.apache.cassandra.service.ClientState.hasColumnFamilyAccess(ClientState.java:149)
   at 
 org.apache.cassandra.cql3.statements.ModificationStatement.checkAccess(ModificationStatement.java:75)
   at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:102)
   at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:113)
   at 
 org.apache.cassandra.thrift.CassandraServer.execute_cql3_query(CassandraServer.java:1735)
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4162)
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4150)
   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
   at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:206)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
   at java.lang.Thread.run(Thread.java:722)
 Caused by: java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - 
 received only 0 responses.
   at org.apache.cassandra.auth.Auth.selectUser(Auth.java:256)
   at org.apache.cassandra.auth.Auth.isSuperuser(Auth.java:84)
   at 
 org.apache.cassandra.auth.AuthenticatedUser.isSuper(AuthenticatedUser.java:50)
   at 
 org.apache.cassandra.auth.CassandraAuthorizer.authorize(CassandraAuthorizer.java:68)
   at org.apache.cassandra.service.ClientState$1.load(ClientState.java:278)
   at org.apache.cassandra.service.ClientState$1.load(ClientState.java:275)
   at 
 com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3589)
   at 
 com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2374)
   at 
 com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2337)
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2252)
   ... 19 more
 Caused by: org.apache.cassandra.exceptions.ReadTimeoutException: Operation 
 timed out - received only 0 responses.
   at org.apache.cassandra.service.ReadCallback.get(ReadCallback.java:105)
   at 
 org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:943)
   at org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:828)
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:140)
   at org.apache.cassandra.auth.Auth.selectUser(Auth.java:245)
   ... 28 more
 ERROR [Thrift:17232] 2014-10-24 05:06:51,004 CustomTThreadPoolServer.java 
 (line 224) Error occurred during processing of message.
 

[jira] [Commented] (CASSANDRA-8194) Reading from Auth table should not be in the request path

2015-01-08 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14270280#comment-14270280
 ] 

Sam Tunnicliffe commented on CASSANDRA-8194:


I've attached a new patch (V5) which: 

* Splits the cache out of Auth into a separate class.
* Adds a new setting to yaml - permissions_update_interval_in_ms.
* Has the cache do both async refresh of active keys after the update 
interval & expiry of inactive keys after the validity period expires. (If 
errors are encountered during refresh, expiry still applies, so we won't serve 
stale data indefinitely.)
* Removes the AuthMBean, as the bean was never being registered.
* Defaults the new update interval setting to whatever 
permissions_validity_in_ms is set to, so default behaviour is preserved.
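The refresh-then-expire behaviour described in the patch notes can be sketched roughly as below. This is a hedged sketch, not the actual patch (which builds on Guava's CacheBuilder; class and method names here are illustrative): serve the cached value while an async reload runs once an entry is older than the update interval, and only block the caller once the validity period has fully passed.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class RefreshingCache<K, V> {
    public interface Loader<K, V> { V load(K key); }

    private static final class Entry<V> {
        final V value;
        final long loadedAt;
        volatile boolean refreshing;
        Entry(V value, long loadedAt) { this.value = value; this.loadedAt = loadedAt; }
    }

    private final ConcurrentHashMap<K, Entry<V>> map = new ConcurrentHashMap<>();
    private final ExecutorService refresher = Executors.newSingleThreadExecutor(r -> {
        Thread t = new Thread(r, "cache-refresher");
        t.setDaemon(true); // background refreshes shouldn't keep the JVM alive
        return t;
    });
    private final long updateIntervalMs;
    private final long validityMs;
    private final Loader<K, V> loader;

    public RefreshingCache(long updateIntervalMs, long validityMs, Loader<K, V> loader) {
        this.updateIntervalMs = updateIntervalMs;
        this.validityMs = validityMs;
        this.loader = loader;
    }

    public V get(K key) {
        long now = System.currentTimeMillis();
        Entry<V> e = map.get(key);
        if (e == null || now - e.loadedAt > validityMs) {
            // never loaded, or fully expired: the caller must block on a load
            V v = loader.load(key);
            map.put(key, new Entry<>(v, now));
            return v;
        }
        if (now - e.loadedAt > updateIntervalMs && !e.refreshing) {
            // eligible for refresh: reload asynchronously, keep serving old value
            e.refreshing = true;
            refresher.execute(() -> {
                try {
                    map.put(key, new Entry<>(loader.load(key), System.currentTimeMillis()));
                } catch (RuntimeException ex) {
                    // refresh failed: keep the old entry; expiry still applies,
                    // so stale data is never served indefinitely
                }
            });
        }
        return e.value;
    }
}
```

The key property is that a hot entry never puts a blocking auth-table read on the request path: readers keep getting the old value until the background reload lands, which is exactly what the ReadTimeoutException trace above shows was missing.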


cassandra git commit: Introduce background cache refreshing to permissions cache

2015-01-08 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 bd3c47ca7 -> e750ab238


Introduce background cache refreshing to permissions cache

patch by Sam Tunnicliffe; reviewed by Aleksey Yeschenko for
CASSANDRA-8194


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e750ab23
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e750ab23
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e750ab23

Branch: refs/heads/cassandra-2.0
Commit: e750ab238e07daa61180d2451ba90f819a4cf5a1
Parents: bd3c47c
Author: Sam Tunnicliffe s...@beobal.com
Authored: Fri Jan 9 04:02:32 2015 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Jan 9 04:02:32 2015 +0300

--
 CHANGES.txt |   2 +
 conf/cassandra.yaml |   8 ++
 src/java/org/apache/cassandra/auth/Auth.java|  55 ++
 .../org/apache/cassandra/auth/AuthMBean.java|  27 -
 .../apache/cassandra/auth/PermissionsCache.java | 108 +++
 .../org/apache/cassandra/config/Config.java |   2 +
 .../cassandra/config/DatabaseDescriptor.java|  10 +-
 .../apache/cassandra/service/ClientState.java   |  15 +--
 8 files changed, 138 insertions(+), 89 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e750ab23/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9ccbf45..adb374a 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.0.12:
+ * Introduce background cache refreshing to permissions cache
+   (CASSANDRA-8194)
  * Fix race condition in StreamTransferTask that could lead to
infinite loops and premature sstable deletion (CASSANDRA-7704)
  * Add an extra version check to MigrationTask (CASSANDRA-8462)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e750ab23/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 5eaffc2..45290aa 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -79,6 +79,14 @@ authorizer: AllowAllAuthorizer
 # Will be disabled automatically for AllowAllAuthorizer.
 permissions_validity_in_ms: 2000
 
+# Refresh interval for permissions cache (if enabled).
+# After this interval, cache entries become eligible for refresh. Upon next
+# access, an async reload is scheduled and the old value returned until it
+# completes. If permissions_validity_in_ms is non-zero, then this must be
+# also.
+# Defaults to the same value as permissions_validity_in_ms.
+# permissions_update_interval_in_ms: 1000
+
 # The partitioner is responsible for distributing groups of rows (by
 # partition key) across nodes in the cluster.  You should leave this
 # alone for new clusters.  The partitioner can NOT be changed without

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e750ab23/src/java/org/apache/cassandra/auth/Auth.java
--
diff --git a/src/java/org/apache/cassandra/auth/Auth.java 
b/src/java/org/apache/cassandra/auth/Auth.java
index 94d4b3d..465643d 100644
--- a/src/java/org/apache/cassandra/auth/Auth.java
+++ b/src/java/org/apache/cassandra/auth/Auth.java
@@ -20,9 +20,6 @@ package org.apache.cassandra.auth;
 import java.util.Set;
 import java.util.concurrent.TimeUnit;
 
-import com.google.common.cache.CacheBuilder;
-import com.google.common.cache.CacheLoader;
-import com.google.common.cache.LoadingCache;
 import com.google.common.collect.ImmutableMap;
 import com.google.common.collect.Lists;
 import org.apache.commons.lang3.StringUtils;
@@ -32,9 +29,9 @@ import org.slf4j.LoggerFactory;
 import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.config.KSMetaData;
 import org.apache.cassandra.config.Schema;
-import org.apache.cassandra.cql3.UntypedResultSet;
-import org.apache.cassandra.cql3.QueryProcessor;
 import org.apache.cassandra.cql3.QueryOptions;
+import org.apache.cassandra.cql3.QueryProcessor;
+import org.apache.cassandra.cql3.UntypedResultSet;
 import org.apache.cassandra.cql3.statements.SelectStatement;
 import org.apache.cassandra.db.ConsistencyLevel;
 import org.apache.cassandra.exceptions.RequestExecutionException;
@@ -43,9 +40,8 @@ import org.apache.cassandra.locator.SimpleStrategy;
 import org.apache.cassandra.service.*;
 import org.apache.cassandra.transport.messages.ResultMessage;
 import org.apache.cassandra.utils.ByteBufferUtil;
-import org.apache.cassandra.utils.Pair;
 
-public class Auth implements AuthMBean
+public class Auth
 {
 private static final Logger logger = LoggerFactory.getLogger(Auth.class);
 
@@ -57,8 +53,10 @@ public class Auth implements AuthMBean
 public static final String 

[2/2] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-01-08 Thread aleksey
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/auth/Auth.java
src/java/org/apache/cassandra/service/ClientState.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8c5a959c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8c5a959c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8c5a959c

Branch: refs/heads/cassandra-2.1
Commit: 8c5a959c97729c5fbd536bf0f47cf6330c0bddbc
Parents: 5674a96 e750ab2
Author: Aleksey Yeschenko alek...@apache.org
Authored: Fri Jan 9 04:08:05 2015 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Jan 9 04:08:05 2015 +0300

--
 CHANGES.txt |   2 +
 conf/cassandra.yaml |   8 ++
 src/java/org/apache/cassandra/auth/Auth.java|  55 ++
 .../org/apache/cassandra/auth/AuthMBean.java|  27 -
 .../apache/cassandra/auth/PermissionsCache.java | 108 +++
 .../org/apache/cassandra/config/Config.java |   2 +
 .../cassandra/config/DatabaseDescriptor.java|  10 +-
 .../apache/cassandra/service/ClientState.java   |  15 +--
 8 files changed, 138 insertions(+), 89 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8c5a959c/CHANGES.txt
--
diff --cc CHANGES.txt
index dac555b,adb374a..57c2f49
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,55 -1,6 +1,57 @@@
 -2.0.12:
 +2.1.3
 + * Fix case-sensitivity of index name on CREATE and DROP INDEX
 +   statements (CASSANDRA-8365)
 + * Better detection/logging for corruption in compressed sstables 
(CASSANDRA-8192)
 + * Use the correct repairedAt value when closing writer (CASSANDRA-8570)
 + * (cqlsh) Handle a schema mismatch being detected on startup (CASSANDRA-8512)
 + * Properly calculate expected write size during compaction (CASSANDRA-8532)
 + * Invalidate affected prepared statements when a table's columns
 +   are altered (CASSANDRA-7910)
 + * Stress - user defined writes should populate sequentally (CASSANDRA-8524)
 + * Fix regression in SSTableRewriter causing some rows to become unreadable 
 +   during compaction (CASSANDRA-8429)
 + * Run major compactions for repaired/unrepaired in parallel (CASSANDRA-8510)
 + * (cqlsh) Fix compression options in DESCRIBE TABLE output when compression
 +   is disabled (CASSANDRA-8288)
 + * (cqlsh) Fix DESCRIBE output after keyspaces are altered (CASSANDRA-7623)
 + * Make sure we set lastCompactedKey correctly (CASSANDRA-8463)
 + * (cqlsh) Fix output of CONSISTENCY command (CASSANDRA-8507)
 + * (cqlsh) Fixed the handling of LIST statements (CASSANDRA-8370)
 + * Make sstablescrub check leveled manifest again (CASSANDRA-8432)
 + * Check first/last keys in sstable when giving out positions (CASSANDRA-8458)
 + * Disable mmap on Windows (CASSANDRA-6993)
 + * Add missing ConsistencyLevels to cassandra-stress (CASSANDRA-8253)
 + * Add auth support to cassandra-stress (CASSANDRA-7985)
 + * Fix ArrayIndexOutOfBoundsException when generating error message
 +   for some CQL syntax errors (CASSANDRA-8455)
 + * Scale memtable slab allocation logarithmically (CASSANDRA-7882)
 + * cassandra-stress simultaneous inserts over same seed (CASSANDRA-7964)
 + * Reduce cassandra-stress sampling memory requirements (CASSANDRA-7926)
 + * Ensure memtable flush cannot expire commit log entries from its future 
(CASSANDRA-8383)
 + * Make read defrag async to reclaim memtables (CASSANDRA-8459)
 + * Remove tmplink files for offline compactions (CASSANDRA-8321)
 + * Reduce maxHintsInProgress (CASSANDRA-8415)
 + * BTree updates may call provided update function twice (CASSANDRA-8018)
 + * Release sstable references after anticompaction (CASSANDRA-8386)
 + * Handle abort() in SSTableRewriter properly (CASSANDRA-8320)
 + * Fix high size calculations for prepared statements (CASSANDRA-8231)
 + * Centralize shared executors (CASSANDRA-8055)
 + * Fix filtering for CONTAINS (KEY) relations on frozen collection
 +   clustering columns when the query is restricted to a single
 +   partition (CASSANDRA-8203)
 + * Do more aggressive entire-sstable TTL expiry checks (CASSANDRA-8243)
 + * Add more log info if readMeter is null (CASSANDRA-8238)
 + * add check of the system wall clock time at startup (CASSANDRA-8305)
 + * Support for frozen collections (CASSANDRA-7859)
 + * Fix overflow on histogram computation (CASSANDRA-8028)
 + * Have paxos reuse the timestamp generation of normal queries 
(CASSANDRA-7801)
 + * Fix incremental repair not remove parent session on remote (CASSANDRA-8291)
 + * Improve JBOD disk utilization (CASSANDRA-7386)
 + * Log failed host when preparing incremental repair (CASSANDRA-8228)
 + * Force config client mode in 

[1/2] cassandra git commit: Introduce background cache refreshing to permissions cache

2015-01-08 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 5674a96d4 -> 8c5a959c9


Introduce background cache refreshing to permissions cache

patch by Sam Tunnicliffe; reviewed by Aleksey Yeschenko for
CASSANDRA-8194


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e750ab23
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e750ab23
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e750ab23

Branch: refs/heads/cassandra-2.1
Commit: e750ab238e07daa61180d2451ba90f819a4cf5a1
Parents: bd3c47c
Author: Sam Tunnicliffe s...@beobal.com
Authored: Fri Jan 9 04:02:32 2015 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Jan 9 04:02:32 2015 +0300

--
 CHANGES.txt |   2 +
 conf/cassandra.yaml |   8 ++
 src/java/org/apache/cassandra/auth/Auth.java|  55 ++
 .../org/apache/cassandra/auth/AuthMBean.java|  27 -
 .../apache/cassandra/auth/PermissionsCache.java | 108 +++
 .../org/apache/cassandra/config/Config.java |   2 +
 .../cassandra/config/DatabaseDescriptor.java|  10 +-
 .../apache/cassandra/service/ClientState.java   |  15 +--
 8 files changed, 138 insertions(+), 89 deletions(-)
--



cassandra git commit: use parameterized logging

2015-01-08 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk dde4d8293 -> a289e71bd


use parameterized logging


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a289e71b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a289e71b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a289e71b

Branch: refs/heads/trunk
Commit: a289e71bd883bce6b78169b64c95cd6c2e1a8853
Parents: dde4d82
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Fri Jan 9 00:33:25 2015 -0500
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Fri Jan 9 00:33:25 2015 -0500

--
 src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a289e71b/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java
--
diff --git a/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java 
b/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java
index 6b40864..e98d3d2 100644
--- a/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java
+++ b/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java
@@ -150,7 +150,7 @@ public class CommitLogSegment
 }
 else
 {
-logger.debug("Creating new CommitLog segment: " + logFile);
+logger.debug("Creating new CommitLog segment: {}", logFile);
 }
 }
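The diff above swaps string concatenation for the slf4j parameterized form. A toy logger (not slf4j) showing why that matters: with concatenation the message string is built even when debug logging is disabled, while the `{}` form defers formatting until the logger has checked that the level is enabled.

```java
public class ParamLogging {
    static boolean debugEnabled = false;
    static int formatCount = 0;

    // slf4j-style parameterized call: formatting only happens if enabled
    static void debug(String template, Object arg) {
        if (!debugEnabled)
            return; // no substitution, no string building
        formatCount++;
        System.out.println(template.replace("{}", String.valueOf(arg)));
    }

    public static void main(String[] args) {
        // with the parameterized form, nothing is formatted while disabled
        debug("Creating new CommitLog segment: {}", "CommitLog-1.log");
        if (formatCount != 0)
            throw new AssertionError("formatted while debug was disabled");
    }
}
```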
 



[jira] [Created] (CASSANDRA-8588) Fix DropTypeStatements isusedBy for maps (typo ignored values)

2015-01-08 Thread Dave Brosius (JIRA)
Dave Brosius created CASSANDRA-8588:
---

 Summary: Fix DropTypeStatements isusedBy for maps (typo ignored 
values)
 Key: CASSANDRA-8588
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8588
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Dave Brosius
Assignee: Dave Brosius
Priority: Trivial
 Fix For: 2.1.3
 Attachments: is_used_by_maps.txt

A simple typo caused the values of maps not to be checked; instead, the key was 
checked twice.
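A hypothetical reconstruction of the bug class the ticket describes (the real method lives in the drop-type statement code; these signatures are simplified stand-ins, not Cassandra's): a copy/paste typo tests a map column's key type twice and never tests its value type.

```java
public class IsUsedByMap {
    // buggy: the value type is never consulted
    static boolean buggyIsUsedBy(String droppedType, String keyType, String valueType) {
        return keyType.equals(droppedType) || keyType.equals(droppedType);
    }

    // fixed: both the key type and the value type are checked
    static boolean fixedIsUsedBy(String droppedType, String keyType, String valueType) {
        return keyType.equals(droppedType) || valueType.equals(droppedType);
    }
}
```

With the buggy version, dropping a user type that appears only as a map *value* is wrongly allowed, since only the key side is ever compared.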





[1/2] cassandra git commit: fix typo causing bad format string marker

2015-01-08 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 8c5a959c9 -> 818ec3310


fix typo causing bad format string marker


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ed54e808
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ed54e808
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ed54e808

Branch: refs/heads/cassandra-2.1
Commit: ed54e80855a37f88a8625e1ecf9ce574f0d4081d
Parents: e750ab2
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Fri Jan 9 00:27:47 2015 -0500
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Fri Jan 9 00:27:47 2015 -0500

--
 .../db/compaction/DateTieredCompactionStrategyTest.java  | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ed54e808/test/unit/org/apache/cassandra/db/compaction/DateTieredCompactionStrategyTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/db/compaction/DateTieredCompactionStrategyTest.java
 
b/test/unit/org/apache/cassandra/db/compaction/DateTieredCompactionStrategyTest.java
index 299e1af..84230da 100644
--- 
a/test/unit/org/apache/cassandra/db/compaction/DateTieredCompactionStrategyTest.java
+++ 
b/test/unit/org/apache/cassandra/db/compaction/DateTieredCompactionStrategyTest.java
@@ -70,7 +70,7 @@ public class DateTieredCompactionStrategyTest extends 
SchemaLoader
 {
  options.put(DateTieredCompactionStrategyOptions.BASE_TIME_KEY, "-1337");
 validateOptions(options);
-fail(String.format("%Negative %s should be rejected", 
DateTieredCompactionStrategyOptions.BASE_TIME_KEY));
+fail(String.format("Negative %s should be rejected", 
DateTieredCompactionStrategyOptions.BASE_TIME_KEY));
 }
 catch (ConfigurationException e)
 {
@@ -81,7 +81,7 @@ public class DateTieredCompactionStrategyTest extends 
SchemaLoader
 {
 
 options.put(DateTieredCompactionStrategyOptions.MAX_SSTABLE_AGE_KEY, "-1337");
 validateOptions(options);
-fail(String.format("%Negative %s should be rejected", 
DateTieredCompactionStrategyOptions.MAX_SSTABLE_AGE_KEY));
+fail(String.format("Negative %s should be rejected", 
DateTieredCompactionStrategyOptions.MAX_SSTABLE_AGE_KEY));
 }
 catch (ConfigurationException e)
 {

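The one-character typo fixed above matters at runtime: `%N` is not a valid java.util.Formatter conversion, so the pre-fix format string throws instead of producing the intended assertion message.

```java
import java.util.UnknownFormatConversionException;

public class FormatTypo {
    public static void main(String[] args) {
        try {
            // the buggy string: "%N" is rejected by the formatter
            String.format("%Negative %s should be rejected", "base_time_seconds");
            throw new AssertionError("expected UnknownFormatConversionException");
        } catch (UnknownFormatConversionException expected) {
            // the corrected format string works as intended
            System.out.println(String.format("Negative %s should be rejected",
                                             "base_time_seconds"));
        }
    }
}
```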


[2/2] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-01-08 Thread dbrosius
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/818ec331
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/818ec331
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/818ec331

Branch: refs/heads/cassandra-2.1
Commit: 818ec33107be17e000b182ab682013528a596d9b
Parents: 8c5a959 ed54e80
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Fri Jan 9 00:28:30 2015 -0500
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Fri Jan 9 00:28:30 2015 -0500

--
 .../db/compaction/DateTieredCompactionStrategyTest.java  | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/818ec331/test/unit/org/apache/cassandra/db/compaction/DateTieredCompactionStrategyTest.java
--



cassandra git commit: fix typo causing bad format string marker

2015-01-08 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 e750ab238 -> ed54e8085


fix typo causing bad format string marker


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ed54e808
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ed54e808
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ed54e808

Branch: refs/heads/cassandra-2.0
Commit: ed54e80855a37f88a8625e1ecf9ce574f0d4081d
Parents: e750ab2
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Fri Jan 9 00:27:47 2015 -0500
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Fri Jan 9 00:27:47 2015 -0500

--
 .../db/compaction/DateTieredCompactionStrategyTest.java  | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ed54e808/test/unit/org/apache/cassandra/db/compaction/DateTieredCompactionStrategyTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/db/compaction/DateTieredCompactionStrategyTest.java
 
b/test/unit/org/apache/cassandra/db/compaction/DateTieredCompactionStrategyTest.java
index 299e1af..84230da 100644
--- 
a/test/unit/org/apache/cassandra/db/compaction/DateTieredCompactionStrategyTest.java
+++ 
b/test/unit/org/apache/cassandra/db/compaction/DateTieredCompactionStrategyTest.java
@@ -70,7 +70,7 @@ public class DateTieredCompactionStrategyTest extends 
SchemaLoader
 {
 options.put(DateTieredCompactionStrategyOptions.BASE_TIME_KEY, 
"-1337");
 validateOptions(options);
-fail(String.format("%Negative %s should be rejected", 
DateTieredCompactionStrategyOptions.BASE_TIME_KEY));
+fail(String.format("Negative %s should be rejected", 
DateTieredCompactionStrategyOptions.BASE_TIME_KEY));
 }
 catch (ConfigurationException e)
 {
@@ -81,7 +81,7 @@ public class DateTieredCompactionStrategyTest extends 
SchemaLoader
 {
 
options.put(DateTieredCompactionStrategyOptions.MAX_SSTABLE_AGE_KEY, "-1337");
 validateOptions(options);
-fail(String.format("%Negative %s should be rejected", 
DateTieredCompactionStrategyOptions.MAX_SSTABLE_AGE_KEY));
+fail(String.format("Negative %s should be rejected", 
DateTieredCompactionStrategyOptions.MAX_SSTABLE_AGE_KEY));
 }
 catch (ConfigurationException e)
 {
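The stray '%' removed by this commit is enough to make String.format throw at runtime rather than produce an odd message: "%N" is parsed as a conversion specifier, and 'N' is not a valid conversion. A minimal standalone illustration (plain Java, not Cassandra code):

```java
import java.util.IllegalFormatException;

public class FormatMarkerDemo {
    // Returns true when String.format cannot process the format string.
    static boolean isBadFormat(String fmt, Object... args) {
        try {
            String.format(fmt, args);
            return false;
        } catch (IllegalFormatException e) {
            // "%N" triggers UnknownFormatConversionException, a subclass
            // of IllegalFormatException.
            return true;
        }
    }

    public static void main(String[] args) {
        // The accidental '%' turns the word "Negative" into conversion "%N".
        System.out.println(isBadFormat("%Negative %s should be rejected", "base_time_seconds"));
        // With the typo removed, the format string is valid.
        System.out.println(isBadFormat("Negative %s should be rejected", "base_time_seconds"));
    }
}
```

So before the fix, the test's fail() call itself would have blown up with a format exception instead of reporting the intended assertion message.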



[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-01-08 Thread marcuse
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d1a552dd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d1a552dd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d1a552dd

Branch: refs/heads/trunk
Commit: d1a552dd7882f9ddb88e0e58d1af46791093
Parents: a289e71 14b2d7a
Author: Marcus Eriksson marc...@apache.org
Authored: Fri Jan 9 07:34:30 2015 +0100
Committer: Marcus Eriksson marc...@apache.org
Committed: Fri Jan 9 07:34:30 2015 +0100

--
 CHANGES.txt   | 1 +
 .../org/apache/cassandra/db/compaction/CompactionManager.java | 3 ++-
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d1a552dd/CHANGES.txt
--
diff --cc CHANGES.txt
index 8686d6c,2028633..6d364cc
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,50 -1,5 +1,51 @@@
 +3.0
 + * Switch CommitLogSegment from RandomAccessFile to nio (CASSANDRA-8308)
 + * Allow mixing token and partition key restrictions (CASSANDRA-7016)
 + * Support index key/value entries on map collections (CASSANDRA-8473)
 + * Modernize schema tables (CASSANDRA-8261)
 + * Support for user-defined aggregation functions (CASSANDRA-8053)
 + * Fix NPE in SelectStatement with empty IN values (CASSANDRA-8419)
 + * Refactor SelectStatement, return IN results in natural order instead
 +   of IN value list order (CASSANDRA-7981)
 + * Support UDTs, tuples, and collections in user-defined
 +   functions (CASSANDRA-7563)
 + * Fix aggregate fn results on empty selection, result column name,
 +   and cqlsh parsing (CASSANDRA-8229)
 + * Mark sstables as repaired after full repair (CASSANDRA-7586)
 + * Extend Descriptor to include a format value and refactor reader/writer
 +   APIs (CASSANDRA-7443)
 + * Integrate JMH for microbenchmarks (CASSANDRA-8151)
 + * Keep sstable levels when bootstrapping (CASSANDRA-7460)
 + * Add Sigar library and perform basic OS settings check on startup 
(CASSANDRA-7838)
 + * Support for aggregation functions (CASSANDRA-4914)
 + * Remove cassandra-cli (CASSANDRA-7920)
 + * Accept dollar quoted strings in CQL (CASSANDRA-7769)
 + * Make assassinate a first class command (CASSANDRA-7935)
 + * Support IN clause on any clustering column (CASSANDRA-4762)
 + * Improve compaction logging (CASSANDRA-7818)
 + * Remove YamlFileNetworkTopologySnitch (CASSANDRA-7917)
 + * Do anticompaction in groups (CASSANDRA-6851)
 + * Support user-defined functions (CASSANDRA-7395, 7526, 7562, 7740, 7781, 
7929,
 +   7924, 7812, 8063, 7813, 7708)
 + * Permit configurable timestamps with cassandra-stress (CASSANDRA-7416)
 + * Move sstable RandomAccessReader to nio2, which allows using the
 +   FILE_SHARE_DELETE flag on Windows (CASSANDRA-4050)
 + * Remove CQL2 (CASSANDRA-5918)
 + * Add Thrift get_multi_slice call (CASSANDRA-6757)
 + * Optimize fetching multiple cells by name (CASSANDRA-6933)
 + * Allow compilation in java 8 (CASSANDRA-7028)
 + * Make incremental repair default (CASSANDRA-7250)
 + * Enable code coverage thru JaCoCo (CASSANDRA-7226)
 + * Switch external naming of 'column families' to 'tables' (CASSANDRA-4369) 
 + * Shorten SSTable path (CASSANDRA-6962)
 + * Use unsafe mutations for most unit tests (CASSANDRA-6969)
 + * Fix race condition during calculation of pending ranges (CASSANDRA-7390)
 + * Fail on very large batch sizes (CASSANDRA-8011)
 + * Improve concurrency of repair (CASSANDRA-6455, 8208)
 +
 +
  2.1.3
+  * Don't reuse the same cleanup strategy for all sstables (CASSANDRA-8537)
   * Fix case-sensitivity of index name on CREATE and DROP INDEX
 statements (CASSANDRA-8365)
   * Better detection/logging for corruption in compressed sstables 
(CASSANDRA-8192)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d1a552dd/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--



cassandra git commit: Don't use the same CleanupStrategy for all sstables

2015-01-08 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 818ec3310 -> 14b2d7a16


Don't use the same CleanupStrategy for all sstables

Patch by marcuse; reviewed by thobbs for CASSANDRA-8537


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/14b2d7a1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/14b2d7a1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/14b2d7a1

Branch: refs/heads/cassandra-2.1
Commit: 14b2d7a16b4b69eed6be5473dbbf238f040fb5c5
Parents: 818ec33
Author: Marcus Eriksson marc...@apache.org
Authored: Thu Jan 8 14:39:08 2015 +0100
Committer: Marcus Eriksson marc...@apache.org
Committed: Fri Jan 9 07:33:17 2015 +0100

--
 CHANGES.txt   | 1 +
 .../org/apache/cassandra/db/compaction/CompactionManager.java | 3 ++-
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/14b2d7a1/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 57c2f49..2028633 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.3
+ * Don't reuse the same cleanup strategy for all sstables (CASSANDRA-8537)
  * Fix case-sensitivity of index name on CREATE and DROP INDEX
statements (CASSANDRA-8365)
  * Better detection/logging for corruption in compressed sstables 
(CASSANDRA-8192)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/14b2d7a1/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index 872ebed..eb7c0ee 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -347,7 +347,7 @@ public class CompactionManager implements 
CompactionManagerMBean
 return AllSSTableOpStatus.ABORTED;
 }
 final boolean hasIndexes = cfStore.indexManager.hasIndexes();
-final CleanupStrategy cleanupStrategy = CleanupStrategy.get(cfStore, 
ranges);
+
 return parallelAllSSTableOperation(cfStore, new OneSSTableOperation()
 {
 @Override
@@ -361,6 +361,7 @@ public class CompactionManager implements 
CompactionManagerMBean
 @Override
 public void execute(SSTableReader input) throws IOException
 {
+CleanupStrategy cleanupStrategy = CleanupStrategy.get(cfStore, 
ranges);
 doCleanupOne(cfStore, input, cleanupStrategy, ranges, 
hasIndexes);
 }
 });
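The hunk above moves the CleanupStrategy.get(...) call from outside the anonymous OneSSTableOperation into execute(), so each sstable operation builds its own strategy instead of all of them sharing one instance. A simplified stand-in for that pattern (the real CleanupStrategy, cfStore, and ranges are not reproduced here; the class below is illustrative only):

```java
import java.util.ArrayList;
import java.util.List;

public class CleanupStrategyDemo {
    // Stand-in for the real CleanupStrategy: it carries per-run mutable
    // state, which is why one shared instance must not serve every sstable.
    static class CleanupStrategy {
        final List<String> processed = new ArrayList<>();
        void cleanup(String sstable) { processed.add(sstable); }
    }

    // After the fix: construct the strategy inside the per-sstable
    // operation instead of hoisting a single instance out of the loop.
    static List<CleanupStrategy> cleanupAll(String... sstables) {
        List<CleanupStrategy> used = new ArrayList<>();
        for (String sstable : sstables) {
            CleanupStrategy strategy = new CleanupStrategy(); // fresh per sstable
            strategy.cleanup(sstable);
            used.add(strategy);
        }
        return used;
    }

    public static void main(String[] args) {
        for (CleanupStrategy s : cleanupAll("ka-1", "ka-2"))
            System.out.println(s.processed); // each strategy saw exactly one sstable
    }
}
```

With parallelAllSSTableOperation running operations concurrently, per-operation construction also avoids sharing unsynchronized state across threads.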



[1/2] cassandra git commit: Don't use the same CleanupStrategy for all sstables

2015-01-08 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/trunk a289e71bd -> d1a552dd7


Don't use the same CleanupStrategy for all sstables

Patch by marcuse; reviewed by thobbs for CASSANDRA-8537


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/14b2d7a1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/14b2d7a1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/14b2d7a1

Branch: refs/heads/trunk
Commit: 14b2d7a16b4b69eed6be5473dbbf238f040fb5c5
Parents: 818ec33
Author: Marcus Eriksson marc...@apache.org
Authored: Thu Jan 8 14:39:08 2015 +0100
Committer: Marcus Eriksson marc...@apache.org
Committed: Fri Jan 9 07:33:17 2015 +0100

--
 CHANGES.txt   | 1 +
 .../org/apache/cassandra/db/compaction/CompactionManager.java | 3 ++-
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/14b2d7a1/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 57c2f49..2028633 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.3
+ * Don't reuse the same cleanup strategy for all sstables (CASSANDRA-8537)
  * Fix case-sensitivity of index name on CREATE and DROP INDEX
statements (CASSANDRA-8365)
  * Better detection/logging for corruption in compressed sstables 
(CASSANDRA-8192)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/14b2d7a1/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index 872ebed..eb7c0ee 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -347,7 +347,7 @@ public class CompactionManager implements 
CompactionManagerMBean
 return AllSSTableOpStatus.ABORTED;
 }
 final boolean hasIndexes = cfStore.indexManager.hasIndexes();
-final CleanupStrategy cleanupStrategy = CleanupStrategy.get(cfStore, 
ranges);
+
 return parallelAllSSTableOperation(cfStore, new OneSSTableOperation()
 {
 @Override
@@ -361,6 +361,7 @@ public class CompactionManager implements 
CompactionManagerMBean
 @Override
 public void execute(SSTableReader input) throws IOException
 {
+CleanupStrategy cleanupStrategy = CleanupStrategy.get(cfStore, 
ranges);
 doCleanupOne(cfStore, input, cleanupStrategy, ranges, 
hasIndexes);
 }
 });



[jira] [Created] (CASSANDRA-8587) Fix MessageOut's serializeSize calculation

2015-01-08 Thread Dave Brosius (JIRA)
Dave Brosius created CASSANDRA-8587:
---

 Summary: Fix MessageOut's serializeSize calculation
 Key: CASSANDRA-8587
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8587
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Dave Brosius
Assignee: Dave Brosius
Priority: Trivial
 Fix For: 2.0.12
 Attachments: ss.txt

Simple typos keep the size calculation too small.
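The actual patch is in the attached ss.txt and is not reproduced here. Purely as a hypothetical illustration of how this class of typo undercounts a message's serialized size (the names buggySerializedSize/serializedSize below are invented for the sketch, not MessageOut's real API):

```java
public class SerializedSizeDemo {
    static final int HEADER_SIZE = 8;

    // Hypothetical buggy version: a copy/paste typo measures 'from' twice
    // and never counts the payload, so the computed size comes out too small.
    static int buggySerializedSize(byte[] from, byte[] payload) {
        return HEADER_SIZE + from.length + from.length; // typo: should be payload.length
    }

    // Corrected version: every part of the message is counted exactly once.
    static int serializedSize(byte[] from, byte[] payload) {
        return HEADER_SIZE + from.length + payload.length;
    }

    public static void main(String[] args) {
        byte[] from = new byte[4];
        byte[] payload = new byte[100];
        System.out.println("buggy:   " + buggySerializedSize(from, payload)); // 16
        System.out.println("correct: " + serializedSize(from, payload));      // 112
    }
}
```

An undercounted size is easy to miss in tests, since serialization still succeeds; it only shows up when a buffer sized from the estimate is too small.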



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/3] cassandra git commit: fix typo causing bad format string marker

2015-01-08 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk 2fc3a8934 -> dde4d8293


fix typo causing bad format string marker


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ed54e808
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ed54e808
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ed54e808

Branch: refs/heads/trunk
Commit: ed54e80855a37f88a8625e1ecf9ce574f0d4081d
Parents: e750ab2
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Fri Jan 9 00:27:47 2015 -0500
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Fri Jan 9 00:27:47 2015 -0500

--
 .../db/compaction/DateTieredCompactionStrategyTest.java  | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ed54e808/test/unit/org/apache/cassandra/db/compaction/DateTieredCompactionStrategyTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/db/compaction/DateTieredCompactionStrategyTest.java
 
b/test/unit/org/apache/cassandra/db/compaction/DateTieredCompactionStrategyTest.java
index 299e1af..84230da 100644
--- 
a/test/unit/org/apache/cassandra/db/compaction/DateTieredCompactionStrategyTest.java
+++ 
b/test/unit/org/apache/cassandra/db/compaction/DateTieredCompactionStrategyTest.java
@@ -70,7 +70,7 @@ public class DateTieredCompactionStrategyTest extends 
SchemaLoader
 {
 options.put(DateTieredCompactionStrategyOptions.BASE_TIME_KEY, 
"-1337");
 validateOptions(options);
-fail(String.format("%Negative %s should be rejected", 
DateTieredCompactionStrategyOptions.BASE_TIME_KEY));
+fail(String.format("Negative %s should be rejected", 
DateTieredCompactionStrategyOptions.BASE_TIME_KEY));
 }
 catch (ConfigurationException e)
 {
@@ -81,7 +81,7 @@ public class DateTieredCompactionStrategyTest extends 
SchemaLoader
 {
 
options.put(DateTieredCompactionStrategyOptions.MAX_SSTABLE_AGE_KEY, "-1337");
 validateOptions(options);
-fail(String.format("%Negative %s should be rejected", 
DateTieredCompactionStrategyOptions.MAX_SSTABLE_AGE_KEY));
+fail(String.format("Negative %s should be rejected", 
DateTieredCompactionStrategyOptions.MAX_SSTABLE_AGE_KEY));
 }
 catch (ConfigurationException e)
 {



[2/3] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-01-08 Thread dbrosius
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/818ec331
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/818ec331
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/818ec331

Branch: refs/heads/trunk
Commit: 818ec33107be17e000b182ab682013528a596d9b
Parents: 8c5a959 ed54e80
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Fri Jan 9 00:28:30 2015 -0500
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Fri Jan 9 00:28:30 2015 -0500

--
 .../db/compaction/DateTieredCompactionStrategyTest.java  | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/818ec331/test/unit/org/apache/cassandra/db/compaction/DateTieredCompactionStrategyTest.java
--



[3/3] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-01-08 Thread dbrosius
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dde4d829
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dde4d829
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dde4d829

Branch: refs/heads/trunk
Commit: dde4d82938f0d5951b0473cfb9d62d7ab0d9b637
Parents: 2fc3a89 818ec33
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Fri Jan 9 00:28:56 2015 -0500
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Fri Jan 9 00:28:56 2015 -0500

--
 .../db/compaction/DateTieredCompactionStrategyTest.java  | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dde4d829/test/unit/org/apache/cassandra/db/compaction/DateTieredCompactionStrategyTest.java
--



[Cassandra Wiki] Update of HowToContribute by RussellHatch

2015-01-08 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Cassandra Wiki for 
change notification.

The HowToContribute page has been changed by RussellHatch:
https://wiki.apache.org/cassandra/HowToContribute?action=diff&rev1=59&rev2=60

Comment:
update code coverage info from cobertura to jacoco

   1. Run all tests by running `nosetests` from the dtest checkout.  You can 
run a specific module like so: `nosetests cql_tests.py`.  You can run a 
specific test method like this: `nosetests cql_tests.py:TestCQL.counters_test`
  
  === Running the code coverage task ===
-  1. Unzip this one: 
http://sourceforge.net/projects/cobertura/files/cobertura/1.9.4.1/cobertura-1.9.4.1-bin.zip/download
-  1. `ant codecoverage -Dcobertura.dir=/path/to/cobertura`
-  1. `/path/to/cobertura/cobertura-report.sh --destination 
build/cobertura/html source code src/java`
-  1. View `build/cobertura/html/index.html`
+  1. Run a basic coverage report of unit tests using `ant codecoverage`
+  1. Alternatively, run any test task with `ant jacoco-run 
-Dtaskname=some_test_taskname`. Run more test tasks in this fashion to push 
more coverage data onto the report in progress. Then manually build the report 
with `ant jacoco-report` (the 'codecoverage' task shown above does this 
automatically).
+  1. View the report at `build/jacoco/index.html`.
+  1. When done, clean up jacoco data so it doesn't confuse your next coverage 
report: `ant jacoco-cleanup`.
  
  === Continuous integration ===
  Jenkins runs the Cassandra tests continuously: http://cassci.datastax.com/ 
(Builders for stable branches also exist.)


[jira] [Updated] (CASSANDRA-8580) AssertionErrors after activating unchecked_tombstone_compaction with leveled compaction

2015-01-08 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8580:
---
Fix Version/s: 2.1.3
 Assignee: Marcus Eriksson

 AssertionErrors after activating unchecked_tombstone_compaction with leveled 
 compaction
 ---

 Key: CASSANDRA-8580
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8580
 Project: Cassandra
  Issue Type: Bug
Reporter: Björn Hachmann
Assignee: Marcus Eriksson
 Fix For: 2.1.3


 During our upgrade of Cassandra from version 2.0.7 to 2.1.2 we experienced a 
 serious problem regarding the setting unchecked_tombstone_compaction in 
 combination with leveled compaction strategy.
 In order to prevent tombstone-threshold-warnings we activated the setting for 
 a specific table after the upgrade. Some time after that we observed new 
 errors in our log files:
 {code}
 INFO  [CompactionExecutor:184] 2014-12-11 12:36:06,597 
 CompactionTask.java:136 - Compacting 
 [SSTableReader(path='/data/cassandra/data/system/compactions_in_progress/system-compactions_in_progress-ka-1848-Data.db'),
  SSTableReader(path='/
 data/cassandra/data/system/compactions_in_progress/system-compactions_in_progress-ka-1847-Data.db'),
  
 SSTableReader(path='/data/cassandra/data/system/compactions_in_progress/system-compactions_in_progress-ka-1845-Data.db'),
  SSTableReader
 (path='/data/cassandra/data/system/compactions_in_progress/system-compactions_in_progress-ka-1846-Data.db')]
 ERROR [CompactionExecutor:183] 2014-12-11 12:36:06,613 
 CassandraDaemon.java:153 - Exception in thread 
 Thread[CompactionExecutor:183,1,main]
 java.lang.AssertionError: 
 /data/cassandra/data/metrigo_prod/new_user_data/metrigo_prod-new_user_data-tmplink-ka-705732-Data.db
 at 
 org.apache.cassandra.io.sstable.SSTableReader.getApproximateKeyCount(SSTableReader.java:243)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:146)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:75)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:232)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_45]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_45]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_45]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_45]
 at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
 {code}
 Obviously that error aborted the compaction and after some time the number of 
 pending compactions became very high on every node. Of course, this in turn 
 had a negative impact on several other metrics.
 After reverting the setting we had to restart all nodes. After that 
 compactions could finish again and the pending compactions could be worked 
 off.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8552) Large compactions run out of off-heap RAM

2015-01-08 Thread Brent Haines (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brent Haines updated CASSANDRA-8552:

Attachment: data.cql

Here is our schema for application data

 Large compactions run out of off-heap RAM
 -

 Key: CASSANDRA-8552
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8552
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Ubuntu 14.4 
 AWS EC2
 12 m1.xlarge nodes [4 cores, 16GB RAM, 1TB storage (251GB Used)]
 Java build 1.7.0_55-b13 and build 1.8.0_25-b17
Reporter: Brent Haines
Assignee: Benedict
Priority: Blocker
 Fix For: 2.1.3

 Attachments: Screen Shot 2015-01-02 at 9.36.11 PM.png, data.cql, 
 fhandles.log, freelog.log, lsof.txt, meminfo.txt, sysctl.txt, system.log


 We have a large table storing, effectively, event logs, and a pair of 
 denormalized tables for indexing.
 When updating from 2.0 to 2.1 we saw performance improvements, but some 
 random and silent crashes during nightly repairs. We lost a node (totally 
 corrupted) and replaced it. That node has never stabilized -- it simply can't 
 finish the compactions. 
 Smaller compactions finish. Larger compactions, like these two, never finish: 
 {code}
 pending tasks: 48
compaction type   keyspace table completed total   
  unit   progress
 Compaction   data   stories   16532973358   75977993784   
 bytes 21.76%
 Compaction   data   stories_by_text   10593780658   38555048812   
 bytes 27.48%
 Active compaction remaining time :   0h10m51s
 {code}
 We are not getting exceptions and are not running out of heap space. The 
 Ubuntu OOM killer is reaping the process after all of the memory is consumed. 
 We watch memory in the opscenter console and it will grow. If we turn off the 
 OOM killer for the process, it will run until everything else is killed 
 instead and then the kernel panics.
 We have the following settings configured: 
 2G Heap
 512M New
 {code}
 memtable_heap_space_in_mb: 1024
 memtable_offheap_space_in_mb: 1024
 memtable_allocation_type: heap_buffers
 commitlog_total_space_in_mb: 2048
 concurrent_compactors: 1
 compaction_throughput_mb_per_sec: 128
 {code}
 The compaction strategy is leveled (these are read-intensive tables that are 
 rarely updated)
 I have tried every setting, every option and I have the system where the MTBF 
 is about an hour now, but we never finish compacting because there are some 
 large compactions pending. None of the GC tools or settings help because it 
 is not a GC problem. It is an off-heap memory problem.
 We are getting these messages in our syslog 
 {code}
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219527] BUG: Bad page map in 
 process java  pte:0320 pmd:2d6fa5067
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219545] addr:7fb820be3000 
 vm_flags:0870 anon_vma:  (null) mapping:  (null) 
 index:7fb820be3
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219556] CPU: 3 PID: 27344 
 Comm: java Tainted: GB3.13.0-24-generic #47-Ubuntu
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219559]  880028510e40 
 88020d43da98 81715ac4 7fb820be3000
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219565]  88020d43dae0 
 81174183 0320 0007fb820be3
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219568]  8802d6fa5f18 
 0320 7fb820be3000 7fb820be4000
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219572] Call Trace:
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219584]  [81715ac4] 
 dump_stack+0x45/0x56
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219591]  [81174183] 
 print_bad_pte+0x1a3/0x250
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219594]  [81175439] 
 vm_normal_page+0x69/0x80
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219598]  [8117580b] 
 unmap_page_range+0x3bb/0x7f0
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219602]  [81175cc1] 
 unmap_single_vma+0x81/0xf0
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219605]  [81176d39] 
 unmap_vmas+0x49/0x90
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219610]  [8117feec] 
 exit_mmap+0x9c/0x170
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219617]  [8110fcf3] 
 ? __delayacct_add_tsk+0x153/0x170
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219621]  [8106482c] 
 mmput+0x5c/0x120
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219625]  [81069bbc] 
 do_exit+0x26c/0xa50
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219631]  [810d7591] 
 ? __unqueue_futex+0x31/0x60
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219634]  [810d83b6] 
 ? 

[jira] [Commented] (CASSANDRA-8552) Large compactions run out of off-heap RAM

2015-01-08 Thread Alan Boudreault (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14270098#comment-14270098
 ] 

Alan Boudreault commented on CASSANDRA-8552:


[~thebrenthaines] They are not LCS?

 Large compactions run out of off-heap RAM
 -

 Key: CASSANDRA-8552
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8552
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Ubuntu 14.4 
 AWS EC2
 12 m1.xlarge nodes [4 cores, 16GB RAM, 1TB storage (251GB Used)]
 Java build 1.7.0_55-b13 and build 1.8.0_25-b17
Reporter: Brent Haines
Assignee: Benedict
Priority: Blocker
 Fix For: 2.1.3

 Attachments: Screen Shot 2015-01-02 at 9.36.11 PM.png, data.cql, 
 fhandles.log, freelog.log, lsof.txt, meminfo.txt, sysctl.txt, system.log


 We have a large table storing, effectively, event logs, and a pair of 
 denormalized tables for indexing.
 When updating from 2.0 to 2.1 we saw performance improvements, but some 
 random and silent crashes during nightly repairs. We lost a node (totally 
 corrupted) and replaced it. That node has never stabilized -- it simply can't 
 finish the compactions. 
 Smaller compactions finish. Larger compactions, like these two, never finish: 
 {code}
 pending tasks: 48
compaction type   keyspace table completed total   
  unit   progress
 Compaction   data   stories   16532973358   75977993784   
 bytes 21.76%
 Compaction   data   stories_by_text   10593780658   38555048812   
 bytes 27.48%
 Active compaction remaining time :   0h10m51s
 {code}
 We are not getting exceptions and are not running out of heap space. The 
 Ubuntu OOM killer is reaping the process after all of the memory is consumed. 
 We watch memory in the opscenter console and it will grow. If we turn off the 
 OOM killer for the process, it will run until everything else is killed 
 instead and then the kernel panics.
 We have the following settings configured: 
 2G Heap
 512M New
 {code}
 memtable_heap_space_in_mb: 1024
 memtable_offheap_space_in_mb: 1024
 memtable_allocation_type: heap_buffers
 commitlog_total_space_in_mb: 2048
 concurrent_compactors: 1
 compaction_throughput_mb_per_sec: 128
 {code}
 The compaction strategy is leveled (these are read-intensive tables that are 
 rarely updated)
 I have tried every setting, every option and I have the system where the MTBF 
 is about an hour now, but we never finish compacting because there are some 
 large compactions pending. None of the GC tools or settings help because it 
 is not a GC problem. It is an off-heap memory problem.
 We are getting these messages in our syslog 
 {code}
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219527] BUG: Bad page map in 
 process java  pte:0320 pmd:2d6fa5067
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219545] addr:7fb820be3000 
 vm_flags:0870 anon_vma:  (null) mapping:  (null) 
 index:7fb820be3
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219556] CPU: 3 PID: 27344 
 Comm: java Tainted: GB3.13.0-24-generic #47-Ubuntu
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219559]  880028510e40 
 88020d43da98 81715ac4 7fb820be3000
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219565]  88020d43dae0 
 81174183 0320 0007fb820be3
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219568]  8802d6fa5f18 
 0320 7fb820be3000 7fb820be4000
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219572] Call Trace:
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219584]  [81715ac4] 
 dump_stack+0x45/0x56
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219591]  [81174183] 
 print_bad_pte+0x1a3/0x250
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219594]  [81175439] 
 vm_normal_page+0x69/0x80
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219598]  [8117580b] 
 unmap_page_range+0x3bb/0x7f0
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219602]  [81175cc1] 
 unmap_single_vma+0x81/0xf0
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219605]  [81176d39] 
 unmap_vmas+0x49/0x90
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219610]  [8117feec] 
 exit_mmap+0x9c/0x170
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219617]  [8110fcf3] 
 ? __delayacct_add_tsk+0x153/0x170
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219621]  [8106482c] 
 mmput+0x5c/0x120
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219625]  [81069bbc] 
 do_exit+0x26c/0xa50
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219631]  [810d7591] 
 ? __unqueue_futex+0x31/0x60
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219634] 

[3/3] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-01-08 Thread aleksey
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2fc3a893
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2fc3a893
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2fc3a893

Branch: refs/heads/trunk
Commit: 2fc3a8934b211af762f9b3bec4c5b2dc46d6e0f5
Parents: e412319 8c5a959
Author: Aleksey Yeschenko alek...@apache.org
Authored: Fri Jan 9 04:08:30 2015 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Jan 9 04:08:30 2015 +0300

--
 CHANGES.txt |   2 +
 conf/cassandra.yaml |   8 ++
 src/java/org/apache/cassandra/auth/Auth.java|  55 ++
 .../org/apache/cassandra/auth/AuthMBean.java|  27 -
 .../apache/cassandra/auth/PermissionsCache.java | 108 +++
 .../org/apache/cassandra/config/Config.java |   2 +
 .../cassandra/config/DatabaseDescriptor.java|  10 +-
 .../apache/cassandra/service/ClientState.java   |  15 +--
 8 files changed, 138 insertions(+), 89 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2fc3a893/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2fc3a893/conf/cassandra.yaml
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2fc3a893/src/java/org/apache/cassandra/config/Config.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2fc3a893/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2fc3a893/src/java/org/apache/cassandra/service/ClientState.java
--



[1/3] cassandra git commit: Introduce background cache refreshing to permissions cache

2015-01-08 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk e41231933 -> 2fc3a8934


Introduce background cache refreshing to permissions cache

patch by Sam Tunnicliffe; reviewed by Aleksey Yeschenko for
CASSANDRA-8194


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e750ab23
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e750ab23
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e750ab23

Branch: refs/heads/trunk
Commit: e750ab238e07daa61180d2451ba90f819a4cf5a1
Parents: bd3c47c
Author: Sam Tunnicliffe s...@beobal.com
Authored: Fri Jan 9 04:02:32 2015 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Jan 9 04:02:32 2015 +0300

--
 CHANGES.txt |   2 +
 conf/cassandra.yaml |   8 ++
 src/java/org/apache/cassandra/auth/Auth.java|  55 ++
 .../org/apache/cassandra/auth/AuthMBean.java|  27 -
 .../apache/cassandra/auth/PermissionsCache.java | 108 +++
 .../org/apache/cassandra/config/Config.java |   2 +
 .../cassandra/config/DatabaseDescriptor.java|  10 +-
 .../apache/cassandra/service/ClientState.java   |  15 +--
 8 files changed, 138 insertions(+), 89 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e750ab23/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9ccbf45..adb374a 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.0.12:
+ * Introduce background cache refreshing to permissions cache
+   (CASSANDRA-8194)
  * Fix race condition in StreamTransferTask that could lead to
infinite loops and premature sstable deletion (CASSANDRA-7704)
  * Add an extra version check to MigrationTask (CASSANDRA-8462)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e750ab23/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 5eaffc2..45290aa 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -79,6 +79,14 @@ authorizer: AllowAllAuthorizer
 # Will be disabled automatically for AllowAllAuthorizer.
 permissions_validity_in_ms: 2000
 
+# Refresh interval for permissions cache (if enabled).
+# After this interval, cache entries become eligible for refresh. Upon next
+# access, an async reload is scheduled and the old value returned until it
+# completes. If permissions_validity_in_ms is non-zero, then this must be
+# also.
+# Defaults to the same value as permissions_validity_in_ms.
+# permissions_update_interval_in_ms: 1000
+
 # The partitioner is responsible for distributing groups of rows (by
 # partition key) across nodes in the cluster.  You should leave this
 # alone for new clusters.  The partitioner can NOT be changed without
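The refresh-after-write behavior the yaml comment describes (stale entries return the old value immediately while an async reload runs) can be sketched with a tiny stand-alone cache. The real patch uses Guava's CacheBuilder inside PermissionsCache; everything below (class and method names included) is an illustrative stand-in, not Cassandra's API:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;

/**
 * Toy refresh-after-write cache: reads past the refresh interval return the
 * old value immediately and schedule an asynchronous reload, so callers are
 * never blocked on a (potentially slow) permissions lookup.
 */
final class RefreshingCache<K, V>
{
    private static final class Entry<V>
    {
        final V value;
        final long writeNanos;
        Entry(V value, long writeNanos) { this.value = value; this.writeNanos = writeNanos; }
    }

    private final ConcurrentHashMap<K, Entry<V>> map = new ConcurrentHashMap<>();
    private final ExecutorService reloader = Executors.newSingleThreadExecutor();
    private final long refreshNanos;
    private final Function<K, V> loader;

    RefreshingCache(long refreshMillis, Function<K, V> loader)
    {
        this.refreshNanos = TimeUnit.MILLISECONDS.toNanos(refreshMillis);
        this.loader = loader;
    }

    V get(K key)
    {
        Entry<V> e = map.get(key);
        if (e == null)
        {
            // First read: load synchronously, there is no old value to serve.
            V v = loader.apply(key);
            map.put(key, new Entry<>(v, System.nanoTime()));
            return v;
        }
        if (System.nanoTime() - e.writeNanos > refreshNanos)
        {
            // Entry is due for refresh: reload in the background but hand the
            // caller the previous value right away. (Under contention this may
            // schedule duplicate reloads; Guava's implementation de-duplicates.)
            reloader.submit(() -> map.put(key, new Entry<>(loader.apply(key), System.nanoTime())));
        }
        return e.value;
    }

    void shutdown() { reloader.shutdown(); }
}
```

This is why the yaml requires the update interval to be no larger than the validity interval: entries must become refreshable before they are evicted outright.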

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e750ab23/src/java/org/apache/cassandra/auth/Auth.java
--
diff --git a/src/java/org/apache/cassandra/auth/Auth.java 
b/src/java/org/apache/cassandra/auth/Auth.java
index 94d4b3d..465643d 100644
--- a/src/java/org/apache/cassandra/auth/Auth.java
+++ b/src/java/org/apache/cassandra/auth/Auth.java
@@ -20,9 +20,6 @@ package org.apache.cassandra.auth;
 import java.util.Set;
 import java.util.concurrent.TimeUnit;
 
-import com.google.common.cache.CacheBuilder;
-import com.google.common.cache.CacheLoader;
-import com.google.common.cache.LoadingCache;
 import com.google.common.collect.ImmutableMap;
 import com.google.common.collect.Lists;
 import org.apache.commons.lang3.StringUtils;
@@ -32,9 +29,9 @@ import org.slf4j.LoggerFactory;
 import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.config.KSMetaData;
 import org.apache.cassandra.config.Schema;
-import org.apache.cassandra.cql3.UntypedResultSet;
-import org.apache.cassandra.cql3.QueryProcessor;
 import org.apache.cassandra.cql3.QueryOptions;
+import org.apache.cassandra.cql3.QueryProcessor;
+import org.apache.cassandra.cql3.UntypedResultSet;
 import org.apache.cassandra.cql3.statements.SelectStatement;
 import org.apache.cassandra.db.ConsistencyLevel;
 import org.apache.cassandra.exceptions.RequestExecutionException;
@@ -43,9 +40,8 @@ import org.apache.cassandra.locator.SimpleStrategy;
 import org.apache.cassandra.service.*;
 import org.apache.cassandra.transport.messages.ResultMessage;
 import org.apache.cassandra.utils.ByteBufferUtil;
-import org.apache.cassandra.utils.Pair;
 
-public class Auth implements AuthMBean
+public class Auth
 {
 private static final Logger logger = LoggerFactory.getLogger(Auth.class);
 
@@ -57,8 +53,10 @@ public class Auth implements AuthMBean
 public static final String USERS_CF = 

[2/3] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-01-08 Thread aleksey
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/auth/Auth.java
src/java/org/apache/cassandra/service/ClientState.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8c5a959c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8c5a959c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8c5a959c

Branch: refs/heads/trunk
Commit: 8c5a959c97729c5fbd536bf0f47cf6330c0bddbc
Parents: 5674a96 e750ab2
Author: Aleksey Yeschenko alek...@apache.org
Authored: Fri Jan 9 04:08:05 2015 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Jan 9 04:08:05 2015 +0300

--
 CHANGES.txt |   2 +
 conf/cassandra.yaml |   8 ++
 src/java/org/apache/cassandra/auth/Auth.java|  55 ++
 .../org/apache/cassandra/auth/AuthMBean.java|  27 -
 .../apache/cassandra/auth/PermissionsCache.java | 108 +++
 .../org/apache/cassandra/config/Config.java |   2 +
 .../cassandra/config/DatabaseDescriptor.java|  10 +-
 .../apache/cassandra/service/ClientState.java   |  15 +--
 8 files changed, 138 insertions(+), 89 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8c5a959c/CHANGES.txt
--
diff --cc CHANGES.txt
index dac555b,adb374a..57c2f49
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,55 -1,6 +1,57 @@@
 -2.0.12:
 +2.1.3
 + * Fix case-sensitivity of index name on CREATE and DROP INDEX
 +   statements (CASSANDRA-8365)
 + * Better detection/logging for corruption in compressed sstables 
(CASSANDRA-8192)
 + * Use the correct repairedAt value when closing writer (CASSANDRA-8570)
 + * (cqlsh) Handle a schema mismatch being detected on startup (CASSANDRA-8512)
 + * Properly calculate expected write size during compaction (CASSANDRA-8532)
 + * Invalidate affected prepared statements when a table's columns
 +   are altered (CASSANDRA-7910)
 + * Stress - user defined writes should populate sequentally (CASSANDRA-8524)
 + * Fix regression in SSTableRewriter causing some rows to become unreadable 
 +   during compaction (CASSANDRA-8429)
 + * Run major compactions for repaired/unrepaired in parallel (CASSANDRA-8510)
 + * (cqlsh) Fix compression options in DESCRIBE TABLE output when compression
 +   is disabled (CASSANDRA-8288)
 + * (cqlsh) Fix DESCRIBE output after keyspaces are altered (CASSANDRA-7623)
 + * Make sure we set lastCompactedKey correctly (CASSANDRA-8463)
 + * (cqlsh) Fix output of CONSISTENCY command (CASSANDRA-8507)
 + * (cqlsh) Fixed the handling of LIST statements (CASSANDRA-8370)
 + * Make sstablescrub check leveled manifest again (CASSANDRA-8432)
 + * Check first/last keys in sstable when giving out positions (CASSANDRA-8458)
 + * Disable mmap on Windows (CASSANDRA-6993)
 + * Add missing ConsistencyLevels to cassandra-stress (CASSANDRA-8253)
 + * Add auth support to cassandra-stress (CASSANDRA-7985)
 + * Fix ArrayIndexOutOfBoundsException when generating error message
 +   for some CQL syntax errors (CASSANDRA-8455)
 + * Scale memtable slab allocation logarithmically (CASSANDRA-7882)
 + * cassandra-stress simultaneous inserts over same seed (CASSANDRA-7964)
 + * Reduce cassandra-stress sampling memory requirements (CASSANDRA-7926)
 + * Ensure memtable flush cannot expire commit log entries from its future 
(CASSANDRA-8383)
 + * Make read defrag async to reclaim memtables (CASSANDRA-8459)
 + * Remove tmplink files for offline compactions (CASSANDRA-8321)
 + * Reduce maxHintsInProgress (CASSANDRA-8415)
 + * BTree updates may call provided update function twice (CASSANDRA-8018)
 + * Release sstable references after anticompaction (CASSANDRA-8386)
 + * Handle abort() in SSTableRewriter properly (CASSANDRA-8320)
 + * Fix high size calculations for prepared statements (CASSANDRA-8231)
 + * Centralize shared executors (CASSANDRA-8055)
 + * Fix filtering for CONTAINS (KEY) relations on frozen collection
 +   clustering columns when the query is restricted to a single
 +   partition (CASSANDRA-8203)
 + * Do more aggressive entire-sstable TTL expiry checks (CASSANDRA-8243)
 + * Add more log info if readMeter is null (CASSANDRA-8238)
 + * add check of the system wall clock time at startup (CASSANDRA-8305)
 + * Support for frozen collections (CASSANDRA-7859)
 + * Fix overflow on histogram computation (CASSANDRA-8028)
 + * Have paxos reuse the timestamp generation of normal queries 
(CASSANDRA-7801)
 + * Fix incremental repair not remove parent session on remote (CASSANDRA-8291)
 + * Improve JBOD disk utilization (CASSANDRA-7386)
 + * Log failed host when preparing incremental repair (CASSANDRA-8228)
 + * Force config client mode in 

[jira] [Commented] (CASSANDRA-6809) Compressed Commit Log

2015-01-08 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14270142#comment-14270142
 ] 

Jason Brown commented on CASSANDRA-6809:


Please reassign as I'm not going to realistically get a chance to review for 
the next few weeks.

 Compressed Commit Log
 -

 Key: CASSANDRA-6809
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6809
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Branimir Lambov
Priority: Minor
  Labels: performance
 Fix For: 3.0

 Attachments: logtest.txt


 It seems an unnecessary oversight that we don't compress the commit log. 
 Doing so should improve throughput, but some care will need to be taken to 
 ensure we use as much of a segment as possible. I propose decoupling the 
 writing of the records from the segments. Basically write into a (queue of) 
 DirectByteBuffer, and have the sync thread compress, say, ~64K chunks every X 
 MB written to the CL (where X is ordinarily CLS size), and then pack as many 
 of the compressed chunks into a CLS as possible.
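The proposal above (compress ~64K chunks, then pack as many compressed chunks as fit into each segment) can be sketched as follows. This is a sketch only: Deflater stands in for whatever codec the eventual patch chooses, and all names are illustrative rather than Cassandra's actual commit-log API:

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.Deflater;

/** Sketch: compress fixed-size chunks of a log stream, then greedily pack them. */
final class ChunkedCompressor
{
    static final int CHUNK = 64 * 1024;

    /** Split the raw log bytes into <= 64K chunks and compress each one. */
    static List<byte[]> compressChunks(byte[] log)
    {
        List<byte[]> out = new ArrayList<>();
        for (int off = 0; off < log.length; off += CHUNK)
        {
            int len = Math.min(CHUNK, log.length - off);
            Deflater d = new Deflater();
            d.setInput(log, off, len);
            d.finish();
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            byte[] buf = new byte[8192];
            while (!d.finished())
                bos.write(buf, 0, d.deflate(buf));
            d.end();
            out.add(bos.toByteArray());
        }
        return out;
    }

    /** Greedily pack compressed chunks into segments of at most segmentSize bytes. */
    static List<List<byte[]>> pack(List<byte[]> chunks, int segmentSize)
    {
        List<List<byte[]>> segments = new ArrayList<>();
        List<byte[]> current = new ArrayList<>();
        int used = 0;
        for (byte[] c : chunks)
        {
            if (used + c.length > segmentSize && !current.isEmpty())
            {
                segments.add(current);
                current = new ArrayList<>();
                used = 0;
            }
            current.add(c);
            used += c.length;
        }
        if (!current.isEmpty())
            segments.add(current);
        return segments;
    }
}
```

The packing step is what keeps segment utilization high: because chunk compression ratios vary, the number of chunks per segment is decided after compression, not before.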



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-6809) Compressed Commit Log

2015-01-08 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-6809:
-
Reviewer:   (was: Jason Brown)

 Compressed Commit Log
 -

 Key: CASSANDRA-6809
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6809
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Branimir Lambov
Priority: Minor
  Labels: performance
 Fix For: 3.0

 Attachments: logtest.txt


 It seems an unnecessary oversight that we don't compress the commit log. 
 Doing so should improve throughput, but some care will need to be taken to 
 ensure we use as much of a segment as possible. I propose decoupling the 
 writing of the records from the segments. Basically write into a (queue of) 
 DirectByteBuffer, and have the sync thread compress, say, ~64K chunks every X 
 MB written to the CL (where X is ordinarily CLS size), and then pack as many 
 of the compressed chunks into a CLS as possible.





[jira] [Updated] (CASSANDRA-6809) Compressed Commit Log

2015-01-08 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-6809:
-
Reviewer: Ariel Weisberg

bq. Please reassign as I'm not going to realistically get a chance to review 
for the next few weeks.

Too bad :( Re-assigned to Ariel.

 Compressed Commit Log
 -

 Key: CASSANDRA-6809
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6809
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Branimir Lambov
Priority: Minor
  Labels: performance
 Fix For: 3.0

 Attachments: logtest.txt


 It seems an unnecessary oversight that we don't compress the commit log. 
 Doing so should improve throughput, but some care will need to be taken to 
 ensure we use as much of a segment as possible. I propose decoupling the 
 writing of the records from the segments. Basically write into a (queue of) 
 DirectByteBuffer, and have the sync thread compress, say, ~64K chunks every X 
 MB written to the CL (where X is ordinarily CLS size), and then pack as many 
 of the compressed chunks into a CLS as possible.





[jira] [Commented] (CASSANDRA-8552) Large compactions run out of off-heap RAM

2015-01-08 Thread Brent Haines (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14270185#comment-14270185
 ] 

Brent Haines commented on CASSANDRA-8552:
-

The stories* tables are, but that script doesn't reflect that -- 

{code}
cqlsh:data> describe columnfamily stories;

CREATE TABLE data.stories (
id timeuuid PRIMARY KEY,
action_data timeuuid,
action_name text,
app_id timeuuid,
app_instance_id timeuuid,
data map<text, text>,
objects set<timeuuid>,
time_stamp timestamp,
user_id timeuuid
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = 'Stories represent the timeline and are placed in the 
dashboard for the brand manager to see'
AND compaction = {'min_threshold': '4', 'class': 
'org.apache.cassandra.db.compaction.LeveledCompactionStrategy', 
'max_threshold': '32'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.1
AND speculative_retry = '99.0PERCENTILE';

cqlsh:data> describe columnfamily stories_by_text;

CREATE TABLE data.stories_by_text (
ref_id timeuuid,
second_type text,
second_value text,
object_type text,
field_name text,
value text,
story_id timeuuid,
PRIMARY KEY ((ref_id, second_type, second_value, object_type, field_name), 
value, story_id)
) WITH CLUSTERING ORDER BY (value ASC, story_id ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = 'Searchable fields and actions in a story are indexed by ref 
id which corresponds to a brand, app, app instance, or user.'
AND compaction = {'min_threshold': '4', 'class': 
'org.apache.cassandra.db.compaction.LeveledCompactionStrategy', 
'max_threshold': '32'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.1
AND speculative_retry = '99.0PERCENTILE';

cqlsh:data> describe columnfamily stories_by_number;

CREATE TABLE data.stories_by_number (
ref_id timeuuid,
second_type text,
second_value text,
object_type text,
field_name text,
value bigint,
story_id timeuuid,
PRIMARY KEY ((ref_id, second_type, second_value, object_type, field_name), 
value, story_id)
) WITH CLUSTERING ORDER BY (value ASC, story_id ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = 'Searchable fields and actions in a story are indexed by ref 
id which corresponds to a brand, app, app instance, or user.'
AND compaction = {'min_threshold': '4', 'class': 
'org.apache.cassandra.db.compaction.LeveledCompactionStrategy', 
'max_threshold': '32'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.1
AND speculative_retry = '99.0PERCENTILE';

{code}



 Large compactions run out of off-heap RAM
 -

 Key: CASSANDRA-8552
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8552
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Ubuntu 14.4 
 AWS EC2
 12 m1.xlarge nodes [4 cores, 16GB RAM, 1TB storage (251GB Used)]
 Java build 1.7.0_55-b13 and build 1.8.0_25-b17
Reporter: Brent Haines
Assignee: Benedict
Priority: Blocker
 Fix For: 2.1.3

 Attachments: Screen Shot 2015-01-02 at 9.36.11 PM.png, data.cql, 
 fhandles.log, freelog.log, lsof.txt, meminfo.txt, sysctl.txt, system.log


 We have a large table of storing, effectively event logs and a pair of 
 denormalized tables for indexing.
 When updating from 2.0 to 2.1 we saw performance improvements, but some 
 random and silent crashes during nightly repairs. We lost a node (totally 
 corrupted) and replaced it. That node has never stabilized -- it simply can't 
 finish the compactions. 
 Smaller compactions finish. Larger compactions, like these two never finish - 
 {code}
 pending tasks: 48
compaction type   keyspace table completed total   
  unit   progress
 

[jira] [Commented] (CASSANDRA-8374) Better support of null for UDF

2015-01-08 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14269231#comment-14269231
 ] 

Sylvain Lebresne commented on CASSANDRA-8374:
-

bq. I'm thinking of a different scenario – where my function explicitly handles 
null, e.g. by turning it into a default value, but this behavior is 
suppressed by the function short-circuiting to null

Fair enough, I'll admit I wasn't specifically thinking of that scenario. That 
said, if you do write a function explicitly handling null, presumably you'll 
test that case right away and even if you're surprised by the result, I 
wouldn't expect it to take you a terribly long time to figure it out (by 
checking the doc for instance). Though that's probably wishful thinking on my 
part. I'm still also bugged by the idea of not using a default that would be 
right 99% of the time because it may surprise a few users.

But anyway, I understand the concerns (I just happen to not be entirely 
convinced they would be such a big deal in practice, but you seem to be 
convinced otherwise so I'm willing to accept that I'm probably wrong) and so as 
I said above, my preference goes to forcing an explicit choice.


 Better support of null for UDF
 --

 Key: CASSANDRA-8374
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8374
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Robert Stupp
 Fix For: 3.0

 Attachments: 8473-1.txt, 8473-2.txt


 Currently, every function needs to deal with its arguments potentially being 
 {{null}}. There are many cases where that's just annoying; users should be 
 able to define a function like:
 {noformat}
 CREATE FUNCTION addTwo(val int) RETURNS int LANGUAGE JAVA AS 'return val + 2;'
 {noformat}
 without it crashing as soon as a column it's applied to doesn't have a 
 value for some rows (I'll note that this definition apparently cannot be 
 compiled currently, which should be looked into).
 In fact, I think that by default methods shouldn't have to care about 
 {{null}} values: if the value is {{null}}, we should not call the method at 
 all and should return {{null}}. There are still methods that may explicitly 
 want to handle {{null}} (to return a default value, for instance), so maybe 
 we can add an {{ALLOW NULLS}} option to the creation syntax.
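 The proposed default boils down to a short-circuit around the invocation: skip the body entirely when an input is {{null}}, unless the function opts in. A minimal sketch, where {{UdfInvoker}}, {{invoke}}, and the {{allowNulls}} flag are all hypothetical names standing in for the eventual syntax:
 {noformat}
 import java.util.function.Function;

 /** Sketch of the proposed default: never call the body on null input
  *  unless the function opted in (the hypothetical ALLOW NULLS). */
 final class UdfInvoker
 {
     static <T, R> R invoke(Function<T, R> body, T arg, boolean allowNulls)
     {
         if (arg == null && !allowNulls)
             return null;            // short-circuit: the body is never called
         return body.apply(arg);     // opted-in functions see the null themselves
     }
 }
 {noformat}
 Under this scheme {{addTwo}} above works unchanged on missing values, while a function that wants to substitute a default would declare {{ALLOW NULLS}} and handle the {{null}} in its own body.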





[jira] [Commented] (CASSANDRA-8355) NPE when passing wrong argument in ALTER TABLE statement

2015-01-08 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14269255#comment-14269255
 ] 

Benjamin Lerer commented on CASSANDRA-8355:
---

[~snazy] could you review?

 NPE when passing wrong argument in ALTER TABLE statement
 

 Key: CASSANDRA-8355
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8355
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.1.2
Reporter: Pierre Laporte
Assignee: Benjamin Lerer
Priority: Minor
 Fix For: 2.1.3

 Attachments: CASSANDRA-8355.txt


 When I tried to change the caching strategy of a table, I provided a wrong 
 argument {{'rows_per_partition' : ALL}} with unquoted ALL. Cassandra returned 
 a SyntaxError, which is good, but it seems it was because of a 
 NullPointerException.
 *Howto*
 {code}
 CREATE TABLE foo (k int primary key);
 ALTER TABLE foo WITH caching = {'keys' : 'all', 'rows_per_partition' : ALL};
 {code}
 *Output*
 {code}
 ErrorMessage code=2000 [Syntax error in CQL query] message=Failed parsing 
 statement: [ALTER TABLE foo WITH caching = {'keys' : 'all', 
 'rows_per_partition' : ALL};] reason: NullPointerException null
 {code}





[jira] [Created] (CASSANDRA-8580) AssertionErrors after activating unchecked_tombstone_compaction with leveled compaction

2015-01-08 Thread JIRA
Björn Hachmann created CASSANDRA-8580:
-

 Summary: AssertionErrors after activating 
unchecked_tombstone_compaction with leveled compaction
 Key: CASSANDRA-8580
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8580
 Project: Cassandra
  Issue Type: Bug
Reporter: Björn Hachmann


During our upgrade of Cassandra from version 2.0.7 to 2.1.2 we experienced a 
serious problem regarding the setting unchecked_tombstone_compaction in 
combination with leveled compaction strategy.

In order to prevent tombstone-threshold-warnings we activated the setting for a 
specific table after the upgrade. Some time after that we observed new errors 
in our log files:

INFO  [CompactionExecutor:184] 2014-12-11 12:36:06,597 CompactionTask.java:136 
- Compacting 
[SSTableReader(path='/data/cassandra/data/system/compactions_in_progress/system-compactions_in_progress-ka-1848-Data.db'),
 SSTableReader(path='/
data/cassandra/data/system/compactions_in_progress/system-compactions_in_progress-ka-1847-Data.db'),
 
SSTableReader(path='/data/cassandra/data/system/compactions_in_progress/system-compactions_in_progress-ka-1845-Data.db'),
 SSTableReader
(path='/data/cassandra/data/system/compactions_in_progress/system-compactions_in_progress-ka-1846-Data.db')]
ERROR [CompactionExecutor:183] 2014-12-11 12:36:06,613 CassandraDaemon.java:153 
- Exception in thread Thread[CompactionExecutor:183,1,main]
java.lang.AssertionError: 
/data/cassandra/data/metrigo_prod/new_user_data/metrigo_prod-new_user_data-tmplink-ka-705732-Data.db
at 
org.apache.cassandra.io.sstable.SSTableReader.getApproximateKeyCount(SSTableReader.java:243)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:146)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
~[apache-cassandra-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:75)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:232)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
~[na:1.7.0_45]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
~[na:1.7.0_45]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
~[na:1.7.0_45]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_45]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]

Obviously that error aborted the compaction and after some time the number of 
pending compactions became very high on every node. Of course, this in turn had 
a negative impact on several other metrics.

After reverting the setting we had to restart all nodes. After that compactions 
could finish again and the pending compactions could be worked off.





[jira] [Created] (CASSANDRA-8581) Null pointer in cassandra.hadoop.ColumnFamilyRecordWriter

2015-01-08 Thread xiangdong Huang (JIRA)
xiangdong Huang created CASSANDRA-8581:
--

 Summary: Null pointer in cassandra.hadoop.ColumnFamilyRecordWriter
 Key: CASSANDRA-8581
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8581
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: xiangdong Huang
 Attachments: 屏幕快照 2015-01-08 下午7.59.29.png, 屏幕快照 2015-01-08 
下午8.01.15.png, 屏幕快照 2015-01-08 下午8.07.23.png

When I run examples/hadoop_word_count, I find that ReducerToFilesystem works 
correctly, but when I use ReducerToCassandra, the program ends up calling 
loadYaml().

The reason is that the program catches an exception at line 196 of 
ColumnFamilyRecordWriter.java, and while checking why the exception occurred, 
it loads the YAML to check whether the disk is broken.

However, the exception is a NullPointerException, because the client was never 
initialized. So we need a check for whether the client is null.
(The exception, the original code, and the fixed code are in the attachments.)
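The suggested fix reduces to guarding the failure path against an uninitialized client. A minimal sketch of the pattern, under the assumption that cleanup should simply be skipped when there is nothing to clean up ({{Client}} and {{closeQuietly}} are illustrative names, not Cassandra's Hadoop API):

{noformat}
/** Sketch of the suggested guard: skip cleanup when the client was never set. */
final class SafeCloser
{
    interface Client { void close(); }

    /** Returns true if a client was actually closed, false if there was none. */
    static boolean closeQuietly(Client client)
    {
        if (client == null)
            return false;     // never initialized: nothing to close, and no NPE
        client.close();
        return true;
    }
}
{noformat}

With such a guard in place, the error handler can report the original failure instead of masking it behind a NullPointerException from the cleanup itself.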





[jira] [Updated] (CASSANDRA-8355) NPE when passing wrong argument in ALTER TABLE statement

2015-01-08 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-8355:
--
Attachment: CASSANDRA-8355.txt

The patch fixes the problem in the {{Cql.g}} file and adds a unit test to 
verify the behavior.

 NPE when passing wrong argument in ALTER TABLE statement
 

 Key: CASSANDRA-8355
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8355
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.1.2
Reporter: Pierre Laporte
Assignee: Benjamin Lerer
Priority: Minor
 Fix For: 2.1.3

 Attachments: CASSANDRA-8355.txt


 When I tried to change the caching strategy of a table, I provided a wrong 
 argument {{'rows_per_partition' : ALL}} with unquoted ALL. Cassandra returned 
 a SyntaxError, which is good, but it seems it was because of a 
 NullPointerException.
 *Howto*
 {code}
 CREATE TABLE foo (k int primary key);
 ALTER TABLE foo WITH caching = {'keys' : 'all', 'rows_per_partition' : ALL};
 {code}
 *Output*
 {code}
 ErrorMessage code=2000 [Syntax error in CQL query] message=Failed parsing 
 statement: [ALTER TABLE foo WITH caching = {'keys' : 'all', 
 'rows_per_partition' : ALL};] reason: NullPointerException null
 {code}





[jira] [Commented] (CASSANDRA-8371) DateTieredCompactionStrategy is always compacting

2015-01-08 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14269264#comment-14269264
 ] 

mck commented on CASSANDRA-8371:


 Björn Hegerfors, thanks. I'll take a look into getting those diagnostics, 
 hopefully next week.

Apologies for not following up on this!

We've been using DTCS on another table happily all this time.
The original table this issue was reported against remains back on LCS.
And we're still running 2.0.11.

We have other tables that would be a better fit (with the default options) on 
DTCS and would like to switch over before experimenting any more with that 
original table, which seems quite happy with LCS. I've been burnt before 
playing around with the options of brand-new features in C*, so the plan is to 
focus on operational experience with default settings to begin with.

I knew when I entered this issue that the deletes went against what DTCS is 
meant for (although there turned out to be surprisingly many more deletes than 
I knew about), and that entering the issue could at least help others making 
the same mistake to quickly identify their fault. Instead, a healthy discussion 
arose on other aspects of DTCS, educational in itself, but for my part I'd be 
happy to close the issue and let those discussions continue in new issues.


 DateTieredCompactionStrategy is always compacting 
 --

 Key: CASSANDRA-8371
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8371
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: mck
Assignee: Björn Hegerfors
  Labels: compaction, performance
 Attachments: java_gc_counts_rate-month.png, 
 read-latency-recommenders-adview.png, read-latency.png, 
 sstables-recommenders-adviews.png, sstables.png, vg2_iad-month.png


 Running 2.0.11 and having switched a table to 
 [DTCS|https://issues.apache.org/jira/browse/CASSANDRA-6602] we've seen that 
 disk IO and gc count increase, along with the number of reads happening in 
 the compaction hump of cfhistograms.
 Data, and generally performance, looks good, but compactions are always 
 happening, and pending compactions are building up.
 The schema for this is 
 {code}CREATE TABLE search (
   loginid text,
   searchid timeuuid,
   description text,
   searchkey text,
   searchurl text,
   PRIMARY KEY ((loginid), searchid)
 );{code}
 We're sitting on about 82G (per replica) across 6 nodes in 4 DCs.
 CQL executed against this keyspace, and traffic patterns, can be seen in 
 slides 7+8 of https://prezi.com/b9-aj6p2esft/
 Attached are sstables-per-read and read-latency graphs from cfhistograms, and 
 screenshots of our munin graphs as we have gone from STCS, to LCS (week ~44), 
 to DTCS (week ~46).
 These screenshots are also found in the prezi on slides 9-11.
 [~pmcfadin], [~Bj0rn], 
 Can this be a consequence of occasional deleted rows, as is described under 
 (3) in the description of CASSANDRA-6602 ?





[jira] [Issue Comment Deleted] (CASSANDRA-8371) DateTieredCompactionStrategy is always compacting

2015-01-08 Thread mck (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-8371:
---
Comment: was deleted

(was:  Björn Hegerfors, thanks. I'll take a look into getting those 
diagnostics, hopefully next week.

Apologies for not following up on this!

We've been using DTCS on another table happily all this time.
The original table this issue was reported against remains back on LCS.
And we're still running 2.0.11.

We have other tables that would be a better fit (with the default options) on 
DTCS and would like to switch over before experimenting any more with that 
original table, which seems quite happy with LCS. I've been burnt before 
playing around with the options of brand-new features in C*, so the plan is to 
focus on operational experience with default settings to begin with.

I knew when I entered this issue that the deletes went against what DTCS is 
meant for (although there turned out to be surprisingly many more deletes than 
I knew about), and that entering the issue could at least help others making 
the same mistake to quickly identify their fault. Instead, a healthy discussion 
arose on other aspects of DTCS, educational in itself, but for my part I'd be 
happy to close the issue and let those discussions continue in new issues.
)

 DateTieredCompactionStrategy is always compacting 
 --

 Key: CASSANDRA-8371
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8371
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: mck
Assignee: Björn Hegerfors
  Labels: compaction, performance
 Attachments: java_gc_counts_rate-month.png, 
 read-latency-recommenders-adview.png, read-latency.png, 
 sstables-recommenders-adviews.png, sstables.png, vg2_iad-month.png


 Running 2.0.11 and having switched a table to 
 [DTCS|https://issues.apache.org/jira/browse/CASSANDRA-6602] we've seen that 
 disk IO and gc count increase, along with the number of reads happening in 
 the compaction hump of cfhistograms.
 Data, and generally performance, looks good, but compactions are always 
 happening, and pending compactions are building up.
 The schema for this is 
 {code}CREATE TABLE search (
   loginid text,
   searchid timeuuid,
   description text,
   searchkey text,
   searchurl text,
   PRIMARY KEY ((loginid), searchid)
 );{code}
 We're sitting on about 82G (per replica) across 6 nodes in 4 DCs.
 CQL executed against this keyspace, and traffic patterns, can be seen in 
 slides 7+8 of https://prezi.com/b9-aj6p2esft/
 Attached are sstables-per-read and read-latency graphs from cfhistograms, and 
 screenshots of our munin graphs as we have gone from STCS, to LCS (week ~44), 
 to DTCS (week ~46).
 These screenshots are also found in the prezi on slides 9-11.
 [~pmcfadin], [~Bj0rn], 
 Can this be a consequence of occasional deleted rows, as is described under 
 (3) in the description of CASSANDRA-6602?





[jira] [Commented] (CASSANDRA-8355) NPE when passing wrong argument in ALTER TABLE statement

2015-01-08 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14269265#comment-14269265
 ] 

Robert Stupp commented on CASSANDRA-8355:
-

Sure

 NPE when passing wrong argument in ALTER TABLE statement
 

 Key: CASSANDRA-8355
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8355
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.1.2
Reporter: Pierre Laporte
Assignee: Benjamin Lerer
Priority: Minor
 Fix For: 2.1.3

 Attachments: CASSANDRA-8355.txt


 When I tried to change the caching strategy of a table, I provided a wrong 
 argument {{'rows_per_partition' : ALL}} with unquoted ALL. Cassandra returned 
 a SyntaxError, which is good, but it seems it was because of a 
 NullPointerException.
 *Howto*
 {code}
 CREATE TABLE foo (k int primary key);
 ALTER TABLE foo WITH caching = {'keys' : 'all', 'rows_per_partition' : ALL};
 {code}
 *Output*
 {code}
 ErrorMessage code=2000 [Syntax error in CQL query] message=Failed parsing 
 statement: [ALTER TABLE foo WITH caching = {'keys' : 'all', 
 'rows_per_partition' : ALL};] reason: NullPointerException null
 {code}
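 For reference, the statement the reporter presumably intended parses once the 
 value is quoted as a string literal:
 {code}
 ALTER TABLE foo WITH caching = {'keys' : 'all', 'rows_per_partition' : 'ALL'};
 {code}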





[jira] [Updated] (CASSANDRA-8355) NPE when passing wrong argument in ALTER TABLE statement

2015-01-08 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-8355:

Reviewer: Robert Stupp

 NPE when passing wrong argument in ALTER TABLE statement
 

 Key: CASSANDRA-8355
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8355
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.1.2
Reporter: Pierre Laporte
Assignee: Benjamin Lerer
Priority: Minor
 Fix For: 2.1.3

 Attachments: CASSANDRA-8355.txt


 When I tried to change the caching strategy of a table, I provided a wrong 
 argument {{'rows_per_partition' : ALL}} with unquoted ALL. Cassandra returned 
 a SyntaxError, which is good, but it seems it was because of a 
 NullPointerException.
 *Howto*
 {code}
 CREATE TABLE foo (k int primary key);
 ALTER TABLE foo WITH caching = {'keys' : 'all', 'rows_per_partition' : ALL};
 {code}
 *Output*
 {code}
 ErrorMessage code=2000 [Syntax error in CQL query] message=Failed parsing 
 statement: [ALTER TABLE foo WITH caching = {'keys' : 'all', 
 'rows_per_partition' : ALL};] reason: NullPointerException null
 {code}





[jira] [Commented] (CASSANDRA-8580) AssertionErrors after activating unchecked_tombstone_compaction with leveled compaction

2015-01-08 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14269279#comment-14269279
 ] 

Marcus Eriksson commented on CASSANDRA-8580:


Could you post more of the log leading up to the exception? I want to see a 
line like ... Compacting 
[SSTableReader(path='/data/cassandra/data/metrigo_prod/new_user_data/metrigo_prod-new_user_data-tmplink-ka-705732-Data.db')]

 AssertionErrors after activating unchecked_tombstone_compaction with leveled 
 compaction
 ---

 Key: CASSANDRA-8580
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8580
 Project: Cassandra
  Issue Type: Bug
Reporter: Björn Hachmann

 During our upgrade of Cassandra from version 2.0.7 to 2.1.2 we experienced a 
 serious problem with the setting unchecked_tombstone_compaction in 
 combination with the leveled compaction strategy.
 In order to prevent tombstone-threshold-warnings we activated the setting for 
 a specific table after the upgrade. Some time after that we observed new 
 errors in our log files:
 INFO  [CompactionExecutor:184] 2014-12-11 12:36:06,597 
 CompactionTask.java:136 - Compacting 
 [SSTableReader(path='/data/cassandra/data/system/compactions_in_progress/system-compactions_in_progress-ka-1848-Data.db'),
  SSTableReader(path='/
 data/cassandra/data/system/compactions_in_progress/system-compactions_in_progress-ka-1847-Data.db'),
  
 SSTableReader(path='/data/cassandra/data/system/compactions_in_progress/system-compactions_in_progress-ka-1845-Data.db'),
  SSTableReader
 (path='/data/cassandra/data/system/compactions_in_progress/system-compactions_in_progress-ka-1846-Data.db')]
 ERROR [CompactionExecutor:183] 2014-12-11 12:36:06,613 
 CassandraDaemon.java:153 - Exception in thread 
 Thread[CompactionExecutor:183,1,main]
 java.lang.AssertionError: 
 /data/cassandra/data/metrigo_prod/new_user_data/metrigo_prod-new_user_data-tmplink-ka-705732-Data.db
 at 
 org.apache.cassandra.io.sstable.SSTableReader.getApproximateKeyCount(SSTableReader.java:243)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:146)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:75)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:232)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_45]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_45]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_45]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_45]
 at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
 
 Obviously that error aborted the compaction and after some time the number of 
 pending compactions became very high on every node. Of course, this in turn 
 had a negative impact on several other metrics.
 After reverting the setting we had to restart all nodes. After that 
 compactions could finish again and the pending compactions could be worked 
 off.





[jira] [Commented] (CASSANDRA-8355) NPE when passing wrong argument in ALTER TABLE statement

2015-01-08 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14269283#comment-14269283
 ] 

Robert Stupp commented on CASSANDRA-8355:
-

Note: the offending NPE is thrown at
{noformat}
java.lang.NullPointerException
at 
org.apache.cassandra.cql3.functions.FunctionCall$Raw.toString(FunctionCall.java:172)
at java.lang.String.valueOf(String.java:2981)
at java.lang.StringBuilder.append(StringBuilder.java:131)
at 
org.apache.cassandra.cql3.CqlParser.convertPropertyMap(CqlParser.java:327)
at org.apache.cassandra.cql3.CqlParser.property(CqlParser.java:12082)
at org.apache.cassandra.cql3.CqlParser.properties(CqlParser.java:11988)
at 
org.apache.cassandra.cql3.CqlParser.alterTableStatement(CqlParser.java:5231)
at org.apache.cassandra.cql3.CqlParser.cqlStatement(CqlParser.java:784)
at org.apache.cassandra.cql3.CqlParser.query(CqlParser.java:365)
at 
org.apache.cassandra.cql3.QueryProcessor.parseStatement(QueryProcessor.java:535)
at 
org.apache.cassandra.cql3.QueryProcessor.getStatement(QueryProcessor.java:510)
at 
org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:251)
{noformat}

Can you investigate whether the same issue can occur in {{unaliasedSelector}} 
(~ line 296 in Cql.g)? On a quick code review and some experiments in cqlsh, I 
think it cannot, but four eyes see more than two ;)

Otherwise +1, without any objections.


 NPE when passing wrong argument in ALTER TABLE statement
 

 Key: CASSANDRA-8355
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8355
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.1.2
Reporter: Pierre Laporte
Assignee: Benjamin Lerer
Priority: Minor
 Fix For: 2.1.3

 Attachments: CASSANDRA-8355.txt


 When I tried to change the caching strategy of a table, I provided a wrong 
 argument {{'rows_per_partition' : ALL}} with unquoted ALL. Cassandra returned 
 a SyntaxError, which is good, but it seems it was because of a 
 NullPointerException.
 *Howto*
 {code}
 CREATE TABLE foo (k int primary key);
 ALTER TABLE foo WITH caching = {'keys' : 'all', 'rows_per_partition' : ALL};
 {code}
 *Output*
 {code}
 ErrorMessage code=2000 [Syntax error in CQL query] message=Failed parsing 
 statement: [ALTER TABLE foo WITH caching = {'keys' : 'all', 
 'rows_per_partition' : ALL};] reason: NullPointerException null
 {code}





[jira] [Updated] (CASSANDRA-7019) Improve tombstone compactions

2015-01-08 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-7019:
---
Summary: Improve tombstone compactions  (was: Major tombstone compaction)

 Improve tombstone compactions
 -

 Key: CASSANDRA-7019
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7019
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
  Labels: compaction
 Fix For: 3.0


 It should be possible to do a major tombstone compaction by including all 
 sstables, but writing them out 1:1, meaning that if you have 10 sstables 
 before, you will have 10 sstables after the compaction with the same data, 
 minus all the expired tombstones.
 We could do this in two ways:
 # a nodetool command that includes _all_ sstables
 # once we detect that an sstable has more than x% (20%?) expired tombstones, 
 we start one of these compactions, and include all overlapping sstables that 
 contain older data.





[jira] [Commented] (CASSANDRA-7272) Add Major Compaction to LCS

2015-01-08 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14269017#comment-14269017
 ] 

Marcus Eriksson commented on CASSANDRA-7272:


[~yarin] doubt it, we only do bug fixes in 2.0

 Add Major Compaction to LCS 
 --

 Key: CASSANDRA-7272
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7272
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: T Jake Luciani
Assignee: Marcus Eriksson
Priority: Minor
  Labels: compaction
 Fix For: 3.0


 LCS has a number of minor issues (maybe major depending on your perspective).
 LCS is primarily used for wide rows so for instance when you repair data in 
 LCS you end up with a copy of an entire repaired row in L0.  Over time if you 
 repair you end up with multiple copies of a row in L0 - L5.  This can make 
 predicting disk usage confusing.  
 Another issue is cleaning up tombstoned data.  If a tombstone lives in level 
 1 and data for the cell lives in level 5 the data will not be reclaimed from 
 disk until the tombstone reaches level 5.
 I propose we add a major compaction for LCS that forces consolidation of 
 data to level 5 to address these.





[jira] [Commented] (CASSANDRA-8528) Add an ExecutionException to the protocol

2015-01-08 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14269014#comment-14269014
 ] 

Robert Stupp commented on CASSANDRA-8528:
-

I'd like to transport the function to the client (i.e. its keyspace, name, 
arg-types) - that's why I called it FunctionException.

In detail (reported to clients via native protocol):
* new error code for function execution failures, with function + message as 
payload
* new error code for broken functions, with function + message as payload
* new error code for 'generic' execution exception

Any objections to adding an {{ExecutionException}}, a 
{{FunctionExecutionException}} that extends it, plus another 
{{BrokenFunctionException}}?
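A minimal sketch of that hierarchy (names and payload fields are assumptions based on the comment above, not the actual patch; the real classes would live in Cassandra's exceptions package and carry protocol error codes):

```java
// Hypothetical sketch of the proposed exception hierarchy.
public class ExceptionHierarchySketch {

    // 'Generic' execution failure reported to the client.
    static class ExecutionException extends RuntimeException {
        ExecutionException(String msg) { super(msg); }
    }

    // Execution failure of a specific UDF: carries keyspace, name, arg types
    // so drivers can report which function failed.
    static class FunctionExecutionException extends ExecutionException {
        final String keyspace;
        final String name;
        final java.util.List<String> argTypes;

        FunctionExecutionException(String keyspace, String name,
                                   java.util.List<String> argTypes, String msg) {
            super(msg);
            this.keyspace = keyspace;
            this.name = name;
            this.argTypes = argTypes;
        }
    }

    // A function whose body no longer loads or compiles at all.
    static class BrokenFunctionException extends ExecutionException {
        BrokenFunctionException(String msg) { super(msg); }
    }

    public static void main(String[] args) {
        ExecutionException e = new FunctionExecutionException(
                "ks", "fn", java.util.List.of("int"), "boom");
        System.out.println(e instanceof ExecutionException);
    }
}
```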

 Add an ExecutionException to the protocol
 -

 Key: CASSANDRA-8528
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8528
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Sylvain Lebresne
Assignee: Robert Stupp
  Labels: client-impacting, protocolv4
 Fix For: 3.0

 Attachments: 8528-001.txt


 With the introduction of UDF, we should add an ExecutionException (or 
 FunctionExecutionException or something like that) to the exceptions that can 
 be sent back to client. We can't guarantee that UDFs won't throw and none of 
 our existing exception is terribly adapted to report such event to the client.





[jira] [Reopened] (CASSANDRA-7272) Add Major Compaction to LCS

2015-01-08 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson reopened CASSANDRA-7272:

  Assignee: Marcus Eriksson

7019 is supposed to improve the tombstone compactions we do when there are no 
other compactions to do - this ticket adds major compaction to LCS

 Add Major Compaction to LCS 
 --

 Key: CASSANDRA-7272
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7272
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: T Jake Luciani
Assignee: Marcus Eriksson
Priority: Minor
  Labels: compaction
 Fix For: 3.0


 LCS has a number of minor issues (maybe major depending on your perspective).
 LCS is primarily used for wide rows so for instance when you repair data in 
 LCS you end up with a copy of an entire repaired row in L0.  Over time if you 
 repair you end up with multiple copies of a row in L0 - L5.  This can make 
 predicting disk usage confusing.  
 Another issue is cleaning up tombstoned data.  If a tombstone lives in level 
 1 and data for the cell lives in level 5 the data will not be reclaimed from 
 disk until the tombstone reaches level 5.
 I propose we add a major compaction for LCS that forces consolidation of 
 data to level 5 to address these.





[jira] [Commented] (CASSANDRA-7019) Improve tombstone compactions

2015-01-08 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14269022#comment-14269022
 ] 

Marcus Eriksson commented on CASSANDRA-7019:


Updated titles and reopened 7272 - this ticket is about improving the 
single-sstable tombstone compactions while 7019 is adding major compaction to 
LCS

 Improve tombstone compactions
 -

 Key: CASSANDRA-7019
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7019
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
  Labels: compaction
 Fix For: 3.0


 When there are no other compactions to do, we trigger a single-sstable 
 compaction if there is more than X% droppable tombstones in the sstable.
 In this ticket we should try to include overlapping sstables in those 
 compactions to be able to actually drop the tombstones. Might only be doable 
 with LCS (with STCS we would probably end up including all sstables)





[jira] [Updated] (CASSANDRA-8365) CamelCase name is used as index name instead of lowercase

2015-01-08 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-8365:
--
Attachment: CASSANDRA-8365-trunk-unittests-fix.txt

This patch is a fix for the {{CreateIndexStatementTest}} of trunk.

 CamelCase name is used as index name instead of lowercase
 -

 Key: CASSANDRA-8365
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8365
 Project: Cassandra
  Issue Type: Bug
Reporter: Pierre Laporte
Assignee: Benjamin Lerer
Priority: Minor
  Labels: cqlsh, docs
 Fix For: 2.1.3

 Attachments: CASSANDRA-8365-V2.txt, 
 CASSANDRA-8365-trunk-unittests-fix.txt, CASSANDRA-8365.txt


 In cqlsh, when I execute a CREATE INDEX FooBar ... statement, the CamelCase 
 name is used as index name, even though it is unquoted. Trying to quote the 
 index name results in a syntax error.
 However, when I try to delete the index, I have to quote the index name, 
 otherwise I get an invalid-query error telling me that the index (lowercase) 
 does not exist.
 This seems inconsistent. Shouldn't the index name be lowercased before the 
 index is created?
 Here is the code to reproduce the issue:
 {code}
 cqlsh:schemabuilderit> CREATE TABLE IndexTest (a int primary key, b int);
 cqlsh:schemabuilderit> CREATE INDEX FooBar on indextest (b);
 cqlsh:schemabuilderit> DESCRIBE TABLE indextest ;
 CREATE TABLE schemabuilderit.indextest (
     a int PRIMARY KEY,
     b int
 ) ;
 CREATE INDEX FooBar ON schemabuilderit.indextest (b);
 cqlsh:schemabuilderit> DROP INDEX FooBar;
 code=2200 [Invalid query] message=Index 'foobar' could not be found in any 
 of the tables of keyspace 'schemabuilderit'
 {code}
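 As the report implies, the drop only succeeds when the CamelCase name is 
 quoted; a likely workaround until names are lowercased at creation:
 {code}
 cqlsh:schemabuilderit> DROP INDEX "FooBar";
 {code}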





[jira] [Updated] (CASSANDRA-7019) Improve tombstone compactions

2015-01-08 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-7019:
---
Description: 
When there are no other compactions to do, we trigger a single-sstable 
compaction if there is more than X% droppable tombstones in the sstable.

In this ticket we should try to include overlapping sstables in those 
compactions to be able to actually drop the tombstones. Might only be doable 
with LCS (with STCS we would probably end up including all sstables)

  was:
It should be possible to do a major tombstone compaction by including all 
sstables, but writing them out 1:1, meaning that if you have 10 sstables 
before, you will have 10 sstables after the compaction with the same data, 
minus all the expired tombstones.

We could do this in two ways:
# a nodetool command that includes _all_ sstables
# once we detect that an sstable has more than x% (20%?) expired tombstones, we 
start one of these compactions, and include all overlapping sstables that 
contain older data.
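Trigger (2) above amounts to a simple ratio check; a hypothetical sketch (the class name, accessor, and 20% default are assumptions, not actual Cassandra code):

```java
// Hypothetical sketch of trigger (2): start a tombstone compaction once an
// sstable's expired-tombstone ratio crosses a threshold (20% suggested above).
public class TombstoneTrigger {
    static final double DEFAULT_THRESHOLD = 0.20;

    // Returns true when the sstable has enough expired tombstones to justify
    // a compaction that also pulls in overlapping sstables with older data.
    static boolean shouldCompact(double expiredTombstoneRatio) {
        return expiredTombstoneRatio > DEFAULT_THRESHOLD;
    }

    public static void main(String[] args) {
        System.out.println(shouldCompact(0.25));
    }
}
```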


 Improve tombstone compactions
 -

 Key: CASSANDRA-7019
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7019
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
  Labels: compaction
 Fix For: 3.0


 When there are no other compactions to do, we trigger a single-sstable 
 compaction if there is more than X% droppable tombstones in the sstable.
 In this ticket we should try to include overlapping sstables in those 
 compactions to be able to actually drop the tombstones. Might only be doable 
 with LCS (with STCS we would probably end up including all sstables)





[jira] [Commented] (CASSANDRA-7019) Improve tombstone compactions

2015-01-08 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14269142#comment-14269142
 ] 

sankalp kohli commented on CASSANDRA-7019:
--

7019 is adding major compaction to LCS
You mean 7272 :)

 Improve tombstone compactions
 -

 Key: CASSANDRA-7019
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7019
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
  Labels: compaction
 Fix For: 3.0


 When there are no other compactions to do, we trigger a single-sstable 
 compaction if there is more than X% droppable tombstones in the sstable.
 In this ticket we should try to include overlapping sstables in those 
 compactions to be able to actually drop the tombstones. Might only be doable 
 with LCS (with STCS we would probably end up including all sstables)





[jira] [Created] (CASSANDRA-8579) sstablemetadata can't load org.apache.cassandra.tools.SSTableMetadataViewer

2015-01-08 Thread JIRA
Jimmy Mårdell created CASSANDRA-8579:


 Summary: sstablemetadata can't load 
org.apache.cassandra.tools.SSTableMetadataViewer
 Key: CASSANDRA-8579
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8579
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Jimmy Mårdell
Priority: Minor


The sstablemetadata tool only works when running from the source tree; the 
classpath doesn't get set correctly in a deployed environment.

This bug looks to exist in 2.1 as well.





[jira] [Commented] (CASSANDRA-7019) Improve tombstone compactions

2015-01-08 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14269147#comment-14269147
 ] 

Marcus Eriksson commented on CASSANDRA-7019:


haha, thanks! updated

 Improve tombstone compactions
 -

 Key: CASSANDRA-7019
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7019
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
  Labels: compaction
 Fix For: 3.0


 When there are no other compactions to do, we trigger a single-sstable 
 compaction if there is more than X% droppable tombstones in the sstable.
 In this ticket we should try to include overlapping sstables in those 
 compactions to be able to actually drop the tombstones. Might only be doable 
 with LCS (with STCS we would probably end up including all sstables)





[jira] [Comment Edited] (CASSANDRA-7019) Improve tombstone compactions

2015-01-08 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14269022#comment-14269022
 ] 

Marcus Eriksson edited comment on CASSANDRA-7019 at 1/8/15 10:23 AM:
-

Updated titles and reopened 7272 - this ticket is about improving the 
single-sstable tombstone compactions while 7272 is adding major compaction to 
LCS


was (Author: krummas):
Updated titles and reopened 7272 - this ticket is about improving the 
single-sstable tombstone compactions while 7019 is adding major compaction to 
LCS

 Improve tombstone compactions
 -

 Key: CASSANDRA-7019
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7019
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
  Labels: compaction
 Fix For: 3.0


 When there are no other compactions to do, we trigger a single-sstable 
 compaction if there is more than X% droppable tombstones in the sstable.
 In this ticket we should try to include overlapping sstables in those 
 compactions to be able to actually drop the tombstones. Might only be doable 
 with LCS (with STCS we would probably end up including all sstables)





[jira] [Updated] (CASSANDRA-8579) sstablemetadata can't load org.apache.cassandra.tools.SSTableMetadataViewer

2015-01-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Mårdell updated CASSANDRA-8579:
-
Attachment: cassandra-2.1-8579-1.txt
cassandra-2.0-8579-1.txt

 sstablemetadata can't load org.apache.cassandra.tools.SSTableMetadataViewer
 ---

 Key: CASSANDRA-8579
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8579
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Jimmy Mårdell
Priority: Minor
 Attachments: cassandra-2.0-8579-1.txt, cassandra-2.1-8579-1.txt


 The sstablemetadata tool only works when running from the source tree; the 
 classpath doesn't get set correctly in a deployed environment.
 This bug looks to exist in 2.1 as well.





[jira] [Updated] (CASSANDRA-8537) ConcurrentModificationException while executing 'nodetool cleanup'

2015-01-08 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-8537:
---
Attachment: 0001-don-t-use-the-same-cleanupstrategy-for-all-sstables.patch

It seems we reuse the same CleanupStrategy across all threads in the 
multithreaded cleanup; the patch fixes that.
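The failure mode is ArrayList's fail-fast iterator: one thread structurally modifies the shared list while another is iterating it. A minimal, deterministic single-threaded illustration (not Cassandra code; the list contents are made up):

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;

// Structurally modifying an ArrayList while it is being iterated throws
// ConcurrentModificationException. A second thread sharing the collection
// (as with the reused CleanupStrategy) has the same effect, just
// non-deterministically; modifying inside the loop keeps this deterministic.
public class CmeDemo {
    static boolean triggersCme() {
        List<String> indexes = new ArrayList<>(List.of("idx1", "idx2", "idx3"));
        try {
            for (String idx : indexes) {
                indexes.add(idx + "-copy"); // structural modification mid-iteration
            }
        } catch (ConcurrentModificationException e) {
            return true; // the iterator's modCount check fired
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(triggersCme());
    }
}
```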

 ConcurrentModificationException while executing 'nodetool cleanup'
 --

 Key: CASSANDRA-8537
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8537
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: Debian 7.7, Oracle JRE 1.7.0_72
Reporter: Noureddine Chatti
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 2.1.3

 Attachments: 
 0001-don-t-use-the-same-cleanupstrategy-for-all-sstables.patch


 After adding a new node to an existing cluster (7 nodes already running) and 
 waiting a few minutes to be sure that data migration to the new node had 
 completed, I began running nodetool cleanup sequentially on each old node. 
 When I issued this command on the third node, I got a 
 ConcurrentModificationException after a few minutes.
 ~$ nodetool cleanup
 error: null
 -- StackTrace --
 java.util.ConcurrentModificationException
 at java.util.ArrayList$Itr.checkForComodification(Unknown Source)
 at java.util.ArrayList$Itr.next(Unknown Source)
 at 
 org.apache.cassandra.db.index.SecondaryIndexManager.deleteFromIndexes(SecondaryIndexManager.java:476)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$CleanupStrategy$Full.cleanup(CompactionManager.java:833)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.doCleanupOne(CompactionManager.java:704)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.access$400(CompactionManager.java:97)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$5.execute(CompactionManager.java:370)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:267)
 at java.util.concurrent.FutureTask.run(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)





[jira] [Commented] (CASSANDRA-6809) Compressed Commit Log

2015-01-08 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14269301#comment-14269301
 ] 

Aleksey Yeschenko commented on CASSANDRA-6809:
--

[~jasobrown] Are you still interested in reviewing this, or should I reassign 
the reviewer? We kinda want this to happen in 3.0.

 Compressed Commit Log
 -

 Key: CASSANDRA-6809
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6809
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Branimir Lambov
Priority: Minor
  Labels: performance
 Fix For: 3.0

 Attachments: logtest.txt


 It seems an unnecessary oversight that we don't compress the commit log. 
 Doing so should improve throughput, but some care will need to be taken to 
 ensure we use as much of a segment as possible. I propose decoupling the 
 writing of the records from the segments. Basically write into a (queue of) 
 DirectByteBuffer, and have the sync thread compress, say, ~64K chunks every X 
 MB written to the CL (where X is ordinarily CLS size), and then pack as many 
 of the compressed chunks into a CLS as possible.
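The chunking step described above can be sketched with the JDK's Deflater (an illustration only; the chunk size is taken from the proposal, and real code would use Cassandra's own compressor abstraction and pack the compressed chunks into segments):

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

// Hedged sketch of compressing a commit-log buffer in ~64K chunks, roughly as
// the sync thread in the proposal would; Deflater stands in for the codec.
public class ChunkedLogCompressor {
    static final int CHUNK_SIZE = 64 * 1024;

    // Compress one chunk with DEFLATE at the fastest level.
    static byte[] compressChunk(byte[] data, int off, int len) {
        Deflater deflater = new Deflater(Deflater.BEST_SPEED);
        deflater.setInput(data, off, len);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        while (!deflater.finished()) {
            out.write(buf, 0, deflater.deflate(buf));
        }
        deflater.end();
        return out.toByteArray();
    }

    // Walk the buffer chunk by chunk, returning the total compressed size.
    static int compressedSize(byte[] segment) {
        int total = 0;
        for (int off = 0; off < segment.length; off += CHUNK_SIZE) {
            int len = Math.min(CHUNK_SIZE, segment.length - off);
            total += compressChunk(segment, off, len).length;
        }
        return total;
    }

    public static void main(String[] args) {
        byte[] segment = new byte[256 * 1024]; // pretend segment (all zeros)
        System.out.println(compressedSize(segment) < segment.length);
    }
}
```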





[jira] [Commented] (CASSANDRA-8552) Large compactions run out of off-heap RAM

2015-01-08 Thread Alan Boudreault (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14269306#comment-14269306
 ] 

Alan Boudreault commented on CASSANDRA-8552:


I am afraid that I am not able to reproduce this issue :(. My second test of 
replacing/adding a node just finished and everything went well. The server 
mostly uses 4-5G of RAM (75MB free, ~10G cached), but nothing crashes or gets 
killed by the OOM killer. I've seen more than 300-400 pending compaction 
tasks, but they all finished at a certain point without failure. The repair to 
another node during heavy compaction finished properly too.

 Large compactions run out of off-heap RAM
 -

 Key: CASSANDRA-8552
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8552
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Ubuntu 14.4 
 AWS EC2
 12 m1.xlarge nodes [4 cores, 16GB RAM, 1TB storage (251GB Used)]
 Java build 1.7.0_55-b13 and build 1.8.0_25-b17
Reporter: Brent Haines
Assignee: Benedict
Priority: Blocker
 Fix For: 2.1.3

 Attachments: Screen Shot 2015-01-02 at 9.36.11 PM.png, fhandles.log, 
 freelog.log, lsof.txt, meminfo.txt, sysctl.txt, system.log


 We have a large table storing, effectively, event logs, and a pair of 
 denormalized tables for indexing.
 When updating from 2.0 to 2.1 we saw performance improvements, but some 
 random and silent crashes during nightly repairs. We lost a node (totally 
 corrupted) and replaced it. That node has never stabilized -- it simply can't 
 finish the compactions. 
 Smaller compactions finish. Larger compactions, like these two never finish - 
 {code}
 pending tasks: 48
    compaction type   keyspace             table     completed         total    unit   progress
         Compaction       data           stories   16532973358   75977993784   bytes     21.76%
         Compaction       data   stories_by_text   10593780658   38555048812   bytes     27.48%
 Active compaction remaining time :   0h10m51s
 {code}
 We are not getting exceptions and are not running out of heap space. The 
 Ubuntu OOM killer is reaping the process after all of the memory is consumed. 
 We watch memory in the opscenter console and it will grow. If we turn off the 
 OOM killer for the process, it will run until everything else is killed 
 instead and then the kernel panics.
 We have the following settings configured: 
 2G Heap
 512M New
 {code}
 memtable_heap_space_in_mb: 1024
 memtable_offheap_space_in_mb: 1024
 memtable_allocation_type: heap_buffers
 commitlog_total_space_in_mb: 2048
 concurrent_compactors: 1
 compaction_throughput_mb_per_sec: 128
 {code}
 The compaction strategy is leveled (these are read-intensive tables that are 
 rarely updated)
 I have tried every setting, every option and I have the system where the MTBF 
 is about an hour now, but we never finish compacting because there are some 
 large compactions pending. None of the GC tools or settings help because it 
 is not a GC problem. It is an off-heap memory problem.
 We are getting these messages in our syslog 
 {code}
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219527] BUG: Bad page map in 
 process java  pte:0320 pmd:2d6fa5067
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219545] addr:7fb820be3000 
 vm_flags:0870 anon_vma:  (null) mapping:  (null) 
 index:7fb820be3
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219556] CPU: 3 PID: 27344 
 Comm: java Tainted: GB3.13.0-24-generic #47-Ubuntu
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219559]  880028510e40 
 88020d43da98 81715ac4 7fb820be3000
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219565]  88020d43dae0 
 81174183 0320 0007fb820be3
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219568]  8802d6fa5f18 
 0320 7fb820be3000 7fb820be4000
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219572] Call Trace:
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219584]  [81715ac4] 
 dump_stack+0x45/0x56
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219591]  [81174183] 
 print_bad_pte+0x1a3/0x250
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219594]  [81175439] 
 vm_normal_page+0x69/0x80
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219598]  [8117580b] 
 unmap_page_range+0x3bb/0x7f0
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219602]  [81175cc1] 
 unmap_single_vma+0x81/0xf0
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219605]  [81176d39] 
 unmap_vmas+0x49/0x90
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219610]  [8117feec] 
 exit_mmap+0x9c/0x170
 Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219617]  [8110fcf3] 
 ? 

[jira] [Commented] (CASSANDRA-8194) Reading from Auth table should not be in the request path

2015-01-08 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14269312#comment-14269312
 ] 

Aleksey Yeschenko commented on CASSANDRA-8194:
--

bq. My only question would be should we have a MAX_STALE time

We already have it - it's called 'permissions_validity_in_ms', and should be 
honored as that. If we want to refresh periodically, it would probably be more 
tasteful to add another config param, 'permissions_update_interval_in_ms' - or 
something like it. After thinking about it for a while, re-purposing 
'permissions_validity_in_ms' for the refresh rate feels very wrong to me - it 
breaks the existing expectations, and also lies.
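
The non-blocking behaviour being asked for — serve the possibly-stale cached permissions and refresh them off the request path — can be sketched with stdlib types only. This is a hypothetical illustration, not the patch under review; the class name, the synchronous first-load policy, and the single-threaded refresher are all assumptions, and the refresh interval plays the role of the proposed 'permissions_update_interval_in_ms':

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Function;

// Sketch: a cache that never blocks a request on a stale entry. Stale reads
// return the old value and trigger an asynchronous reload in the background.
public class RefreshingCache<K, V> {
    private final long updateIntervalMs;   // analogue of 'permissions_update_interval_in_ms'
    private final Function<K, V> loader;   // the (slow) auth-table read
    private final ExecutorService refresher = Executors.newSingleThreadExecutor();
    private final Map<K, Entry<V>> map = new ConcurrentHashMap<>();

    private static final class Entry<V> {
        final V value;
        final long loadedAt;
        Entry(V value, long loadedAt) { this.value = value; this.loadedAt = loadedAt; }
    }

    public RefreshingCache(long updateIntervalMs, Function<K, V> loader) {
        this.updateIntervalMs = updateIntervalMs;
        this.loader = loader;
    }

    public V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null) {
            // First access has no stale copy to fall back on; load synchronously.
            V v = loader.apply(key);
            map.put(key, new Entry<>(v, System.currentTimeMillis()));
            return v;
        }
        if (System.currentTimeMillis() - e.loadedAt > updateIntervalMs) {
            // Stale: return the old value immediately and refresh off-path.
            // (A real implementation would also deduplicate in-flight reloads.)
            refresher.submit(() ->
                map.put(key, new Entry<>(loader.apply(key), System.currentTimeMillis())));
        }
        return e.value;
    }
}
```

This is essentially what Guava's `refreshAfterWrite` plus an asynchronous `CacheLoader.reload` provide; the sketch just makes the serve-stale-then-refresh control flow explicit.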

 Reading from Auth table should not be in the request path
 -

 Key: CASSANDRA-8194
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8194
 Project: Cassandra
  Issue Type: Improvement
Reporter: Vishy Kasar
Assignee: Aleksey Yeschenko
Priority: Minor
 Fix For: 2.0.12, 3.0

 Attachments: 8194-V2.patch, 8194-V3.txt, 8194-V4.txt, 8194.patch, 
 CacheTest2.java


 We use PasswordAuthenticator and PasswordAuthorizer. The system_auth keyspace 
 has an RF of 10 per DC over 2 DCs. The permissions_validity_in_ms is 5 minutes. 
 We still have a few thousand requests failing each day with the trace below. 
 The reason for this is the read cache request realizing that the cached entry 
 has expired and doing a blocking request to refresh the cache. 
 The cache should be refreshed periodically, and only in the background. The 
 user request should simply look at the cache and not try to refresh it. 
 com.google.common.util.concurrent.UncheckedExecutionException: 
 java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - 
 received only 0 responses.
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2258)
   at com.google.common.cache.LocalCache.get(LocalCache.java:3990)
   at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3994)
   at 
 com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4878)
   at 
 org.apache.cassandra.service.ClientState.authorize(ClientState.java:292)
   at 
 org.apache.cassandra.service.ClientState.ensureHasPermission(ClientState.java:172)
   at 
 org.apache.cassandra.service.ClientState.hasAccess(ClientState.java:165)
   at 
 org.apache.cassandra.service.ClientState.hasColumnFamilyAccess(ClientState.java:149)
   at 
 org.apache.cassandra.cql3.statements.ModificationStatement.checkAccess(ModificationStatement.java:75)
   at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:102)
   at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:113)
   at 
 org.apache.cassandra.thrift.CassandraServer.execute_cql3_query(CassandraServer.java:1735)
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4162)
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4150)
   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
   at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:206)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
   at java.lang.Thread.run(Thread.java:722)
 Caused by: java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - 
 received only 0 responses.
   at org.apache.cassandra.auth.Auth.selectUser(Auth.java:256)
   at org.apache.cassandra.auth.Auth.isSuperuser(Auth.java:84)
   at 
 org.apache.cassandra.auth.AuthenticatedUser.isSuper(AuthenticatedUser.java:50)
   at 
 org.apache.cassandra.auth.CassandraAuthorizer.authorize(CassandraAuthorizer.java:68)
   at org.apache.cassandra.service.ClientState$1.load(ClientState.java:278)
   at org.apache.cassandra.service.ClientState$1.load(ClientState.java:275)
   at 
 com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3589)
   at 
 com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2374)
   at 
 com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2337)
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2252)
   ... 19 more
 Caused by: org.apache.cassandra.exceptions.ReadTimeoutException: Operation 
 timed out - received only 0 responses.
   at 

[jira] [Issue Comment Deleted] (CASSANDRA-8194) Reading from Auth table should not be in the request path

2015-01-08 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-8194:
-
Comment: was deleted

(was: bq. My only question would be should we have a MAX_STALE time

We already have it - it's called 'permissions_validity_in_ms', and should be 
honored as that. If we want to refresh periodically, it would probably be more 
tasteful to add another config param, 'permissions_update_interval_in_ms' - or 
something like it. After thinking about it for a while, re-purposing 
'permissions_validity_in_ms' for the refresh rate feels very wrong to me - it 
breaks the existing expectations, and also lies.)

 Reading from Auth table should not be in the request path
 -

 Key: CASSANDRA-8194
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8194
 Project: Cassandra
  Issue Type: Improvement
Reporter: Vishy Kasar
Assignee: Aleksey Yeschenko
Priority: Minor
 Fix For: 2.0.12, 3.0

 Attachments: 8194-V2.patch, 8194-V3.txt, 8194-V4.txt, 8194.patch, 
 CacheTest2.java


 We use PasswordAuthenticator and PasswordAuthorizer. The system_auth keyspace 
 has an RF of 10 per DC over 2 DCs. The permissions_validity_in_ms is 5 minutes. 
 We still have a few thousand requests failing each day with the trace below. 
 The reason for this is the read cache request realizing that the cached entry 
 has expired and doing a blocking request to refresh the cache. 
 The cache should be refreshed periodically, and only in the background. The 
 user request should simply look at the cache and not try to refresh it. 
 com.google.common.util.concurrent.UncheckedExecutionException: 
 java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - 
 received only 0 responses.
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2258)
   at com.google.common.cache.LocalCache.get(LocalCache.java:3990)
   at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3994)
   at 
 com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4878)
   at 
 org.apache.cassandra.service.ClientState.authorize(ClientState.java:292)
   at 
 org.apache.cassandra.service.ClientState.ensureHasPermission(ClientState.java:172)
   at 
 org.apache.cassandra.service.ClientState.hasAccess(ClientState.java:165)
   at 
 org.apache.cassandra.service.ClientState.hasColumnFamilyAccess(ClientState.java:149)
   at 
 org.apache.cassandra.cql3.statements.ModificationStatement.checkAccess(ModificationStatement.java:75)
   at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:102)
   at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:113)
   at 
 org.apache.cassandra.thrift.CassandraServer.execute_cql3_query(CassandraServer.java:1735)
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4162)
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4150)
   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
   at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:206)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
   at java.lang.Thread.run(Thread.java:722)
 Caused by: java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - 
 received only 0 responses.
   at org.apache.cassandra.auth.Auth.selectUser(Auth.java:256)
   at org.apache.cassandra.auth.Auth.isSuperuser(Auth.java:84)
   at 
 org.apache.cassandra.auth.AuthenticatedUser.isSuper(AuthenticatedUser.java:50)
   at 
 org.apache.cassandra.auth.CassandraAuthorizer.authorize(CassandraAuthorizer.java:68)
   at org.apache.cassandra.service.ClientState$1.load(ClientState.java:278)
   at org.apache.cassandra.service.ClientState$1.load(ClientState.java:275)
   at 
 com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3589)
   at 
 com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2374)
   at 
 com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2337)
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2252)
   ... 19 more
 Caused by: org.apache.cassandra.exceptions.ReadTimeoutException: Operation 
 timed out - received only 0 responses.
   at org.apache.cassandra.service.ReadCallback.get(ReadCallback.java:105)
   

[jira] [Commented] (CASSANDRA-8194) Reading from Auth table should not be in the request path

2015-01-08 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14269316#comment-14269316
 ] 

Aleksey Yeschenko commented on CASSANDRA-8194:
--

bq. My only question would be should we have a MAX_STALE time

We already have it - it's called 'permissions_validity_in_ms', and should be 
honored as that. If we want to refresh periodically, it would probably be more 
tasteful to add another config param, 'permissions_update_interval_in_ms' - or 
something like it. After thinking about it for a while, re-purposing 
'permissions_validity_in_ms' for the refresh rate feels very wrong to me - it 
breaks the existing expectations, and also lies.
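The distinction above between a validity bound and a background refresh interval can be sketched with a stdlib-only cache (hypothetical names; Cassandra's actual auth cache is a Guava loading cache, as the traces in this thread show). The request path only reads the map; a scheduled task does all the reloading, so no client request ever blocks on a refresh:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;

// Sketch only: a cache whose entries are reloaded periodically in the
// background, analogous to a 'permissions_update_interval_in_ms' setting.
public class RefreshingCache<K, V> {
    private final ConcurrentHashMap<K, V> cache = new ConcurrentHashMap<>();
    private final ScheduledExecutorService refresher =
            Executors.newSingleThreadScheduledExecutor();
    private final Function<K, V> loader;

    public RefreshingCache(Function<K, V> loader, long updateIntervalMs) {
        this.loader = loader;
        // Background task: reload every cached key on a fixed timer.
        refresher.scheduleAtFixedRate(
                () -> cache.replaceAll((k, old) -> loader.apply(k)),
                updateIntervalMs, updateIntervalMs, TimeUnit.MILLISECONDS);
    }

    // Request path: a synchronous load happens only on the very first
    // access to a key; afterwards reads never block on the loader.
    public V get(K key) {
        return cache.computeIfAbsent(key, loader);
    }

    public void shutdown() {
        refresher.shutdownNow();
    }
}
```

A real implementation would also need the validity bound (drop entries older than `permissions_validity_in_ms` rather than serving them forever), which this sketch omits.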

 Reading from Auth table should not be in the request path
 -

 Key: CASSANDRA-8194
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8194
 Project: Cassandra
  Issue Type: Improvement
Reporter: Vishy Kasar
Assignee: Aleksey Yeschenko
Priority: Minor
 Fix For: 2.0.12, 3.0

 Attachments: 8194-V2.patch, 8194-V3.txt, 8194-V4.txt, 8194.patch, 
 CacheTest2.java


 We use PasswordAuthenticator and PasswordAuthorizer. The system_auth keyspace 
 has an RF of 10 per DC over 2 DCs, and permissions_validity_in_ms is 5 minutes. 
 We still have a few thousand requests failing each day with the trace below. 
 The cause is a cache read noticing that the cached entry has expired and 
 issuing a blocking request to refresh it. 
 The cache should be refreshed periodically, and only in the background; the 
 user request should simply read the cache and never try to refresh it. 
 com.google.common.util.concurrent.UncheckedExecutionException: 
 java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - 
 received only 0 responses.
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2258)
   at com.google.common.cache.LocalCache.get(LocalCache.java:3990)
   at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3994)
   at 
 com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4878)
   at 
 org.apache.cassandra.service.ClientState.authorize(ClientState.java:292)
   at 
 org.apache.cassandra.service.ClientState.ensureHasPermission(ClientState.java:172)
   at 
 org.apache.cassandra.service.ClientState.hasAccess(ClientState.java:165)
   at 
 org.apache.cassandra.service.ClientState.hasColumnFamilyAccess(ClientState.java:149)
   at 
 org.apache.cassandra.cql3.statements.ModificationStatement.checkAccess(ModificationStatement.java:75)
   at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:102)
   at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:113)
   at 
 org.apache.cassandra.thrift.CassandraServer.execute_cql3_query(CassandraServer.java:1735)
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4162)
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4150)
   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
   at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:206)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
   at java.lang.Thread.run(Thread.java:722)
 Caused by: java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - 
 received only 0 responses.
   at org.apache.cassandra.auth.Auth.selectUser(Auth.java:256)
   at org.apache.cassandra.auth.Auth.isSuperuser(Auth.java:84)
   at 
 org.apache.cassandra.auth.AuthenticatedUser.isSuper(AuthenticatedUser.java:50)
   at 
 org.apache.cassandra.auth.CassandraAuthorizer.authorize(CassandraAuthorizer.java:68)
   at org.apache.cassandra.service.ClientState$1.load(ClientState.java:278)
   at org.apache.cassandra.service.ClientState$1.load(ClientState.java:275)
   at 
 com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3589)
   at 
 com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2374)
   at 
 com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2337)
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2252)
   ... 19 more
 Caused by: org.apache.cassandra.exceptions.ReadTimeoutException: Operation 
 timed out - received only 0 responses.
   at 

[jira] [Issue Comment Deleted] (CASSANDRA-8194) Reading from Auth table should not be in the request path

2015-01-08 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-8194:
-
Comment: was deleted

(was: bq. My only question would be should we have a MAX_STALE time

We already have it - it's called 'permissions_validity_in_ms', and should be 
honored as that. If we want to refresh periodically, it would probably be more 
tasteful to add another config param, 'permissions_update_interval_in_ms' - or 
something like it. After thinking about it for a while, re-purposing 
'permissions_validity_in_ms' for the refresh rate feels very wrong to me - it 
breaks the existing expectations, and also lies.)

 Reading from Auth table should not be in the request path
 -

 Key: CASSANDRA-8194
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8194
 Project: Cassandra
  Issue Type: Improvement
Reporter: Vishy Kasar
Assignee: Aleksey Yeschenko
Priority: Minor
 Fix For: 2.0.12, 3.0

 Attachments: 8194-V2.patch, 8194-V3.txt, 8194-V4.txt, 8194.patch, 
 CacheTest2.java


 We use PasswordAuthenticator and PasswordAuthorizer. The system_auth keyspace 
 has an RF of 10 per DC over 2 DCs, and permissions_validity_in_ms is 5 minutes. 
 We still have a few thousand requests failing each day with the trace below. 
 The cause is a cache read noticing that the cached entry has expired and 
 issuing a blocking request to refresh it. 
 The cache should be refreshed periodically, and only in the background; the 
 user request should simply read the cache and never try to refresh it. 
 com.google.common.util.concurrent.UncheckedExecutionException: 
 java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - 
 received only 0 responses.
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2258)
   at com.google.common.cache.LocalCache.get(LocalCache.java:3990)
   at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3994)
   at 
 com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4878)
   at 
 org.apache.cassandra.service.ClientState.authorize(ClientState.java:292)
   at 
 org.apache.cassandra.service.ClientState.ensureHasPermission(ClientState.java:172)
   at 
 org.apache.cassandra.service.ClientState.hasAccess(ClientState.java:165)
   at 
 org.apache.cassandra.service.ClientState.hasColumnFamilyAccess(ClientState.java:149)
   at 
 org.apache.cassandra.cql3.statements.ModificationStatement.checkAccess(ModificationStatement.java:75)
   at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:102)
   at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:113)
   at 
 org.apache.cassandra.thrift.CassandraServer.execute_cql3_query(CassandraServer.java:1735)
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4162)
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4150)
   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
   at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:206)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
   at java.lang.Thread.run(Thread.java:722)
 Caused by: java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - 
 received only 0 responses.
   at org.apache.cassandra.auth.Auth.selectUser(Auth.java:256)
   at org.apache.cassandra.auth.Auth.isSuperuser(Auth.java:84)
   at 
 org.apache.cassandra.auth.AuthenticatedUser.isSuper(AuthenticatedUser.java:50)
   at 
 org.apache.cassandra.auth.CassandraAuthorizer.authorize(CassandraAuthorizer.java:68)
   at org.apache.cassandra.service.ClientState$1.load(ClientState.java:278)
   at org.apache.cassandra.service.ClientState$1.load(ClientState.java:275)
   at 
 com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3589)
   at 
 com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2374)
   at 
 com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2337)
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2252)
   ... 19 more
 Caused by: org.apache.cassandra.exceptions.ReadTimeoutException: Operation 
 timed out - received only 0 responses.
   at org.apache.cassandra.service.ReadCallback.get(ReadCallback.java:105)
   


[jira] [Commented] (CASSANDRA-8535) java.lang.RuntimeException: Failed to rename XXX to YYY

2015-01-08 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14269712#comment-14269712
 ] 

Joshua McKenzie commented on CASSANDRA-8535:


[~shalupov] Is this occurring on the same systems as CASSANDRA-8544?  If so, do 
you still see "failed to rename" errors while running a cluster after disabling 
A/V and Windows Search, or after adding exclusions for Cassandra in those services?

CASSANDRA-8551 was created shortly after this ticket, specifically dealing with 
a renaming issue in compaction on Windows on the 2.1 branch during unit tests.  
If your problem proves to be test-only after addressing A/V and search, I'd 
prefer to close this as a duplicate so as not to cloud the issue, since 8551 is 
more limited in scope.
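The rename path in the trace below funnels through FileUtils.atomicMoveWithFallback. A minimal stdlib sketch of that pattern (an assumption about its general shape, not Cassandra's exact code) tries an atomic rename first and falls back to a plain move; on Windows even the fallback can fail with a FileSystemException when another process holds the file open, which is what this ticket reports:

```java
import java.io.IOException;
import java.nio.file.AtomicMoveNotSupportedException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class MoveUtil {
    // Sketch: prefer an atomic rename; if the filesystem refuses,
    // fall back to a non-atomic move that replaces the target.
    public static void atomicMoveWithFallback(Path from, Path to) throws IOException {
        try {
            Files.move(from, to, StandardCopyOption.ATOMIC_MOVE);
        } catch (AtomicMoveNotSupportedException e) {
            // Non-atomic fallback; this is where a file held open by
            // A/V or indexing services on Windows would still fail.
            Files.move(from, to, StandardCopyOption.REPLACE_EXISTING);
        }
    }
}
```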

 java.lang.RuntimeException: Failed to rename XXX to YYY
 ---

 Key: CASSANDRA-8535
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8535
 Project: Cassandra
  Issue Type: Bug
 Environment: Windows 2008 X64
Reporter: Leonid Shalupov
Assignee: Joshua McKenzie

 {code}
 java.lang.RuntimeException: Failed to rename 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-tmp-ka-5-Index.db
  to 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-ka-5-Index.db
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:170) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:154) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.rename(SSTableWriter.java:569) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.rename(SSTableWriter.java:561) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.close(SSTableWriter.java:535) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.finish(SSTableWriter.java:470) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finishAndMaybeThrow(SSTableRewriter.java:349)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finish(SSTableRewriter.java:324)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finish(SSTableRewriter.java:304)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:200)
  ~[main/:na]
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:75)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:226)
  ~[main/:na]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_45]
   at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_45]
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_45]
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_45]
   at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
 Caused by: java.nio.file.FileSystemException: 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-tmp-ka-5-Index.db
 -> 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-ka-5-Index.db:
  The process cannot access the file because it is being used by another 
 process.
   at 
 sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86) 
 ~[na:1.7.0_45]
   at 
 sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) 
 ~[na:1.7.0_45]
   at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301) 
 ~[na:1.7.0_45]
   at 
 sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287) 
 ~[na:1.7.0_45]
   at java.nio.file.Files.move(Files.java:1345) ~[na:1.7.0_45]
   at 
 org.apache.cassandra.io.util.FileUtils.atomicMoveWithFallback(FileUtils.java:184)
  ~[main/:na]
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:166) 
 ~[main/:na]
   ... 18 common frames omitted
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8583) Check for Thread.start()

2015-01-08 Thread Robert Stupp (JIRA)
Robert Stupp created CASSANDRA-8583:
---

 Summary: Check for Thread.start()
 Key: CASSANDRA-8583
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8583
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Robert Stupp
Priority: Minor


Old classes sometimes still use 
{noformat}
  new Thread(...).start()
{noformat}
which might be costly.

This ticket is about finding and possibly fixing such code.
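A common fix for the pattern above (a sketch with hypothetical names, not a proposal of the actual patch) is to route such one-off work through a shared ExecutorService, so threads are reused instead of being created and torn down per call:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadReuse {
    // A shared pool amortizes thread creation;
    // "new Thread(...).start()" pays the full setup cost every time.
    private static final ExecutorService POOL = Executors.newCachedThreadPool();

    // Submit instead of new Thread(task).start(); block for the result
    // here only to keep the demo simple.
    public static Integer runAndWait(Callable<Integer> task) {
        try {
            return POOL.submit(task).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void shutdown() {
        POOL.shutdown();
    }
}
```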



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8355) NPE when passing wrong argument in ALTER TABLE statement

2015-01-08 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14269736#comment-14269736
 ] 

Benjamin Lerer commented on CASSANDRA-8355:
---

I had a look at it and ran some tests. I think it is fine.
Thanks for the review.

 NPE when passing wrong argument in ALTER TABLE statement
 

 Key: CASSANDRA-8355
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8355
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.1.2
Reporter: Pierre Laporte
Assignee: Benjamin Lerer
Priority: Minor
 Fix For: 2.1.3

 Attachments: CASSANDRA-8355.txt


 When I tried to change the caching strategy of a table, I provided a wrong 
 argument {{'rows_per_partition' : ALL}} with unquoted ALL. Cassandra returned 
 a SyntaxError, which is good, but it seems it was because of a 
 NullPointerException.
 *Howto*
 {code}
 CREATE TABLE foo (k int primary key);
 ALTER TABLE foo WITH caching = {'keys' : 'all', 'rows_per_partition' : ALL};
 {code}
 *Output*
 {code}
 ErrorMessage code=2000 [Syntax error in CQL query] message=Failed parsing 
 statement: [ALTER TABLE foo WITH caching = {'keys' : 'all', 
 'rows_per_partition' : ALL};] reason: NullPointerException null
 {code}
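For reference, quoting the value makes the statement parse, since the caching map takes string values in CQL (a sketch of the corrected statement from the "Howto" above):

```
ALTER TABLE foo WITH caching = {'keys' : 'all', 'rows_per_partition' : 'ALL'};
```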



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Fix create index statement unit tests on trunk

2015-01-08 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk 9606a17b3 -> 028fd2950


Fix create index statement unit tests on trunk

Patch by Benjamin Lerer; reviewed by Tyler Hobbs as a follow-up for 
CASSANDRA-8365


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/028fd295
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/028fd295
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/028fd295

Branch: refs/heads/trunk
Commit: 028fd2950195479b90d21ee1bd795cf1a9c661e7
Parents: 9606a17
Author: Tyler Hobbs ty...@datastax.com
Authored: Thu Jan 8 11:27:09 2015 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Thu Jan 8 11:27:48 2015 -0600

--
 test/unit/org/apache/cassandra/cql3/CreateIndexStatementTest.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/028fd295/test/unit/org/apache/cassandra/cql3/CreateIndexStatementTest.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/CreateIndexStatementTest.java 
b/test/unit/org/apache/cassandra/cql3/CreateIndexStatementTest.java
index 18e1be5..847466e 100644
--- a/test/unit/org/apache/cassandra/cql3/CreateIndexStatementTest.java
+++ b/test/unit/org/apache/cassandra/cql3/CreateIndexStatementTest.java
@@ -79,7 +79,7 @@ public class CreateIndexStatementTest extends CQLTester
 else
 {
 execute("USE " + KEYSPACE);
-dropIndex("DROP INDEX " + indexName);
+execute("DROP INDEX " + indexName);
 }
 
 assertInvalidMessage("No secondary indexes on the restricted columns 
support the provided operators",



[jira] [Commented] (CASSANDRA-8581) Null pointer in cassandra.hadoop.ColumnFamilyRecoderWriter

2015-01-08 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14269709#comment-14269709
 ] 

Brandon Williams commented on CASSANDRA-8581:
-

Can you attach a patch instead of screenshots of the code?

 Null pointer in cassandra.hadoop.ColumnFamilyRecoderWriter
 --

 Key: CASSANDRA-8581
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8581
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: xiangdong Huang
Assignee: Brandon Williams
  Labels: hadoop
 Fix For: 2.1.3

 Attachments: 屏幕快照 2015-01-08 下午7.59.29.png, 屏幕快照 2015-01-08 
 下午8.01.15.png, 屏幕快照 2015-01-08 下午8.07.23.png


 When I run examples/hadoop_word_count, I find that ReducerToFilesystem works 
 correctly, but when I use ReducerToCassandra the program ends up calling 
 loadYaml().
 The reason is that the program catches an exception at line 196 of 
 ColumnFamilyRecordWriter.java, and while checking why the exception occurred, 
 it calls loadYaml() to see whether the disk is broken.
 However, the exception is a NullPointerException, because the client is not 
 initialized.
 So we need a check to judge whether the client is null.
 (The exception, the original code, and the fixed code are in the attachments.)
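A minimal sketch of the suggested guard, using invented names (ClientNullGuard, describeFailure) since the actual ColumnFamilyRecordWriter fields are not shown in this thread:

```java
// Hypothetical illustration of the reporter's suggestion: before the
// failure handler goes off to inspect the disk (loadYaml), check whether
// the client was ever initialized. Not the actual Cassandra code.
final class ClientNullGuard {
    static String describeFailure(Object client, Exception e) {
        if (client == null) {
            // The NPE case from the report: the disk is fine, the client
            // simply was never set up, so skip the disk-health check.
            return "client not initialized: " + e;
        }
        return "write failed, checking disk health: " + e;
    }
}
```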





[jira] [Resolved] (CASSANDRA-8365) CamelCase name is used as index name instead of lowercase

2015-01-08 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs resolved CASSANDRA-8365.

Resolution: Fixed

Ah, I missed that {{dropIndex()}} was setting the keyspace to {{system}}.  
Thanks for the fix, committed!

 CamelCase name is used as index name instead of lowercase
 -

 Key: CASSANDRA-8365
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8365
 Project: Cassandra
  Issue Type: Bug
Reporter: Pierre Laporte
Assignee: Benjamin Lerer
Priority: Minor
  Labels: cqlsh, docs
 Fix For: 2.1.3

 Attachments: CASSANDRA-8365-V2.txt, 
 CASSANDRA-8365-trunk-unittests-fix.txt, CASSANDRA-8365.txt


 In cqlsh, when I execute a CREATE INDEX FooBar ... statement, the CamelCase 
 name is used as index name, even though it is unquoted. Trying to quote the 
 index name results in a syntax error.
 However, when I try to delete the index, I have to quote the index name, 
 otherwise I get an invalid-query error telling me that the index (lowercase) 
 does not exist.
 This seems inconsistent.  Shouldn't the index name be lowercased before the 
 index is created?
 Here is the code to reproduce the issue:
 {code}
 cqlsh:schemabuilderit> CREATE TABLE IndexTest (a int primary key, b int);
 cqlsh:schemabuilderit> CREATE INDEX FooBar on indextest (b);
 cqlsh:schemabuilderit> DESCRIBE TABLE indextest ;
 CREATE TABLE schemabuilderit.indextest (
     a int PRIMARY KEY,
     b int
 ) ;
 CREATE INDEX "FooBar" ON schemabuilderit.indextest (b);
 cqlsh:schemabuilderit> DROP INDEX FooBar;
 code=2200 [Invalid query] message="Index 'foobar' could not be found in any 
 of the tables of keyspace 'schemabuilderit'"
 {code}





[jira] [Updated] (CASSANDRA-8535) java.lang.RuntimeException: Failed to rename XXX to YYY

2015-01-08 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-8535:
---
Fix Version/s: 2.1.3

 java.lang.RuntimeException: Failed to rename XXX to YYY
 ---

 Key: CASSANDRA-8535
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8535
 Project: Cassandra
  Issue Type: Bug
 Environment: Windows 2008 X64
Reporter: Leonid Shalupov
Assignee: Joshua McKenzie
 Fix For: 2.1.3


 {code}
 java.lang.RuntimeException: Failed to rename 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-tmp-ka-5-Index.db
  to 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-ka-5-Index.db
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:170) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:154) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.rename(SSTableWriter.java:569) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.rename(SSTableWriter.java:561) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.close(SSTableWriter.java:535) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.finish(SSTableWriter.java:470) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finishAndMaybeThrow(SSTableRewriter.java:349)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finish(SSTableRewriter.java:324)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finish(SSTableRewriter.java:304)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:200)
  ~[main/:na]
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:75)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:226)
  ~[main/:na]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_45]
   at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_45]
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_45]
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_45]
   at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
 Caused by: java.nio.file.FileSystemException: 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-tmp-ka-5-Index.db
  -> 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-ka-5-Index.db:
  The process cannot access the file because it is being used by another 
 process.
   at 
 sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86) 
 ~[na:1.7.0_45]
   at 
 sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) 
 ~[na:1.7.0_45]
   at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301) 
 ~[na:1.7.0_45]
   at 
 sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287) 
 ~[na:1.7.0_45]
   at java.nio.file.Files.move(Files.java:1345) ~[na:1.7.0_45]
   at 
 org.apache.cassandra.io.util.FileUtils.atomicMoveWithFallback(FileUtils.java:184)
  ~[main/:na]
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:166) 
 ~[main/:na]
   ... 18 common frames omitted
 {code}





cassandra git commit: Switch CommitLogSegment from RandomAccessFile to nio

2015-01-08 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/trunk 028fd2950 -> 2b4029a76


Switch CommitLogSegment from RandomAccessFile to nio

Patch by jmckenzie; reviewed by belliottsmith for CASSANDRA-8308


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2b4029a7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2b4029a7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2b4029a7

Branch: refs/heads/trunk
Commit: 2b4029a763173af31633274844a4a3de1f73fa99
Parents: 028fd29
Author: Joshua McKenzie jmcken...@apache.org
Authored: Thu Jan 8 11:49:09 2015 -0600
Committer: Joshua McKenzie jmcken...@apache.org
Committed: Thu Jan 8 11:49:09 2015 -0600

--
 CHANGES.txt |  1 +
 .../db/commitlog/CommitLogSegment.java  | 42 +---
 .../org/apache/cassandra/utils/CLibrary.java| 21 +-
 .../unit/org/apache/cassandra/SchemaLoader.java | 11 -
 4 files changed, 57 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2b4029a7/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9f946a3..71ccc58 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0
+ * Switch CommitLogSegment from RandomAccessFile to nio (CASSANDRA-8308)
  * Allow mixing token and partition key restrictions (CASSANDRA-7016)
  * Support index key/value entries on map collections (CASSANDRA-8473)
  * Modernize schema tables (CASSANDRA-8261)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2b4029a7/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java
--
diff --git a/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java 
b/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java
index 185f57a..3383f1e 100644
--- a/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java
+++ b/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java
@@ -23,6 +23,7 @@ import java.io.RandomAccessFile;
 import java.nio.ByteBuffer;
 import java.nio.MappedByteBuffer;
 import java.nio.channels.FileChannel;
+import java.nio.file.StandardOpenOption;
 import java.util.ArrayList;
 import java.util.Collection;
 import java.util.Comparator;
@@ -104,7 +105,7 @@ public class CommitLogSegment
 public final long id;
 
 private final File logFile;
-private final RandomAccessFile logFileAccessor;
+private final FileChannel channel;
 private final int fd;
 
 private final MappedByteBuffer buffer;
@@ -134,7 +135,6 @@ public class CommitLogSegment
 id = getNextId();
 descriptor = new CommitLogDescriptor(id);
 logFile = new File(DatabaseDescriptor.getCommitLogLocation(), 
descriptor.fileName());
-boolean isCreating = true;
 
 try
 {
@@ -147,25 +147,37 @@ public class CommitLogSegment
 logger.debug("Re-using discarded CommitLog segment for {} 
from {}", id, filePath);
 if (!oldFile.renameTo(logFile))
 throw new IOException("Rename from " + filePath + " to 
" + id + " failed");
-isCreating = false;
+}
+else
+{
+logger.debug("Creating new CommitLog segment: " + logFile);
 }
 }
 
-// Open the initial the segment file
-logFileAccessor = new RandomAccessFile(logFile, "rw");
+// Extend or truncate the file size to the standard segment size 
as we may have restarted after a segment
+// size configuration change, leaving incorrectly sized segments 
on disk.
+// NOTE: while we're using RAF to allow extension of file on disk 
w/out sparse, we need to avoid using RAF
+// for grabbing the FileChannel due to FILE_SHARE_DELETE flag bug 
on windows.
+// See: https://bugs.openjdk.java.net/browse/JDK-6357433 and 
CASSANDRA-8308
+if (logFile.length() != 
DatabaseDescriptor.getCommitLogSegmentSize())
+{
+try (RandomAccessFile raf = new RandomAccessFile(logFile, 
"rw"))
+{
+
raf.setLength(DatabaseDescriptor.getCommitLogSegmentSize());
+}
+catch (IOException e)
+{
+throw new FSWriteError(e, logFile);
+}
+}
 
-if (isCreating)
-logger.debug("Creating new commit log segment {}", 
logFile.getPath());
+channel = FileChannel.open(logFile.toPath(), 
StandardOpenOption.WRITE, StandardOpenOption.READ);
 
-// Map the segment, extending or truncating it to the 
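The pattern in the commit above - size the file with a short-lived RandomAccessFile, then open the FileChannel separately - can be sketched in isolation. This is an illustrative simplification, not the committed code; `openSized` and `demoSize` are invented names:

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.file.StandardOpenOption;

final class SegmentOpenSketch {
    static FileChannel openSized(File logFile, long segmentSize) throws IOException {
        // Extend or truncate to the configured segment size first, via a
        // short-lived RandomAccessFile.
        if (logFile.length() != segmentSize) {
            try (RandomAccessFile raf = new RandomAccessFile(logFile, "rw")) {
                raf.setLength(segmentSize);
            }
        }
        // Grab the channel via FileChannel.open(), not RandomAccessFile,
        // to sidestep the Windows FILE_SHARE_DELETE issue (JDK-6357433).
        return FileChannel.open(logFile.toPath(),
                                StandardOpenOption.WRITE, StandardOpenOption.READ);
    }

    // Small demo: returns the size of a freshly sized segment file.
    static long demoSize() {
        try {
            File f = File.createTempFile("seg", ".log");
            f.deleteOnExit();
            try (FileChannel ch = openSized(f, 1024)) {
                return ch.size();
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```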

cassandra git commit: ninja - correct comment in CommitLogSegment

2015-01-08 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/trunk 2b4029a76 -> 39a2410ad


ninja - correct comment in CommitLogSegment


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/39a2410a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/39a2410a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/39a2410a

Branch: refs/heads/trunk
Commit: 39a2410add2c435ea9b221ebcb73cbca9c0892d0
Parents: 2b4029a
Author: Joshua McKenzie jmcken...@apache.org
Authored: Thu Jan 8 11:52:57 2015 -0600
Committer: Joshua McKenzie jmcken...@apache.org
Committed: Thu Jan 8 11:52:57 2015 -0600

--
 src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/39a2410a/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java
--
diff --git a/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java 
b/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java
index 3383f1e..6b40864 100644
--- a/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java
+++ b/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java
@@ -156,7 +156,7 @@ public class CommitLogSegment
 
 // Extend or truncate the file size to the standard segment size 
as we may have restarted after a segment
 // size configuration change, leaving incorrectly sized segments 
on disk.
-// NOTE: while we're using RAF to allow extension of file on disk 
w/out sparse, we need to avoid using RAF
+// NOTE: while we're using RAF to easily adjust file size, we need 
to avoid using RAF
 // for grabbing the FileChannel due to FILE_SHARE_DELETE flag bug 
on windows.
 // See: https://bugs.openjdk.java.net/browse/JDK-6357433 and 
CASSANDRA-8308
 if (logFile.length() != 
DatabaseDescriptor.getCommitLogSegmentSize())



[jira] [Updated] (CASSANDRA-8583) Check for Thread.start()

2015-01-08 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-8583:

Description: 
Old classes sometimes still use 
{noformat}
  new Thread(...).start()
{noformat}
which might be costly.

This ticket is about finding and possibly fixing such code.

Locations in the code worth investigating (IMO). This list is not prioritized - 
it's just the order in which I found Thread.start() calls:
# 
{{org.apache.cassandra.streaming.compress.CompressedInputStream#CompressedInputStream}}
 creates one thread per input stream to decompress in a separate thread. If 
necessary, should be easily replaceable with a thread-pool
# 
{{org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter#SSTableSimpleUnsortedWriter(java.io.File,
 org.apache.cassandra.config.CFMetaData, org.apache.cassandra.dht.IPartitioner, 
long)}} creates one thread per write. If necessary, should be easily 
replaceable with a thread-pool
# {{org.apache.cassandra.streaming.ConnectionHandler.MessageHandler#start}} 
creates one thread. If necessary, should be easily replaceable with a 
thread-pool.
# {{org.apache.cassandra.net.OutboundTcpConnection#handshakeVersion}} creates 
one thread just to implement a timeout. Not sure why it doesn't just use 
{{Socket.setSoTimeout}}.
# 
{{org.apache.cassandra.service.StorageService#forceRepairAsync(java.lang.String,
 org.apache.cassandra.repair.messages.RepairOption)}} creates one thread per 
repair. Not sure whether it's worth investigating this one, since repairs are 
long-running operations.
# {{org.apache.cassandra.db.index.SecondaryIndex#buildIndexAsync}} creates a 
thread. Not sure whether it's worth investigating this one.

Besides these, there are threads used in {{MessagingService}} and for streaming 
(blocking I/O model). These could be changed by using non-blocking I/O - but 
that's a much bigger task with much higher risks.

  was:
Old classes sometimes still use 
{noformat}
  new Thread(...).start()
{noformat}
which might be costly.

This ticket is about finding and possibly fixing such code.


 Check for Thread.start()
 

 Key: CASSANDRA-8583
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8583
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Robert Stupp
Priority: Minor

 Old classes sometimes still use 
 {noformat}
   new Thread(...).start()
 {noformat}
 which might be costly.
 This ticket is about finding and possibly fixing such code.
 Locations in the code worth investigating (IMO). This list is not prioritized - 
 it's just the order in which I found Thread.start() calls:
 # 
 {{org.apache.cassandra.streaming.compress.CompressedInputStream#CompressedInputStream}}
  creates one thread per input stream to decompress in a separate thread. If 
 necessary, should be easily replaceable with a thread-pool
 # 
 {{org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter#SSTableSimpleUnsortedWriter(java.io.File,
  org.apache.cassandra.config.CFMetaData, 
 org.apache.cassandra.dht.IPartitioner, long)}} creates one thread per write. 
 If necessary, should be easily replaceable with a thread-pool
 # {{org.apache.cassandra.streaming.ConnectionHandler.MessageHandler#start}} 
 creates one thread. If necessary, should be easily replaceable with a 
 thread-pool.
 # {{org.apache.cassandra.net.OutboundTcpConnection#handshakeVersion}} creates 
 one thread just to implement a timeout. Not sure why it doesn't just use 
 {{Socket.setSoTimeout}}.
 # 
 {{org.apache.cassandra.service.StorageService#forceRepairAsync(java.lang.String,
  org.apache.cassandra.repair.messages.RepairOption)}} creates one thread per 
 repair. Not sure whether it's worth investigating this one, since repairs 
 are long-running operations.
 # {{org.apache.cassandra.db.index.SecondaryIndex#buildIndexAsync}} creates a 
 thread. Not sure whether it's worth investigating this one.
 Besides these, there are threads used in {{MessagingService}} and for 
 streaming (blocking I/O model). These could be changed by using non-blocking 
 I/O - but that's a much bigger task with much higher risks.
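For items 1-3 in the list above, the proposed replacement might look roughly like this sketch (not Cassandra code; the pool size and the doubling task are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

final class PoolInsteadOfThread {
    // One shared pool instead of spawning a fresh thread per task.
    private static final ExecutorService POOL = Executors.newFixedThreadPool(4);

    static int submitAndWait(int input) {
        try {
            // Before: new Thread(() -> work(input)).start(), paying thread
            // creation cost every time and losing the result. Here the
            // pool reuses threads and the Future carries the result back.
            return POOL.submit(() -> input * 2).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    static void shutdown() {
        POOL.shutdown();
    }
}
```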





[jira] [Created] (CASSANDRA-8584) Add strerror output on failed trySkipCache calls

2015-01-08 Thread Joshua McKenzie (JIRA)
Joshua McKenzie created CASSANDRA-8584:
--

 Summary: Add strerror output on failed trySkipCache calls
 Key: CASSANDRA-8584
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8584
 Project: Cassandra
  Issue Type: Improvement
Reporter: Joshua McKenzie
Assignee: Joshua McKenzie
Priority: Trivial
 Fix For: 2.1.3


Since trySkipCache returns an errno directly, rather than returning -1 and 
setting errno like our other CLibrary calls, it's thread-safe, and we could 
print out more helpful information when we fail to prompt the kernel to skip 
the page cache.  That system call should always succeed unless we have an 
invalid fd, as it's free to ignore us.
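The errno-returning pattern described above can be illustrated with a stand-in. This is not the actual CLibrary/JNA code: trySkipCache here is a fake native call, and the message table substitutes for the C library's strerror():

```java
import java.util.Map;

final class SkipCacheSketch {
    // Stand-in for strerror(): map an error number to a message.
    static final Map<Integer, String> ERRNO_MESSAGES =
            Map.of(0, "Success", 9, "Bad file descriptor");

    // Fake native call: returns the errno directly (thread-safe), instead
    // of returning -1 and setting a global errno. Fails with EBADF (9)
    // for an invalid fd, as a real posix_fadvise wrapper would.
    static int trySkipCache(int fd) {
        return fd < 0 ? 9 : 0;
    }

    static String check(int fd) {
        int errno = trySkipCache(fd);
        if (errno != 0)
            return "trySkipCache failed: "
                    + ERRNO_MESSAGES.getOrDefault(errno, "errno " + errno);
        return "ok";
    }
}
```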





[jira] [Updated] (CASSANDRA-8584) Add strerror output on failed trySkipCache calls

2015-01-08 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-8584:
---
Attachment: 8584_v1.txt

v1 attached.  It warns when trySkipCache fails - it will require some digging if 
we get ourselves into a situation like this, since we only have the integer fd 
at the time of the error and can't print out which file it failed on. Still, it's 
better to know we have a failure during unit tests / runtime than to fail 
silently while believing our system call was working.

 Add strerror output on failed trySkipCache calls
 

 Key: CASSANDRA-8584
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8584
 Project: Cassandra
  Issue Type: Improvement
Reporter: Joshua McKenzie
Assignee: Joshua McKenzie
Priority: Trivial
 Fix For: 2.1.3

 Attachments: 8584_v1.txt


 Since trySkipCache returns an errno directly, rather than returning -1 and 
 setting errno like our other CLibrary calls, it's thread-safe, and we could 
 print out more helpful information when we fail to prompt the kernel to skip 
 the page cache.  That system call should always succeed unless we have an 
 invalid fd, as it's free to ignore us.





[jira] [Updated] (CASSANDRA-8581) Null pointer in cassandra.hadoop.ColumnFamilyRecoderWriter

2015-01-08 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8581:
---
Reproduced In: 2.1.2
Fix Version/s: 2.1.3

 Null pointer in cassandra.hadoop.ColumnFamilyRecoderWriter
 --

 Key: CASSANDRA-8581
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8581
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: xiangdong Huang
  Labels: hadoop
 Fix For: 2.1.3

 Attachments: 屏幕快照 2015-01-08 下午7.59.29.png, 屏幕快照 2015-01-08 
 下午8.01.15.png, 屏幕快照 2015-01-08 下午8.07.23.png


 When I run examples/hadoop_word_count, I find that ReducerToFilesystem works 
 correctly, but when I use ReducerToCassandra the program ends up calling 
 loadYaml().
 The reason is that the program catches an exception at line 196 of 
 ColumnFamilyRecordWriter.java, and while checking why the exception occurred, 
 it calls loadYaml() to see whether the disk is broken.
 However, the exception is a NullPointerException, because the client is not 
 initialized.
 So we need a check to judge whether the client is null.
 (The exception, the original code, and the fixed code are in the attachments.)





[jira] [Updated] (CASSANDRA-8581) Null pointer in cassandra.hadoop.ColumnFamilyRecoderWriter

2015-01-08 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8581:
---
  Tester: Philip Thompson
Assignee: Brandon Williams

 Null pointer in cassandra.hadoop.ColumnFamilyRecoderWriter
 --

 Key: CASSANDRA-8581
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8581
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: xiangdong Huang
Assignee: Brandon Williams
  Labels: hadoop
 Fix For: 2.1.3

 Attachments: 屏幕快照 2015-01-08 下午7.59.29.png, 屏幕快照 2015-01-08 
 下午8.01.15.png, 屏幕快照 2015-01-08 下午8.07.23.png


 When I run examples/hadoop_word_count, I find that ReducerToFilesystem works 
 correctly, but when I use ReducerToCassandra the program ends up calling 
 loadYaml().
 The reason is that the program catches an exception at line 196 of 
 ColumnFamilyRecordWriter.java, and while checking why the exception occurred, 
 it calls loadYaml() to see whether the disk is broken.
 However, the exception is a NullPointerException, because the client is not 
 initialized.
 So we need a check to judge whether the client is null.
 (The exception, the original code, and the fixed code are in the attachments.)





[jira] [Updated] (CASSANDRA-8579) sstablemetadata can't load org.apache.cassandra.tools.SSTableMetadataViewer

2015-01-08 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8579:
---
 Reviewer: Yuki Morishita
Fix Version/s: 2.1.3
   2.0.12

 sstablemetadata can't load org.apache.cassandra.tools.SSTableMetadataViewer
 ---

 Key: CASSANDRA-8579
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8579
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Jimmy Mårdell
Assignee: Jimmy Mårdell
Priority: Minor
 Fix For: 2.0.12, 2.1.3

 Attachments: cassandra-2.0-8579-1.txt, cassandra-2.1-8579-1.txt


 The sstablemetadata tool only works when running from the source tree. The 
 classpath doesn't get set correctly when running on a deployed environment.
 This bug looks to exist in 2.1 as well.





[jira] [Commented] (CASSANDRA-8528) Add an ExecutionException to the protocol

2015-01-08 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14269375#comment-14269375
 ] 

Sylvain Lebresne commented on CASSANDRA-8528:
-

bq. I'd like to transport the function to the client

Fair enough. As I said, I was mostly nitpicking on the naming.

{quote}
* new error code for broken functions
* new error code for 'generic' execution exception
{quote}

I don't know, I'd rather not over-engineer it.

For broken functions, this is a very rare edge case (especially now that we don't 
load classes from the classpath), so it's perfectly fine imho to just reuse 
{{FunctionExecutionException}} (or, possibly even better, change the code so it 
checks for broken functions at query validation and throws an IRE).

Regarding a generic execution exception, I didn't say I had something in mind 
for that, only that a more generic name could be handy in the future. But it may 
not be, and I don't think we should add something that may never prove useful. 
We can always do it later if we do find a use for it.


 Add an ExecutionException to the protocol
 -

 Key: CASSANDRA-8528
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8528
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Sylvain Lebresne
Assignee: Robert Stupp
  Labels: client-impacting, protocolv4
 Fix For: 3.0

 Attachments: 8528-001.txt


 With the introduction of UDF, we should add an ExecutionException (or 
 FunctionExecutionException or something like that) to the exceptions that can 
 be sent back to the client. We can't guarantee that UDFs won't throw, and none 
 of our existing exceptions is well suited to reporting such an event to the 
 client.





[jira] [Commented] (CASSANDRA-8581) Null pointer in cassandra.hadoop.ColumnFamilyRecoderWriter

2015-01-08 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14269460#comment-14269460
 ] 

Brandon Williams commented on CASSANDRA-8581:
-

I'm confused by the CNFE for OffsetAwareConfigurationLoader.  That's not a 
class that exists in Cassandra.

 Null pointer in cassandra.hadoop.ColumnFamilyRecoderWriter
 --

 Key: CASSANDRA-8581
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8581
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: xiangdong Huang
Assignee: Brandon Williams
  Labels: hadoop
 Fix For: 2.1.3

 Attachments: 屏幕快照 2015-01-08 下午7.59.29.png, 屏幕快照 2015-01-08 
 下午8.01.15.png, 屏幕快照 2015-01-08 下午8.07.23.png


 When I run examples/hadoop_word_count, I find that ReducerToFilesystem works 
 correctly, but when I use ReducerToCassandra the program ends up calling 
 loadYaml().
 The reason is that the program catches an exception at line 196 of 
 ColumnFamilyRecordWriter.java, and while checking why the exception occurred, 
 it calls loadYaml() to see whether the disk is broken.
 However, the exception is a NullPointerException, because the client is not 
 initialized.
 So we need a check to judge whether the client is null.
 (The exception, the original code, and the fixed code are in the attachments.)





[jira] [Commented] (CASSANDRA-8490) DISTINCT queries with LIMITs or paging are incorrect when partitions are deleted

2015-01-08 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14269347#comment-14269347
 ] 

Sylvain Lebresne commented on CASSANDRA-8490:
-

I don't think the logic in CFS is the one we want because:
* while we don't want to count tombstoned partitions towards the limit, we 
probably shouldn't skip them from the result since they may shadow data from 
another node. 
* we want to skip (not count) partitions that have no live data, but that also 
means partitions with only tombstones, so I think we should use 
{{CF.hasOnlyTombstones(now)}} instead of {{CF.isMarkedForDelete()}}.

So I think the logic should be something along the lines of:
{noformat}
rows.add(new Row(rawRow.key, data));
if (!ignoreTombstonedPartitions || !data.hasOnlyTombstones(filter.timestamp))
matched++;
{noformat}
The rest lgtm, so +1 with that change if we agree on it.
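The counting rule above can be sketched with simplified stand-in types (Row and page here are illustrative, not Cassandra's actual classes): tombstoned partitions stay in the page, since they may shadow data elsewhere, but don't count toward the limit.

```java
import java.util.ArrayList;
import java.util.List;

final class DistinctLimitSketch {
    static final class Row {
        final String key;
        final boolean onlyTombstones;
        Row(String key, boolean onlyTombstones) {
            this.key = key;
            this.onlyTombstones = onlyTombstones;
        }
    }

    static List<Row> page(List<Row> fetched, int limit) {
        List<Row> rows = new ArrayList<>();
        int matched = 0;
        for (Row r : fetched) {
            if (matched >= limit)
                break;
            rows.add(r);              // keep even pure-tombstone partitions
            if (!r.onlyTombstones)    // ...but don't count them
                matched++;
        }
        return rows;
    }
}
```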


 DISTINCT queries with LIMITs or paging are incorrect when partitions are 
 deleted
 

 Key: CASSANDRA-8490
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8490
 Project: Cassandra
  Issue Type: Bug
 Environment: Driver version: 2.1.3.
 Cassandra version: 2.0.11/2.1.2.
Reporter: Frank Limstrand
Assignee: Tyler Hobbs
 Fix For: 2.0.12, 2.1.3

 Attachments: 8490-2.0.txt, 8490-trunk.txt


 Using paging demo code from 
 https://github.com/PatrickCallaghan/datastax-paging-demo
 The code creates and populates a table with 1000 entries and pages through 
 them with setFetchSize set to 100. If we then delete one entry with 'cqlsh':
 {noformat}
 cqlsh:datastax_paging_demo> delete from datastax_paging_demo.products where 
 productId = 'P142'; (The specified productid is number 6 in the resultset.)
 {noformat}
 and run the same query (Select * from) again we get:
 {noformat}
 [com.datastax.paging.Main.main()] INFO  com.datastax.paging.Main - Paging 
 demo took 0 secs. Total Products : 999
 {noformat}
 which is what we would expect.
 If we then change the select statement in dao/ProductDao.java (line 70) 
 from "Select * from " to "Select DISTINCT productid from " we get this result:
 {noformat}
 [com.datastax.paging.Main.main()] INFO  com.datastax.paging.Main - Paging 
 demo took 0 secs. Total Products : 99
 {noformat}
 So it looks like the tombstone stops the paging behaviour. Is this a bug?
 {noformat}
 DEBUG [Native-Transport-Requests:788] 2014-12-16 10:09:13,431 Message.java 
 (line 319) Received: QUERY Select DISTINCT productid from 
 datastax_paging_demo.products, v=2
 DEBUG [Native-Transport-Requests:788] 2014-12-16 10:09:13,434 
 AbstractQueryPager.java (line 98) Fetched 99 live rows
 DEBUG [Native-Transport-Requests:788] 2014-12-16 10:09:13,434 
 AbstractQueryPager.java (line 115) Got result (99) smaller than page size 
 (100), considering pager exhausted
 {noformat}





[jira] [Updated] (CASSANDRA-8548) Nodetool Cleanup - java.lang.AssertionError

2015-01-08 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-8548:
---
Attachment: 0001-make-sure-we-unmark-compacting.patch

 Nodetool Cleanup - java.lang.AssertionError
 ---

 Key: CASSANDRA-8548
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8548
 Project: Cassandra
  Issue Type: Bug
Reporter: Sebastian Estevez
Assignee: Marcus Eriksson
 Fix For: 2.0.12

 Attachments: 0001-make-sure-we-unmark-compacting.patch


 Needed to free up some space on a node but getting the dump below when 
 running nodetool cleanup.
 Tried turning on debug to try to obtain additional details in the logs but 
 nothing gets added to the logs when running cleanup. Added: 
 log4j.logger.org.apache.cassandra.db=DEBUG 
 in log4j-server.properties
 See the stack trace below:
 root@cassandra-019:~# nodetool cleanup
 {code}
 Error occurred during cleanup
 java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException
 at java.util.concurrent.FutureTask.report(FutureTask.java:122)
 at java.util.concurrent.FutureTask.get(FutureTask.java:188)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.performAllSSTableOperation(CompactionManager.java:228)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.performCleanup(CompactionManager.java:266)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.forceCleanup(ColumnFamilyStore.java:1112)
 at 
 org.apache.cassandra.service.StorageService.forceKeyspaceCleanup(StorageService.java:2162)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
 at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
 at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
 at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
 at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
 at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
 at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
 at sun.reflect.GeneratedMethodAccessor64.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
 at sun.rmi.transport.Transport$1.run(Transport.java:177)
 at sun.rmi.transport.Transport$1.run(Transport.java:174)
 at java.security.AccessController.doPrivileged(Native Method)
 at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
 at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
 Caused by: java.lang.IllegalArgumentException
 at java.nio.Buffer.limit(Buffer.java:267)
 at 
 org.apache.cassandra.io.compress.CompressedRandomAccessReader.decompressChunk(CompressedRandomAccessReader.java:108)

[jira] [Updated] (CASSANDRA-8448) Comparison method violates its general contract in AbstractEndpointSnitch

2015-01-08 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-8448:

Attachment: 8448.txt

I see, thanks.  So something like this should fix it?
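For context, TimSort throws "Comparison method violates its general contract!" when a Comparator does not define a consistent total order (e.g. it is not transitive, or sgn(compare(a,b)) != -sgn(compare(b,a))). A minimal, hypothetical illustration of how a floating-point score comparator can break the contract, and the standard fix; this is only a sketch, not the attached 8448.txt:

```java
public class ComparatorContract {
    // Broken: truncating a double difference to int makes "nearly equal"
    // scores compare as 0 while farther-apart scores do not, so equality
    // is not transitive (hypothetical illustration, not the snitch code).
    public static int brokenCompare(double a, double b) {
        return (int) (a - b);
    }

    // Fixed: Double.compare defines a consistent total order.
    public static int fixedCompare(double a, double b) {
        return Double.compare(a, b);
    }

    public static void main(String[] args) {
        double a = 1.0, b = 1.6, c = 2.2;
        // broken: a "equals" b and b "equals" c, yet a < c -- TimSort can
        // detect this inconsistency and throw IllegalArgumentException
        System.out.println(brokenCompare(a, b)); // 0
        System.out.println(brokenCompare(b, c)); // 0
        System.out.println(brokenCompare(a, c)); // -1
        // fixed: strictly and consistently ordered
        System.out.println(fixedCompare(a, c)); // -1
    }
}
```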

 Comparison method violates its general contract in AbstractEndpointSnitch
 ---

 Key: CASSANDRA-8448
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8448
 Project: Cassandra
  Issue Type: Bug
Reporter: J.B. Langston
Assignee: Brandon Williams
 Attachments: 8448.txt


 Seen in both 1.2 and 2.0.  The error is occurring here: 
 https://github.com/apache/cassandra/blob/cassandra-2.0/src/java/org/apache/cassandra/locator/AbstractEndpointSnitch.java#L49
 {code}
 ERROR [Thrift:9] 2014-12-04 20:12:28,732 CustomTThreadPoolServer.java (line 
 219) Error occurred during processing of message.
 com.google.common.util.concurrent.UncheckedExecutionException: 
 java.lang.IllegalArgumentException: Comparison method violates its general 
 contract!
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2199)
   at com.google.common.cache.LocalCache.get(LocalCache.java:3932)
   at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3936)
   at 
 com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4806)
   at 
 org.apache.cassandra.service.ClientState.authorize(ClientState.java:352)
   at 
 org.apache.cassandra.service.ClientState.ensureHasPermission(ClientState.java:224)
   at 
 org.apache.cassandra.service.ClientState.hasAccess(ClientState.java:218)
   at 
 org.apache.cassandra.service.ClientState.hasColumnFamilyAccess(ClientState.java:202)
   at 
 org.apache.cassandra.thrift.CassandraServer.createMutationList(CassandraServer.java:822)
   at 
 org.apache.cassandra.thrift.CassandraServer.batch_mutate(CassandraServer.java:954)
   at com.datastax.bdp.server.DseServer.batch_mutate(DseServer.java:576)
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandra.java:3922)
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandra.java:3906)
   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
   at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:201)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: java.lang.IllegalArgumentException: Comparison method violates its 
 general contract!
   at java.util.TimSort.mergeHi(TimSort.java:868)
   at java.util.TimSort.mergeAt(TimSort.java:485)
   at java.util.TimSort.mergeCollapse(TimSort.java:410)
   at java.util.TimSort.sort(TimSort.java:214)
   at java.util.TimSort.sort(TimSort.java:173)
   at java.util.Arrays.sort(Arrays.java:659)
   at java.util.Collections.sort(Collections.java:217)
   at 
 org.apache.cassandra.locator.AbstractEndpointSnitch.sortByProximity(AbstractEndpointSnitch.java:49)
   at 
 org.apache.cassandra.locator.DynamicEndpointSnitch.sortByProximityWithScore(DynamicEndpointSnitch.java:157)
   at 
 org.apache.cassandra.locator.DynamicEndpointSnitch.sortByProximityWithBadness(DynamicEndpointSnitch.java:186)
   at 
 org.apache.cassandra.locator.DynamicEndpointSnitch.sortByProximity(DynamicEndpointSnitch.java:151)
   at 
 org.apache.cassandra.service.StorageProxy.getLiveSortedEndpoints(StorageProxy.java:1408)
   at 
 org.apache.cassandra.service.StorageProxy.getLiveSortedEndpoints(StorageProxy.java:1402)
   at 
 org.apache.cassandra.service.AbstractReadExecutor.getReadExecutor(AbstractReadExecutor.java:148)
   at 
 org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1223)
   at 
 org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1165)
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:255)
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:225)
   at org.apache.cassandra.auth.Auth.selectUser(Auth.java:243)
   at org.apache.cassandra.auth.Auth.isSuperuser(Auth.java:84)
   at 
 org.apache.cassandra.auth.AuthenticatedUser.isSuper(AuthenticatedUser.java:50)
   at 
 org.apache.cassandra.auth.CassandraAuthorizer.authorize(CassandraAuthorizer.java:69)
   at org.apache.cassandra.service.ClientState$1.load(ClientState.java:338)
   at org.apache.cassandra.service.ClientState$1.load(ClientState.java:335)
   at 
 

[jira] [Updated] (CASSANDRA-8579) sstablemetadata can't load org.apache.cassandra.tools.SSTableMetadataViewer

2015-01-08 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8579:
---
Assignee: Jimmy Mårdell

 sstablemetadata can't load org.apache.cassandra.tools.SSTableMetadataViewer
 ---

 Key: CASSANDRA-8579
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8579
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Jimmy Mårdell
Assignee: Jimmy Mårdell
Priority: Minor
 Attachments: cassandra-2.0-8579-1.txt, cassandra-2.1-8579-1.txt


 The sstablemetadata tool only works when running from the source tree, because 
 the classpath doesn't get set correctly in a deployed environment.
 This bug appears to exist in 2.1 as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8580) AssertionErrors after activating unchecked_tombstone_compaction with leveled compaction

2015-01-08 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8580:
---
Description: 
During our upgrade of Cassandra from version 2.0.7 to 2.1.2 we experienced a 
serious problem regarding the setting unchecked_tombstone_compaction in 
combination with leveled compaction strategy.

In order to prevent tombstone-threshold-warnings we activated the setting for a 
specific table after the upgrade. Some time after that we observed new errors 
in our log files:
{code}
INFO  [CompactionExecutor:184] 2014-12-11 12:36:06,597 CompactionTask.java:136 
- Compacting 
[SSTableReader(path='/data/cassandra/data/system/compactions_in_progress/system-compactions_in_progress-ka-1848-Data.db'),
 SSTableReader(path='/
data/cassandra/data/system/compactions_in_progress/system-compactions_in_progress-ka-1847-Data.db'),
 
SSTableReader(path='/data/cassandra/data/system/compactions_in_progress/system-compactions_in_progress-ka-1845-Data.db'),
 SSTableReader
(path='/data/cassandra/data/system/compactions_in_progress/system-compactions_in_progress-ka-1846-Data.db')]
ERROR [CompactionExecutor:183] 2014-12-11 12:36:06,613 CassandraDaemon.java:153 
- Exception in thread Thread[CompactionExecutor:183,1,main]
java.lang.AssertionError: 
/data/cassandra/data/metrigo_prod/new_user_data/metrigo_prod-new_user_data-tmplink-ka-705732-Data.db
at 
org.apache.cassandra.io.sstable.SSTableReader.getApproximateKeyCount(SSTableReader.java:243)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:146)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
~[apache-cassandra-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:75)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:232)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
~[na:1.7.0_45]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
~[na:1.7.0_45]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
~[na:1.7.0_45]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_45]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
{code}
Obviously that error aborted the compaction and after some time the number of 
pending compactions became very high on every node. Of course, this in turn had 
a negative impact on several other metrics.

After reverting the setting we had to restart all nodes. After that compactions 
could finish again and the pending compactions could be worked off.

  was:
During our upgrade of Cassandra from version 2.0.7 to 2.1.2 we experienced a 
serious problem regarding the setting unchecked_tombstone_compaction in 
combination with leveled compaction strategy.

In order to prevent tombstone-threshold-warnings we activated the setting for a 
specific table after the upgrade. Some time after that we observed new errors 
in our log files:


[Cassandra Wiki] Update of ContributorsGroup by BrandonWilliams

2015-01-08 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Cassandra Wiki for 
change notification.

The ContributorsGroup page has been changed by BrandonWilliams:
https://wiki.apache.org/cassandra/ContributorsGroup?action=diff&rev1=41&rev2=42

   * JohnSumsion
   * mriou
   * achilleasa
+  * RussellHatch
  


[jira] [Commented] (CASSANDRA-8581) Null pointer in cassandra.hadoop.ColumnFamilyRecoderWriter

2015-01-08 Thread xiangdong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14269582#comment-14269582
 ] 

xiangdong Huang commented on CASSANDRA-8581:


It is a test class. It is in the test/unit folder.

 Null pointer in cassandra.hadoop.ColumnFamilyRecoderWriter
 --

 Key: CASSANDRA-8581
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8581
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: xiangdong Huang
Assignee: Brandon Williams
  Labels: hadoop
 Fix For: 2.1.3

 Attachments: 屏幕快照 2015-01-08 下午7.59.29.png, 屏幕快照 2015-01-08 
 下午8.01.15.png, 屏幕快照 2015-01-08 下午8.07.23.png


 When I run examples/hadoop_word_count, I find that ReducerToFilesystem works 
 correctly, but when I use ReducerToCassandra the program calls loadYaml().
 The reason is that the program catches an exception at line 196 of 
 ColumnFamilyRecordWriter.java. It then checks why the exception occurred, 
 calling loadYaml() to see whether the disk is broken.
 However, the exception is a NullPointerException, because the client is not 
 initialized.
 So we need a check to judge whether the client is null.
 (The exception, the original code, and the fixed code are in the attachments.)
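The null check the reporter asks for can be sketched as a guard ahead of the disk-failure diagnostics. This is a hypothetical outline with illustrative names (classifyFailure is not part of the actual ColumnFamilyRecordWriter API):

```java
public class ClientGuard {
    // Hypothetical sketch of the suggested fix: before running the
    // disk-failure diagnostics (the loadYaml() path) in the catch block,
    // check whether the thrift client was ever initialized. Names are
    // illustrative, not the actual ColumnFamilyRecordWriter fields.
    public static String classifyFailure(Object client, Exception cause) {
        if (client == null) {
            // client never connected: report the real cause instead of
            // probing the disk via loadYaml()
            return "client-not-initialized: " + cause;
        }
        return "possible-disk-failure"; // existing diagnostics would run here
    }

    public static void main(String[] args) {
        System.out.println(classifyFailure(null, new NullPointerException("client is null")));
        System.out.println(classifyFailure(new Object(), new RuntimeException("io error")));
    }
}
```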



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8421) Cassandra 2.1.1 & Cassandra 2.1.2 UDT not returning value for LIST type as UDT

2015-01-08 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14269689#comment-14269689
 ] 

Benjamin Lerer commented on CASSANDRA-8421:
---

I will look into it next week. Sorry for the delay.

 Cassandra 2.1.1 & Cassandra 2.1.2 UDT not returning value for LIST type as UDT
 --

 Key: CASSANDRA-8421
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8421
 Project: Cassandra
  Issue Type: Bug
  Components: API
 Environment: single node cassandra 
Reporter: madheswaran
Assignee: Benjamin Lerer
 Fix For: 3.0, 2.1.3

 Attachments: 8421-unittest.txt, entity_data.csv


 I am using a List whose element type is a UDT.
 UDT:
 {code}
 CREATE TYPE
 fieldmap (
  key text,
  value text
 );
 {code}
 TABLE:
 {code}
 CREATE TABLE entity (
   entity_id uuid PRIMARY KEY,
   begining int,
   domain text,
   domain_type text,
   entity_template_name text,
   field_values list<fieldmap>,
   global_entity_type text,
   revision_time timeuuid,
   status_key int,
   status_name text,
   uuid timeuuid
   ) {code}
 INDEX:
 {code}
 CREATE INDEX entity_domain_idx_1 ON galaxy_dev.entity (domain);
 CREATE INDEX entity_field_values_idx_1 ON galaxy_dev.entity (field_values);
 CREATE INDEX entity_global_entity_type_idx_1 ON galaxy_dev.entity (gen_type );
 {code}
 QUERY
 {code}
 SELECT * FROM entity WHERE status_key > 3 and field_values contains {key: 
 'userName', value: 'Sprint5_22'} and gen_type = 'USER' and domain = 
 'S4_1017.abc.com' allow filtering;
 {code}
 The above query returns values for some rows but not for many others, even 
 though those rows and their data exist.
 Observation:
 If I execute the query with conditions other than field_values, it returns 
 values. I suspect the problem is with LIST of UDT.
 I have a single-node Cassandra DB. Please let me know why Cassandra shows 
 this strange behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8548) Nodetool Cleanup - java.lang.AssertionError

2015-01-08 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14269384#comment-14269384
 ] 

Marcus Eriksson commented on CASSANDRA-8548:


This looks like it is also from 2.0, performAllSSTableOperation was removed in 
2.1?

 Nodetool Cleanup - java.lang.AssertionError
 ---

 Key: CASSANDRA-8548
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8548
 Project: Cassandra
  Issue Type: Bug
Reporter: Sebastian Estevez
Assignee: Marcus Eriksson


[jira] [Commented] (CASSANDRA-8546) RangeTombstoneList becoming bottleneck on tombstone heavy tasks

2015-01-08 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14269434#comment-14269434
 ] 

Sylvain Lebresne commented on CASSANDRA-8546:
-

bq. Until then though the GapList change could be a minimal impact fix

Maybe, but how minimal can still be somewhat debated. I'll admit that I do feel 
like this is a bit of an edge case (tombstone heavy + reverse queries), and so 
while I'm 100% on board with fixing it eventually, my bar for risking other 
regressions is not terribly high. Typically, I had never heard of Brownies 
Collections, and a very quick look at their change log shows that the last two 
versions fixed critical bugs in GapList. I mean, I'm sure it's good, but it's 
not entirely risk-free to add to a stable release either.

Please note that I'd have no problem with that patch for 3.0 as a stop-gap 
until we come up with an even better solution, but as it won't be so useful 
there, the question is whether we're fine with committing this to 2.1. 
Personally, I tend to be conservative when in doubt, and so I'm leaning towards 
accepting that performance bottleneck in 2.1 and waiting for 3.0 to fix it. Not 
a terribly strong opinion, however.

 RangeTombstoneList becoming bottleneck on tombstone heavy tasks
 ---

 Key: CASSANDRA-8546
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8546
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: 2.0.11 / 2.1
Reporter: Dominic Letz
Assignee: Joshua McKenzie
 Fix For: 2.1.3

 Attachments: cassandra-2.0.11-8546.txt, cassandra-2.1-8546.txt, 
 rangetombstonelist_compaction.png, rangetombstonelist_mutation.png, 
 rangetombstonelist_read.png, tombstone_test.tgz


 I would like to propose changing the data structure used in the 
 RangeTombstoneList to store and insert tombstone ranges to something with at 
 most O(log N) insert in the middle and near O(1) insert at the start AND end. 
 Here is why:
 With tombstone-heavy workloads, the current implementation of 
 RangeTombstoneList becomes a bottleneck for slice queries.
 Scanning up to the default maximum number of tombstones (100k) can take up 
 to 3 minutes because of how addInternal() scales on insertion of middle and 
 start elements.
 The attached test demonstrates this with 50k deletes from both sides of a 
 range.
 INSERT 1...11
 flush()
 DELETE 1...5
 DELETE 11...6
 While one direction performs ok (~400ms on my notebook):
 {code}
 SELECT * FROM timeseries WHERE name = 'a' ORDER BY timestamp DESC LIMIT 1
 {code}
 The other direction underperforms (~7 seconds on my notebook):
 {code}
 SELECT * FROM timeseries WHERE name = 'a' ORDER BY timestamp ASC LIMIT 1
 {code}
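The asymmetry described above comes down to insertion position: appending at the end of an array-backed list is cheap, but every insert at the start or middle shifts the tail. A small, hypothetical Java sketch (illustrative names, not Cassandra's actual RangeTombstoneList) contrasting the O(n^2) head-insert pattern with a balanced-tree structure of the kind the ticket proposes:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.NavigableMap;
import java.util.TreeMap;

public class MiddleInsertCost {
    // Head-inserts into an ArrayList: each insert shifts every existing
    // element, so n inserts cost O(n^2) element moves in total -- the
    // pattern the ticket describes for addInternal() on middle/start
    // insertions (illustrative sketch only).
    public static List<Integer> headInsertList(int n) {
        List<Integer> list = new ArrayList<>();
        for (int i = 0; i < n; i++) list.add(0, i); // O(i) shift each time
        return list;
    }

    // The same logical sequence kept in a balanced tree: O(log n) per
    // insert regardless of position, the bound the ticket asks for.
    public static NavigableMap<Integer, Integer> headInsertTree(int n) {
        NavigableMap<Integer, Integer> tree = new TreeMap<>();
        for (int i = 0; i < n; i++) tree.put(-i, i); // always a "head" key
        return tree;
    }

    public static void main(String[] args) {
        int n = 50_000;
        long t0 = System.nanoTime();
        headInsertList(n);
        long listMs = (System.nanoTime() - t0) / 1_000_000;
        t0 = System.nanoTime();
        headInsertTree(n);
        long treeMs = (System.nanoTime() - t0) / 1_000_000;
        System.out.println("ArrayList head-inserts: " + listMs + " ms");
        System.out.println("TreeMap inserts:        " + treeMs + " ms");
    }
}
```

As n grows, the ArrayList loop slows quadratically while the TreeMap loop stays near-linear, which mirrors the reversed-slice query being the slow direction.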



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8582) Descriptor.fromFilename seems broken for BIG format

2015-01-08 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-8582:

Assignee: T Jake Luciani

 Descriptor.fromFilename seems broken for BIG format
 ---

 Key: CASSANDRA-8582
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8582
 Project: Cassandra
  Issue Type: Bug
Reporter: Benjamin Lerer
Assignee: T Jake Luciani

 The problem can be reproduced in {{DescriptorTest}} by adding the following 
 unit test:
 {code}
 @Test
 public void testFromFileNameWithBIGFormat()
 {
 checkFromFilename(new Descriptor(tempDataDir, ksname, cfname, 1, 
 Descriptor.Type.TEMP, SSTableFormat.Type.BIG), false);
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8281) CQLSSTableWriter close does not work

2015-01-08 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-8281:
--
Attachment: CASSANDRA-8281-V2-trunk.txt
CASSANDRA-8281-V2-2.1.txt

Both patches make sure that {{Keyspace.setInitialized()}} is called and that 
the commit log is shut down in {{CQLSSTableWriter}}.

For trunk, the patch also makes sure that {{CQLSSTableWriter}} is not impacted 
by CASSANDRA-8582 and that an exception in the writer does not prevent the 
{{CQLSSTableWriter}} from closing.

 CQLSSTableWriter close does not work
 

 Key: CASSANDRA-8281
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8281
 Project: Cassandra
  Issue Type: Bug
  Components: API
 Environment: Cassandra 2.1.1
Reporter: Xu Zhongxing
Assignee: Benjamin Lerer
 Fix For: 2.1.3

 Attachments: CASSANDRA-8281-V2-2.1.txt, CASSANDRA-8281-V2-trunk.txt, 
 CASSANDRA-8281.txt


 I called CQLSSTableWriter.close(), but the program still cannot exit, while 
 the same code works fine on Cassandra 2.0.10.
 It seems that CQLSSTableWriter cannot be closed and blocks the program from 
 exiting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8548) Nodetool Cleanup - java.lang.AssertionError

2015-01-08 Thread Sebastian Estevez (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian Estevez updated CASSANDRA-8548:
-
Reproduced In: 2.0.11  (was: 2.0.11, 2.1.2)

 Nodetool Cleanup - java.lang.AssertionError
 ---

 Key: CASSANDRA-8548
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8548
 Project: Cassandra
  Issue Type: Bug
Reporter: Sebastian Estevez
Assignee: Marcus Eriksson
 Fix For: 2.0.12

 Attachments: 0001-make-sure-we-unmark-compacting.patch



[jira] [Commented] (CASSANDRA-8548) Nodetool Cleanup - java.lang.AssertionError

2015-01-08 Thread Sebastian Estevez (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14269469#comment-14269469
 ] 

Sebastian Estevez commented on CASSANDRA-8548:
--

Thanks Marcus for looking into this. You are right, the stack trace above is 
from 2.0.11 not 2.1.x.

 Nodetool Cleanup - java.lang.AssertionError
 ---

 Key: CASSANDRA-8548
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8548
 Project: Cassandra
  Issue Type: Bug
Reporter: Sebastian Estevez
Assignee: Marcus Eriksson
 Fix For: 2.0.12

 Attachments: 0001-make-sure-we-unmark-compacting.patch


 Needed to free up some space on a node but getting the dump below when 
 running nodetool cleanup.
 Tried turning on debug to try to obtain additional details in the logs but 
 nothing gets added to the logs when running cleanup. Added: 
 log4j.logger.org.apache.cassandra.db=DEBUG 
 in log4j-server.properties
 See the stack trace below:
 root@cassandra-019:~# nodetool cleanup
 {code}Error occurred during cleanup
 java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException
 at java.util.concurrent.FutureTask.report(FutureTask.java:122)
 at java.util.concurrent.FutureTask.get(FutureTask.java:188)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.performAllSSTableOperation(CompactionManager.java:228)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.performCleanup(CompactionManager.java:266)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.forceCleanup(ColumnFamilyStore.java:1112)
 at 
 org.apache.cassandra.service.StorageService.forceKeyspaceCleanup(StorageService.java:2162)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
 at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
 at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
 at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
 at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
 at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
 at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
 at sun.reflect.GeneratedMethodAccessor64.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
 at sun.rmi.transport.Transport$1.run(Transport.java:177)
 at sun.rmi.transport.Transport$1.run(Transport.java:174)
 at java.security.AccessController.doPrivileged(Native Method)
 at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
 at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
 Caused by: java.lang.IllegalArgumentException
 at java.nio.Buffer.limit(Buffer.java:267)
 at 
 

[jira] [Comment Edited] (CASSANDRA-8548) Nodetool Cleanup - java.lang.AssertionError

2015-01-08 Thread Sebastian Estevez (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14264794#comment-14264794
 ] 

Sebastian Estevez edited comment on CASSANDRA-8548 at 1/8/15 3:53 PM:
--

Here's another, almost identical, stack trace from another cluster -- 
(correction, this was still 2.0.11)
$ nodetool cleanup xxx
{code}
Exception in thread "main" java.lang.AssertionError: 
[SSTableReader(path='/mnt/cassandra/data/xxx/network_flows_54402cf35303a0fc3e33/xxx-
...
...
...
xxx-jb-18316-Data.db'), 
SSTableReader(path='/mnt/cassandra/data/xxx/xxx/xxx-xxx-jb-14144-Data.db')]
at 
org.apache.cassandra.db.ColumnFamilyStore$13.call(ColumnFamilyStore.java:2130)
at 
org.apache.cassandra.db.ColumnFamilyStore$13.call(ColumnFamilyStore.java:2127)
at 
org.apache.cassandra.db.ColumnFamilyStore.runWithCompactionsDisabled(ColumnFamilyStore.java:2109)
at 
org.apache.cassandra.db.ColumnFamilyStore.markAllCompacting(ColumnFamilyStore.java:2140)
at 
org.apache.cassandra.db.compaction.CompactionManager.performAllSSTableOperation(CompactionManager.java:215)
at 
org.apache.cassandra.db.compaction.CompactionManager.performCleanup(CompactionManager.java:266)
at 
org.apache.cassandra.db.ColumnFamilyStore.forceCleanup(ColumnFamilyStore.java:1112)
at 
org.apache.cassandra.service.StorageService.forceKeyspaceCleanup(StorageService.java:2156)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
at 
com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
at 
javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
at 
javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
at 
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
at 
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
at 
javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
at sun.reflect.GeneratedMethodAccessor29.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
at sun.rmi.transport.Transport$1.run(Transport.java:177)
at sun.rmi.transport.Transport$1.run(Transport.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
at 
sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{code}
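The attached patch name, 0001-make-sure-we-unmark-compacting.patch, suggests why this assertion fires: `markAllCompacting` asserts that no sstable is already in the compacting set, so an earlier operation that failed to unmark on error leaves the next cleanup tripping the `AssertionError`. A minimal sketch of that mark/unmark pattern (class and method names here are illustrative, not Cassandra's actual implementation):

```java
import java.util.HashSet;
import java.util.Set;

public class CompactingSetSketch {
    // Stand-in for the set of sstables currently marked compacting.
    private final Set<String> compacting = new HashSet<>();

    public synchronized void markAllCompacting(Set<String> sstables) {
        for (String s : sstables) {
            if (!compacting.add(s)) {
                // Mirrors the AssertionError from ColumnFamilyStore.markAllCompacting:
                // something already holds this sstable as compacting.
                throw new AssertionError("already marked compacting: " + s);
            }
        }
    }

    public synchronized void unmarkCompacting(Set<String> sstables) {
        compacting.removeAll(sstables);
    }

    public void runWithCompactionsDisabled(Set<String> sstables, Runnable op) {
        markAllCompacting(sstables);
        try {
            op.run();
        } finally {
            // The fix the patch title implies: always unmark, even when op fails,
            // so a later cleanup does not find stale entries and assert.
            unmarkCompacting(sstables);
        }
    }
}
```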


was (Author: sebastian.este...@datastax.com):
Here's another, almost identical, stack trace from another cluster -- this time 
on 2.1.2
$ nodetool cleanup xxx
{code}
Exception in thread "main" java.lang.AssertionError: 
[SSTableReader(path='/mnt/cassandra/data/xxx/network_flows_54402cf35303a0fc3e33/xxx-
...
...
...
xxx-jb-18316-Data.db'), 
SSTableReader(path='/mnt/cassandra/data/xxx/xxx/xxx-xxx-jb-14144-Data.db')]
at 
org.apache.cassandra.db.ColumnFamilyStore$13.call(ColumnFamilyStore.java:2130)
at 

[jira] [Comment Edited] (CASSANDRA-8548) Nodetool Cleanup - java.lang.AssertionError

2015-01-08 Thread Sebastian Estevez (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14267251#comment-14267251
 ] 

Sebastian Estevez edited comment on CASSANDRA-8548 at 1/8/15 3:54 PM:
--

[~andrew.tolbert] What version of c*? So far we've seen this in 2.0.11


was (Author: sebastian.este...@datastax.com):
[~andrew.tolbert] What version of c*? So far we've seen this in 2.0.11 and 
2.1.2.

 Nodetool Cleanup - java.lang.AssertionError
 ---

 Key: CASSANDRA-8548
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8548
 Project: Cassandra
  Issue Type: Bug
Reporter: Sebastian Estevez
Assignee: Marcus Eriksson
 Fix For: 2.0.12

 Attachments: 0001-make-sure-we-unmark-compacting.patch


 Needed to free up some space on a node but getting the dump below when 
 running nodetool cleanup.
 Tried turning on debug to try to obtain additional details in the logs but 
 nothing gets added to the logs when running cleanup. Added: 
 log4j.logger.org.apache.cassandra.db=DEBUG 
 in log4j-server.properties
 See the stack trace below:
 root@cassandra-019:~# nodetool cleanup
 {code}Error occurred during cleanup
 java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException
 at java.util.concurrent.FutureTask.report(FutureTask.java:122)
 at java.util.concurrent.FutureTask.get(FutureTask.java:188)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.performAllSSTableOperation(CompactionManager.java:228)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.performCleanup(CompactionManager.java:266)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.forceCleanup(ColumnFamilyStore.java:1112)
 at 
 org.apache.cassandra.service.StorageService.forceKeyspaceCleanup(StorageService.java:2162)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
 at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
 at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
 at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
 at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
 at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
 at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
 at sun.reflect.GeneratedMethodAccessor64.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
 at sun.rmi.transport.Transport$1.run(Transport.java:177)
 at sun.rmi.transport.Transport$1.run(Transport.java:174)
 at java.security.AccessController.doPrivileged(Native Method)
 at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
 at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at 
