[jira] [Commented] (CASSANDRA-12281) Gossip blocks on startup when another node is bootstrapping

2016-09-28 Thread Dikang Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531831#comment-15531831
 ] 

Dikang Gu commented on CASSANDRA-12281:
---

How big is your cluster? I see the same problem in one of our big clusters as 
well.

> Gossip blocks on startup when another node is bootstrapping
> ---
>
> Key: CASSANDRA-12281
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12281
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Eric Evans
>Assignee: Joel Knighton
>Priority: Minor
> Attachments: restbase1015-a_jstack.txt
>
>
> In our cluster, normal node startup times (after a drain on shutdown) are 
> less than 1 minute.  However, when another node in the cluster is 
> bootstrapping, the same node startup takes nearly 30 minutes to complete, the 
> apparent result of gossip blocking on pending range calculations.
> {noformat}
> $ nodetool-a tpstats
> Pool Name    Active   Pending  Completed   Blocked  All time blocked
> MutationStage 0 0 1840 0 0
> ReadStage 0 0 2350 0 0
> RequestResponseStage 0 0 53 0 0
> ReadRepairStage 0 0 1 0 0
> CounterMutationStage 0 0 0 0 0
> HintedHandoff 0 0 44 0 0
> MiscStage 0 0 0 0 0
> CompactionExecutor 3 3395 0 0
> MemtableReclaimMemory 0 0 30 0 0
> PendingRangeCalculator 1 2 29 0 0
> GossipStage 1 5602164 0 0
> MigrationStage 0 0 0 0 0
> MemtablePostFlush 0 0 111 0 0
> ValidationExecutor 0 0 0 0 0
> Sampler 0 0 0 0 0
> MemtableFlushWriter 0 0 30 0 0
> InternalResponseStage 0 0 0 0 0
> AntiEntropyStage 0 0 0 0 0
> CacheCleanupExecutor 0 0 0 0 0
> Message type   Dropped
> READ 0
> RANGE_SLICE  0
> _TRACE   0
> MUTATION 0
> COUNTER_MUTATION 0
> REQUEST_RESPONSE 0
> PAGED_RANGE  0
> READ_REPAIR  0
> {noformat}
> A full thread dump is attached, but the relevant bit seems to be here:
> {noformat}
> [ ... ]
> "GossipStage:1" #1801 daemon prio=5 os_prio=0 tid=0x7fe4cd54b000 
> nid=0xea9 waiting on condition [0x7fddcf883000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x0004c1e922c0> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$FairSync)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(ReentrantReadWriteLock.java:943)
>   at 
> org.apache.cassandra.locator.TokenMetadata.updateNormalTokens(TokenMetadata.java:174)
>   at 
> org.apache.cassandra.locator.TokenMetadata.updateNormalTokens(TokenMetadata.java:160)
>   at 
> org.apache.cassandra.service.StorageService.handleStateNormal(StorageService.java:2023)
>   at 
> org.apache.cassandra.service.StorageService.onChange(StorageService.java:1682)
>   at 
> org.apache.cassandra.gms.Gossiper.doOnChangeNotifications(Gossiper.java:1182)
>   at org.apache.cassandra.gms.Gossiper.applyNewStates(Gossiper.java:1165)
>   at 
> org.apache.cassandra.gms.Gossiper.applyStateLocally(Gossiper.java:1128)
>   at 
> 

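For readers following the dump above: a minimal, standalone Java sketch of the contention 
pattern it shows (illustrative only, not Cassandra code; class and thread names are invented). 
A single long-running read-lock holder, standing in here for the pending range calculation, 
stalls a write-lock acquisition on a fair ReentrantReadWriteLock, which is where GossipStage 
is parked inside TokenMetadata.updateNormalTokens. While the writer is parked, every later 
gossip state change queues up behind it, consistent with the GossipStage backlog in the 
tpstats output above.

{code}
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Standalone sketch, not Cassandra code: a long-running reader (standing in for a
// pending range calculation) holds the read lock while a gossip-driven writer,
// like TokenMetadata.updateNormalTokens(), blocks on the fair write lock.
public class FairLockStall
{
    public static void main(String[] args) throws Exception
    {
        ReentrantReadWriteLock lock = new ReentrantReadWriteLock(true); // fair, as in TokenMetadata

        Thread rangeCalc = new Thread(() -> {
            lock.readLock().lock();
            try
            {
                TimeUnit.SECONDS.sleep(5); // simulate an expensive pending range calculation
            }
            catch (InterruptedException e)
            {
                Thread.currentThread().interrupt();
            }
            finally
            {
                lock.readLock().unlock();
            }
        }, "PendingRangeCalculator:1");

        Thread gossip = new Thread(() -> {
            long start = System.nanoTime();
            lock.writeLock().lock(); // this is where GossipStage parks in the thread dump
            try
            {
                System.out.printf("gossip state change applied after %d ms%n",
                                  TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start));
            }
            finally
            {
                lock.writeLock().unlock();
            }
        }, "GossipStage:1");

        rangeCalc.start();
        TimeUnit.MILLISECONDS.sleep(100); // let the reader grab the lock first
        gossip.start();
        rangeCalc.join();
        gossip.join();
    }
}
{code}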
[jira] [Comment Edited] (CASSANDRA-12461) Add hooks to StorageService shutdown

2016-09-28 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531686#comment-15531686
 ] 

Stefania edited comment on CASSANDRA-12461 at 9/29/16 3:46 AM:
---

LGTM, with these two changes:

* I advised you poorly yesterday about replacing the boolean with the operating 
mode: we do not update it atomically, and we do not guard against unwanted 
transitions caused by a bug somewhere, e.g. DRAINED -> DECOMMISSIONED. I did not 
want to handle all of that and audit every caller of {{setMode}}, so I've 
reintroduced the boolean and merely renamed it to {{isShutdown}}.

* I think we could have races if we check that a node is drained and then start 
a service in two separate JMX calls. I moved the check into the SS methods and 
made them synchronized.

|[patch|https://github.com/stef1927/cassandra/commits/12461]|[testall|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-12461-testall/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-12461-dtest/]|

Let me know if you are OK with these two changes; if so, should we also 
back-port to 3.0?


was (Author: stefania):
LGTM, with these two changes:

* I advised you poorly yesterday about replacing the boolean with the operating 
mode: we do not update it atomically, and we do not guard against unwanted 
transitions caused by a bug somewhere, e.g. DRAINED -> DECOMMISSIONED or 
DRAINED -> NORMAL. I did not want to handle all of that and audit every caller 
of {{setMode}}, so I've reintroduced the boolean and merely renamed it to 
{{isShutdown}}.

* I think we could have races if we check that a node is drained and then start 
a service in two separate JMX calls. I moved the check into the SS methods and 
made them synchronized.

|[patch|https://github.com/stef1927/cassandra/commits/12461]|[testall|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-12461-testall/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-12461-dtest/]|

Let me know if you are OK with these two changes; if so, should we also 
back-port to 3.0?

> Add hooks to StorageService shutdown
> 
>
> Key: CASSANDRA-12461
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12461
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Anthony Cozzie
>Assignee: Anthony Cozzie
> Fix For: 3.x
>
> Attachments: 
> 0001-CASSANDRA-12461-add-C-support-for-shutdown-runnables.patch
>
>
> The JVM will usually run shutdown hooks in parallel.  This can lead to 
> synchronization problems between Cassandra, services that depend on it, and 
> services it depends on.  This patch adds some simple support for shutdown 
> hooks to StorageService.
> This should nearly solve CASSANDRA-12011

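As an aside, a hypothetical sketch of the general approach the description implies (invented 
names, not the attached patch): route the extra shutdown work through a single owner that runs 
registered runnables in a defined order, instead of letting each component register its own JVM 
shutdown hook, which the JVM starts in parallel and in no defined order.

{code}
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only, not the attached patch: one JVM shutdown hook runs
// registered sub-hooks sequentially, before and after the owner's own shutdown work.
public class OrderedShutdown
{
    private static final List<Runnable> preShutdownHooks = new ArrayList<>();
    private static final List<Runnable> postShutdownHooks = new ArrayList<>();

    public static synchronized void addPreShutdownHook(Runnable hook)
    {
        preShutdownHooks.add(hook);
    }

    public static synchronized void addPostShutdownHook(Runnable hook)
    {
        postShutdownHooks.add(hook);
    }

    public static void main(String[] args)
    {
        addPreShutdownHook(() -> System.out.println("service that depends on us stops first"));
        addPostShutdownHook(() -> System.out.println("service we depend on stops last"));

        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            preShutdownHooks.forEach(Runnable::run);
            System.out.println("draining the storage service");
            postShutdownHooks.forEach(Runnable::run);
        }, "ordered-shutdown"));
    }
}
{code}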


--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12461) Add hooks to StorageService shutdown

2016-09-28 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531686#comment-15531686
 ] 

Stefania commented on CASSANDRA-12461:
--

LGTM, with these two changes:

* I advised you poorly yesterday about replacing the boolean with the operating 
mode: we do not update it atomically, and we do not guard against unwanted 
transitions caused by a bug somewhere, e.g. DRAINED -> DECOMMISSIONED or 
DRAINED -> NORMAL. I did not want to handle all of that and audit every caller 
of {{setMode}}, so I've reintroduced the boolean and merely renamed it to 
{{isShutdown}}.

* I think we could have races if we check that a node is drained and then start 
a service in two separate JMX calls. I moved the check into the SS methods and 
made them synchronized.

|[patch|https://github.com/stef1927/cassandra/commits/12461]|[testall|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-12461-testall/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-12461-dtest/]|

Let me know if you are OK with these two changes; if so, should we also 
back-port to 3.0?

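To illustrate the race in the second point above, a hypothetical sketch (invented names, not 
the real StorageService API): checking the drained state in one JMX call and starting a 
service in a second call leaves a window in which the state can change, whereas folding the 
check into a synchronized start method closes it.

{code}
// Hypothetical sketch of the race discussed above; names are illustrative,
// not the actual StorageService methods.
public class ServiceControl
{
    private volatile boolean isShutdown = false;

    public synchronized void drain()
    {
        // ... flush memtables, stop transports ...
        isShutdown = true;
    }

    // Racy pattern: a client makes two JMX calls, first the check, then the start.
    public boolean isDrained()
    {
        return isShutdown;
    }

    // Safe pattern: the check lives inside the synchronized start method,
    // so it cannot interleave with drain().
    public synchronized void startNativeTransport()
    {
        if (isShutdown)
            throw new IllegalStateException("Cannot start transports: node is drained");
        // ... start the service ...
    }
}
{code}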
> Add hooks to StorageService shutdown
> 
>
> Key: CASSANDRA-12461
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12461
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Anthony Cozzie
>Assignee: Anthony Cozzie
> Fix For: 3.x
>
> Attachments: 
> 0001-CASSANDRA-12461-add-C-support-for-shutdown-runnables.patch
>
>
> The JVM will usually run shutdown hooks in parallel.  This can lead to 
> synchronization problems between Cassandra, services that depend on it, and 
> services it depends on.  This patch adds some simple support for shutdown 
> hooks to StorageService.
> This should nearly solve CASSANDRA-12011



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12632) Failure in LogTransactionTest.testUnparsableFirstRecord-compression

2016-09-28 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531483#comment-15531483
 ] 

Stefania commented on CASSANDRA-12632:
--

Committed to 3.0 as 5cebd1fb0d60e23ae5a30231e253806d58d0998e and merged into 
trunk.

> Failure in LogTransactionTest.testUnparsableFirstRecord-compression
> ---
>
> Key: CASSANDRA-12632
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12632
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Joel Knighton
>Assignee: Stefania
> Fix For: 3.0.10, 3.10
>
>
> Stacktrace:
> {code}
> junit.framework.AssertionFailedError: 
> [/home/automaton/cassandra/build/test/cassandra/data:161/TransactionLogsTest/mockcf23-73ad523078d311e6985893d33dad3001/mc-1-big-Index.db,
>  
> /home/automaton/cassandra/build/test/cassandra/data:161/TransactionLogsTest/mockcf23-73ad523078d311e6985893d33dad3001/mc-1-big-TOC.txt,
>  
> /home/automaton/cassandra/build/test/cassandra/data:161/TransactionLogsTest/mockcf23-73ad523078d311e6985893d33dad3001/mc-1-big-Filter.db,
>  
> /home/automaton/cassandra/build/test/cassandra/data:161/TransactionLogsTest/mockcf23-73ad523078d311e6985893d33dad3001/mc-1-big-Data.db,
>  
> /home/automaton/cassandra/build/test/cassandra/data:161/TransactionLogsTest/mockcf23-73ad523078d311e6985893d33dad3001/mc_txn_compaction_73af4e00-78d3-11e6-9858-93d33dad3001.log]
>   at 
> org.apache.cassandra.db.lifecycle.LogTransactionTest.assertFiles(LogTransactionTest.java:1228)
>   at 
> org.apache.cassandra.db.lifecycle.LogTransactionTest.assertFiles(LogTransactionTest.java:1196)
>   at 
> org.apache.cassandra.db.lifecycle.LogTransactionTest.testCorruptRecord(LogTransactionTest.java:1040)
>   at 
> org.apache.cassandra.db.lifecycle.LogTransactionTest.testUnparsableFirstRecord(LogTransactionTest.java:988)
> {code}
> Example failure:
> http://cassci.datastax.com/job/cassandra-3.9_testall/89/testReport/junit/org.apache.cassandra.db.lifecycle/LogTransactionTest/testUnparsableFirstRecord_compression/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12632) Failure in LogTransactionTest.testUnparsableFirstRecord-compression

2016-09-28 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-12632:
-
       Resolution: Fixed
         Reviewer: Edward Capriolo
    Fix Version/s: (was: 3.0.x)
                   (was: 3.x)
                   3.10
                   3.0.10
           Status: Resolved  (was: Ready to Commit)

> Failure in LogTransactionTest.testUnparsableFirstRecord-compression
> ---
>
> Key: CASSANDRA-12632
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12632
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Joel Knighton
>Assignee: Stefania
> Fix For: 3.0.10, 3.10
>
>
> Stacktrace:
> {code}
> junit.framework.AssertionFailedError: 
> [/home/automaton/cassandra/build/test/cassandra/data:161/TransactionLogsTest/mockcf23-73ad523078d311e6985893d33dad3001/mc-1-big-Index.db,
>  
> /home/automaton/cassandra/build/test/cassandra/data:161/TransactionLogsTest/mockcf23-73ad523078d311e6985893d33dad3001/mc-1-big-TOC.txt,
>  
> /home/automaton/cassandra/build/test/cassandra/data:161/TransactionLogsTest/mockcf23-73ad523078d311e6985893d33dad3001/mc-1-big-Filter.db,
>  
> /home/automaton/cassandra/build/test/cassandra/data:161/TransactionLogsTest/mockcf23-73ad523078d311e6985893d33dad3001/mc-1-big-Data.db,
>  
> /home/automaton/cassandra/build/test/cassandra/data:161/TransactionLogsTest/mockcf23-73ad523078d311e6985893d33dad3001/mc_txn_compaction_73af4e00-78d3-11e6-9858-93d33dad3001.log]
>   at 
> org.apache.cassandra.db.lifecycle.LogTransactionTest.assertFiles(LogTransactionTest.java:1228)
>   at 
> org.apache.cassandra.db.lifecycle.LogTransactionTest.assertFiles(LogTransactionTest.java:1196)
>   at 
> org.apache.cassandra.db.lifecycle.LogTransactionTest.testCorruptRecord(LogTransactionTest.java:1040)
>   at 
> org.apache.cassandra.db.lifecycle.LogTransactionTest.testUnparsableFirstRecord(LogTransactionTest.java:988)
> {code}
> Example failure:
> http://cassci.datastax.com/job/cassandra-3.9_testall/89/testReport/junit/org.apache.cassandra.db.lifecycle/LogTransactionTest/testUnparsableFirstRecord_compression/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/3] cassandra git commit: Fix failure in LogTransactionTest

2016-09-28 Thread stefania
Fix failure in LogTransactionTest

Patch by Stefania Alborghetti; reviewed by Edward Capriolo for CASSANDRA-12632


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5cebd1fb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5cebd1fb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5cebd1fb

Branch: refs/heads/trunk
Commit: 5cebd1fb0d60e23ae5a30231e253806d58d0998e
Parents: 9dd805d
Author: Stefania Alborghetti 
Authored: Tue Sep 13 14:51:56 2016 +0800
Committer: Stefania Alborghetti 
Committed: Thu Sep 29 09:27:18 2016 +0800

--
 CHANGES.txt   |  1 +
 .../cassandra/db/lifecycle/LogTransactionTest.java    | 14 ++++++++++++--
 2 files changed, 13 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5cebd1fb/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6edc491..f2f8dac 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.10
+ * Fix failure in LogTransactionTest (CASSANDRA-12632)
  * Fix potentially incomplete non-frozen UDT values when querying with the
full primary key specified (CASSANDRA-12605)
  * Skip writing MV mutations to commitlog on mutation.applyUnsafe() 
(CASSANDRA-11670)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5cebd1fb/test/unit/org/apache/cassandra/db/lifecycle/LogTransactionTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/lifecycle/LogTransactionTest.java b/test/unit/org/apache/cassandra/db/lifecycle/LogTransactionTest.java
index 0f03baf..56df337 100644
--- a/test/unit/org/apache/cassandra/db/lifecycle/LogTransactionTest.java
+++ b/test/unit/org/apache/cassandra/db/lifecycle/LogTransactionTest.java
@@ -1003,11 +1003,15 @@ public class LogTransactionTest extends AbstractTransactionalTest
         assertNotNull(log);
 
         log.trackNew(sstableNew);
-        log.obsoleted(sstableOld);
+        LogTransaction.SSTableTidier tidier = log.obsoleted(sstableOld);
 
         // Modify the transaction log or disk state for sstableOld
         modifier.accept(log, sstableOld);
 
+        // Sync the folder to make sure that later on removeUnfinishedLeftovers picks up
+        // any changes to the txn files done by the modifier
+        assertNull(log.txnFile().syncFolder(null));
+
         assertNull(log.complete(null));
 
         sstableOld.selfRef().release();
@@ -1040,6 +1044,9 @@ public class LogTransactionTest extends AbstractTransactionalTest
                                                              oldFiles,
                                                              log.logFilePaths())));
         }
+
+        // make sure to run the tidier to avoid any leaks in the logs
+        tidier.run();
     }
 
     @Test
@@ -1068,7 +1075,7 @@ public class LogTransactionTest extends AbstractTransactionalTest
         assertNotNull(log);
 
         log.trackNew(sstableNew);
-        /*TransactionLog.SSTableTidier tidier =*/ log.obsoleted(sstableOld);
+        LogTransaction.SSTableTidier tidier = log.obsoleted(sstableOld);
 
         //modify the old sstable files
         modifier.accept(sstableOld);
@@ -1093,6 +1100,9 @@ public class LogTransactionTest extends AbstractTransactionalTest
         assertFiles(dataFolder.getPath(), Sets.newHashSet(Iterables.concat(sstableNew.getAllFilePaths(),
                                                                            sstableOld.getAllFilePaths(),
                                                                            log.logFilePaths())));
+
+        // make sure to run the tidier to avoid any leaks in the logs
+        tidier.run();
     }
 
     @Test



[1/3] cassandra git commit: Fix failure in LogTransactionTest

2016-09-28 Thread stefania
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 9dd805d0d -> 5cebd1fb0
  refs/heads/trunk 3a79a027c -> 25d4c7baa


Fix failure in LogTransactionTest

Patch by Stefania Alborghetti; reviewed by Edward Capriolo for CASSANDRA-12632


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5cebd1fb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5cebd1fb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5cebd1fb

Branch: refs/heads/cassandra-3.0
Commit: 5cebd1fb0d60e23ae5a30231e253806d58d0998e
Parents: 9dd805d
Author: Stefania Alborghetti 
Authored: Tue Sep 13 14:51:56 2016 +0800
Committer: Stefania Alborghetti 
Committed: Thu Sep 29 09:27:18 2016 +0800

--
 CHANGES.txt   |  1 +
 .../cassandra/db/lifecycle/LogTransactionTest.java    | 14 ++++++++++++--
 2 files changed, 13 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5cebd1fb/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6edc491..f2f8dac 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.10
+ * Fix failure in LogTransactionTest (CASSANDRA-12632)
  * Fix potentially incomplete non-frozen UDT values when querying with the
full primary key specified (CASSANDRA-12605)
  * Skip writing MV mutations to commitlog on mutation.applyUnsafe() 
(CASSANDRA-11670)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5cebd1fb/test/unit/org/apache/cassandra/db/lifecycle/LogTransactionTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/lifecycle/LogTransactionTest.java b/test/unit/org/apache/cassandra/db/lifecycle/LogTransactionTest.java
index 0f03baf..56df337 100644
--- a/test/unit/org/apache/cassandra/db/lifecycle/LogTransactionTest.java
+++ b/test/unit/org/apache/cassandra/db/lifecycle/LogTransactionTest.java
@@ -1003,11 +1003,15 @@ public class LogTransactionTest extends AbstractTransactionalTest
         assertNotNull(log);
 
         log.trackNew(sstableNew);
-        log.obsoleted(sstableOld);
+        LogTransaction.SSTableTidier tidier = log.obsoleted(sstableOld);
 
         // Modify the transaction log or disk state for sstableOld
         modifier.accept(log, sstableOld);
 
+        // Sync the folder to make sure that later on removeUnfinishedLeftovers picks up
+        // any changes to the txn files done by the modifier
+        assertNull(log.txnFile().syncFolder(null));
+
         assertNull(log.complete(null));
 
         sstableOld.selfRef().release();
@@ -1040,6 +1044,9 @@ public class LogTransactionTest extends AbstractTransactionalTest
                                                              oldFiles,
                                                              log.logFilePaths())));
         }
+
+        // make sure to run the tidier to avoid any leaks in the logs
+        tidier.run();
     }
 
     @Test
@@ -1068,7 +1075,7 @@ public class LogTransactionTest extends AbstractTransactionalTest
         assertNotNull(log);
 
         log.trackNew(sstableNew);
-        /*TransactionLog.SSTableTidier tidier =*/ log.obsoleted(sstableOld);
+        LogTransaction.SSTableTidier tidier = log.obsoleted(sstableOld);
 
         //modify the old sstable files
        modifier.accept(sstableOld);
@@ -1093,6 +1100,9 @@ public class LogTransactionTest extends AbstractTransactionalTest
         assertFiles(dataFolder.getPath(), Sets.newHashSet(Iterables.concat(sstableNew.getAllFilePaths(),
                                                                            sstableOld.getAllFilePaths(),
                                                                            log.logFilePaths())));
+
+        // make sure to run the tidier to avoid any leaks in the logs
+        tidier.run();
     }
 
     @Test



[3/3] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2016-09-28 Thread stefania
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/25d4c7ba
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/25d4c7ba
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/25d4c7ba

Branch: refs/heads/trunk
Commit: 25d4c7baa5d00d2cc0b6304a544c6099ed24e70f
Parents: 3a79a02 5cebd1f
Author: Stefania Alborghetti 
Authored: Thu Sep 29 09:29:05 2016 +0800
Committer: Stefania Alborghetti 
Committed: Thu Sep 29 09:29:05 2016 +0800

--
 CHANGES.txt   |  1 +
 .../cassandra/db/lifecycle/LogTransactionTest.java    | 14 ++++++++++++--
 2 files changed, 13 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/25d4c7ba/CHANGES.txt
--
diff --cc CHANGES.txt
index 2c99e9d,f2f8dac..a9e46f7
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,77 -1,7 +1,78 @@@
 -3.0.10
 +3.10
 + * Upgrade metrics-reporter dependencies (CASSANDRA-12089)
 + * Tune compaction thread count via nodetool (CASSANDRA-12248)
 + * Add +=/-= shortcut syntax for update queries (CASSANDRA-12232)
 + * Include repair session IDs in repair start message (CASSANDRA-12532)
 + * Add a blocking task to Index, run before joining the ring (CASSANDRA-12039)
 + * Fix NPE when using CQLSSTableWriter (CASSANDRA-12667)
 + * Support optional backpressure strategies at the coordinator 
(CASSANDRA-9318)
 + * Make randompartitioner work with new vnode allocation (CASSANDRA-12647)
 + * Fix cassandra-stress graphing (CASSANDRA-12237)
 + * Allow filtering on partition key columns for queries without secondary 
indexes (CASSANDRA-11031)
 + * Fix Cassandra Stress reporting thread model and precision (CASSANDRA-12585)
 + * Add JMH benchmarks.jar (CASSANDRA-12586)
 + * Add row offset support to SASI (CASSANDRA-11990)
 + * Cleanup uses of AlterTableStatementColumn (CASSANDRA-12567)
 + * Add keep-alive to streaming (CASSANDRA-11841)
 + * Tracing payload is passed through newSession(..) (CASSANDRA-11706)
 + * avoid deleting non existing sstable files and improve related log messages 
(CASSANDRA-12261)
 + * json/yaml output format for nodetool compactionhistory (CASSANDRA-12486)
 + * Retry all internode messages once after a connection is
 +   closed and reopened (CASSANDRA-12192)
 + * Add support to rebuild from targeted replica (CASSANDRA-9875)
 + * Add sequence distribution type to cassandra stress (CASSANDRA-12490)
 + * "SELECT * FROM foo LIMIT ;" does not error out (CASSANDRA-12154)
 + * Define executeLocally() at the ReadQuery Level (CASSANDRA-12474)
 + * Extend read/write failure messages with a map of replica addresses
 +   to error codes in the v5 native protocol (CASSANDRA-12311)
 + * Fix rebuild of SASI indexes with existing index files (CASSANDRA-12374)
 + * Let DatabaseDescriptor not implicitly startup services (CASSANDRA-9054, 
12550)
 + * Fix clustering indexes in presence of static columns in SASI 
(CASSANDRA-12378)
 + * Fix queries on columns with reversed type on SASI indexes (CASSANDRA-12223)
 + * Added slow query log (CASSANDRA-12403)
 + * Count full coordinated request against timeout (CASSANDRA-12256)
 + * Allow TTL with null value on insert and update (CASSANDRA-12216)
 + * Make decommission operation resumable (CASSANDRA-12008)
 + * Add support to one-way targeted repair (CASSANDRA-9876)
 + * Remove clientutil jar (CASSANDRA-11635)
 + * Fix compaction throughput throttle (CASSANDRA-12366)
 + * Delay releasing Memtable memory on flush until PostFlush has finished 
running (CASSANDRA-12358)
 + * Cassandra stress should dump all setting on startup (CASSANDRA-11914)
 + * Make it possible to compact a given token range (CASSANDRA-10643)
 + * Allow updating DynamicEndpointSnitch properties via JMX (CASSANDRA-12179)
 + * Collect metrics on queries by consistency level (CASSANDRA-7384)
 + * Add support for GROUP BY to SELECT statement (CASSANDRA-10707)
 + * Deprecate memtable_cleanup_threshold and update default for 
memtable_flush_writers (CASSANDRA-12228)
 + * Upgrade to OHC 0.4.4 (CASSANDRA-12133)
 + * Add version command to cassandra-stress (CASSANDRA-12258)
 + * Create compaction-stress tool (CASSANDRA-11844)
 + * Garbage-collecting compaction operation and schema option (CASSANDRA-7019)
 + * Add beta protocol flag for v5 native protocol (CASSANDRA-12142)
 + * Support filtering on non-PRIMARY KEY columns in the CREATE
 +   MATERIALIZED VIEW statement's WHERE clause (CASSANDRA-10368)
 + * Unify STDOUT and SYSTEMLOG logback format (CASSANDRA-12004)
 + * COPY FROM should raise error for non-existing input files (CASSANDRA-12174)
 + * Faster write path (CASSANDRA-12269)
 + * Option to leave omitted 

[jira] [Updated] (CASSANDRA-12705) Add column definition kind to system schema dropped columns

2016-09-28 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-12705:
-
Assignee: Stefania
  Status: Patch Available  (was: Open)

> Add column definition kind to system schema dropped columns
> ---
>
> Key: CASSANDRA-12705
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12705
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 4.0
>
>
> Both regular and static columns can currently be dropped by users, but this 
> information is currently not stored in {{SchemaKeyspace.DroppedColumns}}. As 
> a consequence, {{CFMetadata.getDroppedColumnDefinition}} returns a regular 
> column and this has caused problems such as CASSANDRA-12582.
> We should add the column kind to {{SchemaKeyspace.DroppedColumns}} so that 
> {{CFMetadata.getDroppedColumnDefinition}} can create the correct column 
> definition. However, altering schema tables would cause inter-node 
> communication failures during a rolling upgrade, see CASSANDRA-12236. 
> Therefore we should wait for a full schema migration when upgrading to the 
> next major version.

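A hypothetical sketch of the shape of the proposed change (invented names, not the actual 
SchemaKeyspace or CFMetadata code): persist the column kind next to the rest of the dropped 
column's metadata so the definition can be rebuilt as REGULAR or STATIC instead of always 
REGULAR.

{code}
// Hypothetical model, not Cassandra's actual classes: remember the kind of a
// dropped column so it can be restored with the correct definition.
public class DroppedColumnRecord
{
    public enum Kind { REGULAR, STATIC }

    public final String name;
    public final String cqlType;          // CQL type as text, e.g. "int"
    public final long droppedTimeMicros;
    public final Kind kind;               // the new piece of information

    public DroppedColumnRecord(String name, String cqlType, long droppedTimeMicros, Kind kind)
    {
        this.name = name;
        this.cqlType = cqlType;
        this.droppedTimeMicros = droppedTimeMicros;
        this.kind = kind;
    }

    /** Rebuild a definition of the recorded kind instead of defaulting to a regular column. */
    public String toDefinition()
    {
        return name + " " + cqlType + (kind == Kind.STATIC ? " static" : "");
    }
}
{code}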


--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12705) Add column definition kind to system schema dropped columns

2016-09-28 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531398#comment-15531398
 ] 

Stefania commented on CASSANDRA-12705:
--

Thanks for the explanation. Then [this 
branch|https://github.com/stef1927/cassandra/commits/12705] is ready for review.

I will rebase and run CI once trunk is ready for 4.0.

> Add column definition kind to system schema dropped columns
> ---
>
> Key: CASSANDRA-12705
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12705
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Stefania
> Fix For: 4.0
>
>
> Both regular and static columns can currently be dropped by users, but this 
> information is currently not stored in {{SchemaKeyspace.DroppedColumns}}. As 
> a consequence, {{CFMetadata.getDroppedColumnDefinition}} returns a regular 
> column and this has caused problems such as CASSANDRA-12582.
> We should add the column kind to {{SchemaKeyspace.DroppedColumns}} so that 
> {{CFMetadata.getDroppedColumnDefinition}} can create the correct column 
> definition. However, altering schema tables would cause inter-node 
> communication failures during a rolling upgrade, see CASSANDRA-12236. 
> Therefore we should wait for a full schema migration when upgrading to the 
> next major version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11138) cassandra-stress tool - clustering key values not distributed

2016-09-28 Thread Alwyn Davis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531351#comment-15531351
 ] 

Alwyn Davis commented on CASSANDRA-11138:
-

I couldn't actually replicate your results with the patch, unless I set 
{{col2}} to have only 1 population value (e.g. {{population: uniform(1..1)}}).

Assuming that setting isn't in your columnspec, are you able to replicate this 
consistently?

> cassandra-stress tool - clustering key values not distributed
> -
>
> Key: CASSANDRA-11138
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11138
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Cassandra 2.2.4, Centos 6.5, Java 8
>Reporter: Ralf Steppacher
>  Labels: stress
> Attachments: 11138-trunk.patch
>
>
> I am trying to get the stress tool to generate random values for three 
> clustering keys. I am trying to simulate collecting events per user id (text, 
> partition key). Events have a session type (text), event type (text), and 
> creation time (timestamp) (clustering keys, in that order). For testing 
> purposes I ended up with the following column spec:
> {noformat}
> columnspec:
> - name: created_at
>   cluster: uniform(10..10)
> - name: event_type
>   size: uniform(5..10)
>   population: uniform(1..30)
>   cluster: uniform(1..30)
> - name: session_type
>   size: fixed(5)
>   population: uniform(1..4)
>   cluster: uniform(1..4)
> - name: user_id
>   size: fixed(15)
>   population: uniform(1..100)
> - name: message
>   size: uniform(10..100)
>   population: uniform(1..100B)
> {noformat}
> My expectation was that this would lead to anywhere between 10 and 1200 rows 
> being created per partition key. But it seems that exactly 10 rows are created, 
> with the {{created_at}} timestamp being the only clustering column that is 
> assigned varying values (per partition key). The {{session_type}} and 
> {{event_type}} columns are assigned fixed values. This is even the case if I 
> set the cluster distributions to uniform(30..30) and uniform(4..4) 
> respectively. With that setting I expected 1200 rows per partition key to be 
> created, as announced when running the stress tool, but it is still 10.
> {noformat}
> [rsteppac@centos bin]$ ./cassandra-stress user 
> profile=../batch_too_large.yaml ops\(insert=1\) -log level=verbose 
> file=~/centos_eventy_patient_session_event_timestamp_insert_only.log -node 
> 10.211.55.8
> …
> Created schema. Sleeping 1s for propagation.
> Generating batches with [1..1] partitions and [1..1] rows (of [1200..1200] 
> total rows in the partitions)
> Improvement over 4 threadCount: 19%
> ...
> {noformat}
> Sample of generated data:
> {noformat}
> cqlsh> select user_id, event_type, session_type, created_at from 
> stresscql.batch_too_large LIMIT 30 ;
> user_id | event_type   | session_type | created_at
> -+--+--+--
>   %\x7f\x03/.d29 08:14:11+
>   %\x7f\x03/.d29 04:04:56+
>   %\x7f\x03/.d29 00:39:23+
>   %\x7f\x03/.d29 19:56:30+
>   %\x7f\x03/.d29 20:46:26+
>   %\x7f\x03/.d29 03:27:17+
>   %\x7f\x03/.d29 23:30:34+
>   %\x7f\x03/.d29 02:41:28+
>   %\x7f\x03/.d29 07:23:48+
>   %\x7f\x03/.d29 23:23:04+
>  N!\x0eUA7^r7d\x06J 17:48:51+
>  N!\x0eUA7^r7d\x06J 06:21:13+
>  N!\x0eUA7^r7d\x06J 03:34:41+
>  N!\x0eUA7^r7d\x06J 05:26:21+
>  N!\x0eUA7^r7d\x06J 01:31:24+
>  N!\x0eUA7^r7d\x06J 14:22:43+
>  N!\x0eUA7^r7d\x06J 14:54:29+
>  N!\x0eUA7^r7d\x06J 13:31:54+
>  N!\x0eUA7^r7d\x06J 06:38:40+
>  N!\x0eUA7^r7d\x06J

[jira] [Commented] (CASSANDRA-12248) Allow tuning compaction thread count at runtime

2016-09-28 Thread Nate McCall (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531302#comment-15531302
 ] 

Nate McCall commented on CASSANDRA-12248:
-

Committed in 3a79a027c7db2b8007a8ae4e19002c3edbf63d8e - includes a small unit 
test that increases and then decreases the value, which initially failed in the 
way [~dikanggu] described before the LT and GT checks were added to change the 
order of the pool-size updates. Sorry for the run-around.

> Allow tuning compaction thread count at runtime
> ---
>
> Key: CASSANDRA-12248
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12248
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tom van der Woerdt
>Assignee: Dikang Gu
>Priority: Minor
> Fix For: 3.10
>
>
> While bootstrapping new nodes it can take a significant amount of time to 
> catch up on compaction or 2i builds. In these cases it would be convenient to 
> have a nodetool command that allows changing the number of concurrent 
> compaction jobs to the number of cores on the machine.
> Alternatively, an even better variant of this would be a setting 
> "bootstrap_max_concurrent_compactors" which overrides the normal setting 
> during bootstrap only. That saves me from having to write a script that does it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12729) Cassandra-Stress: Use single seed in UUID generation

2016-09-28 Thread Chris Splinter (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Splinter updated CASSANDRA-12729:
---
Fix Version/s: 3.x
   Status: Patch Available  (was: Open)

Patch attached

> Cassandra-Stress: Use single seed in UUID generation
> 
>
> Key: CASSANDRA-12729
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12729
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Chris Splinter
>Priority: Minor
> Fix For: 3.x
>
> Attachments: CASSANDRA-12729-trunk.patch
>
>
> While testing the [new sequence 
> distribution|https://issues.apache.org/jira/browse/CASSANDRA-12490] for the 
> user module of cassandra-stress I noticed that half of the expected rows (848 
> / 1696) were produced when using a single uuid primary key.
> {code}
> table: player_info_by_uuid
> table_definition: |
>   CREATE TABLE player_info_by_uuid (
> player_uuid uuid,
> player_full_name text,
> team_name text,
> weight double,
> height double,
> position text,
> PRIMARY KEY (player_uuid)
>   )
> columnspec:
>   - name: player_uuid
> size: fixed(32) # no. of chars of UUID
> population: seq(1..1696)  # 53 active players per team, 32 teams = 1696 
> players
> insert:
>   partitions: fixed(1)  # 1 partition per batch
>   batchtype: UNLOGGED   # use unlogged batches
>   select: fixed(1)/1 # no chance of skipping a row when generating inserts
> {code}
> The following debug output showed that we were over-incrementing the seed
> {code}
> SeedManager.next.index: 341824
> SeriesGenerator.Seed.next: 0
> SeriesGenerator.Seed.start: 1
> SeriesGenerator.Seed.totalCount: 20
> SeriesGenerator.Seed.next % totalCount: 0
> SeriesGenerator.Seed.start + (next % totalCount): 1
> PartitionOperation.ready.seed: org.apache.cassandra.stress.generate.Seed@1
> DistributionSequence.nextWithWrap.next: 0
> DistributionSequence.nextWithWrap.start: 1
> DistributionSequence.nextWithWrap.totalCount: 20
> DistributionSequence.nextWithWrap.next % totalCount: 0
> DistributionSequence.nextWithWrap.start + (next % totalCount): 1
> DistributionSequence.nextWithWrap.next: 1
> DistributionSequence.nextWithWrap.start: 1
> DistributionSequence.nextWithWrap.totalCount: 20
> DistributionSequence.nextWithWrap.next % totalCount: 1
> DistributionSequence.nextWithWrap.start + (next % totalCount): 2
> Generated uuid: --0001--0002
> {code}
> This patch fixes this issue by calling {{identityDistribution.next()}} once 
> [instead of 
> twice|https://github.com/apache/cassandra/blob/trunk/tools/stress/src/org/apache/cassandra/stress/generate/values/UUIDs.java/#L37]
>  when generating UUIDs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Really fix CM.setConcurrentCompactors, include test coverage for such

2016-09-28 Thread zznate
Repository: cassandra
Updated Branches:
  refs/heads/trunk 5e59b238f -> 3a79a027c


Really fix CM.setConcurrentCompactors, include test coverage for such

Fixes 979af884 and b80ef9b25 for CASSANDRA-12248


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3a79a027
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3a79a027
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3a79a027

Branch: refs/heads/trunk
Commit: 3a79a027c7db2b8007a8ae4e19002c3edbf63d8e
Parents: 5e59b23
Author: Nate McCall 
Authored: Thu Sep 29 12:52:24 2016 +1300
Committer: Nate McCall 
Committed: Thu Sep 29 12:52:24 2016 +1300

--
 .../cassandra/db/compaction/CompactionManager.java    | 14 ++++++++++++--
 .../cassandra/db/compaction/CompactionsTest.java      | 11 +++++++++++
 2 files changed, 23 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3a79a027/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index 148a4fb..2f3b32f 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -1865,8 +1865,18 @@ public class CompactionManager implements CompactionManagerMBean
 
     public void setConcurrentCompactors(int value)
     {
-        executor.setMaximumPoolSize(value);
-        executor.setCorePoolSize(value);
+        if (value > executor.getCorePoolSize())
+        {
+            // we are increasing the value
+            executor.setMaximumPoolSize(value);
+            executor.setCorePoolSize(value);
+        }
+        else if (value < executor.getCorePoolSize())
+        {
+            // we are reducing the value
+            executor.setCorePoolSize(value);
+            executor.setMaximumPoolSize(value);
+        }
     }
 
     public int getCoreCompactorThreads()

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3a79a027/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java b/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java
index 198b01b..cc81263 100644
--- a/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java
+++ b/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java
@@ -641,4 +641,15 @@ public class CompactionsTest
                                                            200, 209,
                                                            300, 301)));
     }
+
+    @Test
+    public void testConcurrencySettings()
+    {
+        CompactionManager.instance.setConcurrentCompactors(2);
+        assertEquals(2, CompactionManager.instance.getCoreCompactorThreads());
+        CompactionManager.instance.setConcurrentCompactors(3);
+        assertEquals(3, CompactionManager.instance.getCoreCompactorThreads());
+        CompactionManager.instance.setConcurrentCompactors(1);
+        assertEquals(1, CompactionManager.instance.getCoreCompactorThreads());
+    }
 }


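For context on why the two branches above update the pool sizes in opposite orders, a 
standalone sketch against a plain ThreadPoolExecutor (not the Cassandra executor): 
setMaximumPoolSize rejects a maximum below the current core size, so a shrink has to lower 
the core size first while a grow has to raise the maximum first.

{code}
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Standalone demonstration, not Cassandra code: the order of core/maximum updates
// matters because a maximum pool size below the core pool size is rejected.
public class PoolResizeOrder
{
    public static void main(String[] args)
    {
        ThreadPoolExecutor executor =
            new ThreadPoolExecutor(4, 4, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

        try
        {
            executor.setMaximumPoolSize(2); // shrinking the maximum first fails...
        }
        catch (IllegalArgumentException e)
        {
            System.out.println("maximum below core rejected: " + e);
        }

        // ...so when reducing, lower the core size first, then the maximum.
        executor.setCorePoolSize(2);
        executor.setMaximumPoolSize(2);

        // When increasing, do the opposite: raise the maximum first, then the core.
        executor.setMaximumPoolSize(8);
        executor.setCorePoolSize(8);

        System.out.println("core=" + executor.getCorePoolSize()
                           + " max=" + executor.getMaximumPoolSize());
        executor.shutdown();
    }
}
{code}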

[jira] [Assigned] (CASSANDRA-12107) Fix range scans for table with live static rows

2016-09-28 Thread Sharvanath Pathak (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sharvanath Pathak reassigned CASSANDRA-12107:
-

Assignee: Sharvanath Pathak

> Fix range scans for table with live static rows
> ---
>
> Key: CASSANDRA-12107
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12107
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Sharvanath Pathak
>Assignee: Sharvanath Pathak
> Fix For: 3.0.9, 3.8
>
> Attachments: 12107-3.0.txt, repro
>
>
> We were seeing some weird behaviour with limit based scan queries. In 
> particular, we see the following:
> {noformat}
> $ cqlsh -k sd -e "consistency local_quorum; SELECT uuid, token(uuid) FROM 
> files WHERE token(uuid) >= token('6b470c3e43ee06d1') limit 2"
> Consistency level set to LOCAL_QUORUM.
>  uuid | system.token(uuid)
> --+--
>  6b470c3e43ee06d1 | -9218823070349964862
>  484b091ca97803cd | -8954822859271125729
> (2 rows)
> $ cqlsh -k sd -e "consistency local_quorum; SELECT uuid, token(uuid) FROM 
> files WHERE token(uuid) > token('6b470c3e43ee06d1') limit 1"
> Consistency level set to LOCAL_QUORUM.
>  uuid | system.token(uuid)
> --+--
>  c348aaec2f1e4b85 | -9218781105444826588
> {noformat}
> In the table, uuid is the partition key, and there is a clustering key as well.
> So the uuid "c348aaec2f1e4b85" should have been the second row returned by the 
> first (limit 2) query.
> After some investigation, it seems to me like the issue is in the way 
> DataLimits handles static rows. Here is a patch for trunk 
> (https://github.com/sharvanath/cassandra/commit/9a460d40e55bd7e3604d987ed4df5c8c2e03ffdc)
>  which seems to fix it for me. Please take a look; this seems like a pretty 
> critical issue to me.
> I have forked the dtests for it as well. However, since trunk has some 
> failures already, I'm not fully sure how to infer the results.
> http://cassci.datastax.com/view/Dev/view/sharvanath/job/sharvanath-fixScan-dtest/
> http://cassci.datastax.com/view/Dev/view/sharvanath/job/sharvanath-fixScan-testall/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12729) Cassandra-Stress: Use single seed in UUID generation

2016-09-28 Thread Chris Splinter (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Splinter updated CASSANDRA-12729:
---
Attachment: CASSANDRA-12729-trunk.patch

> Cassandra-Stress: Use single seed in UUID generation
> 
>
> Key: CASSANDRA-12729
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12729
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Chris Splinter
>Priority: Minor
> Attachments: CASSANDRA-12729-trunk.patch
>
>
> While testing the [new sequence 
> distribution|https://issues.apache.org/jira/browse/CASSANDRA-12490] for the 
> user module of cassandra-stress I noticed that half of the expected rows (848 
> / 1696) were produced when using a single uuid primary key.
> {code}
> table: player_info_by_uuid
> table_definition: |
>   CREATE TABLE player_info_by_uuid (
> player_uuid uuid,
> player_full_name text,
> team_name text,
> weight double,
> height double,
> position text,
> PRIMARY KEY (player_uuid)
>   )
> columnspec:
>   - name: player_uuid
> size: fixed(32) # no. of chars of UUID
> population: seq(1..1696)  # 53 active players per team, 32 teams = 1696 
> players
> insert:
>   partitions: fixed(1)  # 1 partition per batch
>   batchtype: UNLOGGED   # use unlogged batches
>   select: fixed(1)/1 # no chance of skipping a row when generating inserts
> {code}
> The following debug output showed that we were over-incrementing the seed
> {code}
> SeedManager.next.index: 341824
> SeriesGenerator.Seed.next: 0
> SeriesGenerator.Seed.start: 1
> SeriesGenerator.Seed.totalCount: 20
> SeriesGenerator.Seed.next % totalCount: 0
> SeriesGenerator.Seed.start + (next % totalCount): 1
> PartitionOperation.ready.seed: org.apache.cassandra.stress.generate.Seed@1
> DistributionSequence.nextWithWrap.next: 0
> DistributionSequence.nextWithWrap.start: 1
> DistributionSequence.nextWithWrap.totalCount: 20
> DistributionSequence.nextWithWrap.next % totalCount: 0
> DistributionSequence.nextWithWrap.start + (next % totalCount): 1
> DistributionSequence.nextWithWrap.next: 1
> DistributionSequence.nextWithWrap.start: 1
> DistributionSequence.nextWithWrap.totalCount: 20
> DistributionSequence.nextWithWrap.next % totalCount: 1
> DistributionSequence.nextWithWrap.start + (next % totalCount): 2
> Generated uuid: --0001--0002
> {code}
> This patch fixes this issue by calling {{identityDistribution.next()}} once 
> [instead of 
> twice|https://github.com/apache/cassandra/blob/trunk/tools/stress/src/org/apache/cassandra/stress/generate/values/UUIDs.java/#L37]
>  when generating UUIDs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12729) Cassandra-Stress: Use single seed in UUID generation

2016-09-28 Thread Chris Splinter (JIRA)
Chris Splinter created CASSANDRA-12729:
--

 Summary: Cassandra-Stress: Use single seed in UUID generation
 Key: CASSANDRA-12729
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12729
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Chris Splinter
Priority: Minor


While testing the [new sequence 
distribution|https://issues.apache.org/jira/browse/CASSANDRA-12490] for the 
user module of cassandra-stress I noticed that half of the expected rows (848 / 
1696) were produced when using a single uuid primary key.
{code}
table: player_info_by_uuid
table_definition: |
  CREATE TABLE player_info_by_uuid (
player_uuid uuid,
player_full_name text,
team_name text,
weight double,
height double,
position text,
PRIMARY KEY (player_uuid)
  )

columnspec:
  - name: player_uuid
size: fixed(32) # no. of chars of UUID
population: seq(1..1696)  # 53 active players per team, 32 teams = 1696 
players

insert:
  partitions: fixed(1)  # 1 partition per batch
  batchtype: UNLOGGED   # use unlogged batches
  select: fixed(1)/1 # no chance of skipping a row when generating inserts
{code}

The following debug output showed that we were over-incrementing the seed
{code}
SeedManager.next.index: 341824
SeriesGenerator.Seed.next: 0
SeriesGenerator.Seed.start: 1
SeriesGenerator.Seed.totalCount: 20
SeriesGenerator.Seed.next % totalCount: 0
SeriesGenerator.Seed.start + (next % totalCount): 1
PartitionOperation.ready.seed: org.apache.cassandra.stress.generate.Seed@1
DistributionSequence.nextWithWrap.next: 0
DistributionSequence.nextWithWrap.start: 1
DistributionSequence.nextWithWrap.totalCount: 20
DistributionSequence.nextWithWrap.next % totalCount: 0
DistributionSequence.nextWithWrap.start + (next % totalCount): 1
DistributionSequence.nextWithWrap.next: 1
DistributionSequence.nextWithWrap.start: 1
DistributionSequence.nextWithWrap.totalCount: 20
DistributionSequence.nextWithWrap.next % totalCount: 1
DistributionSequence.nextWithWrap.start + (next % totalCount): 2
Generated uuid: --0001--0002
{code}

This patch fixes this issue by calling {{identityDistribution.next()}} once 
[instead of 
twice|https://github.com/apache/cassandra/blob/trunk/tools/stress/src/org/apache/cassandra/stress/generate/values/UUIDs.java/#L37]
 when generating UUIDs

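A standalone sketch of the arithmetic behind the 848 / 1696 observation (illustrative only, 
not the stress tool's actual generator classes): if each generated UUID consumes two values 
from a wrapping seq(1..1696) sequence, only every other identity is ever produced, so exactly 
half of the expected rows appear.

{code}
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.UUID;

// Standalone sketch, not the cassandra-stress code: consuming two values from a
// wrapping 1..N sequence per UUID yields only N/2 distinct identities.
public class SeqUuidSketch
{
    private static final long TOTAL = 1696;
    private static long next = 0;

    // stand-in for a wrapping sequence distribution: 1, 2, ..., TOTAL, 1, 2, ...
    private static long nextWithWrap()
    {
        return 1 + (next++ % TOTAL);
    }

    public static void main(String[] args)
    {
        Set<UUID> twoCalls = new LinkedHashSet<>();
        for (int i = 0; i < TOTAL; i++)
            twoCalls.add(new UUID(nextWithWrap(), nextWithWrap())); // buggy: consumes two values

        next = 0;
        Set<UUID> oneCall = new LinkedHashSet<>();
        for (int i = 0; i < TOTAL; i++)
        {
            long v = nextWithWrap();                                // fixed: consume once, reuse
            oneCall.add(new UUID(v, v));
        }

        System.out.println("distinct UUIDs, two next() calls per UUID: " + twoCalls.size()); // 848
        System.out.println("distinct UUIDs, one next() call per UUID:  " + oneCall.size());  // 1696
    }
}
{code}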


--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12728) Handling partially written hint files

2016-09-28 Thread Sharvanath Pathak (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531256#comment-15531256
 ] 

Sharvanath Pathak commented on CASSANDRA-12728:
---

[~iamaleksey] Can you take a look?

> Handling partially written hint files
> -
>
> Key: CASSANDRA-12728
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12728
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sharvanath Pathak
>
> {noformat}
> ERROR [HintsDispatcher:1] 2016-09-28 17:44:43,397 
> HintsDispatchExecutor.java:225 - Failed to dispatch hints file 
> d5d7257c-9f81-49b2-8633-6f9bda6e3dea-1474892654160-1.hints: file is corrupted 
> ({})
> org.apache.cassandra.io.FSReadError: java.io.EOFException
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:282)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:252)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsDispatcher.sendHints(HintsDispatcher.java:156)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsDispatcher.sendHintsAndAwait(HintsDispatcher.java:137)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:119) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:91) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.deliver(HintsDispatchExecutor.java:259)
>  [apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:242)
>  [apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:220)
>  [apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:199)
>  [apache-cassandra-3.0.6.jar:3.0.6]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_77]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [na:1.8.0_77]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_77]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_77]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77]
> Caused by: java.io.EOFException: null
> at 
> org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:68)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:60)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.ChecksummedDataInput.readFully(ChecksummedDataInput.java:126)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:402) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.readBuffer(HintsReader.java:310)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNextInternal(HintsReader.java:301)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:278)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> ... 15 common frames omitted
> {noformat}
> We've found out that the hint file was truncated because there was a hard 
> reboot around the time of the last write to the file. I think we basically need 
> to handle partially written hint files. Also, the CRC file does not exist in 
> this case (probably because the node crashed while writing the hints file). 
> Maybe ignoring and cleaning up such partially written hint files would be a way 
> to fix this?

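A generic sketch of the clean-up idea suggested in the description (illustrative only, not the 
Cassandra hints reader or its API): when a length-prefixed record file ends in the middle of a 
record, treat the tail as a partial write to be reported and skipped, and let the caller archive 
or delete the file, rather than surfacing a fatal read error.

{code}
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.FileInputStream;
import java.io.IOException;

// Generic sketch, not the Cassandra hints code: tolerate a truncated tail in a
// file of length-prefixed records instead of failing the whole dispatch.
public class TruncatedFileReader
{
    public static int readCompleteRecords(String path) throws IOException
    {
        int complete = 0;
        try (DataInputStream in = new DataInputStream(new FileInputStream(path)))
        {
            while (true)
            {
                int length;
                try
                {
                    length = in.readInt();           // record header
                }
                catch (EOFException e)
                {
                    break;                           // end of file (or a truncated header)
                }

                if (length < 0)
                {
                    System.err.println(path + ": corrupt record length, ignoring tail");
                    break;
                }

                byte[] payload = new byte[length];
                try
                {
                    in.readFully(payload);           // record body
                }
                catch (EOFException e)
                {
                    // partially written tail (e.g. a hard reboot mid-write): log it, stop
                    // here, and let the caller clean up or archive the file
                    System.err.println(path + ": truncated record at tail, ignoring");
                    break;
                }
                complete++;
            }
        }
        return complete;
    }

    public static void main(String[] args) throws IOException
    {
        System.out.println("complete records: " + readCompleteRecords(args[0]));
    }
}
{code}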


--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12728) Handling partially written hint files

2016-09-28 Thread Sharvanath Pathak (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sharvanath Pathak updated CASSANDRA-12728:
--
Description: 
{noformat}
ERROR [HintsDispatcher:1] 2016-09-28 17:44:43,397 
HintsDispatchExecutor.java:225 - Failed to dispatch hints file 
d5d7257c-9f81-49b2-8633-6f9bda6e3dea-1474892654160-1.hints: file is corrupted 
({})
org.apache.cassandra.io.FSReadError: java.io.EOFException
at 
org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:282)
 ~[apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:252)
 ~[apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
~[apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.hints.HintsDispatcher.sendHints(HintsDispatcher.java:156) 
~[apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.hints.HintsDispatcher.sendHintsAndAwait(HintsDispatcher.java:137)
 ~[apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:119) 
~[apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:91) 
~[apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.deliver(HintsDispatchExecutor.java:259)
 [apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:242)
 [apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:220)
 [apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:199)
 [apache-cassandra-3.0.6.jar:3.0.6]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[na:1.8.0_77]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
[na:1.8.0_77]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_77]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_77]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77]
Caused by: java.io.EOFException: null
at 
org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:68)
 ~[apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:60)
 ~[apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.hints.ChecksummedDataInput.readFully(ChecksummedDataInput.java:126)
 ~[apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:402) 
~[apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.hints.HintsReader$BuffersIterator.readBuffer(HintsReader.java:310)
 ~[apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNextInternal(HintsReader.java:301)
 ~[apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:278)
 ~[apache-cassandra-3.0.6.jar:3.0.6]
... 15 common frames omitted
{noformat}

We've found out that the hint file was truncated because there was a hard 
reboot around the time of the last write to the file. I think we basically need 
to handle partially written hint files. Also, the CRC file does not exist in 
this case (probably because the node crashed while writing the hints file). 
Maybe ignoring and cleaning up such partially written hint files would be a way 
to fix this?

  was:
{noformat}
ERROR [HintsDispatcher:1] 2016-09-28 17:44:43,397 
HintsDispatchExecutor.java:225 - Failed to dispatch hints file 
d5d7257c-9f81-49b2-8633-6f9bda6e3dea-1474892654160-1.hints: file is corrupted 
({})
org.apache.cassandra.io.FSReadError: java.io.EOFException
at 
org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:282)
 ~[apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:252)
 ~[apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
~[apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.hints.HintsDispatcher.sendHints(HintsDispatcher.java:156) 
~[apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.hints.HintsDispatcher.sendHintsAndAwait(HintsDispatcher.java:137)
 ~[apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:119) 

[jira] [Updated] (CASSANDRA-12728) Handling partially written hint files

2016-09-28 Thread Sharvanath Pathak (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sharvanath Pathak updated CASSANDRA-12728:
--
Description: 
{noformat}
ERROR [HintsDispatcher:1] 2016-09-28 17:44:43,397 
HintsDispatchExecutor.java:225 - Failed to dispatch hints file 
d5d7257c-9f81-49b2-8633-6f9bda6e3dea-1474892654160-1.hints: file is corrupted 
({})
org.apache.cassandra.io.FSReadError: java.io.EOFException
at 
org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:282)
 ~[apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:252)
 ~[apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
~[apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.hints.HintsDispatcher.sendHints(HintsDispatcher.java:156) 
~[apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.hints.HintsDispatcher.sendHintsAndAwait(HintsDispatcher.java:137)
 ~[apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:119) 
~[apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:91) 
~[apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.deliver(HintsDispatchExecutor.java:259)
 [apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:242)
 [apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:220)
 [apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:199)
 [apache-cassandra-3.0.6.jar:3.0.6]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[na:1.8.0_77]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
[na:1.8.0_77]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_77]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_77]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77]
Caused by: java.io.EOFException: null
at 
org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:68)
 ~[apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:60)
 ~[apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.hints.ChecksummedDataInput.readFully(ChecksummedDataInput.java:126)
 ~[apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:402) 
~[apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.hints.HintsReader$BuffersIterator.readBuffer(HintsReader.java:310)
 ~[apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNextInternal(HintsReader.java:301)
 ~[apache-cassandra-3.0.6.jar:3.0.6]
at 
org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:278)
 ~[apache-cassandra-3.0.6.jar:3.0.6]
... 15 common frames omitted
{noformat}

> Handling partially written hint files
> -
>
> Key: CASSANDRA-12728
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12728
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sharvanath Pathak
>
> {noformat}
> ERROR [HintsDispatcher:1] 2016-09-28 17:44:43,397 
> HintsDispatchExecutor.java:225 - Failed to dispatch hints file 
> d5d7257c-9f81-49b2-8633-6f9bda6e3dea-1474892654160-1.hints: file is corrupted 
> ({})
> org.apache.cassandra.io.FSReadError: java.io.EOFException
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:282)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:252)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsDispatcher.sendHints(HintsDispatcher.java:156)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsDispatcher.sendHintsAndAwait(HintsDispatcher.java:137)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:119) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> 

[jira] [Created] (CASSANDRA-12728) Handling partially written hint files

2016-09-28 Thread Sharvanath Pathak (JIRA)
Sharvanath Pathak created CASSANDRA-12728:
-

 Summary: Handling partially written hint files
 Key: CASSANDRA-12728
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12728
 Project: Cassandra
  Issue Type: Bug
Reporter: Sharvanath Pathak






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-12248) Allow tuning compaction thread count at runtime

2016-09-28 Thread Nate McCall (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531208#comment-15531208
 ] 

Nate McCall edited comment on CASSANDRA-12248 at 9/28/16 11:25 PM:
---

Well yeah, that would have been the sane way to do it :)

Thanks [~dikanggu] - I'll merge that in shortly.


was (Author: zznate):
Well yeah, that would have been the sane way to do it. 

Thanks [~dikanggu] - i'll merge that in shortly.

> Allow tuning compaction thread count at runtime
> ---
>
> Key: CASSANDRA-12248
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12248
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tom van der Woerdt
>Assignee: Dikang Gu
>Priority: Minor
> Fix For: 3.10
>
>
> While bootstrapping new nodes it can take a significant amount of time to 
> catch up on compaction or 2i builds. In these cases it would be convenient to 
> have a nodetool command that allows changing the number of concurrent 
> compaction jobs to the number of cores on the machine.
> Alternatively, an even better variant of this would be to have a setting 
> "bootstrap_max_concurrent_compactors" which overrides the normal setting 
> during bootstrap only. Saves me from having to write a script that does it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12248) Allow tuning compaction thread count at runtime

2016-09-28 Thread Nate McCall (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531208#comment-15531208
 ] 

Nate McCall commented on CASSANDRA-12248:
-

Well yeah, that would have been the sane way to do it. 

Thanks [~dikanggu] - I'll merge that in shortly.

> Allow tuning compaction thread count at runtime
> ---
>
> Key: CASSANDRA-12248
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12248
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tom van der Woerdt
>Assignee: Dikang Gu
>Priority: Minor
> Fix For: 3.10
>
>
> While bootstrapping new nodes it can take a significant amount of time to 
> catch up on compaction or 2i builds. In these cases it would be convenient to 
> have a nodetool command that allows changing the number of concurrent 
> compaction jobs to the number of cores on the machine.
> Alternatively, an even better variant of this would be to have a setting 
> "bootstrap_max_concurrent_compactors" which overrides the normal setting 
> during bootstrap only. Saves me from having to write a script that does it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12248) Allow tuning compaction thread count at runtime

2016-09-28 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531178#comment-15531178
 ] 

T Jake Luciani commented on CASSANDRA-12248:


Yes this




> Allow tuning compaction thread count at runtime
> ---
>
> Key: CASSANDRA-12248
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12248
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tom van der Woerdt
>Assignee: Dikang Gu
>Priority: Minor
> Fix For: 3.10
>
>
> While bootstrapping new nodes it can take a significant amount of time to 
> catch up on compaction or 2i builds. In these cases it would be convenient to 
> have a nodetool command that allows changing the number of concurrent 
> compaction jobs to the number of cores on the machine.
> Alternatively, an even better variant of this would be to have a setting 
> "bootstrap_max_concurrent_compactors" which overrides the normal setting 
> during bootstrap only. Saves me from having to write a script that does it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12248) Allow tuning compaction thread count at runtime

2016-09-28 Thread Dikang Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531153#comment-15531153
 ] 

Dikang Gu commented on CASSANDRA-12248:
---

[~zznate], I think [~tjake] means something like this: 
https://github.com/DikangGu/cassandra/commit/eea231de9200a55f14f87dc97ec8acbdaa3f8663.
When we are reducing the number of compactors, we need to set the core pool 
size first; otherwise it will throw an exception like this:
{code}
Exception in thread "main" java.lang.IllegalArgumentException
at 
java.util.concurrent.ThreadPoolExecutor.setMaximumPoolSize(ThreadPoolExecutor.java:1667)
{code}
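
For illustration, a minimal standalone sketch of that ordering constraint using 
a plain JDK {{ThreadPoolExecutor}}; the class and method names below are made 
up for the example and are not the actual Cassandra compaction-executor code:
{code}
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ResizePoolSketch
{
    // Shrinking must lower the core size before the maximum, because
    // setMaximumPoolSize(n) throws IllegalArgumentException when n < corePoolSize.
    // Growing must do the opposite so the core size never exceeds the maximum.
    static void resize(ThreadPoolExecutor executor, int newSize)
    {
        if (newSize < executor.getCorePoolSize())
        {
            executor.setCorePoolSize(newSize);
            executor.setMaximumPoolSize(newSize);
        }
        else
        {
            executor.setMaximumPoolSize(newSize);
            executor.setCorePoolSize(newSize);
        }
    }

    public static void main(String[] args)
    {
        ThreadPoolExecutor executor =
            new ThreadPoolExecutor(4, 4, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        resize(executor, 2); // calling setMaximumPoolSize(2) first would throw here
        executor.shutdown();
    }
}
{code}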

> Allow tuning compaction thread count at runtime
> ---
>
> Key: CASSANDRA-12248
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12248
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tom van der Woerdt
>Assignee: Dikang Gu
>Priority: Minor
> Fix For: 3.10
>
>
> While bootstrapping new nodes it can take a significant amount of time to 
> catch up on compaction or 2i builds. In these cases it would be convenient to 
> have a nodetool command that allows changing the number of concurrent 
> compaction jobs to the number of cores on the machine.
> Alternatively, an even better variant of this would be to have a setting 
> "bootstrap_max_concurrent_compactors" which overrides the normal setting 
> during bootstrap only. Saves me from having to write a script that does it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11678) cassandra crush when enable hints_compression

2016-09-28 Thread Sharvanath Pathak (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531121#comment-15531121
 ] 

Sharvanath Pathak commented on CASSANDRA-11678:
---

We have reproduced the same case, and found out that the hint file was 
truncated because there was a hard reboot around the time of the last write to 
the file. I think we basically need to handle partially written hint files. 
Also, the CRC file does not exist in this case (probably because the node 
crashed while writing the hints file). Maybe ignoring the hints file if the CRC 
file is absent could be a way to fix this?
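
To make the suggestion concrete, here is a minimal sketch of a pre-dispatch 
check that skips and quarantines a hints file whose CRC companion file is 
missing. The {{.crc32}} naming, the quarantine policy and the helper itself are 
assumptions for illustration only, not Cassandra's actual hints code:
{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class HintFileSanityCheck
{
    // Returns true only if the hints file has its CRC companion file; otherwise the
    // file is moved aside so the dispatcher never trips over a half-written file.
    static boolean isDispatchable(Path hintsFile) throws IOException
    {
        Path crcFile = Paths.get(hintsFile.toString().replaceFirst("\\.hints$", ".crc32"));
        if (Files.exists(crcFile))
            return true;

        // Likely a partial write from a hard reboot: quarantine instead of failing dispatch.
        Path quarantined = hintsFile.resolveSibling(hintsFile.getFileName() + ".corrupt");
        Files.move(hintsFile, quarantined, StandardCopyOption.REPLACE_EXISTING);
        return false;
    }
}
{code}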

> cassandra crush when enable hints_compression
> -
>
> Key: CASSANDRA-11678
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11678
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core, Local Write-Read Paths
> Environment: Centos 7
>Reporter: Weijian Lin
>Assignee: Blake Eggleston
>Priority: Critical
>
> When I enable hints_compression and set the compression class to
> LZ4Compressor, Cassandra (v3.0.5, v3.5.0) will crash. Is that a bug, or is
> some configuration wrong?
> *Exception in V 3.5.0 *
> {code}
> ERROR [HintsDispatcher:2] 2016-04-26 15:02:56,970
> HintsDispatchExecutor.java:225 - Failed to dispatch hints file
> abc4dda2-b551-427e-bb0b-e383d4a392e1-1461654138963-1.hints: file is
> corrupted ({})
> org.apache.cassandra.io.FSReadError: java.io.EOFException
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:284)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:254)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.hints.HintsDispatcher.sendHints(HintsDispatcher.java:156)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.hints.HintsDispatcher.sendHintsAndAwait(HintsDispatcher.java:137)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:119) 
> ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:91) 
> ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.deliver(HintsDispatchExecutor.java:259)
>  [apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:242)
>  [apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:220)
>  [apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:199)
>  [apache-cassandra-3.5.0.jar:3.5.0]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_65]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_65]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_65]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_65]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_65]
> Caused by: java.io.EOFException: null
> at 
> org.apache.cassandra.io.util.RebufferingInputStream.readByte(RebufferingInputStream.java:146)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.io.util.RebufferingInputStream.readPrimitiveSlowly(RebufferingInputStream.java:108)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.io.util.RebufferingInputStream.readInt(RebufferingInputStream.java:188)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNextInternal(HintsReader.java:297)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:280)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> ... 15 common frames omitted
> {code}
> *Exception in V 3.0.5 *
> {code}
> ERROR [HintsDispatcher:2] 2016-04-26 15:54:46,294
> HintsDispatchExecutor.java:225 - Failed to dispatch hints file
> 8603be13-6878-4de3-8bc3-a7a7146b0376-1461657251205-1.hints: file is
> corrupted ({})
> org.apache.cassandra.io.FSReadError: java.io.EOFException
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:282)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:252)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> 

[jira] [Comment Edited] (CASSANDRA-11678) cassandra crush when enable hints_compression

2016-09-28 Thread Sharvanath Pathak (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531121#comment-15531121
 ] 

Sharvanath Pathak edited comment on CASSANDRA-11678 at 9/28/16 10:55 PM:
-

We have reproduced the same case, and found out that the hint file was 
truncated because there was a hard reboot around the time of the last write to 
the file. I think we basically need to handle partially written hint files. 
Also, the CRC file does not exist in this case (probably because the node 
crashed while writing the hints file). Maybe ignoring the hints file if the CRC 
file is absent could be a way to fix this?


was (Author: sharvanath):
We have reproduced the same case, and found out that the hint file truncated 
because there was a hard reboot around the time of last write to the file. I 
think we basically need to handle partially written hint files. Also, the CRC 
file does not exist in this case (probably because it crashed while writing the 
hints file). May be ignoring the hints file if the CRC file is absent can be a 
way to fix this?

> cassandra crush when enable hints_compression
> -
>
> Key: CASSANDRA-11678
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11678
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core, Local Write-Read Paths
> Environment: Centos 7
>Reporter: Weijian Lin
>Assignee: Blake Eggleston
>Priority: Critical
>
> When I enable hints_compression and set the compression class to
> LZ4Compressor, Cassandra (v3.0.5, v3.5.0) will crash. Is that a bug, or is
> some configuration wrong?
> *Exception in V 3.5.0 *
> {code}
> ERROR [HintsDispatcher:2] 2016-04-26 15:02:56,970
> HintsDispatchExecutor.java:225 - Failed to dispatch hints file
> abc4dda2-b551-427e-bb0b-e383d4a392e1-1461654138963-1.hints: file is
> corrupted ({})
> org.apache.cassandra.io.FSReadError: java.io.EOFException
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:284)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:254)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.hints.HintsDispatcher.sendHints(HintsDispatcher.java:156)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.hints.HintsDispatcher.sendHintsAndAwait(HintsDispatcher.java:137)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:119) 
> ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:91) 
> ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.deliver(HintsDispatchExecutor.java:259)
>  [apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:242)
>  [apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:220)
>  [apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:199)
>  [apache-cassandra-3.5.0.jar:3.5.0]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_65]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_65]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_65]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_65]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_65]
> Caused by: java.io.EOFException: null
> at 
> org.apache.cassandra.io.util.RebufferingInputStream.readByte(RebufferingInputStream.java:146)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.io.util.RebufferingInputStream.readPrimitiveSlowly(RebufferingInputStream.java:108)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.io.util.RebufferingInputStream.readInt(RebufferingInputStream.java:188)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNextInternal(HintsReader.java:297)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:280)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> ... 15 common frames omitted
> {code}
> *Exception in V 3.0.5 *
> {code}
> ERROR [HintsDispatcher:2] 2016-04-26 15:54:46,294
> HintsDispatchExecutor.java:225 - Failed to dispatch hints file
> 

[jira] [Reopened] (CASSANDRA-11678) cassandra crush when enable hints_compression

2016-09-28 Thread Sharvanath Pathak (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sharvanath Pathak reopened CASSANDRA-11678:
---

> cassandra crush when enable hints_compression
> -
>
> Key: CASSANDRA-11678
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11678
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core, Local Write-Read Paths
> Environment: Centos 7
>Reporter: Weijian Lin
>Assignee: Blake Eggleston
>Priority: Critical
>
> When I enable hints_compression and set the compression class to
> LZ4Compressor, Cassandra (v3.0.5, v3.5.0) will crash. Is that a bug, or is
> some configuration wrong?
> *Exception in V 3.5.0 *
> {code}
> ERROR [HintsDispatcher:2] 2016-04-26 15:02:56,970
> HintsDispatchExecutor.java:225 - Failed to dispatch hints file
> abc4dda2-b551-427e-bb0b-e383d4a392e1-1461654138963-1.hints: file is
> corrupted ({})
> org.apache.cassandra.io.FSReadError: java.io.EOFException
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:284)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:254)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.hints.HintsDispatcher.sendHints(HintsDispatcher.java:156)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.hints.HintsDispatcher.sendHintsAndAwait(HintsDispatcher.java:137)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:119) 
> ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:91) 
> ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.deliver(HintsDispatchExecutor.java:259)
>  [apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:242)
>  [apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:220)
>  [apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:199)
>  [apache-cassandra-3.5.0.jar:3.5.0]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_65]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_65]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_65]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_65]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_65]
> Caused by: java.io.EOFException: null
> at 
> org.apache.cassandra.io.util.RebufferingInputStream.readByte(RebufferingInputStream.java:146)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.io.util.RebufferingInputStream.readPrimitiveSlowly(RebufferingInputStream.java:108)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.io.util.RebufferingInputStream.readInt(RebufferingInputStream.java:188)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNextInternal(HintsReader.java:297)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:280)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> ... 15 common frames omitted
> {code}
> *Exception in V 3.0.5 *
> {code}
> ERROR [HintsDispatcher:2] 2016-04-26 15:54:46,294
> HintsDispatchExecutor.java:225 - Failed to dispatch hints file
> 8603be13-6878-4de3-8bc3-a7a7146b0376-1461657251205-1.hints: file is
> corrupted ({})
> org.apache.cassandra.io.FSReadError: java.io.EOFException
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:282)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:252)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.hints.HintsDispatcher.sendHints(HintsDispatcher.java:156)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.hints.HintsDispatcher.sendHintsAndAwait(HintsDispatcher.java:137)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:119) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> 

[jira] [Updated] (CASSANDRA-12727) "nodetool disablebinary/thrift"—the documentation does not clearly say whether this works as a drain or an immediate cut off

2016-09-28 Thread Alexis Wilke (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexis Wilke updated CASSANDRA-12727:
-
Description: 
The documentation, as found here:

https://docs.datastax.com/en/cassandra/3.x/cassandra/tools/toolsDisableBinary.html

Says:

bq. Description 
bq. Disables the binary protocol, also known as the native transport.

It is not clear to me whether this means existing connections will be drained 
before they get closed, or whether those connections all get closed 
immediately, whatever their state.

  was:
The documentation, as found here:

https://docs.datastax.com/en/cassandra/3.x/cassandra/tools/toolsDisableBinary.html

Says:

.bq Description 
.bq Disables the binary protocol, also known as the native transport.

It is not clear to me whether this means existing connections will be drained 
before they get closed, or whether those connections all get closed 
immediately, whatever their state.


> "nodetool disablebinary/thrift"—the documentation does not clearly say 
> whether this works as a drain or an immediate cut off
> 
>
> Key: CASSANDRA-12727
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12727
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation and Website
>Reporter: Alexis Wilke
>Priority: Minor
>
> The documentation, as found here:
> https://docs.datastax.com/en/cassandra/3.x/cassandra/tools/toolsDisableBinary.html
> Says:
> bq. Description 
> bq. Disables the binary protocol, also known as the native transport.
> It is not clear to me whether this means existing connections will be drained 
> before they get closed, or whether those connections all get closed 
> immediately, whatever their state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12727) "nodetool disablebinary/thrift"—the documentation does not clearly say whether this works as a drain or an immediate cut off

2016-09-28 Thread Alexis Wilke (JIRA)
Alexis Wilke created CASSANDRA-12727:


 Summary: "nodetool disablebinary/thrift"—the documentation does 
not clearly say whether this works as a drain or an immediate cut off
 Key: CASSANDRA-12727
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12727
 Project: Cassandra
  Issue Type: Improvement
  Components: Documentation and Website
Reporter: Alexis Wilke
Priority: Minor


The documentation, as found here:

https://docs.datastax.com/en/cassandra/3.x/cassandra/tools/toolsDisableBinary.html

Says:

.bq Description 
.bq Disables the binary protocol, also known as the native transport.

It is not clear to me whether this means existing connections will be drained 
before they get closed, or whether those connections all get closed 
immediately, whatever their state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11138) cassandra-stress tool - clustering key values not distributed

2016-09-28 Thread Yap Sok Ann (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530531#comment-15530531
 ] 

Yap Sok Ann edited comment on CASSANDRA-11138 at 9/28/16 7:02 PM:
--

I encountered a similar problem with both the current trunk (with or without 
the patch) and 2.1.15.

With the following sample config:
{code:none}
table: table1
table_definition: |
  CREATE TABLE table1 (
pk1 text,
pk2 int,
col1 timestamp,
col2 text,
col3 blob,
PRIMARY KEY ((pk1, pk2), col1, col2)
  ); 

columnspec:
  - name: col1
cluster: uniform(1..3)

  - name: col2
cluster: uniform(1..4)
{code}

after insert, there will be duplicate values for {{col2}} *and* {{col3}}:
{code:sql}
cqlsh:stress> select * from table1 where pk1 = 'VFk.mZLR' and pk2 = 1772149447;

 pk1      | pk2        | col1                     | col2     | col3
----------+------------+--------------------------+----------+--------------------
 VFk.mZLR | 1772149447 | 1994-05-17 08:23:01+0000 | QyCJtb6` | 0x5728b1b79dd2372a
 VFk.mZLR | 1772149447 | 2010-11-24 11:19:30+0000 | QyCJtb6` | 0x5728b1b79dd2372a

(2 rows)
{code}

If I just remove {{col2}} from clustering key, then there is no duplicate:
{code:sql}
cqlsh:stress> select * from table1 where pk1 = 'VFk.mZLR' and pk2 = 1772149447;

 pk1      | pk2        | col1                     | col2     | col3
----------+------------+--------------------------+----------+----------------
 VFk.mZLR | 1772149447 | 1994-05-17 08:23:01+0000 |  '{9j\(; | 0x1080f88c325e
 VFk.mZLR | 1772149447 | 2010-11-24 11:19:30+0000 | sA0wlY>' | 0x763588f2f5a8

(2 rows)
{code}

Is this how it's supposed to work? Of particular concern is how {{col3}} 
remains the same and thus becomes highly compressible.


was (Author: sayap):
Encounter similar problem with the current trunk, with or without the patch.

With the following sample config:
{code:none}
table: table1
table_definition: |
  CREATE TABLE table1 (
pk1 text,
pk2 int,
col1 timestamp,
col2 text,
col3 blob,
PRIMARY KEY ((pk1, pk2), col1, col2)
  ); 

columnspec:
  - name: col1
cluster: uniform(1..3)

  - name: col2
cluster: uniform(1..4)
{code}

after insert, there will be duplicate values for {{col2}} *and* {{col3}}:
{code:sql}
cqlsh:stress> select * from table1 where pk1 = 'VFk.mZLR' and pk2 = 1772149447;

 pk1      | pk2        | col1                     | col2     | col3
----------+------------+--------------------------+----------+--------------------
 VFk.mZLR | 1772149447 | 1994-05-17 08:23:01+0000 | QyCJtb6` | 0x5728b1b79dd2372a
 VFk.mZLR | 1772149447 | 2010-11-24 11:19:30+0000 | QyCJtb6` | 0x5728b1b79dd2372a

(2 rows)
{code}

If I just remove {{col2}} from clustering key, then there is no duplicate:
{code:sql}
cqlsh:stress> select * from table1 where pk1 = 'VFk.mZLR' and pk2 = 1772149447;

 pk1      | pk2        | col1                     | col2     | col3
----------+------------+--------------------------+----------+----------------
 VFk.mZLR | 1772149447 | 1994-05-17 08:23:01+0000 |  '{9j\(; | 0x1080f88c325e
 VFk.mZLR | 1772149447 | 2010-11-24 11:19:30+0000 | sA0wlY>' | 0x763588f2f5a8

(2 rows)
{code}

> cassandra-stress tool - clustering key values not distributed
> -
>
> Key: CASSANDRA-11138
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11138
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Cassandra 2.2.4, Centos 6.5, Java 8
>Reporter: Ralf Steppacher
>  Labels: stress
> Attachments: 11138-trunk.patch
>
>
> I am trying to get the stress tool to generate random values for three 
> clustering keys. I am trying to simulate collecting events per user id (text, 
> partition key). Events have a session type (text), event type (text), and 
> creation time (timestamp) (clustering keys, in that order). For testing 
> purposes I ended up with the following column spec:
> {noformat}
> columnspec:
> - name: created_at
>   cluster: uniform(10..10)
> - name: event_type
>   size: uniform(5..10)
>   population: uniform(1..30)
>   cluster: uniform(1..30)
> - name: session_type
>   size: fixed(5)
>   population: uniform(1..4)
>   cluster: uniform(1..4)
> - name: user_id
>   size: fixed(15)
>   population: uniform(1..100)
> - name: message
>   size: uniform(10..100)
>   population: uniform(1..100B)
> {noformat}
> My expectation was that this would lead to anywhere between 10 and 1200 rows 
> to be created per partition key. But it seems that exactly 10 rows are being 
> created, with the {{created_at}} timestamp being the only variable that is 
> assigned variable values (per partition key). The {{session_type}} and 
> {{event_type}} variables are assigned fixed values. This is even the case if 
> I set the cluster 

[jira] [Commented] (CASSANDRA-11138) cassandra-stress tool - clustering key values not distributed

2016-09-28 Thread Yap Sok Ann (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530531#comment-15530531
 ] 

Yap Sok Ann commented on CASSANDRA-11138:
-

I encountered a similar problem with the current trunk, with or without the 
patch.

With the following sample config:
{code:none}
table: table1
table_definition: |
  CREATE TABLE table1 (
pk1 text,
pk2 int,
col1 timestamp,
col2 text,
col3 blob,
PRIMARY KEY ((pk1, pk2), col1, col2)
  ); 

columnspec:
  - name: col1
cluster: uniform(1..3)

  - name: col2
cluster: uniform(1..4)
{code}

after insert, there will be duplicate values for {{col2}} *and* {{col3}}:
{code:sql}
cqlsh:stress> select * from table1 where pk1 = 'VFk.mZLR' and pk2 = 1772149447;

 pk1      | pk2        | col1                     | col2     | col3
----------+------------+--------------------------+----------+--------------------
 VFk.mZLR | 1772149447 | 1994-05-17 08:23:01+0000 | QyCJtb6` | 0x5728b1b79dd2372a
 VFk.mZLR | 1772149447 | 2010-11-24 11:19:30+0000 | QyCJtb6` | 0x5728b1b79dd2372a

(2 rows)
{code}

If I just remove {{col2}} from clustering key, then there is no duplicate:
{code:sql}
cqlsh:stress> select * from table1 where pk1 = 'VFk.mZLR' and pk2 = 1772149447;

 pk1      | pk2        | col1                     | col2     | col3
----------+------------+--------------------------+----------+----------------
 VFk.mZLR | 1772149447 | 1994-05-17 08:23:01+0000 |  '{9j\(; | 0x1080f88c325e
 VFk.mZLR | 1772149447 | 2010-11-24 11:19:30+0000 | sA0wlY>' | 0x763588f2f5a8

(2 rows)
{code}

> cassandra-stress tool - clustering key values not distributed
> -
>
> Key: CASSANDRA-11138
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11138
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Cassandra 2.2.4, Centos 6.5, Java 8
>Reporter: Ralf Steppacher
>  Labels: stress
> Attachments: 11138-trunk.patch
>
>
> I am trying to get the stress tool to generate random values for three 
> clustering keys. I am trying to simulate collecting events per user id (text, 
> partition key). Events have a session type (text), event type (text), and 
> creation time (timestamp) (clustering keys, in that order). For testing 
> purposes I ended up with the following column spec:
> {noformat}
> columnspec:
> - name: created_at
>   cluster: uniform(10..10)
> - name: event_type
>   size: uniform(5..10)
>   population: uniform(1..30)
>   cluster: uniform(1..30)
> - name: session_type
>   size: fixed(5)
>   population: uniform(1..4)
>   cluster: uniform(1..4)
> - name: user_id
>   size: fixed(15)
>   population: uniform(1..100)
> - name: message
>   size: uniform(10..100)
>   population: uniform(1..100B)
> {noformat}
> My expectation was that this would lead to anywhere between 10 and 1200 rows 
> to be created per partition key. But it seems that exactly 10 rows are being 
> created, with the {{created_at}} timestamp being the only variable that is 
> assigned variable values (per partition key). The {{session_type}} and 
> {{event_type}} variables are assigned fixed values. This is even the case if 
> I set the cluster distribution to uniform(30..30) and uniform(4..4) 
> respectively. With this setting I expected 1200 rows per partition key to be 
> created, as announced when running the stress tool, but it is still 10.
> {noformat}
> [rsteppac@centos bin]$ ./cassandra-stress user 
> profile=../batch_too_large.yaml ops\(insert=1\) -log level=verbose 
> file=~/centos_eventy_patient_session_event_timestamp_insert_only.log -node 
> 10.211.55.8
> …
> Created schema. Sleeping 1s for propagation.
> Generating batches with [1..1] partitions and [1..1] rows (of [1200..1200] 
> total rows in the partitions)
> Improvement over 4 threadCount: 19%
> ...
> {noformat}
> Sample of generated data:
> {noformat}
> cqlsh> select user_id, event_type, session_type, created_at from 
> stresscql.batch_too_large LIMIT 30 ;
> user_id | event_type   | session_type | created_at
> -+--+--+--
>   %\x7f\x03/.d29 08:14:11+
>   %\x7f\x03/.d29 04:04:56+
>   %\x7f\x03/.d29 00:39:23+
>   %\x7f\x03/.d29 19:56:30+
>   %\x7f\x03/.d29 20:46:26+
>   %\x7f\x03/.d29 03:27:17+
>   %\x7f\x03/.d29 23:30:34+
>  

[jira] [Commented] (CASSANDRA-12478) cassandra stress still uses CFMetaData.compile()

2016-09-28 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530436#comment-15530436
 ] 

Paulo Motta commented on CASSANDRA-12478:
-

The new patch still has the problem:
{noformat}
org.apache.cassandra.exceptions.ConfigurationException: Unable to check disk 
space available to tools/bin/../../data/commitlog. Perhaps the Cassandra user 
does not have the necessary permissions
at 
org.apache.cassandra.config.DatabaseDescriptor.applySimpleConfig(DatabaseDescriptor.java:472)
at 
org.apache.cassandra.config.DatabaseDescriptor.toolInitialization(DatabaseDescriptor.java:177)
at 
org.apache.cassandra.config.DatabaseDescriptor.toolInitialization(DatabaseDescriptor.java:146)
at org.apache.cassandra.stress.Stress.run(Stress.java:75)
at org.apache.cassandra.stress.Stress.main(Stress.java:62)
Caused by: java.nio.file.AccessDeniedException: tools/bin/../../data/commitlog
at 
sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileStore.devFor(UnixFileStore.java:57)
at sun.nio.fs.UnixFileStore.(UnixFileStore.java:64)
at sun.nio.fs.LinuxFileStore.(LinuxFileStore.java:44)
at 
sun.nio.fs.LinuxFileSystemProvider.getFileStore(LinuxFileSystemProvider.java:51)
at 
sun.nio.fs.LinuxFileSystemProvider.getFileStore(LinuxFileSystemProvider.java:39)
at 
sun.nio.fs.UnixFileSystemProvider.getFileStore(UnixFileSystemProvider.java:368)
at java.nio.file.Files.getFileStore(Files.java:1461)
at 
org.apache.cassandra.config.DatabaseDescriptor.guessFileStore(DatabaseDescriptor.java:987)
at 
org.apache.cassandra.config.DatabaseDescriptor.applySimpleConfig(DatabaseDescriptor.java:467)
... 4 more
{noformat}

Perhaps you need to change from {{toolInitialization}} to 
{{clientInitialization}}, as suggested by [~snazy]. But I think that even in 
tool mode we shouldn't be checking commit log space, so I think the correct fix 
is to skip that check when running in client or tool mode. WDYT [~snazy]?
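
As a rough illustration of that idea, the sketch below guards the free-space 
check behind the initialization mode; the {{InitMode}} enum and method names 
are invented for the example and are not the real {{DatabaseDescriptor}} API:
{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class CommitLogSpaceCheck
{
    // Stand-in for however the process records which entry point initialized it.
    enum InitMode { DAEMON, TOOL, CLIENT }

    // Only the daemon owns the data directories; offline tools such as
    // cassandra-stress may not even be allowed to stat them, so the check is
    // skipped entirely outside daemon mode.
    static void checkCommitLogSpace(InitMode mode, Path commitLogDir) throws IOException
    {
        if (mode != InitMode.DAEMON)
            return;

        long usableBytes = Files.getFileStore(commitLogDir).getUsableSpace();
        if (usableBytes < (1L << 30)) // arbitrary 1 GiB floor, just for the example
            throw new IllegalStateException("Not enough disk space for commitlog at " + commitLogDir);
    }
}
{code}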

> cassandra stress still uses CFMetaData.compile()
> 
>
> Key: CASSANDRA-12478
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12478
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Denis Ranger
>  Labels: stress
> Fix For: 3.0.x
>
> Attachments: 
> 0001-Replaced-using-CFMetaData.compile-in-cassandra-stres.patch, 
> cassandra-stress-trunk.patch, cassandra-stress-v2.patch
>
>
> Using CFMetaData.compile() on a client tool causes permission problems. To 
> reproduce:
> * Start cassandra under user _cassandra_
> * Run {{chmod -R go-rwx /var/lib/cassandra}} to deny access to other users.
> * Use a non-root user to run {{cassandra-stress}} 
> This produces an access denied message on {{/var/lib/cassandra/commitlog}}.
> The attached fix uses client-mode functionality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[6/6] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2016-09-28 Thread mshuler
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5e59b238
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5e59b238
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5e59b238

Branch: refs/heads/trunk
Commit: 5e59b238fb7f36a835e6aa22cdc8df635a25abd8
Parents: e7fb063 9dd805d
Author: Michael Shuler 
Authored: Wed Sep 28 13:11:21 2016 -0500
Committer: Michael Shuler 
Committed: Wed Sep 28 13:11:21 2016 -0500

--

--




[4/6] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-09-28 Thread mshuler
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9dd805d0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9dd805d0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9dd805d0

Branch: refs/heads/trunk
Commit: 9dd805d0dcd172db54783450b5964931d26154b0
Parents: f2c5ad7 2383935
Author: Michael Shuler 
Authored: Wed Sep 28 13:10:44 2016 -0500
Committer: Michael Shuler 
Committed: Wed Sep 28 13:10:44 2016 -0500

--

--




[2/6] cassandra git commit: Bump base.version to 2.2.9

2016-09-28 Thread mshuler
Bump base.version to 2.2.9


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/23839352
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/23839352
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/23839352

Branch: refs/heads/cassandra-3.0
Commit: 238393520cbde257265d68a2057dff9374105c5b
Parents: 738a579
Author: Michael Shuler 
Authored: Wed Sep 28 13:10:06 2016 -0500
Committer: Michael Shuler 
Committed: Wed Sep 28 13:10:06 2016 -0500

--
 build.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/23839352/build.xml
--
diff --git a/build.xml b/build.xml
index 42b2919..d9f8198 100644
--- a/build.xml
+++ b/build.xml
@@ -25,7 +25,7 @@
 
 
 
-    <property name="base.version" value="2.2.8"/>
+    <property name="base.version" value="2.2.9"/>
 
 
 http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=tree"/>



[5/6] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-09-28 Thread mshuler
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9dd805d0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9dd805d0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9dd805d0

Branch: refs/heads/cassandra-3.0
Commit: 9dd805d0dcd172db54783450b5964931d26154b0
Parents: f2c5ad7 2383935
Author: Michael Shuler 
Authored: Wed Sep 28 13:10:44 2016 -0500
Committer: Michael Shuler 
Committed: Wed Sep 28 13:10:44 2016 -0500

--

--




[3/6] cassandra git commit: Bump base.version to 2.2.9

2016-09-28 Thread mshuler
Bump base.version to 2.2.9


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/23839352
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/23839352
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/23839352

Branch: refs/heads/trunk
Commit: 238393520cbde257265d68a2057dff9374105c5b
Parents: 738a579
Author: Michael Shuler 
Authored: Wed Sep 28 13:10:06 2016 -0500
Committer: Michael Shuler 
Committed: Wed Sep 28 13:10:06 2016 -0500

--
 build.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/23839352/build.xml
--
diff --git a/build.xml b/build.xml
index 42b2919..d9f8198 100644
--- a/build.xml
+++ b/build.xml
@@ -25,7 +25,7 @@
 
 
 
-    <property name="base.version" value="2.2.8"/>
+    <property name="base.version" value="2.2.9"/>
 
 
 http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=tree"/>



[1/6] cassandra git commit: Bump base.version to 2.2.9

2016-09-28 Thread mshuler
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 738a57992 -> 238393520
  refs/heads/cassandra-3.0 f2c5ad743 -> 9dd805d0d
  refs/heads/trunk e7fb0639d -> 5e59b238f


Bump base.version to 2.2.9


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/23839352
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/23839352
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/23839352

Branch: refs/heads/cassandra-2.2
Commit: 238393520cbde257265d68a2057dff9374105c5b
Parents: 738a579
Author: Michael Shuler 
Authored: Wed Sep 28 13:10:06 2016 -0500
Committer: Michael Shuler 
Committed: Wed Sep 28 13:10:06 2016 -0500

--
 build.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/23839352/build.xml
--
diff --git a/build.xml b/build.xml
index 42b2919..d9f8198 100644
--- a/build.xml
+++ b/build.xml
@@ -25,7 +25,7 @@
 
 
 
-    <property name="base.version" value="2.2.8"/>
+    <property name="base.version" value="2.2.9"/>
 
 
 http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=tree"/>



[jira] [Updated] (CASSANDRA-12720) Permissions on aggregate functions are not removed on drop

2016-09-28 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-12720:

Summary: Permissions on aggregate functions are not removed on drop  (was: 
Permissions on aggregate functions are not remove on drop)

> Permissions on aggregate functions are not removed on drop
> --
>
> Key: CASSANDRA-12720
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12720
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> When a user-defined aggregate is dropped, either directly or when its 
> enclosing keyspace is dropped, permissions granted on it are not cleaned up.
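
For illustration only, a minimal sketch of the cleanup a fix would perform, 
i.e. revoking every grant on a resource at the point where the resource (or 
its enclosing keyspace) is dropped; the registry below is invented for the 
example and is not Cassandra's authorizer code:
{code}
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class GrantRegistry
{
    // role name -> resources the role has been granted permissions on
    private final Map<String, Set<String>> grants = new HashMap<>();

    public void grant(String role, String resource)
    {
        grants.computeIfAbsent(role, r -> new HashSet<>()).add(resource);
    }

    // Hook this into the DROP AGGREGATE / DROP KEYSPACE path so no permissions
    // outlive the resource they were granted on.
    public void revokeAllOn(String resource)
    {
        grants.values().forEach(resources -> resources.remove(resource));
    }

    public boolean isGranted(String role, String resource)
    {
        return grants.getOrDefault(role, Collections.emptySet()).contains(resource);
    }
}
{code}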



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12478) cassandra stress still uses CFMetaData.compile()

2016-09-28 Thread Denis Ranger (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Ranger updated CASSANDRA-12478:
-
Attachment: cassandra-stress-v2.patch

> cassandra stress still uses CFMetaData.compile()
> 
>
> Key: CASSANDRA-12478
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12478
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Denis Ranger
>  Labels: stress
> Fix For: 3.0.x
>
> Attachments: 
> 0001-Replaced-using-CFMetaData.compile-in-cassandra-stres.patch, 
> cassandra-stress-trunk.patch, cassandra-stress-v2.patch
>
>
> Using CFMetaData.compile() on a client tool causes permission problems. To 
> reproduce:
> * Start cassandra under user _cassandra_
> * Run {{chmod -R go-rwx /var/lib/cassandra}} to deny access to other users.
> * Use a non-root user to run {{cassandra-stress}} 
> This produces an access denied message on {{/var/lib/cassandra/commitlog}}.
> The attached fix uses client-mode functionality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


svn commit: r1762691 - in /cassandra/site: publish/download/index.html src/_data/releases.yaml

2016-09-28 Thread mshuler
Author: mshuler
Date: Wed Sep 28 17:17:50 2016
New Revision: 1762691

URL: http://svn.apache.org/viewvc?rev=1762691=rev
Log:
Update download page for 2.2.8 release

Modified:
cassandra/site/publish/download/index.html
cassandra/site/src/_data/releases.yaml

Modified: cassandra/site/publish/download/index.html
URL: 
http://svn.apache.org/viewvc/cassandra/site/publish/download/index.html?rev=1762691=1762690=1762691=diff
==
--- cassandra/site/publish/download/index.html (original)
+++ cassandra/site/publish/download/index.html Wed Sep 28 17:17:50 2016
@@ -105,7 +105,7 @@ released against the most recent bug fix
 
 
   Apache Cassandra 3.0 is supported until May 2017. The 
latest release is http://www.apache.org/dyn/closer.lua/cassandra/3.0.9/apache-cassandra-3.0.9-bin.tar.gz;>3.0.9
 (http://www.apache.org/dist/cassandra/3.0.9/apache-cassandra-3.0.9-bin.tar.gz.asc;>pgp,
 http://www.apache.org/dist/cassandra/3.0.9/apache-cassandra-3.0.9-bin.tar.gz.md5;>md5
 and http://www.apache.org/dist/cassandra/3.0.9/apache-cassandra-3.0.9-bin.tar.gz.sha1;>sha1),
 released on 2016-09-20.
-  Apache Cassandra 2.2 is supported until November 2016. 
The latest release is http://www.apache.org/dyn/closer.lua/cassandra/2.2.7/apache-cassandra-2.2.7-bin.tar.gz;>2.2.7
 (http://www.apache.org/dist/cassandra/2.2.7/apache-cassandra-2.2.7-bin.tar.gz.asc;>pgp,
 http://www.apache.org/dist/cassandra/2.2.7/apache-cassandra-2.2.7-bin.tar.gz.md5;>md5
 and http://www.apache.org/dist/cassandra/2.2.7/apache-cassandra-2.2.7-bin.tar.gz.sha1;>sha1),
 released on 2016-07-05.
+  Apache Cassandra 2.2 is supported until November 2016. 
The latest release is http://www.apache.org/dyn/closer.lua/cassandra/2.2.8/apache-cassandra-2.2.8-bin.tar.gz;>2.2.8
 (http://www.apache.org/dist/cassandra/2.2.8/apache-cassandra-2.2.8-bin.tar.gz.asc;>pgp,
 http://www.apache.org/dist/cassandra/2.2.8/apache-cassandra-2.2.8-bin.tar.gz.md5;>md5
 and http://www.apache.org/dist/cassandra/2.2.8/apache-cassandra-2.2.8-bin.tar.gz.sha1;>sha1),
 released on 2016-09-28.
   Apache Cassandra 2.1 is supported until November 2016 
with critical fixes only. The latest release is
 http://www.apache.org/dyn/closer.lua/cassandra/2.1.15/apache-cassandra-2.1.15-bin.tar.gz;>2.1.15
 (http://www.apache.org/dist/cassandra/2.1.15/apache-cassandra-2.1.15-bin.tar.gz.asc;>pgp,
 http://www.apache.org/dist/cassandra/2.1.15/apache-cassandra-2.1.15-bin.tar.gz.md5;>md5
 and http://www.apache.org/dist/cassandra/2.1.15/apache-cassandra-2.1.15-bin.tar.gz.sha1;>sha1),
 released on 2016-07-05.
 

Modified: cassandra/site/src/_data/releases.yaml
URL: 
http://svn.apache.org/viewvc/cassandra/site/src/_data/releases.yaml?rev=1762691=1762690=1762691=diff
==
--- cassandra/site/src/_data/releases.yaml (original)
+++ cassandra/site/src/_data/releases.yaml Wed Sep 28 17:17:50 2016
@@ -11,8 +11,8 @@ latest:
   date: 2016-09-20
 
 "2.2":
-  name: 2.2.7
-  date: 2016-07-05
+  name: 2.2.8
+  date: 2016-09-28
 
 "2.1":
   name: 2.1.15




[jira] [Commented] (CASSANDRA-12700) During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes Connection get lost, because of Server NullPointerException

2016-09-28 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530203#comment-15530203
 ] 

Philip Thompson commented on CASSANDRA-12700:
-

Jeff, we can maybe get to it in several weeks? There are quite a number of "C* 
tickets to repro" on my backlog.

> During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes 
> Connection get lost, because of Server NullPointerException
> --
>
> Key: CASSANDRA-12700
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12700
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Cassandra cluster with two nodes running C* version 
> 3.7.0 and Python Driver 3.7 using Python 2.7.11. 
> OS: Red Hat Enterprise Linux 6.x x64, 
> RAM :8GB
> DISK :210GB
> Cores: 2
> Java 1.8.0_73 JRE
>Reporter: Rajesh Radhakrishnan
>Assignee: Jeff Jirsa
> Fix For: 3.x
>
>
> In our C* cluster we are using the latest Cassandra 3.7.0 (datastax-ddc.3.70) 
> with the Python driver 3.7. When trying to insert 2 million rows or more into 
> the database, we sometimes get a "NullPointerException". 
> We are using Python 2.7.11 and Java 1.8.0_73 on the Cassandra nodes, and the 
> client uses Python 2.7.12.
> {code:title=cassandra server log}
> ERROR [SharedPool-Worker-6] 2016-09-23 09:42:55,002 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0xc208da86, 
> L:/IP1.IP2.IP3.IP4:9042 - R:/IP5.IP6.IP7.IP8:58418]
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:33)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:24)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:113) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.cql3.UntypedResultSet$Row.getBoolean(UntypedResultSet.java:273)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:85)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:81)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.getRoleFromTable(CassandraRoleManager.java:503)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:485)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.canLogin(CassandraRoleManager.java:298)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at org.apache.cassandra.service.ClientState.login(ClientState.java:227) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:79)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:283)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_73]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_73]
> ERROR [SharedPool-Worker-1] 2016-09-23 09:42:56,238 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0x8e2eae00, 
> L:/IP1.IP2.IP3.IP4:9042 - R:/IP5.IP6.IP7.IP8:58421]
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:33)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 

[jira] [Commented] (CASSANDRA-12725) dtest failure in repair_tests.incremental_repair_test.TestIncRepair.sstable_marking_test_not_intersecting_all_ranges

2016-09-28 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530205#comment-15530205
 ] 

Philip Thompson commented on CASSANDRA-12725:
-

This test hasn't changed, so I'm wondering if sstablemetadata changed in trunk. 
Someone will need to look into this; for now, it doesn't seem like an urgent 
failure.

> dtest failure in 
> repair_tests.incremental_repair_test.TestIncRepair.sstable_marking_test_not_intersecting_all_ranges
> 
>
> Key: CASSANDRA-12725
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12725
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/406/testReport/repair_tests.incremental_repair_test/TestIncRepair/sstable_marking_test_not_intersecting_all_ranges
> {code}
> Error Message
> Subprocess sstablemetadata on keyspace: keyspace1, column_family: None exited 
> with non-zero status; exit status: 1; 
> stdout: 
> usage: Usage: sstablemetadata [--gc_grace_seconds n] 
> Dump contents of given SSTable to standard output in JSON format.
> --gc_grace_secondsThe gc_grace_seconds to use when
>calculating droppable tombstones
> {code}
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File 
> "/home/automaton/cassandra-dtest/repair_tests/incremental_repair_test.py", 
> line 366, in sstable_marking_test_not_intersecting_all_ranges
> for out in (node.run_sstablemetadata(keyspace='keyspace1').stdout for 
> node in cluster.nodelist()):
>   File 
> "/home/automaton/cassandra-dtest/repair_tests/incremental_repair_test.py", 
> line 366, in 
> for out in (node.run_sstablemetadata(keyspace='keyspace1').stdout for 
> node in cluster.nodelist()):
>   File "/usr/local/lib/python2.7/dist-packages/ccmlib/node.py", line 1021, in 
> run_sstablemetadata
> return handle_external_tool_process(p, "sstablemetadata on keyspace: {}, 
> column_family: {}".format(keyspace, column_families))
>   File "/usr/local/lib/python2.7/dist-packages/ccmlib/node.py", line 1983, in 
> handle_external_tool_process
> raise ToolError(cmd_args, rc, out, err)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12693) Add the JMX metrics about the total number of hints we have delivered

2016-09-28 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530199#comment-15530199
 ] 

Aleksey Yeschenko commented on CASSANDRA-12693:
---

Ouch. Forgot to do that. Thanks.

> Add the JMX metrics about the total number of hints we have delivered
> -
>
> Key: CASSANDRA-12693
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12693
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Dikang Gu
>Assignee: Dikang Gu
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 
> 0001-Add-the-metrics-about-total-number-of-hints-we-have-.patch
>
>
> I find there are no metrics about the number of hints we have delivered, I 
> think it would be great to have the metrics, so that we have better 
> estimation about the progress of hints replay.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12724) dtest failure in cql_tests.LWTTester.conditional_updates_on_static_columns_with_null_values_test

2016-09-28 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530179#comment-15530179
 ] 

Philip Thompson commented on CASSANDRA-12724:
-

https://github.com/riptano/cassandra-dtest/pull/1214

> dtest failure in 
> cql_tests.LWTTester.conditional_updates_on_static_columns_with_null_values_test
> 
>
> Key: CASSANDRA-12724
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12724
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Alex Petrov
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.9_dtest/57/testReport/cql_tests/LWTTester/conditional_updates_on_static_columns_with_null_values_test
> {code}
> Error Message
> Expected [[False, None]] from DELETE s1 FROM 
> conditional_deletes_on_static_with_null WHERE a = 2 IF s2 IN (10,20,30), but 
> got [[False]]
> {code}
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/cql_tests.py", line 1289, in 
> conditional_deletes_on_static_columns_with_null_values_test
> assert_one(session, "DELETE s1 FROM {} WHERE a = 2 IF s2 IN 
> (10,20,30)".format(table_name), [False, None])
>   File "/home/automaton/cassandra-dtest/tools/assertions.py", line 130, in 
> assert_one
> assert list_res == [expected], "Expected {} from {}, but got 
> {}".format([expected], query, list_res)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12724) dtest failure in cql_tests.LWTTester.conditional_updates_on_static_columns_with_null_values_test

2016-09-28 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12724:

Assignee: Alex Petrov  (was: Philip Thompson)

> dtest failure in 
> cql_tests.LWTTester.conditional_updates_on_static_columns_with_null_values_test
> 
>
> Key: CASSANDRA-12724
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12724
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Alex Petrov
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.9_dtest/57/testReport/cql_tests/LWTTester/conditional_updates_on_static_columns_with_null_values_test
> {code}
> Error Message
> Expected [[False, None]] from DELETE s1 FROM 
> conditional_deletes_on_static_with_null WHERE a = 2 IF s2 IN (10,20,30), but 
> got [[False]]
> {code}
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/cql_tests.py", line 1289, in 
> conditional_deletes_on_static_columns_with_null_values_test
> assert_one(session, "DELETE s1 FROM {} WHERE a = 2 IF s2 IN 
> (10,20,30)".format(table_name), [False, None])
>   File "/home/automaton/cassandra-dtest/tools/assertions.py", line 130, in 
> assert_one
> assert list_res == [expected], "Expected {} from {}, but got 
> {}".format([expected], query, list_res)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-12693) Add the JMX metrics about the total number of hints we have delivered

2016-09-28 Thread Dikang Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514938#comment-15514938
 ] 

Dikang Gu edited comment on CASSANDRA-12693 at 9/28/16 4:48 PM:


[~iamaleksey] Thanks for the review! Addressed your comments, and here is a new 
commit. 


was (Author: dikanggu):
[~iamaleksey] Thanks for the review! Addressed your comments, and here is a new 
commit. 
https://github.com/DikangGu/cassandra/commit/7a4232d01d8b16302a7f6981a9ca3e039ba0ea89

> Add the JMX metrics about the total number of hints we have delivered
> -
>
> Key: CASSANDRA-12693
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12693
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Dikang Gu
>Assignee: Dikang Gu
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 
> 0001-Add-the-metrics-about-total-number-of-hints-we-have-.patch
>
>
> I find there are no metrics about the number of hints we have delivered, I 
> think it would be great to have the metrics, so that we have better 
> estimation about the progress of hints replay.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12693) Add the JMX metrics about the total number of hints we have delivered

2016-09-28 Thread Dikang Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530194#comment-15530194
 ] 

Dikang Gu commented on CASSANDRA-12693:
---

[~iamaleksey], just changed the inc() to mark(). 
https://github.com/DikangGu/cassandra/commit/5307b8073d37e0988daa7e9d8510905c6d50c63d
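
(For illustration, a minimal sketch of the Counter-to-Meter switch discussed here, written directly against the Dropwizard Metrics API that Cassandra's metrics build on; the class, registry and metric names below are assumptions, not the committed change.)

{code}
// Minimal sketch, not the committed patch: a Meter tracks a count plus
// 1/5/15-minute rates, which is what makes it more useful than a plain
// Counter for watching hint replay progress. Names here are assumptions.
import com.codahale.metrics.Meter;
import com.codahale.metrics.MetricRegistry;

public class HintsDeliveryMetricsSketch
{
    private static final MetricRegistry registry = new MetricRegistry();

    private final Meter hintsDelivered =
        registry.meter(MetricRegistry.name("HintsService", "HintsDelivered"));

    public void onHintDelivered()
    {
        // Previously a Counter would have called inc(); with a Meter we mark().
        hintsDelivered.mark();
    }

    public long totalDelivered()
    {
        // The running total is still available, exactly like a Counter's count.
        return hintsDelivered.getCount();
    }
}
{code}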

> Add the JMX metrics about the total number of hints we have delivered
> -
>
> Key: CASSANDRA-12693
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12693
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Dikang Gu
>Assignee: Dikang Gu
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 
> 0001-Add-the-metrics-about-total-number-of-hints-we-have-.patch
>
>
> I find there are no metrics about the number of hints we have delivered, I 
> think it would be great to have the metrics, so that we have better 
> estimation about the progress of hints replay.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12693) Add the JMX metrics about the total number of hints we have delivered

2016-09-28 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530186#comment-15530186
 ] 

Aleksey Yeschenko commented on CASSANDRA-12693:
---

No, just your +1 (:

Will commit after I run CI.

> Add the JMX metrics about the total number of hints we have delivered
> -
>
> Key: CASSANDRA-12693
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12693
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Dikang Gu
>Assignee: Dikang Gu
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 
> 0001-Add-the-metrics-about-total-number-of-hints-we-have-.patch
>
>
> I find there are no metrics about the number of hints we have delivered, I 
> think it would be great to have the metrics, so that we have better 
> estimation about the progress of hints replay.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-12726) dtest failure in upgrade_tests.paging_test.TestPagingWithModifiersNodes2RF1_Upgrade_current_3_x_To_indev_3_x.test_with_allow_filtering

2016-09-28 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson resolved CASSANDRA-12726.
-
Resolution: Duplicate

> dtest failure in 
> upgrade_tests.paging_test.TestPagingWithModifiersNodes2RF1_Upgrade_current_3_x_To_indev_3_x.test_with_allow_filtering
> --
>
> Key: CASSANDRA-12726
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12726
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_dtest_upgrade/58/testReport/upgrade_tests.paging_test/TestPagingWithModifiersNodes2RF1_Upgrade_current_3_x_To_indev_3_x/test_with_allow_filtering
> This is happening on many trunk upgrade tests. See : 
> http://cassci.datastax.com/job/trunk_dtest_upgrade/58/testReport/
> {code}
> Standard Output
> http://git-wip-us.apache.org/repos/asf/cassandra.git 
> git:d45f323eb972c6fec146e5cfa84fdc47eb8aa5eb
> Unexpected error in node2 log, error: 
> ERROR [MessagingService-Incoming-/127.0.0.1] 2016-09-28 04:30:11,223 
> CassandraDaemon.java:217 - Exception in thread 
> Thread[MessagingService-Incoming-/127.0.0.1,5,main]
> java.lang.RuntimeException: Unknown column cdc during deserialization
>   at 
> org.apache.cassandra.db.Columns$Serializer.deserialize(Columns.java:433) 
> ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.db.SerializationHeader$Serializer.deserializeForMessaging(SerializationHeader.java:407)
>  ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.deserializeHeader(UnfilteredRowIteratorSerializer.java:192)
>  ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize30(PartitionUpdate.java:668)
>  ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize(PartitionUpdate.java:656)
>  ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:341)
>  ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:350)
>  ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.service.MigrationManager$MigrationsSerializer.deserialize(MigrationManager.java:610)
>  ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.service.MigrationManager$MigrationsSerializer.deserialize(MigrationManager.java:593)
>  ~[apache-cassandra-3.7.jar:3.7]
>   at org.apache.cassandra.net.MessageIn.read(MessageIn.java:114) 
> ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:190)
>  ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:178)
>  ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:92)
>  ~[apache-cassandra-3.7.jar:3.7]
> Unexpected error in node2 log, error: 
> ERROR [MessagingService-Incoming-/127.0.0.1] 2016-09-28 04:30:11,270 
> CassandraDaemon.java:217 - Exception in thread 
> Thread[MessagingService-Incoming-/127.0.0.1,5,main]
> java.lang.RuntimeException: Unknown column cdc during deserialization
>   at 
> org.apache.cassandra.db.Columns$Serializer.deserialize(Columns.java:433) 
> ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.db.SerializationHeader$Serializer.deserializeForMessaging(SerializationHeader.java:407)
>  ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.deserializeHeader(UnfilteredRowIteratorSerializer.java:192)
>  ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize30(PartitionUpdate.java:668)
>  ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize(PartitionUpdate.java:656)
>  ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:341)
>  ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:350)
>  ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.service.MigrationManager$MigrationsSerializer.deserialize(MigrationManager.java:610)
>  

[jira] [Updated] (CASSANDRA-12726) dtest failure in upgrade_tests.paging_test.TestPagingWithModifiersNodes2RF1_Upgrade_current_3_x_To_indev_3_x.test_with_allow_filtering

2016-09-28 Thread Sean McCarthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean McCarthy updated CASSANDRA-12726:
--
Description: 
example failure:

http://cassci.datastax.com/job/trunk_dtest_upgrade/58/testReport/upgrade_tests.paging_test/TestPagingWithModifiersNodes2RF1_Upgrade_current_3_x_To_indev_3_x/test_with_allow_filtering

This is happening on many trunk upgrade tests. See : 
http://cassci.datastax.com/job/trunk_dtest_upgrade/58/testReport/

{code}
Standard Output
http://git-wip-us.apache.org/repos/asf/cassandra.git 
git:d45f323eb972c6fec146e5cfa84fdc47eb8aa5eb
Unexpected error in node2 log, error: 
ERROR [MessagingService-Incoming-/127.0.0.1] 2016-09-28 04:30:11,223 
CassandraDaemon.java:217 - Exception in thread 
Thread[MessagingService-Incoming-/127.0.0.1,5,main]
java.lang.RuntimeException: Unknown column cdc during deserialization
at 
org.apache.cassandra.db.Columns$Serializer.deserialize(Columns.java:433) 
~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.db.SerializationHeader$Serializer.deserializeForMessaging(SerializationHeader.java:407)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.deserializeHeader(UnfilteredRowIteratorSerializer.java:192)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize30(PartitionUpdate.java:668)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize(PartitionUpdate.java:656)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:341)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:350)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.service.MigrationManager$MigrationsSerializer.deserialize(MigrationManager.java:610)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.service.MigrationManager$MigrationsSerializer.deserialize(MigrationManager.java:593)
 ~[apache-cassandra-3.7.jar:3.7]
at org.apache.cassandra.net.MessageIn.read(MessageIn.java:114) 
~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:190)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:178)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:92)
 ~[apache-cassandra-3.7.jar:3.7]
Unexpected error in node2 log, error: 
ERROR [MessagingService-Incoming-/127.0.0.1] 2016-09-28 04:30:11,270 
CassandraDaemon.java:217 - Exception in thread 
Thread[MessagingService-Incoming-/127.0.0.1,5,main]
java.lang.RuntimeException: Unknown column cdc during deserialization
at 
org.apache.cassandra.db.Columns$Serializer.deserialize(Columns.java:433) 
~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.db.SerializationHeader$Serializer.deserializeForMessaging(SerializationHeader.java:407)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.deserializeHeader(UnfilteredRowIteratorSerializer.java:192)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize30(PartitionUpdate.java:668)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize(PartitionUpdate.java:656)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:341)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:350)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.service.MigrationManager$MigrationsSerializer.deserialize(MigrationManager.java:610)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.service.MigrationManager$MigrationsSerializer.deserialize(MigrationManager.java:593)
 ~[apache-cassandra-3.7.jar:3.7]
at org.apache.cassandra.net.MessageIn.read(MessageIn.java:114) 
~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:190)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:178)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:92)
 ~[apache-cassandra-3.7.jar:3.7]
{code}

  was:
example failure:


[jira] [Created] (CASSANDRA-12726) dtest failure in upgrade_tests.paging_test.TestPagingWithModifiersNodes2RF1_Upgrade_current_3_x_To_indev_3_x.test_with_allow_filtering

2016-09-28 Thread Sean McCarthy (JIRA)
Sean McCarthy created CASSANDRA-12726:
-

 Summary: dtest failure in 
upgrade_tests.paging_test.TestPagingWithModifiersNodes2RF1_Upgrade_current_3_x_To_indev_3_x.test_with_allow_filtering
 Key: CASSANDRA-12726
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12726
 Project: Cassandra
  Issue Type: Test
Reporter: Sean McCarthy
Assignee: DS Test Eng
 Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
node2_debug.log, node2_gc.log

example failure:

http://cassci.datastax.com/job/trunk_dtest_upgrade/58/testReport/upgrade_tests.paging_test/TestPagingWithModifiersNodes2RF1_Upgrade_current_3_x_To_indev_3_x/test_with_allow_filtering

This is happening on many trunk upgrade tests.

{code}
Standard Output
http://git-wip-us.apache.org/repos/asf/cassandra.git 
git:d45f323eb972c6fec146e5cfa84fdc47eb8aa5eb
Unexpected error in node2 log, error: 
ERROR [MessagingService-Incoming-/127.0.0.1] 2016-09-28 04:30:11,223 
CassandraDaemon.java:217 - Exception in thread 
Thread[MessagingService-Incoming-/127.0.0.1,5,main]
java.lang.RuntimeException: Unknown column cdc during deserialization
at 
org.apache.cassandra.db.Columns$Serializer.deserialize(Columns.java:433) 
~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.db.SerializationHeader$Serializer.deserializeForMessaging(SerializationHeader.java:407)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.deserializeHeader(UnfilteredRowIteratorSerializer.java:192)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize30(PartitionUpdate.java:668)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize(PartitionUpdate.java:656)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:341)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:350)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.service.MigrationManager$MigrationsSerializer.deserialize(MigrationManager.java:610)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.service.MigrationManager$MigrationsSerializer.deserialize(MigrationManager.java:593)
 ~[apache-cassandra-3.7.jar:3.7]
at org.apache.cassandra.net.MessageIn.read(MessageIn.java:114) 
~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:190)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:178)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:92)
 ~[apache-cassandra-3.7.jar:3.7]
Unexpected error in node2 log, error: 
ERROR [MessagingService-Incoming-/127.0.0.1] 2016-09-28 04:30:11,270 
CassandraDaemon.java:217 - Exception in thread 
Thread[MessagingService-Incoming-/127.0.0.1,5,main]
java.lang.RuntimeException: Unknown column cdc during deserialization
at 
org.apache.cassandra.db.Columns$Serializer.deserialize(Columns.java:433) 
~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.db.SerializationHeader$Serializer.deserializeForMessaging(SerializationHeader.java:407)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.deserializeHeader(UnfilteredRowIteratorSerializer.java:192)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize30(PartitionUpdate.java:668)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize(PartitionUpdate.java:656)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:341)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:350)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.service.MigrationManager$MigrationsSerializer.deserialize(MigrationManager.java:610)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.service.MigrationManager$MigrationsSerializer.deserialize(MigrationManager.java:593)
 ~[apache-cassandra-3.7.jar:3.7]
at org.apache.cassandra.net.MessageIn.read(MessageIn.java:114) 
~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:190)
 ~[apache-cassandra-3.7.jar:3.7]
at 

[jira] [Commented] (CASSANDRA-12693) Add the JMX metrics about the total number of hints we have delivered

2016-09-28 Thread Dikang Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530168#comment-15530168
 ] 

Dikang Gu commented on CASSANDRA-12693:
---

[~iamaleksey], sure, it looks great to me! Do you want me to send you a new 
commit?

> Add the JMX metrics about the total number of hints we have delivered
> -
>
> Key: CASSANDRA-12693
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12693
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Dikang Gu
>Assignee: Dikang Gu
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 
> 0001-Add-the-metrics-about-total-number-of-hints-we-have-.patch
>
>
> I find there are no metrics about the number of hints we have delivered, I 
> think it would be great to have the metrics, so that we have better 
> estimation about the progress of hints replay.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12725) dtest failure in repair_tests.incremental_repair_test.TestIncRepair.sstable_marking_test_not_intersecting_all_ranges

2016-09-28 Thread Sean McCarthy (JIRA)
Sean McCarthy created CASSANDRA-12725:
-

 Summary: dtest failure in 
repair_tests.incremental_repair_test.TestIncRepair.sstable_marking_test_not_intersecting_all_ranges
 Key: CASSANDRA-12725
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12725
 Project: Cassandra
  Issue Type: Test
Reporter: Sean McCarthy
Assignee: DS Test Eng


example failure:

http://cassci.datastax.com/job/trunk_offheap_dtest/406/testReport/repair_tests.incremental_repair_test/TestIncRepair/sstable_marking_test_not_intersecting_all_ranges

{code}
Error Message

Subprocess sstablemetadata on keyspace: keyspace1, column_family: None exited 
with non-zero status; exit status: 1; 
stdout: 
usage: Usage: sstablemetadata [--gc_grace_seconds n] 
Dump contents of given SSTable to standard output in JSON format.
--gc_grace_seconds    The gc_grace_seconds to use when
                      calculating droppable tombstones
{code}
{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File 
"/home/automaton/cassandra-dtest/repair_tests/incremental_repair_test.py", line 
366, in sstable_marking_test_not_intersecting_all_ranges
for out in (node.run_sstablemetadata(keyspace='keyspace1').stdout for node 
in cluster.nodelist()):
  File 
"/home/automaton/cassandra-dtest/repair_tests/incremental_repair_test.py", line 
366, in 
for out in (node.run_sstablemetadata(keyspace='keyspace1').stdout for node 
in cluster.nodelist()):
  File "/usr/local/lib/python2.7/dist-packages/ccmlib/node.py", line 1021, in 
run_sstablemetadata
return handle_external_tool_process(p, "sstablemetadata on keyspace: {}, 
column_family: {}".format(keyspace, column_families))
  File "/usr/local/lib/python2.7/dist-packages/ccmlib/node.py", line 1983, in 
handle_external_tool_process
raise ToolError(cmd_args, rc, out, err)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12397) Altering a column's type breaks commitlog replay

2016-09-28 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530155#comment-15530155
 ] 

Carl Yeksigian commented on CASSANDRA-12397:


The longer-term fix is removing support for altering types (CASSANDRA-12443) - 
if that ticket does end up in 3.0, we won't need to handle the CL replay 
separately, as it should no longer be a problem.

> Altering a column's type breaks commitlog replay
> 
>
> Key: CASSANDRA-12397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12397
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Carl Yeksigian
>Assignee: Stefania
>
> When switching from a fixed-length column to a variable-length column, 
> replaying the commitlog on restart will have the same issue as 
> CASSANDRA-11820. Seems like it is related to the schema being flushed and 
> used when restarted, but commitlogs having been written in the old format.
> {noformat}
> org.apache.cassandra.db.commitlog.CommitLogReadHandler$CommitLogReadException:
>  Unexpected error deserializing mutation; saved to 
> /tmp/mutation4816372620457789996dat.  This may be caused by replaying a 
> mutation against a table with the same name but incompatible schema.  
> Exception follows: java.io.IOError: java.io.EOFException: EOF after 259 bytes 
> out of 3336
> at 
> org.apache.cassandra.db.commitlog.CommitLogReader.readMutation(CommitLogReader.java:409)
>  [main/:na]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReader.readSection(CommitLogReader.java:342)
>  [main/:na]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReader.readCommitLogSegment(CommitLogReader.java:201)
>  [main/:na]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReader.readAllFiles(CommitLogReader.java:84)
>  [main/:na]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.replayFiles(CommitLogReplayer.java:139)
>  [main/:na]
> at 
> org.apache.cassandra.db.commitlog.CommitLog.recoverFiles(CommitLog.java:177) 
> [main/:na]
> at 
> org.apache.cassandra.db.commitlog.CommitLog.recoverSegmentsOnDisk(CommitLog.java:158)
>  [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:316) 
> [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:591)
>  [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:720) 
> [main/:na]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-12724) dtest failure in cql_tests.LWTTester.conditional_updates_on_static_columns_with_null_values_test

2016-09-28 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson reassigned CASSANDRA-12724:
---

Assignee: Philip Thompson  (was: DS Test Eng)

> dtest failure in 
> cql_tests.LWTTester.conditional_updates_on_static_columns_with_null_values_test
> 
>
> Key: CASSANDRA-12724
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12724
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Philip Thompson
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.9_dtest/57/testReport/cql_tests/LWTTester/conditional_updates_on_static_columns_with_null_values_test
> {code}
> Error Message
> Expected [[False, None]] from DELETE s1 FROM 
> conditional_deletes_on_static_with_null WHERE a = 2 IF s2 IN (10,20,30), but 
> got [[False]]
> {code}
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/cql_tests.py", line 1289, in 
> conditional_deletes_on_static_columns_with_null_values_test
> assert_one(session, "DELETE s1 FROM {} WHERE a = 2 IF s2 IN 
> (10,20,30)".format(table_name), [False, None])
>   File "/home/automaton/cassandra-dtest/tools/assertions.py", line 130, in 
> assert_one
> assert list_res == [expected], "Expected {} from {}, but got 
> {}".format([expected], query, list_res)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12724) dtest failure in cql_tests.LWTTester.conditional_updates_on_static_columns_with_null_values_test

2016-09-28 Thread Sean McCarthy (JIRA)
Sean McCarthy created CASSANDRA-12724:
-

 Summary: dtest failure in 
cql_tests.LWTTester.conditional_updates_on_static_columns_with_null_values_test
 Key: CASSANDRA-12724
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12724
 Project: Cassandra
  Issue Type: Test
Reporter: Sean McCarthy
Assignee: DS Test Eng
 Attachments: node1.log, node1_debug.log, node1_gc.log

example failure:

http://cassci.datastax.com/job/cassandra-3.9_dtest/57/testReport/cql_tests/LWTTester/conditional_updates_on_static_columns_with_null_values_test

{code}
Error Message

Expected [[False, None]] from DELETE s1 FROM 
conditional_deletes_on_static_with_null WHERE a = 2 IF s2 IN (10,20,30), but 
got [[False]]
{code}
{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/cql_tests.py", line 1289, in 
conditional_deletes_on_static_columns_with_null_values_test
assert_one(session, "DELETE s1 FROM {} WHERE a = 2 IF s2 IN 
(10,20,30)".format(table_name), [False, None])
  File "/home/automaton/cassandra-dtest/tools/assertions.py", line 130, in 
assert_one
assert list_res == [expected], "Expected {} from {}, but got 
{}".format([expected], query, list_res)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-12700) During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes Connection get lost, because of Server NullPointerException

2016-09-28 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530143#comment-15530143
 ] 

Jeff Jirsa edited comment on CASSANDRA-12700 at 9/28/16 4:25 PM:
-

[~beobal] thanks for the feedback - the decision in my mind was "make it work 
safely" or "make it fail nicer" - I went with "make it work safely", but since 
we don't understand how it got this way, I agree with you that we should be 
noisy about it. 

Pushed a new change to be more conservative - instead of silently correcting 
(by inferring false), we'll throw an RTE and log a message at {{WARN}} to let 
the operator know that something's very wrong. This is the new behavior:

{code}
MLSEA-JJIRSA01:~ jjirsa$ ccm node1 cqlsh
Connection error: ('Unable to connect to any servers', {'127.0.0.1': 
AuthenticationFailed('Remote end requires authentication.',)})
MLSEA-JJIRSA01:~ jjirsa$ ccm node1 cqlsh -u cassandra -p cassandra
Connected to test at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.10-SNAPSHOT | CQL spec 3.4.3 | Native protocol v4]
Use HELP for help.
cassandra@cqlsh> select * from system_auth.roles;

 role  | can_login | is_superuser | member_of | salted_hash
---+---+--+---+--
 cassandra |  True | True |  null | 
$2a$10$2.WsdTj8JoDCUHuVVe367Oth8XA3JYn1jTDX03eaEBkxwRTOuQKB.

(1 rows)
cassandra@cqlsh> insert into system_auth.roles(role, salted_hash) 
values('test', '$2a$10$2.WsdTj8JoDCUHuVVe367Oth8XA3JYn1jTDX03eaEBkxwRTOuQKB.');
cassandra@cqlsh> select * from system_auth.roles;

 role  | can_login | is_superuser | member_of | salted_hash
---+---+--+---+--
  test |  null | null |  null | 
$2a$10$2.WsdTj8JoDCUHuVVe367Oth8XA3JYn1jTDX03eaEBkxwRTOuQKB.
 cassandra |  True | True |  null | 
$2a$10$2.WsdTj8JoDCUHuVVe367Oth8XA3JYn1jTDX03eaEBkxwRTOuQKB.

(2 rows)
cassandra@cqlsh> ^D
MLSEA-JJIRSA01:~ jjirsa$ ccm node1 cqlsh -u test -p cassandra
Connection error: ('Unable to connect to any servers', {'127.0.0.1': 
AuthenticationFailed('Failed to authenticate to 127.0.0.1: Error from server: 
code= [Server error] message="java.lang.RuntimeException: Invalid metadata 
has been detected for role test"',)})
MLSEA-JJIRSA01:~ jjirsa$
{code}

I'll update the dtest to match shortly.

[~beobal] - if you guys have resources to try to repro, I won't mind seeing 
that Assignee field change to you or your test engineering folks.



was (Author: jjirsa):
[~beobal] thanks for the feedback - the decision in my mind was "make it work 
safely" or "make it fail nicer" - I went with "make it work safely", but since 
we don't understand how it got this way, I agree with you that we should be 
noisy about it. 

Pushed a new change to be more conservative - instead of silently correcting 
(by inferring false), we'll throw an RTE and log a message at {{WARN}} to let 
the operator know that something's very wrong. This is the new behavior:

{quote}
MLSEA-JJIRSA01:~ jjirsa$ ccm node1 cqlsh
Connection error: ('Unable to connect to any servers', {'127.0.0.1': 
AuthenticationFailed('Remote end requires authentication.',)})
MLSEA-JJIRSA01:~ jjirsa$ ccm node1 cqlsh -u cassandra -p cassandra
Connected to test at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.10-SNAPSHOT | CQL spec 3.4.3 | Native protocol v4]
Use HELP for help.
cassandra@cqlsh> select * from system_auth.roles;

 role  | can_login | is_superuser | member_of | salted_hash
---+---+--+---+--
 cassandra |  True | True |  null | 
$2a$10$2.WsdTj8JoDCUHuVVe367Oth8XA3JYn1jTDX03eaEBkxwRTOuQKB.

(1 rows)
cassandra@cqlsh> insert into system_auth.roles(role, salted_hash) 
values('test', '$2a$10$2.WsdTj8JoDCUHuVVe367Oth8XA3JYn1jTDX03eaEBkxwRTOuQKB.');
cassandra@cqlsh> select * from system_auth.roles;

 role  | can_login | is_superuser | member_of | salted_hash
---+---+--+---+--
  test |  null | null |  null | 
$2a$10$2.WsdTj8JoDCUHuVVe367Oth8XA3JYn1jTDX03eaEBkxwRTOuQKB.
 cassandra |  True | True |  null | 
$2a$10$2.WsdTj8JoDCUHuVVe367Oth8XA3JYn1jTDX03eaEBkxwRTOuQKB.

(2 rows)
cassandra@cqlsh> ^D
MLSEA-JJIRSA01:~ jjirsa$ ccm node1 cqlsh -u test -p cassandra
Connection error: ('Unable to connect to any servers', {'127.0.0.1': 
AuthenticationFailed('Failed to authenticate to 127.0.0.1: Error from server: 
code= [Server error] message="java.lang.RuntimeException: Invalid metadata 
has been detected for role test"',)})
MLSEA-JJIRSA01:~ jjirsa$
{quote}

I'll update the dtest to match shortly.

[~beobal] - if you 

[jira] [Commented] (CASSANDRA-12700) During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes Connection get lost, because of Server NullPointerException

2016-09-28 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530143#comment-15530143
 ] 

Jeff Jirsa commented on CASSANDRA-12700:


[~beobal] thanks for the feedback - the decision in my mind was "make it work 
safely" or "make it fail nicer" - I went with "make it work safely", but since 
we don't understand how it got this way, I agree with you that we should be 
noisy about it. 

Pushed a new change to be more conservative - instead of silently correcting 
(by inferring false), we'll throw an RTE and log a message at {{WARN}} to let 
the operator know that something's very wrong. This is the new behavior:

{quote}
MLSEA-JJIRSA01:~ jjirsa$ ccm node1 cqlsh
Connection error: ('Unable to connect to any servers', {'127.0.0.1': 
AuthenticationFailed('Remote end requires authentication.',)})
MLSEA-JJIRSA01:~ jjirsa$ ccm node1 cqlsh -u cassandra -p cassandra
Connected to test at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.10-SNAPSHOT | CQL spec 3.4.3 | Native protocol v4]
Use HELP for help.
cassandra@cqlsh> select * from system_auth.roles;

 role  | can_login | is_superuser | member_of | salted_hash
---+---+--+---+--
 cassandra |  True | True |  null | 
$2a$10$2.WsdTj8JoDCUHuVVe367Oth8XA3JYn1jTDX03eaEBkxwRTOuQKB.

(1 rows)
cassandra@cqlsh> insert into system_auth.roles(role, salted_hash) 
values('test', '$2a$10$2.WsdTj8JoDCUHuVVe367Oth8XA3JYn1jTDX03eaEBkxwRTOuQKB.');
cassandra@cqlsh> select * from system_auth.roles;

 role  | can_login | is_superuser | member_of | salted_hash
---+---+--+---+--
  test |  null | null |  null | 
$2a$10$2.WsdTj8JoDCUHuVVe367Oth8XA3JYn1jTDX03eaEBkxwRTOuQKB.
 cassandra |  True | True |  null | 
$2a$10$2.WsdTj8JoDCUHuVVe367Oth8XA3JYn1jTDX03eaEBkxwRTOuQKB.

(2 rows)
cassandra@cqlsh> ^D
MLSEA-JJIRSA01:~ jjirsa$ ccm node1 cqlsh -u test -p cassandra
Connection error: ('Unable to connect to any servers', {'127.0.0.1': 
AuthenticationFailed('Failed to authenticate to 127.0.0.1: Error from server: 
code= [Server error] message="java.lang.RuntimeException: Invalid metadata 
has been detected for role test"',)})
MLSEA-JJIRSA01:~ jjirsa$
{quote}

I'll update the dtest to match shortly.

[~beobal] - if you guys have resources to try to repro, I won't mind seeing 
that Assignee field change to you or your test engineering folks.
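
(A rough sketch of the guard described above, not the actual patch; the method shape and names are assumptions. The point is the behaviour: refuse to infer a value for corrupt role metadata, log at WARN, and fail.)

{code}
// Rough sketch only, not the actual patch. The accessor below is an assumption;
// the behaviour matches the description: a null can_login/is_superuser column
// means someone wrote directly into system_auth.roles, so log a WARN and throw
// rather than silently inferring false.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class RoleMetadataGuardSketch
{
    private static final Logger logger = LoggerFactory.getLogger(RoleMetadataGuardSketch.class);

    public static boolean requireBooleanColumn(String roleName, Boolean columnValue)
    {
        if (columnValue == null)
        {
            logger.warn("Invalid metadata has been detected for role {}", roleName);
            throw new RuntimeException("Invalid metadata has been detected for role " + roleName);
        }
        return columnValue;
    }
}
{code}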


> During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes 
> Connection get lost, because of Server NullPointerException
> --
>
> Key: CASSANDRA-12700
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12700
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Cassandra cluster with two nodes running C* version 
> 3.7.0 and Python Driver 3.7 using Python 2.7.11. 
> OS: Red Hat Enterprise Linux 6.x x64, 
> RAM :8GB
> DISK :210GB
> Cores: 2
> Java 1.8.0_73 JRE
>Reporter: Rajesh Radhakrishnan
>Assignee: Jeff Jirsa
> Fix For: 3.x
>
>
> In our C* cluster we are using the latest Cassandra 3.7.0 (datastax-ddc.3.70) 
> with Python driver 3.7. Trying to insert 2 million row or more data into the 
> database, but sometimes we are getting "Null pointer Exception". 
> We are using Python 2.7.11 and Java 1.8.0_73 in the Cassandra nodes and in 
> the client its Python 2.7.12.
> {code:title=cassandra server log}
> ERROR [SharedPool-Worker-6] 2016-09-23 09:42:55,002 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0xc208da86, 
> L:/IP1.IP2.IP3.IP4:9042 - R:/IP5.IP6.IP7.IP8:58418]
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:33)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:24)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:113) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.cql3.UntypedResultSet$Row.getBoolean(UntypedResultSet.java:273)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:85)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:81)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> 

[jira] [Resolved] (CASSANDRA-12723) dtest failure in materialized_views_test.TestMaterializedViews.clustering_column_test

2016-09-28 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson resolved CASSANDRA-12723.
-
Resolution: Fixed
  Assignee: Philip Thompson  (was: DS Test Eng)

Pushed the fix four minutes ago

> dtest failure in 
> materialized_views_test.TestMaterializedViews.clustering_column_test
> -
>
> Key: CASSANDRA-12723
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12723
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Philip Thompson
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_dtest/818/testReport/materialized_views_test/TestMaterializedViews/clustering_column_test
> {code}
> Error Message
> 'TestMaterializedViews' object has no attribute '_settled_stages'
> {code}
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/materialized_views_test.py", line 
> 367, in clustering_column_test
> self._insert_data(session)
>   File "/home/automaton/cassandra-dtest/materialized_views_test.py", line 96, 
> in _insert_data
> self._settle_nodes()
>   File "/home/automaton/cassandra-dtest/materialized_views_test.py", line 85, 
> in _settle_nodes
> while attempts > 0 and not self._settled_stages(node):
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12723) dtest failure in materialized_views_test.TestMaterializedViews.clustering_column_test

2016-09-28 Thread Sean McCarthy (JIRA)
Sean McCarthy created CASSANDRA-12723:
-

 Summary: dtest failure in 
materialized_views_test.TestMaterializedViews.clustering_column_test
 Key: CASSANDRA-12723
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12723
 Project: Cassandra
  Issue Type: Test
Reporter: Sean McCarthy
Assignee: DS Test Eng


example failure:

http://cassci.datastax.com/job/cassandra-3.0_dtest/818/testReport/materialized_views_test/TestMaterializedViews/clustering_column_test

{code}
Error Message

'TestMaterializedViews' object has no attribute '_settled_stages'
{code}
{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/materialized_views_test.py", line 367, 
in clustering_column_test
self._insert_data(session)
  File "/home/automaton/cassandra-dtest/materialized_views_test.py", line 96, 
in _insert_data
self._settle_nodes()
  File "/home/automaton/cassandra-dtest/materialized_views_test.py", line 85, 
in _settle_nodes
while attempts > 0 and not self._settled_stages(node):
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12722) dtest failure in upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_2_1_x.in_order_by_without_selecting_test

2016-09-28 Thread Sean McCarthy (JIRA)
Sean McCarthy created CASSANDRA-12722:
-

 Summary: dtest failure in 
upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_2_1_x.in_order_by_without_selecting_test
 Key: CASSANDRA-12722
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12722
 Project: Cassandra
  Issue Type: Test
Reporter: Sean McCarthy
Assignee: DS Test Eng
 Attachments: node1.log, node2.log

example failure:

http://cassci.datastax.com/job/cassandra-2.1_dtest_upgrade/10/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_2_1_x/in_order_by_without_selecting_test

{code}
Error Message

Expected [[3], [4], [5], [0], [1], [2]] from SELECT v FROM test WHERE k IN (1, 
0), but got [[0], [1], [2], [3], [4], [5]]
{code}
{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/tools/decorators.py", line 46, in 
wrapped
f(obj)
  File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 4200, 
in in_order_by_without_selecting_test
assert_all(cursor, "SELECT v FROM test WHERE k IN (1, 0)", [[3], [4], [5], 
[0], [1], [2]])
  File "/home/automaton/cassandra-dtest/tools/assertions.py", line 169, in 
assert_all
assert list_res == expected, "Expected {} from {}, but got 
{}".format(expected, query, list_res)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12721) dtest failure in upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_0_x_To_indev_2_1_x.limit_multiget_test

2016-09-28 Thread Sean McCarthy (JIRA)
Sean McCarthy created CASSANDRA-12721:
-

 Summary: dtest failure in 
upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_0_x_To_indev_2_1_x.limit_multiget_test
 Key: CASSANDRA-12721
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12721
 Project: Cassandra
  Issue Type: Test
Reporter: Sean McCarthy
Assignee: DS Test Eng
 Attachments: node1.log, node2.log

example failure:

http://cassci.datastax.com/job/cassandra-2.1_dtest_upgrade/10/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_0_x_To_indev_2_1_x/limit_multiget_test

{code}
Error Message

Expected [[48, 'http://foo.com', 42]] from SELECT * FROM clicks WHERE userid IN 
(48, 2) LIMIT 1, but got [[2, u'http://foo.com', 42]]
{code}
{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 359, 
in limit_multiget_test
assert_one(cursor, "SELECT * FROM clicks WHERE userid IN (48, 2) LIMIT 1", 
[48, 'http://foo.com', 42])
  File "/home/automaton/cassandra-dtest/tools/assertions.py", line 130, in 
assert_one
assert list_res == [expected], "Expected {} from {}, but got 
{}".format([expected], query, list_res)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12672) Automate Nodetool Documentation

2016-09-28 Thread Jon Haddad (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jon Haddad updated CASSANDRA-12672:
---
Status: Ready to Commit  (was: Patch Available)

> Automate Nodetool Documentation
> ---
>
> Key: CASSANDRA-12672
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12672
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation and Website
>Reporter: Andrew Baker
>Priority: Minor
>  Labels: documentaion, lhf, nodetool
> Fix For: 3.x
>
>
> nodetool.rst has this:
> todo:: Try to autogenerate this from Nodetool’s help.
> Creating this ticket to submit that work against.
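
(Purely as an illustration of the todo above, one possible shape of such a generator: run {{nodetool help}}, which is assumed to be on PATH, and emit a minimal RST heading per listed command. The real generator would hook into the build and cover per-command options as well.)

{code}
// Illustrative sketch only: turn the command list printed by `nodetool help`
// (assumed to be on PATH) into minimal RST section headings for nodetool.rst.
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class NodetoolDocSketch
{
    public static void main(String[] args) throws Exception
    {
        Process p = new ProcessBuilder("nodetool", "help").redirectErrorStream(true).start();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream())))
        {
            String line;
            while ((line = r.readLine()) != null)
            {
                // `nodetool help` indents each command name; anything else is prose.
                if (!line.startsWith("    ") || line.trim().isEmpty())
                    continue;
                String cmd = line.trim().split("\\s+")[0];
                System.out.println(cmd);
                System.out.println(new String(new char[cmd.length()]).replace('\0', '-'));
                System.out.println();
            }
        }
        p.waitFor();
    }
}
{code}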



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12693) Add the JMX metrics about the total number of hints we have delivered

2016-09-28 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15529995#comment-15529995
 ] 

Aleksey Yeschenko commented on CASSANDRA-12693:
---

I've attached a v2 with more consistent names and counters replaced with more 
useful meters - otherwise not changing much. Can you have a look and tell me 
wdyt, [~dikanggu]?

> Add the JMX metrics about the total number of hints we have delivered
> -
>
> Key: CASSANDRA-12693
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12693
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Dikang Gu
>Assignee: Dikang Gu
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 
> 0001-Add-the-metrics-about-total-number-of-hints-we-have-.patch
>
>
> I find there are no metrics about the number of hints we have delivered, I 
> think it would be great to have the metrics, so that we have better 
> estimation about the progress of hints replay.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12693) Add the JMX metrics about the total number of hints we have delivered

2016-09-28 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-12693:
--
Attachment: 0001-Add-the-metrics-about-total-number-of-hints-we-have-.patch

> Add the JMX metrics about the total number of hints we have delivered
> -
>
> Key: CASSANDRA-12693
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12693
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Dikang Gu
>Assignee: Dikang Gu
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 
> 0001-Add-the-metrics-about-total-number-of-hints-we-have-.patch
>
>
> I find there are no metrics about the number of hints we have delivered, I 
> think it would be great to have the metrics, so that we have better 
> estimation about the progress of hints replay.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12720) Permissions on aggregate functions are not remove on drop

2016-09-28 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-12720:

Status: Patch Available  (was: Open)

The problem was that {{AuthMigrationListener}} didn't override 
{{MigrationListener::onDropAggregate}}. I've pushed patches for 2.2/3.0/trunk 
and opened a [dtest pull 
request|https://github.com/riptano/cassandra-dtest/pull/1350] to add coverage.
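
(A rough sketch of the shape of the fix; the real {{MigrationListener}} callback signature and the authorizer call differ, so the names below are assumptions for illustration only.)

{code}
// Rough sketch, not the committed patch: the auth listener simply needs an
// aggregate-drop callback that revokes every grant on the dropped resource,
// mirroring what already happens for dropped functions, tables and keyspaces.
// The interface, resource-name format and signature below are assumptions.
import java.util.List;

public class AuthMigrationListenerSketch
{
    interface Authorizer
    {
        void revokeAllOn(String resourceName); // stand-in for the real authorizer call
    }

    private final Authorizer authorizer;

    public AuthMigrationListenerSketch(Authorizer authorizer)
    {
        this.authorizer = authorizer;
    }

    // The previously missing hook: without it, permissions on a dropped
    // aggregate lingered in system_auth.
    public void onDropAggregate(String ksName, String aggregateName, List<String> argTypes)
    {
        authorizer.revokeAllOn("functions/" + ksName + "/" + aggregateName + argTypes);
    }
}
{code}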

||branch||testall||dtest||
|[12720-2.2|https://github.com/beobal/cassandra/tree/12720-2.2]|[testall|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-12720-2.2-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-12720-2.2-dtest]|
|[12720-3.0|https://github.com/beobal/cassandra/tree/12720-3.0]|[testall|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-12720-3.0-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-12720-3.0-dtest]|
|[12720-trunk|https://github.com/beobal/cassandra/tree/12720-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-12720-trunk-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-12720-trunk-dtest]|


> Permissions on aggregate functions are not remove on drop
> -
>
> Key: CASSANDRA-12720
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12720
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> When a user-defined aggregate is dropped, either directly or when its 
> enclosing keyspace is dropped, permissions granted on it are not cleaned up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12720) Permissions on aggregate functions are not remove on drop

2016-09-28 Thread Sam Tunnicliffe (JIRA)
Sam Tunnicliffe created CASSANDRA-12720:
---

 Summary: Permissions on aggregate functions are not remove on drop
 Key: CASSANDRA-12720
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12720
 Project: Cassandra
  Issue Type: Bug
  Components: Distributed Metadata
Reporter: Sam Tunnicliffe
Assignee: Sam Tunnicliffe
 Fix For: 2.2.x, 3.0.x, 3.x


When a user-defined aggregate is dropped, either directly or when its 
enclosing keyspace is dropped, permissions granted on it are not cleaned up.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11985) 2.0.x leaks file handles (Again)

2016-09-28 Thread Amit Singh Chowdhery (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15529895#comment-15529895
 ] 

Amit Singh Chowdhery commented on CASSANDRA-11985:
--

We tried to find existing fixes for tracking open file handles but didn't find 
what we expected. Sylvain Lebresne, could you please point us to a few related 
JIRA tickets?

It would be of great help to us. We look forward to your response.

> 2.0.x leaks file handles (Again)
> 
>
> Key: CASSANDRA-11985
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11985
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Unix kernel-2.6.32-431.56.1.el6.x86_64, Cassandra 2.0.14
>Reporter: Amit Singh Chowdhery
>Priority: Critical
>  Labels: Compaction
>
> We are running Cassandra 2.0.14 in a production environment and disk usage is 
> very high. On investigating further, we found around 4-5 files (~150 GB) stuck 
> in a deleted-but-still-open state.
> Command Fired : lsof /var/lib/cassandra | grep -i deleted 
> Output : 
> java12158 cassandra  308r   REG   8,16  34396638044 12727268 
> /var/lib/cassandra/data/mykeyspace/mycolumnfamily/mykeyspace-mycolumnfamily-jb-16481-Data.db
>  (deleted)
> java12158 cassandra  327r   REG   8,16 101982374806 12715102 
> /var/lib/cassandra/data/mykeyspace/mycolumnfamily/mykeyspace-mycolumnfamily-jb-126861-Data.db
>  (deleted)
> java12158 cassandra  339r   REG   8,16  12966304784 12714010 
> /var/lib/cassandra/data/mykeyspace/mycolumnfamily/mykeyspace-mycolumnfamily-jb-213548-Data.db
>  (deleted)
> java12158 cassandra  379r   REG   8,16  15323318036 12714957 
> /var/lib/cassandra/data/mykeyspace/mycolumnfamily/mykeyspace-mycolumnfamily-jb-182936-Data.db
>  (deleted)
> We are not able to see these files in any directory. This is similar to 
> CASSANDRA-6275, which is fixed, but the issue is still present on this higher 
> version. Also, no compaction-related errors are reported in the logs.
> Could anyone please suggest how to counter this? Restarting Cassandra is one 
> workaround, but the issue keeps recurring, and restarting a production machine 
> this frequently is not recommended.
> We also know that this version is not supported, but there is a high 
> probability that it can occur in higher versions too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9038) Atomic batches and single row atomicity appear to have no test coverage

2016-09-28 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15529903#comment-15529903
 ] 

Ariel Weisberg commented on CASSANDRA-9038:
---

The scope of this ticket is a bit mixed since single row atomicity has nothing 
to do with atomic batches.

I think you can only prove something is thread safe with static analysis and a 
language that supports validation. 

The goal would be to find the easy to find interleavings as well as to catch 
any future changes that break it in a big way.

Some of the linked tests would work as unit tests and some would work as 
dtests. That would make it easier to slot them into the existing test 
infrastructure. Since there is a dependency on Farsandra, dtests are probably 
the easiest way.

I think the goal is to force the various code paths that implement logged 
batches to occur. So if atomic batches work around failure, induce failure and 
then check that the batch completes as promised once the failure condition 
clears. Even a quick test that works is great because it will lay out the 
plumbing for more complex testing.

I recall adding byteman support to dtests so you can use that to create failure 
conditions.
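
(A minimal sketch of the verification half of such a test, using the DataStax Java driver against a locally running node; the tables {{ks.t1}}/{{ks.t2}} are hypothetical, and the failure injection itself, e.g. a byteman rule or node kill in a dtest, would wrap the batch execution.)

{code}
// Minimal sketch of the "batch completes as promised" assertion. Assumes a
// node on 127.0.0.1 and pre-created tables ks.t1/ks.t2 (both hypothetical);
// the failure-injection half of the test is not shown here.
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class LoggedBatchAtomicitySketch
{
    public static void main(String[] args)
    {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect())
        {
            session.execute("BEGIN BATCH " +
                            "INSERT INTO ks.t1 (k, v) VALUES (1, 'a'); " +
                            "INSERT INTO ks.t2 (k, v) VALUES (1, 'b'); " +
                            "APPLY BATCH");

            // Atomicity check: once the batch is acknowledged (or replayed from
            // the batchlog after the induced failure clears), either both writes
            // are visible or neither is.
            boolean t1Written = session.execute("SELECT v FROM ks.t1 WHERE k = 1").one() != null;
            boolean t2Written = session.execute("SELECT v FROM ks.t2 WHERE k = 1").one() != null;
            if (t1Written != t2Written)
                throw new AssertionError("Logged batch applied partially");
        }
    }
}
{code}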

> Atomic batches and single row atomicity appear to have no test coverage
> ---
>
> Key: CASSANDRA-9038
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9038
> Project: Cassandra
>  Issue Type: Test
>Reporter: Ariel Weisberg
>
> Leaving the solution to this up to the assignee. It seems like this is a 
> guarantee that should be checked.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12598) BailErrorStragery alike for ANTLR grammar parsing

2016-09-28 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15529730#comment-15529730
 ] 

Aleksey Yeschenko commented on CASSANDRA-12598:
---

[~Bereng] ping

> BailErrorStragery alike for ANTLR grammar parsing
> -
>
> Key: CASSANDRA-12598
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12598
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Berenguer Blasi
>Assignee: Berenguer Blasi
> Fix For: 3.x
>
>
> CQL parsing is missing a mechanism similar to 
> http://www.antlr.org/api/Java/org/antlr/v4/runtime/BailErrorStrategy.html
> This solves:
> - Parsing stops at the first error instead of continuing once an error has 
> already occurred, which is wasteful.
> - Any skipped Java code tied to 'recovered' missing tokens might later cause 
> Java exceptions (think uninitialized variables, non-incremented integers (div 
> by zero), etc.) which bubble up directly and hide the properly formatted 
> error messages from the user, with no indication of what went wrong at all; 
> just a cryptic NPE, for instance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12478) cassandra stress still uses CFMetaData.compile()

2016-09-28 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15529830#comment-15529830
 ] 

Paulo Motta commented on CASSANDRA-12478:
-

[~DenisRanger] the patch looks good, but it does not apply to trunk due to a 
conflict with CASSANDRA-12667. Can you rebase and provide a fixed patch? I will 
submit CI runs as soon as you provide the new patch, to make sure another rebase 
will not be necessary.

Thanks!

> cassandra stress still uses CFMetaData.compile()
> 
>
> Key: CASSANDRA-12478
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12478
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Denis Ranger
>  Labels: stress
> Fix For: 3.0.x
>
> Attachments: 
> 0001-Replaced-using-CFMetaData.compile-in-cassandra-stres.patch, 
> cassandra-stress-trunk.patch
>
>
> Using CFMetaData.compile() on a client tool causes permission problems. To 
> reproduce:
> * Start cassandra under user _cassandra_
> * Run {{chmod -R go-rwx /var/lib/cassandra}} to deny access to other users.
> * Use a non-root user to run {{cassandra-stress}} 
> This produces an access denied message on {{/var/lib/cassandra/commitlog}}.
> The attached fix uses client-mode functionality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


svn commit: r15830 - in /release/cassandra: 2.2.8/ debian/dists/22x/ debian/dists/22x/main/binary-amd64/ debian/dists/22x/main/binary-i386/ debian/dists/22x/main/source/ debian/pool/main/c/cassandra/

2016-09-28 Thread mshuler
Author: mshuler
Date: Wed Sep 28 13:35:54 2016
New Revision: 15830

Log:
Apache Cassandra 2.2.8 Release

Added:
release/cassandra/2.2.8/
release/cassandra/2.2.8/apache-cassandra-2.2.8-bin.tar.gz   (with props)
release/cassandra/2.2.8/apache-cassandra-2.2.8-bin.tar.gz.asc
release/cassandra/2.2.8/apache-cassandra-2.2.8-bin.tar.gz.asc.md5
release/cassandra/2.2.8/apache-cassandra-2.2.8-bin.tar.gz.asc.sha1
release/cassandra/2.2.8/apache-cassandra-2.2.8-bin.tar.gz.md5
release/cassandra/2.2.8/apache-cassandra-2.2.8-bin.tar.gz.sha1
release/cassandra/2.2.8/apache-cassandra-2.2.8-src.tar.gz   (with props)
release/cassandra/2.2.8/apache-cassandra-2.2.8-src.tar.gz.asc
release/cassandra/2.2.8/apache-cassandra-2.2.8-src.tar.gz.asc.md5
release/cassandra/2.2.8/apache-cassandra-2.2.8-src.tar.gz.asc.sha1
release/cassandra/2.2.8/apache-cassandra-2.2.8-src.tar.gz.md5
release/cassandra/2.2.8/apache-cassandra-2.2.8-src.tar.gz.sha1

release/cassandra/debian/pool/main/c/cassandra/cassandra-tools_2.2.8_all.deb   
(with props)
release/cassandra/debian/pool/main/c/cassandra/cassandra_2.2.8.diff.gz   
(with props)
release/cassandra/debian/pool/main/c/cassandra/cassandra_2.2.8.dsc
release/cassandra/debian/pool/main/c/cassandra/cassandra_2.2.8.orig.tar.gz  
 (with props)

release/cassandra/debian/pool/main/c/cassandra/cassandra_2.2.8.orig.tar.gz.asc
release/cassandra/debian/pool/main/c/cassandra/cassandra_2.2.8_all.deb   
(with props)
Modified:
release/cassandra/debian/dists/22x/InRelease
release/cassandra/debian/dists/22x/Release
release/cassandra/debian/dists/22x/Release.gpg
release/cassandra/debian/dists/22x/main/binary-amd64/Packages
release/cassandra/debian/dists/22x/main/binary-amd64/Packages.gz
release/cassandra/debian/dists/22x/main/binary-amd64/Release
release/cassandra/debian/dists/22x/main/binary-i386/Packages
release/cassandra/debian/dists/22x/main/binary-i386/Packages.gz
release/cassandra/debian/dists/22x/main/binary-i386/Release
release/cassandra/debian/dists/22x/main/source/Release
release/cassandra/debian/dists/22x/main/source/Sources.gz

Added: release/cassandra/2.2.8/apache-cassandra-2.2.8-bin.tar.gz
==
Binary file - no diff available.

Propchange: release/cassandra/2.2.8/apache-cassandra-2.2.8-bin.tar.gz
--
svn:mime-type = application/octet-stream

Added: release/cassandra/2.2.8/apache-cassandra-2.2.8-bin.tar.gz.asc
==
--- release/cassandra/2.2.8/apache-cassandra-2.2.8-bin.tar.gz.asc (added)
+++ release/cassandra/2.2.8/apache-cassandra-2.2.8-bin.tar.gz.asc Wed Sep 28 
13:35:54 2016
@@ -0,0 +1,17 @@
+-BEGIN PGP SIGNATURE-
+Version: GnuPG v1
+
+iQIcBAABCAAGBQJX5ae7AAoJEKJ4t4H+Syva4LMP/ia3FRdis0PV9Fb7zzZkCtNZ
+PKNYdXDK2JCNU2ksPEddcDkY/0QmEjnMwXX1BhkDMbJB0j32Ckx1+f3ecDoS/unW
+igHg3Zn7njqVJn0xGgGjcjf1lcszQE++/ioJFK5xXZkY59N2O13cbGbH6GKRBi+k
+V4snpJH5IteqrRLeSAYnGwClQJP+oCowxjmuU2lnh2fo4vo8TaIobNSUOLEED3b7
+nkyfbzbkg/qwst1RKjHmfQ+NGNh7JlZaYbcEmrNEo+/xS+M6wDSXO7BFAAvZJctx
+AoYlPawX9bN3Wap4bc2g/IfRDY3zu+ltnfWUswZiwswfWwpeqetjjCni+bXpq4G2
+wIN22UTp26gAIver3MbhVh1hABGcobjJaMSYG4SUBSpwXQKvp98Sa/JTEkiOnzEO
+tpPNp4fabEYSlsC2/kmWUN1dN97rK5S9PFoNjoBkluMKLOXN2+WVQpMov1aAIFVX
+GZuId+tU9jRLX/K4XFiqnKigNGBtl7KGgWIS7JaRUQGyY6SbJ1nXvr27dfCtuaVg
+LRT39OddfUkSjC0srhVbOaK4EpDvpdLpSsTTfo8oTC7zNaMtTymRymkAYCuJLm8S
+k2m79fU8x2eLZfYbVdaRyWKKE2aeWn0XPGS9euD7xkMBQ/nME0CM7Pwp+kp+0UZ8
+lE+jCbTkVHN6ZWPj5aHG
+=Lk74
+-END PGP SIGNATURE-

Added: release/cassandra/2.2.8/apache-cassandra-2.2.8-bin.tar.gz.asc.md5
==
--- release/cassandra/2.2.8/apache-cassandra-2.2.8-bin.tar.gz.asc.md5 (added)
+++ release/cassandra/2.2.8/apache-cassandra-2.2.8-bin.tar.gz.asc.md5 Wed Sep 
28 13:35:54 2016
@@ -0,0 +1 @@
+9576d20b4200750365796a1b6746fb01
\ No newline at end of file

Added: release/cassandra/2.2.8/apache-cassandra-2.2.8-bin.tar.gz.asc.sha1
==
--- release/cassandra/2.2.8/apache-cassandra-2.2.8-bin.tar.gz.asc.sha1 (added)
+++ release/cassandra/2.2.8/apache-cassandra-2.2.8-bin.tar.gz.asc.sha1 Wed Sep 
28 13:35:54 2016
@@ -0,0 +1 @@
+3a15bc2a94d52bf253182abd20b57eab525aa738
\ No newline at end of file

Added: release/cassandra/2.2.8/apache-cassandra-2.2.8-bin.tar.gz.md5
==
--- release/cassandra/2.2.8/apache-cassandra-2.2.8-bin.tar.gz.md5 (added)
+++ release/cassandra/2.2.8/apache-cassandra-2.2.8-bin.tar.gz.md5 Wed Sep 28 
13:35:54 2016
@@ -0,0 +1 @@
+0202dd59a7967f4fab8643820b4d66ec
\ No newline at end of file

Added: 

[jira] [Commented] (CASSANDRA-12089) Update metrics-reporter dependencies

2016-09-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15529461#comment-15529461
 ] 

Michał Matłoka commented on CASSANDRA-12089:


[~tjake] thanks! :)

> Update metrics-reporter dependencies
> 
>
> Key: CASSANDRA-12089
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12089
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>Assignee: Michał Matłoka
>Priority: Minor
> Fix For: 3.10
>
> Attachments: 12089-trunk.txt
>
>
> Proposal to update the metrics-reporter jars.
> Upcoming versions (>=3.0.2) of 
> [metrics-reporter-config|https://github.com/addthis/metrics-reporter-config] 
> should support prometheus and maybe also riemann (in v3).
> Relevant PRs:
> https://github.com/addthis/metrics-reporter-config/pull/26
> https://github.com/addthis/metrics-reporter-config/pull/27
> reporter-config 3.0.2+ can also be used in 2.1. Therefore it would be nice to 
> also update the jars in 2.1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12089) Update metrics-reporter dependencies

2016-09-28 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-12089:
---
   Resolution: Fixed
Fix Version/s: (was: 2.1.x)
   3.10
   Status: Resolved  (was: Patch Available)

Committed as {{e7fb0639db614bd5ee0200fe87d567ecd731ed64}}.

Thanks, and sorry for the delay.

> Update metrics-reporter dependencies
> 
>
> Key: CASSANDRA-12089
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12089
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>Assignee: Michał Matłoka
>Priority: Minor
> Fix For: 3.10
>
> Attachments: 12089-trunk.txt
>
>
> Proposal to update the metrics-reporter jars.
> Upcoming versions (>=3.0.2) of 
> [metrics-reporter-config|https://github.com/addthis/metrics-reporter-config] 
> should support prometheus and maybe also riemann (in v3).
> Relevant PRs:
> https://github.com/addthis/metrics-reporter-config/pull/26
> https://github.com/addthis/metrics-reporter-config/pull/27
> reporter-config 3.0.2+ can also be used in 2.1. Therefore it would be nice to 
> also update the jars in 2.1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Upgrade metrics-reporter-config to 3.0.3

2016-09-28 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/trunk c7f6ba8a4 -> e7fb0639d


Upgrade metrics-reporter-config to 3.0.3

Patch by Michał Matłoka; reviewed by tjake for CASSANDRA-12089


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e7fb0639
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e7fb0639
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e7fb0639

Branch: refs/heads/trunk
Commit: e7fb0639db614bd5ee0200fe87d567ecd731ed64
Parents: c7f6ba8
Author: Michal Matloka 
Authored: Wed Sep 7 20:05:20 2016 +0200
Committer: T Jake Luciani 
Committed: Wed Sep 28 08:12:03 2016 -0400

--
 CHANGES.txt |   1 +
 build.xml   |   2 +-
 lib/licenses/reporter-config-base-3.0.0.txt | 177 ---
 lib/licenses/reporter-config-base-3.0.3.txt | 177 +++
 lib/licenses/reporter-config3-3.0.0.txt | 177 ---
 lib/licenses/reporter-config3-3.0.3.txt | 177 +++
 lib/reporter-config-base-3.0.0.jar  | Bin 23633 -> 0 bytes
 lib/reporter-config-base-3.0.3.jar  | Bin 0 -> 29055 bytes
 lib/reporter-config3-3.0.0.jar  | Bin 14379 -> 0 bytes
 lib/reporter-config3-3.0.3.jar  | Bin 0 -> 33167 bytes
 10 files changed, 356 insertions(+), 355 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e7fb0639/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 7a5d73a..2c99e9d 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.10
+ * Upgrade metrics-reporter dependencies (CASSANDRA-12089)
  * Tune compaction thread count via nodetool (CASSANDRA-12248)
  * Add +=/-= shortcut syntax for update queries (CASSANDRA-12232)
  * Include repair session IDs in repair start message (CASSANDRA-12532)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e7fb0639/build.xml
--
diff --git a/build.xml b/build.xml
index 6954e04..1808808 100644
--- a/build.xml
+++ b/build.xml
@@ -426,7 +426,7 @@
   
   
   
-  
+  
   
   
   

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e7fb0639/lib/licenses/reporter-config-base-3.0.0.txt
--
diff --git a/lib/licenses/reporter-config-base-3.0.0.txt 
b/lib/licenses/reporter-config-base-3.0.0.txt
deleted file mode 100644
index 430d42b..000
--- a/lib/licenses/reporter-config-base-3.0.0.txt
+++ /dev/null
@@ -1,177 +0,0 @@
-
-  Apache License
-Version 2.0, January 2004
- http://www.apache.org/licenses/
-
-TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-1. Definitions.
-
-   "License" shall mean the terms and conditions for use, reproduction,
-   and distribution as defined by Sections 1 through 9 of this document.
-
-   "Licensor" shall mean the copyright owner or entity authorized by
-   the copyright owner that is granting the License.
-
-   "Legal Entity" shall mean the union of the acting entity and all
-   other entities that control, are controlled by, or are under common
-   control with that entity. For the purposes of this definition,
-   "control" means (i) the power, direct or indirect, to cause the
-   direction or management of such entity, whether by contract or
-   otherwise, or (ii) ownership of fifty percent (50%) or more of the
-   outstanding shares, or (iii) beneficial ownership of such entity.
-
-   "You" (or "Your") shall mean an individual or Legal Entity
-   exercising permissions granted by this License.
-
-   "Source" form shall mean the preferred form for making modifications,
-   including but not limited to software source code, documentation
-   source, and configuration files.
-
-   "Object" form shall mean any form resulting from mechanical
-   transformation or translation of a Source form, including but
-   not limited to compiled object code, generated documentation,
-   and conversions to other media types.
-
-   "Work" shall mean the work of authorship, whether in Source or
-   Object form, made available under the License, as indicated by a
-   copyright notice that is included in or attached to the work
-   (an example is provided in the Appendix below).
-
-   "Derivative Works" shall mean any work, whether in Source or Object
-   form, that is based on (or derived from) the Work and for which the
-   editorial revisions, annotations, elaborations, or other modifications
-   represent, as a whole, an original work of authorship. For the 

[jira] [Updated] (CASSANDRA-12089) Update metrics-reporter dependencies

2016-09-28 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-12089:
---
Assignee: Michał Matłoka

> Update metrics-reporter dependencies
> 
>
> Key: CASSANDRA-12089
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12089
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>Assignee: Michał Matłoka
>Priority: Minor
> Fix For: 2.1.x
>
> Attachments: 12089-trunk.txt
>
>
> Proposal to update the metrics-reporter jars.
> Upcoming versions (>=3.0.2) of 
> [metrics-reporter-config|https://github.com/addthis/metrics-reporter-config] 
> should support prometheus and maybe also riemann (in v3).
> Relevant PRs:
> https://github.com/addthis/metrics-reporter-config/pull/26
> https://github.com/addthis/metrics-reporter-config/pull/27
> reporter-config 3.0.2+ can also be used in 2.1. Therefore it would be nice to 
> also update the jars in 2.1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12089) Update metrics-reporter dependencies

2016-09-28 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-12089:
---
Assignee: (was: Robert Stupp)

> Update metrics-reporter dependencies
> 
>
> Key: CASSANDRA-12089
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12089
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>Priority: Minor
> Fix For: 2.1.x
>
> Attachments: 12089-trunk.txt
>
>
> Proposal to update the metrics-reporter jars.
> Upcoming versions (>=3.0.2) of 
> [metrics-reporter-config|https://github.com/addthis/metrics-reporter-config] 
> should support prometheus and maybe also riemann (in v3).
> Relevant PRs:
> https://github.com/addthis/metrics-reporter-config/pull/26
> https://github.com/addthis/metrics-reporter-config/pull/27
> reporter-config 3.0.2+ can also be used in 2.1. Therefore it would be nice to 
> also update the jars in 2.1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-12421) Add the option to only gossip manual severity, not severity from IOWait.

2016-09-28 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko resolved CASSANDRA-12421.
---
   Resolution: Won't Fix
Fix Version/s: (was: 3.x)

> Add the option to only gossip manual severity, not severity from IOWait.
> 
>
> Key: CASSANDRA-12421
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12421
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Dikang Gu
>Assignee: Dikang Gu
>
> Similar to CASSANDRA-11737, but I'd like to still respect the manual 
> severity, and ignore the severity calculated from IOWait/Compaction, since in 
> general disk throughput is not a problem for flash cards.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12421) Add the option to only gossip manual severity, not severity from IOWait.

2016-09-28 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15529275#comment-15529275
 ] 

Aleksey Yeschenko commented on CASSANDRA-12421:
---

Sorry for not catching the trunk detail from your comments (that it's not 
affected). With trunk not affected, and new features not going into 
2.1/2.2/3.0, I'm going to close this one as "Won't Fix". Thanks.

> Add the option to only gossip manual severity, not severity from IOWait.
> 
>
> Key: CASSANDRA-12421
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12421
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Dikang Gu
>Assignee: Dikang Gu
> Fix For: 3.x
>
>
> Similar to CASSANDRA-11737, but I'd like to still respect the manual 
> severity, and ignore the severity calculated from IOWait/Compaction, since in 
> general disk throughput is not a problem for flash cards.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12696) Allow to change logging levels based on components

2016-09-28 Thread Stefan Podkowinski (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Podkowinski updated CASSANDRA-12696:
---
Attachment: 12696-trunk.patch

> Allow to change logging levels based on components
> --
>
> Key: CASSANDRA-12696
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12696
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>Priority: Minor
>  Labels: lhf
> Attachments: 12696-trunk.patch
>
>
> Currently users are able to dynamically change the logging configuration by 
> using {{nodetool setlogginglevel <class> <level>}}. Unfortunately this 
> requires knowing a bit about the Cassandra package hierarchy, and gathering 
> all the involved packages/classes can be tedious, especially in 
> troubleshooting situations. What I'd like to have is a way to tell a user 
> "_when X happens, enable debug logs for bootstrapping/repair/compactions/.._" 
> by simply running e.g. {{nodetool setlogginglevel bootstrap DEBUG}}.
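
One plausible shape for such a feature, purely for illustration (this is not 
taken from the attached 12696-trunk.patch, and the component names and package 
lists below are assumptions), would be a mapping from component names to the 
logger prefixes they cover:

{code:java}
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical component -> logger-prefix mapping consulted by setlogginglevel
// before falling back to treating the argument as a plain class/package name.
public final class LoggingComponents
{
    private static final Map<String, List<String>> COMPONENTS = new HashMap<>();
    static
    {
        COMPONENTS.put("bootstrap", Arrays.asList("org.apache.cassandra.dht",
                                                  "org.apache.cassandra.streaming"));
        COMPONENTS.put("repair", Arrays.asList("org.apache.cassandra.repair"));
        COMPONENTS.put("compaction", Arrays.asList("org.apache.cassandra.db.compaction"));
    }

    /** Loggers to adjust for the given argument; the argument itself if it is not a known component. */
    public static List<String> loggersFor(String classQualifierOrComponent)
    {
        return COMPONENTS.getOrDefault(classQualifierOrComponent.toLowerCase(),
                                       Arrays.asList(classQualifierOrComponent));
    }
}
{code}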



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12696) Allow to change logging levels based on components

2016-09-28 Thread Stefan Podkowinski (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Podkowinski updated CASSANDRA-12696:
---
Labels: lhf  (was: )
Status: Patch Available  (was: In Progress)

> Allow to change logging levels based on components
> --
>
> Key: CASSANDRA-12696
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12696
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>Priority: Minor
>  Labels: lhf
> Attachments: 12696-trunk.patch
>
>
> Currently users are able to dynamically change the logging configuration by 
> using {{nodetool setlogginglevel <class> <level>}}. Unfortunately this 
> requires knowing a bit about the Cassandra package hierarchy, and gathering 
> all the involved packages/classes can be tedious, especially in 
> troubleshooting situations. What I'd like to have is a way to tell a user 
> "_when X happens, enable debug logs for bootstrapping/repair/compactions/.._" 
> by simply running e.g. {{nodetool setlogginglevel bootstrap DEBUG}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12705) Add column definition kind to system schema dropped columns

2016-09-28 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15529160#comment-15529160
 ] 

Aleksey Yeschenko commented on CASSANDRA-12705:
---

For major upgrades we don't really care. Nodes with different messaging 
versions don't exchange schema with each other, in either direction. So unlike 
the cdc mess we had to live through (a minor upgrade), here all is 'easy'.

This will change *after* 4.0 though, with new schema protocols. Not going to 
affect this ticket.

> Add column definition kind to system schema dropped columns
> ---
>
> Key: CASSANDRA-12705
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12705
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Stefania
> Fix For: 4.0
>
>
> Both regular and static columns can currently be dropped by users, but this 
> information is currently not stored in {{SchemaKeyspace.DroppedColumns}}. As 
> a consequence, {{CFMetadata.getDroppedColumnDefinition}} returns a regular 
> column and this has caused problems such as CASSANDRA-12582.
> We should add the column kind to {{SchemaKeyspace.DroppedColumns}} so that 
> {{CFMetadata.getDroppedColumnDefinition}} can create the correct column 
> definition. However, altering schema tables would cause inter-node 
> communication failures during a rolling upgrade, see CASSANDRA-12236. 
> Therefore we should wait for a full schema migration when upgrading to the 
> next major version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12705) Add column definition kind to system schema dropped columns

2016-09-28 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-12705:
--
Reviewer: Aleksey Yeschenko

> Add column definition kind to system schema dropped columns
> ---
>
> Key: CASSANDRA-12705
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12705
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Stefania
> Fix For: 4.0
>
>
> Both regular and static columns can currently be dropped by users, but this 
> information is currently not stored in {{SchemaKeyspace.DroppedColumns}}. As 
> a consequence, {{CFMetadata.getDroppedColumnDefinition}} returns a regular 
> column and this has caused problems such as CASSANDRA-12582.
> We should add the column kind to {{SchemaKeyspace.DroppedColumns}} so that 
> {{CFMetadata.getDroppedColumnDefinition}} can create the correct column 
> definition. However, altering schema tables would cause inter-node 
> communication failures during a rolling upgrade, see CASSANDRA-12236. 
> Therefore we should wait for a full schema migration when upgrading to the 
> next major version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10303) streaming for 'nodetool rebuild' fails after adding a datacenter

2016-09-28 Thread techpyaasa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15529039#comment-15529039
 ] 

techpyaasa commented on CASSANDRA-10303:


[~yukim] Please check this.

> streaming for 'nodetool rebuild' fails after adding a datacenter 
> -
>
> Key: CASSANDRA-10303
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10303
> Project: Cassandra
>  Issue Type: Bug
> Environment: jdk1.7
> cassandra 2.1.8
>Reporter: zhaoyan
>
> We added another datacenter and ran nodetool rebuild DC1.
> Streaming from some nodes of the old datacenter always hangs up with this 
> exception:
> {code}
> ERROR [Thread-1472] 2015-09-10 19:24:53,091 CassandraDaemon.java:223 - 
> Exception in thread Thread[Thread-1472,5,RMI Runtime]
> java.lang.RuntimeException: java.io.IOException: Connection timed out
> at com.google.common.base.Throwables.propagate(Throwables.java:160) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) 
> ~[apache-cassandra-2.1.8.jar:2.1.8]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_60]
> Caused by: java.io.IOException: Connection timed out
> at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[na:1.7.0_60]
> at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) 
> ~[na:1.7.0_60]
> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) ~[na:1.7.0_60]
> at sun.nio.ch.IOUtil.read(IOUtil.java:197) ~[na:1.7.0_60]
> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379) 
> ~[na:1.7.0_60]
> at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:59) 
> ~[na:1.7.0_60]
> at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:109) 
> ~[na:1.7.0_60]
> at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103) 
> ~[na:1.7.0_60]
> at 
> org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:172)
>  ~[apache-cassandra-2.1.8.jar:2.1.8]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.1.8.jar:2.1.8]
> ... 1 common frames omitted
> {code}
> I must restart the node to stop the current rebuild, and rebuild again and 
> again until it succeeds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12689) All MutationStage threads blocked, kills server

2016-09-28 Thread Benjamin Roth (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15529034#comment-15529034
 ] 

Benjamin Roth commented on CASSANDRA-12689:
---

I was able to track that bug down and prove it with a negative dtest, and was 
able to prove my fix with a positive dtest 
(https://gist.github.com/brstgt/339d20994828794c8f374bc987b7b6d7).

To be able to run those tests I had to do some hacks 
(https://github.com/Jaumo/cassandra/commit/6b6806b9ba60c9b7111f00451aec4c6182199702)
 so that there is only a single mutation worker and so that MV lock acquisition 
fails in a predetermined order. The test is not deterministic if there is more 
than one worker, because then race conditions pop up.

From my point of view this is solid proof of my theory. Please apply my patch. 
I have already deployed it on our production system and it also seems to work 
there; at least there were no more deadlocks under high load.

Ah, btw: That bug is as old as MVs are.

If there are questions, please get back to me.
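
To make the failure mode easier to picture, here is a tiny, self-contained 
illustration (plain Java, not Cassandra code) of the same starvation pattern: 
every worker of a bounded pool blocks on a future that can only be completed 
by a task queued behind it, so the pool wedges permanently until restart.

{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class StarvationDeadlockDemo
{
    public static void main(String[] args) throws Exception
    {
        // One worker, like the single-mutation-worker hack used in the dtest.
        ExecutorService mutationStage = Executors.newFixedThreadPool(1);
        CompletableFuture<Void> viewWrite = new CompletableFuture<>();

        // Task A: blocks synchronously on the future -- analogous to a hint/mutation
        // waiting for a view write while occupying a MutationStage worker.
        mutationStage.submit(() -> {
            try
            {
                viewWrite.get(5, TimeUnit.SECONDS); // ties up the only worker
                System.out.println("A finished");
            }
            catch (Exception e)
            {
                System.out.println("A timed out: the pool is wedged");
            }
        });

        // Task B: the only task that would complete the future, but it can never
        // run because task A occupies every worker in the pool.
        mutationStage.submit(() -> viewWrite.complete(null));

        mutationStage.shutdown();
        mutationStage.awaitTermination(10, TimeUnit.SECONDS);
    }
}
{code}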

> All MutationStage threads blocked, kills server
> ---
>
> Key: CASSANDRA-12689
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12689
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Benjamin Roth
>Priority: Critical
>
> Under heavy load (e.g. due to repair during normal operations), a lot of 
> NullPointerExceptions occur in MutationStage. Unfortunately, the log is not 
> very chatty, trace is missing:
> 2016-09-22T06:29:47+00:00 cas6 [MutationStage-1] 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService Uncaught 
> exception on thread Thread[MutationStage-1,5,main]: {}
> 2016-09-22T06:29:47+00:00 cas6 #011java.lang.NullPointerException: null
> Then, after some time, in most cases ALL threads in the MutationStage pools 
> are completely blocked. This leads to pending tasks piling up until the 
> server runs OOM and is completely unresponsive due to GC. Threads will NEVER 
> unblock until a server restart, even if load goes completely down, all hints 
> are paused, and no compaction or repair is running. Only a restart helps.
> I can understand that pending tasks in MutationStage may pile up under heavy 
> load, but tasks should be processed and dequeued after load goes down. This 
> is definitely not the case. This looks more like an unhandled exception 
> leading to a stuck lock.
> Stack trace from jconsole, all Threads in MutationStage show same trace.
> Name: MutationStage-48
> State: WAITING on java.util.concurrent.CompletableFuture$Signaller@fcc8266
> Total blocked: 137  Total waited: 138.513
> Stack trace: 
> sun.misc.Unsafe.park(Native Method)
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1693)
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1729)
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
> com.google.common.util.concurrent.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:137)
> org.apache.cassandra.db.Mutation.apply(Mutation.java:227)
> org.apache.cassandra.db.Mutation.apply(Mutation.java:241)
> org.apache.cassandra.hints.Hint.apply(Hint.java:96)
> org.apache.cassandra.hints.HintVerbHandler.doVerb(HintVerbHandler.java:91)
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:66)
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:134)
> org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109)
> java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10303) streaming for 'nodetool rebuild' fails after adding a datacenter

2016-09-28 Thread techpyaasa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15529032#comment-15529032
 ] 

techpyaasa commented on CASSANDRA-10303:


I'm facing a similar exception during 'nodetool rebuild' when trying to add a 
new data center (DC3) to an existing C* 2.0.17 cluster which already has 2 data 
centers, DC1 and DC2. (Each DC has 3 groups, each group has 3 nodes, so 9 nodes 
per DC in total, with approx. 700GB of data per node and RF=3 on all DCs.)

{quote}
ERROR [STREAM-OUT-/xxx.xxx.198.191] 2016-09-27 00:28:10,327 StreamSession.java 
(line 461) [Stream #30852870-8472-11e6-b043-3f260c696828] Streaming error 
occurred
java.io.IOException: Connection timed out
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:487)
at 
org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:44)
at 
org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:339)
at 
org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:311)
at java.lang.Thread.run(Thread.java:745)
 INFO [STREAM-OUT-/xxx.xxx.198.191] 2016-09-27 00:28:10,347 
StreamResultFuture.java (line 186) [Stream 
#30852870-8472-11e6-b043-3f260c696828] Session with /xxx.xxx.198.191 is complete
ERROR [STREAM-OUT-/xxx.xxx.198.191] 2016-09-27 00:28:10,347 StreamSession.java 
(line 461) [Stream #30852870-8472-11e6-b043-3f260c696828] Streaming error 
occurred
java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:487)
at 
org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:44)
at 
org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:339)
at 
org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:319)
at java.lang.Thread.run(Thread.java:745)
ERROR [STREAM-IN-/xxx.xxx.198.191] 2016-09-27 00:28:10,461 StreamSession.java 
(line 461) [Stream #30852870-8472-11e6-b043-3f260c696828] Streaming error 
occurred
java.lang.RuntimeException: Outgoing stream handler has been closed
at 
org.apache.cassandra.streaming.ConnectionHandler.sendMessage(ConnectionHandler.java:126)
at 
org.apache.cassandra.streaming.StreamSession.receive(StreamSession.java:524)
at 
org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:413)
at 
org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:245)
at java.lang.Thread.run(Thread.java:745)
{quote}

"sysctl -w net.ipv4.tcp_keepalive_time=60 net.ipv4.tcp_keepalive_probes=3 
net.ipv4.tcp_keepalive_intvl=10" 
Does setting this would fix this issue ? And if so , is this enough to set this 
on new nodes on which are going to run 'nodetool rebuild' or need to change 
this values on all existing nodes from which data is going to get streamed?

Thanks in advance.





> streaming for 'nodetool rebuild' fails after adding a datacenter 
> -
>
> Key: CASSANDRA-10303
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10303
> Project: Cassandra
>  Issue Type: Bug
> Environment: jdk1.7
> cassandra 2.1.8
>Reporter: zhaoyan
>
> We added another datacenter and ran nodetool rebuild DC1.
> Streaming from some nodes of the old datacenter always hangs up with this 
> exception:
> {code}
> ERROR [Thread-1472] 2015-09-10 19:24:53,091 CassandraDaemon.java:223 - 
> Exception in thread Thread[Thread-1472,5,RMI Runtime]
> java.lang.RuntimeException: java.io.IOException: Connection timed out
> at com.google.common.base.Throwables.propagate(Throwables.java:160) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) 
> ~[apache-cassandra-2.1.8.jar:2.1.8]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_60]
> Caused by: java.io.IOException: Connection timed out
> at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[na:1.7.0_60]
> at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) 
> ~[na:1.7.0_60]
> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) ~[na:1.7.0_60]
> at sun.nio.ch.IOUtil.read(IOUtil.java:197) 

[jira] [Commented] (CASSANDRA-12461) Add hooks to StorageService shutdown

2016-09-28 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15529017#comment-15529017
 ] 

Alex Petrov commented on CASSANDRA-12461:
-

Thank you for the detailed review. 
I've made the corresponding fixes and rebased. I didn't squash commits, mostly 
to make it easier to look over.

bq. shall we rename inShutdownHook to something like draining or drained?

Sure, renamed.

bq. there is still one unprotected call to logger.warn in drain(), line 4462, 
and a call in runShutdownHooks()

As for the one at line {{4462}} (I assume that's a line number after the 
rebase; [this one in the original 
file|https://github.com/ifesdjeen/cassandra/blob/c49dd44710f5ba816eb0cd414ef31efa914f7776/src/java/org/apache/cassandra/service/StorageService.java#L4449]):
 I've only "turned off" the logging that is related strictly to draining. I 
might be missing something, however.

bq. let's update the documentation for drain()

True, it was incorrect; fixed now. Stating the differences with a normal 
shutdown hook would be too technical (both logging and Windows timers are 
unrelated to the core of what Cassandra does), so I left it out.

bq. shall we catch any exceptions in drain() to ensure the post shutdown hooks 
are run also if there is an exception?

You're right, it's a good idea to do that. I've also changed catching 
{{Exception}} to catching {{Throwable}} (with {{Throwables}} as you described) 
when running hooks in order to avoid unintended errors breaking drain.

bq. the documentation says that the post shutdown hooks should only be called 
on final shutdown, not on drain, is this still the case?

Since there's just one process, either shutdown or drain (you can't re-run the 
drain code after termination), it's not true anymore. Actually, that was the 
reason to provide the second part of the patch.

bq. here is a proposal for some extra work (feel free to turn it down): 
refactor setMode() to accept an optional log level instead of a boolean, when 
the optional is empty it should not log, so we could call this method also from 
the shutdown hook and possibly replace inShutdownHook with operationMode, 
provided this becomes volatile. I would also add an override since most of the 
times it is called with the boolean set to true, so the override would have the 
logging level set to INFO.

I like the idea a lot, and I've implemented at least a part of it (using 
{{DRAINED}} instead of an additional boolean). I think that transitioning to 
the {{DRAINING}} / {{DRAINED}} state on shutdown is also correct. 
As regards the log level, I could not find a good, intuitive way to pass a log 
level: the slf4j API uses per-level method calls instead of level parameters, 
and neither the native, log4j nor Apache Commons Logging levels translate 
fully to slf4j method calls. I hope it's still OK.

bq. given that drain() is synchronized, can we not just look at inShutdownHook 
(or operationMode) at the beginning of drain(), to solve CASSANDRA-12509? Is 
there anything else I am missing about it?

Actually my intention was that this issue fully contains #12509 already. I will 
add the corresponding link.

bq. the StorageService import is unused in the Enabled*.java files

Cleaned up.

bq. we could replace runShutdownHooks(); with Throwables.perform(null, 
postShutdownHooks.stream().map(h -> h::run));, logging any non-null returned 
exceptions, if not in final shutdown. This would not only avoid one method 
implementation, but also make the log call visible in drain().

I've made it log exceptions even if it's a final shutdown. It might be useful 
to have them logged in both cases.

I've triggered a CI run and will report back as soon as the tests have passed.
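
For illustration only, here is a rough sketch of the behaviour described above 
(this is not the actual patch): run every registered post-shutdown hook, catch 
{{Throwable}} rather than {{Exception}}, and log rather than rethrow, so that 
one failing hook can neither skip the remaining hooks nor break drain.

{code:java}
import java.util.ArrayList;
import java.util.List;

public class PostShutdownHooks
{
    private final List<Runnable> postShutdownHooks = new ArrayList<>();

    public void register(Runnable hook)
    {
        postShutdownHooks.add(hook);
    }

    void runPostShutdownHooks()
    {
        Throwable accumulated = null;
        for (Runnable hook : postShutdownHooks)
        {
            try
            {
                hook.run();
            }
            catch (Throwable t) // Throwable, not Exception, as discussed above
            {
                if (accumulated == null)
                    accumulated = t;
                else
                    accumulated.addSuppressed(t);
            }
        }
        if (accumulated != null)
            // logged in both the drain and the final-shutdown case, never rethrown
            System.err.println("Post-shutdown hook failure: " + accumulated);
    }
}
{code}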

> Add hooks to StorageService shutdown
> 
>
> Key: CASSANDRA-12461
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12461
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Anthony Cozzie
>Assignee: Anthony Cozzie
> Fix For: 3.x
>
> Attachments: 
> 0001-CASSANDRA-12461-add-C-support-for-shutdown-runnables.patch
>
>
> The JVM will usually run shutdown hooks in parallel.  This can lead to 
> synchronization problems between Cassandra, services that depend on it, and 
> services it depends on.  This patch adds some simple support for shutdown 
> hooks to StorageService.
> This should nearly solve CASSANDRA-12011



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12632) Failure in LogTransactionTest.testUnparsableFirstRecord-compression

2016-09-28 Thread Nate McCall (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nate McCall updated CASSANDRA-12632:

Status: Ready to Commit  (was: Patch Available)

> Failure in LogTransactionTest.testUnparsableFirstRecord-compression
> ---
>
> Key: CASSANDRA-12632
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12632
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Joel Knighton
>Assignee: Stefania
> Fix For: 3.0.x, 3.x
>
>
> Stacktrace:
> {code}
> junit.framework.AssertionFailedError: 
> [/home/automaton/cassandra/build/test/cassandra/data:161/TransactionLogsTest/mockcf23-73ad523078d311e6985893d33dad3001/mc-1-big-Index.db,
>  
> /home/automaton/cassandra/build/test/cassandra/data:161/TransactionLogsTest/mockcf23-73ad523078d311e6985893d33dad3001/mc-1-big-TOC.txt,
>  
> /home/automaton/cassandra/build/test/cassandra/data:161/TransactionLogsTest/mockcf23-73ad523078d311e6985893d33dad3001/mc-1-big-Filter.db,
>  
> /home/automaton/cassandra/build/test/cassandra/data:161/TransactionLogsTest/mockcf23-73ad523078d311e6985893d33dad3001/mc-1-big-Data.db,
>  
> /home/automaton/cassandra/build/test/cassandra/data:161/TransactionLogsTest/mockcf23-73ad523078d311e6985893d33dad3001/mc_txn_compaction_73af4e00-78d3-11e6-9858-93d33dad3001.log]
>   at 
> org.apache.cassandra.db.lifecycle.LogTransactionTest.assertFiles(LogTransactionTest.java:1228)
>   at 
> org.apache.cassandra.db.lifecycle.LogTransactionTest.assertFiles(LogTransactionTest.java:1196)
>   at 
> org.apache.cassandra.db.lifecycle.LogTransactionTest.testCorruptRecord(LogTransactionTest.java:1040)
>   at 
> org.apache.cassandra.db.lifecycle.LogTransactionTest.testUnparsableFirstRecord(LogTransactionTest.java:988)
> {code}
> Example failure:
> http://cassci.datastax.com/job/cassandra-3.9_testall/89/testReport/junit/org.apache.cassandra.db.lifecycle/LogTransactionTest/testUnparsableFirstRecord_compression/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12700) During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes Connection get lost, because of Server NullPointerException

2016-09-28 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528925#comment-15528925
 ] 

Sam Tunnicliffe commented on CASSANDRA-12700:
-

bq. I didn't used CREATE ROLE but instead used CREATE USER

That's cool, it's just a syntax thing really; under the hood {{CREATE USER}} is 
translated to {{CREATE ROLE}} with the {{LOGIN}} option set to true. 
So, as I mentioned, my concern here is how we can come to read a row with a 
null value.

> During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes 
> Connection get lost, because of Server NullPointerException
> --
>
> Key: CASSANDRA-12700
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12700
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Cassandra cluster with two nodes running C* version 
> 3.7.0 and Python Driver 3.7 using Python 2.7.11. 
> OS: Red Hat Enterprise Linux 6.x x64, 
> RAM :8GB
> DISK :210GB
> Cores: 2
> Java 1.8.0_73 JRE
>Reporter: Rajesh Radhakrishnan
>Assignee: Jeff Jirsa
> Fix For: 3.x
>
>
> In our C* cluster we are using the latest Cassandra 3.7.0 (datastax-ddc.3.70) 
> with Python driver 3.7. Trying to insert 2 million row or more data into the 
> database, but sometimes we are getting "Null pointer Exception". 
> We are using Python 2.7.11 and Java 1.8.0_73 in the Cassandra nodes and in 
> the client its Python 2.7.12.
> {code:title=cassandra server log}
> ERROR [SharedPool-Worker-6] 2016-09-23 09:42:55,002 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0xc208da86, 
> L:/IP1.IP2.IP3.IP4:9042 - R:/IP5.IP6.IP7.IP8:58418]
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:33)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:24)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:113) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.cql3.UntypedResultSet$Row.getBoolean(UntypedResultSet.java:273)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:85)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:81)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.getRoleFromTable(CassandraRoleManager.java:503)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:485)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.canLogin(CassandraRoleManager.java:298)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at org.apache.cassandra.service.ClientState.login(ClientState.java:227) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:79)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:283)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_73]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_73]
> ERROR [SharedPool-Worker-1] 2016-09-23 09:42:56,238 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0x8e2eae00, 
> L:/IP1.IP2.IP3.IP4:9042 - R:/IP5.IP6.IP7.IP8:58421]
> 

[jira] [Commented] (CASSANDRA-12705) Add column definition kind to system schema dropped columns

2016-09-28 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528924#comment-15528924
 ] 

Stefania commented on CASSANDRA-12705:
--

I'm not sure what needs to be done in order to handle changes to the schema 
tables during a major upgrade, [~iamaleksey]?

> Add column definition kind to system schema dropped columns
> ---
>
> Key: CASSANDRA-12705
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12705
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Stefania
> Fix For: 4.0
>
>
> Both regular and static columns can currently be dropped by users, but this 
> information is currently not stored in {{SchemaKeyspace.DroppedColumns}}. As 
> a consequence, {{CFMetadata.getDroppedColumnDefinition}} returns a regular 
> column and this has caused problems such as CASSANDRA-12582.
> We should add the column kind to {{SchemaKeyspace.DroppedColumns}} so that 
> {{CFMetadata.getDroppedColumnDefinition}} can create the correct column 
> definition. However, altering schema tables would cause inter-node 
> communication failures during a rolling upgrade, see CASSANDRA-12236. 
> Therefore we should wait for a full schema migration when upgrading to the 
> next major version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12705) Add column definition kind to system schema dropped columns

2016-09-28 Thread Nate McCall (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528903#comment-15528903
 ] 

Nate McCall commented on CASSANDRA-12705:
-

bq. the handling of the schema migration during upgrade is missing.

[~Stefania] Thanks for the effort so far. Should this be Patch Available, or 
are more changes coming?

> Add column definition kind to system schema dropped columns
> ---
>
> Key: CASSANDRA-12705
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12705
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Stefania
> Fix For: 4.0
>
>
> Both regular and static columns can currently be dropped by users, but this 
> information is currently not stored in {{SchemaKeyspace.DroppedColumns}}. As 
> a consequence, {{CFMetadata.getDroppedColumnDefinition}} returns a regular 
> column and this has caused problems such as CASSANDRA-12582.
> We should add the column kind to {{SchemaKeyspace.DroppedColumns}} so that 
> {{CFMetadata.getDroppedColumnDefinition}} can create the correct column 
> definition. However, altering schema tables would cause inter-node 
> communication failures during a rolling upgrade, see CASSANDRA-12236. 
> Therefore we should wait for a full schema migration when upgrading to the 
> next major version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12700) During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes Connection get lost, because of Server NullPointerException

2016-09-28 Thread Rajesh Radhakrishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528860#comment-15528860
 ] 

Rajesh Radhakrishnan commented on CASSANDRA-12700:
--

Thanks Sam and Jeff. I was using the older DataStax Cassandra 
documentation (2.x), which has been replaced by 
https://docs.datastax.com/en/cql/3.3/cql/cql_reference/create_user_r.html

There is a note at the beginning, which I quote: "Note: CREATE USER is 
supported for backwards compatibility. Authentication and authorization for 
Cassandra 2.2 and later are based on ROLES, and CREATE ROLE should be used."

I didn't use CREATE ROLE but instead used CREATE USER. 

> During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes 
> Connection get lost, because of Server NullPointerException
> --
>
> Key: CASSANDRA-12700
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12700
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Cassandra cluster with two nodes running C* version 
> 3.7.0 and Python Driver 3.7 using Python 2.7.11. 
> OS: Red Hat Enterprise Linux 6.x x64, 
> RAM :8GB
> DISK :210GB
> Cores: 2
> Java 1.8.0_73 JRE
>Reporter: Rajesh Radhakrishnan
>Assignee: Jeff Jirsa
> Fix For: 3.x
>
>
> In our C* cluster we are using the latest Cassandra 3.7.0 (datastax-ddc.3.70) 
> with Python driver 3.7. Trying to insert 2 million row or more data into the 
> database, but sometimes we are getting "Null pointer Exception". 
> We are using Python 2.7.11 and Java 1.8.0_73 in the Cassandra nodes and in 
> the client its Python 2.7.12.
> {code:title=cassandra server log}
> ERROR [SharedPool-Worker-6] 2016-09-23 09:42:55,002 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0xc208da86, 
> L:/IP1.IP2.IP3.IP4:9042 - R:/IP5.IP6.IP7.IP8:58418]
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:33)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:24)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:113) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.cql3.UntypedResultSet$Row.getBoolean(UntypedResultSet.java:273)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:85)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:81)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.getRoleFromTable(CassandraRoleManager.java:503)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:485)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.canLogin(CassandraRoleManager.java:298)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at org.apache.cassandra.service.ClientState.login(ClientState.java:227) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:79)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:283)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_73]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_73]
> ERROR 

[jira] [Commented] (CASSANDRA-12700) During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes Connection get lost, because of Server NullPointerException

2016-09-28 Thread Rajesh Radhakrishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528845#comment-15528845
 ] 

Rajesh Radhakrishnan commented on CASSANDRA-12700:
--

Thank you Jeff!
I created the user 'cassandra_test' using the following CQL:

CREATE USER cassandra_user WITH PASSWORD '' SUPERUSER;


[jira] [Commented] (CASSANDRA-12397) Altering a column's type breaks commitlog replay

2016-09-28 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528843#comment-15528843
 ] 

Stefania commented on CASSANDRA-12397:
--

The problem is that {{PartitionUpdateSerializer}} does not serialize the column 
types because it calls 
{{UnfilteredRowIteratorSerializer.serializer.serialize(iter, null, out, 
version, update.rowCount())}}, which calls 
{{SerializationHeader.serializer.serializeForMessaging(header, selection, out, 
hasStatic)}}, which only serializes the column names. 

CASSANDRA-12461 fixes the exception that I've reproduced above because it 
recycles commit log segments in the shutdown hook, although there is still one 
CL file left. However, this wouldn't cover the case of a crash just after a 
column is modified, in which case we would not be able to replay the mutation 
in the CL after startup. When a table is modified, a flush is scheduled in 
{{cfs.reload()}}, so there is a short window where the process may crash and 
then be unable to restart. I'm not sure what we can do about this other than 
changing the CL format to include column types, as we do for sstables.
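
Below is a minimal, self-contained sketch (plain Java I/O only, not Cassandra 
code; the class name and encoding details are illustrative assumptions) of why 
replay breaks once a fixed-length column becomes variable-length while the CL 
header records only column names: the old bytes are deserialized with the 
current type, so the reader consumes the wrong number of bytes and fails, much 
like the {{EOFException}} in the description below.

{code:java}
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Illustrative analogy only: a value written with a fixed-length encoding is
// later read back assuming a length-prefixed (variable-length) encoding, so
// the reader runs off the end of the data, as commit log replay does after an
// incompatible ALTER.
public class TypeMismatchReplaySketch
{
    public static void main(String[] args) throws IOException
    {
        // "Commit log" written while the column was a fixed-length int.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeInt(42); // 4 bytes, no length prefix
        out.close();

        // Replay with the new schema: a variable-length value is expected to
        // carry a length prefix, so the stored 42 is misread as a length.
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(buf.toByteArray()));
        int length = in.readInt();      // reads 42 and treats it as a length
        byte[] value = new byte[length];
        in.readFully(value);            // only 0 of 42 bytes left -> java.io.EOFException
    }
}
{code}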



> Altering a column's type breaks commitlog replay
> 
>
> Key: CASSANDRA-12397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12397
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Carl Yeksigian
>Assignee: Stefania
>
> When switching from a fixed-length column to a variable-length column, 
> replaying the commitlog on restart will have the same issue as 
> CASSANDRA-11820. It seems to be related to the schema being flushed and used 
> on restart while the commitlogs were written in the old format.
> {noformat}
> org.apache.cassandra.db.commitlog.CommitLogReadHandler$CommitLogReadException:
>  Unexpected error deserializing mutation; saved to 
> /tmp/mutation4816372620457789996dat.  This may be caused by replaying a 
> mutation against a table with the same name but incompatible schema.  
> Exception follows: java.io.IOError: java.io.EOFException: EOF after 259 bytes 
> out of 3336
> at 
> org.apache.cassandra.db.commitlog.CommitLogReader.readMutation(CommitLogReader.java:409)
>  [main/:na]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReader.readSection(CommitLogReader.java:342)
>  [main/:na]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReader.readCommitLogSegment(CommitLogReader.java:201)
>  [main/:na]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReader.readAllFiles(CommitLogReader.java:84)
>  [main/:na]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.replayFiles(CommitLogReplayer.java:139)
>  [main/:na]
> at 
> org.apache.cassandra.db.commitlog.CommitLog.recoverFiles(CommitLog.java:177) 
> [main/:na]
> at 
> org.apache.cassandra.db.commitlog.CommitLog.recoverSegmentsOnDisk(CommitLog.java:158)
>  [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:316) 
> [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:591)
>  [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:720) 
> [main/:na]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12700) During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes Connection get lost, because of Server NullPointerException

2016-09-28 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528829#comment-15528829
 ] 

Sam Tunnicliffe commented on CASSANDRA-12700:
-

bq. do you recall what mechanism you used to create these users/roles?

This is definitely the most important question here, as on the "normal" path 
using {{CREATE/ALTER ROLE}} it shouldn't be possible to construct an insert or 
update mutation in which any field is unset (including data migration for users 
created on pre-2.2 clusters). My concern is that if this isn't due to a flaw in 
the design/impl of {{CassandraRoleManager}} (though it most certainly could be) 
then it might be indicative of a deeper issue which may affect reads from other 
tables. So I'd say it was worth trying to figure out, or even repro, the root 
cause. 

Assuming for now that a given role was created or modified by directly 
updating the roles table (or if it was in some other way corrupted for that 
matter), I don't agree that silently "correcting" missing values is the right 
thing to do. In that scenario, I'd argue that we should loudly inform the 
user/operator that the system metadata is in a screwed up state by logging a 
warning and throwing an appropriate exception (i.e. not an NPE). 
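
For illustration only (this is not the actual {{CassandraRoleManager}} change; 
the {{Role}} holder and the validation helper below are hypothetical, though 
the field names mirror {{system_auth.roles}}), the "warn and throw" approach 
could look roughly like this:

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.logging.Logger;

public class RoleRowValidationSketch
{
    private static final Logger logger = Logger.getLogger(RoleRowValidationSketch.class.getName());

    // Hypothetical holder for the fields CassandraRoleManager reads.
    static final class Role
    {
        final String name;
        final boolean canLogin;
        final boolean isSuperuser;

        Role(String name, boolean canLogin, boolean isSuperuser)
        {
            this.name = name;
            this.canLogin = canLogin;
            this.isSuperuser = isSuperuser;
        }
    }

    static Role fromRow(Map<String, Object> row)
    {
        Object name = row.get("role");
        Object canLogin = row.get("can_login");
        Object isSuperuser = row.get("is_superuser");

        // Fail loudly: a missing field means the system metadata is in a bad
        // state (e.g. the roles table was updated directly), so warn and throw
        // a descriptive exception instead of letting an NPE escape.
        if (name == null || canLogin == null || isSuperuser == null)
        {
            logger.warning("Invalid entry in system_auth.roles for " + row
                           + ": a required field is unset; the table may have been modified directly");
            throw new IllegalStateException("Corrupt role entry: required fields are unset");
        }
        return new Role((String) name, (Boolean) canLogin, (Boolean) isSuperuser);
    }

    public static void main(String[] args)
    {
        // Example: a hand-inserted row that is missing can_login.
        Map<String, Object> row = new HashMap<>();
        row.put("role", "alice");
        row.put("is_superuser", Boolean.FALSE);
        fromRow(row); // throws IllegalStateException with a clear message
    }
}
{code}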


> During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes 
> Connection get lost, because of Server NullPointerException
> --
>
> Key: CASSANDRA-12700
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12700
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Cassandra cluster with two nodes running C* version 
> 3.7.0 and Python Driver 3.7 using Python 2.7.11. 
> OS: Red Hat Enterprise Linux 6.x x64, 
> RAM :8GB
> DISK :210GB
> Cores: 2
> Java 1.8.0_73 JRE
>Reporter: Rajesh Radhakrishnan
>Assignee: Jeff Jirsa
> Fix For: 3.x
>
>
> In our C* cluster we are using the latest Cassandra 3.7.0 (datastax-ddc.3.70) 
> with Python driver 3.7. When trying to insert 2 million rows or more into the 
> database, we sometimes get a "Null pointer Exception". 
> We are using Python 2.7.11 and Java 1.8.0_73 on the Cassandra nodes, and on 
> the client it's Python 2.7.12.
> {code:title=cassandra server log}
> ERROR [SharedPool-Worker-6] 2016-09-23 09:42:55,002 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0xc208da86, 
> L:/IP1.IP2.IP3.IP4:9042 - R:/IP5.IP6.IP7.IP8:58418]
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:33)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:24)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:113) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.cql3.UntypedResultSet$Row.getBoolean(UntypedResultSet.java:273)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:85)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:81)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.getRoleFromTable(CassandraRoleManager.java:503)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:485)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.canLogin(CassandraRoleManager.java:298)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at org.apache.cassandra.service.ClientState.login(ClientState.java:227) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:79)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
>   

[jira] [Updated] (CASSANDRA-12700) During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes Connection get lost, because of Server NullPointerException

2016-09-28 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-12700:

Reviewer: Sam Tunnicliffe

> During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes 
> Connection get lost, because of Server NullPointerException
> --
>
> Key: CASSANDRA-12700
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12700
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Cassandra cluster with two nodes running C* version 
> 3.7.0 and Python Driver 3.7 using Python 2.7.11. 
> OS: Red Hat Enterprise Linux 6.x x64, 
> RAM :8GB
> DISK :210GB
> Cores: 2
> Java 1.8.0_73 JRE
>Reporter: Rajesh Radhakrishnan
>Assignee: Jeff Jirsa
> Fix For: 3.x
>
>
> In our C* cluster we are using the latest Cassandra 3.7.0 (datastax-ddc.3.70) 
> with Python driver 3.7. When trying to insert 2 million rows or more into the 
> database, we sometimes get a "Null pointer Exception". 
> We are using Python 2.7.11 and Java 1.8.0_73 on the Cassandra nodes, and on 
> the client it's Python 2.7.12.
> {code:title=cassandra server log}
> ERROR [SharedPool-Worker-6] 2016-09-23 09:42:55,002 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0xc208da86, 
> L:/IP1.IP2.IP3.IP4:9042 - R:/IP5.IP6.IP7.IP8:58418]
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:33)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:24)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:113) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.cql3.UntypedResultSet$Row.getBoolean(UntypedResultSet.java:273)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:85)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:81)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.getRoleFromTable(CassandraRoleManager.java:503)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:485)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.canLogin(CassandraRoleManager.java:298)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at org.apache.cassandra.service.ClientState.login(ClientState.java:227) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:79)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:283)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_73]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_73]
> ERROR [SharedPool-Worker-1] 2016-09-23 09:42:56,238 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0x8e2eae00, 
> L:/IP1.IP2.IP3.IP4:9042 - R:/IP5.IP6.IP7.IP8:58421]
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:33)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:24)
>  

[jira] [Updated] (CASSANDRA-12719) typo in cql examples

2016-09-28 Thread suisuihan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

suisuihan updated CASSANDRA-12719:
--

Hi, in http://cassandra.apache.org/doc/latest/cql/ddl.html#partition-key, in the 
data definition example under "The primary key", the definition of table t's 
primary key is wrong.
{code}
CREATE TABLE t (
a int,
b int,
c int,
PRIMARY KEY (a, c, d)
);

SELECT * FROM t;
   a | b | c
  ---+---+---
   0 | 0 | 4 // row 1
   0 | 1 | 9 // row 2
   0 | 2 | 2 // row 3
   0 | 3 | 3 // row 4
{code}
I think the primary key should be (a, b, c) from context.
{code}
PRIMARY KEY (a, b, c)
{code}

> typo in cql examples
> 
>
> Key: CASSANDRA-12719
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12719
> Project: Cassandra
>  Issue Type: Bug
>  Components: Documentation and Website
>Reporter: suisuihan
>Priority: Trivial
>
> Data Definition example uses wrong definition



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12719) typo in cql examples

2016-09-28 Thread suisuihan (JIRA)
suisuihan created CASSANDRA-12719:
-

 Summary: typo in cql examples
 Key: CASSANDRA-12719
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12719
 Project: Cassandra
  Issue Type: Bug
  Components: Documentation and Website
Reporter: suisuihan
Priority: Trivial


Data Definition example uses wrong definition



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

