[jira] [Commented] (CASSANDRA-13068) Fully expired sstable not dropped when running out of disk space

2017-06-20 Thread Krishna Dattu Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16056710#comment-16056710
 ] 

Krishna Dattu Koneru commented on CASSANDRA-13068:
--

Thanks [~krummas] for review and running tests!

Here are latest branches with the patch.

* [3.0|https://github.com/apache/cassandra/compare/cassandra-3.0...krishna-koneru:cassandra-3.0-13068]
* [3.11|https://github.com/apache/cassandra/compare/cassandra-3.11...krishna-koneru:cassandra-3.11-13068]
* [trunk|https://github.com/apache/cassandra/compare/trunk...krishna-koneru:cassandra-trunk-13068]

> Fully expired sstable not dropped when running out of disk space
> 
>
> Key: CASSANDRA-13068
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13068
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Marcus Eriksson
>Assignee: Lerh Chuan Low
>  Labels: lhf
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> If a fully expired sstable is larger than the remaining disk space we won't 
> run the compaction that can drop the sstable (i.e., the disk space check 
> should not include the fully expired sstables).
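For illustration, a minimal sketch of the idea in the description above, with hypothetical stand-in types rather than the real compaction code: the free-space check counts only the bytes that will actually be rewritten, so a fully expired sstable no longer blocks the compaction that would drop it.

```java
import java.util.List;

// Illustrative sketch only (simplified stand-in types, not the actual Cassandra
// patch): bytes belonging to fully expired sstables should not count toward the
// disk-space requirement, because those sstables are dropped, not rewritten.
public class ExpiredAwareSpaceCheck {
    record SSTable(long onDiskBytes, boolean fullyExpired) {}

    // estimated bytes the compaction will actually need to write
    static long writeSizeExcludingExpired(List<SSTable> candidates) {
        return candidates.stream()
                         .filter(s -> !s.fullyExpired())
                         .mapToLong(SSTable::onDiskBytes)
                         .sum();
    }

    static boolean hasSpaceFor(List<SSTable> candidates, long freeBytes) {
        return writeSizeExcludingExpired(candidates) <= freeBytes;
    }

    public static void main(String[] args) {
        // a 100 GiB fully expired sstable no longer blocks compaction on a disk
        // with only 10 GiB free
        List<SSTable> tables = List.of(new SSTable(100L << 30, true),
                                       new SSTable(1L << 30, false));
        System.out.println(hasSpaceFor(tables, 10L << 30)); // true
    }
}
```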



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-13626) Check hashed password matches expected bcrypt hash format before checking

2017-06-20 Thread Jeff Jirsa (JIRA)
Jeff Jirsa created CASSANDRA-13626:
--

 Summary: Check hashed password matches expected bcrypt hash format 
before checking
 Key: CASSANDRA-13626
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13626
 Project: Cassandra
  Issue Type: Bug
  Components: Auth
Reporter: Jeff Jirsa
Assignee: Jeff Jirsa
Priority: Minor
 Fix For: 3.0.x, 3.11.x, 4.x


We use {{Bcrypt.checkpw}} in the auth subsystem, but do a reasonably poor job 
of guaranteeing that the hashed password we send to it is really a hashed 
password, and {{checkpw}} does an even worse job of failing nicely. We should 
at least sanity-check that the hash complies with the expected format before 
validating it.
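For illustration, a minimal sketch of such a format check (hypothetical class and method names, not the actual patch): a stored credential is matched against the standard bcrypt shape, a {{$2$}}/{{$2a$}}/{{$2b$}}/{{$2y$}} prefix, a two-digit cost, and 53 characters of bcrypt base64, before it ever reaches {{Bcrypt.checkpw}}.

```java
import java.util.regex.Pattern;

// Hypothetical sketch (not the actual Cassandra patch): validate that a stored
// credential looks like a bcrypt hash before handing it to Bcrypt.checkpw, so a
// corrupted or legacy value fails cleanly instead of throwing deep inside checkpw.
public class BcryptFormatCheck {
    // $2$, $2a$, $2b$ or $2y$ prefix, two-digit cost, then 53 chars of bcrypt base64
    private static final Pattern BCRYPT_PATTERN =
        Pattern.compile("^\\$2[aby]?\\$\\d\\d\\$[./A-Za-z0-9]{53}$");

    public static boolean looksLikeBcrypt(String hash) {
        return hash != null && BCRYPT_PATTERN.matcher(hash).matches();
    }

    public static void main(String[] args) {
        // a well-formed bcrypt hash passes; arbitrary garbage is rejected up front
        System.out.println(looksLikeBcrypt(
            "$2a$10$N9qo8uLOickgx2ZMRZoMyeIjZAgcfl7p92ldGxad68LJZdL17lhWy")); // true
        System.out.println(looksLikeBcrypt("plaintext-or-corrupted-value"));  // false
    }
}
```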







[jira] [Commented] (CASSANDRA-13625) Remove unused cassandra.yaml setting, max_value_size_in_mb, from 2.2.9

2017-06-20 Thread Joaquin Casares (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16056665#comment-16056665
 ] 

Joaquin Casares commented on CASSANDRA-13625:
-

Here's a PR for the single-file fix: 
https://github.com/apache/cassandra/pull/124.

> Remove unused cassandra.yaml setting, max_value_size_in_mb, from 2.2.9
> --
>
> Key: CASSANDRA-13625
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13625
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joaquin Casares
>  Labels: lhf
> Fix For: 2.2.10
>
>
> {{max_value_size_in_mb}} is currently in the 2.2.9 cassandra.yaml, but the 
> config is not referenced anywhere in the 2.2.9 codebase:
> https://github.com/apache/cassandra/blob/cassandra-2.2.9/conf/cassandra.yaml#L888-L891
> CASSANDRA-9530, which introduced {{max_value_size_in_mb}}, has its Fix 
> Version/s marked as 3.0.7, 3.7, and 3.8.
> Let's remove {{max_value_size_in_mb}} from the cassandra.yaml.
> {noformat}
> ~/repos/cassandra[(HEAD detached at cassandra-2.2.9)] (joaquin)$ grep -r 
> max_value_size_in_mb .
> conf/cassandra.yaml:# max_value_size_in_mb: 256
> {noformat}






[jira] [Created] (CASSANDRA-13625) Remove unused cassandra.yaml setting, max_value_size_in_mb, from 2.2.9

2017-06-20 Thread Joaquin Casares (JIRA)
Joaquin Casares created CASSANDRA-13625:
---

 Summary: Remove unused cassandra.yaml setting, 
max_value_size_in_mb, from 2.2.9
 Key: CASSANDRA-13625
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13625
 Project: Cassandra
  Issue Type: Bug
Reporter: Joaquin Casares
 Fix For: 2.2.10


{{max_value_size_in_mb}} is currently in the 2.2.9 cassandra.yaml, but the 
config is not referenced anywhere in the 2.2.9 codebase:

https://github.com/apache/cassandra/blob/cassandra-2.2.9/conf/cassandra.yaml#L888-L891

CASSANDRA-9530, which introduced {{max_value_size_in_mb}}, has its Fix 
Version/s marked as 3.0.7, 3.7, and 3.8.

Let's remove {{max_value_size_in_mb}} from the cassandra.yaml.

{noformat}
~/repos/cassandra[(HEAD detached at cassandra-2.2.9)] (joaquin)$ grep -r 
max_value_size_in_mb .
conf/cassandra.yaml:# max_value_size_in_mb: 256
{noformat}






[jira] [Commented] (CASSANDRA-13557) allow different NUMACTL_ARGS to be passed in

2017-06-20 Thread Matt Byrd (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16056599#comment-16056599
 ] 

Matt Byrd commented on CASSANDRA-13557:
---

|3.0|3.11|Trunk|
|[branch|https://github.com/Jollyplum/cassandra/tree/13557]|[branch|https://github.com/Jollyplum/cassandra/tree/13557-3.11]|[branch|https://github.com/Jollyplum/cassandra/tree/13557]|
|[testall|https://circleci.com/gh/Jollyplum/cassandra/19#tests/containers/3]|[testall|https://circleci.com/gh/Jollyplum/cassandra/20]|[testall|https://circleci.com/gh/Jollyplum/cassandra/6]|

> allow different NUMACTL_ARGS to be passed in
> 
>
> Key: CASSANDRA-13557
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13557
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Matt Byrd
>Assignee: Matt Byrd
>Priority: Minor
> Fix For: 4.x
>
>
> Currently in bin/cassandra the following is hardcoded:
> NUMACTL_ARGS="--interleave=all"
> Ideally users of cassandra/bin could pass in a different set of NUMACTL_ARGS 
> if they wanted to, say, bind the process to a socket for cpu/memory reasons, 
> rather than having to comment out or modify this line in the deployed 
> cassandra/bin, e.g. as described in:
> https://tobert.github.io/pages/als-cassandra-21-tuning-guide.html
> This could be done by keeping the default of "--interleave=all" but picking 
> up any value that has already been set for the variable NUMACTL_ARGS.
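The description above amounts to a one-line shell default. A minimal sketch, assuming POSIX sh semantics (this is not the committed change to bin/cassandra):

```shell
#!/bin/sh
# Sketch of the proposal: default NUMACTL_ARGS to "--interleave=all", but let a
# value already exported in the environment win. Note ":-" also substitutes the
# default when the variable is set but empty.
NUMACTL_ARGS="${NUMACTL_ARGS:---interleave=all}"
echo "$NUMACTL_ARGS"
```

With that in place, something like NUMACTL_ARGS="--cpunodebind=0 --membind=0" bin/cassandra would bind the process to socket 0 without editing the deployed script.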






[jira] [Commented] (CASSANDRA-12961) LCS needlessly checks for L0 STCS candidates multiple times

2017-06-20 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16056563#comment-16056563
 ] 

Jeff Jirsa commented on CASSANDRA-12961:


[~calonso] - if you're still interested, I think it's simpler than that.

In the above code block, look at:

{code}
CompactionCandidate l0Compaction = 
getSTCSInL0CompactionCandidate();
{code}

If you move that before the loop, we can still keep the check where it is:

{code}
if (l0Compaction != null)
return l0Compaction;
{code}

At each level, without having to actually call all the way down to the 
(relatively expensive) 
{{SizeTieredCompactionStrategy.createSSTableAndLengthPairs}} / 
{{SizeTieredCompactionStrategy.getBuckets}}
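The suggestion above can be modeled in a few lines (the names here are simplified stand-ins for the LeveledManifest internals, not the actual patch): the expensive L0 check runs once before the loop, while the cheap early-return check stays inside it.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Minimal model of the suggested restructuring: compute the (relatively
// expensive) L0-STCS candidate once, before the per-level loop, and only
// consult the cached result on each iteration.
public class HoistedL0Check {
    static final AtomicInteger stcsCalls = new AtomicInteger();

    // stands in for getSTCSInL0CompactionCandidate()
    static String getSTCSInL0CompactionCandidate() {
        stcsCalls.incrementAndGet();
        return null; // pretend L0 is not falling behind
    }

    static String pickCompaction(double[] scoresByLevel) {
        // hoisted: evaluated once instead of once per level with score > 1.001
        String l0Compaction = getSTCSInL0CompactionCandidate();
        for (int i = scoresByLevel.length - 1; i > 0; i--) {
            if (scoresByLevel[i] > 1.001) {
                if (l0Compaction != null)
                    return l0Compaction;
                // ... otherwise pick candidates for level i ...
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // three levels exceed the score threshold, yet the expensive check ran once
        pickCompaction(new double[] {0, 1.5, 2.0, 3.0});
        System.out.println(stcsCalls.get()); // 1
    }
}
```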



> LCS needlessly checks for L0 STCS candidates multiple times
> ---
>
> Key: CASSANDRA-12961
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12961
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Jeff Jirsa
>Priority: Trivial
>  Labels: lhf
>
> It's very likely that the check for L0 STCS candidates (if L0 is falling 
> behind) can be moved outside of the loop, or at very least made so that it's 
> not called on each loop iteration:
> {code}
> for (int i = generations.length - 1; i > 0; i--)
> {
> List sstables = getLevel(i);
> if (sstables.isEmpty())
> continue; // mostly this just avoids polluting the debug log 
> with zero scores
> // we want to calculate score excluding compacting ones
> Set sstablesInLevel = Sets.newHashSet(sstables);
> Set remaining = Sets.difference(sstablesInLevel, 
> cfs.getTracker().getCompacting());
> double score = (double) SSTableReader.getTotalBytes(remaining) / 
> (double)maxBytesForLevel(i, maxSSTableSizeInBytes);
> logger.trace("Compaction score for level {} is {}", i, score);
> if (score > 1.001)
> {
> // before proceeding with a higher level, let's see if L0 is 
> far enough behind to warrant STCS
> CompactionCandidate l0Compaction = 
> getSTCSInL0CompactionCandidate();
> if (l0Compaction != null)
> return l0Compaction;
> ..
> {code}






[jira] [Comment Edited] (CASSANDRA-13603) Change repair midpoint logging from CASSANDRA-13052

2017-06-20 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16056539#comment-16056539
 ] 

Jeff Jirsa edited comment on CASSANDRA-13603 at 6/20/17 9:57 PM:
-

{quote}
We already check node instanceof Leaf before recursively calling 
differenceHelper() with a sub-range. I'd suggest to do the same in 
difference(). It just doesn't make sense to call differenceHelper() with two 
leaf nodes, doesn't it?
{quote}

I think you're right - pushed a new commit to each branch that checks to see if 
either node is {{instanceof Leaf}} and avoids that tree traversal. Also marked 
{{MerkleTree.differenceHelper}} as {{@VisibleForTesting}}

I think this makes the midpoint check below irrelevant, but may protect us from 
another bug later? Considering switching it to an assert rather than the block 
that exists now?

{code}
if (midpoint.equals(active.left) || midpoint.equals(active.right))
{code} 






was (Author: jjirsa):
{quote}
We already check node instanceof Leaf before recursively calling 
differenceHelper() with a sub-range. I'd suggest to do the same in 
difference(). It just doesn't make sense to call differenceHelper() with two 
leaf nodes, doesn't it?
{quote}

I think you're right - pushed a new commit to each branch that checks to see if 
either node is {{instanceof Leaf}} and avoids that tree traversal. Also marked 
{{MerkleTree.differenceHelper}} as {{@VisibleForTesting}}

I think this makes the midpoint check below irrelevant, but may protect us from 
another bug later? On the fence with removing it - seems like we shouldn't hit 
it, but maybe it saves us in a future release?  

{code}
if (midpoint.equals(active.left) || midpoint.equals(active.right))
{code} 





> Change repair midpoint logging from  CASSANDRA-13052
> 
>
> Key: CASSANDRA-13603
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13603
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Jeff Jirsa
>Assignee: Jeff Jirsa
>Priority: Trivial
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> In CASSANDRA-13052 , we changed the way we handle repairs on small ranges to 
> make them more sane in general, but {{MerkleTree.differenceHelper}} now 
> erroneously logs at error.






[jira] [Commented] (CASSANDRA-13603) Change repair midpoint logging from CASSANDRA-13052

2017-06-20 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16056539#comment-16056539
 ] 

Jeff Jirsa commented on CASSANDRA-13603:


{quote}
We already check node instanceof Leaf before recursively calling 
differenceHelper() with a sub-range. I'd suggest to do the same in 
difference(). It just doesn't make sense to call differenceHelper() with two 
leaf nodes, doesn't it?
{quote}

I think you're right - pushed a new commit to each branch that checks to see if 
either node is {{instanceof Leaf}} and avoids that tree traversal. Also marked 
{{MerkleTree.differenceHelper}} as {{@VisibleForTesting}}

I think this makes the midpoint check below irrelevant, but may protect us from 
another bug later? On the fence with removing it - seems like we shouldn't hit 
it, but maybe it saves us in a future release?  

{code}
if (midpoint.equals(active.left) || midpoint.equals(active.right))
{code} 
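A hedged sketch of the assert alternative discussed above ({{Token}} and {{split}} are hypothetical stand-ins, not the real MerkleTree types): the defensive if-block becomes an assertion that fails loudly under -ea rather than logging and falling through.

```java
// Hypothetical sketch only (not the actual MerkleTree code): replace the
// defensive midpoint block with an assert, so a midpoint collision on a
// too-small range fails fast in tests instead of being silently handled.
public class MidpointAssert {
    record Token(long value) {}

    static Token split(Token left, Token right) {
        // midpoint computation stand-in
        Token midpoint = new Token(Math.addExact(left.value(), right.value()) / 2);
        assert !midpoint.equals(left) && !midpoint.equals(right)
             : "midpoint collision on range (" + left + ", " + right + ")";
        return midpoint;
    }

    public static void main(String[] args) {
        System.out.println(split(new Token(0), new Token(100)).value());
    }
}
```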





> Change repair midpoint logging from  CASSANDRA-13052
> 
>
> Key: CASSANDRA-13603
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13603
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Jeff Jirsa
>Assignee: Jeff Jirsa
>Priority: Trivial
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> In CASSANDRA-13052 , we changed the way we handle repairs on small ranges to 
> make them more sane in general, but {{MerkleTree.differenceHelper}} now 
> erroneously logs at error.






[jira] [Updated] (CASSANDRA-13618) CassandraRoleManager setup task improvement

2017-06-20 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-13618:
---
Status: Patch Available  (was: In Progress)

> CassandraRoleManager setup task improvement
> ---
>
> Key: CASSANDRA-13618
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13618
> Project: Cassandra
>  Issue Type: Bug
>  Components: Auth
>Reporter: Jeff Jirsa
>Assignee: Jeff Jirsa
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> {{CassandraRoleManager}} blocks some functionality during setup, using a 
> delay added in CASSANDRA-9761. Unfortunately, this setup is scheduled for 
> 10s after startup and may not be necessary, meaning that immediately after 
> startup some auth-related queries may not behave as intended. We can skip 
> this delay without any additional risk.






[jira] [Commented] (CASSANDRA-13624) upgradesstables crashes with OOM when upgrading sstables with lots of range tombstones

2017-06-20 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16056483#comment-16056483
 ] 

Jeff Jirsa commented on CASSANDRA-13624:


Do you not see OOMs during normal operation? Normal compaction works as 
intended on 2.0 and 2.1? 


> upgradesstables crashes with OOM when upgrading sstables with lots of range 
> tombstones
> --
>
> Key: CASSANDRA-13624
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13624
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robin Mahony
>
> CASSANDRA-7953 can lead to range tombstones not properly being compacted. 
> When trying to upgrade from Cassandra 2.0.X to 2.1.X, running upgradesstables 
> (either via nodetool OR the offline version) does not work, as it crashes 
> with OOM. This essentially means that if you have been running Cassandra 
> 2.0.X for a period of time in which lots of range tombstones were generated, 
> you will be unable to upgrade to Cassandra 2.1.X following the normal 
> procedures. Hence the upgrade from 2.0.X to 2.1.X is essentially broken if 
> you hit CASSANDRA-7953.
> I hit this while trying to upgrade to Cassandra 2.1.17.
> Offline Version:
> # sstableupgrade storagegrid s3_usage_delta 
> Found 15 sstables that need upgrading. 
> Upgrading 
> SSTableReader(path='/var/local/cassandra/data/0/storagegrid/s3_usage_delta/storagegrid-s3_usage_delta-jb-20230-Data.db')
>  
> ERROR 22:38:24,626 LEAK DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@6821300e) to class 
> org.apache.cassandra.io.sstable.SSTableReader$InstanceTidier@244363601:/var/local/cassandra/data/0/storagegrid/s3_usage_delta/storagegrid-s3_usage_delta-ka-79531
>  was not released before the reference was garbage collected 
> ERROR 22:38:24,631 LEAK DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@2891457f) to class 
> org.apache.cassandra.io.sstable.SSTableReader$InstanceTidier@1230033823:/var/local/cassandra/data/0/storagegrid/s3_usage_delta/storagegrid-s3_usage_delta-ka-79532
>  was not released before the reference was garbage collected 
> Exception in thread "ScheduledTasks:1" Exception in thread 
> "metrics-meter-tick-thread-2" java.lang.OutOfMemoryError: Java heap space 
> at java.lang.Class.getName0(Native Method) 
> at java.lang.Class.getName(Class.java:642) 
> at java.lang.Throwable.toString(Throwable.java:479) 
> at java.lang.Throwable.<init>(Throwable.java:311) 
> at java.lang.Exception.<init>(Exception.java:102) 
> at java.util.concurrent.ExecutionException.<init>(ExecutionException.java:90) 
> at java.util.concurrent.FutureTask.report(FutureTask.java:122) 
> at java.util.concurrent.FutureTask.get(FutureTask.java:192) 
> at 
> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.extractThrowable(DebuggableThreadPoolExecutor.java:246)
> at 
> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.logExceptionsAfterExecute(DebuggableThreadPoolExecutor.java:210)
> at 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor.afterExecute(DebuggableScheduledThreadPoolExecutor.java:89)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1150)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  
> at java.lang.Thread.run(Thread.java:745) 
> java.lang.OutOfMemoryError: Java heap space 
> java.lang.OutOfMemoryError: Java heap space 
> ERROR 22:41:21,080 Error in ThreadPoolExecutor 
> java.lang.OutOfMemoryError: Java heap space 
> ERROR 22:41:21,080 JVM state determined to be unstable. Exiting forcefully 
> due to: 
> java.lang.OutOfMemoryError: Java heap space 
> root@DC1-SN-10-224-6-066:/var/local/cassandra/data/0 #






[jira] [Issue Comment Deleted] (CASSANDRA-13624) upgradesstables crashes with OOM when upgrading sstables with lots of range tombstones

2017-06-20 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-13624:
---
Comment: was deleted

(was: Do you get the same OOM with the online version as you do with the 
offline?)

> upgradesstables crashes with OOM when upgrading sstables with lots of range 
> tombstones
> --
>
> Key: CASSANDRA-13624
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13624
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robin Mahony
>
> CASSANDRA-7953 can lead to range tombstones not properly being compacted. 
> When trying to upgrade from Cassandra 2.0.X to 2.1.X, running upgradesstables 
> (either via nodetool OR the offline version) does not work, as it crashes 
> with OOM. This essentially means that if you have been running Cassandra 
> 2.0.X for a period of time in which lots of range tombstones were generated, 
> you will be unable to upgrade to Cassandra 2.1.X following the normal 
> procedures. Hence the upgrade from 2.0.X to 2.1.X is essentially broken if 
> you hit CASSANDRA-7953.
> I hit this while trying to upgrade to Cassandra 2.1.17.
> Offline Version:
> # sstableupgrade storagegrid s3_usage_delta 
> Found 15 sstables that need upgrading. 
> Upgrading 
> SSTableReader(path='/var/local/cassandra/data/0/storagegrid/s3_usage_delta/storagegrid-s3_usage_delta-jb-20230-Data.db')
>  
> ERROR 22:38:24,626 LEAK DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@6821300e) to class 
> org.apache.cassandra.io.sstable.SSTableReader$InstanceTidier@244363601:/var/local/cassandra/data/0/storagegrid/s3_usage_delta/storagegrid-s3_usage_delta-ka-79531
>  was not released before the reference was garbage collected 
> ERROR 22:38:24,631 LEAK DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@2891457f) to class 
> org.apache.cassandra.io.sstable.SSTableReader$InstanceTidier@1230033823:/var/local/cassandra/data/0/storagegrid/s3_usage_delta/storagegrid-s3_usage_delta-ka-79532
>  was not released before the reference was garbage collected 
> Exception in thread "ScheduledTasks:1" Exception in thread 
> "metrics-meter-tick-thread-2" java.lang.OutOfMemoryError: Java heap space 
> at java.lang.Class.getName0(Native Method) 
> at java.lang.Class.getName(Class.java:642) 
> at java.lang.Throwable.toString(Throwable.java:479) 
> at java.lang.Throwable.<init>(Throwable.java:311) 
> at java.lang.Exception.<init>(Exception.java:102) 
> at java.util.concurrent.ExecutionException.<init>(ExecutionException.java:90) 
> at java.util.concurrent.FutureTask.report(FutureTask.java:122) 
> at java.util.concurrent.FutureTask.get(FutureTask.java:192) 
> at 
> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.extractThrowable(DebuggableThreadPoolExecutor.java:246)
> at 
> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.logExceptionsAfterExecute(DebuggableThreadPoolExecutor.java:210)
> at 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor.afterExecute(DebuggableScheduledThreadPoolExecutor.java:89)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1150)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  
> at java.lang.Thread.run(Thread.java:745) 
> java.lang.OutOfMemoryError: Java heap space 
> java.lang.OutOfMemoryError: Java heap space 
> ERROR 22:41:21,080 Error in ThreadPoolExecutor 
> java.lang.OutOfMemoryError: Java heap space 
> ERROR 22:41:21,080 JVM state determined to be unstable. Exiting forcefully 
> due to: 
> java.lang.OutOfMemoryError: Java heap space 
> root@DC1-SN-10-224-6-066:/var/local/cassandra/data/0 #






[jira] [Commented] (CASSANDRA-13624) upgradesstables crashes with OOM when upgrading sstables with lots of range tombstones

2017-06-20 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16056479#comment-16056479
 ] 

Jeff Jirsa commented on CASSANDRA-13624:


Do you get the same OOM with the online version as you do with the offline?

> upgradesstables crashes with OOM when upgrading sstables with lots of range 
> tombstones
> --
>
> Key: CASSANDRA-13624
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13624
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robin Mahony
>
> CASSANDRA-7953 can lead to range tombstones not properly being compacted. 
> When trying to upgrade from Cassandra 2.0.X to 2.1.X, running upgradesstables 
> (either via nodetool OR the offline version) does not work, as it crashes 
> with OOM. This essentially means that if you have been running Cassandra 
> 2.0.X for a period of time in which lots of range tombstones were generated, 
> you will be unable to upgrade to Cassandra 2.1.X following the normal 
> procedures. Hence the upgrade from 2.0.X to 2.1.X is essentially broken if 
> you hit CASSANDRA-7953.
> I hit this while trying to upgrade to Cassandra 2.1.17.
> Offline Version:
> # sstableupgrade storagegrid s3_usage_delta 
> Found 15 sstables that need upgrading. 
> Upgrading 
> SSTableReader(path='/var/local/cassandra/data/0/storagegrid/s3_usage_delta/storagegrid-s3_usage_delta-jb-20230-Data.db')
>  
> ERROR 22:38:24,626 LEAK DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@6821300e) to class 
> org.apache.cassandra.io.sstable.SSTableReader$InstanceTidier@244363601:/var/local/cassandra/data/0/storagegrid/s3_usage_delta/storagegrid-s3_usage_delta-ka-79531
>  was not released before the reference was garbage collected 
> ERROR 22:38:24,631 LEAK DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@2891457f) to class 
> org.apache.cassandra.io.sstable.SSTableReader$InstanceTidier@1230033823:/var/local/cassandra/data/0/storagegrid/s3_usage_delta/storagegrid-s3_usage_delta-ka-79532
>  was not released before the reference was garbage collected 
> Exception in thread "ScheduledTasks:1" Exception in thread 
> "metrics-meter-tick-thread-2" java.lang.OutOfMemoryError: Java heap space 
> at java.lang.Class.getName0(Native Method) 
> at java.lang.Class.getName(Class.java:642) 
> at java.lang.Throwable.toString(Throwable.java:479) 
> at java.lang.Throwable.<init>(Throwable.java:311) 
> at java.lang.Exception.<init>(Exception.java:102) 
> at java.util.concurrent.ExecutionException.<init>(ExecutionException.java:90) 
> at java.util.concurrent.FutureTask.report(FutureTask.java:122) 
> at java.util.concurrent.FutureTask.get(FutureTask.java:192) 
> at 
> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.extractThrowable(DebuggableThreadPoolExecutor.java:246)
> at 
> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.logExceptionsAfterExecute(DebuggableThreadPoolExecutor.java:210)
> at 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor.afterExecute(DebuggableScheduledThreadPoolExecutor.java:89)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1150)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  
> at java.lang.Thread.run(Thread.java:745) 
> java.lang.OutOfMemoryError: Java heap space 
> java.lang.OutOfMemoryError: Java heap space 
> ERROR 22:41:21,080 Error in ThreadPoolExecutor 
> java.lang.OutOfMemoryError: Java heap space 
> ERROR 22:41:21,080 JVM state determined to be unstable. Exiting forcefully 
> due to: 
> java.lang.OutOfMemoryError: Java heap space 
> root@DC1-SN-10-224-6-066:/var/local/cassandra/data/0 #






[jira] [Created] (CASSANDRA-13624) upgradesstables crashes with OOM when upgrading sstables with lots of range tombstones

2017-06-20 Thread Robin Mahony (JIRA)
Robin Mahony created CASSANDRA-13624:


 Summary: upgradesstables crashes with OOM when upgrading sstables 
with lots of range tombstones
 Key: CASSANDRA-13624
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13624
 Project: Cassandra
  Issue Type: Bug
Reporter: Robin Mahony


CASSANDRA-7953 can lead to range tombstones not properly being compacted. When 
trying to upgrade from Cassandra 2.0.X to 2.1.X, running upgradesstables 
(either via nodetool OR the offline version) does not work, as it crashes with 
OOM. This essentially means that if you have been running Cassandra 2.0.X for 
a period of time in which lots of range tombstones were generated, you will be 
unable to upgrade to Cassandra 2.1.X following the normal procedures. Hence 
the upgrade from 2.0.X to 2.1.X is essentially broken if you hit CASSANDRA-7953.

I hit this while trying to upgrade to Cassandra 2.1.17.

Offline Version:

# sstableupgrade storagegrid s3_usage_delta 
Found 15 sstables that need upgrading. 
Upgrading 
SSTableReader(path='/var/local/cassandra/data/0/storagegrid/s3_usage_delta/storagegrid-s3_usage_delta-jb-20230-Data.db')
 
ERROR 22:38:24,626 LEAK DETECTED: a reference 
(org.apache.cassandra.utils.concurrent.Ref$State@6821300e) to class 
org.apache.cassandra.io.sstable.SSTableReader$InstanceTidier@244363601:/var/local/cassandra/data/0/storagegrid/s3_usage_delta/storagegrid-s3_usage_delta-ka-79531
 was not released before the reference was garbage collected 
ERROR 22:38:24,631 LEAK DETECTED: a reference 
(org.apache.cassandra.utils.concurrent.Ref$State@2891457f) to class 
org.apache.cassandra.io.sstable.SSTableReader$InstanceTidier@1230033823:/var/local/cassandra/data/0/storagegrid/s3_usage_delta/storagegrid-s3_usage_delta-ka-79532
 was not released before the reference was garbage collected 
Exception in thread "ScheduledTasks:1" Exception in thread 
"metrics-meter-tick-thread-2" java.lang.OutOfMemoryError: Java heap space 
at java.lang.Class.getName0(Native Method) 
at java.lang.Class.getName(Class.java:642) 
at java.lang.Throwable.toString(Throwable.java:479) 
at java.lang.Throwable.<init>(Throwable.java:311) 
at java.lang.Exception.<init>(Exception.java:102) 
at java.util.concurrent.ExecutionException.<init>(ExecutionException.java:90) 
at java.util.concurrent.FutureTask.report(FutureTask.java:122) 
at java.util.concurrent.FutureTask.get(FutureTask.java:192) 
at 
org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.extractThrowable(DebuggableThreadPoolExecutor.java:246)
at 
org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.logExceptionsAfterExecute(DebuggableThreadPoolExecutor.java:210)
at 
org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor.afterExecute(DebuggableScheduledThreadPoolExecutor.java:89)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1150) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745) 
java.lang.OutOfMemoryError: Java heap space 
java.lang.OutOfMemoryError: Java heap space 
ERROR 22:41:21,080 Error in ThreadPoolExecutor 
java.lang.OutOfMemoryError: Java heap space 
ERROR 22:41:21,080 JVM state determined to be unstable. Exiting forcefully due 
to: 
java.lang.OutOfMemoryError: Java heap space 
root@DC1-SN-10-224-6-066:/var/local/cassandra/data/0 #






[jira] [Resolved] (CASSANDRA-11916) Exception In Compaction Executor - java.lang.IllegalArgumentException: null

2017-06-20 Thread Bhuvan Rawal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bhuvan Rawal resolved CASSANDRA-11916.
--
Resolution: Won't Fix

> Exception In Compaction Executor - java.lang.IllegalArgumentException: null
> ---
>
> Key: CASSANDRA-11916
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11916
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Bhuvan Rawal
>Assignee: Benjamin Lerer
>Priority: Critical
> Fix For: 3.0.x
>
>
> We are using Cassandra 3.0.3 with Level ordered compaction strategy with 
> default compression. While doing some load tests, I can observe these 
> messages at nearly fixed intervals of 15-20 seconds, on just one node of a 
> 6-node cluster: 
> ERROR [CompactionExecutor:23] 2016-05-29 01:29:42,643 
> CassandraDaemon.java:195 - Exception in thread 
> Thread[CompactionExecutor:23,1,main]
> java.lang.IllegalArgumentException: null
>   at java.nio.Buffer.position(Buffer.java:244) ~[na:1.8.0_45]
>   at 
> org.apache.cassandra.io.compress.LZ4Compressor.uncompress(LZ4Compressor.java:114)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBufferMmap(CompressedRandomAccessReader.java:183)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.io.util.RandomAccessReader.reBuffer(RandomAccessReader.java:111)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.io.util.RandomAccessReader.seek(RandomAccessReader.java:302)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner.seekToCurrentRangeStart(BigTableScanner.java:181)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner.access$200(BigTableScanner.java:51)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator.computeNext(BigTableScanner.java:280)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator.computeNext(BigTableScanner.java:260)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner.hasNext(BigTableScanner.java:240)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:369)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:189)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:158)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$2.hasNext(UnfilteredPartitionIterators.java:150)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:72)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:226)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:177)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:263)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_45]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_45]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_45]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_45]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
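
For anyone hitting the same trace: the {{null}} in {{java.lang.IllegalArgumentException: null}} is simply a missing detail message. On Java 8, {{java.nio.Buffer.position(int)}} throws a message-less {{IllegalArgumentException}} when the requested position is negative or past the limit, which lines up with the seek into a compressed chunk shown in the trace above. A minimal, hypothetical reproduction (not Cassandra code):

```java
import java.nio.ByteBuffer;

// Why the log shows "java.lang.IllegalArgumentException: null": on Java 8,
// Buffer.position(int) throws an IllegalArgumentException with no detail
// message when the new position is out of bounds, and a null message is
// printed as "null" by the logger.
public class BufferPositionDemo {
    /** Provokes the same exception type seen in the compaction trace. */
    public static IllegalArgumentException provoke() {
        try {
            ByteBuffer.allocate(10).position(20); // 20 > limit of 10
            return null;
        } catch (IllegalArgumentException e) {
            return e;
        }
    }

    public static void main(String[] args) {
        // On Java 8 this prints "java.lang.IllegalArgumentException" (no message).
        System.out.println(provoke());
    }
}
```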




[jira] [Commented] (CASSANDRA-11916) Exception In Compaction Executor - java.lang.IllegalArgumentException: null

2017-06-20 Thread Bhuvan Rawal (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16056154#comment-16056154
 ] 

Bhuvan Rawal commented on CASSANDRA-11916:
--

[~blerer] We upgraded to a later version of the 3.0.x series and the issue did not 
recur; it was probably fixed somewhere along the line.

> Exception In Compaction Executor - java.lang.IllegalArgumentException: null
> ---
>
> Key: CASSANDRA-11916
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11916
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Bhuvan Rawal
>Assignee: Benjamin Lerer
>Priority: Critical
> Fix For: 3.0.x
>
>
> We are using Cassandra 3.0.3 with Level ordered compaction strategy with 
> default compression. While doing some load tests, I can observe these 
> messages after near fixed intervals of 15-20 seconds each on just one node 
> amongst 6 node cluster: 
> ERROR [CompactionExecutor:23] 2016-05-29 01:29:42,643 
> CassandraDaemon.java:195 - Exception in thread 
> Thread[CompactionExecutor:23,1,main]
> java.lang.IllegalArgumentException: null
>   at java.nio.Buffer.position(Buffer.java:244) ~[na:1.8.0_45]
>   at 
> org.apache.cassandra.io.compress.LZ4Compressor.uncompress(LZ4Compressor.java:114)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBufferMmap(CompressedRandomAccessReader.java:183)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.io.util.RandomAccessReader.reBuffer(RandomAccessReader.java:111)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.io.util.RandomAccessReader.seek(RandomAccessReader.java:302)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner.seekToCurrentRangeStart(BigTableScanner.java:181)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner.access$200(BigTableScanner.java:51)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator.computeNext(BigTableScanner.java:280)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator.computeNext(BigTableScanner.java:260)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner.hasNext(BigTableScanner.java:240)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:369)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:189)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:158)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$2.hasNext(UnfilteredPartitionIterators.java:150)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:72)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:226)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:177)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:263)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_45]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_45]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_45]
>   at 
> 

[jira] [Commented] (CASSANDRA-12924) GraphiteReporter does not reconnect if graphite restarts

2017-06-20 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16056149#comment-16056149
 ] 

Jeff Jirsa commented on CASSANDRA-12924:


3.11.0 has already gone to vote; unless a member of the PMC objects, it's unlikely 
we'll be able to include it. However, it should be easy enough to fix after the 
fact: just drop in the new jar at runtime, yes?

> GraphiteReporter does not reconnect if graphite restarts
> 
>
> Key: CASSANDRA-12924
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12924
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stefano Ortolani
>
> Seems like GraphiteReporter does not reconnect after graphite is restarted. 
> The consequence is complete loss of reported metrics until Cassandra 
> restarts. Logs show this every minute:
> {noformat}
> WARN  [metrics-graphite-reporter-1-thread-1] 2016-11-17 10:06:26,549 
> GraphiteReporter.java:179 - Unable to report to Graphite
> java.net.SocketException: Broken pipe
>   at java.net.SocketOutputStream.socketWrite0(Native Method) 
> ~[na:1.8.0_91]
>   at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109) 
> ~[na:1.8.0_91]
>   at java.net.SocketOutputStream.write(SocketOutputStream.java:153) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:282) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125) ~[na:1.8.0_91]
>   at java.io.OutputStreamWriter.write(OutputStreamWriter.java:207) 
> ~[na:1.8.0_91]
>   at java.io.BufferedWriter.flushBuffer(BufferedWriter.java:129) 
> ~[na:1.8.0_91]
>   at java.io.BufferedWriter.write(BufferedWriter.java:230) ~[na:1.8.0_91]
>   at java.io.Writer.write(Writer.java:157) ~[na:1.8.0_91]
>   at com.codahale.metrics.graphite.Graphite.send(Graphite.java:130) 
> ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.reportGauge(GraphiteReporter.java:283)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.report(GraphiteReporter.java:158)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.ScheduledReporter.report(ScheduledReporter.java:162) 
> [metrics-core-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.ScheduledReporter$1.run(ScheduledReporter.java:117) 
> [metrics-core-3.1.0.jar:3.1.0]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_91]
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
> [na:1.8.0_91]
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  [na:1.8.0_91]
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  [na:1.8.0_91]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_91]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_91]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
> WARN  [metrics-graphite-reporter-1-thread-1] 2016-11-17 10:06:26,549 
> GraphiteReporter.java:183 - Error closing Graphite
> java.net.SocketException: Broken pipe
>   at java.net.SocketOutputStream.socketWrite0(Native Method) 
> ~[na:1.8.0_91]
>   at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109) 
> ~[na:1.8.0_91]
>   at java.net.SocketOutputStream.write(SocketOutputStream.java:153) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:282) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125) ~[na:1.8.0_91]
>   at java.io.OutputStreamWriter.write(OutputStreamWriter.java:207) 
> ~[na:1.8.0_91]
>   at java.io.BufferedWriter.flushBuffer(BufferedWriter.java:129) 
> ~[na:1.8.0_91]
>   at java.io.BufferedWriter.write(BufferedWriter.java:230) ~[na:1.8.0_91]
>   at java.io.Writer.write(Writer.java:157) ~[na:1.8.0_91]
>   at com.codahale.metrics.graphite.Graphite.send(Graphite.java:130) 
> ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.reportGauge(GraphiteReporter.java:283)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.report(GraphiteReporter.java:158)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.ScheduledReporter.report(ScheduledReporter.java:162) 
> [metrics-core-3.1.0.jar:3.1.0]
>   at 
> 

[jira] [Updated] (CASSANDRA-12924) GraphiteReporter does not reconnect if graphite restarts

2017-06-20 Thread Andres March (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andres March updated CASSANDRA-12924:
-
Status: Patch Available  (was: Open)

https://github.com/apache/cassandra/pull/123

> GraphiteReporter does not reconnect if graphite restarts
> 
>
> Key: CASSANDRA-12924
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12924
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stefano Ortolani
>
> Seems like GraphiteReporter does not reconnect after graphite is restarted. 
> The consequence is complete loss of reported metrics until Cassandra 
> restarts. Logs show this every minute:
> {noformat}
> WARN  [metrics-graphite-reporter-1-thread-1] 2016-11-17 10:06:26,549 
> GraphiteReporter.java:179 - Unable to report to Graphite
> java.net.SocketException: Broken pipe
>   at java.net.SocketOutputStream.socketWrite0(Native Method) 
> ~[na:1.8.0_91]
>   at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109) 
> ~[na:1.8.0_91]
>   at java.net.SocketOutputStream.write(SocketOutputStream.java:153) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:282) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125) ~[na:1.8.0_91]
>   at java.io.OutputStreamWriter.write(OutputStreamWriter.java:207) 
> ~[na:1.8.0_91]
>   at java.io.BufferedWriter.flushBuffer(BufferedWriter.java:129) 
> ~[na:1.8.0_91]
>   at java.io.BufferedWriter.write(BufferedWriter.java:230) ~[na:1.8.0_91]
>   at java.io.Writer.write(Writer.java:157) ~[na:1.8.0_91]
>   at com.codahale.metrics.graphite.Graphite.send(Graphite.java:130) 
> ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.reportGauge(GraphiteReporter.java:283)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.report(GraphiteReporter.java:158)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.ScheduledReporter.report(ScheduledReporter.java:162) 
> [metrics-core-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.ScheduledReporter$1.run(ScheduledReporter.java:117) 
> [metrics-core-3.1.0.jar:3.1.0]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_91]
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
> [na:1.8.0_91]
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  [na:1.8.0_91]
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  [na:1.8.0_91]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_91]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_91]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
> WARN  [metrics-graphite-reporter-1-thread-1] 2016-11-17 10:06:26,549 
> GraphiteReporter.java:183 - Error closing Graphite
> java.net.SocketException: Broken pipe
>   at java.net.SocketOutputStream.socketWrite0(Native Method) 
> ~[na:1.8.0_91]
>   at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109) 
> ~[na:1.8.0_91]
>   at java.net.SocketOutputStream.write(SocketOutputStream.java:153) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:282) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125) ~[na:1.8.0_91]
>   at java.io.OutputStreamWriter.write(OutputStreamWriter.java:207) 
> ~[na:1.8.0_91]
>   at java.io.BufferedWriter.flushBuffer(BufferedWriter.java:129) 
> ~[na:1.8.0_91]
>   at java.io.BufferedWriter.write(BufferedWriter.java:230) ~[na:1.8.0_91]
>   at java.io.Writer.write(Writer.java:157) ~[na:1.8.0_91]
>   at com.codahale.metrics.graphite.Graphite.send(Graphite.java:130) 
> ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.reportGauge(GraphiteReporter.java:283)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.report(GraphiteReporter.java:158)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.ScheduledReporter.report(ScheduledReporter.java:162) 
> [metrics-core-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.ScheduledReporter$1.run(ScheduledReporter.java:117) 
> [metrics-core-3.1.0.jar:3.1.0]
>   at 
> 

[jira] [Commented] (CASSANDRA-12924) GraphiteReporter does not reconnect if graphite restarts

2017-06-20 Thread Andres March (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16056133#comment-16056133
 ] 

Andres March commented on CASSANDRA-12924:
--

https://github.com/apache/cassandra/pull/123

It would be really nice to include this in 3.11.

The metrics-core update is required.
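
The upstream metrics library fix makes the reporter drop its broken socket and reconnect on the next report, instead of reusing the dead connection forever. A hypothetical sketch of that behavior (the {{GraphiteSender}} interface here is a stand-in for illustration, not the real codahale API):

```java
import java.io.IOException;

// Sketch of reconnect-on-failure reporting: when a send fails, close the
// broken socket so the next report opens a fresh connection, rather than
// hitting "Broken pipe" on every subsequent flush.
public class ReconnectingSender {
    public interface GraphiteSender {
        void connect() throws IOException;
        void send(String name, String value, long timestamp) throws IOException;
        void close() throws IOException;
        boolean isConnected();
    }

    private final GraphiteSender delegate;

    public ReconnectingSender(GraphiteSender delegate) {
        this.delegate = delegate;
    }

    /**
     * Returns true if the metric was sent; false if the attempt failed and
     * the connection was reset so the next report can reconnect.
     */
    public boolean trySend(String name, String value, long timestamp) {
        try {
            if (!delegate.isConnected())
                delegate.connect();          // reconnect after a prior failure
            delegate.send(name, value, timestamp);
            return true;
        } catch (IOException e) {
            try {
                delegate.close();            // drop the broken socket
            } catch (IOException ignored) {
                // nothing useful to do with a failure while closing
            }
            return false;                    // next call will reconnect
        }
    }
}
```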

> GraphiteReporter does not reconnect if graphite restarts
> 
>
> Key: CASSANDRA-12924
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12924
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stefano Ortolani
>
> Seems like GraphiteReporter does not reconnect after graphite is restarted. 
> The consequence is complete loss of reported metrics until Cassandra 
> restarts. Logs show this every minute:
> {noformat}
> WARN  [metrics-graphite-reporter-1-thread-1] 2016-11-17 10:06:26,549 
> GraphiteReporter.java:179 - Unable to report to Graphite
> java.net.SocketException: Broken pipe
>   at java.net.SocketOutputStream.socketWrite0(Native Method) 
> ~[na:1.8.0_91]
>   at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109) 
> ~[na:1.8.0_91]
>   at java.net.SocketOutputStream.write(SocketOutputStream.java:153) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:282) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125) ~[na:1.8.0_91]
>   at java.io.OutputStreamWriter.write(OutputStreamWriter.java:207) 
> ~[na:1.8.0_91]
>   at java.io.BufferedWriter.flushBuffer(BufferedWriter.java:129) 
> ~[na:1.8.0_91]
>   at java.io.BufferedWriter.write(BufferedWriter.java:230) ~[na:1.8.0_91]
>   at java.io.Writer.write(Writer.java:157) ~[na:1.8.0_91]
>   at com.codahale.metrics.graphite.Graphite.send(Graphite.java:130) 
> ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.reportGauge(GraphiteReporter.java:283)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.report(GraphiteReporter.java:158)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.ScheduledReporter.report(ScheduledReporter.java:162) 
> [metrics-core-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.ScheduledReporter$1.run(ScheduledReporter.java:117) 
> [metrics-core-3.1.0.jar:3.1.0]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_91]
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
> [na:1.8.0_91]
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  [na:1.8.0_91]
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  [na:1.8.0_91]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_91]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_91]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
> WARN  [metrics-graphite-reporter-1-thread-1] 2016-11-17 10:06:26,549 
> GraphiteReporter.java:183 - Error closing Graphite
> java.net.SocketException: Broken pipe
>   at java.net.SocketOutputStream.socketWrite0(Native Method) 
> ~[na:1.8.0_91]
>   at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109) 
> ~[na:1.8.0_91]
>   at java.net.SocketOutputStream.write(SocketOutputStream.java:153) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:282) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125) ~[na:1.8.0_91]
>   at java.io.OutputStreamWriter.write(OutputStreamWriter.java:207) 
> ~[na:1.8.0_91]
>   at java.io.BufferedWriter.flushBuffer(BufferedWriter.java:129) 
> ~[na:1.8.0_91]
>   at java.io.BufferedWriter.write(BufferedWriter.java:230) ~[na:1.8.0_91]
>   at java.io.Writer.write(Writer.java:157) ~[na:1.8.0_91]
>   at com.codahale.metrics.graphite.Graphite.send(Graphite.java:130) 
> ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.reportGauge(GraphiteReporter.java:283)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.report(GraphiteReporter.java:158)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.ScheduledReporter.report(ScheduledReporter.java:162) 
> [metrics-core-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.ScheduledReporter$1.run(ScheduledReporter.java:117) 
> [metrics-core-3.1.0.jar:3.1.0]

[jira] [Resolved] (CASSANDRA-13551) Trivial format error in StorageProxy

2017-06-20 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa resolved CASSANDRA-13551.

Resolution: Fixed

Thanks Ariel for the review and the nudge.

Committed to trunk as {{f21202e83f308ea22cd430499da60aebbfa8ffbc}}
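
The distinction matters because SLF4J only logs the full stack trace when the trailing {{Throwable}} is left over after the {{{}}} placeholders are filled; a placeholder that consumes the exception logs only {{ex.toString()}}. A toy helper sketching that rule (an illustration mimicking SLF4J's contract, not the actual {{MessageFormatter}} implementation):

```java
// Mimics (does not reproduce) SLF4J's rule: a trailing Throwable argument is
// logged with its stack trace only when it is NOT consumed by a "{}"
// placeholder in the message template.
public class Slf4jFormatDemo {
    /** Returns true when the throwable would be logged with a stack trace. */
    public static boolean stackTraceLogged(String template, Object... args) {
        if (args.length == 0 || !(args[args.length - 1] instanceof Throwable))
            return false;
        // Count "{}" placeholders; if one consumes the throwable, only its
        // toString() is interpolated and the stack trace is lost.
        int placeholders = 0;
        for (int i = template.indexOf("{}"); i >= 0; i = template.indexOf("{}", i + 2))
            placeholders++;
        return placeholders < args.length;
    }

    public static void main(String[] args) {
        Exception ex = new RuntimeException("boom");
        // "{}" consumes ex: no stack trace, just ex.toString()
        System.out.println(stackTraceLogged("Failed to apply mutation locally : {}", ex)); // prints false
        // no placeholder: ex is treated as the throwable, full trace is logged
        System.out.println(stackTraceLogged("Failed to apply mutation locally", ex));      // prints true
    }
}
```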


> Trivial format error in StorageProxy
> 
>
> Key: CASSANDRA-13551
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13551
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jeff Jirsa
>Assignee: Jeff Jirsa
>Priority: Trivial
> Fix For: 4.0
>
>
> Maybe I should just ninja it: 
> {code}
> diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
> b/src/java/org/apache/cassandra/service/StorageProxy.java
> index ea082d5..1ab8dd6 100644
> --- a/src/java/org/apache/cassandra/service/StorageProxy.java
> +++ b/src/java/org/apache/cassandra/service/StorageProxy.java
> @@ -1319,7 +1319,7 @@ public class StorageProxy implements StorageProxyMBean
>  }
>  catch (Exception ex)
>  {
> -logger.error("Failed to apply mutation locally : {}", 
> ex);
> +logger.error("Failed to apply mutation locally", ex);
>  }
>  }
> @@ -1345,7 +1345,7 @@ public class StorageProxy implements StorageProxyMBean
>  catch (Exception ex)
>  {
>  if (!(ex instanceof WriteTimeoutException))
> -logger.error("Failed to apply mutation locally : 
> {}", ex);
> +logger.error("Failed to apply mutation locally", ex);
>  handler.onFailure(FBUtilities.getBroadcastAddress());
>  }
>  }
> {code}






[jira] [Updated] (CASSANDRA-13551) Trivial format error in StorageProxy

2017-06-20 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-13551:
---
Fix Version/s: 4.0

> Trivial format error in StorageProxy
> 
>
> Key: CASSANDRA-13551
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13551
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jeff Jirsa
>Assignee: Jeff Jirsa
>Priority: Trivial
> Fix For: 4.0
>
>
> Maybe I should just ninja it: 
> {code}
> diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
> b/src/java/org/apache/cassandra/service/StorageProxy.java
> index ea082d5..1ab8dd6 100644
> --- a/src/java/org/apache/cassandra/service/StorageProxy.java
> +++ b/src/java/org/apache/cassandra/service/StorageProxy.java
> @@ -1319,7 +1319,7 @@ public class StorageProxy implements StorageProxyMBean
>  }
>  catch (Exception ex)
>  {
> -logger.error("Failed to apply mutation locally : {}", 
> ex);
> +logger.error("Failed to apply mutation locally", ex);
>  }
>  }
> @@ -1345,7 +1345,7 @@ public class StorageProxy implements StorageProxyMBean
>  catch (Exception ex)
>  {
>  if (!(ex instanceof WriteTimeoutException))
> -logger.error("Failed to apply mutation locally : 
> {}", ex);
> +logger.error("Failed to apply mutation locally", ex);
>  handler.onFailure(FBUtilities.getBroadcastAddress());
>  }
>  }
> {code}






cassandra git commit: Trivial format error in StorageProxy

2017-06-20 Thread jjirsa
Repository: cassandra
Updated Branches:
  refs/heads/trunk 40cc2ece6 -> f21202e83


Trivial format error in StorageProxy

Patch by Jeff Jirsa; Reviewed by Ariel Weisberg for CASSANDRA-13551


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f21202e8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f21202e8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f21202e8

Branch: refs/heads/trunk
Commit: f21202e83f308ea22cd430499da60aebbfa8ffbc
Parents: 40cc2ec
Author: Jeff Jirsa 
Authored: Tue Jun 20 10:05:10 2017 -0700
Committer: Jeff Jirsa 
Committed: Tue Jun 20 10:05:10 2017 -0700

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/service/StorageProxy.java | 6 +++---
 2 files changed, 4 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f21202e8/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index efca8c4..0968de9 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -76,6 +76,7 @@
  * Add histogram for delay to deliver hints (CASSANDRA-13234)
  * Fix cqlsh automatic protocol downgrade regression (CASSANDRA-13307)
  * Changing `max_hint_window_in_ms` at runtime (CASSANDRA-11720)
+ * Trivial format error in StorageProxy (CASSANDRA-13551)
 
 
 3.11.0

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f21202e8/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
b/src/java/org/apache/cassandra/service/StorageProxy.java
index 65aed6f..c106fd1 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -598,7 +598,7 @@ public class StorageProxy implements StorageProxyMBean
 catch (Exception ex)
 {
 if (!(ex instanceof WriteTimeoutException))
-logger.error("Failed to apply paxos commit locally : 
{}", ex);
+logger.error("Failed to apply paxos commit locally : 
", ex);
 
responseHandler.onFailure(FBUtilities.getBroadcastAddress(), 
RequestFailureReason.UNKNOWN);
 }
 }
@@ -1362,7 +1362,7 @@ public class StorageProxy implements StorageProxyMBean
 }
 catch (Exception ex)
 {
-logger.error("Failed to apply mutation locally : {}", ex);
+logger.error("Failed to apply mutation locally : ", ex);
 }
 }
 
@@ -1388,7 +1388,7 @@ public class StorageProxy implements StorageProxyMBean
 catch (Exception ex)
 {
 if (!(ex instanceof WriteTimeoutException))
-logger.error("Failed to apply mutation locally : {}", 
ex);
+logger.error("Failed to apply mutation locally : ", 
ex);
 handler.onFailure(FBUtilities.getBroadcastAddress(), 
RequestFailureReason.UNKNOWN);
 }
 }





[jira] [Updated] (CASSANDRA-13583) test failure in rebuild_test.TestRebuild.disallow_rebuild_from_nonreplica_test

2017-06-20 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-13583:

Fix Version/s: 4.x
   Status: Patch Available  (was: Open)

https://github.com/krummas/cassandra/commits/marcuse/13583

The problem was that we ignored the sourceFound variable if localhost was in the 
map, but we still need to check it, since localhost is filtered out when doing a 
rebuild.
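
The check can be sketched as follows; {{hasUsableSource}} is a hypothetical helper for illustration, not the actual {{RangeStreamer}} code:

```java
import java.util.Set;

// Illustrative sketch of the fix described above: the local node is filtered
// out of the candidate sources before streaming, so "a source was found" must
// still be verified even when localhost appears in the replica map.
public class RebuildSourceCheck {
    /** True when at least one remote replica can serve the range. */
    public static boolean hasUsableSource(Set<String> replicas, String localhost) {
        for (String replica : replicas)
            if (!replica.equals(localhost)) // localhost cannot stream to itself
                return true;
        // Only localhost in the map must NOT count as a found source; the
        // rebuild request should be rejected with the expected error instead.
        return false;
    }
}
```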

> test failure in rebuild_test.TestRebuild.disallow_rebuild_from_nonreplica_test
> --
>
> Key: CASSANDRA-13583
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13583
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Michael Hamm
>Assignee: Marcus Eriksson
>  Labels: dtest, test-failure
> Fix For: 4.x
>
> Attachments: node1_debug.log, node1_gc.log, node1.log, 
> node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, 
> node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_novnode_dtest/524/testReport/rebuild_test/TestRebuild/disallow_rebuild_from_nonreplica_test
> {noformat}
> Error Message
> ToolError not raised
>  >> begin captured logging << 
> dtest: DEBUG: Python driver version in use: 3.10
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-0tUjhX
> dtest: DEBUG: Done setting configuration options:
> {   'num_tokens': None,
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> cassandra.cluster: INFO: New Cassandra host  discovered
> cassandra.cluster: INFO: New Cassandra host  discovered
> - >> end captured logging << -
> {noformat}
> {noformat}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/tools/decorators.py", line 48, in 
> wrappedtestrebuild
> f(obj)
>   File "/home/automaton/cassandra-dtest/rebuild_test.py", line 357, in 
> disallow_rebuild_from_nonreplica_test
> node1.nodetool('rebuild -ks ks1 -ts (%s,%s] -s %s' % (node3_token, 
> node1_token, node3_address))
>   File "/usr/lib/python2.7/unittest/case.py", line 116, in __exit__
> "{0} not raised".format(exc_name))
> {noformat}






[jira] [Commented] (CASSANDRA-11916) Exception In Compaction Executor - java.lang.IllegalArgumentException: null

2017-06-20 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16055902#comment-16055902
 ] 

Benjamin Lerer commented on CASSANDRA-11916:


[~bhuvanrawal] do you still have this issue?

> Exception In Compaction Executor - java.lang.IllegalArgumentException: null
> ---
>
> Key: CASSANDRA-11916
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11916
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Bhuvan Rawal
>Assignee: Benjamin Lerer
>Priority: Critical
> Fix For: 3.0.x
>
>
> We are using Cassandra 3.0.3 with the leveled compaction strategy and 
> default compression. While doing some load tests, I can observe these 
> messages at nearly fixed intervals of 15-20 seconds, on just one node 
> of a 6-node cluster: 
> ERROR [CompactionExecutor:23] 2016-05-29 01:29:42,643 
> CassandraDaemon.java:195 - Exception in thread 
> Thread[CompactionExecutor:23,1,main]
> java.lang.IllegalArgumentException: null
>   at java.nio.Buffer.position(Buffer.java:244) ~[na:1.8.0_45]
>   at 
> org.apache.cassandra.io.compress.LZ4Compressor.uncompress(LZ4Compressor.java:114)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBufferMmap(CompressedRandomAccessReader.java:183)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.io.util.RandomAccessReader.reBuffer(RandomAccessReader.java:111)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.io.util.RandomAccessReader.seek(RandomAccessReader.java:302)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner.seekToCurrentRangeStart(BigTableScanner.java:181)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner.access$200(BigTableScanner.java:51)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator.computeNext(BigTableScanner.java:280)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator.computeNext(BigTableScanner.java:260)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner.hasNext(BigTableScanner.java:240)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:369)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:189)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:158)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$2.hasNext(UnfilteredPartitionIterators.java:150)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:72)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:226)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:177)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:263)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_45]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_45]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_45]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_45]
>   at java.lang.Thread.run(Thread.java:745) 
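The {{java.lang.IllegalArgumentException: null}} at {{Buffer.position}} in the trace above is what the JDK throws when code tries to set an out-of-range buffer position. A minimal sketch of that failure mode (the buffer and offset here are hypothetical stand-ins, not the actual {{LZ4Compressor}} bookkeeping):

```java
import java.nio.ByteBuffer;

public class PositionSketch {
    public static void main(String[] args) {
        // Hypothetical stand-in for the buffer handling inside
        // LZ4Compressor.uncompress: a corrupted chunk length can yield an
        // offset outside the buffer's limits.
        ByteBuffer chunk = ByteBuffer.allocate(16);
        int badOffset = -1; // e.g. derived from a corrupted length field
        try {
            chunk.position(badOffset); // throws IllegalArgumentException
        } catch (IllegalArgumentException e) {
            // On the JDK 8 builds shown in the trace the exception carries no
            // message, which is why the log prints
            // "java.lang.IllegalArgumentException: null".
            System.out.println("rejected offset " + badOffset);
        }
    }
}
```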

[jira] [Comment Edited] (CASSANDRA-10130) Node failure during 2i update after streaming can have incomplete 2i when restarted

2017-06-20 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16055571#comment-16055571
 ] 

Andrés de la Peña edited comment on CASSANDRA-10130 at 6/20/17 2:00 PM:


bq. I think we should probably run exceptions during 2i rebuild failure 
(logAndMarkIndexesFailed) via the JVMStabilityInspector

Done 
[here|https://github.com/adelapena/cassandra/compare/ca7da30f621e742f85b6a7b1f66d320ba224a6a4...adelapena:0f6972eacdab6b0c81e00d8c0c59968106d3f462],
 with tests for both create and rebuild.

Here is the full patch with CI results:

||[trunk|https://github.com/apache/cassandra/compare/trunk...adelapena:10130-trunk]|[utests|http://cassci.datastax.com/view/Dev/view/adelapena/job/adelapena-10130-trunk-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/adelapena/job/adelapena-10130-trunk-dtest/]|


was (Author: adelapena):
bq. I think we should probably run exceptions during 2i rebuild failure 
(logAndMarkIndexesFailed) via the JVMStabilityInspector

Done 
[here|https://github.com/adelapena/cassandra/commit/6fd2f1802a3969148e344b0b2b3c7bec4ee3b014],
 with tests for both create and rebuild.

Here is the full patch with CI results:

||[trunk|https://github.com/apache/cassandra/compare/trunk...adelapena:6fd2f1802a3969148e344b0b2b3c7bec4ee3b014]|[utests|http://cassci.datastax.com/view/Dev/view/adelapena/job/adelapena-10130-trunk-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/adelapena/job/adelapena-10130-trunk-dtest/]|

> Node failure during 2i update after streaming can have incomplete 2i when 
> restarted
> ---
>
> Key: CASSANDRA-10130
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10130
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Yuki Morishita
>Assignee: Andrés de la Peña
>Priority: Minor
>
> Since MV/2i update happens after SSTables are received, node failure during 
> MV/2i update can leave received SSTables live when restarted while MV/2i are 
> partially up to date.
> We can add some kind of tracking mechanism to automatically rebuild at the 
> startup, or at least warn user when the node restarts.






[jira] [Created] (CASSANDRA-13623) Official cassandra docker image: Connections closing and timing out inserting/deleting data

2017-06-20 Thread a8775 (JIRA)
a8775 created CASSANDRA-13623:
-

 Summary: Official cassandra docker image: Connections closing and 
timing out inserting/deleting data 
 Key: CASSANDRA-13623
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13623
 Project: Cassandra
  Issue Type: Bug
 Environment: Official cassandra docker image 3.10 running under 
Windows10, no extra configuration
Reporter: a8775


The problem looks like https://github.com/docker-library/cassandra/issues/101

After some time updates start timing out, e.g. when updating one row in a loop 
every 500ms, the problem appears after about 12h. This has never happened when 
working with a native installation of Cassandra.

The original issue is about "Docker for Mac", but this time it occurs on 
Windows 10. I couldn't find a reference to a similar problem under Windows.






[jira] [Assigned] (CASSANDRA-11200) CompactionExecutor thread error brings down JVM in 3.0.3

2017-06-20 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer reassigned CASSANDRA-11200:
--

Assignee: Benjamin Lerer

> CompactionExecutor thread error brings down JVM in 3.0.3
> 
>
> Key: CASSANDRA-11200
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11200
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: debian jesse latest release, updated Feb. 20th
>Reporter: Jason Kania
>Assignee: Benjamin Lerer
>Priority: Critical
>
> When launching Cassandra 3.0.3, with java version "1.8.0_74", Cassandra 
> writes the following to the debug file before a segmentation fault occurs 
> bringing down the JVM - the problem is repeatable.
> DEBUG [CompactionExecutor:1] 2016-02-20 18:26:16,892 CompactionTask.java:146 
> - Compacting (56f677c0-d829-11e5-b23a-25dbd4d727f6) 
> [/var/lib/cassandra/data/sensordb/periodicReading/ma-367-big-Data.db:level=0, 
> /var/lib/cassandra/data/sensordb/periodicReading/ma-368-big-Data.db:level=0, 
> /var/lib/cassandra/data/sensordb/periodicReading/ma-371-big-Data.db:level=0, 
> /var/lib/cassandra/data/sensordb/periodicReading/ma-370-big-Data.db:level=0, 
> /var/lib/cassandra/data/sensordb/periodicReading/ma-369-big-Data.db:level=0, ]
> The JVM error that occurs is the following:
> \#
> \# A fatal error has been detected by the Java Runtime Environment:
> \#
> \#  SIGBUS (0x7) at pc=0x7fa8a1052150, pid=12179, tid=140361951868672
> \#
> \# JRE version: Java(TM) SE Runtime Environment (8.0_74-b02) (build 
> 1.8.0_74-b02)
> \# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.74-b02 mixed mode 
> linux-amd64 compressed oops)
> \# Problematic frame:
> \# v  ~StubRoutines::jbyte_disjoint_arraycopy
> \#
> \# Core dump written. Default location: /tmp/core or core.12179
> \#
> \# If you would like to submit a bug report, please visit:
> \#   http://bugreport.java.com/bugreport/crash.jsp
> \#
> ---  T H R E A D  ---
> Current thread (0x7fa89c56ac20):  JavaThread "CompactionExecutor:1" 
> daemon [_thread_in_Java, id=12323, 
> stack(0x7fa89043f000,0x7fa89048)]
> siginfo: si_signo: 7 (SIGBUS), si_code: 2 (BUS_ADRERR), si_addr: 
> 0x7fa838988002
> Even if all of the files associated with "ma-[NNN]*" are removed, the JVM 
> dies with the same error after the next group of "ma-[NNN]*" are eventually 
> written out and compacted.
> Though this may be strictly a JVM problem, I have seen the issue in Oracle 
> JVM 8.0_65 and 8.0_74 and I raise it in case this problem is due to JNI usage 
> of an external compression library or some direct memory usage.
> I have a core dump if that is helpful to anyone.
> Bug CASSANDRA-11201 may also be related although when the exception 
> referenced in the bug occurs, the JVM remains alive.






[jira] [Commented] (CASSANDRA-10130) Node failure during 2i update after streaming can have incomplete 2i when restarted

2017-06-20 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16055571#comment-16055571
 ] 

Andrés de la Peña commented on CASSANDRA-10130:
---

bq. I think we should probably run exceptions during 2i rebuild failure 
(logAndMarkIndexesFailed) via the JVMStabilityInspector

Done 
[here|https://github.com/adelapena/cassandra/commit/6fd2f1802a3969148e344b0b2b3c7bec4ee3b014],
 with tests for both create and rebuild.

Here is the full patch with CI results:

||[trunk|https://github.com/apache/cassandra/compare/trunk...adelapena:6fd2f1802a3969148e344b0b2b3c7bec4ee3b014]|[utests|http://cassci.datastax.com/view/Dev/view/adelapena/job/adelapena-10130-trunk-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/adelapena/job/adelapena-10130-trunk-dtest/]|

> Node failure during 2i update after streaming can have incomplete 2i when 
> restarted
> ---
>
> Key: CASSANDRA-10130
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10130
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Yuki Morishita
>Assignee: Andrés de la Peña
>Priority: Minor
>
> Since MV/2i update happens after SSTables are received, node failure during 
> MV/2i update can leave received SSTables live when restarted while MV/2i are 
> partially up to date.
> We can add some kind of tracking mechanism to automatically rebuild at the 
> startup, or at least warn user when the node restarts.






[jira] [Assigned] (CASSANDRA-11201) Compaction memory fault in 3.0.3

2017-06-20 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer reassigned CASSANDRA-11201:
--

Assignee: Benjamin Lerer

> Compaction memory fault in 3.0.3
> 
>
> Key: CASSANDRA-11201
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11201
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: debian jesse latest release, updated Feb. 20th
>Reporter: Jason Kania
>Assignee: Benjamin Lerer
>
> I have been encountering the following errors periodically on the system:
> ERROR [CompactionExecutor:6] 2016-02-20 16:54:09,069 CassandraDaemon.java:195 
> - Exception in thread Thread[CompactionExecutor:6,1,main]
> java.lang.InternalError: a fault occurred in a recent unsafe memory access 
> operation in compiled Java code
> at 
> org.apache.cassandra.utils.ByteBufferUtil.readShortLength(ByteBufferUtil.java:366)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:376)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner.seekToCurrentRangeStart(BigTableScanner.java:175)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner.access$200(BigTableScanner.java:51)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator.computeNext(BigTableScanner.java:280)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator.computeNext(BigTableScanner.java:260)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner.hasNext(BigTableScanner.java:240)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:369)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:189)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:158)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$2.hasNext(UnfilteredPartitionIterators.java:150)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:72)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:226)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:177)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:263)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_65]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_65]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_65]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_65]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_65]
> This problem persisted after several reboots and even when most other 
> applications on the system were terminated to provide more memory 
> availability.
> The problem also occurs when running 'nodetool compact'.






[jira] [Commented] (CASSANDRA-11200) CompactionExecutor thread error brings down JVM in 3.0.3

2017-06-20 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16055471#comment-16055471
 ] 

Benjamin Lerer commented on CASSANDRA-11200:


Since CASSANDRA-7039, direct {{ByteBuffers}} are used by LZ4. I had some 
offline discussion with [~barnie] and we believe that if the OS can't read an 
mmapped page, the code might end up blaming LZ4.
It seems that this type of segmentation fault is often caused by a 
memory-mapped file issue.
The underlying problem might be a memory issue or a disk fault. If the problem 
is due to a disk fault, you should not be able to copy the SSTable file.

Now, I think we should improve the way the error is reported, so that we can 
tell straight away that the issue occurred during decompression and which 
SSTable caused the problem.   

> CompactionExecutor thread error brings down JVM in 3.0.3
> 
>
> Key: CASSANDRA-11200
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11200
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: debian jesse latest release, updated Feb. 20th
>Reporter: Jason Kania
>Priority: Critical
>
> When launching Cassandra 3.0.3, with java version "1.8.0_74", Cassandra 
> writes the following to the debug file before a segmentation fault occurs 
> bringing down the JVM - the problem is repeatable.
> DEBUG [CompactionExecutor:1] 2016-02-20 18:26:16,892 CompactionTask.java:146 
> - Compacting (56f677c0-d829-11e5-b23a-25dbd4d727f6) 
> [/var/lib/cassandra/data/sensordb/periodicReading/ma-367-big-Data.db:level=0, 
> /var/lib/cassandra/data/sensordb/periodicReading/ma-368-big-Data.db:level=0, 
> /var/lib/cassandra/data/sensordb/periodicReading/ma-371-big-Data.db:level=0, 
> /var/lib/cassandra/data/sensordb/periodicReading/ma-370-big-Data.db:level=0, 
> /var/lib/cassandra/data/sensordb/periodicReading/ma-369-big-Data.db:level=0, ]
> The JVM error that occurs is the following:
> \#
> \# A fatal error has been detected by the Java Runtime Environment:
> \#
> \#  SIGBUS (0x7) at pc=0x7fa8a1052150, pid=12179, tid=140361951868672
> \#
> \# JRE version: Java(TM) SE Runtime Environment (8.0_74-b02) (build 
> 1.8.0_74-b02)
> \# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.74-b02 mixed mode 
> linux-amd64 compressed oops)
> \# Problematic frame:
> \# v  ~StubRoutines::jbyte_disjoint_arraycopy
> \#
> \# Core dump written. Default location: /tmp/core or core.12179
> \#
> \# If you would like to submit a bug report, please visit:
> \#   http://bugreport.java.com/bugreport/crash.jsp
> \#
> ---  T H R E A D  ---
> Current thread (0x7fa89c56ac20):  JavaThread "CompactionExecutor:1" 
> daemon [_thread_in_Java, id=12323, 
> stack(0x7fa89043f000,0x7fa89048)]
> siginfo: si_signo: 7 (SIGBUS), si_code: 2 (BUS_ADRERR), si_addr: 
> 0x7fa838988002
> Even if all of the files associated with "ma-[NNN]*" are removed, the JVM 
> dies with the same error after the next group of "ma-[NNN]*" are eventually 
> written out and compacted.
> Though this may be strictly a JVM problem, I have seen the issue in Oracle 
> JVM 8.0_65 and 8.0_74 and I raise it in case this problem is due to JNI usage 
> of an external compression library or some direct memory usage.
> I have a core dump if that is helpful to anyone.
> Bug CASSANDRA-11201 may also be related although when the exception 
> referenced in the bug occurs, the JVM remains alive.






[jira] [Commented] (CASSANDRA-13614) Batchlog replay throttle should be dynamically configurable with jmx and possibly nodetool

2017-06-20 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16055459#comment-16055459
 ] 

Andrés de la Peña commented on CASSANDRA-13614:
---

Excellent suggestions:

bq. Can you add a notice to the nodetool command help stating that {{This will 
be reduced proportionally to the number of nodes in the cluster}}, similar to 
the YAML notice.

Done 
[here|https://github.com/adelapena/cassandra/commit/c4431afc3c3e6fd598a3c0be9292f75868259141].

bq. Also, perhaps you could add a debug log on {{BatchLogManager.setRate}} 
stating the actual rate that is being set, given it varies with the number of 
nodes.

Done 
[here|https://github.com/adelapena/cassandra/commit/1ef3e3bbd6e272e2d2f34fd734dbf4629e762a90].
 I have only logged the effective change; do you think that is enough? I have 
also updated the 
[dtest|https://github.com/adelapena/cassandra-dtest/commit/360ae39190028397f4760734deca0ceb792ad071]
 to check the log message.

bq. I think we should also include the {{nodetool getbatchlogreplaythrottlekb}} 
command as well given the JMX method is already there so the extra effort is 
minimum and could be handy for admins. I see there are some setters without 
their corresponding getters on nodetool, but I think we should always include 
both setters and getters. WDYT?

Makes sense, done 
[here|https://github.com/adelapena/cassandra/commit/9620273dde5737a00b7d9742621068822f49502b].
 Now that we have a nodetool command to read the set throttle, I have split the 
dtest into one test for nodetool and another for JMX, 
[here|https://github.com/adelapena/cassandra-dtest/commit/832fcb5a443d759e682b01aadb15fd106b79].

Here is the full patch with the CI results:
||[trunk|https://github.com/apache/cassandra/compare/trunk...adelapena:13614-trunk]|[utests|http://cassci.datastax.com/view/Dev/view/adelapena/job/adelapena-13614-trunk-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/adelapena/job/adelapena-13614-trunk-dtest/]|
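As a sanity check of the behavior being discussed, the proportional reduction of the replay rate can be sketched as follows. The helper name and the integer-division rounding are assumptions for illustration, not the actual {{BatchlogManager}} code:

```java
public class ThrottleSketch {
    /**
     * Hypothetical illustration: the configured total throttle (KB/s) is
     * divided by the cluster size to get the per-endpoint replay rate,
     * mirroring the "reduced proportionally to the number of nodes" notice.
     */
    static int effectiveRateKb(int totalThrottleKb, int clusterSize) {
        // Guard against an empty/unknown cluster size.
        return clusterSize <= 0 ? 0 : totalThrottleKb / clusterSize;
    }

    public static void main(String[] args) {
        // The default 1024 KB/s spread over a 4-node cluster.
        System.out.println(effectiveRateKb(1024, 4)); // 256 KB/s per node
    }
}
```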

> Batchlog replay throttle should be dynamically configurable with jmx and 
> possibly nodetool
> --
>
> Key: CASSANDRA-13614
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13614
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Configuration, Materialized Views
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
>
> As it is said in 
> [CASSANDRA-13162|https://issues.apache.org/jira/browse/CASSANDRA-13162], 
> batchlog replay can be excessively throttled with materialized views. The 
> throttle is controlled by the property {{batchlog_replay_throttle_in_kb}}, 
> which is set by default to (only) 1024KB, and it can't be configured 
> dynamically. It would be useful to be able of modifying it dynamically with 
> JMX and possibly nodetool.






[jira] [Commented] (CASSANDRA-8148) sstableloader needs ability to stream to private IP

2017-06-20 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16055331#comment-16055331
 ] 

Hannu Kröger commented on CASSANDRA-8148:
-

In our case the problem seems to be that sstableloader assumes the streaming 
IP addresses are the same as rpc_address (or broadcast_rpc_address). That is a 
bug in the logic, so I would change this issue type to bug.

> sstableloader needs ability to stream to private IP
> ---
>
> Key: CASSANDRA-8148
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8148
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Yuki Morishita
>Priority: Minor
>
> sstableloader gets where to stream from the contacting node, but destinations 
> returned are all broadcast address.
> It is nice if we can somehow tell sstableloader to stream to private IP 
> instead.
> To do this, we have to find the way to load cassandra topology to 
> sstableloader.






[jira] [Commented] (CASSANDRA-13581) Adding plugins support to Cassandra's webpage

2017-06-20 Thread Amitkumar Ghatwal (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16055293#comment-16055293
 ] 

Amitkumar Ghatwal commented on CASSANDRA-13581:
---

Yes [~spo...@gmail.com], the initial PR #117 was merged, which only contains 
the initial part of the plugin support; the rest is in PR #118. Can you please 
let me know if the plugin page link can be pushed as well?

> Adding plugins support to Cassandra's webpage
> -
>
> Key: CASSANDRA-13581
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13581
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation and Website
>Reporter: Amitkumar Ghatwal
>  Labels: documentation
> Fix For: 4.x
>
>
> Hi [~spo...@gmail.com],
> As was suggested here : 
> http://www.mail-archive.com/dev@cassandra.apache.org/msg11183.html .  Have 
> created the necessary *.rst file to create "plugins" link here : 
> https://cassandra.apache.org/doc/latest/.
> Have followed the steps here : 
> https://cassandra.apache.org/doc/latest/development/documentation.html  and 
> raised a PR : https://github.com/apache/cassandra/pull/118 for introducing 
> plugins support on Cassandra's Webpage.
> Let me know your review comments, and if I have not made the changes to 
> Cassandra's website correctly, I can rectify them.






[jira] [Comment Edited] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables

2017-06-20 Thread Romain GERARD (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16055289#comment-16055289
 ] 

Romain GERARD edited comment on CASSANDRA-13418 at 6/20/17 7:44 AM:


Hello Jonathan, thanks for keeping us up to date :)
On my side, I have deployed the patch I mentioned earlier, and at first glance 
it is running fine. For now I lack the time to analyse the new behavior in 
more depth, but I will do so in the upcoming weeks.
I will keep the thread informed.


was (Author: rgerard):
Hello Jonathan, thanks for keeping us up to date :)
On my side, I have deployed the patch I mentioned earlier, and at first glance 
it is running fine. For now I lack the time to analyse the new behavior in 
more depth, but I will do so in the upcoming weeks. So I will 
keep the thread informed.

> Allow TWCS to ignore overlaps when dropping fully expired sstables
> --
>
> Key: CASSANDRA-13418
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13418
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Corentin Chary
>  Labels: twcs
>
> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If 
> you really want read-repairs you're going to have sstables blocking the 
> expiration of other fully expired SSTables because they overlap.
> You can set unchecked_tombstone_compaction = true or tombstone_threshold to a 
> very low value and that will purge the blockers of old data that should 
> already have expired, thus removing the overlaps and allowing the other 
> SSTables to expire.
> The thing is that this is rather CPU intensive and not optimal. If you have 
> time series, you might not care if all your data doesn't exactly expire at 
> the right time, or if data re-appears for some time, as long as it gets 
> deleted as soon as it can. And in this situation I believe it would be really 
> beneficial to allow users to simply ignore overlapping SSTables when looking 
> for fully expired ones.
> To the question: why would you need read-repairs ?
> - Full repairs basically take longer than the TTL of the data on my dataset, 
> so this isn't really effective.
> - Even with a 10% chances of doing a repair, we found out that this would be 
> enough to greatly reduce entropy of the most used data (and if you have 
> timeseries, you're likely to have a dashboard doing the same important 
> queries over and over again).
> - LOCAL_QUORUM is too expensive (need >3 replicas), QUORUM is too slow.
> I'll try to come up with a patch demonstrating how this would work, try it on 
> our system and report the effects.
> cc: [~adejanovski], [~rgerard] as I know you worked on similar issues already.
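The workaround described in the report above (purging the overlapping blockers via aggressive tombstone compaction) would look roughly like this in CQL. The keyspace, table name, window settings, and threshold values are illustrative only, not taken from the reporter's cluster:

```sql
-- Illustrative only: force tombstone compactions on overlapping SSTables so
-- fully expired ones can eventually be dropped (the CPU-intensive workaround
-- the ticket wants to avoid).
ALTER TABLE metrics.series
  WITH compaction = {
    'class': 'TimeWindowCompactionStrategy',
    'compaction_window_unit': 'DAYS',
    'compaction_window_size': '1',
    'unchecked_tombstone_compaction': 'true',
    'tombstone_threshold': '0.05'
  };
```

The proposed improvement would instead add a strategy option that lets TWCS skip the overlap check entirely when selecting fully expired SSTables.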






[jira] [Commented] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables

2017-06-20 Thread Romain GERARD (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16055289#comment-16055289
 ] 

Romain GERARD commented on CASSANDRA-13418:
---

Hello Jonathan, thanks for keeping us up to date :)
On my side, I have deployed the patch I mentioned earlier, and at first glance 
it is running fine. For now I lack the time to analyse the new behavior in 
more depth, but I will do so in the upcoming weeks. So I will 
keep the thread informed.

> Allow TWCS to ignore overlaps when dropping fully expired sstables
> --
>
> Key: CASSANDRA-13418
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13418
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Corentin Chary
>  Labels: twcs
>
> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If 
> you really want read-repairs you're going to have sstables blocking the 
> expiration of other fully expired SSTables because they overlap.
> You can set unchecked_tombstone_compaction = true or tombstone_threshold to a 
> very low value and that will purge the blockers of old data that should 
> already have expired, thus removing the overlaps and allowing the other 
> SSTables to expire.
> The thing is that this is rather CPU intensive and not optimal. If you have 
> time series, you might not care if all your data doesn't exactly expire at 
> the right time, or if data re-appears for some time, as long as it gets 
> deleted as soon as it can. And in this situation I believe it would be really 
> beneficial to allow users to simply ignore overlapping SSTables when looking 
> for fully expired ones.
> To the question: why would you need read-repairs?
> - Full repairs basically take longer than the TTL of the data on my dataset, 
> so this isn't really effective.
> - Even with a 10% chance of doing a repair, we found that this would be 
> enough to greatly reduce entropy of the most used data (and if you have 
> timeseries, you're likely to have a dashboard doing the same important 
> queries over and over again).
> - LOCAL_QUORUM is too expensive (need >3 replicas), QUORUM is too slow.
> I'll try to come up with a patch demonstrating how this would work, try it on 
> our system and report the effects.
> cc: [~adejanovski], [~rgerard] as I know you worked on similar issues already.
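For reference, the workaround described above (lowering the tombstone settings so blockers get purged) is applied through a table's compaction options; a sketch of what that looks like, where the keyspace/table names and the 0.05 threshold are placeholders, not values from this ticket:

{code:sql}
ALTER TABLE ks.metrics
WITH compaction = {
  'class': 'TimeWindowCompactionStrategy',
  'unchecked_tombstone_compaction': 'true',
  'tombstone_threshold': '0.05'
};
{code}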






[jira] [Created] (CASSANDRA-13622) Better config validation/documentation

2017-06-20 Thread Kurt Greaves (JIRA)
Kurt Greaves created CASSANDRA-13622:


 Summary: Better config validation/documentation
 Key: CASSANDRA-13622
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13622
 Project: Cassandra
  Issue Type: Bug
Reporter: Kurt Greaves
Priority: Minor


There are a number of properties in the yaml that are "in_mb" but resolve to 
bytes when calculated in {{DatabaseDescriptor.java}}, where they are stored in 
ints. This means their maximum value is 2047, as any higher value overflows 
the int when converted to bytes.

Where possible/reasonable we should convert these to longs and store them as 
longs. If there is no reason for the value to ever be >2047, we should at 
least document that as the max value, or better yet make it an error to set it 
any higher. Although it's usually bad practice to raise these settings to such 
high values, there may be cases where it is necessary, and in those cases we 
should handle it appropriately rather than overflowing and surprising the 
user. That is, today it breaks, but not in the way the user expects it to :)

Following are functions that currently could be at risk of the above:

{code:java|title=DatabaseDescriptor.java}
getThriftFramedTransportSize()
getMaxValueSize()
getCompactionLargePartitionWarningThreshold()
getCommitLogSegmentSize()
getNativeTransportMaxFrameSize()
// These are in KB, so the max value is 2096128
getBatchSizeWarnThreshold()
getColumnIndexSize()
getColumnIndexCacheSize()
getMaxMutationSize()
{code}

Note we may not actually need to fix all of these, and there may be more. This 
was just from a rough scan over the code.
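To illustrate the overflow itself, here is a minimal standalone sketch (the class name is made up for the example; only the multiplication pattern mirrors what the converters above do):

```java
public class MbOverflowDemo {
    public static void main(String[] args) {
        // A hypothetical "in_mb" setting at 2048, one above the safe maximum of 2047
        int mb = 2048;

        // int arithmetic: 2048 * 1024 * 1024 == 2^31, which wraps negative
        int bytesAsInt = mb * 1024 * 1024;
        System.out.println(bytesAsInt);        // prints -2147483648

        // widening to long before multiplying gives the intended value
        long bytesAsLong = (long) mb * 1024 * 1024;
        System.out.println(bytesAsLong);       // prints 2147483648
    }
}
```

Anything stored back into an int field would silently keep the wrapped negative value, which is exactly the surprising breakage described above.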






[jira] [Commented] (CASSANDRA-13581) Adding plugins support to Cassandra's webpage

2017-06-20 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16055253#comment-16055253
 ] 

Stefan Podkowinski commented on CASSANDRA-13581:


Looks like the initial patch was already merged by [~jjirsa] in 74e3f152229078

> Adding plugins support to Cassandra's webpage
> -
>
> Key: CASSANDRA-13581
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13581
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation and Website
>Reporter: Amitkumar Ghatwal
>  Labels: documentation
> Fix For: 4.x
>
>
> Hi [~spo...@gmail.com],
> As was suggested here : 
> http://www.mail-archive.com/dev@cassandra.apache.org/msg11183.html .  Have 
> created the necessary *.rst file to create "plugins" link here : 
> https://cassandra.apache.org/doc/latest/.
> Have followed the steps here : 
> https://cassandra.apache.org/doc/latest/development/documentation.html  and 
> raised a PR : https://github.com/apache/cassandra/pull/118 for introducing 
> plugins support on Cassandra's Webpage.
> Let me know your review comments, and if I have not made the changes to 
> Cassandra's website correctly I can rectify them.






[jira] [Updated] (CASSANDRA-13581) Adding plugins support to Cassandra's webpage

2017-06-20 Thread Stefan Podkowinski (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Podkowinski updated CASSANDRA-13581:
---
Reviewer:   (was: Stefan Podkowinski)

> Adding plugins support to Cassandra's webpage
> -
>
> Key: CASSANDRA-13581
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13581
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation and Website
>Reporter: Amitkumar Ghatwal
>  Labels: documentation
> Fix For: 4.x
>
>
> Hi [~spo...@gmail.com],
> As was suggested here : 
> http://www.mail-archive.com/dev@cassandra.apache.org/msg11183.html .  Have 
> created the necessary *.rst file to create "plugins" link here : 
> https://cassandra.apache.org/doc/latest/.
> Have followed the steps here : 
> https://cassandra.apache.org/doc/latest/development/documentation.html  and 
> raised a PR : https://github.com/apache/cassandra/pull/118 for introducing 
> plugins support on Cassandra's Webpage.
> Let me know your review comments, and if I have not made the changes to 
> Cassandra's website correctly I can rectify them.






[jira] [Resolved] (CASSANDRA-12778) Tombstones not being deleted when only_purge_repaired_tombstones is enabled

2017-06-20 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson resolved CASSANDRA-12778.
-
Resolution: Not A Problem

Please reopen if CASSANDRA-9143 does not solve this

> Tombstones not being deleted when only_purge_repaired_tombstones is enabled
> ---
>
> Key: CASSANDRA-12778
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12778
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Arvind Nithrakashyap
>Assignee: Marcus Eriksson
>Priority: Critical
>
> When we use only_purge_repaired_tombstones for compaction, we noticed that 
> tombstones are no longer being deleted.
> {noformat}compaction = {'class': 
> 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy', 
> 'only_purge_repaired_tombstones': 'true'}{noformat}
> The root cause seems to be that repair itself issues a flush, which in turn 
> leads to a new sstable being created (one that is not in the repair set). It 
> looks like we do have some old data in this sstable; because of this, only 
> tombstones older than that timestamp are getting deleted, even though many 
> more keys have been repaired. 
> Fundamentally it looks like flush and repair can race with each other and 
> with leveled compaction, the flush creates a new sstable at level 0 and 
> removes the older sstable (the one that is picked for repair). Since repair 
> itself seems to issue multiple flushes, the level 0 sstable never gets 
> repaired and hence tombstones never get deleted. 
> We have already included the fix for CASSANDRA-12703 while testing. 






[jira] [Resolved] (CASSANDRA-13580) Readonly datacenter support

2017-06-20 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson resolved CASSANDRA-13580.
-
Resolution: Later

> Readonly datacenter support
> ---
>
> Key: CASSANDRA-13580
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13580
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 4.x
>
>
> Some setups include datacenters where only reads are performed (for example, 
> a datacenter dedicated to taking backups).
> We could use this information during repair to make sure that we never stream 
> out of a read-only dc.






[jira] [Updated] (CASSANDRA-13580) Readonly datacenter support

2017-06-20 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-13580:

Fix Version/s: (was: 4.x)

> Readonly datacenter support
> ---
>
> Key: CASSANDRA-13580
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13580
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>
> Some setups include datacenters where only reads are performed (for example, 
> a datacenter dedicated to taking backups).
> We could use this information during repair to make sure that we never stream 
> out of a read-only dc.






[jira] [Updated] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables

2017-06-20 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-13418:

Reviewer: Marcus Eriksson

> Allow TWCS to ignore overlaps when dropping fully expired sstables
> --
>
> Key: CASSANDRA-13418
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13418
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Corentin Chary
>  Labels: twcs
>
> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If 
> you really want read-repairs you're going to have sstables blocking the 
> expiration of other fully expired SSTables because they overlap.
> You can set unchecked_tombstone_compaction = true or tombstone_threshold to a 
> very low value and that will purge the blockers of old data that should 
> already have expired, thus removing the overlaps and allowing the other 
> SSTables to expire.
> The thing is that this is rather CPU intensive and not optimal. If you have 
> time series, you might not care if all your data doesn't exactly expire at 
> the right time, or if data re-appears for some time, as long as it gets 
> deleted as soon as it can. And in this situation I believe it would be really 
> beneficial to allow users to simply ignore overlapping SSTables when looking 
> for fully expired ones.
> To the question: why would you need read-repairs?
> - Full repairs basically take longer than the TTL of the data on my dataset, 
> so this isn't really effective.
> - Even with a 10% chance of doing a repair, we found that this would be 
> enough to greatly reduce entropy of the most used data (and if you have 
> timeseries, you're likely to have a dashboard doing the same important 
> queries over and over again).
> - LOCAL_QUORUM is too expensive (need >3 replicas), QUORUM is too slow.
> I'll try to come up with a patch demonstrating how this would work, try it on 
> our system and report the effects.
> cc: [~adejanovski], [~rgerard] as I know you worked on similar issues already.






[jira] [Updated] (CASSANDRA-13068) Fully expired sstable not dropped when running out of disk space

2017-06-20 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-13068:
---
Labels: lhf  (was: lhf twcs)

> Fully expired sstable not dropped when running out of disk space
> 
>
> Key: CASSANDRA-13068
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13068
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Marcus Eriksson
>Assignee: Lerh Chuan Low
>  Labels: lhf
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> If a fully expired sstable is larger than the remaining disk space we won't 
> run the compaction that can drop the sstable (i.e., our disk space check 
> should not include the fully expired sstables)






[jira] [Updated] (CASSANDRA-13068) Fully expired sstable not dropped when running out of disk space

2017-06-20 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-13068:
---
Labels: lhf twcs  (was: lhf)

> Fully expired sstable not dropped when running out of disk space
> 
>
> Key: CASSANDRA-13068
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13068
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Marcus Eriksson
>Assignee: Lerh Chuan Low
>  Labels: lhf, twcs
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> If a fully expired sstable is larger than the remaining disk space we won't 
> run the compaction that can drop the sstable (i.e., our disk space check 
> should not include the fully expired sstables)


