[jira] [Updated] (CASSANDRA-13366) Possible AssertionError in UnfilteredRowIteratorWithLowerBound

2017-03-22 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-13366:
-
Status: Ready to Commit  (was: Patch Available)

> Possible AssertionError in UnfilteredRowIteratorWithLowerBound
> --
>
> Key: CASSANDRA-13366
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13366
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 3.11.x
>
>
> In the code introduced by CASSANDRA-8180, we build a lower bound for a
> partition (sometimes) based on the min clustering values of the stats file.
> We can't do that if the sstable has a range tombstone marker, and the code
> does check whether this is the case, but unfortunately the check is done using
> the stats {{minLocalDeletionTime}}, and that value isn't populated properly in
> pre-3.0 sstables. This means that if you upgrade from 2.1/2.2 to 3.4+, you may
> end up getting an exception like
> {noformat}
> WARN  [ReadStage-2] 2017-03-20 13:29:39,165  
> AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread 
> Thread[ReadStage-2,5,main]: {}
> java.lang.AssertionError: Lower bound [INCL_START_BOUND(Foo, 
> -9223372036854775808, -9223372036854775808) ]is bigger than first returned 
> value [Marker INCL_START_BOUND(Foo)@1490013810540999] for sstable 
> /var/lib/cassandra/data/system/size_estimates-618f817b005f3678b8a453f3930b8e86/system-size_estimates-ka-1-Data.db
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorWithLowerBound.computeNext(UnfilteredRowIteratorWithLowerBound.java:122)
> {noformat}
> and this persists until the sstable is upgraded.
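For readers skimming the thread, the faulty guard boils down to something like the following. This is a hypothetical simplification for illustration only, not the actual patch; the names echo Cassandra's stats metadata, and the extra format check is the assumed shape of the fix, since pre-3.0 sstables never populate {{minLocalDeletionTime}}:

```java
// Illustrative sketch (not Cassandra's real API): decide whether the
// min clustering values from the stats file are a safe lower bound.
final class LowerBoundCheck
{
    // Sentinel meaning "no deletion anywhere in the sstable".
    static final int NO_DELETION_TIME = Integer.MAX_VALUE;

    static boolean canUseMinClusteringAsLowerBound(int minLocalDeletionTime, boolean isPre30Format)
    {
        // Pre-3.0 sstables never populate minLocalDeletionTime, so the
        // tombstone check below is meaningless for them: be conservative.
        if (isPre30Format)
            return false;
        // Any value below NO_DELETION_TIME means the sstable may contain
        // range tombstone markers, whose bounds are not reflected in the
        // min clustering values, so the stats lower bound would be unsafe.
        return minLocalDeletionTime == NO_DELETION_TIME;
    }
}
```

The bug described above corresponds to skipping the format check: on an upgraded `ka` sstable the sentinel is present by accident, the bound is wrongly used, and the assertion fires.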



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13366) Possible AssertionError in UnfilteredRowIteratorWithLowerBound

2017-03-22 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15937635#comment-15937635
 ] 

Stefania commented on CASSANDRA-13366:
--

Thanks for fixing this [~slebresne], it LGTM and the comments you've added are 
extremely useful.

CI results also look good.

Two typos:
[here|https://github.com/pcmanus/cassandra/commit/f7fa6e97581e8e7eab739c584878bb1ea564f18a#commitcomment-21451015]
 and 
[here|https://github.com/pcmanus/cassandra/commit/f7fa6e97581e8e7eab739c584878bb1ea564f18a#commitcomment-21450989].
 

I also assume that
[{{mayOverlapWith()}}|https://github.com/pcmanus/cassandra/commit/f7fa6e97581e8e7eab739c584878bb1ea564f18a#diff-894e091348f28001de5b7fe88e65733fL2016]
was removed, despite being public, because it is unreliable in the presence of
range tombstones and compact tables; I think that's justifiable.




[jira] [Commented] (CASSANDRA-12728) Handling partially written hint files

2017-03-22 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15937540#comment-15937540
 ] 

Jeff Jirsa commented on CASSANDRA-12728:


It is true, however, that there are [other 
places|https://github.com/apache/cassandra/blob/cassandra-3.9/src/java/org/apache/cassandra/hints/HintsReader.java#L239-L246]
 where we clearly log+skip potentially corrupt data, and hints ARE best effort 
(but that doesn't mean they're so worthless that we want to throw them away, 
and more importantly, we really need to be careful anytime we see any kind of 
corruption).

Also worth noting that we're failing to calculate the next page, so we don't
know whether it's the last hint or potentially many hints. Further, we don't
know whether the file is corrupt because we shut down uncleanly, or because
the disk is failing and giving us invalid blocks.

If you follow [~iamaleksey]'s suggestion, though, and make hints inspect
errors the same way the commitlog does (see
https://github.com/apache/cassandra/blob/cassandra-3.9/src/java/org/apache/cassandra/db/commitlog/CommitLog.java#L480-L498
and/or
https://github.com/apache/cassandra/blob/cassandra-3.9/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java#L390-L413),
we can let the user decide how paranoid they want to be.
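The commitlog-style approach alluded to above could look roughly like this. The enum and method names here are assumptions for illustration, modeled loosely on how {{CommitLog.handleCommitError}} consults {{commit_failure_policy}}; they are not Cassandra's actual hints API:

```java
// Hypothetical operator-configurable corruption policy for hint delivery,
// in the spirit of commit_failure_policy. Names are illustrative only.
enum HintCorruptionPolicy { DIE, STOP, IGNORE }

final class HintErrorHandler
{
    /**
     * Returns true if delivery may continue past the corrupt section,
     * false if delivery from this file should stop (file kept for inspection).
     */
    static boolean handleCorruption(HintCorruptionPolicy policy, String file, Throwable cause)
    {
        switch (policy)
        {
            case DIE:    // paranoid: any corruption is treated as fatal
                throw new RuntimeException("Corrupt hint file " + file, cause);
            case STOP:   // stop delivering from this file, keep it around
                return false;
            case IGNORE: // best-effort: log, skip the rest, move on
            default:
                return true;
        }
    }
}
```

Under this sketch, an operator who suspects failing disks would run with DIE or STOP, while one who treats hints as purely best-effort would run with IGNORE.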


[jira] [Commented] (CASSANDRA-12728) Handling partially written hint files

2017-03-22 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15937488#comment-15937488
 ] 

Jeff Jirsa commented on CASSANDRA-12728:


Ignore my previous (now deleted) comment - I hadn't seen your response.  I 
haven't yet read the code, so I'm not the best person to answer that. 



> Handling partially written hint files
> -
>
> Key: CASSANDRA-12728
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12728
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sharvanath Pathak
>  Labels: lhf
> Attachments: CASSANDRA-12728.patch
>
>
> {noformat}
> ERROR [HintsDispatcher:1] 2016-09-28 17:44:43,397 
> HintsDispatchExecutor.java:225 - Failed to dispatch hints file 
> d5d7257c-9f81-49b2-8633-6f9bda6e3dea-1474892654160-1.hints: file is corrupted 
> ({})
> org.apache.cassandra.io.FSReadError: java.io.EOFException
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:282)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:252)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsDispatcher.sendHints(HintsDispatcher.java:156)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsDispatcher.sendHintsAndAwait(HintsDispatcher.java:137)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:119) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:91) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.deliver(HintsDispatchExecutor.java:259)
>  [apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:242)
>  [apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:220)
>  [apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:199)
>  [apache-cassandra-3.0.6.jar:3.0.6]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_77]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [na:1.8.0_77]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_77]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_77]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77]
> Caused by: java.io.EOFException: null
> at 
> org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:68)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:60)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.ChecksummedDataInput.readFully(ChecksummedDataInput.java:126)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:402) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.readBuffer(HintsReader.java:310)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNextInternal(HintsReader.java:301)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:278)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> ... 15 common frames omitted
> {noformat}
> We found out that the hint file was truncated because there was a hard
> reboot around the time of the last write to the file. I think we basically need
> to handle partially written hint files. Also, the CRC file does not exist in
> this case (probably because the node crashed while writing the hints file). Maybe
> ignoring and cleaning up such partially written hint files would be a way to
> fix this?
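The missing-CRC situation in the report above can be checked in the obvious way: validate the data file against its separately stored checksum, and treat a missing or mismatched checksum as "possibly partially written". This sketch uses plain `java.util.zip.CRC32`; the method names and the side-file convention are illustrative assumptions, not Cassandra's actual hints format:

```java
import java.util.zip.CRC32;

// Hypothetical validation of a data file against a separately stored CRC.
final class CrcValidator
{
    static long crcOf(byte[] data)
    {
        CRC32 crc = new CRC32();
        crc.update(data);
        return crc.getValue();
    }

    // expectedCrc == null models the missing .crc32 side file from the report.
    static boolean looksComplete(byte[] data, Long expectedCrc)
    {
        return expectedCrc != null && expectedCrc == crcOf(data);
    }
}
```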





[jira] [Issue Comment Deleted] (CASSANDRA-12728) Handling partially written hint files

2017-03-22 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-12728:
---
Comment: was deleted

(was: [~garvitjuniwal] , unless you very much want to do this in the near 
future, I'll be starting this tomorrow. )



[jira] [Commented] (CASSANDRA-12728) Handling partially written hint files

2017-03-22 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15937484#comment-15937484
 ] 

Jeff Jirsa commented on CASSANDRA-12728:


[~garvitjuniwal] , unless you very much want to do this in the near future, 
I'll be starting this tomorrow. 



[jira] [Commented] (CASSANDRA-12728) Handling partially written hint files

2017-03-22 Thread Garvit Juniwal (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15937459#comment-15937459
 ] 

Garvit Juniwal commented on CASSANDRA-12728:


[~jjirsa]
Assuming that writes are only acknowledged to the client after hints are "synced"
to disk, I believe the patch I have is correct, because you can safely ignore
any partially flushed hint at the end of the file that was not synced.
If hints are written lazily (i.e., writes are acknowledged even before syncing
hints to disk), it is implicit that Cassandra is resilient to losing hints in
the face of crashes. Even in this scenario, dropping the last partially written
hint is correct.

I do not understand the suggestion of making this an operator decision. There
could be hints that never made it to disk in the face of a crash, and I do
not know how you would detect them and force a crash.
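The "drop the truncated tail" behavior argued for above can be sketched over a toy length-prefixed record format (this is not Cassandra's real hint encoding; the class and format are illustrative assumptions): records that are fully present are kept, and a partial final record, the kind left by a hard reboot mid-write, is silently ignored.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Read length-prefixed records, discarding a truncated trailing record.
final class TruncatedTailReader
{
    static List<byte[]> readComplete(byte[] file)
    {
        List<byte[]> records = new ArrayList<>();
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(file)))
        {
            while (true)
            {
                int len = in.readInt();   // EOFException here = clean end of file
                byte[] buf = new byte[len];
                in.readFully(buf);        // EOFException here = truncated mid-record
                records.add(buf);
            }
        }
        catch (EOFException eof)
        {
            // End of data, possibly mid-record: keep only what was fully read.
        }
        catch (IOException e)
        {
            throw new RuntimeException(e);
        }
        return records;
    }
}
```

For example, a file containing one complete two-byte record followed by a record whose length prefix promises five bytes but whose payload was cut off yields exactly one record.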


[jira] [Assigned] (CASSANDRA-12728) Handling partially written hint files

2017-03-22 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa reassigned CASSANDRA-12728:
--

Assignee: Jeff Jirsa



[jira] [Assigned] (CASSANDRA-12728) Handling partially written hint files

2017-03-22 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa reassigned CASSANDRA-12728:
--

Assignee: (was: Jeff Jirsa)

> [na:1.8.0_77]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_77]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_77]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77]
> Caused by: java.io.EOFException: null
> at 
> org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:68)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:60)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.ChecksummedDataInput.readFully(ChecksummedDataInput.java:126)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:402) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.readBuffer(HintsReader.java:310)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNextInternal(HintsReader.java:301)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:278)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> ... 15 common frames omitted
> {noformat}
> We found out that the hint file was truncated because there was a hard 
> reboot around the time of the last write to the file. I think we basically 
> need to handle partially written hint files. Also, the CRC file does not 
> exist in this case (probably because the node crashed while writing the hints 
> file). Maybe ignoring and cleaning up such partially written hint files would 
> be a way to fix this?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13370) unittest CipherFactoryTest failed on MacOS

2017-03-22 Thread Jay Zhuang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15937193#comment-15937193
 ] 

Jay Zhuang commented on CASSANDRA-13370:


[~aweisberg] how about this fix: 
[718f67d|https://github.com/cooldoger/cassandra/commit/718f67d711c15b0d9dbebce3065064c73efd85e5]?

> unittest CipherFactoryTest failed on MacOS
> --
>
> Key: CASSANDRA-13370
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13370
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
>Priority: Minor
> Attachments: 13370-trunk.txt
>
>
> It seems that macOS (El Capitan) doesn't allow writing to {{/dev/urandom}}:
> {code}
> $ echo 1 > /dev/urandom
> echo: write error: operation not permitted
> {code}
> This causes CipherFactoryTest to fail:
> {code}
> $ ant test -Dtest.name=CipherFactoryTest
> ...
> [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest
> [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest Tests 
> run: 7, Failures: 0, Errors: 7, Skipped: 0, Time elapsed: 2.184 sec
> [junit]
> [junit] Testcase: 
> buildCipher_SameParams(org.apache.cassandra.security.CipherFactoryTest):  
> Caused an ERROR
> [junit] setSeed() failed
> [junit] java.security.ProviderException: setSeed() failed
> [junit] at 
> sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:472)
> [junit] at 
> sun.security.provider.NativePRNG$RandomIO.access$300(NativePRNG.java:331)
> [junit] at 
> sun.security.provider.NativePRNG.engineSetSeed(NativePRNG.java:214)
> [junit] at 
> java.security.SecureRandom.getDefaultPRNG(SecureRandom.java:209)
> [junit] at java.security.SecureRandom.<init>(SecureRandom.java:190)
> [junit] at 
> org.apache.cassandra.security.CipherFactoryTest.setup(CipherFactoryTest.java:50)
> [junit] Caused by: java.io.IOException: Operation not permitted
> [junit] at java.io.FileOutputStream.writeBytes(Native Method)
> [junit] at java.io.FileOutputStream.write(FileOutputStream.java:313)
> [junit] at 
> sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:470)
> ...
> {code}
> I'm able to reproduce the issue on two Mac machines, but I'm not sure whether 
> it affects other developers.
> {{-Djava.security.egd=file:/dev/urandom}} was introduced in CASSANDRA-9581.
> I would suggest reverting the 
> [change|https://github.com/apache/cassandra/commit/ae179e45327a133248c06019f87615c9cf69f643]
>  as {{pig-test}} has been removed ([pig is no longer 
> supported|https://github.com/apache/cassandra/commit/56cfc6ea35d1410f2f5a8ae711ae33342f286d79]).
> Alternatively, we could add a condition for macOS in build.xml.
> [~aweisberg] [~jasobrown] any thoughts?
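If reverting is not an option, the OS-specific condition in build.xml could look roughly like this (a hypothetical sketch using Ant's {{<condition>}} and {{<os>}} tasks; the {{test.egd.arg}} property name is invented for illustration):

```xml
<!-- Hypothetical sketch: only pass the egd override on non-macOS JVMs. -->
<condition property="test.egd.arg" value="" else="-Djava.security.egd=file:/dev/urandom">
    <os family="mac"/>
</condition>
<!-- ...then inside the junit task: -->
<jvmarg line="${test.egd.arg}"/>
```

Ant evaluates the {{<os family="mac"/>}} test at build time, so Linux and Windows builds would keep the current behavior unchanged.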



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13370) unittest CipherFactoryTest failed on MacOS

2017-03-22 Thread Jay Zhuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Zhuang updated CASSANDRA-13370:
---
Status: Patch Available  (was: Open)

> unittest CipherFactoryTest failed on MacOS
> --
>
> Key: CASSANDRA-13370
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13370
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
>Priority: Minor
> Attachments: 13370-trunk.txt
>
>
> It seems that macOS (El Capitan) doesn't allow writing to {{/dev/urandom}}:
> {code}
> $ echo 1 > /dev/urandom
> echo: write error: operation not permitted
> {code}
> This causes CipherFactoryTest to fail:
> {code}
> $ ant test -Dtest.name=CipherFactoryTest
> ...
> [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest
> [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest Tests 
> run: 7, Failures: 0, Errors: 7, Skipped: 0, Time elapsed: 2.184 sec
> [junit]
> [junit] Testcase: 
> buildCipher_SameParams(org.apache.cassandra.security.CipherFactoryTest):  
> Caused an ERROR
> [junit] setSeed() failed
> [junit] java.security.ProviderException: setSeed() failed
> [junit] at 
> sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:472)
> [junit] at 
> sun.security.provider.NativePRNG$RandomIO.access$300(NativePRNG.java:331)
> [junit] at 
> sun.security.provider.NativePRNG.engineSetSeed(NativePRNG.java:214)
> [junit] at 
> java.security.SecureRandom.getDefaultPRNG(SecureRandom.java:209)
> [junit] at java.security.SecureRandom.<init>(SecureRandom.java:190)
> [junit] at 
> org.apache.cassandra.security.CipherFactoryTest.setup(CipherFactoryTest.java:50)
> [junit] Caused by: java.io.IOException: Operation not permitted
> [junit] at java.io.FileOutputStream.writeBytes(Native Method)
> [junit] at java.io.FileOutputStream.write(FileOutputStream.java:313)
> [junit] at 
> sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:470)
> ...
> {code}
> I'm able to reproduce the issue on two Mac machines, but I'm not sure whether 
> it affects other developers.
> {{-Djava.security.egd=file:/dev/urandom}} was introduced in CASSANDRA-9581.
> I would suggest reverting the 
> [change|https://github.com/apache/cassandra/commit/ae179e45327a133248c06019f87615c9cf69f643]
>  as {{pig-test}} has been removed ([pig is no longer 
> supported|https://github.com/apache/cassandra/commit/56cfc6ea35d1410f2f5a8ae711ae33342f286d79]).
> Alternatively, we could add a condition for macOS in build.xml.
> [~aweisberg] [~jasobrown] any thoughts?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13370) unittest CipherFactoryTest failed on MacOS

2017-03-22 Thread Jay Zhuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Zhuang updated CASSANDRA-13370:
---
Attachment: 13370-trunk.txt

> unittest CipherFactoryTest failed on MacOS
> --
>
> Key: CASSANDRA-13370
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13370
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
>Priority: Minor
> Attachments: 13370-trunk.txt
>
>
> It seems that macOS (El Capitan) doesn't allow writing to {{/dev/urandom}}:
> {code}
> $ echo 1 > /dev/urandom
> echo: write error: operation not permitted
> {code}
> This causes CipherFactoryTest to fail:
> {code}
> $ ant test -Dtest.name=CipherFactoryTest
> ...
> [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest
> [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest Tests 
> run: 7, Failures: 0, Errors: 7, Skipped: 0, Time elapsed: 2.184 sec
> [junit]
> [junit] Testcase: 
> buildCipher_SameParams(org.apache.cassandra.security.CipherFactoryTest):  
> Caused an ERROR
> [junit] setSeed() failed
> [junit] java.security.ProviderException: setSeed() failed
> [junit] at 
> sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:472)
> [junit] at 
> sun.security.provider.NativePRNG$RandomIO.access$300(NativePRNG.java:331)
> [junit] at 
> sun.security.provider.NativePRNG.engineSetSeed(NativePRNG.java:214)
> [junit] at 
> java.security.SecureRandom.getDefaultPRNG(SecureRandom.java:209)
> [junit] at java.security.SecureRandom.<init>(SecureRandom.java:190)
> [junit] at 
> org.apache.cassandra.security.CipherFactoryTest.setup(CipherFactoryTest.java:50)
> [junit] Caused by: java.io.IOException: Operation not permitted
> [junit] at java.io.FileOutputStream.writeBytes(Native Method)
> [junit] at java.io.FileOutputStream.write(FileOutputStream.java:313)
> [junit] at 
> sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:470)
> ...
> {code}
> I'm able to reproduce the issue on two Mac machines, but I'm not sure whether 
> it affects other developers.
> {{-Djava.security.egd=file:/dev/urandom}} was introduced in CASSANDRA-9581.
> I would suggest reverting the 
> [change|https://github.com/apache/cassandra/commit/ae179e45327a133248c06019f87615c9cf69f643]
>  as {{pig-test}} has been removed ([pig is no longer 
> supported|https://github.com/apache/cassandra/commit/56cfc6ea35d1410f2f5a8ae711ae33342f286d79]).
> Alternatively, we could add a condition for macOS in build.xml.
> [~aweisberg] [~jasobrown] any thoughts?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (CASSANDRA-13370) unittest CipherFactoryTest failed on MacOS

2017-03-22 Thread Jay Zhuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Zhuang reassigned CASSANDRA-13370:
--

Assignee: Jay Zhuang

> unittest CipherFactoryTest failed on MacOS
> --
>
> Key: CASSANDRA-13370
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13370
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
>Priority: Minor
>
> It seems that macOS (El Capitan) doesn't allow writing to {{/dev/urandom}}:
> {code}
> $ echo 1 > /dev/urandom
> echo: write error: operation not permitted
> {code}
> This causes CipherFactoryTest to fail:
> {code}
> $ ant test -Dtest.name=CipherFactoryTest
> ...
> [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest
> [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest Tests 
> run: 7, Failures: 0, Errors: 7, Skipped: 0, Time elapsed: 2.184 sec
> [junit]
> [junit] Testcase: 
> buildCipher_SameParams(org.apache.cassandra.security.CipherFactoryTest):  
> Caused an ERROR
> [junit] setSeed() failed
> [junit] java.security.ProviderException: setSeed() failed
> [junit] at 
> sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:472)
> [junit] at 
> sun.security.provider.NativePRNG$RandomIO.access$300(NativePRNG.java:331)
> [junit] at 
> sun.security.provider.NativePRNG.engineSetSeed(NativePRNG.java:214)
> [junit] at 
> java.security.SecureRandom.getDefaultPRNG(SecureRandom.java:209)
> [junit] at java.security.SecureRandom.<init>(SecureRandom.java:190)
> [junit] at 
> org.apache.cassandra.security.CipherFactoryTest.setup(CipherFactoryTest.java:50)
> [junit] Caused by: java.io.IOException: Operation not permitted
> [junit] at java.io.FileOutputStream.writeBytes(Native Method)
> [junit] at java.io.FileOutputStream.write(FileOutputStream.java:313)
> [junit] at 
> sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:470)
> ...
> {code}
> I'm able to reproduce the issue on two Mac machines, but I'm not sure whether 
> it affects other developers.
> {{-Djava.security.egd=file:/dev/urandom}} was introduced in CASSANDRA-9581.
> I would suggest reverting the 
> [change|https://github.com/apache/cassandra/commit/ae179e45327a133248c06019f87615c9cf69f643]
>  as {{pig-test}} has been removed ([pig is no longer 
> supported|https://github.com/apache/cassandra/commit/56cfc6ea35d1410f2f5a8ae711ae33342f286d79]).
> Alternatively, we could add a condition for macOS in build.xml.
> [~aweisberg] [~jasobrown] any thoughts?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13368) Exception Stack not Printed as Intended in Error Logs

2017-03-22 Thread William R. Speirs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

William R. Speirs updated CASSANDRA-13368:
--
Attachment: cassandra-13368-2.1.patch

> Exception Stack not Printed as Intended in Error Logs
> -
>
> Key: CASSANDRA-13368
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13368
> Project: Cassandra
>  Issue Type: Bug
>Reporter: William R. Speirs
>Priority: Trivial
>  Labels: lhf
> Fix For: 2.1.x
>
> Attachments: cassandra-13368-2.1.patch
>
>
> There are a number of instances where it appears the programmer intended to 
> print a stack trace in an error message, but it is not actually being 
> printed. For example, in {{BlacklistedDirectories.java:54}}:
> {noformat}
> catch (Exception e)
> {
> JVMStabilityInspector.inspectThrowable(e);
> logger.error("error registering MBean {}", MBEAN_NAME, e);
> //Allow the server to start even if the bean can't be registered
> }
> {noformat}
> The logger will use the second argument for the braces, but will ignore the 
> exception {{e}}. It would be helpful to have the stack traces of these 
> exceptions printed. I propose adding a second line that prints the full stack 
> trace: {{logger.error(e.getMessage(), e);}}
> On the 2.1 branch, I found 8 instances of these types of messages:
> {noformat}
> db/BlacklistedDirectories.java:54:logger.error("error registering 
> MBean {}", MBEAN_NAME, e);
> io/sstable/SSTableReader.java:512:logger.error("Corrupt sstable 
> {}; skipped", descriptor, e);
> net/OutboundTcpConnection.java:228:logger.error("error 
> processing a message intended for {}", poolReference.endPoint(), e);
> net/OutboundTcpConnection.java:314:logger.error("error 
> writing to {}", poolReference.endPoint(), e);
> service/CassandraDaemon.java:231:logger.error("Exception in 
> thread {}", t, e);
> service/CassandraDaemon.java:562:logger.error("error 
> registering MBean {}", MBEAN_NAME, e);
> streaming/StreamSession.java:512:logger.error("[Stream #{}] 
> Streaming error occurred", planId(), e);
> transport/Server.java:442:logger.error("Problem retrieving 
> RPC address for {}", endpoint, e);
> {noformat}
> And one where it'll print the {{toString()}} version of the exception:
> {noformat}
> db/Directories.java:689:logger.error("Could not calculate the 
> size of {}. {}", input, e);
> {noformat}
> I'm happy to create a patch for each branch; I just need a little guidance on 
> how to do so. We're currently running 2.1, so I started there.
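To see why the extra argument can be lost, the placeholder substitution can be mimicked with a tiny stand-in formatter (a simplified illustration only, not SLF4J's actual implementation; {{format}} here is a made-up helper, and note that recent SLF4J versions do special-case a trailing Throwable):

```java
// Hypothetical stand-in for parameterized log formatting: each {} consumes
// one argument in order, and surplus arguments are simply dropped -- so an
// exception passed after the last placeholder contributes nothing.
public class PlaceholderDemo
{
    static String format(String pattern, Object... args)
    {
        StringBuilder sb = new StringBuilder();
        int argIndex = 0;
        int i = 0;
        while (i < pattern.length())
        {
            if (i + 1 < pattern.length() && pattern.charAt(i) == '{' && pattern.charAt(i + 1) == '}')
            {
                // Substitute the next argument, or leave the literal {} if exhausted.
                sb.append(argIndex < args.length ? String.valueOf(args[argIndex++]) : "{}");
                i += 2;
            }
            else
            {
                sb.append(pattern.charAt(i++));
            }
        }
        return sb.toString();
    }

    public static void main(String[] args)
    {
        Exception e = new RuntimeException("boom");
        // One placeholder, two arguments: the exception never reaches the output,
        // and its stack trace is certainly never rendered.
        System.out.println(format("error registering MBean {}", "MBEAN_NAME", e));
        // prints: error registering MBean MBEAN_NAME
    }
}
```

Logging the exception on its own line, as the patch proposes with {{logger.error(e.getMessage(), e);}}, sidesteps the placeholder accounting entirely.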



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13368) Exception Stack not Printed as Intended in Error Logs

2017-03-22 Thread William R. Speirs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

William R. Speirs updated CASSANDRA-13368:
--
Fix Version/s: 2.1.x
Reproduced In: 2.1.x
   Status: Patch Available  (was: Open)

diff --git a/src/java/org/apache/cassandra/auth/CassandraAuthorizer.java b/src/java/org/apache/cassandra/auth/CassandraAuthorizer.java
index 49eefb1..4f8c721 100644
--- a/src/java/org/apache/cassandra/auth/CassandraAuthorizer.java
+++ b/src/java/org/apache/cassandra/auth/CassandraAuthorizer.java
@@ -187,7 +187,8 @@ public class CassandraAuthorizer implements IAuthorizer
         }
         catch (RequestExecutionException e)
         {
-            logger.warn("CassandraAuthorizer failed to revoke all permissions of {}: {}", droppedUser, e);
+            logger.warn("CassandraAuthorizer failed to revoke all permissions of {}", droppedUser);
+            logger.warn(e.getMessage(), e);
         }
     }
 
@@ -206,7 +207,8 @@ public class CassandraAuthorizer implements IAuthorizer
         }
         catch (RequestExecutionException e)
         {
-            logger.warn("CassandraAuthorizer failed to revoke all permissions on {}: {}", droppedResource, e);
+            logger.warn("CassandraAuthorizer failed to revoke all permissions on {}", droppedResource);
+            logger.warn(e.getMessage(), e);
             return;
         }
 
@@ -222,7 +224,8 @@ public class CassandraAuthorizer implements IAuthorizer
             }
             catch (RequestExecutionException e)
             {
-                logger.warn("CassandraAuthorizer failed to revoke all permissions on {}: {}", droppedResource, e);
+                logger.warn("CassandraAuthorizer failed to revoke all permissions on {}", droppedResource);
+                logger.warn(e.getMessage(), e);
             }
         }
     }
diff --git a/src/java/org/apache/cassandra/db/BatchlogManager.java b/src/java/org/apache/cassandra/db/BatchlogManager.java
index 4588156..ebc64e3 100644
--- a/src/java/org/apache/cassandra/db/BatchlogManager.java
+++ b/src/java/org/apache/cassandra/db/BatchlogManager.java
@@ -235,7 +235,8 @@ public class BatchlogManager implements BatchlogManagerMBean
         }
         catch (IOException e)
         {
-            logger.warn("Skipped batch replay of {} due to {}", id, e);
+            logger.warn("Skipped batch replay of {} due to:", id);
+            logger.warn(e.getMessage(), e);
             deleteBatch(id);
         }
     }
diff --git a/src/java/org/apache/cassandra/db/BlacklistedDirectories.java b/src/java/org/apache/cassandra/db/BlacklistedDirectories.java
index f47fd57..d985e65 100644
--- a/src/java/org/apache/cassandra/db/BlacklistedDirectories.java
+++ b/src/java/org/apache/cassandra/db/BlacklistedDirectories.java
@@ -51,7 +51,8 @@ public class BlacklistedDirectories implements BlacklistedDirectoriesMBean
         catch (Exception e)
         {
             JVMStabilityInspector.inspectThrowable(e);
-            logger.error("error registering MBean {}", MBEAN_NAME, e);
+            logger.error("error registering MBean {}", MBEAN_NAME);
+            logger.error(e.getMessage(), e);
             //Allow the server to start even if the bean can't be registered
         }
     }
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 6e82745..e083c6c 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -442,7 +442,8 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
         {
             JVMStabilityInspector.inspectThrowable(e);
             // this shouldn't block anything.
-            logger.warn("Failed unregistering mbean: {}", mbeanName, e);
+            logger.warn("Failed unregistering mbean: {}", mbeanName);
+            logger.warn(e.getMessage(), e);
         }
 
         latencyCalculator.cancel(false);
diff --git a/src/java/org/apache/cassandra/db/Directories.java b/src/java/org/apache/cassandra/db/Directories.java
index 35aa447..38e7171 100644
--- a/src/java/org/apache/cassandra/db/Directories.java
+++ b/src/java/org/apache/cassandra/db/Directories.java
@@ -686,7 +686,8 @@ public class Directories
         }
         catch (IOException e)
         {
-            logger.error("Could not calculate the size of {}. {}", input, e);
+            logger.error("Could not calculate the size of {}", input);
+            logger.error(e.getMessage(), e);
         }
 
         return visitor.getAllocatedSize();
diff --git a/src/java/org/apache/cassandra/db/HintedHandOffManager.java b/src/java/org/apache/cassandra/db/HintedHandOffManager.java
index 0d3ef39..1f1f54e 100644
--- a/src/java/org/apache/cassandra/db/HintedHandOffManager.java
+++ 

[jira] [Comment Edited] (CASSANDRA-13368) Exception Stack not Printed as Intended in Error Logs

2017-03-22 Thread William R. Speirs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15937134#comment-15937134
 ] 

William R. Speirs edited comment on CASSANDRA-13368 at 3/22/17 9:10 PM:


Patch file submitted


was (Author: wspeirs):
diff --git a/src/java/org/apache/cassandra/auth/CassandraAuthorizer.java b/src/java/org/apache/cassandra/auth/CassandraAuthorizer.java
index 49eefb1..4f8c721 100644
--- a/src/java/org/apache/cassandra/auth/CassandraAuthorizer.java
+++ b/src/java/org/apache/cassandra/auth/CassandraAuthorizer.java
@@ -187,7 +187,8 @@ public class CassandraAuthorizer implements IAuthorizer
         }
         catch (RequestExecutionException e)
         {
-            logger.warn("CassandraAuthorizer failed to revoke all permissions of {}: {}", droppedUser, e);
+            logger.warn("CassandraAuthorizer failed to revoke all permissions of {}", droppedUser);
+            logger.warn(e.getMessage(), e);
         }
     }
 
@@ -206,7 +207,8 @@ public class CassandraAuthorizer implements IAuthorizer
         }
         catch (RequestExecutionException e)
         {
-            logger.warn("CassandraAuthorizer failed to revoke all permissions on {}: {}", droppedResource, e);
+            logger.warn("CassandraAuthorizer failed to revoke all permissions on {}", droppedResource);
+            logger.warn(e.getMessage(), e);
             return;
         }
 
@@ -222,7 +224,8 @@ public class CassandraAuthorizer implements IAuthorizer
             }
             catch (RequestExecutionException e)
             {
-                logger.warn("CassandraAuthorizer failed to revoke all permissions on {}: {}", droppedResource, e);
+                logger.warn("CassandraAuthorizer failed to revoke all permissions on {}", droppedResource);
+                logger.warn(e.getMessage(), e);
             }
         }
     }
diff --git a/src/java/org/apache/cassandra/db/BatchlogManager.java b/src/java/org/apache/cassandra/db/BatchlogManager.java
index 4588156..ebc64e3 100644
--- a/src/java/org/apache/cassandra/db/BatchlogManager.java
+++ b/src/java/org/apache/cassandra/db/BatchlogManager.java
@@ -235,7 +235,8 @@ public class BatchlogManager implements BatchlogManagerMBean
         }
         catch (IOException e)
         {
-            logger.warn("Skipped batch replay of {} due to {}", id, e);
+            logger.warn("Skipped batch replay of {} due to:", id);
+            logger.warn(e.getMessage(), e);
             deleteBatch(id);
         }
     }
diff --git a/src/java/org/apache/cassandra/db/BlacklistedDirectories.java b/src/java/org/apache/cassandra/db/BlacklistedDirectories.java
index f47fd57..d985e65 100644
--- a/src/java/org/apache/cassandra/db/BlacklistedDirectories.java
+++ b/src/java/org/apache/cassandra/db/BlacklistedDirectories.java
@@ -51,7 +51,8 @@ public class BlacklistedDirectories implements BlacklistedDirectoriesMBean
         catch (Exception e)
         {
             JVMStabilityInspector.inspectThrowable(e);
-            logger.error("error registering MBean {}", MBEAN_NAME, e);
+            logger.error("error registering MBean {}", MBEAN_NAME);
+            logger.error(e.getMessage(), e);
             //Allow the server to start even if the bean can't be registered
         }
     }
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 6e82745..e083c6c 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -442,7 +442,8 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
         {
             JVMStabilityInspector.inspectThrowable(e);
             // this shouldn't block anything.
-            logger.warn("Failed unregistering mbean: {}", mbeanName, e);
+            logger.warn("Failed unregistering mbean: {}", mbeanName);
+            logger.warn(e.getMessage(), e);
         }
 
         latencyCalculator.cancel(false);
diff --git a/src/java/org/apache/cassandra/db/Directories.java b/src/java/org/apache/cassandra/db/Directories.java
index 35aa447..38e7171 100644
--- a/src/java/org/apache/cassandra/db/Directories.java
+++ b/src/java/org/apache/cassandra/db/Directories.java
@@ -686,7 +686,8 @@ public class Directories
         }
         catch (IOException e)
         {
-            logger.error("Could not calculate the size of {}. {}", input, e);
+            logger.error("Could not calculate the size of {}", input);
+            logger.error(e.getMessage(), e);
         }
 
         return visitor.getAllocatedSize();
diff --git a/src/java/org/apache/cassandra/db/HintedHandOffManager.java b/src/java/org/apache/cassandra/db/HintedHandOffManager.java
index 0d3ef39..1f1f54e 100644
--- 

[jira] [Updated] (CASSANDRA-12653) In-flight shadow round requests

2017-03-22 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton updated CASSANDRA-12653:
--
Fix Version/s: (was: 3.11.x)
   (was: 4.x)
   (was: 3.0.x)
   (was: 2.2.x)
   4.0
   3.11.0
   3.0.13
   2.2.10

> In-flight shadow round requests
> ---
>
> Key: CASSANDRA-12653
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12653
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>Priority: Minor
> Fix For: 2.2.10, 3.0.13, 3.11.0, 4.0
>
>
> Bootstrapping or replacing a node in the cluster requires gathering and 
> checking some host IDs or tokens by doing a gossip "shadow round" once before 
> joining the cluster. This is done by sending a gossip SYN to all seeds until 
> we receive a response with the cluster state, from where we can move on in 
> the bootstrap process. Receiving a response marks the shadow round as done 
> and calls {{Gossiper.resetEndpointStateMap}} to clean up the received state 
> again.
> The issue is that at this point there might be other in-flight requests, and 
> it's very likely that shadow round responses from other seeds will be 
> received afterwards, while the current state of the bootstrap process doesn't 
> expect this to happen (e.g. the gossiper may or may not be enabled).
> One side effect is that MigrationTasks are spawned for each shadow round 
> reply except the first. Tasks might or might not execute depending on whether 
> {{Gossiper.resetEndpointStateMap}} had been called by execution time, which 
> affects the outcome of {{FailureDetector.instance.isAlive(endpoint)}} at the 
> start of the task. You'll see error log messages such as the following when 
> this happens:
> {noformat}
> INFO  [SharedPool-Worker-1] 2016-09-08 08:36:39,255 Gossiper.java:993 - 
> InetAddress /xx.xx.xx.xx is now UP
> ERROR [MigrationStage:1]2016-09-08 08:36:39,255 FailureDetector.java:223 
> - unknown endpoint /xx.xx.xx.xx
> {noformat}
> Although this isn't pretty, I currently don't see any serious harm from it, 
> but it would be good to get a second opinion (feel free to close as "won't 
> fix").
> /cc [~Stefania] [~thobbs]
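A hypothetical guard against the late-reply race could let only the first shadow-round reply complete the round and drop any replies that arrive after the state has been reset (a sketch only; {{ShadowRoundGuard}} and its methods are invented for illustration and are not the committed fix):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: only the first shadow-round reply finishes the round;
// replies still in flight from other seeds are ignored afterwards, so no
// stray MigrationTasks get spawned against already-reset endpoint state.
public class ShadowRoundGuard
{
    private final AtomicBoolean inShadowRound = new AtomicBoolean(true);
    private int repliesApplied = 0;

    // Called for every shadow-round ACK that comes back from a seed.
    boolean onShadowRoundReply()
    {
        if (!inShadowRound.compareAndSet(true, false))
            return false; // round already finished: drop this late reply
        repliesApplied++;  // apply cluster state, then reset the endpoint state map
        return true;
    }

    int repliesApplied()
    {
        return repliesApplied;
    }

    public static void main(String[] args)
    {
        ShadowRoundGuard guard = new ShadowRoundGuard();
        System.out.println(guard.onShadowRoundReply()); // true  (first seed's reply)
        System.out.println(guard.onShadowRoundReply()); // false (late in-flight reply)
        System.out.println(guard.repliesApplied());     // 1
    }
}
```

The {{compareAndSet}} makes the "first reply wins" decision atomic, so concurrent replies from several seeds cannot both pass the guard.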



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13333) Cassandra does not start on Windows due to 'JNA link failure'

2017-03-22 Thread Michael Kjellman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15937127#comment-15937127
 ] 

Michael Kjellman commented on CASSANDRA-13333:
--

[~blerer] this looks great! Renaming {{CLibrary}} --> {{NativeLibrary}} helps 
make the intent much clearer. 

# Should the loading of {{Native.register("winmm")}} in {{WindowsTimer}} also 
be moved into NativeLibraryWindows?
# Looks like the trunk patch didn't get pushed up, or it's potentially just a 
copy-paste error? Currently it's just pointing at blerer/trunk.
# Thanks for putting the MSDN API URL in the method javadoc. :)
# In {{NativeLibraryWindows}} I think the following logger statements could be 
simplified:

{code}
catch (UnsatisfiedLinkError e)
{
logger.warn("JNA link failure, one or more native method will be 
unavailable.");
logger.error("JNA link failure details: {}", e.getMessage());
}
{code}

Can be simplified to:
{code}
logger.error("Failed to link against JNA. Native methods will be unavailable.", 
e);
{code}

> Cassandra does not start on Windows due to 'JNA link failure'
> -
>
> Key: CASSANDRA-13333
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13333
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
>Priority: Blocker
>
> Cassandra 3.0 HEAD does not start on Windows. The only error in the logs is: 
> {{ERROR 16:30:10 JNA failing to initialize properly.}} 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-12653) In-flight shadow round requests

2017-03-22 Thread Joel Knighton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15937125#comment-15937125
 ] 

Joel Knighton commented on CASSANDRA-12653:
---

Committed to 2.2 as {{bf0906b92cf65161d828e31bc46436d427bbb4b8}} and merged 
forward through 3.0, 3.11, and trunk. Added Jason Brown as an additional 
reviewer in the commit since his feedback was incorporated in the latest round 
of patches.

Thanks everyone!

> In-flight shadow round requests
> ---
>
> Key: CASSANDRA-12653
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12653
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>Priority: Minor
> Fix For: 2.2.10, 3.0.13, 3.11.0, 4.0
>
>
> Bootstrapping or replacing a node in the cluster requires to gather and check 
> some host IDs or tokens by doing a gossip "shadow round" once before joining 
> the cluster. This is done by sending a gossip SYN to all seeds until we 
> receive a response with the cluster state, from where we can move on in the 
> bootstrap process. Receiving a response will call the shadow round done and 
> calls {{Gossiper.resetEndpointStateMap}} for cleaning up the received state 
> again.
> The issue here is that at this point there might be other in-flight requests 
> and it's very likely that shadow round responses from other seeds will be 
> received afterwards, while the current state of the bootstrap process doesn't 
> expect this to happen (e.g. gossiper may or may not be enabled). 
> One side effect will be that MigrationTasks are spawned for each shadow round 
> reply except the first. Tasks might or might not execute based on whether at 
> execution time {{Gossiper.resetEndpointStateMap}} had been called, which 
> affects the outcome of {{FailureDetector.instance.isAlive(endpoint)}} at the 
> start of the task. You'll see error log messages such as the following when 
> this happened:
> {noformat}
> INFO  [SharedPool-Worker-1] 2016-09-08 08:36:39,255 Gossiper.java:993 - 
> InetAddress /xx.xx.xx.xx is now UP
> ERROR [MigrationStage:1]2016-09-08 08:36:39,255 FailureDetector.java:223 
> - unknown endpoint /xx.xx.xx.xx
> {noformat}
> Although it isn't pretty, I currently don't see any serious harm from this, 
> but it would be good to get a second opinion (feel free to close as "won't 
> fix").
> /cc [~Stefania] [~thobbs]
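The fix described above can be sketched as a minimal simulation (hypothetical, heavily simplified, not the actual Gossiper code; the class and method names here are invented for illustration): the first ACK received while in the shadow round completes it, and any later in-flight ACKs are discarded rather than applied against already-cleared endpoint state.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the fixed behaviour: only the first shadow-round
// reply is applied; later in-flight replies are dropped instead of being
// processed as if they were part of regular gossip.
class ShadowRoundSketch {
    private boolean inShadowRound = true;
    private final Map<String, String> gatheredStates = new HashMap<>();
    final List<String> discardedSeeds = new ArrayList<>();

    // Called once per incoming shadow-round reply (simulated ACK).
    void onAck(String seed, Map<String, String> states) {
        if (inShadowRound) {
            gatheredStates.putAll(states); // keep the first reply's state
            inShadowRound = false;         // shadow round is now done
        } else {
            discardedSeeds.add(seed);      // late replies are ignored
        }
    }

    Map<String, String> result() {
        return gatheredStates;
    }
}
```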





[jira] [Updated] (CASSANDRA-12653) In-flight shadow round requests

2017-03-22 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-12653:
--
Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

> In-flight shadow round requests
> ---
>
> Key: CASSANDRA-12653
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12653
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>Priority: Minor
> Fix For: 2.2.x, 3.0.x, 3.11.x, 4.x
>
>
> Bootstrapping or replacing a node in the cluster requires gathering and checking 
> some host IDs or tokens by doing a gossip "shadow round" once before joining 
> the cluster. This is done by sending a gossip SYN to all seeds until we 
> receive a response with the cluster state, from where we can move on in the 
> bootstrap process. Receiving a response marks the shadow round as done and 
> calls {{Gossiper.resetEndpointStateMap}} to clean up the received state 
> again.
> The issue here is that at this point there might be other in-flight requests 
> and it's very likely that shadow round responses from other seeds will be 
> received afterwards, while the current state of the bootstrap process doesn't 
> expect this to happen (e.g. gossiper may or may not be enabled). 
> One side effect will be that MigrationTasks are spawned for each shadow round 
> reply except the first. Tasks might or might not execute based on whether at 
> execution time {{Gossiper.resetEndpointStateMap}} had been called, which 
> affects the outcome of {{FailureDetector.instance.isAlive(endpoint)}} at the 
> start of the task. You'll see error log messages such as the following when 
> this happened:
> {noformat}
> INFO  [SharedPool-Worker-1] 2016-09-08 08:36:39,255 Gossiper.java:993 - 
> InetAddress /xx.xx.xx.xx is now UP
> ERROR [MigrationStage:1]2016-09-08 08:36:39,255 FailureDetector.java:223 
> - unknown endpoint /xx.xx.xx.xx
> {noformat}
> Although it isn't pretty, I currently don't see any serious harm from this, 
> but it would be good to get a second opinion (feel free to close as "won't 
> fix").
> /cc [~Stefania] [~thobbs]





[06/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2017-03-22 Thread jkni
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2836a644
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2836a644
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2836a644

Branch: refs/heads/cassandra-3.0
Commit: 2836a644a357c0992ba89622f04668422ce2761a
Parents: f4ba908 bf0906b
Author: Joel Knighton 
Authored: Wed Mar 22 13:13:44 2017 -0500
Committer: Joel Knighton 
Committed: Wed Mar 22 13:18:59 2017 -0500

--
 CHANGES.txt |  1 +
 .../gms/GossipDigestAckVerbHandler.java | 26 ++---
 src/java/org/apache/cassandra/gms/Gossiper.java | 56 ++--
 .../apache/cassandra/service/MigrationTask.java | 12 ++---
 .../cassandra/service/StorageService.java   | 17 +++---
 5 files changed, 73 insertions(+), 39 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2836a644/CHANGES.txt
--
diff --cc CHANGES.txt
index 6021315,df2421d..9140c73
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,27 -1,9 +1,28 @@@
 -2.2.10
 +3.0.13
 + * Fix CONTAINS filtering for null collections (CASSANDRA-13246)
 + * Applying: Use a unique metric reservoir per test run when using 
Cassandra-wide metrics residing in MBeans (CASSANDRA-13216)
 + * Propagate row deletions in 2i tables on upgrade (CASSANDRA-13320)
 + * Slice.isEmpty() returns false for some empty slices (CASSANDRA-13305)
 + * Add formatted row output to assertEmpty in CQL Tester (CASSANDRA-13238)
 +Merged from 2.2:
+  * Discard in-flight shadow round responses (CASSANDRA-12653)
   * Don't anti-compact repaired data to avoid inconsistencies (CASSANDRA-13153)
   * Wrong logger name in AnticompactionTask (CASSANDRA-13343)
 + * Commitlog replay may fail if last mutation is within 4 bytes of end of 
segment (CASSANDRA-13282)
   * Fix queries updating multiple time the same list (CASSANDRA-13130)
   * Fix GRANT/REVOKE when keyspace isn't specified (CASSANDRA-13053)
 +
 +
 +3.0.12
 + * Prevent data loss on upgrade 2.1 - 3.0 by adding component separator to 
LogRecord absolute path (CASSANDRA-13294)
 + * Improve testing on macOS by eliminating sigar logging (CASSANDRA-13233)
 + * Cqlsh copy-from should error out when csv contains invalid data for 
collections (CASSANDRA-13071)
 + * Update c.yaml doc for offheap memtables (CASSANDRA-13179)
 + * Faster StreamingHistogram (CASSANDRA-13038)
 + * Legacy deserializer can create unexpected boundary range tombstones 
(CASSANDRA-13237)
 + * Remove unnecessary assertion from AntiCompactionTest (CASSANDRA-13070)
 + * Fix cqlsh COPY for dates before 1900 (CASSANDRA-13185)
 +Merged from 2.2:
   * Avoid race on receiver by starting streaming sender thread after sending 
init message (CASSANDRA-12886)
   * Fix "multiple versions of ant detected..." when running ant test 
(CASSANDRA-13232)
   * Coalescing strategy sleeps too much (CASSANDRA-13090)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2836a644/src/java/org/apache/cassandra/gms/Gossiper.java
--
diff --cc src/java/org/apache/cassandra/gms/Gossiper.java
index cbfa750,c2eccba..802ff9c
--- a/src/java/org/apache/cassandra/gms/Gossiper.java
+++ b/src/java/org/apache/cassandra/gms/Gossiper.java
@@@ -124,6 -128,9 +128,8 @@@ public class Gossiper implements IFailu
  private final Map expireTimeEndpointMap = new 
ConcurrentHashMap();
  
  private volatile boolean inShadowRound = false;
 -
+ // endpoint states as gathered during shadow round
+ private final Map endpointShadowStateMap = 
new ConcurrentHashMap<>();
  
  private volatile long lastProcessedMessageAt = System.currentTimeMillis();
  
@@@ -818,28 -826,6 +827,20 @@@
  return endpointStateMap.get(ep);
  }
  
 +public boolean valuesEqual(InetAddress ep1, InetAddress ep2, 
ApplicationState as)
 +{
 +EndpointState state1 = getEndpointStateForEndpoint(ep1);
 +EndpointState state2 = getEndpointStateForEndpoint(ep2);
 +
 +if (state1 == null || state2 == null)
 +return false;
 +
 +VersionedValue value1 = state1.getApplicationState(as);
 +VersionedValue value2 = state2.getApplicationState(as);
 +
 +return !(value1 == null || value2 == null) && 
value1.value.equals(value2.value);
 +}
 +
- // removes ALL endpoint states; should only be called after shadow gossip
- public void resetEndpointStateMap()
- {
- endpointStateMap.clear();
- unreachableEndpoints.clear();
- liveEndpoints.clear();
- }
- 
  public 

[05/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2017-03-22 Thread jkni
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2836a644
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2836a644
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2836a644

Branch: refs/heads/cassandra-3.11
Commit: 2836a644a357c0992ba89622f04668422ce2761a
Parents: f4ba908 bf0906b
Author: Joel Knighton 
Authored: Wed Mar 22 13:13:44 2017 -0500
Committer: Joel Knighton 
Committed: Wed Mar 22 13:18:59 2017 -0500

--
 CHANGES.txt |  1 +
 .../gms/GossipDigestAckVerbHandler.java | 26 ++---
 src/java/org/apache/cassandra/gms/Gossiper.java | 56 ++--
 .../apache/cassandra/service/MigrationTask.java | 12 ++---
 .../cassandra/service/StorageService.java   | 17 +++---
 5 files changed, 73 insertions(+), 39 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2836a644/CHANGES.txt
--
diff --cc CHANGES.txt
index 6021315,df2421d..9140c73
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,27 -1,9 +1,28 @@@
 -2.2.10
 +3.0.13
 + * Fix CONTAINS filtering for null collections (CASSANDRA-13246)
 + * Applying: Use a unique metric reservoir per test run when using 
Cassandra-wide metrics residing in MBeans (CASSANDRA-13216)
 + * Propagate row deletions in 2i tables on upgrade (CASSANDRA-13320)
 + * Slice.isEmpty() returns false for some empty slices (CASSANDRA-13305)
 + * Add formatted row output to assertEmpty in CQL Tester (CASSANDRA-13238)
 +Merged from 2.2:
+  * Discard in-flight shadow round responses (CASSANDRA-12653)
   * Don't anti-compact repaired data to avoid inconsistencies (CASSANDRA-13153)
   * Wrong logger name in AnticompactionTask (CASSANDRA-13343)
 + * Commitlog replay may fail if last mutation is within 4 bytes of end of 
segment (CASSANDRA-13282)
   * Fix queries updating multiple time the same list (CASSANDRA-13130)
   * Fix GRANT/REVOKE when keyspace isn't specified (CASSANDRA-13053)
 +
 +
 +3.0.12
 + * Prevent data loss on upgrade 2.1 - 3.0 by adding component separator to 
LogRecord absolute path (CASSANDRA-13294)
 + * Improve testing on macOS by eliminating sigar logging (CASSANDRA-13233)
 + * Cqlsh copy-from should error out when csv contains invalid data for 
collections (CASSANDRA-13071)
 + * Update c.yaml doc for offheap memtables (CASSANDRA-13179)
 + * Faster StreamingHistogram (CASSANDRA-13038)
 + * Legacy deserializer can create unexpected boundary range tombstones 
(CASSANDRA-13237)
 + * Remove unnecessary assertion from AntiCompactionTest (CASSANDRA-13070)
 + * Fix cqlsh COPY for dates before 1900 (CASSANDRA-13185)
 +Merged from 2.2:
   * Avoid race on receiver by starting streaming sender thread after sending 
init message (CASSANDRA-12886)
   * Fix "multiple versions of ant detected..." when running ant test 
(CASSANDRA-13232)
   * Coalescing strategy sleeps too much (CASSANDRA-13090)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2836a644/src/java/org/apache/cassandra/gms/Gossiper.java
--
diff --cc src/java/org/apache/cassandra/gms/Gossiper.java
index cbfa750,c2eccba..802ff9c
--- a/src/java/org/apache/cassandra/gms/Gossiper.java
+++ b/src/java/org/apache/cassandra/gms/Gossiper.java
@@@ -124,6 -128,9 +128,8 @@@ public class Gossiper implements IFailu
  private final Map expireTimeEndpointMap = new 
ConcurrentHashMap();
  
  private volatile boolean inShadowRound = false;
 -
+ // endpoint states as gathered during shadow round
+ private final Map endpointShadowStateMap = 
new ConcurrentHashMap<>();
  
  private volatile long lastProcessedMessageAt = System.currentTimeMillis();
  
@@@ -818,28 -826,6 +827,20 @@@
  return endpointStateMap.get(ep);
  }
  
 +public boolean valuesEqual(InetAddress ep1, InetAddress ep2, 
ApplicationState as)
 +{
 +EndpointState state1 = getEndpointStateForEndpoint(ep1);
 +EndpointState state2 = getEndpointStateForEndpoint(ep2);
 +
 +if (state1 == null || state2 == null)
 +return false;
 +
 +VersionedValue value1 = state1.getApplicationState(as);
 +VersionedValue value2 = state2.getApplicationState(as);
 +
 +return !(value1 == null || value2 == null) && 
value1.value.equals(value2.value);
 +}
 +
- // removes ALL endpoint states; should only be called after shadow gossip
- public void resetEndpointStateMap()
- {
- endpointStateMap.clear();
- unreachableEndpoints.clear();
- liveEndpoints.clear();
- }
- 
  public 

[10/10] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-03-22 Thread jkni
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8b74ae4b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8b74ae4b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8b74ae4b

Branch: refs/heads/trunk
Commit: 8b74ae4b6490e1991603e9365b690da6f6900c10
Parents: f5e0a7c ec9ce3d
Author: Joel Knighton 
Authored: Wed Mar 22 13:28:14 2017 -0500
Committer: Joel Knighton 
Committed: Wed Mar 22 13:29:09 2017 -0500

--
 CHANGES.txt |   1 +
 .../gms/GossipDigestAckVerbHandler.java |  27 +++--
 src/java/org/apache/cassandra/gms/Gossiper.java |  66 +++
 .../apache/cassandra/schema/MigrationTask.java  |  12 +-
 .../cassandra/service/StorageService.java   |  17 ++-
 test/conf/cassandra-seeds.yaml  |  43 +++
 .../apache/cassandra/gms/ShadowRoundTest.java   | 116 +++
 .../apache/cassandra/net/MatcherResponse.java   |  24 ++--
 8 files changed, 253 insertions(+), 53 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8b74ae4b/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8b74ae4b/src/java/org/apache/cassandra/gms/Gossiper.java
--
diff --cc src/java/org/apache/cassandra/gms/Gossiper.java
index 50710eb,177d7dc..e5992af
--- a/src/java/org/apache/cassandra/gms/Gossiper.java
+++ b/src/java/org/apache/cassandra/gms/Gossiper.java
@@@ -1337,12 -1352,23 +1346,24 @@@ public class Gossiper implements IFailu
  }
  
  /**
-  *  Do a single 'shadow' round of gossip, where we do not modify any state
-  *  Used when preparing to join the ring:
-  *  * when replacing a node, to get and assume its tokens
-  *  * when joining, to check that the local host id matches any 
previous id for the endpoint address
+  * Do a single 'shadow' round of gossip by retrieving endpoint states 
that will be stored exclusively in the
+  * map return value, instead of endpointStateMap.
+  *
++ * Used when preparing to join the ring:
+  * 
+  * when replacing a node, to get and assume its tokens
+  * when joining, to check that the local host id matches any 
previous id for the endpoint address
+  * 
+  *
+  * Method is synchronized, as we use an in-progress flag to indicate that 
shadow round must be cleared
+  * again by calling {@link Gossiper#maybeFinishShadowRound(InetAddress, 
boolean, Map)}. This will update
+  * {@link Gossiper#endpointShadowStateMap} with received values, in order 
to return an immutable copy to the
+  * caller of {@link Gossiper#doShadowRound()}. Therefore only a single 
shadow round execution is permitted at
+  * the same time.
+  *
+  * @return endpoint states gathered during shadow round or empty map
   */
- public void doShadowRound()
+ public synchronized Map doShadowRound()
  {
  buildSeedsList();
  // it may be that the local address is the only entry in the seed

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8b74ae4b/src/java/org/apache/cassandra/schema/MigrationTask.java
--
diff --cc src/java/org/apache/cassandra/schema/MigrationTask.java
index a785e17,000..73e396d
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/schema/MigrationTask.java
+++ b/src/java/org/apache/cassandra/schema/MigrationTask.java
@@@ -1,113 -1,0 +1,113 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.cassandra.schema;
 +
 +import java.net.InetAddress;
 +import java.util.Collection;
 +import java.util.EnumSet;
 +import java.util.Set;
 +import java.util.concurrent.ConcurrentLinkedQueue;
 +import java.util.concurrent.CountDownLatch;
 +

[09/10] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-03-22 Thread jkni
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ec9ce3df
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ec9ce3df
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ec9ce3df

Branch: refs/heads/trunk
Commit: ec9ce3dfba0030015c5dd846b8b5b526614cf5f7
Parents: 5484bd1 2836a64
Author: Joel Knighton 
Authored: Wed Mar 22 13:20:24 2017 -0500
Committer: Joel Knighton 
Committed: Wed Mar 22 13:22:43 2017 -0500

--
 CHANGES.txt |   1 +
 .../gms/GossipDigestAckVerbHandler.java |  27 +++--
 src/java/org/apache/cassandra/gms/Gossiper.java |  65 +++
 .../apache/cassandra/service/MigrationTask.java |  12 +-
 .../cassandra/service/StorageService.java   |  17 ++-
 test/conf/cassandra-seeds.yaml  |  43 +++
 .../apache/cassandra/gms/ShadowRoundTest.java   | 116 +++
 .../apache/cassandra/net/MatcherResponse.java   |  24 ++--
 8 files changed, 252 insertions(+), 53 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ec9ce3df/CHANGES.txt
--
diff --cc CHANGES.txt
index ce8535d,9140c73..8386c20
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -37,143 -49,6 +37,144 @@@ Merged from 3.0
 live rows in sstabledump (CASSANDRA-13177)
   * Provide user workaround when system_schema.columns does not contain entries
 for a table that's in system_schema.tables (CASSANDRA-13180)
 +Merged from 2.2:
++ * Discard in-flight shadow round responses (CASSANDRA-12653)
 + * Don't anti-compact repaired data to avoid inconsistencies (CASSANDRA-13153)
 + * Wrong logger name in AnticompactionTask (CASSANDRA-13343)
 + * Commitlog replay may fail if last mutation is within 4 bytes of end of 
segment (CASSANDRA-13282)
 + * Fix queries updating multiple time the same list (CASSANDRA-13130)
 + * Fix GRANT/REVOKE when keyspace isn't specified (CASSANDRA-13053)
 + * Fix flaky LongLeveledCompactionStrategyTest (CASSANDRA-12202)
 + * Fix failing COPY TO STDOUT (CASSANDRA-12497)
 + * Fix ColumnCounter::countAll behaviour for reverse queries (CASSANDRA-13222)
 + * Exceptions encountered calling getSeeds() breaks OTC thread 
(CASSANDRA-13018)
 + * Fix negative mean latency metric (CASSANDRA-12876)
 + * Use only one file pointer when creating commitlog segments 
(CASSANDRA-12539)
 +Merged from 2.1:
 + * Remove unused repositories (CASSANDRA-13278)
 + * Log stacktrace of uncaught exceptions (CASSANDRA-13108)
 + * Use portable stderr for java error in startup (CASSANDRA-13211)
 + * Fix Thread Leak in OutboundTcpConnection (CASSANDRA-13204)
 + * Coalescing strategy can enter infinite loop (CASSANDRA-13159)
 +
 +
 +3.10
 + * Fix secondary index queries regression (CASSANDRA-13013)
 + * Add duration type to the protocol V5 (CASSANDRA-12850)
 + * Fix duration type validation (CASSANDRA-13143)
 + * Fix flaky GcCompactionTest (CASSANDRA-12664)
 + * Fix TestHintedHandoff.hintedhandoff_decom_test (CASSANDRA-13058)
 + * Fixed query monitoring for range queries (CASSANDRA-13050)
 + * Remove outboundBindAny configuration property (CASSANDRA-12673)
 + * Use correct bounds for all-data range when filtering (CASSANDRA-12666)
 + * Remove timing window in test case (CASSANDRA-12875)
 + * Resolve unit testing without JCE security libraries installed 
(CASSANDRA-12945)
 + * Fix inconsistencies in cassandra-stress load balancing policy 
(CASSANDRA-12919)
 + * Fix validation of non-frozen UDT cells (CASSANDRA-12916)
 + * Don't shut down socket input/output on StreamSession (CASSANDRA-12903)
 + * Fix Murmur3PartitionerTest (CASSANDRA-12858)
 + * Move cqlsh syntax rules into separate module and allow easier 
customization (CASSANDRA-12897)
 + * Fix CommitLogSegmentManagerTest (CASSANDRA-12283)
 + * Fix cassandra-stress truncate option (CASSANDRA-12695)
 + * Fix crossNode value when receiving messages (CASSANDRA-12791)
 + * Don't load MX4J beans twice (CASSANDRA-12869)
 + * Extend native protocol request flags, add versions to SUPPORTED, and 
introduce ProtocolVersion enum (CASSANDRA-12838)
 + * Set JOINING mode when running pre-join tasks (CASSANDRA-12836)
 + * remove net.mintern.primitive library due to license issue (CASSANDRA-12845)
 + * Properly format IPv6 addresses when logging JMX service URL 
(CASSANDRA-12454)
 + * Optimize the vnode allocation for single replica per DC (CASSANDRA-12777)
 + * Use non-token restrictions for bounds when token restrictions are 
overridden (CASSANDRA-12419)
 + * Fix CQLSH auto completion for PER PARTITION LIMIT (CASSANDRA-12803)
 + * Use different build directories for Eclipse and Ant (CASSANDRA-12466)
 + * Avoid potential AttributeError in cqlsh due to no 

[02/10] cassandra git commit: Discard in-flight shadow round responses

2017-03-22 Thread jkni
Discard in-flight shadow round responses

patch by Stefan Podkowinski; reviewed by Joel Knighton and Jason Brown for 
CASSANDRA-12653


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bf0906b9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bf0906b9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bf0906b9

Branch: refs/heads/cassandra-3.0
Commit: bf0906b92cf65161d828e31bc46436d427bbb4b8
Parents: 06316df
Author: Stefan Podkowinski 
Authored: Mon Sep 19 13:56:54 2016 +0200
Committer: Joel Knighton 
Committed: Wed Mar 22 13:08:28 2017 -0500

--
 CHANGES.txt |  1 +
 .../gms/GossipDigestAckVerbHandler.java | 26 +---
 src/java/org/apache/cassandra/gms/Gossiper.java | 62 +++-
 .../apache/cassandra/service/MigrationTask.java | 12 ++--
 .../cassandra/service/StorageService.java   | 16 +++--
 5 files changed, 79 insertions(+), 38 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bf0906b9/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 27dd343..df2421d 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.10
+ * Discard in-flight shadow round responses (CASSANDRA-12653)
  * Don't anti-compact repaired data to avoid inconsistencies (CASSANDRA-13153)
  * Wrong logger name in AnticompactionTask (CASSANDRA-13343)
  * Fix queries updating multiple time the same list (CASSANDRA-13130)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bf0906b9/src/java/org/apache/cassandra/gms/GossipDigestAckVerbHandler.java
--
diff --git a/src/java/org/apache/cassandra/gms/GossipDigestAckVerbHandler.java 
b/src/java/org/apache/cassandra/gms/GossipDigestAckVerbHandler.java
index 9f69a94..59060f8 100644
--- a/src/java/org/apache/cassandra/gms/GossipDigestAckVerbHandler.java
+++ b/src/java/org/apache/cassandra/gms/GossipDigestAckVerbHandler.java
@@ -51,21 +51,31 @@ public class GossipDigestAckVerbHandler implements 
IVerbHandler
 Map epStateMap = 
gDigestAckMessage.getEndpointStateMap();
 logger.trace("Received ack with {} digests and {} states", 
gDigestList.size(), epStateMap.size());
 
-if (epStateMap.size() > 0)
-{
-/* Notify the Failure Detector */
-Gossiper.instance.notifyFailureDetector(epStateMap);
-Gossiper.instance.applyStateLocally(epStateMap);
-}
-
 if (Gossiper.instance.isInShadowRound())
 {
 if (logger.isDebugEnabled())
 logger.debug("Finishing shadow round with {}", from);
-Gossiper.instance.finishShadowRound();
+Gossiper.instance.finishShadowRound(epStateMap);
 return; // don't bother doing anything else, we have what we came 
for
 }
 
+if (epStateMap.size() > 0)
+{
+// Ignore any GossipDigestAck messages that we handle before a 
regular GossipDigestSyn has been sent.
+// This will prevent Acks from leaking over from the shadow round 
that are not actually part of
+// the regular gossip conversation.
+if ((System.nanoTime() - Gossiper.instance.firstSynSendAt) < 0 || 
Gossiper.instance.firstSynSendAt == 0)
+{
+if (logger.isTraceEnabled())
+logger.trace("Ignoring unrequested GossipDigestAck from 
{}", from);
+return;
+}
+
+/* Notify the Failure Detector */
+Gossiper.instance.notifyFailureDetector(epStateMap);
+Gossiper.instance.applyStateLocally(epStateMap);
+}
+
 /* Get the state required to send to this gossipee - construct 
GossipDigestAck2Message */
 Map deltaEpStateMap = new 
HashMap();
 for (GossipDigest gDigest : gDigestList)
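The ordering guard added in the hunk above relies on the System.nanoTime() comparison idiom: because the nanosecond counter has an arbitrary origin and may wrap, two timestamps must be compared by subtracting and checking the sign, never by comparing them directly. A minimal sketch of that guard (hypothetical class and method names, invented for illustration; only firstSynSendAt comes from the patch):

```java
// Hypothetical sketch of the guard above: an ACK is ignored while no regular
// SYN has been sent yet (firstSynSendAt == 0), or when it was handled before
// the recorded send time. nanoTime() values are compared via subtraction.
class SynOrderingSketch {
    volatile long firstSynSendAt = 0; // 0 until the first regular SYN goes out

    boolean shouldIgnoreAck(long handledAtNanos) {
        return firstSynSendAt == 0 || (handledAtNanos - firstSynSendAt) < 0;
    }
}
```

Any ACK that fails this check is treated as a leaked shadow-round reply and dropped before the failure detector or local state is touched.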

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bf0906b9/src/java/org/apache/cassandra/gms/Gossiper.java
--
diff --git a/src/java/org/apache/cassandra/gms/Gossiper.java 
b/src/java/org/apache/cassandra/gms/Gossiper.java
index 06b14c4..c2eccba 100644
--- a/src/java/org/apache/cassandra/gms/Gossiper.java
+++ b/src/java/org/apache/cassandra/gms/Gossiper.java
@@ -30,6 +30,7 @@ import javax.management.ObjectName;
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.collect.ImmutableList;
+import com.google.common.collect.ImmutableMap;
 import com.google.common.util.concurrent.Uninterruptibles;
 
 import 

[03/10] cassandra git commit: Discard in-flight shadow round responses

2017-03-22 Thread jkni
Discard in-flight shadow round responses

patch by Stefan Podkowinski; reviewed by Joel Knighton and Jason Brown for 
CASSANDRA-12653


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bf0906b9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bf0906b9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bf0906b9

Branch: refs/heads/cassandra-3.11
Commit: bf0906b92cf65161d828e31bc46436d427bbb4b8
Parents: 06316df
Author: Stefan Podkowinski 
Authored: Mon Sep 19 13:56:54 2016 +0200
Committer: Joel Knighton 
Committed: Wed Mar 22 13:08:28 2017 -0500

--
 CHANGES.txt |  1 +
 .../gms/GossipDigestAckVerbHandler.java | 26 +---
 src/java/org/apache/cassandra/gms/Gossiper.java | 62 +++-
 .../apache/cassandra/service/MigrationTask.java | 12 ++--
 .../cassandra/service/StorageService.java   | 16 +++--
 5 files changed, 79 insertions(+), 38 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bf0906b9/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 27dd343..df2421d 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.10
+ * Discard in-flight shadow round responses (CASSANDRA-12653)
  * Don't anti-compact repaired data to avoid inconsistencies (CASSANDRA-13153)
  * Wrong logger name in AnticompactionTask (CASSANDRA-13343)
  * Fix queries updating multiple time the same list (CASSANDRA-13130)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bf0906b9/src/java/org/apache/cassandra/gms/GossipDigestAckVerbHandler.java
--
diff --git a/src/java/org/apache/cassandra/gms/GossipDigestAckVerbHandler.java 
b/src/java/org/apache/cassandra/gms/GossipDigestAckVerbHandler.java
index 9f69a94..59060f8 100644
--- a/src/java/org/apache/cassandra/gms/GossipDigestAckVerbHandler.java
+++ b/src/java/org/apache/cassandra/gms/GossipDigestAckVerbHandler.java
@@ -51,21 +51,31 @@ public class GossipDigestAckVerbHandler implements 
IVerbHandler
 Map epStateMap = 
gDigestAckMessage.getEndpointStateMap();
 logger.trace("Received ack with {} digests and {} states", 
gDigestList.size(), epStateMap.size());
 
-if (epStateMap.size() > 0)
-{
-/* Notify the Failure Detector */
-Gossiper.instance.notifyFailureDetector(epStateMap);
-Gossiper.instance.applyStateLocally(epStateMap);
-}
-
 if (Gossiper.instance.isInShadowRound())
 {
 if (logger.isDebugEnabled())
 logger.debug("Finishing shadow round with {}", from);
-Gossiper.instance.finishShadowRound();
+Gossiper.instance.finishShadowRound(epStateMap);
 return; // don't bother doing anything else, we have what we came 
for
 }
 
+if (epStateMap.size() > 0)
+{
+// Ignore any GossipDigestAck messages that we handle before a 
regular GossipDigestSyn has been sent.
+// This will prevent Acks from leaking over from the shadow round 
that are not actually part of
+// the regular gossip conversation.
+if ((System.nanoTime() - Gossiper.instance.firstSynSendAt) < 0 || 
Gossiper.instance.firstSynSendAt == 0)
+{
+if (logger.isTraceEnabled())
+logger.trace("Ignoring unrequested GossipDigestAck from 
{}", from);
+return;
+}
+
+/* Notify the Failure Detector */
+Gossiper.instance.notifyFailureDetector(epStateMap);
+Gossiper.instance.applyStateLocally(epStateMap);
+}
+
 /* Get the state required to send to this gossipee - construct 
GossipDigestAck2Message */
 Map deltaEpStateMap = new 
HashMap();
 for (GossipDigest gDigest : gDigestList)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bf0906b9/src/java/org/apache/cassandra/gms/Gossiper.java
--
diff --git a/src/java/org/apache/cassandra/gms/Gossiper.java 
b/src/java/org/apache/cassandra/gms/Gossiper.java
index 06b14c4..c2eccba 100644
--- a/src/java/org/apache/cassandra/gms/Gossiper.java
+++ b/src/java/org/apache/cassandra/gms/Gossiper.java
@@ -30,6 +30,7 @@ import javax.management.ObjectName;
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.collect.ImmutableList;
+import com.google.common.collect.ImmutableMap;
 import com.google.common.util.concurrent.Uninterruptibles;
 
 import 

[08/10] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-03-22 Thread jkni
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ec9ce3df
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ec9ce3df
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ec9ce3df

Branch: refs/heads/cassandra-3.11
Commit: ec9ce3dfba0030015c5dd846b8b5b526614cf5f7
Parents: 5484bd1 2836a64
Author: Joel Knighton 
Authored: Wed Mar 22 13:20:24 2017 -0500
Committer: Joel Knighton 
Committed: Wed Mar 22 13:22:43 2017 -0500

--
 CHANGES.txt |   1 +
 .../gms/GossipDigestAckVerbHandler.java |  27 +++--
 src/java/org/apache/cassandra/gms/Gossiper.java |  65 +++
 .../apache/cassandra/service/MigrationTask.java |  12 +-
 .../cassandra/service/StorageService.java   |  17 ++-
 test/conf/cassandra-seeds.yaml  |  43 +++
 .../apache/cassandra/gms/ShadowRoundTest.java   | 116 +++
 .../apache/cassandra/net/MatcherResponse.java   |  24 ++--
 8 files changed, 252 insertions(+), 53 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ec9ce3df/CHANGES.txt
--
diff --cc CHANGES.txt
index ce8535d,9140c73..8386c20
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -37,143 -49,6 +37,144 @@@ Merged from 3.0
 live rows in sstabledump (CASSANDRA-13177)
   * Provide user workaround when system_schema.columns does not contain entries
 for a table that's in system_schema.tables (CASSANDRA-13180)
 +Merged from 2.2:
++ * Discard in-flight shadow round responses (CASSANDRA-12653)
 + * Don't anti-compact repaired data to avoid inconsistencies (CASSANDRA-13153)
 + * Wrong logger name in AnticompactionTask (CASSANDRA-13343)
 + * Commitlog replay may fail if last mutation is within 4 bytes of end of 
segment (CASSANDRA-13282)
 + * Fix queries updating multiple time the same list (CASSANDRA-13130)
 + * Fix GRANT/REVOKE when keyspace isn't specified (CASSANDRA-13053)
 + * Fix flaky LongLeveledCompactionStrategyTest (CASSANDRA-12202)
 + * Fix failing COPY TO STDOUT (CASSANDRA-12497)
 + * Fix ColumnCounter::countAll behaviour for reverse queries (CASSANDRA-13222)
 + * Exceptions encountered calling getSeeds() breaks OTC thread 
(CASSANDRA-13018)
 + * Fix negative mean latency metric (CASSANDRA-12876)
 + * Use only one file pointer when creating commitlog segments 
(CASSANDRA-12539)
 +Merged from 2.1:
 + * Remove unused repositories (CASSANDRA-13278)
 + * Log stacktrace of uncaught exceptions (CASSANDRA-13108)
 + * Use portable stderr for java error in startup (CASSANDRA-13211)
 + * Fix Thread Leak in OutboundTcpConnection (CASSANDRA-13204)
 + * Coalescing strategy can enter infinite loop (CASSANDRA-13159)
 +
 +
 +3.10
 + * Fix secondary index queries regression (CASSANDRA-13013)
 + * Add duration type to the protocol V5 (CASSANDRA-12850)
 + * Fix duration type validation (CASSANDRA-13143)
 + * Fix flaky GcCompactionTest (CASSANDRA-12664)
 + * Fix TestHintedHandoff.hintedhandoff_decom_test (CASSANDRA-13058)
 + * Fixed query monitoring for range queries (CASSANDRA-13050)
 + * Remove outboundBindAny configuration property (CASSANDRA-12673)
 + * Use correct bounds for all-data range when filtering (CASSANDRA-12666)
 + * Remove timing window in test case (CASSANDRA-12875)
 + * Resolve unit testing without JCE security libraries installed 
(CASSANDRA-12945)
 + * Fix inconsistencies in cassandra-stress load balancing policy 
(CASSANDRA-12919)
 + * Fix validation of non-frozen UDT cells (CASSANDRA-12916)
 + * Don't shut down socket input/output on StreamSession (CASSANDRA-12903)
 + * Fix Murmur3PartitionerTest (CASSANDRA-12858)
 + * Move cqlsh syntax rules into separate module and allow easier 
customization (CASSANDRA-12897)
 + * Fix CommitLogSegmentManagerTest (CASSANDRA-12283)
 + * Fix cassandra-stress truncate option (CASSANDRA-12695)
 + * Fix crossNode value when receiving messages (CASSANDRA-12791)
 + * Don't load MX4J beans twice (CASSANDRA-12869)
 + * Extend native protocol request flags, add versions to SUPPORTED, and 
introduce ProtocolVersion enum (CASSANDRA-12838)
 + * Set JOINING mode when running pre-join tasks (CASSANDRA-12836)
 + * remove net.mintern.primitive library due to license issue (CASSANDRA-12845)
 + * Properly format IPv6 addresses when logging JMX service URL 
(CASSANDRA-12454)
 + * Optimize the vnode allocation for single replica per DC (CASSANDRA-12777)
 + * Use non-token restrictions for bounds when token restrictions are 
overridden (CASSANDRA-12419)
 + * Fix CQLSH auto completion for PER PARTITION LIMIT (CASSANDRA-12803)
 + * Use different build directories for Eclipse and Ant (CASSANDRA-12466)
 + * Avoid potential AttributeError in cqlsh due 

[01/10] cassandra git commit: Discard in-flight shadow round responses

2017-03-22 Thread jkni
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 06316df54 -> bf0906b92
  refs/heads/cassandra-3.0 f4ba9083e -> 2836a644a
  refs/heads/cassandra-3.11 5484bd1ac -> ec9ce3dfb
  refs/heads/trunk f5e0a7cdb -> 8b74ae4b6


Discard in-flight shadow round responses

patch by Stefan Podkowinski; reviewed by Joel Knighton and Jason Brown for 
CASSANDRA-12653


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bf0906b9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bf0906b9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bf0906b9

Branch: refs/heads/cassandra-2.2
Commit: bf0906b92cf65161d828e31bc46436d427bbb4b8
Parents: 06316df
Author: Stefan Podkowinski 
Authored: Mon Sep 19 13:56:54 2016 +0200
Committer: Joel Knighton 
Committed: Wed Mar 22 13:08:28 2017 -0500

--
 CHANGES.txt |  1 +
 .../gms/GossipDigestAckVerbHandler.java | 26 +---
 src/java/org/apache/cassandra/gms/Gossiper.java | 62 +++-
 .../apache/cassandra/service/MigrationTask.java | 12 ++--
 .../cassandra/service/StorageService.java   | 16 +++--
 5 files changed, 79 insertions(+), 38 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bf0906b9/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 27dd343..df2421d 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.10
+ * Discard in-flight shadow round responses (CASSANDRA-12653)
  * Don't anti-compact repaired data to avoid inconsistencies (CASSANDRA-13153)
  * Wrong logger name in AnticompactionTask (CASSANDRA-13343)
  * Fix queries updating multiple time the same list (CASSANDRA-13130)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bf0906b9/src/java/org/apache/cassandra/gms/GossipDigestAckVerbHandler.java
--
diff --git a/src/java/org/apache/cassandra/gms/GossipDigestAckVerbHandler.java 
b/src/java/org/apache/cassandra/gms/GossipDigestAckVerbHandler.java
index 9f69a94..59060f8 100644
--- a/src/java/org/apache/cassandra/gms/GossipDigestAckVerbHandler.java
+++ b/src/java/org/apache/cassandra/gms/GossipDigestAckVerbHandler.java
@@ -51,21 +51,31 @@ public class GossipDigestAckVerbHandler implements IVerbHandler<GossipDigestAck>
 Map<InetAddress, EndpointState> epStateMap = gDigestAckMessage.getEndpointStateMap();
 logger.trace("Received ack with {} digests and {} states", 
gDigestList.size(), epStateMap.size());
 
-if (epStateMap.size() > 0)
-{
-/* Notify the Failure Detector */
-Gossiper.instance.notifyFailureDetector(epStateMap);
-Gossiper.instance.applyStateLocally(epStateMap);
-}
-
 if (Gossiper.instance.isInShadowRound())
 {
 if (logger.isDebugEnabled())
 logger.debug("Finishing shadow round with {}", from);
-Gossiper.instance.finishShadowRound();
+Gossiper.instance.finishShadowRound(epStateMap);
 return; // don't bother doing anything else, we have what we came 
for
 }
 
+if (epStateMap.size() > 0)
+{
+// Ignore any GossipDigestAck messages that we handle before a
+// regular GossipDigestSyn has been sent.
+// This will prevent Acks from leaking over from the shadow round
+// that are not actually part of the regular gossip conversation.
+if ((System.nanoTime() - Gossiper.instance.firstSynSendAt) < 0 || 
Gossiper.instance.firstSynSendAt == 0)
+{
+if (logger.isTraceEnabled())
+logger.trace("Ignoring unrequested GossipDigestAck from 
{}", from);
+return;
+}
+
+/* Notify the Failure Detector */
+Gossiper.instance.notifyFailureDetector(epStateMap);
+Gossiper.instance.applyStateLocally(epStateMap);
+}
+
 /* Get the state required to send to this gossipee - construct GossipDigestAck2Message */
 Map<InetAddress, EndpointState> deltaEpStateMap = new HashMap<InetAddress, EndpointState>();
 for (GossipDigest gDigest : gDigestList)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bf0906b9/src/java/org/apache/cassandra/gms/Gossiper.java
--
diff --git a/src/java/org/apache/cassandra/gms/Gossiper.java 
b/src/java/org/apache/cassandra/gms/Gossiper.java
index 06b14c4..c2eccba 100644
--- a/src/java/org/apache/cassandra/gms/Gossiper.java
+++ b/src/java/org/apache/cassandra/gms/Gossiper.java
@@ -30,6 +30,7 @@ import javax.management.ObjectName;
 
 import 
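
The guard added in GossipDigestAckVerbHandler above ignores acks that arrive before the first real SYN by testing `(System.nanoTime() - Gossiper.instance.firstSynSendAt) < 0`. A standalone sketch (hypothetical class name, not Cassandra code) of why the subtraction form is used rather than comparing the two timestamps directly:

```java
// Hypothetical standalone sketch of the overflow-safe ordering idiom used
// in the guard above. System.nanoTime() values may wrap around, so relative
// ordering must be computed via subtraction, not direct comparison.
public class NanoTimeOrdering {
    // true iff 'now' is before 'reference' in nanoTime order
    static boolean isBefore(long now, long reference) {
        return now - reference < 0;
    }

    public static void main(String[] args) {
        long t0 = Long.MAX_VALUE - 10;  // timestamp just before wraparound
        long t1 = t0 + 20;              // 20ns later; wraps to a negative value
        System.out.println(isBefore(t1, t0)); // false: t1 is after t0
        System.out.println(t1 < t0);          // true: direct comparison is fooled
    }
}
```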

[04/10] cassandra git commit: Discard in-flight shadow round responses

2017-03-22 Thread jkni
Discard in-flight shadow round responses

patch by Stefan Podkowinski; reviewed by Joel Knighton and Jason Brown for 
CASSANDRA-12653


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bf0906b9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bf0906b9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bf0906b9

Branch: refs/heads/trunk
Commit: bf0906b92cf65161d828e31bc46436d427bbb4b8
Parents: 06316df
Author: Stefan Podkowinski 
Authored: Mon Sep 19 13:56:54 2016 +0200
Committer: Joel Knighton 
Committed: Wed Mar 22 13:08:28 2017 -0500

--
 CHANGES.txt |  1 +
 .../gms/GossipDigestAckVerbHandler.java | 26 +---
 src/java/org/apache/cassandra/gms/Gossiper.java | 62 +++-
 .../apache/cassandra/service/MigrationTask.java | 12 ++--
 .../cassandra/service/StorageService.java   | 16 +++--
 5 files changed, 79 insertions(+), 38 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bf0906b9/src/java/org/apache/cassandra/gms/Gossiper.java
--
diff --git a/src/java/org/apache/cassandra/gms/Gossiper.java 
b/src/java/org/apache/cassandra/gms/Gossiper.java
index 06b14c4..c2eccba 100644
--- a/src/java/org/apache/cassandra/gms/Gossiper.java
+++ b/src/java/org/apache/cassandra/gms/Gossiper.java
@@ -30,6 +30,7 @@ import javax.management.ObjectName;
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.collect.ImmutableList;
+import com.google.common.collect.ImmutableMap;
 import com.google.common.util.concurrent.Uninterruptibles;
 
 import org.apache.cassandra.utils.Pair;
@@ 

[07/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2017-03-22 Thread jkni
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2836a644
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2836a644
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2836a644

Branch: refs/heads/trunk
Commit: 2836a644a357c0992ba89622f04668422ce2761a
Parents: f4ba908 bf0906b
Author: Joel Knighton 
Authored: Wed Mar 22 13:13:44 2017 -0500
Committer: Joel Knighton 
Committed: Wed Mar 22 13:18:59 2017 -0500

--
 CHANGES.txt |  1 +
 .../gms/GossipDigestAckVerbHandler.java | 26 ++---
 src/java/org/apache/cassandra/gms/Gossiper.java | 56 ++--
 .../apache/cassandra/service/MigrationTask.java | 12 ++---
 .../cassandra/service/StorageService.java   | 17 +++---
 5 files changed, 73 insertions(+), 39 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2836a644/CHANGES.txt
--
diff --cc CHANGES.txt
index 6021315,df2421d..9140c73
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,27 -1,9 +1,28 @@@
 -2.2.10
 +3.0.13
 + * Fix CONTAINS filtering for null collections (CASSANDRA-13246)
 + * Applying: Use a unique metric reservoir per test run when using 
Cassandra-wide metrics residing in MBeans (CASSANDRA-13216)
 + * Propagate row deletions in 2i tables on upgrade (CASSANDRA-13320)
 + * Slice.isEmpty() returns false for some empty slices (CASSANDRA-13305)
 + * Add formatted row output to assertEmpty in CQL Tester (CASSANDRA-13238)
 +Merged from 2.2:
+  * Discard in-flight shadow round responses (CASSANDRA-12653)
   * Don't anti-compact repaired data to avoid inconsistencies (CASSANDRA-13153)
   * Wrong logger name in AnticompactionTask (CASSANDRA-13343)
 + * Commitlog replay may fail if last mutation is within 4 bytes of end of 
segment (CASSANDRA-13282)
   * Fix queries updating multiple time the same list (CASSANDRA-13130)
   * Fix GRANT/REVOKE when keyspace isn't specified (CASSANDRA-13053)
 +
 +
 +3.0.12
 + * Prevent data loss on upgrade 2.1 - 3.0 by adding component separator to 
LogRecord absolute path (CASSANDRA-13294)
 + * Improve testing on macOS by eliminating sigar logging (CASSANDRA-13233)
 + * Cqlsh copy-from should error out when csv contains invalid data for 
collections (CASSANDRA-13071)
 + * Update c.yaml doc for offheap memtables (CASSANDRA-13179)
 + * Faster StreamingHistogram (CASSANDRA-13038)
 + * Legacy deserializer can create unexpected boundary range tombstones 
(CASSANDRA-13237)
 + * Remove unnecessary assertion from AntiCompactionTest (CASSANDRA-13070)
 + * Fix cqlsh COPY for dates before 1900 (CASSANDRA-13185)
 +Merged from 2.2:
   * Avoid race on receiver by starting streaming sender thread after sending 
init message (CASSANDRA-12886)
   * Fix "multiple versions of ant detected..." when running ant test 
(CASSANDRA-13232)
   * Coalescing strategy sleeps too much (CASSANDRA-13090)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2836a644/src/java/org/apache/cassandra/gms/Gossiper.java
--
diff --cc src/java/org/apache/cassandra/gms/Gossiper.java
index cbfa750,c2eccba..802ff9c
--- a/src/java/org/apache/cassandra/gms/Gossiper.java
+++ b/src/java/org/apache/cassandra/gms/Gossiper.java
@@@ -124,6 -128,9 +128,8 @@@ public class Gossiper implements IFailu
 private final Map<InetAddress, Long> expireTimeEndpointMap = new ConcurrentHashMap<InetAddress, Long>();
  
  private volatile boolean inShadowRound = false;
 -
+ // endpoint states as gathered during shadow round
+ private final Map endpointShadowStateMap = 
new ConcurrentHashMap<>();
  
  private volatile long lastProcessedMessageAt = System.currentTimeMillis();
  
@@@ -818,28 -826,6 +827,20 @@@
  return endpointStateMap.get(ep);
  }
  
 +public boolean valuesEqual(InetAddress ep1, InetAddress ep2, 
ApplicationState as)
 +{
 +EndpointState state1 = getEndpointStateForEndpoint(ep1);
 +EndpointState state2 = getEndpointStateForEndpoint(ep2);
 +
 +if (state1 == null || state2 == null)
 +return false;
 +
 +VersionedValue value1 = state1.getApplicationState(as);
 +VersionedValue value2 = state2.getApplicationState(as);
 +
 +return !(value1 == null || value2 == null) && 
value1.value.equals(value2.value);
 +}
 +
- // removes ALL endpoint states; should only be called after shadow gossip
- public void resetEndpointStateMap()
- {
- endpointStateMap.clear();
- unreachableEndpoints.clear();
- liveEndpoints.clear();
- }
- 
  public Set

[jira] [Commented] (CASSANDRA-13333) Cassandra does not start on Windows due to 'JNA link failure'

2017-03-22 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15937079#comment-15937079
 ] 

Benjamin Lerer commented on CASSANDRA-13333:


I force-pushed a new patch.
The new patch uses the {{Kernel32}} library to support the {{callGetPid}} 
method natively and keeps the startup check. As the Windows library is not 
the {{c}} one, the patch also renames {{CLibrary}} to {{NativeLibrary}}, as 
the old name was misleading.

||[3.0|https://github.com/apache/cassandra/compare/trunk...blerer:13333-3.0]|[utests|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-13333-3.0-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-13333-3.0-dtest/]|
||[3.11|https://github.com/apache/cassandra/compare/trunk...blerer:13333-3.11]|[utests|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-13333-3.11-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-13333-3.11-dtest/]|
||[trunk|https://github.com/apache/cassandra/compare/trunk...blerer:trunk]|[utests|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-13333-trunk-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-13333-trunk-dtest/]|



> Cassandra does not start on Windows due to 'JNA link failure'
> -
>
> Key: CASSANDRA-13333
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13333
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
>Priority: Blocker
>
> Cassandra 3.0 HEAD does not start on Windows. The only error in the logs is: 
> {{ERROR 16:30:10 JNA failing to initialize properly.}} 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-6908) Dynamic endpoint snitch destabilizes cluster under heavy load

2017-03-22 Thread Shannon Carey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15937013#comment-15937013
 ] 

Shannon Carey commented on CASSANDRA-6908:
--

It looks like I've run into this issue too: 
http://www.mail-archive.com/user@cassandra.apache.org/msg51510.html

My cluster was not under particularly heavy load, although there was higher 
read load in the local DC than the remote DC. Not enough load that the local 
latency was higher than remote, but the snitch apparently started routing my 
requests to the remote DC anyway (though I cannot verify that via the metrics).

> Dynamic endpoint snitch destabilizes cluster under heavy load
> -
>
> Key: CASSANDRA-6908
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6908
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Bartłomiej Romański
>Assignee: Brandon Williams
> Attachments: as-dynamic-snitch-disabled.png
>
>
> We observe that with dynamic snitch disabled our cluster is much more stable 
> than with dynamic snitch enabled.
> We've got a 15 nodes cluster with pretty strong machines (2xE5-2620, 64 GB 
> RAM, 2x480 GB SSD). We mostly do reads (about 300k/s).
> We use Astyanax on client side with TOKEN_AWARE option enabled. It 
> automatically direct read queries to one of the nodes responsible the given 
> token.
> In that case with dynamic snitch disabled Cassandra always handles read 
> locally. With dynamic snitch enabled Cassandra very often decides to proxy 
> the read to some other node. This causes much higher CPU usage and produces 
> much more garbage what results in more often GC pauses (young generation 
> fills up quicker). By "much higher" and "much more" I mean 1.5-2x.
> I'm aware that higher dynamic_snitch_badness_threshold value should solve 
> that issue. The default value is 0.1. I've looked at scores exposed in JMX 
> and the problem is that our values seemed to be completely random. They are 
> between usually 0.5 and 2.0, but changes randomly every time I hit refresh.
> Of course, I can set dynamic_snitch_badness_threshold to 5.0 or something 
> like that, but the result will be similar to simply disabling the dynamic 
> switch at all (that's what we done).
> I've tried to understand what's the logic behind these scores and I'm not 
> sure if I get the idea...
> It's a sum (without any multipliers) of two components:
> - ratio of recent given node latency to recent average node latency
> - something called 'severity', what, if I analyzed the code correctly, is a 
> result of BackgroundActivityMonitor.getIOWait() - it's a ratio of "iowait" 
> CPU time to the whole CPU time as reported in /proc/stats (the ratio is 
> multiplied by 100)
> In our case the second value is something around 0-2% but varies quite 
> heavily every second.
> What's the idea behind simply adding this two values without any multipliers 
> (e.g the second one is in percentage while the first one is not)? Are we sure 
> this is the best possible way of calculating the final score?
> Is there a way too force Cassandra to use (much) longer samples? In our case 
> we probably need that to get stable values. The 'severity' is calculated for 
> each second. The mean latency is calculated based on some magic, hardcoded 
> values (ALPHA = 0.75, WINDOW_SIZE = 100). 
> Am I right that there's no way to tune that without hacking the code?
> I'm aware that there's dynamic_snitch_update_interval_in_ms property in the 
> config file, but that only determines how often the scores are recalculated 
> not how long samples are taken. Is that correct?
> To sum up, It would be really nice to have more control over dynamic snitch 
> behavior or at least have the official option to disable it described in the 
> default config file (it took me some time to discover that we can just 
> disable it instead of hacking with dynamic_snitch_badness_threshold=1000).
> Currently for some scenarios (like ours - optimized cluster, token aware 
> client, heavy load) it causes more harm than good.
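
The scoring concern in the report above can be made concrete with a small sketch (not Cassandra's actual code; names are illustrative): the first term is a dimensionless latency ratio hovering around 1.0, while the second is an iowait percentage on a 0-100 scale, so a tiny iowait blip can swamp a large latency difference.

```java
// Illustrative sketch of the score composition described in the report:
// latencyRatio (dimensionless, ~0.5..2.0) is summed with severity
// (an iowait percentage, 0..100), mixing incompatible scales.
public class SnitchScoreSketch {
    static double score(double nodeLatency, double avgLatency, double iowaitPercent) {
        double latencyRatio = nodeLatency / avgLatency; // recent node vs. average latency
        double severity = iowaitPercent;                // iowait share of CPU time, in percent
        return latencyRatio + severity;                 // no weighting between the two terms
    }

    public static void main(String[] args) {
        // A node twice as slow as average, zero iowait:
        System.out.println(score(2.0, 1.0, 0.0)); // 2.0
        // An average-latency node with just 2% iowait scores worse:
        System.out.println(score(1.0, 1.0, 2.0)); // 3.0
    }
}
```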



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13317) Default logging we ship will incorrectly print "?:?" for "%F:%L" pattern due to includeCallerData being false by default no appender

2017-03-22 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15936987#comment-15936987
 ] 

Ariel Weisberg commented on CASSANDRA-13317:


||Code|utests|dtests||
|[3.11|https://github.com/apache/cassandra/compare/cassandra-3.11...aweisberg:cassandra-13317-3.11?expand=1]|[utests|https://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-cassandra-13317-3.11-testall/1/]|[dtests|https://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-cassandra-13317-3.11-dtest/1/]|

> Default logging we ship will incorrectly print "?:?" for "%F:%L" pattern due 
> to includeCallerData being false by default no appender
> 
>
> Key: CASSANDRA-13317
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13317
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Michael Kjellman
>Assignee: Michael Kjellman
> Fix For: 3.11.x, 4.x
>
> Attachments: 13317_v1.diff
>
>
> We specify the logging pattern as "%-5level [%thread] %date{ISO8601} %F:%L - 
> %msg%n". 
> %F:%L is intended to print the Filename:Line Number. For performance reasons 
> logback (like log4j2) disables tracking line numbers as it requires the 
> entire stack to be materialized every time.
> This causes logs to look like:
> WARN  [main] 2017-03-09 13:27:11,272 ?:? - Protocol Version 5/v5-beta not 
> supported by java driver
> INFO  [main] 2017-03-09 13:27:11,813 ?:? - No commitlog files found; skipping 
> replay
> INFO  [main] 2017-03-09 13:27:12,477 ?:? - Initialized prepared statement 
> caches with 14 MB
> INFO  [main] 2017-03-09 13:27:12,727 ?:? - Initializing system.IndexInfo
> When instead you'd expect something like:
> INFO  [main] 2017-03-09 13:23:44,204 ColumnFamilyStore.java:419 - 
> Initializing system.available_ranges
> INFO  [main] 2017-03-09 13:23:44,210 ColumnFamilyStore.java:419 - 
> Initializing system.transferred_ranges
> INFO  [main] 2017-03-09 13:23:44,215 ColumnFamilyStore.java:419 - 
> Initializing system.views_builds_in_progress
> The fix is to add "<includeCallerData>true</includeCallerData>" to the 
> appender config to enable the line number and caller data tracking.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13370) unittest CipherFactoryTest failed on MacOS

2017-03-22 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15936957#comment-15936957
 ] 

Ariel Weisberg commented on CASSANDRA-13370:


It would be nice to have tests not block on secure random in the environments 
where it blocks for an unfortunate amount of time. I looked and I couldn't 
find a way to have SHA1PRNG or a fast seed generator be the default. I suspect 
there is a configuration out there that will initialize quickly, but I couldn't 
find it.

I would +1 switching to something that works on OS X in the interim.



> unittest CipherFactoryTest failed on MacOS
> --
>
> Key: CASSANDRA-13370
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13370
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Jay Zhuang
>Priority: Minor
>
> Seems like MacOS(El Capitan) doesn't allow writing to {{/dev/urandom}}:
> {code}
> $ echo 1 > /dev/urandom
> echo: write error: operation not permitted
> {code}
> Which is causing CipherFactoryTest failed:
> {code}
> $ ant test -Dtest.name=CipherFactoryTest
> ...
> [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest
> [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest Tests 
> run: 7, Failures: 0, Errors: 7, Skipped: 0, Time elapsed: 2.184 sec
> [junit]
> [junit] Testcase: 
> buildCipher_SameParams(org.apache.cassandra.security.CipherFactoryTest):  
> Caused an ERROR
> [junit] setSeed() failed
> [junit] java.security.ProviderException: setSeed() failed
> [junit] at 
> sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:472)
> [junit] at 
> sun.security.provider.NativePRNG$RandomIO.access$300(NativePRNG.java:331)
> [junit] at 
> sun.security.provider.NativePRNG.engineSetSeed(NativePRNG.java:214)
> [junit] at 
> java.security.SecureRandom.getDefaultPRNG(SecureRandom.java:209)
> [junit] at java.security.SecureRandom.<init>(SecureRandom.java:190)
> [junit] at 
> org.apache.cassandra.security.CipherFactoryTest.setup(CipherFactoryTest.java:50)
> [junit] Caused by: java.io.IOException: Operation not permitted
> [junit] at java.io.FileOutputStream.writeBytes(Native Method)
> [junit] at java.io.FileOutputStream.write(FileOutputStream.java:313)
> [junit] at 
> sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:470)
> ...
> {code}
> I'm able to reproduce the issue on two Mac machines. But not sure if it's 
> affecting all other developers.
> {{-Djava.security.egd=file:/dev/urandom}} was introduced in:
> CASSANDRA-9581
> I would suggest to revert the 
> [change|https://github.com/apache/cassandra/commit/ae179e45327a133248c06019f87615c9cf69f643]
>  as {{pig-test}} is removed ([pig is no longer 
> supported|https://github.com/apache/cassandra/commit/56cfc6ea35d1410f2f5a8ae711ae33342f286d79]).
> Or adding a condition for MacOS in build.xml.
> [~aweisberg] [~jasobrown] any thoughts?
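
One interim workaround along the lines discussed above is to request the pure-Java SHA1PRNG implementation explicitly rather than relying on the platform default, which on macOS resolves to NativePRNG and tries to write seed material back to /dev/urandom (the write that fails with "Operation not permitted"). A minimal sketch (hypothetical class name):

```java
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;

// Sketch: request SHA1PRNG explicitly so seeding stays in pure Java and
// never attempts the /dev/urandom write that EPERMs on macOS.
public class PrngWorkaround {
    static byte[] randomBytes(int n) throws NoSuchAlgorithmException {
        SecureRandom random = SecureRandom.getInstance("SHA1PRNG");
        byte[] bytes = new byte[n];
        random.nextBytes(bytes); // self-seeds on first use, no device write-back
        return bytes;
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        System.out.println(randomBytes(16).length); // prints 16
    }
}
```

This only sidesteps the test failure; whether SHA1PRNG is an acceptable default for the tests is exactly the question raised in the comment above.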



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (CASSANDRA-13186) Create index fails if the primary key is included, but docs claim it is supported

2017-03-22 Thread Aleksandr Sorokoumov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Sorokoumov reassigned CASSANDRA-13186:


Assignee: Aleksandr Sorokoumov

> Create index fails if the primary key is included, but docs claim it is 
> supported
> -
>
> Key: CASSANDRA-13186
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13186
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Ariel Weisberg
>Assignee: Aleksandr Sorokoumov
>
> {noformat}
> cqlsh:foo> CREATE TABLE users (
>...   userid text PRIMARY KEY,
>...   first_name text,
>...   last_name text,
>...   emails set<text>,
>...   top_scores list<int>,
>...   todo map<timestamp, text>
>... );
> cqlsh:foo> create index bar on foo.users (userid, last_name);
> InvalidRequest: Error from server: code=2200 [Invalid query] message="Cannot 
> create secondary index on partition key column userid"
> {noformat}
> {quote}
>  yes, it's a bug in CreateIndexStatement. The check to enforce the PK 
> has only a single component is wrong
> it considers each target in isolation, so it doesn't take into account that 
> you might be creating a custom index on a PK component + something else
> {quote}
> http://docs.datastax.com/en/cql/3.3/cql/cql_reference/cqlCreateIndex.html
> {quote}
> Cassandra supports creating an index on most columns, excluding counter 
> columns but including a clustering column of a compound primary key or on the 
> partition (primary) key itself. 
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (CASSANDRA-13260) Add UDT support to Cassandra stress

2017-03-22 Thread Aleksandr Sorokoumov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Sorokoumov reassigned CASSANDRA-13260:


Assignee: Aleksandr Sorokoumov

> Add UDT support to Cassandra stress
> ---
>
> Key: CASSANDRA-13260
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13260
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Jeremy Hanna
>Assignee: Aleksandr Sorokoumov
>  Labels: lhf, stress
>
> Splitting out UDT support in cassandra stress from CASSANDRA-9556.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (CASSANDRA-10968) When taking snapshot, manifest.json contains incorrect or no files when column family has secondary indexes

2017-03-22 Thread Aleksandr Sorokoumov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Sorokoumov reassigned CASSANDRA-10968:


Assignee: Aleksandr Sorokoumov

> When taking snapshot, manifest.json contains incorrect or no files when 
> column family has secondary indexes
> ---
>
> Key: CASSANDRA-10968
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10968
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Fred A
>Assignee: Aleksandr Sorokoumov
>  Labels: lhf
>
> Noticed indeterminate behaviour when taking snapshot on column families that 
> has secondary indexes setup. The created manifest.json created when doing 
> snapshot, sometimes contains no file names at all and sometimes some file 
> names. 
> I don't know if this post is related but that was the only thing I could find:
> http://www.mail-archive.com/user%40cassandra.apache.org/msg42019.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (CASSANDRA-13329) max_hints_delivery_threads does not work

2017-03-22 Thread Aleksandr Sorokoumov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Sorokoumov reassigned CASSANDRA-13329:


Assignee: Aleksandr Sorokoumov

> max_hints_delivery_threads does not work
> 
>
> Key: CASSANDRA-13329
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13329
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Fuud
>Assignee: Aleksandr Sorokoumov
>  Labels: lhf
>
> HintsDispatchExecutor creates JMXEnabledThreadPoolExecutor with corePoolSize  
> == 1 and maxPoolSize==max_hints_delivery_threads and unbounded 
> LinkedBlockingQueue.
> In this configuration additional threads will not be created.
> Same problem with PerSSTableIndexWriter.
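
The bug pattern described above is inherent to ThreadPoolExecutor: extra threads beyond corePoolSize are only created when the work queue rejects an offer, and an unbounded LinkedBlockingQueue never rejects, so maxPoolSize is effectively ignored. A minimal demonstration (not Cassandra's actual code):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// With an unbounded queue, a ThreadPoolExecutor configured as
// corePoolSize=1, maxPoolSize=8 never grows past one thread:
// submissions beyond the core size are queued, not run in parallel.
public class UnboundedQueuePool {
    static int poolSizeAfterQueueing(int submissions) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 8, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < submissions; i++)
            pool.execute(() -> {
                try { release.await(); } catch (InterruptedException ignored) { }
            });
        int size = pool.getPoolSize(); // extra tasks just pile up in the queue
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return size;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(poolSizeAfterQueueing(20)); // prints 1, not 8
    }
}
```

Using a bounded queue (or a SynchronousQueue with a rejection fallback) is the usual way to make maxPoolSize meaningful.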



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13247) index on udt built failed and no data could be inserted

2017-03-22 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15936928#comment-15936928
 ] 

Andrés de la Peña commented on CASSANDRA-13247:
---

I'm working on an initial version of the patch 
[here|https://github.com/apache/cassandra/compare/trunk...adelapena:13247-trunk].

The patch makes the CQL validation layer forbid {{SELECT}} restrictions and 
{{CREATE INDEX}} over non-frozen UDT columns, which are unsupported 
operations. Both operations remain perfectly possible with frozen UDTs.

> index on udt built failed and no data could be inserted
> ---
>
> Key: CASSANDRA-13247
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13247
> Project: Cassandra
>  Issue Type: Bug
>Reporter: mashudong
>Assignee: Andrés de la Peña
>Priority: Critical
> Attachments: udt_index.txt
>
>
> index on udt built failed and no data could be inserted
> steps to reproduce:
> CREATE KEYSPACE ks1 WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': '2'}  AND durable_writes = true;
> CREATE TYPE ks1.address (
> street text,
> city text,
> zip_code int,
> phones set
> );
> CREATE TYPE ks1.fullname (
> firstname text,
> lastname text
> );
> CREATE TABLE ks1.users (
> id uuid PRIMARY KEY,
> addresses map,
> age int,
> direct_reports set,
> name fullname
> ) WITH bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> SELECT * FROM users where name = { firstname : 'first' , lastname : 'last'} 
> allow filtering;
> ReadFailure: Error from server: code=1300 [Replica(s) failed to execute read] 
> message="Operation failed - received 0 responses and 1 failures" 
> info={'failures': 1, 'received_responses': 0, 'required_responses': 1, 
> 'consistency': 'ONE'}
> WARN  [ReadStage-2] 2017-02-22 16:59:33,392 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[ReadStage-2,5,main]: {}
> java.lang.AssertionError: Only CONTAINS and CONTAINS_KEY are supported for 
> 'complex' types
>   at 
> org.apache.cassandra.db.filter.RowFilter$SimpleExpression.isSatisfiedBy(RowFilter.java:683)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToRow(RowFilter.java:303)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:120) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:110) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:41) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.transform.Transformation.add(Transformation.java:162) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:128)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:292)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:281)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:96)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:289)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:145)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:138)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:134)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:76) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at 
> 

[jira] [Updated] (CASSANDRA-13370) unittest CipherFactoryTest failed on MacOS

2017-03-22 Thread Jay Zhuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Zhuang updated CASSANDRA-13370:
---
Description: 
Seems like MacOS (El Capitan) doesn't allow writing to {{/dev/urandom}}:
{code}
$ echo 1 > /dev/urandom
echo: write error: operation not permitted
{code}
This causes CipherFactoryTest to fail:
{code}
$ ant test -Dtest.name=CipherFactoryTest
...
[junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest
[junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest Tests 
run: 7, Failures: 0, Errors: 7, Skipped: 0, Time elapsed: 2.184 sec
[junit]
[junit] Testcase: 
buildCipher_SameParams(org.apache.cassandra.security.CipherFactoryTest):  
Caused an ERROR
[junit] setSeed() failed
[junit] java.security.ProviderException: setSeed() failed
[junit] at 
sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:472)
[junit] at 
sun.security.provider.NativePRNG$RandomIO.access$300(NativePRNG.java:331)
[junit] at 
sun.security.provider.NativePRNG.engineSetSeed(NativePRNG.java:214)
[junit] at 
java.security.SecureRandom.getDefaultPRNG(SecureRandom.java:209)
[junit] at java.security.SecureRandom.(SecureRandom.java:190)
[junit] at 
org.apache.cassandra.security.CipherFactoryTest.setup(CipherFactoryTest.java:50)
[junit] Caused by: java.io.IOException: Operation not permitted
[junit] at java.io.FileOutputStream.writeBytes(Native Method)
[junit] at java.io.FileOutputStream.write(FileOutputStream.java:313)
[junit] at 
sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:470)
...
{code}

I'm able to reproduce the issue on two Mac machines, but I'm not sure whether 
it affects all other developers.

{{-Djava.security.egd=file:/dev/urandom}} was introduced in:
CASSANDRA-9581

I would suggest reverting the 
[change|https://github.com/apache/cassandra/commit/ae179e45327a133248c06019f87615c9cf69f643]
 since {{pig-test}} is removed ([pig is no longer 
supported|https://github.com/apache/cassandra/commit/56cfc6ea35d1410f2f5a8ae711ae33342f286d79]),
or adding a condition for MacOS in build.xml.

[~aweisberg] [~jasobrown] any thoughts?
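As a sketch of the build.xml option (the property name {{test.egd.arg}} is hypothetical, not taken from Cassandra's actual build file), an Ant {{condition}} task can apply the egd override everywhere except macOS:

```xml
<!-- Hypothetical sketch: skip the egd override on macOS, where writing to the
     seed device is not permitted. Property name is illustrative. -->
<condition property="test.egd.arg"
           value=""
           else="-Djava.security.egd=file:/dev/urandom">
  <os family="mac"/>
</condition>
<!-- then, inside the junit task: <jvmarg line="${test.egd.arg}"/> -->
```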

  was:
Seems like MacOS(El Capitan) doesn't allow writing to {{/dev/urandom}}:
{code}
$ echo 1 > /dev/urandom
echo: write error: operation not permitted
{code}
Which is causing CipherFactoryTest failed:
{code}
$ ant test -Dtest.name=CipherFactoryTest
...
[junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest
[junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest Tests 
run: 7, Failures: 0, Errors: 7, Skipped: 0, Time elapsed: 2.184 sec
[junit]
[junit] Testcase: 
buildCipher_SameParams(org.apache.cassandra.security.CipherFactoryTest):  
Caused an ERROR
[junit] setSeed() failed
[junit] java.security.ProviderException: setSeed() failed
[junit] at 
sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:472)
[junit] at 
sun.security.provider.NativePRNG$RandomIO.access$300(NativePRNG.java:331)
[junit] at 
sun.security.provider.NativePRNG.engineSetSeed(NativePRNG.java:214)
[junit] at 
java.security.SecureRandom.getDefaultPRNG(SecureRandom.java:209)
[junit] at java.security.SecureRandom.(SecureRandom.java:190)
[junit] at 
org.apache.cassandra.security.CipherFactoryTest.setup(CipherFactoryTest.java:50)
[junit] Caused by: java.io.IOException: Operation not permitted
[junit] at java.io.FileOutputStream.writeBytes(Native Method)
[junit] at java.io.FileOutputStream.write(FileOutputStream.java:313)
[junit] at 
sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:470)
...
{code}

I'm able to reproduce the issue on two Mac machines. But not sure if it's 
affecting all other developers.

{{-Djava.security.egd=file:/dev/urandom}} was introduced in:
CASSANDRA-9581

I would suggest to revert the 
[change|https://github.com/apache/cassandra/commit/ae179e45327a133248c06019f87615c9cf69f643]
 as {{pig-test}} is removed ([pig is no longer 
supported|https://github.com/apache/cassandra/commit/56cfc6ea35d1410f2f5a8ae711ae33342f286d79]).
Or adding a condition for MacOS.

[~aweisberg] [~jasobrown] any thoughts?


> unittest CipherFactoryTest failed on MacOS
> --
>
> Key: CASSANDRA-13370
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13370
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Jay Zhuang
>Priority: Minor
>
> Seems like MacOS (El Capitan) doesn't allow writing to {{/dev/urandom}}:
> {code}
> $ echo 1 > /dev/urandom
> echo: write error: operation not permitted
> {code}
> This causes CipherFactoryTest to fail:
> {code}
> $ ant test 

[jira] [Created] (CASSANDRA-13370) unittest CipherFactoryTest failed on MacOS

2017-03-22 Thread Jay Zhuang (JIRA)
Jay Zhuang created CASSANDRA-13370:
--

 Summary: unittest CipherFactoryTest failed on MacOS
 Key: CASSANDRA-13370
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13370
 Project: Cassandra
  Issue Type: Bug
  Components: Testing
Reporter: Jay Zhuang
Priority: Minor


Seems like MacOS (El Capitan) doesn't allow writing to {{/dev/urandom}}:
{code}
$ echo 1 > /dev/urandom
echo: write error: operation not permitted
{code}
This causes CipherFactoryTest to fail:
{code}
$ ant test -Dtest.name=CipherFactoryTest
...
[junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest
[junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest Tests 
run: 7, Failures: 0, Errors: 7, Skipped: 0, Time elapsed: 2.184 sec
[junit]
[junit] Testcase: 
buildCipher_SameParams(org.apache.cassandra.security.CipherFactoryTest):  
Caused an ERROR
[junit] setSeed() failed
[junit] java.security.ProviderException: setSeed() failed
[junit] at 
sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:472)
[junit] at 
sun.security.provider.NativePRNG$RandomIO.access$300(NativePRNG.java:331)
[junit] at 
sun.security.provider.NativePRNG.engineSetSeed(NativePRNG.java:214)
[junit] at 
java.security.SecureRandom.getDefaultPRNG(SecureRandom.java:209)
[junit] at java.security.SecureRandom.(SecureRandom.java:190)
[junit] at 
org.apache.cassandra.security.CipherFactoryTest.setup(CipherFactoryTest.java:50)
[junit] Caused by: java.io.IOException: Operation not permitted
[junit] at java.io.FileOutputStream.writeBytes(Native Method)
[junit] at java.io.FileOutputStream.write(FileOutputStream.java:313)
[junit] at 
sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:470)
...
{code}

I'm able to reproduce the issue on two Mac machines, but I'm not sure whether 
it affects all other developers.

{{-Djava.security.egd=file:/dev/urandom}} was introduced in:
CASSANDRA-9581

I would suggest reverting the 
[change|https://github.com/apache/cassandra/commit/ae179e45327a133248c06019f87615c9cf69f643]
 since {{pig-test}} is removed ([pig is no longer 
supported|https://github.com/apache/cassandra/commit/56cfc6ea35d1410f2f5a8ae711ae33342f286d79]),
or adding a condition for MacOS.

[~aweisberg] [~jasobrown] any thoughts?
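A small probe illustrating the failure mode (assumption from the stack trace above: NativePRNG's setSeed() writes to the platform seed device; the class name here is mine):

```java
import java.security.SecureRandom;

public class SeedProbe {
    // Returns true if the platform lets NativePRNG seed itself by writing to
    // the seed device. On macOS El Capitan the write throws a
    // ProviderException (as in the trace above); on Linux it normally works.
    public static boolean canSetSeed() {
        try {
            SecureRandom.getInstance("NativePRNG").setSeed(new byte[] {1, 2, 3});
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("setSeed permitted: " + canSetSeed());
    }
}
```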



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13369) If there are multiple values for a key, CQL grammar choses last value. This should not be silent or should not be allowed.

2017-03-22 Thread Nachiket Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nachiket Patil updated CASSANDRA-13369:
---
Description: 
If multiple values are specified for the same key in a CQL map literal, the 
grammar parses the map and the last value for the key wins. This behavior is bad.
e.g. 
{code}
CREATE KEYSPACE Excalibur WITH REPLICATION = {'class': 
'NetworkTopologyStrategy', 'dc1': 2, 'dc1': 5};
{code}

Parsing this statement, 'dc1' gets RF = 5. This can be catastrophic and may even 
result in data loss. This behavior should not be silent, or should not be allowed 
at all.


  was:
If through CQL, multiple values are specified for a key, grammar parses the map 
and last value for the key wins. This behavior is bad.
e.g. 
CREATE KEYSPACE Excalibur WITH REPLICATION = {'class': 
'NetworkTopologyStrategy', 'dc1': 2, 'dc1': 5};

Parsing this statement, 'dc1' gets RF = 5. This can be catastrophic, may even 
result in loss of data. This behavior should not be silent or not be allowed at 
all.  



> If there are multiple values for a key, CQL grammar choses last value. This 
> should not be silent or should not be allowed.
> --
>
> Key: CASSANDRA-13369
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13369
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Nachiket Patil
>Assignee: Nachiket Patil
>Priority: Minor
> Fix For: 4.0
>
>
> If multiple values are specified for the same key in a CQL map literal, the 
> grammar parses the map and the last value for the key wins. This behavior is bad.
> e.g. 
> {code}
> CREATE KEYSPACE Excalibur WITH REPLICATION = {'class': 
> 'NetworkTopologyStrategy', 'dc1': 2, 'dc1': 5};
> {code}
> Parsing this statement, 'dc1' gets RF = 5. This can be catastrophic and may 
> even result in data loss. This behavior should not be silent, or should not be 
> allowed at all.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13369) If there are multiple values for a key, CQL grammar choses last value. This should not be silent or should not be allowed.

2017-03-22 Thread Nachiket Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nachiket Patil updated CASSANDRA-13369:
---
Description: 
If through CQL, multiple values are specified for a key, grammar parses the map 
and last value for the key wins. This behavior is bad.
e.g. 
```
CREATE KEYSPACE EXcalibur WITH REPLICATION = {'class': 
'NetworkTopologyStrategy', 'dc1': 2, 'dc1': 5};
```
Parsing this statement, 'dc1' gets RF = 5. This can be catastrophic, may even 
result in loss of data. This behavior should not be silent or not be allowed at 
all.  


  was:
If through CQL, multiple values are specified for a key, grammar parses the map 
and last value for the key wins. This behavior is bad.
e.g. 
```
$ CREATE KEYSPACE "Excalibur" WITH REPLICATION = {'class' : 
'NetworkTopologyStrategy', 'dc1' : 2, 'dc1' : 5};
```
Parsing this statement, 'dc1' gets RF = 5. This can be catastrophic, may even 
result in loss of data. This behavior should not be silent or not be allowed at 
all.  



> If there are multiple values for a key, CQL grammar choses last value. This 
> should not be silent or should not be allowed.
> --
>
> Key: CASSANDRA-13369
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13369
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Nachiket Patil
>Assignee: Nachiket Patil
>Priority: Minor
> Fix For: 4.0
>
>
> If through CQL, multiple values are specified for a key, grammar parses the 
> map and last value for the key wins. This behavior is bad.
> e.g. 
> ```
> CREATE KEYSPACE EXcalibur WITH REPLICATION = {'class': 
> 'NetworkTopologyStrategy', 'dc1': 2, 'dc1': 5};
> ```
> Parsing this statement, 'dc1' gets RF = 5. This can be catastrophic, may even 
> result in loss of data. This behavior should not be silent or not be allowed 
> at all.  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13369) If there are multiple values for a key, CQL grammar choses last value. This should not be silent or should not be allowed.

2017-03-22 Thread Nachiket Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nachiket Patil updated CASSANDRA-13369:
---
Description: 
If through CQL, multiple values are specified for a key, grammar parses the map 
and last value for the key wins. This behavior is bad.
e.g. 
CREATE KEYSPACE Excalibur WITH REPLICATION = {'class': 
'NetworkTopologyStrategy', 'dc1': 2, 'dc1': 5};

Parsing this statement, 'dc1' gets RF = 5. This can be catastrophic, may even 
result in loss of data. This behavior should not be silent or not be allowed at 
all.  


  was:
If through CQL, multiple values are specified for a key, grammar parses the map 
and last value for the key wins. This behavior is bad.
e.g. 
```
CREATE KEYSPACE EXcalibur WITH REPLICATION = {'class': 
'NetworkTopologyStrategy', 'dc1': 2, 'dc1': 5};
```
Parsing this statement, 'dc1' gets RF = 5. This can be catastrophic, may even 
result in loss of data. This behavior should not be silent or not be allowed at 
all.  



> If there are multiple values for a key, CQL grammar choses last value. This 
> should not be silent or should not be allowed.
> --
>
> Key: CASSANDRA-13369
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13369
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Nachiket Patil
>Assignee: Nachiket Patil
>Priority: Minor
> Fix For: 4.0
>
>
> If through CQL, multiple values are specified for a key, grammar parses the 
> map and last value for the key wins. This behavior is bad.
> e.g. 
> CREATE KEYSPACE Excalibur WITH REPLICATION = {'class': 
> 'NetworkTopologyStrategy', 'dc1': 2, 'dc1': 5};
> Parsing this statement, 'dc1' gets RF = 5. This can be catastrophic, may even 
> result in loss of data. This behavior should not be silent or not be allowed 
> at all.  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13369) If there are multiple values for a key, CQL grammar choses last value. This should not be silent or should not be allowed.

2017-03-22 Thread Nachiket Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nachiket Patil updated CASSANDRA-13369:
---
Description: 
If through CQL, multiple values are specified for a key, grammar parses the map 
and last value for the key wins. This behavior is bad.
e.g. 
```
$ CREATE KEYSPACE "Excalibur" WITH REPLICATION = {'class' : 
'NetworkTopologyStrategy', 'dc1' : 2, 'dc1' : 5};
```
Parsing this statement, 'dc1' gets RF = 5. This can be catastrophic, may even 
result in loss of data. This behavior should not be silent or not be allowed at 
all.  


  was:
If through CQL, multiple values are specified for a key, grammar parses the map 
and last value for the key wins. This behavior is bad.
e.g. 
$ CREATE KEYSPACE "Excalibur" WITH REPLICATION = {'class' : 
'NetworkTopologyStrategy', 'dc1' : 2, 'dc1' : 5};

Parsing this statement, 'dc1' gets RF = 5. This can be catastrophic, may even 
result in loss of data. This behavior should not be silent or not be allowed at 
all.  



> If there are multiple values for a key, CQL grammar choses last value. This 
> should not be silent or should not be allowed.
> --
>
> Key: CASSANDRA-13369
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13369
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Nachiket Patil
>Assignee: Nachiket Patil
>Priority: Minor
> Fix For: 4.0
>
>
> If through CQL, multiple values are specified for a key, grammar parses the 
> map and last value for the key wins. This behavior is bad.
> e.g. 
> ```
> $ CREATE KEYSPACE "Excalibur" WITH REPLICATION = {'class' : 
> 'NetworkTopologyStrategy', 'dc1' : 2, 'dc1' : 5};
> ```
> Parsing this statement, 'dc1' gets RF = 5. This can be catastrophic, may even 
> result in loss of data. This behavior should not be silent or not be allowed 
> at all.  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13369) If there are multiple values for a key, CQL grammar choses last value. This should not be silent or should not be allowed.

2017-03-22 Thread Nachiket Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nachiket Patil updated CASSANDRA-13369:
---
Description: 
If through CQL, multiple values are specified for a key, grammar parses the map 
and last value for the key wins. This behavior is bad.
e.g. 
$ CREATE KEYSPACE "Excalibur" WITH REPLICATION = {'class' : 
'NetworkTopologyStrategy', 'dc1' : 2, 'dc1' : 5};

Parsing this statement, 'dc1' gets RF = 5. This can be catastrophic, may even 
result in loss of data. This behavior should not be silent or not be allowed at 
all.  


  was:
If through CQL, multiple values are specified for a key, grammar parses the map 
and last value for the key wins. This behavior is bad.
e.g. 
`$ CREATE KEYSPACE "Excalibur"
  WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'dc1' : 2, 'dc1' : 
5};`

Parsing this statement, 'dc1' gets RF = 5. This can be catastrophic, may even 
result in loss of data. This behavior should not be silent or not be allowed at 
all.  



> If there are multiple values for a key, CQL grammar choses last value. This 
> should not be silent or should not be allowed.
> --
>
> Key: CASSANDRA-13369
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13369
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Nachiket Patil
>Assignee: Nachiket Patil
>Priority: Minor
> Fix For: 4.0
>
>
> If through CQL, multiple values are specified for a key, grammar parses the 
> map and last value for the key wins. This behavior is bad.
> e.g. 
> $ CREATE KEYSPACE "Excalibur" WITH REPLICATION = {'class' : 
> 'NetworkTopologyStrategy', 'dc1' : 2, 'dc1' : 5};
> Parsing this statement, 'dc1' gets RF = 5. This can be catastrophic, may even 
> result in loss of data. This behavior should not be silent or not be allowed 
> at all.  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (CASSANDRA-13369) If there are multiple values for a key, CQL grammar choses last value. This should not be silent or should not be allowed.

2017-03-22 Thread Nachiket Patil (JIRA)
Nachiket Patil created CASSANDRA-13369:
--

 Summary: If there are multiple values for a key, CQL grammar 
choses last value. This should not be silent or should not be allowed.
 Key: CASSANDRA-13369
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13369
 Project: Cassandra
  Issue Type: Bug
  Components: CQL
Reporter: Nachiket Patil
Assignee: Nachiket Patil
Priority: Minor
 Fix For: 4.0


If multiple values are specified for the same key in a CQL map literal, the 
grammar parses the map and the last value for the key wins. This behavior is bad.
e.g. 
{code}
CREATE KEYSPACE "Excalibur"
  WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'dc1' : 2, 'dc1' : 5};
{code}

Parsing this statement, 'dc1' gets RF = 5. This can be catastrophic and may even 
result in data loss. This behavior should not be silent, or should not be allowed 
at all.
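A minimal sketch of the non-silent alternative proposed here: failing fast on a duplicate key while building the map, instead of letting the last value win (illustrative Java, not Cassandra's actual grammar code).

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class StrictMapLiteral {
    // Builds a map from key/value pairs, rejecting duplicate keys instead of
    // silently keeping the last value (the behavior this report complains about).
    public static Map<String, String> build(String[][] pairs) {
        Map<String, String> m = new LinkedHashMap<>();
        for (String[] kv : pairs) {
            if (m.put(kv[0], kv[1]) != null)
                throw new IllegalArgumentException("duplicate map key: " + kv[0]);
        }
        return m;
    }

    public static void main(String[] args) {
        // Mirrors {'class': 'NetworkTopologyStrategy', 'dc1': 2, 'dc1': 5}
        try {
            build(new String[][] {
                {"class", "NetworkTopologyStrategy"}, {"dc1", "2"}, {"dc1", "5"}});
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```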




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


cassandra-builds git commit: Move cassandra6,7 slaves to ONLINE list

2017-03-22 Thread mshuler
Repository: cassandra-builds
Updated Branches:
  refs/heads/master dc96c0476 -> a018b48b4


Move cassandra6,7 slaves to ONLINE list


Project: http://git-wip-us.apache.org/repos/asf/cassandra-builds/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-builds/commit/a018b48b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-builds/tree/a018b48b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-builds/diff/a018b48b

Branch: refs/heads/master
Commit: a018b48b4746d823d110c24cd97c9272a6508ebf
Parents: dc96c04
Author: Michael Shuler 
Authored: Wed Mar 22 13:38:25 2017 -0500
Committer: Michael Shuler 
Committed: Wed Mar 22 13:38:25 2017 -0500

--
 ASF-slaves.txt | 25 +
 1 file changed, 9 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-builds/blob/a018b48b/ASF-slaves.txt
--
diff --git a/ASF-slaves.txt b/ASF-slaves.txt
index 15cadc5..8a24422 100644
--- a/ASF-slaves.txt
+++ b/ASF-slaves.txt
@@ -29,22 +29,15 @@ Add ssh pub key from puppet deployment repo:
 
 ONLINE Slaves:
 
-cassandra slaves (16G RAM)
-
-cassandra1 - 163.172.83.157 - Ubuntu 16.04 LTS amd64, donated by Datastax
-cassandra2 - 163.172.83.159 - Ubuntu 16.04 LTS amd64, donated by Datastax
-cassandra3 - 163.172.83.161 - Ubuntu 16.04 LTS amd64, donated by Datastax
-cassandra4 - 163.172.83.163 - Ubuntu 16.04 LTS amd64, donated by Datastax
-cassandra5 - 163.172.83.175 - Ubuntu 16.04 LTS amd64, donated by Datastax
-
-
-
-Slaves in progress of being added to pool:
-
-cassandra-large slaves (32G RAM)
-
-cassandra6 - 163.172.71.128 - Ubuntu 16.04 LTS amd64, donated by Datastax
-cassandra7 - 163.172.71.129 - Ubuntu 16.04 LTS amd64, donated by Datastax
+'cassandra' label slaves (16G RAM)
+
+cassandra1 - 163.172.83.157 - Ubuntu 16.04 LTS amd64, 16G RAM, donated by 
Datastax
+cassandra2 - 163.172.83.159 - Ubuntu 16.04 LTS amd64, 16G RAM, donated by 
Datastax
+cassandra3 - 163.172.83.161 - Ubuntu 16.04 LTS amd64, 16G RAM, donated by 
Datastax
+cassandra4 - 163.172.83.163 - Ubuntu 16.04 LTS amd64, 16G RAM, donated by 
Datastax
+cassandra5 - 163.172.83.175 - Ubuntu 16.04 LTS amd64, 16G RAM, donated by 
Datastax
+cassandra6 - 163.172.71.128 - Ubuntu 16.04 LTS amd64, 32G RAM, donated by 
Datastax
+cassandra7 - 163.172.71.129 - Ubuntu 16.04 LTS amd64, 32G RAM, donated by 
Datastax
 
 
 



[jira] [Updated] (CASSANDRA-13317) Default logging we ship will incorrectly print "?:?" for "%F:%L" pattern due to includeCallerData being false by default no appender

2017-03-22 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-13317:
---
Fix Version/s: (was: 3.0.x)
   (was: 2.2.x)

> Default logging we ship will incorrectly print "?:?" for "%F:%L" pattern due 
> to includeCallerData being false by default no appender
> 
>
> Key: CASSANDRA-13317
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13317
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Michael Kjellman
>Assignee: Michael Kjellman
> Fix For: 3.11.x, 4.x
>
> Attachments: 13317_v1.diff
>
>
> We specify the logging pattern as "%-5level [%thread] %date{ISO8601} %F:%L - 
> %msg%n". 
> %F:%L is intended to print the Filename:Line Number. For performance reasons 
> logback (like log4j2) disables tracking line numbers as it requires the 
> entire stack to be materialized every time.
> This causes logs to look like:
> WARN  [main] 2017-03-09 13:27:11,272 ?:? - Protocol Version 5/v5-beta not 
> supported by java driver
> INFO  [main] 2017-03-09 13:27:11,813 ?:? - No commitlog files found; skipping 
> replay
> INFO  [main] 2017-03-09 13:27:12,477 ?:? - Initialized prepared statement 
> caches with 14 MB
> INFO  [main] 2017-03-09 13:27:12,727 ?:? - Initializing system.IndexInfo
> When instead you'd expect something like:
> INFO  [main] 2017-03-09 13:23:44,204 ColumnFamilyStore.java:419 - 
> Initializing system.available_ranges
> INFO  [main] 2017-03-09 13:23:44,210 ColumnFamilyStore.java:419 - 
> Initializing system.transferred_ranges
> INFO  [main] 2017-03-09 13:23:44,215 ColumnFamilyStore.java:419 - 
> Initializing system.views_builds_in_progress
> The fix is to add {{<includeCallerData>true</includeCallerData>}} to the 
> appender config to enable line numbers and stack tracing.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13317) Default logging we ship will incorrectly print "?:?" for "%F:%L" pattern due to includeCallerData being false by default no appender

2017-03-22 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-13317:
---
Fix Version/s: 4.x
   3.11.x
   3.0.x
   2.2.x

> Default logging we ship will incorrectly print "?:?" for "%F:%L" pattern due 
> to includeCallerData being false by default no appender
> 
>
> Key: CASSANDRA-13317
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13317
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Michael Kjellman
>Assignee: Michael Kjellman
> Fix For: 2.2.x, 3.0.x, 3.11.x, 4.x
>
> Attachments: 13317_v1.diff
>
>
> We specify the logging pattern as "%-5level [%thread] %date{ISO8601} %F:%L - 
> %msg%n". 
> %F:%L is intended to print the Filename:Line Number. For performance reasons 
> logback (like log4j2) disables tracking line numbers as it requires the 
> entire stack to be materialized every time.
> This causes logs to look like:
> WARN  [main] 2017-03-09 13:27:11,272 ?:? - Protocol Version 5/v5-beta not 
> supported by java driver
> INFO  [main] 2017-03-09 13:27:11,813 ?:? - No commitlog files found; skipping 
> replay
> INFO  [main] 2017-03-09 13:27:12,477 ?:? - Initialized prepared statement 
> caches with 14 MB
> INFO  [main] 2017-03-09 13:27:12,727 ?:? - Initializing system.IndexInfo
> When instead you'd expect something like:
> INFO  [main] 2017-03-09 13:23:44,204 ColumnFamilyStore.java:419 - 
> Initializing system.available_ranges
> INFO  [main] 2017-03-09 13:23:44,210 ColumnFamilyStore.java:419 - 
> Initializing system.transferred_ranges
> INFO  [main] 2017-03-09 13:23:44,215 ColumnFamilyStore.java:419 - 
> Initializing system.views_builds_in_progress
> The fix is to add {{<includeCallerData>true</includeCallerData>}} to the 
> appender config to enable line numbers and stack tracing.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (CASSANDRA-13368) Exception Stack not Printed as Intended in Error Logs

2017-03-22 Thread William R. Speirs (JIRA)
William R. Speirs created CASSANDRA-13368:
-

 Summary: Exception Stack not Printed as Intended in Error Logs
 Key: CASSANDRA-13368
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13368
 Project: Cassandra
  Issue Type: Bug
Reporter: William R. Speirs
Priority: Trivial


There are a number of instances where it appears the programmer intended to 
print a stack trace in an error message, but it is not actually being printed. 
For example, in {{BlacklistedDirectories.java:54}}:

{noformat}
catch (Exception e)
{
JVMStabilityInspector.inspectThrowable(e);
logger.error("error registering MBean {}", MBEAN_NAME, e);
//Allow the server to start even if the bean can't be registered
}
{noformat}

The logger will use the second argument for the braces, but will ignore the 
exception {{e}}. It would be helpful to have the stack traces of these 
exceptions printed. I propose adding a second line that prints the full stack 
trace: {{logger.error(e.getMessage(), e);}}

On the 2.1 branch, I found 8 instances of these types of messages:

{noformat}
db/BlacklistedDirectories.java:54:logger.error("error registering 
MBean {}", MBEAN_NAME, e);
io/sstable/SSTableReader.java:512:logger.error("Corrupt sstable {}; 
skipped", descriptor, e);
net/OutboundTcpConnection.java:228:logger.error("error 
processing a message intended for {}", poolReference.endPoint(), e);
net/OutboundTcpConnection.java:314:logger.error("error writing 
to {}", poolReference.endPoint(), e);
service/CassandraDaemon.java:231:logger.error("Exception in 
thread {}", t, e);
service/CassandraDaemon.java:562:logger.error("error 
registering MBean {}", MBEAN_NAME, e);
streaming/StreamSession.java:512:logger.error("[Stream #{}] 
Streaming error occurred", planId(), e);
transport/Server.java:442:logger.error("Problem retrieving RPC 
address for {}", endpoint, e);
{noformat}

And one where it'll print the {{toString()}} version of the exception:

{noformat}
db/Directories.java:689:logger.error("Could not calculate the size 
of {}. {}", input, e);
{noformat}

I'm happy to create a patch for each branch, just need a little guidance on how 
to do so. We're currently running 2.1 so I started there.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13317) Default logging we ship will incorrectly print "?:?" for "%F:%L" pattern due to includeCallerData being false by default no appender

2017-03-22 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15936852#comment-15936852
 ] 

Ariel Weisberg commented on CASSANDRA-13317:


{{includeCallerData}}, as far as I can tell, is only a property on 
{{ch.qos.logback.classic.AsyncAppender}}. The other appenders don't have that 
tunable, and I suspect they always fetch the caller data. I'll add this to 
logback-test.xml's async appender, which was missing it.
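A sketch of that change (the appender name and {{appender-ref}} are assumptions for illustration; the actual logback-test.xml may differ):

```xml
<appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
  <!-- Without this, AsyncAppender drops caller data and %F:%L renders as ?:? -->
  <includeCallerData>true</includeCallerData>
  <appender-ref ref="FILE" />
</appender>
```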

> Default logging we ship will incorrectly print "?:?" for "%F:%L" pattern due 
> to includeCallerData being false by default no appender
> 
>
> Key: CASSANDRA-13317
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13317
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Michael Kjellman
>Assignee: Michael Kjellman
> Attachments: 13317_v1.diff
>
>
> We specify the logging pattern as "%-5level [%thread] %date{ISO8601} %F:%L - 
> %msg%n". 
> %F:%L is intended to print the Filename:Line Number. For performance reasons 
> logback (like log4j2) disables tracking line numbers as it requires the 
> entire stack to be materialized every time.
> This causes logs to look like:
> WARN  [main] 2017-03-09 13:27:11,272 ?:? - Protocol Version 5/v5-beta not 
> supported by java driver
> INFO  [main] 2017-03-09 13:27:11,813 ?:? - No commitlog files found; skipping 
> replay
> INFO  [main] 2017-03-09 13:27:12,477 ?:? - Initialized prepared statement 
> caches with 14 MB
> INFO  [main] 2017-03-09 13:27:12,727 ?:? - Initializing system.IndexInfo
> When instead you'd expect something like:
> INFO  [main] 2017-03-09 13:23:44,204 ColumnFamilyStore.java:419 - 
> Initializing system.available_ranges
> INFO  [main] 2017-03-09 13:23:44,210 ColumnFamilyStore.java:419 - 
> Initializing system.transferred_ranges
> INFO  [main] 2017-03-09 13:23:44,215 ColumnFamilyStore.java:419 - 
> Initializing system.views_builds_in_progress
> The fix is to add "<includeCallerData>true</includeCallerData>" to the 
> appender config to enable the line number and stack tracing.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13364) Cqlsh COPY fails importing Map<String,List>, ParseError unhashable type list

2017-03-22 Thread Nicolae N (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolae N updated CASSANDRA-13364:
--
Description: 
When importing data with the _COPY_ the command into a column family that is a 
_map>_, I get a _unhashable type: 'list'_ error. Here 
is how to reproduce:

{code}
CREATE TABLE table1 (
col1 int PRIMARY KEY,
col2map map>
);

insert into table1 (col1, col2map) values (1, {'key': ['value1']});

cqlsh:ks> copy table1 to 'table1.csv';


table1.csv file content:
1,{'key': ['value1']}


cqlsh:ks> copy table1 from 'table1.csv';
...
Failed to import 1 rows: ParseError - Failed to parse {'key': ['value1']} : 
unhashable type: 'list',  given up without retries
Failed to process 1 rows; failed rows written to kv_table1.err
Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   2 rows/s
1 rows imported from 1 files in 0.420 seconds (0 skipped).
{code}

But it works fine for Map<String,Set>.

{code}
CREATE TABLE table2 (
col1 int PRIMARY KEY,
col2map map>
);

insert into table2 (col1, col2map) values (1, {'key': {'value1'}});

cqlsh:ks> copy table2 to 'table2.csv';


table2.csv file content:
1,{'key': {'value1'}}


cqlsh:ks> copy table2 from 'table2.csv';
Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   2 rows/s
1 rows imported from 1 files in 0.417 seconds (0 skipped).
{code}

The exception seems to arrive in _convert_map_ function in _ImportConversion_ 
class inside _copyutil.py_.

  was:
{code}
CREATE TABLE table1 (
col1 int PRIMARY KEY,
col2map map>
);

insert into table1 (col1, col2map) values (1, {'key': ['value1']});

cqlsh:ks> copy table1 to 'table1.csv';


table1.csv file content:
1,{'key': ['value1']}


cqlsh:ks> copy table1 from 'table1.csv';
...
Failed to import 1 rows: ParseError - Failed to parse {'key': ['value1']} : 
unhashable type: 'list',  given up without retries
Failed to process 1 rows; failed rows written to kv_table1.err
Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   2 rows/s
1 rows imported from 1 files in 0.420 seconds (0 skipped).
{code}

But it works fine for Map<String,Set>.

{code}
CREATE TABLE table2 (
col1 int PRIMARY KEY,
col2map map>
);

insert into table2 (col1, col2map) values (1, {'key': {'value1'}});

cqlsh:ks> copy table2 to 'table2.csv';


table2.csv file content:
1,{'key': {'value1'}}


cqlsh:ks> copy table2 from 'table2.csv';
Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   2 rows/s
1 rows imported from 1 files in 0.417 seconds (0 skipped).
{code}

The exception seems to arrive in _convert_map_ function in _ImportConversion_ 
class inside _copyutil.py_.


> Cqlsh COPY fails importing Map<String,List>, ParseError unhashable 
> type list
> 
>
> Key: CASSANDRA-13364
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13364
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Nicolae N
>Assignee: Stefania
>  Labels: cqlsh
> Fix For: 3.11.x
>
>
> When importing data with the _COPY_ the command into a column family that is 
> a _map>_, I get a _unhashable type: 'list'_ error. 
> Here is how to reproduce:
> {code}
> CREATE TABLE table1 (
> col1 int PRIMARY KEY,
> col2map map>
> );
> insert into table1 (col1, col2map) values (1, {'key': ['value1']});
> cqlsh:ks> copy table1 to 'table1.csv';
> table1.csv file content:
> 1,{'key': ['value1']}
> cqlsh:ks> copy table1 from 'table1.csv';
> ...
> Failed to import 1 rows: ParseError - Failed to parse {'key': ['value1']} : 
> unhashable type: 'list',  given up without retries
> Failed to process 1 rows; failed rows written to kv_table1.err
> Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   2 rows/s
> 1 rows imported from 1 files in 0.420 seconds (0 skipped).
> {code}
> But it works fine for Map<String,Set>.
> {code}
> CREATE TABLE table2 (
> col1 int PRIMARY KEY,
> col2map map>
> );
> insert into table2 (col1, col2map) values (1, {'key': {'value1'}});
> cqlsh:ks> copy table2 to 'table2.csv';
> table2.csv file content:
> 1,{'key': {'value1'}}
> cqlsh:ks> copy table2 from 'table2.csv';
> Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   2 rows/s
> 1 rows imported from 1 files in 0.417 seconds (0 skipped).
> {code}
> The exception seems to arrive in _convert_map_ function in _ImportConversion_ 
> class inside _copyutil.py_.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13364) Cqlsh COPY fails importing Map<String,List>, ParseError unhashable type list

2017-03-22 Thread Nicolae N (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolae N updated CASSANDRA-13364:
--
Description: 
When importing data with the _COPY_ command into a column family that has a 
_map>_ field, I get a _unhashable type: 'list'_ error. 
Here is how to reproduce:

{code}
CREATE TABLE table1 (
col1 int PRIMARY KEY,
col2map map>
);

insert into table1 (col1, col2map) values (1, {'key': ['value1']});

cqlsh:ks> copy table1 to 'table1.csv';


table1.csv file content:
1,{'key': ['value1']}


cqlsh:ks> copy table1 from 'table1.csv';
...
Failed to import 1 rows: ParseError - Failed to parse {'key': ['value1']} : 
unhashable type: 'list',  given up without retries
Failed to process 1 rows; failed rows written to kv_table1.err
Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   2 rows/s
1 rows imported from 1 files in 0.420 seconds (0 skipped).
{code}

But it works fine for Map<String,Set>.

{code}
CREATE TABLE table2 (
col1 int PRIMARY KEY,
col2map map>
);

insert into table2 (col1, col2map) values (1, {'key': {'value1'}});

cqlsh:ks> copy table2 to 'table2.csv';


table2.csv file content:
1,{'key': {'value1'}}


cqlsh:ks> copy table2 from 'table2.csv';
Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   2 rows/s
1 rows imported from 1 files in 0.417 seconds (0 skipped).
{code}

The exception seems to arrive in _convert_map_ function in _ImportConversion_ 
class inside _copyutil.py_.

  was:
When importing data with the _COPY_ the command into a column family that is a 
_map>_, I get a _unhashable type: 'list'_ error. Here 
is how to reproduce:

{code}
CREATE TABLE table1 (
col1 int PRIMARY KEY,
col2map map>
);

insert into table1 (col1, col2map) values (1, {'key': ['value1']});

cqlsh:ks> copy table1 to 'table1.csv';


table1.csv file content:
1,{'key': ['value1']}


cqlsh:ks> copy table1 from 'table1.csv';
...
Failed to import 1 rows: ParseError - Failed to parse {'key': ['value1']} : 
unhashable type: 'list',  given up without retries
Failed to process 1 rows; failed rows written to kv_table1.err
Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   2 rows/s
1 rows imported from 1 files in 0.420 seconds (0 skipped).
{code}

But it works fine for Map<String,Set>.

{code}
CREATE TABLE table2 (
col1 int PRIMARY KEY,
col2map map>
);

insert into table2 (col1, col2map) values (1, {'key': {'value1'}});

cqlsh:ks> copy table2 to 'table2.csv';


table2.csv file content:
1,{'key': {'value1'}}


cqlsh:ks> copy table2 from 'table2.csv';
Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   2 rows/s
1 rows imported from 1 files in 0.417 seconds (0 skipped).
{code}

The exception seems to arrive in _convert_map_ function in _ImportConversion_ 
class inside _copyutil.py_.


> Cqlsh COPY fails importing Map<String,List>, ParseError unhashable 
> type list
> 
>
> Key: CASSANDRA-13364
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13364
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Nicolae N
>Assignee: Stefania
>  Labels: cqlsh
> Fix For: 3.11.x
>
>
> When importing data with the _COPY_ command into a column family that has a 
> _map>_ field, I get a _unhashable type: 'list'_ 
> error. Here is how to reproduce:
> {code}
> CREATE TABLE table1 (
> col1 int PRIMARY KEY,
> col2map map>
> );
> insert into table1 (col1, col2map) values (1, {'key': ['value1']});
> cqlsh:ks> copy table1 to 'table1.csv';
> table1.csv file content:
> 1,{'key': ['value1']}
> cqlsh:ks> copy table1 from 'table1.csv';
> ...
> Failed to import 1 rows: ParseError - Failed to parse {'key': ['value1']} : 
> unhashable type: 'list',  given up without retries
> Failed to process 1 rows; failed rows written to kv_table1.err
> Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   2 rows/s
> 1 rows imported from 1 files in 0.420 seconds (0 skipped).
> {code}
> But it works fine for Map<String,Set>.
> {code}
> CREATE TABLE table2 (
> col1 int PRIMARY KEY,
> col2map map>
> );
> insert into table2 (col1, col2map) values (1, {'key': {'value1'}});
> cqlsh:ks> copy table2 to 'table2.csv';
> table2.csv file content:
> 1,{'key': {'value1'}}
> cqlsh:ks> copy table2 from 'table2.csv';
> Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   2 rows/s
> 1 rows imported from 1 files in 0.417 seconds (0 skipped).
> {code}
> The exception seems to arrive in _convert_map_ function in _ImportConversion_ 
> class inside _copyutil.py_.
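The error itself is plain Python behavior, not anything CQL-specific. The following is an illustrative sketch (not the actual _copyutil.py_ code): a map value parsed into a {{list}} is unhashable, so any conversion step that needs to hash the {{(key, value)}} pairs fails, while converting the value to a {{tuple}} works.

```python
# A map value parsed from CSV into a Python list, as in the failing COPY.
parsed = {'key': ['value1']}

try:
    # Hashing the (key, value) pairs fails because the value is a list.
    frozenset(parsed.items())
except TypeError as e:
    print(e)  # unhashable type: 'list'

# Converting list values to tuples makes the pairs hashable.
fixed = {k: tuple(v) for k, v in parsed.items()}
print(frozenset(fixed.items()))  # frozenset({('key', ('value1',))})
```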



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-10855) Use Caffeine (W-TinyLFU) for on-heap caches

2017-03-22 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15936547#comment-15936547
 ] 

Alex Petrov commented on CASSANDRA-10855:
-

Opened a follow-up ticket: [CASSANDRA-13367]

> Use Caffeine (W-TinyLFU) for on-heap caches
> ---
>
> Key: CASSANDRA-10855
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10855
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Ben Manes
>Assignee: Ben Manes
>  Labels: performance
> Fix For: 4.0
>
> Attachments: CASSANDRA-10855.patch, CASSANDRA-10855.patch
>
>
> Cassandra currently uses 
> [ConcurrentLinkedHashMap|https://code.google.com/p/concurrentlinkedhashmap] 
> for performance critical caches (key, counter) and Guava's cache for 
> non-critical (auth, metrics, security). All of these usages have been 
> replaced by [Caffeine|https://github.com/ben-manes/caffeine], written by the 
> author of the previously mentioned libraries.
> The primary incentive is to switch from LRU policy to W-TinyLFU, which 
> provides [near optimal|https://github.com/ben-manes/caffeine/wiki/Efficiency] 
> hit rates. It performs particularly well in database and search traces, is 
> scan resistant, and adds only a very small time/space overhead over LRU.
> Secondarily, Guava's caches never obtained similar 
> [performance|https://github.com/ben-manes/caffeine/wiki/Benchmarks] to CLHM 
> due to some optimizations not being ported over. This change results in 
> faster reads without creating garbage as a side-effect.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (CASSANDRA-13367) CASSANDRA-10855 breaks authentication: throws server error instead of bad credentials on cache load failure

2017-03-22 Thread Alex Petrov (JIRA)
Alex Petrov created CASSANDRA-13367:
---

 Summary: CASSANDRA-10855 breaks authentication: throws server 
error instead of bad credentials on cache load failure
 Key: CASSANDRA-13367
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13367
 Project: Cassandra
  Issue Type: Bug
Reporter: Alex Petrov
Assignee: Alex Petrov






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-12773) cassandra-stress error for one way SSL

2017-03-22 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15936539#comment-15936539
 ] 

Stefan Podkowinski commented on CASSANDRA-12773:


I've now scheduled the CI jobs for trunk as well. Here's the overview with the 
corrected links (thanks for pointing that out).

||2.2||3.0||3.11||trunk||
|[branch|https://github.com/spodkowinski/cassandra/tree/CASSANDRA-12773-2.2]|[branch|https://github.com/spodkowinski/cassandra/tree/CASSANDRA-12773-3.0]|[branch|https://github.com/spodkowinski/cassandra/tree/CASSANDRA-12773-3.11]|[branch|https://github.com/spodkowinski/cassandra/tree/CASSANDRA-12773-trunk]|
|[dtest|http://cassci.datastax.com/view/Dev/view/spodkowinski/job/spodkowinski-CASSANDRA-12773-2.2-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/spodkowinski/job/spodkowinski-CASSANDRA-12773-3.0-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/spodkowinski/job/spodkowinski-CASSANDRA-12773-3.11-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/spodkowinski/job/spodkowinski-CASSANDRA-12773-trunk-dtest/]|
|[testall|http://cassci.datastax.com/view/Dev/view/spodkowinski/job/spodkowinski-CASSANDRA-12773-2.2-testall/]|[testall|http://cassci.datastax.com/view/Dev/view/spodkowinski/job/spodkowinski-CASSANDRA-12773-3.0-testall/]|[testall|http://cassci.datastax.com/view/Dev/view/spodkowinski/job/spodkowinski-CASSANDRA-12773-3.11-testall/]|[testall|http://cassci.datastax.com/view/Dev/view/spodkowinski/job/spodkowinski-CASSANDRA-12773-trunk-testall/]|


> cassandra-stress error for one way SSL 
> ---
>
> Key: CASSANDRA-12773
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12773
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Jane Deng
>Assignee: Stefan Podkowinski
> Fix For: 2.2.x
>
> Attachments: 12773-2.2.patch
>
>
> CASSANDRA-9325 added keystore/truststore configuration into cassandra-stress. 
> However, for one way ssl (require_client_auth=false), there is no need to 
> pass keystore info into ssloptions. Cassandra-stress errored out:
> {noformat}
> java.lang.RuntimeException: java.io.IOException: Error creating the 
> initializing the SSL Context 
> at 
> org.apache.cassandra.stress.settings.StressSettings.getJavaDriverClient(StressSettings.java:200)
>  
> at 
> org.apache.cassandra.stress.settings.SettingsSchema.createKeySpacesNative(SettingsSchema.java:79)
>  
> at 
> org.apache.cassandra.stress.settings.SettingsSchema.createKeySpaces(SettingsSchema.java:69)
>  
> at 
> org.apache.cassandra.stress.settings.StressSettings.maybeCreateKeyspaces(StressSettings.java:207)
>  
> at org.apache.cassandra.stress.StressAction.run(StressAction.java:55) 
> at org.apache.cassandra.stress.Stress.main(Stress.java:117) 
> Caused by: java.io.IOException: Error creating the initializing the SSL 
> Context 
> at 
> org.apache.cassandra.security.SSLFactory.createSSLContext(SSLFactory.java:151)
>  
> at 
> org.apache.cassandra.stress.util.JavaDriverClient.connect(JavaDriverClient.java:128)
>  
> at 
> org.apache.cassandra.stress.settings.StressSettings.getJavaDriverClient(StressSettings.java:191)
>  
> ... 5 more 
> Caused by: java.io.IOException: Keystore was tampered with, or password was 
> incorrect 
> at sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:772) 
> at sun.security.provider.JavaKeyStore$JKS.engineLoad(JavaKeyStore.java:55) 
> at java.security.KeyStore.load(KeyStore.java:1445) 
> at 
> org.apache.cassandra.security.SSLFactory.createSSLContext(SSLFactory.java:129)
>  
> ... 7 more 
> Caused by: java.security.UnrecoverableKeyException: Password verification 
> failed 
> at sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:770) 
> ... 10 more
> {noformat}
> It's a bug from CASSANDRA-9325. When the keystore is absent, the keystore 
> path is set to the truststore path, but the password isn't handled.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-12944) Diagnostic Events

2017-03-22 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15936515#comment-15936515
 ] 

Stefan Podkowinski commented on CASSANDRA-12944:


My first version of the unit testing side of this ticket is now mostly 
completed. I've added 
[HintsServiceEventsExampleTest.java|https://github.com/spodkowinski/cassandra/blob/WIP-12944/test/unit/org/apache/cassandra/examples/HintsServiceEventsExampleTest.java]
 to illustrate how events can be collected during a test and afterwards 
inspected to check that they match what you'd expect to happen.

In short, diagnostic events will improve unit testing by a) providing test flow 
control between events in the form of CompletableFutures (see 
[PendingRangeCalculatorServiceTest.java|https://github.com/spodkowinski/cassandra/blob/WIP-12944/test/unit/org/apache/cassandra/gms/PendingRangeCalculatorServiceTest.java])
 and b) allowing you to validate state and behavior by inspecting generated 
events (see the mentioned 
[HintsServiceEventsExampleTest.java|https://github.com/spodkowinski/cassandra/blob/WIP-12944/test/unit/org/apache/cassandra/examples/HintsServiceEventsExampleTest.java]).
 Minor API changes will likely still happen, but that will be mostly it, unless 
there is more feedback.

At some point I'm also going to break up this ticket into easier-to-review 
subtasks for:
* core classes (pub/sub, base event classes, number of event implementations)
* unit testing classes (including examples and docs)
* native transport integration (maybe with a CLI prototype that would dump 
events to the console)



> Diagnostic Events
> -
>
> Key: CASSANDRA-12944
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12944
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability
>Reporter: Stefan Podkowinski
>
> I'd like to propose a new "diagnostic events" feature that would allow to 
> observe internal Cassandra events in unit tests and from external tools via 
> native transport. The motivation is to improve testing as well as operational 
> monitoring and troubleshooting beyond logs and metrics.
> Please find more details in the linked proposal and give it some thought :)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13341) Legacy deserializer can create empty range tombstones

2017-03-22 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15936514#comment-15936514
 ] 

Sylvain Lebresne commented on CASSANDRA-13341:
--

Thanks, pushed a new commit with fixes for those. Re-triggered CI to be extra 
sure even though it's mostly updates to comments. 

> Legacy deserializer can create empty range tombstones
> -
>
> Key: CASSANDRA-13341
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13341
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 3.0.x, 3.11.x
>
>
> Range tombstones in the 2.x file format are a bit far-westy, so you can 
> actually get sequences of range tombstones like {{\[1, 4\]@3 \[1, 10\]@5}}. 
> But the current legacy deserializer doesn't handle this correctly. On the 
> first range, it will generate an {{INCL_START(1)@3}} open marker, but upon 
> seeing the next tombstone it will decide to close the previously opened range 
> and re-open with deletion time 5, so will generate 
> {{EXCL_END_INCL_START(1)@3-5}}. That results in the first range being empty, 
> which breaks future assertions in the code.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13366) Possible AssertionError in UnfilteredRowIteratorWithLowerBound

2017-03-22 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-13366:
---
Since Version: 3.4

> Possible AssertionError in UnfilteredRowIteratorWithLowerBound
> --
>
> Key: CASSANDRA-13366
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13366
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 3.11.x
>
>
> In the code introduced by CASSANDRA-8180, we build a lower bound for a 
> partition (sometimes) based on the min clustering values of the stats file. 
> We can't do that if the sstable has a range tombstone marker, and the code 
> does check that this is the case, but unfortunately the check is done using 
> the stats {{minLocalDeletionTime}}, and that value isn't populated properly 
> in pre-3.0. This means that if you upgrade from 2.1/2.2 to 3.4+, you may end 
> up getting an exception like
> {noformat}
> WARN  [ReadStage-2] 2017-03-20 13:29:39,165  
> AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread 
> Thread[ReadStage-2,5,main]: {}
> java.lang.AssertionError: Lower bound [INCL_START_BOUND(Foo, 
> -9223372036854775808, -9223372036854775808) ]is bigger than first returned 
> value [Marker INCL_START_BOUND(Foo)@1490013810540999] for sstable 
> /var/lib/cassandra/data/system/size_estimates-618f817b005f3678b8a453f3930b8e86/system-size_estimates-ka-1-Data.db
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorWithLowerBound.computeNext(UnfilteredRowIteratorWithLowerBound.java:122)
> {noformat}
> and this persists until the sstable is upgraded.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13366) Possible AssertionError in UnfilteredRowIteratorWithLowerBound

2017-03-22 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-13366:
-
Reviewer: Stefania
  Status: Patch Available  (was: Open)

Attaching patch below:
| [13366-3.11|https://github.com/pcmanus/cassandra/commits/13366-3.11] | 
[utests|http://cassci.datastax.com/job/pcmanus-13366-3.11-testall] | 
[dtests|http://cassci.datastax.com/job/pcmanus-13366-3.11-dtest] |

Mostly, this just makes sure we don't use pre-3.0 sstables for building the 
lower bound, since it's unsafe. There was also an unhandled corner case with 
{{null}} in clusterings (which we only allow for compact tables for backward 
compatibility, and which should be pretty rare), so the patch handles that too. 
And I added a bunch of comments, as I felt they could be useful to future readers.

I'd like to try to write an upgrade dtest for this, but haven't taken the time 
yet. I'll update when that's the case, but the problem is simple enough that 
this probably shouldn't block review in the meantime ([~Stefania] assigning you 
since you wrote CASSANDRA-8180, but feel free to unassign if you don't have 
time).


> Possible AssertionError in UnfilteredRowIteratorWithLowerBound
> --
>
> Key: CASSANDRA-13366
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13366
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 3.11.x
>
>
> In the code introduced by CASSANDRA-8180, we build a lower bound for a 
> partition (sometimes) based on the min clustering values of the stats file. 
> We can't do that if the sstable has a range tombstone marker, and the code 
> does check that this is the case, but unfortunately the check is done using 
> the stats {{minLocalDeletionTime}}, and that value isn't populated properly 
> in pre-3.0. This means that if you upgrade from 2.1/2.2 to 3.4+, you may end 
> up getting an exception like
> {noformat}
> WARN  [ReadStage-2] 2017-03-20 13:29:39,165  
> AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread 
> Thread[ReadStage-2,5,main]: {}
> java.lang.AssertionError: Lower bound [INCL_START_BOUND(Foo, 
> -9223372036854775808, -9223372036854775808) ]is bigger than first returned 
> value [Marker INCL_START_BOUND(Foo)@1490013810540999] for sstable 
> /var/lib/cassandra/data/system/size_estimates-618f817b005f3678b8a453f3930b8e86/system-size_estimates-ka-1-Data.db
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorWithLowerBound.computeNext(UnfilteredRowIteratorWithLowerBound.java:122)
> {noformat}
> and this persists until the sstable is upgraded.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (CASSANDRA-13365) Nodes entering GC loop, does not recover

2017-03-22 Thread Mina Naguib (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15936377#comment-15936377
 ] 

Mina Naguib edited comment on CASSANDRA-13365 at 3/22/17 2:41 PM:
--

For the time being, we've mitigated by adding a script on each node that 
detects it's entered a GC loop (at least 5 logged "Full GC (Allocation 
Failure)" errors in 5 minutes) and force-restarts the node (we don't have the 
luxury of asking it for a proper drain and soft shutdown).

I'd appreciate any ideas for further investigation towards a proper fix.
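A sketch of such a watchdog check, under an assumed HotSpot GC log line format (the timestamp layout follows the GC log lines quoted in this ticket; the restart action itself is omitted):

```python
import re
from datetime import datetime, timedelta

# Matches a GC log line such as:
# 2017-03-21T11:23:02.957-0400: 54099.519: [Full GC (Allocation Failure) ...]
TS = re.compile(
    r'^(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}[+-]\d{4})'
    r'.*Full GC \(Allocation Failure\)')

def in_gc_loop(lines, threshold=5, window=timedelta(minutes=5)):
    """Return True if at least `threshold` Full GC (Allocation Failure)
    events fall inside any `window`-sized span of the log."""
    times = []
    for line in lines:
        m = TS.match(line)
        if m:
            times.append(datetime.strptime(m.group(1), '%Y-%m-%dT%H:%M:%S.%f%z'))
    # Slide a window of `threshold` consecutive events over the timestamps.
    for i in range(len(times) - threshold + 1):
        if times[i + threshold - 1] - times[i] <= window:
            return True
    return False
```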


was (Author: minaguib):
For the time being, we've mitigated by adding a script on each node that 
detects it's entered a GC loop (at least 5 logged "Full GC (Allocation 
Failure)" errors in 5 minutes) and force-restarts the node.

I'd appreciate any ideas for further investigation towards a proper fix.

> Nodes entering GC loop, does not recover
> 
>
> Key: CASSANDRA-13365
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13365
> Project: Cassandra
>  Issue Type: Bug
> Environment: 34-node cluster over 4 DCs
> Linux CentOS 7.2 x86
> Mix of 64GB/128GB RAM / node
> Mix of 32/40 hardware threads / node, Xeon ~2.4Ghz
> High read volume, low write volume, occasional sstable bulk loading
>Reporter: Mina Naguib
>
> Over the last week we've been observing two related problems affecting our 
> Cassandra cluster
> Problem 1: 1-few nodes per DC entering GC loop, not recovering
> Checking the heap usage stats, there's a sudden jump of 1-3GB. Some nodes 
> recover, but some don't and log this:
> {noformat}
> 2017-03-21T11:23:02.957-0400: 54099.519: [Full GC (Allocation Failure)  
> 13G->11G(14G), 29.4127307 secs]
> 2017-03-21T11:23:45.270-0400: 54141.833: [Full GC (Allocation Failure)  
> 13G->12G(14G), 28.1561881 secs]
> 2017-03-21T11:24:20.307-0400: 54176.869: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.7019501 secs]
> 2017-03-21T11:24:50.528-0400: 54207.090: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.1372267 secs]
> 2017-03-21T11:25:19.190-0400: 54235.752: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.0703975 secs]
> 2017-03-21T11:25:46.711-0400: 54263.273: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.3187768 secs]
> 2017-03-21T11:26:15.419-0400: 54291.981: [Full GC (Allocation Failure)  
> 13G->13G(14G), 26.9493405 secs]
> 2017-03-21T11:26:43.399-0400: 54319.961: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.5222085 secs]
> 2017-03-21T11:27:11.383-0400: 54347.945: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.1769581 secs]
> 2017-03-21T11:27:40.174-0400: 54376.737: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.4639031 secs]
> 2017-03-21T11:28:08.946-0400: 54405.508: [Full GC (Allocation Failure)  
> 13G->13G(14G), 30.3480523 secs]
> 2017-03-21T11:28:40.117-0400: 54436.680: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.8220513 secs]
> 2017-03-21T11:29:08.459-0400: 54465.022: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.4691271 secs]
> 2017-03-21T11:29:37.114-0400: 54493.676: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.0275733 secs]
> 2017-03-21T11:30:04.635-0400: 54521.198: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.1902627 secs]
> 2017-03-21T11:30:32.114-0400: 54548.676: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.8872850 secs]
> 2017-03-21T11:31:01.430-0400: 54577.993: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.1609706 secs]
> 2017-03-21T11:31:29.024-0400: 54605.587: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.3635138 secs]
> 2017-03-21T11:31:57.303-0400: 54633.865: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.4143510 secs]
> 2017-03-21T11:32:25.110-0400: 54661.672: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.8595986 secs]
> 2017-03-21T11:32:53.922-0400: 54690.485: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.5242543 secs]
> 2017-03-21T11:33:21.867-0400: 54718.429: [Full GC (Allocation Failure)  
> 13G->13G(14G), 30.8930130 secs]
> 2017-03-21T11:33:53.712-0400: 54750.275: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.6523013 secs]
> 2017-03-21T11:34:21.760-0400: 54778.322: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.3030198 secs]
> 2017-03-21T11:34:50.073-0400: 54806.635: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.1594154 secs]
> 2017-03-21T11:35:17.743-0400: 54834.306: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.3766949 secs]
> 2017-03-21T11:35:45.797-0400: 54862.360: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.5756770 secs]
> 2017-03-21T11:36:13.816-0400: 54890.378: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.5541813 secs]
> 2017-03-21T11:36:41.926-0400: 54918.488: [Full GC (Allocation Failure)  
> 13G->13G(14G), 33.7510103 secs]
> 2017-03-21T11:37:16.132-0400: 54952.695: [Full GC 

[jira] [Commented] (CASSANDRA-9996) Extra "keyspace updated" SchemaChange when creating/removing a table

2017-03-22 Thread Jorge Bay (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15936412#comment-15936412
 ] 

Jorge Bay commented on CASSANDRA-9996:
--

This looks like a duplicate of CASSANDRA-9646.

> Extra "keyspace updated" SchemaChange when creating/removing a table
> 
>
> Key: CASSANDRA-9996
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9996
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Olivier Michallat
>Priority: Minor
> Fix For: 2.2.x
>
>
> When a table gets created or removed, 2.2 sends an extra "keyspace updated" 
> schema change event in addition to the normal "table created" event. 2.1 only 
> sends table created.
> In {{LegacySchemaTables#mergeKeyspaces}}, the keyspace is added to 
> {{altered}} so it calls {{Schema#updateKeyspace}}, which triggers the event.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (CASSANDRA-13366) Possible AssertionError in UnfilteredRowIteratorWithLowerBound

2017-03-22 Thread Sylvain Lebresne (JIRA)
Sylvain Lebresne created CASSANDRA-13366:


 Summary: Possible AssertionError in 
UnfilteredRowIteratorWithLowerBound
 Key: CASSANDRA-13366
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13366
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 3.11.x


In the code introduced by CASSANDRA-8180, we (sometimes) build a lower bound for a 
partition based on the min clustering values from the stats file. We can't do 
that if the sstable has a range tombstone marker, and the code does check for 
this case, but unfortunately the check relies on the stats 
{{minLocalDeletionTime}}, and that value isn't populated properly by pre-3.0 
versions. This means that if you upgrade from 2.1/2.2 to 3.4+, you may end up 
getting an exception like
{noformat}
WARN  [ReadStage-2] 2017-03-20 13:29:39,165  
AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread 
Thread[ReadStage-2,5,main]: {}
java.lang.AssertionError: Lower bound [INCL_START_BOUND(Foo, 
-9223372036854775808, -9223372036854775808) ]is bigger than first returned 
value [Marker INCL_START_BOUND(Foo)@1490013810540999] for sstable 
/var/lib/cassandra/data/system/size_estimates-618f817b005f3678b8a453f3930b8e86/system-size_estimates-ka-1-Data.db
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorWithLowerBound.computeNext(UnfilteredRowIteratorWithLowerBound.java:122)
{noformat}
and this persists until the sstable is upgraded.
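The guard described above can be sketched in a few lines. The sketch below is illustrative Python under assumed names (`min_local_deletion_time`, `is_pre30_format`), not Cassandra's actual Java code: a lower bound built from the stats min clustering values is only safe when the metadata proves the sstable holds no range tombstone markers, and pre-3.0 sstables never populate the field that would prove it.

```python
# Illustrative sketch (assumed names, not Cassandra's actual code).
# In 3.0+ stats, minLocalDeletionTime == Integer.MAX_VALUE is the
# "no tombstones" sentinel; pre-3.0 ("ka"/"la") sstables never populate
# the field, so the check passes spuriously -- the bug described above.
INT_MAX = 2**31 - 1  # Java Integer.MAX_VALUE

def can_use_min_clustering_lower_bound(min_local_deletion_time, is_pre30_format):
    if is_pre30_format:
        # Pre-3.0 sstables don't write a reliable value: never trust it.
        return False
    # No local deletion time recorded anywhere => no tombstones => safe.
    return min_local_deletion_time == INT_MAX
```

The fix direction the ticket implies is the extra format check: distrust the sentinel entirely for pre-3.0 sstables instead of reading the unpopulated field.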



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13365) Nodes entering GC loop, does not recover

2017-03-22 Thread Mina Naguib (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15936377#comment-15936377
 ] 

Mina Naguib commented on CASSANDRA-13365:
-

For the time being, we've mitigated by adding a script on each node that 
detects that it has entered a GC loop (at least 5 logged "Full GC (Allocation 
Failure)" events in 5 minutes) and force-restarts the node.

I'd appreciate any ideas for further investigation towards a proper fix.
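The detection half of such a mitigation script can be sketched as below. This is a hedged illustration, not the poster's actual script: the GC-log line format is taken from the ticket, while the threshold, window, and how the restart is triggered are assumptions.

```python
# Sketch of the GC-loop detector described above: flag a node when at least
# `threshold` "Full GC (Allocation Failure)" lines fall within `window` of
# GC-log time. Threshold and window values mirror the ticket's description;
# everything else is an assumption.
from datetime import datetime, timedelta
import re

FULL_GC = re.compile(r"^(\S+): [\d.]+: \[Full GC \(Allocation Failure\)")

def in_gc_loop(log_lines, threshold=5, window=timedelta(minutes=5)):
    """Return True if `threshold` Full GCs occur inside any `window`."""
    stamps = []
    for line in log_lines:
        m = FULL_GC.match(line)
        if m:
            # e.g. "2017-03-21T11:23:02.957-0400" -> drop the UTC offset
            stamps.append(datetime.strptime(m.group(1)[:23],
                                            "%Y-%m-%dT%H:%M:%S.%f"))
    return any(stamps[i + threshold - 1] - stamps[i] <= window
               for i in range(len(stamps) - threshold + 1))
```

A cron job or systemd timer would feed this the tail of the GC log and restart the Cassandra service when it returns True.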

> Nodes entering GC loop, does not recover
> 
>
> Key: CASSANDRA-13365
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13365
> Project: Cassandra
>  Issue Type: Bug
> Environment: 34-node cluster over 4 DCs
> Linux CentOS 7.2 x86
> Mix of 64GB/128GB RAM / node
> Mix of 32/40 hardware threads / node, Xeon ~2.4Ghz
> High read volume, low write volume, occasional sstable bulk loading
>Reporter: Mina Naguib
>
> Over the last week we've been observing two related problems affecting our 
> Cassandra cluster
> Problem 1: 1-few nodes per DC entering GC loop, not recovering
> Checking the heap usage stats, there's a sudden jump of 1-3GB. Some nodes 
> recover, but some don't and log this:
> {noformat}
> 2017-03-21T11:23:02.957-0400: 54099.519: [Full GC (Allocation Failure)  
> 13G->11G(14G), 29.4127307 secs]
> 2017-03-21T11:23:45.270-0400: 54141.833: [Full GC (Allocation Failure)  
> 13G->12G(14G), 28.1561881 secs]
> 2017-03-21T11:24:20.307-0400: 54176.869: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.7019501 secs]
> 2017-03-21T11:24:50.528-0400: 54207.090: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.1372267 secs]
> 2017-03-21T11:25:19.190-0400: 54235.752: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.0703975 secs]
> 2017-03-21T11:25:46.711-0400: 54263.273: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.3187768 secs]
> 2017-03-21T11:26:15.419-0400: 54291.981: [Full GC (Allocation Failure)  
> 13G->13G(14G), 26.9493405 secs]
> 2017-03-21T11:26:43.399-0400: 54319.961: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.5222085 secs]
> 2017-03-21T11:27:11.383-0400: 54347.945: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.1769581 secs]
> 2017-03-21T11:27:40.174-0400: 54376.737: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.4639031 secs]
> 2017-03-21T11:28:08.946-0400: 54405.508: [Full GC (Allocation Failure)  
> 13G->13G(14G), 30.3480523 secs]
> 2017-03-21T11:28:40.117-0400: 54436.680: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.8220513 secs]
> 2017-03-21T11:29:08.459-0400: 54465.022: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.4691271 secs]
> 2017-03-21T11:29:37.114-0400: 54493.676: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.0275733 secs]
> 2017-03-21T11:30:04.635-0400: 54521.198: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.1902627 secs]
> 2017-03-21T11:30:32.114-0400: 54548.676: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.8872850 secs]
> 2017-03-21T11:31:01.430-0400: 54577.993: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.1609706 secs]
> 2017-03-21T11:31:29.024-0400: 54605.587: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.3635138 secs]
> 2017-03-21T11:31:57.303-0400: 54633.865: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.4143510 secs]
> 2017-03-21T11:32:25.110-0400: 54661.672: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.8595986 secs]
> 2017-03-21T11:32:53.922-0400: 54690.485: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.5242543 secs]
> 2017-03-21T11:33:21.867-0400: 54718.429: [Full GC (Allocation Failure)  
> 13G->13G(14G), 30.8930130 secs]
> 2017-03-21T11:33:53.712-0400: 54750.275: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.6523013 secs]
> 2017-03-21T11:34:21.760-0400: 54778.322: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.3030198 secs]
> 2017-03-21T11:34:50.073-0400: 54806.635: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.1594154 secs]
> 2017-03-21T11:35:17.743-0400: 54834.306: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.3766949 secs]
> 2017-03-21T11:35:45.797-0400: 54862.360: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.5756770 secs]
> 2017-03-21T11:36:13.816-0400: 54890.378: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.5541813 secs]
> 2017-03-21T11:36:41.926-0400: 54918.488: [Full GC (Allocation Failure)  
> 13G->13G(14G), 33.7510103 secs]
> 2017-03-21T11:37:16.132-0400: 54952.695: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.4856611 secs]
> 2017-03-21T11:37:44.454-0400: 54981.017: [Full GC (Allocation Failure)  
> 13G->13G(14G), 28.1269335 secs]
> 2017-03-21T11:38:12.774-0400: 55009.337: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.7830448 secs]
> 2017-03-21T11:38:40.840-0400: 55037.402: [Full GC (Allocation Failure)  
> 13G->13G(14G), 27.3527326 secs]
> 2017-03-21T11:39:08.610-0400: 55065.173: [Full GC 

[jira] [Updated] (CASSANDRA-13340) Bugs handling range tombstones in the sstable iterators

2017-03-22 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-13340:

Reviewer: Branimir Lambov

> Bugs handling range tombstones in the sstable iterators
> ---
>
> Key: CASSANDRA-13340
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13340
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
>Priority: Critical
> Fix For: 3.0.x, 3.11.x
>
>
> There are 2 bugs in the way sstable iterators handle range tombstones:
> # empty range tombstones can be returned due to a comparison that is strict 
> when it shouldn't be.
> # the sstable reversed iterator can actually return completely bogus results 
> when range tombstones span multiple index blocks.
> The 2 bugs are admittedly separate, but as they both impact the same area of 
> code and are both range tombstone related, I suggest just fixing both here 
> (unless someone really minds).
> Marking the ticket critical mostly for the 2nd bug: it can truly make us 
> return bad results on reverse queries.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (CASSANDRA-13365) Nodes entering GC loop, does not recover

2017-03-22 Thread Mina Naguib (JIRA)
Mina Naguib created CASSANDRA-13365:
---

 Summary: Nodes entering GC loop, does not recover
 Key: CASSANDRA-13365
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13365
 Project: Cassandra
  Issue Type: Bug
 Environment: 34-node cluster over 4 DCs
Linux CentOS 7.2 x86
Mix of 64GB/128GB RAM / node
Mix of 32/40 hardware threads / node, Xeon ~2.4Ghz
High read volume, low write volume, occasional sstable bulk loading
Reporter: Mina Naguib


Over the last week we've been observing two related problems affecting our 
Cassandra cluster

Problem 1: 1-few nodes per DC entering GC loop, not recovering

Checking the heap usage stats, there's a sudden jump of 1-3GB. Some nodes 
recover, but some don't and log this:
{noformat}
2017-03-21T11:23:02.957-0400: 54099.519: [Full GC (Allocation Failure)  
13G->11G(14G), 29.4127307 secs]
2017-03-21T11:23:45.270-0400: 54141.833: [Full GC (Allocation Failure)  
13G->12G(14G), 28.1561881 secs]
2017-03-21T11:24:20.307-0400: 54176.869: [Full GC (Allocation Failure)  
13G->13G(14G), 27.7019501 secs]
2017-03-21T11:24:50.528-0400: 54207.090: [Full GC (Allocation Failure)  
13G->13G(14G), 27.1372267 secs]
2017-03-21T11:25:19.190-0400: 54235.752: [Full GC (Allocation Failure)  
13G->13G(14G), 27.0703975 secs]
2017-03-21T11:25:46.711-0400: 54263.273: [Full GC (Allocation Failure)  
13G->13G(14G), 27.3187768 secs]
2017-03-21T11:26:15.419-0400: 54291.981: [Full GC (Allocation Failure)  
13G->13G(14G), 26.9493405 secs]
2017-03-21T11:26:43.399-0400: 54319.961: [Full GC (Allocation Failure)  
13G->13G(14G), 27.5222085 secs]
2017-03-21T11:27:11.383-0400: 54347.945: [Full GC (Allocation Failure)  
13G->13G(14G), 27.1769581 secs]
2017-03-21T11:27:40.174-0400: 54376.737: [Full GC (Allocation Failure)  
13G->13G(14G), 27.4639031 secs]
2017-03-21T11:28:08.946-0400: 54405.508: [Full GC (Allocation Failure)  
13G->13G(14G), 30.3480523 secs]
2017-03-21T11:28:40.117-0400: 54436.680: [Full GC (Allocation Failure)  
13G->13G(14G), 27.8220513 secs]
2017-03-21T11:29:08.459-0400: 54465.022: [Full GC (Allocation Failure)  
13G->13G(14G), 27.4691271 secs]
2017-03-21T11:29:37.114-0400: 54493.676: [Full GC (Allocation Failure)  
13G->13G(14G), 27.0275733 secs]
2017-03-21T11:30:04.635-0400: 54521.198: [Full GC (Allocation Failure)  
13G->13G(14G), 27.1902627 secs]
2017-03-21T11:30:32.114-0400: 54548.676: [Full GC (Allocation Failure)  
13G->13G(14G), 27.8872850 secs]
2017-03-21T11:31:01.430-0400: 54577.993: [Full GC (Allocation Failure)  
13G->13G(14G), 27.1609706 secs]
2017-03-21T11:31:29.024-0400: 54605.587: [Full GC (Allocation Failure)  
13G->13G(14G), 27.3635138 secs]
2017-03-21T11:31:57.303-0400: 54633.865: [Full GC (Allocation Failure)  
13G->13G(14G), 27.4143510 secs]
2017-03-21T11:32:25.110-0400: 54661.672: [Full GC (Allocation Failure)  
13G->13G(14G), 27.8595986 secs]
2017-03-21T11:32:53.922-0400: 54690.485: [Full GC (Allocation Failure)  
13G->13G(14G), 27.5242543 secs]
2017-03-21T11:33:21.867-0400: 54718.429: [Full GC (Allocation Failure)  
13G->13G(14G), 30.8930130 secs]
2017-03-21T11:33:53.712-0400: 54750.275: [Full GC (Allocation Failure)  
13G->13G(14G), 27.6523013 secs]
2017-03-21T11:34:21.760-0400: 54778.322: [Full GC (Allocation Failure)  
13G->13G(14G), 27.3030198 secs]
2017-03-21T11:34:50.073-0400: 54806.635: [Full GC (Allocation Failure)  
13G->13G(14G), 27.1594154 secs]
2017-03-21T11:35:17.743-0400: 54834.306: [Full GC (Allocation Failure)  
13G->13G(14G), 27.3766949 secs]
2017-03-21T11:35:45.797-0400: 54862.360: [Full GC (Allocation Failure)  
13G->13G(14G), 27.5756770 secs]
2017-03-21T11:36:13.816-0400: 54890.378: [Full GC (Allocation Failure)  
13G->13G(14G), 27.5541813 secs]
2017-03-21T11:36:41.926-0400: 54918.488: [Full GC (Allocation Failure)  
13G->13G(14G), 33.7510103 secs]
2017-03-21T11:37:16.132-0400: 54952.695: [Full GC (Allocation Failure)  
13G->13G(14G), 27.4856611 secs]
2017-03-21T11:37:44.454-0400: 54981.017: [Full GC (Allocation Failure)  
13G->13G(14G), 28.1269335 secs]
2017-03-21T11:38:12.774-0400: 55009.337: [Full GC (Allocation Failure)  
13G->13G(14G), 27.7830448 secs]
2017-03-21T11:38:40.840-0400: 55037.402: [Full GC (Allocation Failure)  
13G->13G(14G), 27.3527326 secs]
2017-03-21T11:39:08.610-0400: 55065.173: [Full GC (Allocation Failure)  
13G->13G(14G), 27.5828941 secs]
2017-03-21T11:39:36.833-0400: 55093.396: [Full GC (Allocation Failure)  
13G->13G(14G), 27.9303030 secs]
2017-03-21T11:40:05.265-0400: 55121.828: [Full GC (Allocation Failure)  
13G->13G(14G), 36.9902867 secs]
2017-03-21T11:40:42.400-0400: 55158.963: [Full GC (Allocation Failure)  
13G->13G(14G), 27.6835744 secs]
2017-03-21T11:41:10.529-0400: 55187.091: [Full GC (Allocation Failure)  
13G->13G(14G), 27.1899555 secs]
2017-03-21T11:41:38.018-0400: 55214.581: [Full GC (Allocation Failure)  
13G->13G(14G), 27.7309706 secs]
2017-03-21T11:42:06.062-0400: 55242.624: [Full GC 

[jira] [Assigned] (CASSANDRA-13364) Cqlsh COPY fails importing Map<String,List>, ParseError unhashable type list

2017-03-22 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania reassigned CASSANDRA-13364:


Assignee: Stefania

> Cqlsh COPY fails importing Map<String,List>, ParseError unhashable type list
> 
>
> Key: CASSANDRA-13364
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13364
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Nicolae N
>Assignee: Stefania
>  Labels: cqlsh
> Fix For: 3.11.x
>
>
> {code}
> CREATE TABLE table1 (
> col1 int PRIMARY KEY,
> col2map map<text, list<text>>
> );
> insert into table1 (col1, col2map) values (1, {'key': ['value1']});
> cqlsh:ks> copy table1 to 'table1.csv';
> table1.csv file content:
> 1,{'key': ['value1']}
> cqlsh:ks> copy table1 from 'table1.csv';
> ...
> Failed to import 1 rows: ParseError - Failed to parse {'key': ['value1']} : 
> unhashable type: 'list',  given up without retries
> Failed to process 1 rows; failed rows written to kv_table1.err
> Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   2 rows/s
> 1 rows imported from 1 files in 0.420 seconds (0 skipped).
> {code}
> But it works fine for Map<String,Set>.
> {code}
> CREATE TABLE table2 (
> col1 int PRIMARY KEY,
> col2map map<text, set<text>>
> );
> insert into table2 (col1, col2map) values (1, {'key': {'value1'}});
> cqlsh:ks> copy table2 to 'table2.csv';
> table2.csv file content:
> 1,{'key': {'value1'}}
> cqlsh:ks> copy table2 from 'table2.csv';
> Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   2 rows/s
> 1 rows imported from 1 files in 0.417 seconds (0 skipped).
> {code}
> The exception seems to arise in the _convert_map_ function in the 
> _ImportConversion_ class inside _copyutil.py_.
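The "unhashable type: 'list'" failure mode can be reproduced in a few lines of plain Python. This is a minimal illustration, not cqlsh's actual copyutil code, and the `freeze` helper is a hypothetical fix direction:

```python
# Minimal illustration of the bug above: building a hashable structure out
# of a parsed map whose values are lists raises TypeError, because lists are
# unhashable. Freezing lists into tuples first (a hypothetical fix, not
# copyutil's actual one) avoids it.
parsed = {'key': ['value1']}  # what cqlsh parses from "{'key': ['value1']}"

def freeze(value):
    # Recursively turn lists/sets into hashable tuples/frozensets.
    if isinstance(value, list):
        return tuple(freeze(v) for v in value)
    if isinstance(value, (set, frozenset)):
        return frozenset(freeze(v) for v in value)
    return value

try:
    frozenset(parsed.items())  # fails for map<text, list<text>> values
    failed = False
except TypeError:
    failed = True

frozen = frozenset((k, freeze(v)) for k, v in parsed.items())  # succeeds
```

The set-valued case from the ticket succeeds because cqlsh already converts those values to a hashable type before this point.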



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-12773) cassandra-stress error for one way SSL

2017-03-22 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-12773:
-
Status: Ready to Commit  (was: Patch Available)

> cassandra-stress error for one way SSL 
> ---
>
> Key: CASSANDRA-12773
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12773
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Jane Deng
>Assignee: Stefan Podkowinski
> Fix For: 2.2.x
>
> Attachments: 12773-2.2.patch
>
>
> CASSANDRA-9325 added keystore/truststore configuration into cassandra-stress. 
> However, for one-way SSL (require_client_auth=false), there is no need to 
> pass keystore info into ssloptions. Cassandra-stress errored out:
> {noformat}
> java.lang.RuntimeException: java.io.IOException: Error creating the 
> initializing the SSL Context 
> at 
> org.apache.cassandra.stress.settings.StressSettings.getJavaDriverClient(StressSettings.java:200)
>  
> at 
> org.apache.cassandra.stress.settings.SettingsSchema.createKeySpacesNative(SettingsSchema.java:79)
>  
> at 
> org.apache.cassandra.stress.settings.SettingsSchema.createKeySpaces(SettingsSchema.java:69)
>  
> at 
> org.apache.cassandra.stress.settings.StressSettings.maybeCreateKeyspaces(StressSettings.java:207)
>  
> at org.apache.cassandra.stress.StressAction.run(StressAction.java:55) 
> at org.apache.cassandra.stress.Stress.main(Stress.java:117) 
> Caused by: java.io.IOException: Error creating the initializing the SSL 
> Context 
> at 
> org.apache.cassandra.security.SSLFactory.createSSLContext(SSLFactory.java:151)
>  
> at 
> org.apache.cassandra.stress.util.JavaDriverClient.connect(JavaDriverClient.java:128)
>  
> at 
> org.apache.cassandra.stress.settings.StressSettings.getJavaDriverClient(StressSettings.java:191)
>  
> ... 5 more 
> Caused by: java.io.IOException: Keystore was tampered with, or password was 
> incorrect 
> at sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:772) 
> at sun.security.provider.JavaKeyStore$JKS.engineLoad(JavaKeyStore.java:55) 
> at java.security.KeyStore.load(KeyStore.java:1445) 
> at 
> org.apache.cassandra.security.SSLFactory.createSSLContext(SSLFactory.java:129)
>  
> ... 7 more 
> Caused by: java.security.UnrecoverableKeyException: Password verification 
> failed 
> at sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:770) 
> ... 10 more
> {noformat}
> It's a bug from CASSANDRA-9325. When the keystore is absent, the keystore 
> path is set to the truststore path, but the password isn't taken care of.
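The fix direction the description implies can be sketched as follows. This is an illustrative snippet with assumed names, not cassandra-stress's actual code: when no keystore is supplied for one-way SSL, fall back to the truststore for both the path and the password, rather than borrowing only the path.

```python
# Hedged sketch (assumed names, not cassandra-stress's actual code): if the
# keystore is absent, reuse the truststore's path AND password together,
# instead of only the path -- the mismatch described in the ticket.
def effective_keystore(keystore, keystore_password,
                       truststore, truststore_password):
    if keystore is None:
        return truststore, truststore_password
    return keystore, keystore_password
```

With only the path borrowed, the JVM tries to open the truststore with the (missing or wrong) keystore password, producing the "Password verification failed" in the stack trace above.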



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-12773) cassandra-stress error for one way SSL

2017-03-22 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15936140#comment-15936140
 ] 

Robert Stupp commented on CASSANDRA-12773:
--

LGTM - ship it (assuming CI for trunk looks good, too).

(The links to the branches are also wrong - it seems you've used the wrong 
ticket number in the links.)

> cassandra-stress error for one way SSL 
> ---
>
> Key: CASSANDRA-12773
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12773
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Jane Deng
>Assignee: Stefan Podkowinski
> Fix For: 2.2.x
>
> Attachments: 12773-2.2.patch
>
>
> CASSANDRA-9325 added keystore/truststore configuration into cassandra-stress. 
> However, for one-way SSL (require_client_auth=false), there is no need to 
> pass keystore info into ssloptions. Cassandra-stress errored out:
> {noformat}
> java.lang.RuntimeException: java.io.IOException: Error creating the 
> initializing the SSL Context 
> at 
> org.apache.cassandra.stress.settings.StressSettings.getJavaDriverClient(StressSettings.java:200)
>  
> at 
> org.apache.cassandra.stress.settings.SettingsSchema.createKeySpacesNative(SettingsSchema.java:79)
>  
> at 
> org.apache.cassandra.stress.settings.SettingsSchema.createKeySpaces(SettingsSchema.java:69)
>  
> at 
> org.apache.cassandra.stress.settings.StressSettings.maybeCreateKeyspaces(StressSettings.java:207)
>  
> at org.apache.cassandra.stress.StressAction.run(StressAction.java:55) 
> at org.apache.cassandra.stress.Stress.main(Stress.java:117) 
> Caused by: java.io.IOException: Error creating the initializing the SSL 
> Context 
> at 
> org.apache.cassandra.security.SSLFactory.createSSLContext(SSLFactory.java:151)
>  
> at 
> org.apache.cassandra.stress.util.JavaDriverClient.connect(JavaDriverClient.java:128)
>  
> at 
> org.apache.cassandra.stress.settings.StressSettings.getJavaDriverClient(StressSettings.java:191)
>  
> ... 5 more 
> Caused by: java.io.IOException: Keystore was tampered with, or password was 
> incorrect 
> at sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:772) 
> at sun.security.provider.JavaKeyStore$JKS.engineLoad(JavaKeyStore.java:55) 
> at java.security.KeyStore.load(KeyStore.java:1445) 
> at 
> org.apache.cassandra.security.SSLFactory.createSSLContext(SSLFactory.java:129)
>  
> ... 7 more 
> Caused by: java.security.UnrecoverableKeyException: Password verification 
> failed 
> at sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:770) 
> ... 10 more
> {noformat}
> It's a bug from CASSANDRA-9325. When the keystore is absent, the keystore 
> path is set to the truststore path, but the password isn't taken care of.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13364) Cqlsh COPY fails importing Map<String,List>, ParseError unhashable type list

2017-03-22 Thread Nicolae N (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolae N updated CASSANDRA-13364:
--
Description: 
{code}
CREATE TABLE table1 (
col1 int PRIMARY KEY,
col2map map<text, list<text>>
);

insert into table1 (col1, col2map) values (1, {'key': ['value1']});

cqlsh:ks> copy table1 to 'table1.csv';


table1.csv file content:
1,{'key': ['value1']}


cqlsh:ks> copy table1 from 'table1.csv';
...
Failed to import 1 rows: ParseError - Failed to parse {'key': ['value1']} : 
unhashable type: 'list',  given up without retries
Failed to process 1 rows; failed rows written to kv_table1.err
Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   2 rows/s
1 rows imported from 1 files in 0.420 seconds (0 skipped).
{code}

But it works fine for Map<String,Set>.

{code}
CREATE TABLE table2 (
col1 int PRIMARY KEY,
col2map map<text, set<text>>
);

insert into table2 (col1, col2map) values (1, {'key': {'value1'}});

cqlsh:ks> copy table2 to 'table2.csv';


table2.csv file content:
1,{'key': {'value1'}}


cqlsh:ks> copy table2 from 'table2.csv';
Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   2 rows/s
1 rows imported from 1 files in 0.417 seconds (0 skipped).
{code}

The exception seems to arise in the _convert_map_ function in the 
_ImportConversion_ class inside _copyutil.py_.

  was:
{code}
CREATE TABLE table1 (
col1 int PRIMARY KEY,
col2map map<text, list<text>>
);

insert into table1 (col1, col2map) values (1, {'key': ['value1']});

cqlsh:ks> copy table1 to 'table1.csv';
...

table1.csv file content:
1,{'key': ['value1']}

cqlsh:ks> copy table1 from 'table1.csv';
...
Failed to import 1 rows: ParseError - Failed to parse {'key': ['value1']} : 
unhashable type: 'list',  given up without retries
Failed to process 1 rows; failed rows written to kv_table1.err
Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   2 rows/s
1 rows imported from 1 files in 0.420 seconds (0 skipped).
{code}

But it works fine for Map<String,Set>.

{code}
CREATE TABLE table2 (
col1 int PRIMARY KEY,
col2map map<text, set<text>>
);

insert into table2 (col1, col2map) values (1, {'key': {'value1'}});

cqlsh:ks> copy table2 to 'table2.csv';

table2.csv file content:
1,{'key': {'value1'}}

cqlsh:ks> copy table2 from 'table2.csv';
Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   2 rows/s
1 rows imported from 1 files in 0.417 seconds (0 skipped).
{code}

The exception seems to arise in the _convert_map_ function in the 
_ImportConversion_ class inside _copyutil.py_.


> Cqlsh COPY fails importing Map<String,List>, ParseError unhashable type list
> 
>
> Key: CASSANDRA-13364
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13364
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Nicolae N
>  Labels: cqlsh
> Fix For: 3.11.x
>
>
> {code}
> CREATE TABLE table1 (
> col1 int PRIMARY KEY,
> col2map map<text, list<text>>
> );
> insert into table1 (col1, col2map) values (1, {'key': ['value1']});
> cqlsh:ks> copy table1 to 'table1.csv';
> table1.csv file content:
> 1,{'key': ['value1']}
> cqlsh:ks> copy table1 from 'table1.csv';
> ...
> Failed to import 1 rows: ParseError - Failed to parse {'key': ['value1']} : 
> unhashable type: 'list',  given up without retries
> Failed to process 1 rows; failed rows written to kv_table1.err
> Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   2 rows/s
> 1 rows imported from 1 files in 0.420 seconds (0 skipped).
> {code}
> But it works fine for Map<String,Set>.
> {code}
> CREATE TABLE table2 (
> col1 int PRIMARY KEY,
> col2map map<text, set<text>>
> );
> insert into table2 (col1, col2map) values (1, {'key': {'value1'}});
> cqlsh:ks> copy table2 to 'table2.csv';
> table2.csv file content:
> 1,{'key': {'value1'}}
> cqlsh:ks> copy table2 from 'table2.csv';
> Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   2 rows/s
> 1 rows imported from 1 files in 0.417 seconds (0 skipped).
> {code}
> The exception seems to arise in the _convert_map_ function in the 
> _ImportConversion_ class inside _copyutil.py_.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13364) Cqlsh COPY fails importing Map<String,List>, ParseError unhashable type list

2017-03-22 Thread Nicolae N (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolae N updated CASSANDRA-13364:
--
Description: 
{code}
CREATE TABLE table1 (
col1 int PRIMARY KEY,
col2map map<text, list<text>>
);

insert into table1 (col1, col2map) values (1, {'key': ['value1']});

cqlsh:ks> copy table1 to 'table1.csv';
...

table1.csv file content:
1,{'key': ['value1']}

cqlsh:ks> copy table1 from 'table1.csv';
...
Failed to import 1 rows: ParseError - Failed to parse {'key': ['value1']} : 
unhashable type: 'list',  given up without retries
Failed to process 1 rows; failed rows written to kv_table1.err
Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   2 rows/s
1 rows imported from 1 files in 0.420 seconds (0 skipped).
{code}

But it works fine for Map<String,Set>.

{code}
CREATE TABLE table2 (
col1 int PRIMARY KEY,
col2map map<text, set<text>>
);

insert into table2 (col1, col2map) values (1, {'key': {'value1'}});

cqlsh:ks> copy table2 to 'table2.csv';

table2.csv file content:
1,{'key': {'value1'}}

cqlsh:ks> copy table2 from 'table2.csv';
Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   2 rows/s
1 rows imported from 1 files in 0.417 seconds (0 skipped).
{code}

The exception seems to arise in the _convert_map_ function in the 
_ImportConversion_ class inside _copyutil.py_.

  was:
{code}
CREATE TABLE table1 (
col1 int PRIMARY KEY,
col2map map<text, list<text>>
);

insert into table1 (col1, col2map) values (1, {'key': ['value1']});

cqlsh:ks> copy table1 to 'table1.csv';
...
{code}

table1.csv file content:
{code}1,{'key': ['value1']}{code}
{code}
cqlsh:ks> copy table1 from 'table1.csv';
...
Failed to import 1 rows: ParseError - Failed to parse {'key': ['value1']} : 
unhashable type: 'list',  given up without retries
Failed to process 1 rows; failed rows written to kv_table1.err
Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   2 rows/s
1 rows imported from 1 files in 0.420 seconds (0 skipped).
{code}

But it works fine for Map<String,Set>.

{code}
CREATE TABLE table2 (
col1 int PRIMARY KEY,
col2map map<text, set<text>>
);

insert into table2 (col1, col2map) values (1, {'key': {'value1'}});

cqlsh:ks> copy table2 to 'table2.csv';

table2.csv file content:
1,{'key': {'value1'}}

cqlsh:ks> copy table2 from 'table2.csv';
Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   2 rows/s
1 rows imported from 1 files in 0.417 seconds (0 skipped).
{code}

The exception seems to arise in the _convert_map_ function in the 
_ImportConversion_ class inside _copyutil.py_.


> Cqlsh COPY fails importing Map<String,List>, ParseError unhashable type list
> 
>
> Key: CASSANDRA-13364
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13364
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Nicolae N
>  Labels: cqlsh
> Fix For: 3.11.x
>
>
> {code}
> CREATE TABLE table1 (
> col1 int PRIMARY KEY,
> col2map map<text, list<text>>
> );
> insert into table1 (col1, col2map) values (1, {'key': ['value1']});
> cqlsh:ks> copy table1 to 'table1.csv';
> ...
> table1.csv file content:
> 1,{'key': ['value1']}
> cqlsh:ks> copy table1 from 'table1.csv';
> ...
> Failed to import 1 rows: ParseError - Failed to parse {'key': ['value1']} : 
> unhashable type: 'list',  given up without retries
> Failed to process 1 rows; failed rows written to kv_table1.err
> Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   2 rows/s
> 1 rows imported from 1 files in 0.420 seconds (0 skipped).
> {code}
> But it works fine for Map<String,Set>.
> {code}
> CREATE TABLE table2 (
> col1 int PRIMARY KEY,
> col2map map<text, set<text>>
> );
> insert into table2 (col1, col2map) values (1, {'key': {'value1'}});
> cqlsh:ks> copy table2 to 'table2.csv';
> table2.csv file content:
> 1,{'key': {'value1'}}
> cqlsh:ks> copy table2 from 'table2.csv';
> Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   2 rows/s
> 1 rows imported from 1 files in 0.417 seconds (0 skipped).
> {code}
> The exception seems to arise in the _convert_map_ function in the 
> _ImportConversion_ class inside _copyutil.py_.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13364) Cqlsh COPY fails importing Map<String,List>, ParseError unhashable type list

2017-03-22 Thread Nicolae N (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolae N updated CASSANDRA-13364:
--
Description: 
{code}
CREATE TABLE table1 (
col1 int PRIMARY KEY,
col2map map<text, list<text>>
);

insert into table1 (col1, col2map) values (1, {'key': ['value1']});

cqlsh:ks> copy table1 to 'table1.csv';
...
{code}

table1.csv file content:
{code}1,{'key': ['value1']}{code}
{code}
cqlsh:ks> copy table1 from 'table1.csv';
...
Failed to import 1 rows: ParseError - Failed to parse {'key': ['value1']} : 
unhashable type: 'list',  given up without retries
Failed to process 1 rows; failed rows written to kv_table1.err
Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   2 rows/s
1 rows imported from 1 files in 0.420 seconds (0 skipped).
{code}

But it works fine for Map<String,Set>.

{code}
CREATE TABLE table2 (
col1 int PRIMARY KEY,
col2map map<text, set<text>>
);

insert into table2 (col1, col2map) values (1, {'key': {'value1'}});

cqlsh:ks> copy table2 to 'table2.csv';

table2.csv file content:
1,{'key': {'value1'}}

cqlsh:ks> copy table2 from 'table2.csv';
Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   2 rows/s
1 rows imported from 1 files in 0.417 seconds (0 skipped).
{code}

The exception seems to arise in the _convert_map_ function in the 
_ImportConversion_ class inside _copyutil.py_.

  was:
{code}
CREATE TABLE table1 (
col1 int PRIMARY KEY,
col2map map<text, list<text>>
);

insert into table1 (col1, col2map) values (1, {'key': ['value1']});

cqlsh:ks> copy table1 to 'table1.csv';
...
table1.csv file content:
1,{'key': ['value1']}

cqlsh:ks> copy table1 from 'table1.csv';
...
Failed to import 1 rows: ParseError - Failed to parse {'key': ['value1']} : 
unhashable type: 'list',  given up without retries
Failed to process 1 rows; failed rows written to kv_table1.err
Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   2 rows/s
1 rows imported from 1 files in 0.420 seconds (0 skipped).
{code}

But it works fine for Map<String,Set>.

{code}
CREATE TABLE table2 (
col1 int PRIMARY KEY,
col2map map<text, set<text>>
);

insert into table2 (col1, col2map) values (1, {'key': {'value1'}});

cqlsh:ks> copy table2 to 'table2.csv';

table2.csv file content:
1,{'key': {'value1'}}

cqlsh:ks> copy table2 from 'table2.csv';
Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   2 rows/s
1 rows imported from 1 files in 0.417 seconds (0 skipped).
{code}

The exception seems to arise in the _convert_map_ function in the 
_ImportConversion_ class inside _copyutil.py_.


> Cqlsh COPY fails importing Map<String,List>, ParseError unhashable type list
> 
>
> Key: CASSANDRA-13364
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13364
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Nicolae N
>  Labels: cqlsh
> Fix For: 3.11.x
>
>
> {code}
> CREATE TABLE table1 (
> col1 int PRIMARY KEY,
> col2map map<text, list<text>>
> );
> insert into table1 (col1, col2map) values (1, {'key': ['value1']});
> cqlsh:ks> copy table1 to 'table1.csv';
> ...
> {code}
> table1.csv file content:
> {code}1,{'key': ['value1']}{code}
> {code}
> cqlsh:ks> copy table1 from 'table1.csv';
> ...
> Failed to import 1 rows: ParseError - Failed to parse {'key': ['value1']} : 
> unhashable type: 'list',  given up without retries
> Failed to process 1 rows; failed rows written to kv_table1.err
> Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   2 rows/s
> 1 rows imported from 1 files in 0.420 seconds (0 skipped).
> {code}
> But it works fine for Map<String,Set>.
> {code}
> CREATE TABLE table2 (
> col1 int PRIMARY KEY,
> col2map map<text, frozen<set<text>>>
> );
> insert into table2 (col1, col2map) values (1, {'key': {'value1'}});
> cqlsh:ks> copy table2 to 'table2.csv';
> table2.csv file content:
> 1,{'key': {'value1'}}
> cqlsh:ks> copy table2 from 'table2.csv';
> Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   2 rows/s
> 1 rows imported from 1 files in 0.417 seconds (0 skipped).
> {code}
> The exception seems to originate in the _convert_map_ function of the _ImportConversion_ 
> class in _copyutil.py_.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

