[jira] [Comment Edited] (CASSANDRA-10190) Python 3 support for cqlsh

2020-01-30 Thread Dinesh Joshi (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-10190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17027226#comment-17027226
 ] 

Dinesh Joshi edited comment on CASSANDRA-10190 at 1/31/20 6:41 AM:
---

[~jaikiran] of course. You can clone my branch, build C*, and use the cqlsh that 
is packaged in there. Please note that this ticket is still WIP; I appreciate 
early feedback, but I might address issues in follow-on tickets.


was (Author: djoshi3):
[~jaikiran] of course. You can clone my branch, build C*, and use the cqlsh that 
is packaged in there.

> Python 3 support for cqlsh
> --
>
> Key: CASSANDRA-10190
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10190
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/Tools
>Reporter: Andrew Pennebaker
>Assignee: Patrick Bannister
>Priority: Normal
>  Labels: cqlsh
> Fix For: 4.0, 4.0-alpha
>
> Attachments: 
> 0001-Fix-issues-from-version-specific-logic-commit.patch, 
> 0001-Update-six-to-1.12.0.patch, 
> 0002-Simplify-version-specific-logic-by-using-six.moves-a.patch, 
> coverage_notes.txt
>
>
> Users who operate in a Python 3 environment may have trouble launching cqlsh. 
> Could we please update cqlsh's syntax to run in Python 3?
> As a workaround, users can set up pyenv and cd to a directory with a 
> .python-version containing "2.7". But it would be nice if cqlsh supported 
> modern Python versions out of the box.
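
As a purely illustrative sketch (not the actual cqlsh code), here is the kind of 
version guard a launcher that runs "out of the box" on both Python 2.7 and 
Python 3 might use; six.moves is assumed only because the attached patches 
reference six:

{code:python}
#!/usr/bin/env python
# Hypothetical sketch only -- not the real cqlsh entry point.
from __future__ import print_function
import sys

# Refuse to start on interpreters older than 2.7 instead of failing later
# with a SyntaxError deep inside the tool.
if sys.version_info < (2, 7):
    sys.exit("cqlsh requires Python 2.7 or Python 3.4+")

try:
    # six.moves papers over the stdlib renames between Python 2 and 3.
    from six.moves import configparser
except ImportError:
    import configparser  # Python 3 name; works without six on 3.x

def main():
    config = configparser.ConfigParser()  # e.g. for a cqlshrc-style file
    print("Config sections:", config.sections())
    print("Running under Python %d.%d" % sys.version_info[:2])

if __name__ == '__main__':
    main()
{code}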



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-10190) Python 3 support for cqlsh

2020-01-30 Thread Dinesh Joshi (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-10190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17027226#comment-17027226
 ] 

Dinesh Joshi commented on CASSANDRA-10190:
--

[~jaikiran] of course. You can clone my branch, build C*, and use the cqlsh that 
is packaged in there.

> Python 3 support for cqlsh
> --
>
> Key: CASSANDRA-10190
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10190
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/Tools
>Reporter: Andrew Pennebaker
>Assignee: Patrick Bannister
>Priority: Normal
>  Labels: cqlsh
> Fix For: 4.0, 4.0-alpha
>
> Attachments: 
> 0001-Fix-issues-from-version-specific-logic-commit.patch, 
> 0001-Update-six-to-1.12.0.patch, 
> 0002-Simplify-version-specific-logic-by-using-six.moves-a.patch, 
> coverage_notes.txt
>
>
> Users who operate in a Python 3 environment may have trouble launching cqlsh. 
> Could we please update cqlsh's syntax to run in Python 3?
> As a workaround, users can set up pyenv and cd to a directory with a 
> .python-version containing "2.7". But it would be nice if cqlsh supported 
> modern Python versions out of the box.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-10190) Python 3 support for cqlsh

2020-01-30 Thread Jaikiran Pai (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-10190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17027192#comment-17027192
 ] 

Jaikiran Pai commented on CASSANDRA-10190:
--

Hello Dinesh,

Would it be possible to do a beta release of the cqlsh Python package that adds 
support for Python 3, so that some of us can try it out in our projects and 
report back any issues?

 

> Python 3 support for cqlsh
> --
>
> Key: CASSANDRA-10190
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10190
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/Tools
>Reporter: Andrew Pennebaker
>Assignee: Patrick Bannister
>Priority: Normal
>  Labels: cqlsh
> Fix For: 4.0, 4.0-alpha
>
> Attachments: 
> 0001-Fix-issues-from-version-specific-logic-commit.patch, 
> 0001-Update-six-to-1.12.0.patch, 
> 0002-Simplify-version-specific-logic-by-using-six.moves-a.patch, 
> coverage_notes.txt
>
>
> Users who operate in a Python 3 environment may have trouble launching cqlsh. 
> Could we please update cqlsh's syntax to run in Python 3?
> As a workaround, users can set up pyenv and cd to a directory with a 
> .python-version containing "2.7". But it would be nice if cqlsh supported 
> modern Python versions out of the box.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15310) Fix flakey - testIdleDisconnect - org.apache.cassandra.transport.IdleDisconnectTest

2020-01-30 Thread Andrew Prudhomme (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17027117#comment-17027117
 ] 

Andrew Prudhomme commented on CASSANDRA-15310:
--

[~e.dimitrova] I am seeing the same thing with local testing. On alpha2 it fails 
almost every time; on trunk it hasn't failed in 100+ runs.

I don't think this is a problem anymore.

> Fix flakey - testIdleDisconnect - 
> org.apache.cassandra.transport.IdleDisconnectTest
> ---
>
> Key: CASSANDRA-15310
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15310
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: Joey Lynch
>Assignee: Andrew Prudhomme
>Priority: Normal
> Fix For: 4.0-alpha
>
>
> Example run: 
> [https://circleci.com/gh/jolynch/cassandra/561#tests/containers/86]
>  
> {noformat}
> Your job ran 4428 tests with 1 failure
> - testIdleDisconnect - 
> org.apache.cassandra.transport.IdleDisconnectTestjunit.framework.AssertionFailedError
>   at 
> org.apache.cassandra.transport.IdleDisconnectTest.testIdleDisconnect(IdleDisconnectTest.java:56)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-15526) Fix flakey test - org.apache.cassandra.index.sasi.SASIIndexTest testConcurrentMemtableReadsAndWrites

2020-01-30 Thread Ekaterina Dimitrova (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekaterina Dimitrova reassigned CASSANDRA-15526:
---

Assignee: (was: Ekaterina Dimitrova)

> Fix flakey test - org.apache.cassandra.index.sasi.SASIIndexTest 
> testConcurrentMemtableReadsAndWrites
> 
>
> Key: CASSANDRA-15526
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15526
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: David Capwell
>Priority: Normal
> Fix For: 4.0-alpha
>
>
> {code}
> junit.framework.AssertionFailedError
>   at 
> org.apache.cassandra.index.sasi.utils.RangeIterator.<init>(RangeIterator.java:46)
>   at 
> org.apache.cassandra.index.sasi.memory.KeyRangeIterator.<init>(KeyRangeIterator.java:42)
>   at 
> org.apache.cassandra.index.sasi.memory.TrieMemIndex$ConcurrentTrie.search(TrieMemIndex.java:150)
>   at 
> org.apache.cassandra.index.sasi.memory.TrieMemIndex.search(TrieMemIndex.java:102)
>   at 
> org.apache.cassandra.index.sasi.memory.IndexMemtable.search(IndexMemtable.java:70)
>   at 
> org.apache.cassandra.index.sasi.conf.ColumnIndex.searchMemtable(ColumnIndex.java:138)
>   at org.apache.cassandra.index.sasi.TermIterator.build(TermIterator.java:91)
>   at 
> org.apache.cassandra.index.sasi.plan.QueryController.getIndexes(QueryController.java:145)
>   at 
> org.apache.cassandra.index.sasi.plan.Operation$Builder.complete(Operation.java:434)
>   at org.apache.cassandra.index.sasi.plan.QueryPlan.analyze(QueryPlan.java:57)
>   at org.apache.cassandra.index.sasi.plan.QueryPlan.execute(QueryPlan.java:68)
>   at 
> org.apache.cassandra.index.sasi.SASIIndex.lambda$searcherFor$2(SASIIndex.java:301)
>   at org.apache.cassandra.db.ReadCommand.executeLocally(ReadCommand.java:455)
>   at 
> org.apache.cassandra.index.sasi.SASIIndexTest.getIndexed(SASIIndexTest.java:2576)
>   at 
> org.apache.cassandra.index.sasi.SASIIndexTest.getPaged(SASIIndexTest.java:2537)
>   at 
> org.apache.cassandra.index.sasi.SASIIndexTest.testConcurrentMemtableReadsAndWrites(SASIIndexTest.java:1108)
>   at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-15526) Fix flakey test - org.apache.cassandra.index.sasi.SASIIndexTest testConcurrentMemtableReadsAndWrites

2020-01-30 Thread Ekaterina Dimitrova (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekaterina Dimitrova reassigned CASSANDRA-15526:
---

Assignee: Ekaterina Dimitrova

> Fix flakey test - org.apache.cassandra.index.sasi.SASIIndexTest 
> testConcurrentMemtableReadsAndWrites
> 
>
> Key: CASSANDRA-15526
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15526
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: David Capwell
>Assignee: Ekaterina Dimitrova
>Priority: Normal
> Fix For: 4.0-alpha
>
>
> {code}
> junit.framework.AssertionFailedError
>   at 
> org.apache.cassandra.index.sasi.utils.RangeIterator.<init>(RangeIterator.java:46)
>   at 
> org.apache.cassandra.index.sasi.memory.KeyRangeIterator.<init>(KeyRangeIterator.java:42)
>   at 
> org.apache.cassandra.index.sasi.memory.TrieMemIndex$ConcurrentTrie.search(TrieMemIndex.java:150)
>   at 
> org.apache.cassandra.index.sasi.memory.TrieMemIndex.search(TrieMemIndex.java:102)
>   at 
> org.apache.cassandra.index.sasi.memory.IndexMemtable.search(IndexMemtable.java:70)
>   at 
> org.apache.cassandra.index.sasi.conf.ColumnIndex.searchMemtable(ColumnIndex.java:138)
>   at org.apache.cassandra.index.sasi.TermIterator.build(TermIterator.java:91)
>   at 
> org.apache.cassandra.index.sasi.plan.QueryController.getIndexes(QueryController.java:145)
>   at 
> org.apache.cassandra.index.sasi.plan.Operation$Builder.complete(Operation.java:434)
>   at org.apache.cassandra.index.sasi.plan.QueryPlan.analyze(QueryPlan.java:57)
>   at org.apache.cassandra.index.sasi.plan.QueryPlan.execute(QueryPlan.java:68)
>   at 
> org.apache.cassandra.index.sasi.SASIIndex.lambda$searcherFor$2(SASIIndex.java:301)
>   at org.apache.cassandra.db.ReadCommand.executeLocally(ReadCommand.java:455)
>   at 
> org.apache.cassandra.index.sasi.SASIIndexTest.getIndexed(SASIIndexTest.java:2576)
>   at 
> org.apache.cassandra.index.sasi.SASIIndexTest.getPaged(SASIIndexTest.java:2537)
>   at 
> org.apache.cassandra.index.sasi.SASIIndexTest.testConcurrentMemtableReadsAndWrites(SASIIndexTest.java:1108)
>   at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-15526) Fix flakey test - org.apache.cassandra.index.sasi.SASIIndexTest testConcurrentMemtableReadsAndWrites

2020-01-30 Thread Ekaterina Dimitrova (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekaterina Dimitrova reassigned CASSANDRA-15526:
---

Assignee: (was: Ekaterina Dimitrova)

> Fix flakey test - org.apache.cassandra.index.sasi.SASIIndexTest 
> testConcurrentMemtableReadsAndWrites
> 
>
> Key: CASSANDRA-15526
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15526
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: David Capwell
>Priority: Normal
> Fix For: 4.0-alpha
>
>
> {code}
> junit.framework.AssertionFailedError
>   at 
> org.apache.cassandra.index.sasi.utils.RangeIterator.<init>(RangeIterator.java:46)
>   at 
> org.apache.cassandra.index.sasi.memory.KeyRangeIterator.<init>(KeyRangeIterator.java:42)
>   at 
> org.apache.cassandra.index.sasi.memory.TrieMemIndex$ConcurrentTrie.search(TrieMemIndex.java:150)
>   at 
> org.apache.cassandra.index.sasi.memory.TrieMemIndex.search(TrieMemIndex.java:102)
>   at 
> org.apache.cassandra.index.sasi.memory.IndexMemtable.search(IndexMemtable.java:70)
>   at 
> org.apache.cassandra.index.sasi.conf.ColumnIndex.searchMemtable(ColumnIndex.java:138)
>   at org.apache.cassandra.index.sasi.TermIterator.build(TermIterator.java:91)
>   at 
> org.apache.cassandra.index.sasi.plan.QueryController.getIndexes(QueryController.java:145)
>   at 
> org.apache.cassandra.index.sasi.plan.Operation$Builder.complete(Operation.java:434)
>   at org.apache.cassandra.index.sasi.plan.QueryPlan.analyze(QueryPlan.java:57)
>   at org.apache.cassandra.index.sasi.plan.QueryPlan.execute(QueryPlan.java:68)
>   at 
> org.apache.cassandra.index.sasi.SASIIndex.lambda$searcherFor$2(SASIIndex.java:301)
>   at org.apache.cassandra.db.ReadCommand.executeLocally(ReadCommand.java:455)
>   at 
> org.apache.cassandra.index.sasi.SASIIndexTest.getIndexed(SASIIndexTest.java:2576)
>   at 
> org.apache.cassandra.index.sasi.SASIIndexTest.getPaged(SASIIndexTest.java:2537)
>   at 
> org.apache.cassandra.index.sasi.SASIIndexTest.testConcurrentMemtableReadsAndWrites(SASIIndexTest.java:1108)
>   at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-15526) Fix flakey test - org.apache.cassandra.index.sasi.SASIIndexTest testConcurrentMemtableReadsAndWrites

2020-01-30 Thread Ekaterina Dimitrova (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekaterina Dimitrova reassigned CASSANDRA-15526:
---

Assignee: Ekaterina Dimitrova

> Fix flakey test - org.apache.cassandra.index.sasi.SASIIndexTest 
> testConcurrentMemtableReadsAndWrites
> 
>
> Key: CASSANDRA-15526
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15526
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: David Capwell
>Assignee: Ekaterina Dimitrova
>Priority: Normal
> Fix For: 4.0-alpha
>
>
> {code}
> junit.framework.AssertionFailedError
>   at 
> org.apache.cassandra.index.sasi.utils.RangeIterator.<init>(RangeIterator.java:46)
>   at 
> org.apache.cassandra.index.sasi.memory.KeyRangeIterator.<init>(KeyRangeIterator.java:42)
>   at 
> org.apache.cassandra.index.sasi.memory.TrieMemIndex$ConcurrentTrie.search(TrieMemIndex.java:150)
>   at 
> org.apache.cassandra.index.sasi.memory.TrieMemIndex.search(TrieMemIndex.java:102)
>   at 
> org.apache.cassandra.index.sasi.memory.IndexMemtable.search(IndexMemtable.java:70)
>   at 
> org.apache.cassandra.index.sasi.conf.ColumnIndex.searchMemtable(ColumnIndex.java:138)
>   at org.apache.cassandra.index.sasi.TermIterator.build(TermIterator.java:91)
>   at 
> org.apache.cassandra.index.sasi.plan.QueryController.getIndexes(QueryController.java:145)
>   at 
> org.apache.cassandra.index.sasi.plan.Operation$Builder.complete(Operation.java:434)
>   at org.apache.cassandra.index.sasi.plan.QueryPlan.analyze(QueryPlan.java:57)
>   at org.apache.cassandra.index.sasi.plan.QueryPlan.execute(QueryPlan.java:68)
>   at 
> org.apache.cassandra.index.sasi.SASIIndex.lambda$searcherFor$2(SASIIndex.java:301)
>   at org.apache.cassandra.db.ReadCommand.executeLocally(ReadCommand.java:455)
>   at 
> org.apache.cassandra.index.sasi.SASIIndexTest.getIndexed(SASIIndexTest.java:2576)
>   at 
> org.apache.cassandra.index.sasi.SASIIndexTest.getPaged(SASIIndexTest.java:2537)
>   at 
> org.apache.cassandra.index.sasi.SASIIndexTest.testConcurrentMemtableReadsAndWrites(SASIIndexTest.java:1108)
>   at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15367) Memtable memory allocations may deadlock

2020-01-30 Thread Benedict Elliott Smith (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17027080#comment-17027080
 ] 

Benedict Elliott Smith commented on CASSANDRA-15367:


bq.  but I’m not sure if it’s worth addressing

I don't think any deadlock is acceptable to ignore.  Hmm.  If we don't go with 
one of the other approaches I've suggested, I'll have to find some time in a 
week to see if there's a variant of this suggested approach that works in this 
respect.

bq. 

I think this is something I have proposed before, but it's not trivial.  I had 
planned to implement something like this as part of my work addressing this 
problem, but decided not to given the complexity.  The idea would be to 
introduce a linked list of deferred updates and merge them on future reads or 
writes; however, ensuring that everyone sees a consistent view with this 
approach, while minimising duplicated work and guaranteeing progress, is less 
trivial than I imagined when I proposed it a while ago.
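
To make the shape of that idea concrete, here is a deliberately simplified 
Python sketch. This is not Cassandra code; it keeps a per-partition lock in 
exactly the place where the hard part would be doing without one:

{code:python}
import threading

class DeferredUpdate:
    """One deferred update, linked to the previous (older) one."""
    def __init__(self, columns, prev):
        self.columns = columns   # e.g. {column_name: value}
        self.prev = prev

class Partition:
    def __init__(self):
        self._merged = {}        # state with all deferred updates applied
        self._pending = None     # newest node of the deferred-update chain
        self._lock = threading.Lock()

    def write(self, columns):
        # A writer never merges; it only prepends to the chain and returns.
        with self._lock:
            self._pending = DeferredUpdate(columns, self._pending)

    def read(self):
        # A reader folds the chain, oldest first, into the merged state.
        with self._lock:
            updates = []
            node = self._pending
            while node is not None:
                updates.append(node.columns)
                node = node.prev
            for columns in reversed(updates):
                self._merged.update(columns)
            self._pending = None
            return dict(self._merged)
{code}

The sketch is only trivially consistent because everything happens under the 
lock; the non-trivial part described above is providing the same guarantees 
without it, while avoiding duplicated merge work and guaranteeing progress.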

bq. About removing the lock, I’m sure 15511 will help with contention, and we 
should commit it, however I think there will still be pathological cases where 
faster updates won’t be enough

We can benchmark this specific scenario, but all we really care about is whether 
the aggregate behaviour for all 21 operations is good enough to warrant removal 
of the lock, and the commensurate reduction in complexity when reasoning about 
the system (which has been _amply_ demonstrated by this ticket).  IMO, the 
performance numbers from 15511 more than cross this threshold, but we can 
certainly explore further verification work to be certain.

> Memtable memory allocations may deadlock
> 
>
> Key: CASSANDRA-15367
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15367
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Commit Log, Local/Memtable
>Reporter: Benedict Elliott Smith
>Assignee: Benedict Elliott Smith
>Priority: Normal
> Fix For: 4.0, 2.2.x, 3.0.x, 3.11.x
>
>
> * Under heavy contention, we guard modifications to a partition with a mutex, 
> for the lifetime of the memtable.
> * Memtables block for the completion of all {{OpOrder.Group}} started before 
> their flush began
> * Memtables permit operations from this cohort to fall-through to the 
> following Memtable, in order to guarantee a precise commitLogUpperBound
> * Memtable memory limits may be lifted for operations in the first cohort, 
> since they block flush (and hence block future memory allocation)
> With very unfortunate scheduling
> * A contended partition may rapidly escalate to a mutex
> * The system may reach memory limits that prevent allocations for the new 
> Memtable’s cohort (C2) 
> * An operation from C2 may hold the mutex when this occurs
> * Operations from a prior Memtable’s cohort (C1), for a contended partition, 
> may fall-through to the next Memtable
> * The operations from C1 may execute after the above is encountered by those 
> from C2



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15310) Fix flakey - testIdleDisconnect - org.apache.cassandra.transport.IdleDisconnectTest

2020-01-30 Thread Ekaterina Dimitrova (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17027035#comment-17027035
 ] 

Ekaterina Dimitrova commented on CASSANDRA-15310:
-

Thanks for the fast response [~aprudhomme].

Looking at the OSS Jenkins, I see it marked as stable here (if I am looking at 
the right place):

https://builds.apache.org/view/A-D/view/Cassandra%20trunk/job/Cassandra-trunk-test/1047/testReport/org.apache.cassandra.transport/IdleDisconnectTest/

> Fix flakey - testIdleDisconnect - 
> org.apache.cassandra.transport.IdleDisconnectTest
> ---
>
> Key: CASSANDRA-15310
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15310
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: Joey Lynch
>Assignee: Andrew Prudhomme
>Priority: Normal
> Fix For: 4.0-alpha
>
>
> Example run: 
> [https://circleci.com/gh/jolynch/cassandra/561#tests/containers/86]
>  
> {noformat}
> Your job ran 4428 tests with 1 failure
> - testIdleDisconnect - 
> org.apache.cassandra.transport.IdleDisconnectTestjunit.framework.AssertionFailedError
>   at 
> org.apache.cassandra.transport.IdleDisconnectTest.testIdleDisconnect(IdleDisconnectTest.java:56)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15310) Fix flakey - testIdleDisconnect - org.apache.cassandra.transport.IdleDisconnectTest

2020-01-30 Thread Andrew Prudhomme (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17027027#comment-17027027
 ] 

Andrew Prudhomme commented on CASSANDRA-15310:
--

Thanks [~e.dimitrova]. This test is flaky in alpha1/2, but seems fine on the 
current trunk. Doing some digging, it looks like it was fixed as part of a 
different commit: 
[https://github.com/apache/cassandra/commit/3a8300e0b86c4acfb7b7702197d36cc39ebe94bc#diff-0da71200299fb3393ea8f95eae9124ea]

If no one is seeing this anymore, the ticket can probably be closed.

> Fix flakey - testIdleDisconnect - 
> org.apache.cassandra.transport.IdleDisconnectTest
> ---
>
> Key: CASSANDRA-15310
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15310
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: Joey Lynch
>Assignee: Andrew Prudhomme
>Priority: Normal
> Fix For: 4.0-alpha
>
>
> Example run: 
> [https://circleci.com/gh/jolynch/cassandra/561#tests/containers/86]
>  
> {noformat}
> Your job ran 4428 tests with 1 failure
> - testIdleDisconnect - 
> org.apache.cassandra.transport.IdleDisconnectTestjunit.framework.AssertionFailedError
>   at 
> org.apache.cassandra.transport.IdleDisconnectTest.testIdleDisconnect(IdleDisconnectTest.java:56)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15537) 4.0 quality testing: Local Read/Write Path: Upgrade and Diff Test

2020-01-30 Thread Josh McKenzie (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh McKenzie updated CASSANDRA-15537:
--
Summary: 4.0 quality testing: Local Read/Write Path: Upgrade and Diff Test  
(was: Local Read/Write Path: Upgrade and Diff Test)

> 4.0 quality testing: Local Read/Write Path: Upgrade and Diff Test
> -
>
> Key: CASSANDRA-15537
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15537
> Project: Cassandra
>  Issue Type: Task
>  Components: Test/dtest
>Reporter: Josh McKenzie
>Assignee: Yifan Cai
>Priority: Normal
> Fix For: 4.0
>
>
> {color:#de350b}NOTE: testing out epic flow for the cwiki testing stuff; no 
> need to do anything on this atm (1/3/2020){color}
>  
> Execution of upgrade and diff tests via cassandra-diff has proven to be one 
> of the most effective approaches toward identifying issues with the local 
> read/write path. These include instances of data loss, data corruption, data 
> resurrection, incorrect responses to queries, incomplete responses, and 
> others. Upgrade and diff tests can be executed concurrently with fault 
> injection (such as host or network failure), as well as during mixed-version 
> scenarios (such as upgrading half of the instances in a cluster and running 
> upgradesstables on only half of the upgraded instances).
> Upgrade and diff tests are expected to continue through the release cycle, 
> and are a great way for contributors to gain confidence in the correctness of 
> the database under their own workloads.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15538) 4.0 quality testing: Local Read/Write Path: Other Areas

2020-01-30 Thread Josh McKenzie (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh McKenzie updated CASSANDRA-15538:
--
Fix Version/s: 4.0

> 4.0 quality testing: Local Read/Write Path: Other Areas
> ---
>
> Key: CASSANDRA-15538
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15538
> Project: Cassandra
>  Issue Type: Task
>  Components: Test/dtest
>Reporter: Josh McKenzie
>Assignee: Aleksey Yeschenko
>Priority: Normal
> Fix For: 4.0
>
>
> {color:#de350b}NOTE: just testing out Jira as replacement for cwiki. No need 
> to take action on this atm (1/30/2020){color}
>  
> Testing in this area refers to the local read/write path (StorageProxy, 
> ColumnFamilyStore, Memtable, SSTable reading/writing, etc). We are still 
> finding numerous bugs and issues with the 3.0 storage engine rewrite 
> (CASSANDRA-8099). For 4.0 we want to ensure that we thoroughly cover the 
> local read/write path with techniques such as property-based testing, fuzzing 
> ([example|http://cassandra.apache.org/blog/2018/10/17/finding_bugs_with_property_based_testing.html]),
>  and a source audit.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15537) Local Read/Write Path: Upgrade and Diff Test

2020-01-30 Thread Josh McKenzie (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh McKenzie updated CASSANDRA-15537:
--
Fix Version/s: 4.0

> Local Read/Write Path: Upgrade and Diff Test
> 
>
> Key: CASSANDRA-15537
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15537
> Project: Cassandra
>  Issue Type: Task
>  Components: Test/dtest
>Reporter: Josh McKenzie
>Assignee: Yifan Cai
>Priority: Normal
> Fix For: 4.0
>
>
> {color:#de350b}NOTE: testing out epic flow for the cwiki testing stuff; no 
> need to do anything on this atm (1/3/2020){color}
>  
> Execution of upgrade and diff tests via cassandra-diff has proven to be one 
> of the most effective approaches toward identifying issues with the local 
> read/write path. These include instances of data loss, data corruption, data 
> resurrection, incorrect responses to queries, incomplete responses, and 
> others. Upgrade and diff tests can be executed concurrently with fault 
> injection (such as host or network failure), as well as during mixed-version 
> scenarios (such as upgrading half of the instances in a cluster and running 
> upgradesstables on only half of the upgraded instances).
> Upgrade and diff tests are expected to continue through the release cycle, 
> and are a great way for contributors to gain confidence in the correctness of 
> the database under their own workloads.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15537) Local Read/Write Path: Upgrade and Diff Test

2020-01-30 Thread Josh McKenzie (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh McKenzie updated CASSANDRA-15537:
--
Issue Type: Improvement  (was: Task)

> Local Read/Write Path: Upgrade and Diff Test
> 
>
> Key: CASSANDRA-15537
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15537
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Test/dtest
>Reporter: Josh McKenzie
>Assignee: Yifan Cai
>Priority: Normal
>
> {color:#de350b}NOTE: testing out epic flow for the cwiki testing stuff; no 
> need to do anything on this atm (1/3/2020){color}
>  
> Execution of upgrade and diff tests via cassandra-diff has proven to be one 
> of the most effective approaches toward identifying issues with the local 
> read/write path. These include instances of data loss, data corruption, data 
> resurrection, incorrect responses to queries, incomplete responses, and 
> others. Upgrade and diff tests can be executed concurrently with fault 
> injection (such as host or network failure), as well as during mixed-version 
> scenarios (such as upgrading half of the instances in a cluster and running 
> upgradesstables on only half of the upgraded instances).
> Upgrade and diff tests are expected to continue through the release cycle, 
> and are a great way for contributors to gain confidence in the correctness of 
> the database under their own workloads.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15537) Local Read/Write Path: Upgrade and Diff Test

2020-01-30 Thread Josh McKenzie (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh McKenzie updated CASSANDRA-15537:
--
Issue Type: Task  (was: Improvement)

> Local Read/Write Path: Upgrade and Diff Test
> 
>
> Key: CASSANDRA-15537
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15537
> Project: Cassandra
>  Issue Type: Task
>  Components: Test/dtest
>Reporter: Josh McKenzie
>Assignee: Yifan Cai
>Priority: Normal
>
> {color:#de350b}NOTE: testing out epic flow for the cwiki testing stuff; no 
> need to do anything on this atm (1/3/2020){color}
>  
> Execution of upgrade and diff tests via cassandra-diff has proven to be one 
> of the most effective approaches toward identifying issues with the local 
> read/write path. These include instances of data loss, data corruption, data 
> resurrection, incorrect responses to queries, incomplete responses, and 
> others. Upgrade and diff tests can be executed concurrently with fault 
> injection (such as host or network failure), as well as during mixed-version 
> scenarios (such as upgrading half of the instances in a cluster and running 
> upgradesstables on only half of the upgraded instances).
> Upgrade and diff tests are expected to continue through the release cycle, 
> and are a great way for contributors to gain confidence in the correctness of 
> the database under their own workloads.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15538) 4.0 quality testing: Local Read/Write Path: Other Areas

2020-01-30 Thread Josh McKenzie (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh McKenzie updated CASSANDRA-15538:
--
Reviewers: Blake Eggleston, Sam Tunnicliffe

> 4.0 quality testing: Local Read/Write Path: Other Areas
> ---
>
> Key: CASSANDRA-15538
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15538
> Project: Cassandra
>  Issue Type: Task
>  Components: Test/dtest
>Reporter: Josh McKenzie
>Assignee: Aleksey Yeschenko
>Priority: Normal
>
> {color:#de350b}NOTE: just testing out Jira as replacement for cwiki. No need 
> to take action on this atm (1/30/2020){color}
>  
> Testing in this area refers to the local read/write path (StorageProxy, 
> ColumnFamilyStore, Memtable, SSTable reading/writing, etc). We are still 
> finding numerous bugs and issues with the 3.0 storage engine rewrite 
> (CASSANDRA-8099). For 4.0 we want to ensure that we thoroughly cover the 
> local read/write path with techniques such as property-based testing, fuzzing 
> ([example|http://cassandra.apache.org/blog/2018/10/17/finding_bugs_with_property_based_testing.html]),
>  and a source audit.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15538) 4.0 quality testing: Local Read/Write Path: Other Areas

2020-01-30 Thread Josh McKenzie (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh McKenzie updated CASSANDRA-15538:
--
Description: 
{color:#de350b}NOTE: just testing out Jira as replacement for cwiki. No need to 
take action on this atm (1/30/2020){color}

 

Testing in this area refers to the local read/write path (StorageProxy, 
ColumnFamilyStore, Memtable, SSTable reading/writing, etc). We are still 
finding numerous bugs and issues with the 3.0 storage engine rewrite 
(CASSANDRA-8099). For 4.0 we want to ensure that we thoroughly cover the local 
read/write path with techniques such as property-based testing, fuzzing 
([example|http://cassandra.apache.org/blog/2018/10/17/finding_bugs_with_property_based_testing.html]),
 and a source audit.

  was:Testing in this area refers to the local read/write path (StorageProxy, 
ColumnFamilyStore, Memtable, SSTable reading/writing, etc). We are still 
finding numerous bugs and issues with the 3.0 storage engine rewrite 
(CASSANDRA-8099). For 4.0 we want to ensure that we thoroughly cover the local 
read/write path with techniques such as property-based testing, fuzzing 
([example|http://cassandra.apache.org/blog/2018/10/17/finding_bugs_with_property_based_testing.html]),
 and a source audit.


> 4.0 quality testing: Local Read/Write Path: Other Areas
> ---
>
> Key: CASSANDRA-15538
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15538
> Project: Cassandra
>  Issue Type: Task
>  Components: Test/dtest
>Reporter: Josh McKenzie
>Assignee: Aleksey Yeschenko
>Priority: Normal
>
> {color:#de350b}NOTE: just testing out Jira as replacement for cwiki. No need 
> to take action on this atm (1/30/2020){color}
>  
> Testing in this area refers to the local read/write path (StorageProxy, 
> ColumnFamilyStore, Memtable, SSTable reading/writing, etc). We are still 
> finding numerous bugs and issues with the 3.0 storage engine rewrite 
> (CASSANDRA-8099). For 4.0 we want to ensure that we thoroughly cover the 
> local read/write path with techniques such as property-based testing, fuzzing 
> ([example|http://cassandra.apache.org/blog/2018/10/17/finding_bugs_with_property_based_testing.html]),
>  and a source audit.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-15538) 4.0 quality testing: Local Read/Write Path: Other Areas

2020-01-30 Thread Josh McKenzie (Jira)
Josh McKenzie created CASSANDRA-15538:
-

 Summary: 4.0 quality testing: Local Read/Write Path: Other Areas
 Key: CASSANDRA-15538
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15538
 Project: Cassandra
  Issue Type: Task
  Components: Test/dtest
Reporter: Josh McKenzie
Assignee: Aleksey Yeschenko


Testing in this area refers to the local read/write path (StorageProxy, 
ColumnFamilyStore, Memtable, SSTable reading/writing, etc). We are still 
finding numerous bugs and issues with the 3.0 storage engine rewrite 
(CASSANDRA-8099). For 4.0 we want to ensure that we thoroughly cover the local 
read/write path with techniques such as property-based testing, fuzzing 
([example|http://cassandra.apache.org/blog/2018/10/17/finding_bugs_with_property_based_testing.html]),
 and a source audit.
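
As a toy illustration of the property-based style linked above (this is not 
part of the Cassandra test suite; it assumes the hypothesis library and a 
made-up serializer), the round-trip property looks like this:

{code:python}
import struct

from hypothesis import given, strategies as st

def encode(value):
    # Made-up stand-in for a storage-engine serializer: 8-byte big-endian long.
    return struct.pack(">q", value)

def decode(blob):
    return struct.unpack(">q", blob)[0]

@given(st.integers(min_value=-2**63, max_value=2**63 - 1))
def test_roundtrip(value):
    # Property: anything we write must read back unchanged.
    assert decode(encode(value)) == value
{code}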



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15537) Local Read/Write Path: Upgrade and Diff Test

2020-01-30 Thread Josh McKenzie (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh McKenzie updated CASSANDRA-15537:
--
Description: 
{color:#de350b}NOTE: testing out epic flow for the cwiki testing stuff; no need 
to do anything on this atm (1/3/2020){color}

 

Execution of upgrade and diff tests via cassandra-diff has proven to be one of 
the most effective approaches toward identifying issues with the local 
read/write path. These include instances of data loss, data corruption, data 
resurrection, incorrect responses to queries, incomplete responses, and others. 
Upgrade and diff tests can be executed concurrently with fault injection (such 
as host or network failure), as well as during mixed-version scenarios (such as 
upgrading half of the instances in a cluster and running upgradesstables on 
only half of the upgraded instances).

Upgrade and diff tests are expected to continue through the release cycle, and 
are a great way for contributors to gain confidence in the correctness of the 
database under their own workloads.

  was:
Execution of upgrade and diff tests via cassandra-diff has proven to be one of 
the most effective approaches toward identifying issues with the local 
read/write path. These include instances of data loss, data corruption, data 
resurrection, incorrect responses to queries, incomplete responses, and others. 
Upgrade and diff tests can be executed concurrently with fault injection (such 
as host or network failure), as well as during mixed-version scenarios (such as 
upgrading half of the instances in a cluster and running upgradesstables on 
only half of the upgraded instances).

Upgrade and diff tests are expected to continue through the release cycle, and 
are a great way for contributors to gain confidence in the correctness of the 
database under their own workloads.


> Local Read/Write Path: Upgrade and Diff Test
> 
>
> Key: CASSANDRA-15537
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15537
> Project: Cassandra
>  Issue Type: Task
>  Components: Test/dtest
>Reporter: Josh McKenzie
>Assignee: Yifan Cai
>Priority: Normal
>
> {color:#de350b}NOTE: testing out epic flow for the cwiki testing stuff; no 
> need to do anything on this atm (1/3/2020){color}
>  
> Execution of upgrade and diff tests via cassandra-diff has proven to be one 
> of the most effective approaches toward identifying issues with the local 
> read/write path. These include instances of data loss, data corruption, data 
> resurrection, incorrect responses to queries, incomplete responses, and 
> others. Upgrade and diff tests can be executed concurrently with fault 
> injection (such as host or network failure), as well as during mixed-version 
> scenarios (such as upgrading half of the instances in a cluster and running 
> upgradesstables on only half of the upgraded instances).
> Upgrade and diff tests are expected to continue through the release cycle, 
> and are a great way for contributors to gain confidence in the correctness of 
> the database under their own workloads.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-15313) Fix flaky - ChecksummingTransformerTest - org.apache.cassandra.transport.frame.checksum.ChecksummingTransformerTest

2020-01-30 Thread Brandon Williams (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams reassigned CASSANDRA-15313:


Assignee: Brandon Williams  (was: Ekaterina Dimitrova)

> Fix flaky - ChecksummingTransformerTest - 
> org.apache.cassandra.transport.frame.checksum.ChecksummingTransformerTest
> ---
>
> Key: CASSANDRA-15313
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15313
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest
>Reporter: Vinay Chella
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 4.0-alpha
>
>
> During the recent runs, this test appears to be flaky.
> Example failure: 
> [https://circleci.com/gh/vinaykumarchella/cassandra/459#tests/containers/94]
> corruptionCausesFailure-compression - 
> org.apache.cassandra.transport.frame.checksum.ChecksummingTransformerTest
> {code:java}
> java.lang.OutOfMemoryError: GC overhead limit exceeded
>   at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
>   at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
>   at org.quicktheories.impl.Precursor.<init>(Precursor.java:17)
>   at 
> org.quicktheories.impl.ConcreteDetachedSource.<init>(ConcreteDetachedSource.java:8)
>   at 
> org.quicktheories.impl.ConcreteDetachedSource.detach(ConcreteDetachedSource.java:23)
>   at org.quicktheories.generators.Retry.generate(CodePoints.java:51)
>   at 
> org.quicktheories.generators.Generate.lambda$intArrays$10(Generate.java:190)
>   at 
> org.quicktheories.generators.Generate$$Lambda$17/1847008471.generate(Unknown 
> Source)
>   at org.quicktheories.core.DescribingGenerator.generate(Gen.java:255)
>   at org.quicktheories.core.Gen.lambda$map$0(Gen.java:36)
>   at org.quicktheories.core.Gen$$Lambda$20/71399214.generate(Unknown 
> Source)
>   at org.quicktheories.core.Gen.lambda$map$0(Gen.java:36)
>   at org.quicktheories.core.Gen$$Lambda$20/71399214.generate(Unknown 
> Source)
>   at org.quicktheories.core.Gen.lambda$mix$10(Gen.java:184)
>   at org.quicktheories.core.Gen$$Lambda$45/802243390.generate(Unknown 
> Source)
>   at org.quicktheories.core.Gen.lambda$flatMap$5(Gen.java:93)
>   at org.quicktheories.core.Gen$$Lambda$48/363509958.generate(Unknown 
> Source)
>   at 
> org.quicktheories.dsl.TheoryBuilder4.lambda$prgnToTuple$12(TheoryBuilder4.java:188)
>   at 
> org.quicktheories.dsl.TheoryBuilder4$$Lambda$40/2003496028.generate(Unknown 
> Source)
>   at org.quicktheories.core.DescribingGenerator.generate(Gen.java:255)
>   at org.quicktheories.core.FilteredGenerator.generate(Gen.java:225)
>   at org.quicktheories.core.Gen.lambda$map$0(Gen.java:36)
>   at org.quicktheories.core.Gen$$Lambda$20/71399214.generate(Unknown 
> Source)
>   at org.quicktheories.impl.Core.generate(Core.java:150)
>   at org.quicktheories.impl.Core.shrink(Core.java:103)
>   at org.quicktheories.impl.Core.run(Core.java:39)
>   at org.quicktheories.impl.TheoryRunner.check(TheoryRunner.java:35)
>   at org.quicktheories.dsl.TheoryBuilder4.check(TheoryBuilder4.java:150)
>   at 
> org.quicktheories.dsl.TheoryBuilder4.checkAssert(TheoryBuilder4.java:162)
>   at 
> org.apache.cassandra.transport.frame.checksum.ChecksummingTransformerTest.corruptionCausesFailure(ChecksummingTransformerTest.java:87)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-15537) Local Read/Write Path: Upgrade and Diff Test

2020-01-30 Thread Josh McKenzie (Jira)
Josh McKenzie created CASSANDRA-15537:
-

 Summary: Local Read/Write Path: Upgrade and Diff Test
 Key: CASSANDRA-15537
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15537
 Project: Cassandra
  Issue Type: Task
  Components: Test/dtest
Reporter: Josh McKenzie
Assignee: Yifan Cai


Execution of upgrade and diff tests via cassandra-diff has proven to be one of 
the most effective approaches toward identifying issues with the local 
read/write path. These include instances of data loss, data corruption, data 
resurrection, incorrect responses to queries, incomplete responses, and others. 
Upgrade and diff tests can be executed concurrently with fault injection (such 
as host or network failure), as well as during mixed-version scenarios (such as 
upgrading half of the instances in a cluster and running upgradesstables on 
only half of the upgraded instances).

Upgrade and diff tests are expected to continue through the release cycle, and 
are a great way for contributors to gain confidence in the correctness of the 
database under their own workloads.
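
As a rough conceptual sketch of the comparison being described (cassandra-diff 
itself is a Spark application; this toy version only assumes the DataStax 
Python driver, with hypothetical contact points and a hypothetical ks.tbl 
table):

{code:python}
from cassandra.cluster import Cluster

def rows_for(session, key):
    # Rows come back in clustering order, so a straight list comparison works.
    result = session.execute("SELECT * FROM ks.tbl WHERE pk = %s", (key,))
    return [tuple(row) for row in result]

def diff_partition(key):
    source = Cluster(["source-node"])   # e.g. the pre-upgrade cluster
    target = Cluster(["target-node"])   # e.g. the upgraded cluster
    try:
        same = rows_for(source.connect(), key) == rows_for(target.connect(), key)
        if not same:
            print("MISMATCH for partition", key)
        return same
    finally:
        source.shutdown()
        target.shutdown()
{code}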



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15311) Fix flakey test_13595 - consistency_test.TestConsistency

2020-01-30 Thread Ekaterina Dimitrova (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekaterina Dimitrova updated CASSANDRA-15311:

Attachment: CASSANDRA-15311.txt

> Fix flakey  test_13595 - consistency_test.TestConsistency
> -
>
> Key: CASSANDRA-15311
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15311
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest
>Reporter: Joey Lynch
>Assignee: Ekaterina Dimitrova
>Priority: Normal
> Fix For: 4.0-alpha
>
> Attachments: CASSANDRA-15311.txt
>
>
> Example failure: 
> [https://circleci.com/gh/jolynch/cassandra/559#tests/containers/29]
> {noformat}
> Your job ran 1007 tests with 1 failure
> test_13595 - 
> consistency_test.TestConsistencyconsistency_test.pyAssertionError: assert 9 
> == 4  +  where 4 =   0x7f9f0775b160>>('org.apache.cassandra.metrics:type=Table,name=ShortReadProtectionRequests,keyspace=test,scope=test',
>  'Count')  +where  > = 
> .read_attribute
> self = 
> @since('3.0')
> def test_13595(self):
> """
> @jira_ticket CASSANDRA-13595
> """
> cluster = self.cluster
> 
> # disable hinted handoff and set batch commit log so this doesn't 
> interfere with the test
> cluster.set_configuration_options(values={'hinted_handoff_enabled': 
> False})
> cluster.set_batch_commitlog(enabled=True)
> 
> cluster.populate(2)
> node1, node2 = cluster.nodelist()
> remove_perf_disable_shared_mem(node1)  # necessary for jmx
> cluster.start(wait_other_notice=True)
> 
> session = self.patient_cql_connection(node1)
> 
> query = "CREATE KEYSPACE IF NOT EXISTS test WITH replication = 
> {'class': 'NetworkTopologyStrategy', 'datacenter1': 2};"
> session.execute(query)
> 
> query = 'CREATE TABLE IF NOT EXISTS test.test (id int PRIMARY KEY);'
> session.execute(query)
> 
> # populate the table with 10 partitions,
> # then delete a bunch of them on different nodes
> # until we get the following pattern:
> 
> #token | k | 1 | 2 |
> # -7509452495886106294 | 5 | n | y |
> # -4069959284402364209 | 1 | y | n |
> # -3799847372828181882 | 8 | n | y |
> # -3485513579396041028 | 0 | y | n |
> # -3248873570005575792 | 2 | n | y |
> # -2729420104000364805 | 4 | y | n |
> #  1634052884888577606 | 7 | n | y |
> #  2705480034054113608 | 6 | y | n |
> #  3728482343045213994 | 9 | n | y |
> #  9010454139840013625 | 3 | y | y |
> 
> stmt = session.prepare('INSERT INTO test.test (id) VALUES (?);')
> for id in range(0, 10):
> session.execute(stmt, [id], ConsistencyLevel.ALL)
> 
> # delete every other partition on node1 while node2 is down
> node2.stop(wait_other_notice=True)
> session.execute('DELETE FROM test.test WHERE id IN (5, 8, 2, 7, 9);')
> node2.start(wait_other_notice=True, wait_for_binary_proto=True)
> 
> session = self.patient_cql_connection(node2)
> 
> # delete every other alternate partition on node2 while node1 is down
> node1.stop(wait_other_notice=True)
> session.execute('DELETE FROM test.test WHERE id IN (1, 0, 4, 6);')
> node1.start(wait_other_notice=True, wait_for_binary_proto=True)
> 
> session = self.patient_exclusive_cql_connection(node1)
> 
> # until #13595 the query would incorrectly return [1]
> assert_all(session,
>'SELECT id FROM test.test LIMIT 1;',
>[[3]],
>cl=ConsistencyLevel.ALL)
> 
> srp = make_mbean('metrics', type='Table', 
> name='ShortReadProtectionRequests', keyspace='test', scope='test')
> with JolokiaAgent(node1) as jmx:
> # 4 srp requests for node1 and 5 for node2, total of 9
> >   assert 9 == jmx.read_attribute(srp, 'Count')
> E   AssertionError: assert 9 == 4
> E+  where 4 =   0x7f9f0775b160>>('org.apache.cassandra.metrics:type=Table,name=ShortReadProtectionRequests,keyspace=test,scope=test',
>  'Count')
> E+where  > = 
> .read_attribute
> consistency_test.py:1288: AssertionError {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15311) Fix flakey test_13595 - consistency_test.TestConsistency

2020-01-30 Thread Ekaterina Dimitrova (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026998#comment-17026998
 ] 

Ekaterina Dimitrova commented on CASSANDRA-15311:
-

Looking at the Jenkins OSS artifacts, this test is marked as stable (I hope I 
am looking at the right place):
https://builds.apache.org/view/A-D/view/Cassandra%20trunk/job/Cassandra-trunk-dtest/925/testReport/consistency_test/TestConsistency/test_13595/
Running it 100 times on my system, it always completed successfully. 
Attached is the log.
Joey Lynch, do you have any other artifacts from CircleCI? I have never used it 
and am not familiar with where to find artifacts for Cassandra or how to 
multiplex tests there. Any advice/additional information would be highly 
appreciated!

> Fix flakey  test_13595 - consistency_test.TestConsistency
> -
>
> Key: CASSANDRA-15311
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15311
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest
>Reporter: Joey Lynch
>Assignee: Ekaterina Dimitrova
>Priority: Normal
> Fix For: 4.0-alpha
>
> Attachments: CASSANDRA-15311.txt
>
>
> Example failure: 
> [https://circleci.com/gh/jolynch/cassandra/559#tests/containers/29]
> {noformat}
> Your job ran 1007 tests with 1 failure
> test_13595 - 
> consistency_test.TestConsistencyconsistency_test.pyAssertionError: assert 9 
> == 4  +  where 4 =   0x7f9f0775b160>>('org.apache.cassandra.metrics:type=Table,name=ShortReadProtectionRequests,keyspace=test,scope=test',
>  'Count')  +where  > = 
> .read_attribute
> self = 
> @since('3.0')
> def test_13595(self):
> """
> @jira_ticket CASSANDRA-13595
> """
> cluster = self.cluster
> 
> # disable hinted handoff and set batch commit log so this doesn't 
> interfere with the test
> cluster.set_configuration_options(values={'hinted_handoff_enabled': 
> False})
> cluster.set_batch_commitlog(enabled=True)
> 
> cluster.populate(2)
> node1, node2 = cluster.nodelist()
> remove_perf_disable_shared_mem(node1)  # necessary for jmx
> cluster.start(wait_other_notice=True)
> 
> session = self.patient_cql_connection(node1)
> 
> query = "CREATE KEYSPACE IF NOT EXISTS test WITH replication = 
> {'class': 'NetworkTopologyStrategy', 'datacenter1': 2};"
> session.execute(query)
> 
> query = 'CREATE TABLE IF NOT EXISTS test.test (id int PRIMARY KEY);'
> session.execute(query)
> 
> # populate the table with 10 partitions,
> # then delete a bunch of them on different nodes
> # until we get the following pattern:
> 
> #token | k | 1 | 2 |
> # -7509452495886106294 | 5 | n | y |
> # -4069959284402364209 | 1 | y | n |
> # -3799847372828181882 | 8 | n | y |
> # -3485513579396041028 | 0 | y | n |
> # -3248873570005575792 | 2 | n | y |
> # -2729420104000364805 | 4 | y | n |
> #  1634052884888577606 | 7 | n | y |
> #  2705480034054113608 | 6 | y | n |
> #  3728482343045213994 | 9 | n | y |
> #  9010454139840013625 | 3 | y | y |
> 
> stmt = session.prepare('INSERT INTO test.test (id) VALUES (?);')
> for id in range(0, 10):
> session.execute(stmt, [id], ConsistencyLevel.ALL)
> 
> # delete every other partition on node1 while node2 is down
> node2.stop(wait_other_notice=True)
> session.execute('DELETE FROM test.test WHERE id IN (5, 8, 2, 7, 9);')
> node2.start(wait_other_notice=True, wait_for_binary_proto=True)
> 
> session = self.patient_cql_connection(node2)
> 
> # delete every other alternate partition on node2 while node1 is down
> node1.stop(wait_other_notice=True)
> session.execute('DELETE FROM test.test WHERE id IN (1, 0, 4, 6);')
> node1.start(wait_other_notice=True, wait_for_binary_proto=True)
> 
> session = self.patient_exclusive_cql_connection(node1)
> 
> # until #13595 the query would incorrectly return [1]
> assert_all(session,
>'SELECT id FROM test.test LIMIT 1;',
>[[3]],
>cl=ConsistencyLevel.ALL)
> 
> srp = make_mbean('metrics', type='Table', 
> name='ShortReadProtectionRequests', keyspace='test', scope='test')
> with JolokiaAgent(node1) as jmx:
> # 4 srp requests for node1 and 5 for node2, total of 9
> >   assert 9 == jmx.read_attribute(srp, 'Count')
> E   AssertionError: assert 9 == 4
> E+  where 4 =   

[jira] [Updated] (CASSANDRA-15536) (JIRA WORKFLOW TEST) 4.0 Quality: Components and Test Plans

2020-01-30 Thread Josh McKenzie (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh McKenzie updated CASSANDRA-15536:
--
Change Category: Quality Assurance
 Complexity: Challenging
Component/s: Test/unit
 Test/fuzz
 Test/dtest
 Test/benchmark
   Priority: High  (was: Normal)
 Status: Open  (was: Triage Needed)

> (JIRA WORKFLOW TEST) 4.0 Quality: Components and Test Plans
> ---
>
> Key: CASSANDRA-15536
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15536
> Project: Cassandra
>  Issue Type: Epic
>  Components: Test/benchmark, Test/dtest, Test/fuzz, Test/unit
>Reporter: Josh McKenzie
>Assignee: Josh McKenzie
>Priority: High
> Fix For: 4.0
>
>
> [ALPHA TEST]
> {color:#de350b} This is a test to shift the test tracking and work from 
> [cwiki|https://cwiki.apache.org/confluence/display/CASSANDRA/4.0+Quality:+Components+and+Test+Plans]
>  into JIRA to unify our workflow and reduce friction to collaboration on all 
> the work going into 4.0. The goal of this exercise is to see if this works as 
> a value-add replacement for the old model.{color}
> -
>  The overarching goal of the 4.0 release is that Cassandra 4.0 should be at a 
> state where major users would run it in production when it is cut. To gain 
> this confidence there are various ongoing testing efforts involving 
> correctness, performance, and ease of use. On this page we try to coordinate 
> and identify blockers for subsystems before we can release 4.0.
> For each component we strive to have shepherds and contributors involved. 
> Shepherds should be committers or knowledgeable component owners and are 
> responsible for driving their blocking tickets to completion and ensuring 
> quality in their claimed area, while contributors have signed up to help 
> verify that subsystem by running tests or contributing fixes. Shepherds also 
> ideally help set testing standards and ensure that we meet a high standard of 
> quality in their claimed area.
> {color:#de350b}(For now, we will overload "assignee == shepherd", and 
> "reviewer(s) == contributors" so we don't have to change fields in 
> JIRA.){color}
> -If you are interested in contributing to testing 4.0, please add your name 
> as a contributor and get involved in the tracking ticket, and dev 
> list/IRC discussions involving that component.-
>  {color:#de350b}For now - please treat these tickets as read-only until such 
> time as we discuss this approach on the dev ML.{color}
> h3. Targeted Components / Subsystems
> We've tried to collect some of the major components or subsystems that we 
> want to ensure work properly towards having a great 4.0 release. If you think 
> something is missing please add it. Better yet volunteer to contribute to 
> testing it!
> h4. Internode Messaging
>  In 4.0 we're getting a new Netty based inter-node communication system 
> (CASSANDRA-8457). As internode messaging is vital to the correctness and 
> performance of the database we should make sure that all forms (TLS, 
> compressed, low latency, high latency, etc ...) of internode messaging 
> function correctly.
> h4. Test Infrastructure / Automation: Diff Testing
>  Diff testing is a form of model-based testing in which two clusters are 
> exhaustively compared to assert identity. To support Apache Cassandra 4.0 
> validation, contributors have developed cassandra-diff. This is a Spark 
> application that distributes the token range over a configurable number of 
> Spark executors, then parallelizes randomized forward and reverse reads with 
> varying paging sizes to read and compare every row present in the cluster, 
> persisting a record of mismatches for investigation. This methodology has 
> been instrumental in identifying data loss, data corruption, and incorrect 
> response issues introduced in early Cassandra 3.0 releases.
> cassandra-diff and associated documentation can be found at: 
> [https://github.com/apache/cassandra-diff]. Contributors are encouraged to 
> run diff tests against clusters they manage and report issues to ensure 
> workload diversity across the project.
> h4. System Tables and Internal Schema
>  This task covers a review of and minor bug fixes to local and distributed 
> system keyspaces. Planned work in this area is now complete.
> h4. Source Audit and Performance Testing: Streaming
>  This task covers an audit of the Streaming implementation in Apache 
> Cassandra 4.0. In this release, contributors have implemented full-SSTable 
> streaming to improve performance and reduce memory pressure. Internode 
> messaging changes implemented in CASSANDRA-15066 adjacent to streaming 
> suggested that 

[jira] [Updated] (CASSANDRA-15536) (JIRA WORKFLOW TEST) 4.0 Quality: Components and Test Plans

2020-01-30 Thread Josh McKenzie (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh McKenzie updated CASSANDRA-15536:
--
Fix Version/s: 4.0

> (JIRA WORKFLOW TEST) 4.0 Quality: Components and Test Plans
> ---
>
> Key: CASSANDRA-15536
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15536
> Project: Cassandra
>  Issue Type: Epic
>Reporter: Josh McKenzie
>Assignee: Josh McKenzie
>Priority: Normal
> Fix For: 4.0
>
>
> [ALPHA TEST]
> {color:#de350b} This is a test to shift the test tracking and work from 
> [cwiki|https://cwiki.apache.org/confluence/display/CASSANDRA/4.0+Quality:+Components+and+Test+Plans]
>  into JIRA to unify our workflow and reduce friction to collaboration on all 
> the work going into 4.0. The goal of this exercise is to see if this works as 
> a value-add replacement for the old model.{color}
> -
>  The overarching goal of the 4.0 release is that Cassandra 4.0 should be at a 
> state where major users would run it in production when it is cut. To gain 
> this confidence there are various ongoing testing efforts involving 
> correctness, performance, and ease of use. On this page we try to coordinate 
> and identify blockers for subsystems before we can release 4.0.
> For each component we strive to have shepherds and contributors involved. 
> Shepherds should be committers or knowledgeable component owners and are 
> responsible for driving their blocking tickets to completion and ensuring 
> quality in their claimed area, while contributors have signed up to help 
> verify that subsystem by running tests or contributing fixes. Shepherds also 
> ideally help set testing standards and ensure that we meet a high standard of 
> quality in their claimed area.
> {color:#de350b}(For now, we will overload "assignee == shepherd", and 
> "reviewer(s) == contributors" so we don't have to change fields in 
> JIRA.){color}
> -If you are interested in contributing to testing 4.0, please add your name 
> as a contributor and get involved in the tracking ticket, and dev 
> list/IRC discussions involving that component.-
>  {color:#de350b}For now - please treat these tickets as read-only until such 
> time as we discuss this approach on the dev ML.{color}
> h3. Targeted Components / Subsystems
> We've tried to collect some of the major components or subsystems that we 
> want to ensure work properly towards having a great 4.0 release. If you think 
> something is missing please add it. Better yet volunteer to contribute to 
> testing it!
> h4. Internode Messaging
>  In 4.0 we're getting a new Netty based inter-node communication system 
> (CASSANDRA-8457). As internode messaging is vital to the correctness and 
> performance of the database we should make sure that all forms (TLS, 
> compressed, low latency, high latency, etc ...) of internode messaging 
> function correctly.
> h4. Test Infrastructure / Automation: Diff Testing
>  Diff testing is a form of model-based testing in which two clusters are 
> exhaustively compared to assert identity. To support Apache Cassandra 4.0 
> validation, contributors have developed cassandra-diff. This is a Spark 
> application that distributes the token range over a configurable number of 
> Spark executors, then parallelizes randomized forward and reverse reads with 
> varying paging sizes to read and compare every row present in the cluster, 
> persisting a record of mismatches for investigation. This methodology has 
> been instrumental in identifying data loss, data corruption, and incorrect 
> response issues introduced in early Cassandra 3.0 releases.
> cassandra-diff and associated documentation can be found at: 
> [https://github.com/apache/cassandra-diff]. Contributors are encouraged to 
> run diff tests against clusters they manage and report issues to ensure 
> workload diversity across the project.
> h4. System Tables and Internal Schema
>  This task covers a review of and minor bug fixes to local and distributed 
> system keyspaces. Planned work in this area is now complete.
> h4. Source Audit and Performance Testing: Streaming
>  This task covers an audit of the Streaming implementation in Apache 
> Cassandra 4.0. In this release, contributors have implemented full-SSTable 
> streaming to improve performance and reduce memory pressure. Internode 
> messaging changes implemented in CASSANDRA-15066 adjacent to streaming 
> suggested that review of the streaming implementation itself may be 
> desirable. Prior work also covered performance testing of full-SSTable 
> streaming.
> h4. Test Infrastructure / Automation: "Harry"
>  CASSANDRA-15348 - Harry: generator library and extensible framework for fuzz 
> testing Apache Cassandra TRIAGE NEEDED
> Harry is a component for 

[jira] [Created] (CASSANDRA-15536) (JIRA WORKFLOW TEST) 4.0 Quality: Components and Test Plans

2020-01-30 Thread Josh McKenzie (Jira)
Josh McKenzie created CASSANDRA-15536:
-

 Summary: (JIRA WORKFLOW TEST) 4.0 Quality: Components and Test 
Plans
 Key: CASSANDRA-15536
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15536
 Project: Cassandra
  Issue Type: Epic
Reporter: Josh McKenzie
Assignee: Josh McKenzie


[ALPHA TEST]
{color:#de350b} This is a test to shift the test tracking and work from 
[cwiki|https://cwiki.apache.org/confluence/display/CASSANDRA/4.0+Quality:+Components+and+Test+Plans]
 into JIRA to unify our workflow and reduce friction to collaboration on all 
the work going into 4.0. The goal of this exercise is to see if this works as a 
value-add replacement for the old model.{color}

-
 The overarching goal of the 4.0 release is that Cassandra 4.0 should be at a 
state where major users would run it in production when it is cut. To gain this 
confidence there are various ongoing testing efforts involving correctness, 
performance, and ease of use. On this page we try to coordinate and identify 
blockers for subsystems before we can release 4.0.

For each component we strive to have shepherds and contributors involved. 
Shepherds should be committers or knowledgeable component owners and are 
responsible for driving their blocking tickets to completion and ensuring 
quality in their claimed area, while contributors have signed up to help verify 
that subsystem by running tests or contributing fixes. Shepherds also ideally 
help set testing standards and ensure that we meet a high standard of quality 
in their claimed area.

{color:#de350b}(For now, we will overload "assignee == shepherd", and 
"reviewer(s) == contributors" so we don't have to change fields in JIRA.){color}

-If you are interested in contributing to testing 4.0, please add your name as 
a contributor and get involved in the tracking ticket, and dev list/IRC 
discussions involving that component.-
 {color:#de350b}For now - please treat these tickets as read-only until such 
time as we discuss this approach on the dev ML.{color}

h3. Targeted Components / Subsystems
We've tried to collect some of the major components or subsystems that we want 
to ensure work properly towards having a great 4.0 release. If you think 
something is missing please add it. Better yet volunteer to contribute to 
testing it!

h4. Internode Messaging
 In 4.0 we're getting a new Netty based inter-node communication system 
(CASSANDRA-8457). As internode messaging is vital to the correctness and 
performance of the database we should make sure that all forms (TLS, 
compressed, low latency, high latency, etc ...) of internode messaging function 
correctly.


h4. Test Infrastructure / Automation: Diff Testing
 Diff testing is a form of model-based testing in which two clusters are 
exhaustively compared to assert identity. To support Apache Cassandra 4.0 
validation, contributors have developed cassandra-diff. This is a Spark 
application that distributes the token range over a configurable number of 
Spark executors, then parallelizes randomized forward and reverse reads with 
varying paging sizes to read and compare every row present in the cluster, 
persisting a record of mismatches for investigation. This methodology has been 
instrumental in identifying data loss, data corruption, and incorrect response 
issues introduced in early Cassandra 3.0 releases.

cassandra-diff and associated documentation can be found at: 
[https://github.com/apache/cassandra-diff]. Contributors are encouraged to run 
diff tests against clusters they manage and report issues to ensure workload 
diversity across the project.
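
To make that concrete, the following is a much-simplified, single-threaded sketch 
of the token-range comparison idea; it is not cassandra-diff's actual code. It 
assumes the DataStax Java driver 3.x, a hypothetical {{test.test (id int PRIMARY 
KEY, val text)}} table, and placeholder contact points. The real tool distributes 
the slices over Spark executors, randomizes forward/reverse reads and paging 
sizes, and persists per-row mismatches rather than just counting slices.

{code:java}
// Much-simplified sketch of diff testing: split the token ring into slices,
// read each slice from both clusters, and flag slices whose rows differ.
// Hypothetical table and hosts; uses the DataStax Java driver 3.x.
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

import java.util.ArrayList;
import java.util.List;

public class MiniDiff
{
    static final String RANGE_QUERY =
        "SELECT id, val FROM test.test WHERE token(id) > ? AND token(id) <= ?";

    public static void main(String[] args)
    {
        try (Cluster source = Cluster.builder().addContactPoint("source-host").build();
             Cluster target = Cluster.builder().addContactPoint("target-host").build())
        {
            Session sourceSession = source.connect();
            Session targetSession = target.connect();
            PreparedStatement sourceStmt = sourceSession.prepare(RANGE_QUERY);
            PreparedStatement targetStmt = targetSession.prepare(RANGE_QUERY);

            // Split the full Murmur3 token range into equal slices; cassandra-diff
            // distributes slices like these over Spark executors instead of a loop.
            int slices = 16;
            long step = Long.MAX_VALUE / slices - Long.MIN_VALUE / slices;
            long start = Long.MIN_VALUE;
            int mismatchedSlices = 0;

            for (int i = 0; i < slices; i++)
            {
                long end = (i == slices - 1) ? Long.MAX_VALUE : start + step;
                // Rows come back in token order, so equal data yields equal lists.
                List<String> sourceRows = readSlice(sourceSession, sourceStmt, start, end);
                List<String> targetRows = readSlice(targetSession, targetStmt, start, end);
                if (!sourceRows.equals(targetRows))
                    mismatchedSlices++;   // the real tool records per-row mismatches
                start = end;
            }
            System.out.println("slices with mismatches: " + mismatchedSlices);
        }
    }

    private static List<String> readSlice(Session session, PreparedStatement stmt, long start, long end)
    {
        List<String> rows = new ArrayList<>();
        for (Row row : session.execute(stmt.bind(start, end)))
            rows.add(row.getInt("id") + ":" + row.getString("val"));
        return rows;
    }
}
{code}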


h4. System Tables and Internal Schema
 This task covers a review of and minor bug fixes to local and distributed 
system keyspaces. Planned work in this area is now complete.


h4. Source Audit and Performance Testing: Streaming
 This task covers an audit of the Streaming implementation in Apache Cassandra 
4.0. In this release, contributors have implemented full-SSTable streaming to 
improve performance and reduce memory pressure. Internode messaging changes 
implemented in CASSANDRA-15066 adjacent to streaming suggested that review of 
the streaming implementation itself may be desirable. Prior work also covered 
performance testing of full-SSTable streaming.


h4. Test Infrastructure / Automation: "Harry"
 CASSANDRA-15348 - Harry: generator library and extensible framework for fuzz 
testing Apache Cassandra TRIAGE NEEDED

Harry is a component for fuzz testing and verification of Apache Cassandra 
clusters at scale. Harry allows running tests that can validate the state of 
both dense nodes (to exercise the local read-write path) and large clusters (to 
exercise the distributed read-write path), and to do so efficiently. Harry 
defines a model that holds the state of the database, generators that produce 
reproducible, pseudo-random 
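
As a rough illustration of the seed-driven, model-based approach described here 
(hypothetical names only, not Harry's actual API): operations are generated 
deterministically from a seed and applied both to the cluster under test and to a 
simple in-memory model, and the two are then compared, so any mismatch can be 
reproduced by re-running the same seed.

{code:java}
// Hypothetical sketch of seed-driven, model-based fuzzing -- not Harry's real API.
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;
import java.util.Random;

public class MiniFuzzer
{
    // Stand-in for the cluster under test (e.g. a thin wrapper over a CQL session).
    interface SystemUnderTest
    {
        void write(int key, long value);
        Long read(int key);   // null if the key is absent
    }

    public static void fuzz(SystemUnderTest sut, long seed, int operations)
    {
        Random rng = new Random(seed);              // same seed => same operation stream
        Map<Integer, Long> model = new HashMap<>(); // in-memory model of expected state

        for (int i = 0; i < operations; i++)
        {
            int key = rng.nextInt(100);
            if (rng.nextBoolean())
            {
                long value = rng.nextLong();
                sut.write(key, value);              // apply to the system under test...
                model.put(key, value);              // ...and to the model
            }
            else
            {
                Long expected = model.get(key);
                Long actual = sut.read(key);
                if (!Objects.equals(expected, actual))
                    throw new AssertionError("seed " + seed + ", op " + i + ": key " + key +
                                             " expected " + expected + " but read " + actual);
            }
        }
    }
}
{code}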

[jira] [Assigned] (CASSANDRA-15313) Fix flaky - ChecksummingTransformerTest - org.apache.cassandra.transport.frame.checksum.ChecksummingTransformerTest

2020-01-30 Thread Ekaterina Dimitrova (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekaterina Dimitrova reassigned CASSANDRA-15313:
---

Assignee: Ekaterina Dimitrova

> Fix flaky - ChecksummingTransformerTest - 
> org.apache.cassandra.transport.frame.checksum.ChecksummingTransformerTest
> ---
>
> Key: CASSANDRA-15313
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15313
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest
>Reporter: Vinay Chella
>Assignee: Ekaterina Dimitrova
>Priority: Normal
> Fix For: 4.0-alpha
>
>
> During the recent runs, this test appears to be flaky.
> Example failure: 
> [https://circleci.com/gh/vinaykumarchella/cassandra/459#tests/containers/94]
> corruptionCausesFailure-compression - 
> org.apache.cassandra.transport.frame.checksum.ChecksummingTransformerTest
> {code:java}
> java.lang.OutOfMemoryError: GC overhead limit exceeded
>   at java.nio.HeapByteBuffer.(HeapByteBuffer.java:57)
>   at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
>   at org.quicktheories.impl.Precursor.(Precursor.java:17)
>   at 
> org.quicktheories.impl.ConcreteDetachedSource.(ConcreteDetachedSource.java:8)
>   at 
> org.quicktheories.impl.ConcreteDetachedSource.detach(ConcreteDetachedSource.java:23)
>   at org.quicktheories.generators.Retry.generate(CodePoints.java:51)
>   at 
> org.quicktheories.generators.Generate.lambda$intArrays$10(Generate.java:190)
>   at 
> org.quicktheories.generators.Generate$$Lambda$17/1847008471.generate(Unknown 
> Source)
>   at org.quicktheories.core.DescribingGenerator.generate(Gen.java:255)
>   at org.quicktheories.core.Gen.lambda$map$0(Gen.java:36)
>   at org.quicktheories.core.Gen$$Lambda$20/71399214.generate(Unknown 
> Source)
>   at org.quicktheories.core.Gen.lambda$map$0(Gen.java:36)
>   at org.quicktheories.core.Gen$$Lambda$20/71399214.generate(Unknown 
> Source)
>   at org.quicktheories.core.Gen.lambda$mix$10(Gen.java:184)
>   at org.quicktheories.core.Gen$$Lambda$45/802243390.generate(Unknown 
> Source)
>   at org.quicktheories.core.Gen.lambda$flatMap$5(Gen.java:93)
>   at org.quicktheories.core.Gen$$Lambda$48/363509958.generate(Unknown 
> Source)
>   at 
> org.quicktheories.dsl.TheoryBuilder4.lambda$prgnToTuple$12(TheoryBuilder4.java:188)
>   at 
> org.quicktheories.dsl.TheoryBuilder4$$Lambda$40/2003496028.generate(Unknown 
> Source)
>   at org.quicktheories.core.DescribingGenerator.generate(Gen.java:255)
>   at org.quicktheories.core.FilteredGenerator.generate(Gen.java:225)
>   at org.quicktheories.core.Gen.lambda$map$0(Gen.java:36)
>   at org.quicktheories.core.Gen$$Lambda$20/71399214.generate(Unknown 
> Source)
>   at org.quicktheories.impl.Core.generate(Core.java:150)
>   at org.quicktheories.impl.Core.shrink(Core.java:103)
>   at org.quicktheories.impl.Core.run(Core.java:39)
>   at org.quicktheories.impl.TheoryRunner.check(TheoryRunner.java:35)
>   at org.quicktheories.dsl.TheoryBuilder4.check(TheoryBuilder4.java:150)
>   at 
> org.quicktheories.dsl.TheoryBuilder4.checkAssert(TheoryBuilder4.java:162)
>   at 
> org.apache.cassandra.transport.frame.checksum.ChecksummingTransformerTest.corruptionCausesFailure(ChecksummingTransformerTest.java:87)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14970) New releases must supply SHA-256 and/or SHA-512 checksums

2020-01-30 Thread Michael Semb Wever (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026974#comment-17026974
 ] 

Michael Semb Wever commented on CASSANDRA-14970:


The cassandra-builds patch was used to cut and stage the 4.0-alpha3 release.

> New releases must supply SHA-256 and/or SHA-512 checksums
> -
>
> Key: CASSANDRA-14970
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14970
> Project: Cassandra
>  Issue Type: Bug
>  Components: Packaging
>Reporter: Michael Shuler
>Assignee: Michael Semb Wever
>Priority: Urgent
> Fix For: 2.2.16, 3.0.20, 3.11.6, 4.0
>
> Attachments: 
> 0001-Update-downloads-for-sha256-sha512-checksum-files.patch, 
> 0001-Update-release-checksum-algorithms-to-SHA-256-SHA-512.patch, 
> ant-publish-checksum-fail.jpg, build_cassandra-2.1.png, build_trunk.png, 
> cassandra-2.1_14970_updated.patch
>
>
> Release policy was updated around 9/2018 to state:
> "For new releases, PMCs MUST supply SHA-256 and/or SHA-512; and SHOULD NOT 
> supply MD5 or SHA-1. Existing releases do not need to be changed."
> build.xml needs to be updated from MD5 & SHA-1 to, at least, SHA-256 or both. 
> cassandra-builds/cassandra-release scripts need to be updated to work with 
> the new checksum files.
> http://www.apache.org/dev/release-distribution#sigs-and-sums



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15310) Fix flakey - testIdleDisconnect - org.apache.cassandra.transport.IdleDisconnectTest

2020-01-30 Thread Ekaterina Dimitrova (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026973#comment-17026973
 ] 

Ekaterina Dimitrova commented on CASSANDRA-15310:
-

Hi [~vinaykumarcse] and [~aprudhomme],

Are you still working on this one? Any help needed?

 

> Fix flakey - testIdleDisconnect - 
> org.apache.cassandra.transport.IdleDisconnectTest
> ---
>
> Key: CASSANDRA-15310
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15310
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: Joey Lynch
>Assignee: Andrew Prudhomme
>Priority: Normal
> Fix For: 4.0-alpha
>
>
> Example run: 
> [https://circleci.com/gh/jolynch/cassandra/561#tests/containers/86]
>  
> {noformat}
> Your job ran 4428 tests with 1 failure
> - testIdleDisconnect - 
> org.apache.cassandra.transport.IdleDisconnectTestjunit.framework.AssertionFailedError
>   at 
> org.apache.cassandra.transport.IdleDisconnectTest.testIdleDisconnect(IdleDisconnectTest.java:56)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14761) Rename speculative_retry to match additional_write_policy

2020-01-30 Thread Joey Lynch (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026945#comment-17026945
 ] 

Joey Lynch commented on CASSANDRA-14761:


[~jmckenzie], since TR is experimental I'm inclined not to introduce changes to 
the previously stable read options just to match the unstable TR user interface 
(it introduces risk in configuration management and user confusion). As I 
commented earlier, I'm inclined not to associate the two things. In terms of 
someone driving this ticket, I think one of the TR patch authors might be the 
appropriate owner (I'm still happy to review, or not, if folks disagree with my 
reasoning).

> Rename speculative_retry to match additional_write_policy
> -
>
> Key: CASSANDRA-14761
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14761
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/Core
>Reporter: Ariel Weisberg
>Priority: Normal
> Fix For: 4.0
>
>
> It's not really speculative. This commit is where it was last named and shows 
> what to update 
> https://github.com/aweisberg/cassandra/commit/e1df8e977d942a1b0da7c2a7554149c781d0e6c3



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-12995) update hppc dependency to 0.7

2020-01-30 Thread Ekaterina Dimitrova (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-12995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026866#comment-17026866
 ] 

Ekaterina Dimitrova edited comment on CASSANDRA-12995 at 1/30/20 7:10 PM:
--

One of the unit tests fails, as it looks like the equals method of LongHashSet 
is not overridden properly in the 0.8 version. I will look into how to file an 
issue with the hppc project and provide an update here later. 


was (Author: e.dimitrova):
One of the unit tests fails as it looks like the equals method of LongHashSet 
is not overwritten properly in the 0.8 version. I will look how to file an 
issue to the hppc project and provide an update here later. 

> update hppc dependency to 0.7
> -
>
> Key: CASSANDRA-12995
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12995
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Dependencies, Packaging
>Reporter: Tomas Repik
>Assignee: Ekaterina Dimitrova
>Priority: Normal
>  Labels: easyfix
> Fix For: 4.0
>
> Attachments: Screen Shot 2020-01-29 at 11.14.30 AM.png, Screen Shot 
> 2020-01-29 at 11.20.39 AM.png, Screen Shot 2020-01-29 at 11.20.47 AM.png, 
> cassandra-3.11.0-hppc.patch
>
>
> Cassandra 3.11.0 is about to be included in Fedora. There are some tweaks to 
> the sources we need to do in order to successfully build it. Cassandra 
> depends on hppc 0.5.4, but In Fedora we have the newer version 0.7.1 Upstream 
> released even newer version 0.7.2. I attached a patch updating cassandra 
> sources to depend on the 0.7.1 hppc sources. It should be also compatible 
> with the newest upstream version. The only actual changes are the removal of 
> Open infix in class names. The issue was discussed in here: 
> https://bugzilla.redhat.com/show_bug.cgi?id=1340876 Please consider updating.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15367) Memtable memory allocations may deadlock

2020-01-30 Thread Blake Eggleston (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026922#comment-17026922
 ] 

Blake Eggleston commented on CASSANDRA-15367:
-

Nice. I’m also fairly comfortable not addressing competition in this scenario. 
Ideally, we would, but I’m not sure it would be worth adding a secondary 
synchronization path. Although I guess we could synchronize on the writeOp.

There still is (technically) a brief window for deadlock between 
{{setCommitLogUpperBound}} and {{writeBarrier.issue()}} in 
{{org.apache.cassandra.db.ColumnFamilyStore.Flush#Flush}}, but I’m not sure if 
it’s worth addressing, since we’d need to immediately waste 10MB on a partition 
as soon as a memtable is created, and it’s not exacerbated by flush queue 
length. Anyway, I think this qualifies as good enough. I’d also prefer it over 
waiting on the previous op group because it limits the window of potential bad 
behavior to a narrower set of circumstances. What do you think?

About removing the lock, I'm sure 15511 will help with contention, and we 
should commit it; however, I think there will still be pathological cases where 
faster updates won’t be enough. For instance, if there were 20 small updates 
and one much larger one contending with each other, I can imagine the large one 
would have a tough time making progress and end up wasting a lot of memory.


 This might be better illustrated with code, and would be a trunk-only 
follow-on ticket, but instead of synchronizing writes on the partition object whenever 
there’s contention, what if we queued up contended writes on the partition? If 
a write comes in and there’s no longer contention, or the size of queued writes 
is too high, it could merge the updates and synchronize on applying them. By 
merging the updates, I think we’d end up allocating less memory in the 
contended case than the uncontended case.
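
A rough, hypothetical sketch of that shape (generic {{Merger}}/{{Applier}} 
callbacks standing in for merging and applying partition updates; this is not a 
patch against the actual memtable code):

{code:java}
// Hypothetical sketch: contended writes are queued on the partition and whoever
// wins the apply lock drains the queue, merges the updates, and applies them once.
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.locks.ReentrantLock;

class ContendedPartitionWrites<U>
{
    interface Merger<U>  { U merge(U left, U right); }
    interface Applier<U> { void apply(U merged); }

    private final ConcurrentLinkedQueue<U> pending = new ConcurrentLinkedQueue<>();
    private final ReentrantLock applyLock = new ReentrantLock();
    private final Merger<U> merger;
    private final Applier<U> applier;

    ContendedPartitionWrites(Merger<U> merger, Applier<U> applier)
    {
        this.merger = merger;
        this.applier = applier;
    }

    void write(U update)
    {
        pending.add(update);   // enqueue unconditionally; the writer never blocks
        while (!pending.isEmpty() && applyLock.tryLock())
        {
            try
            {
                U merged = null, next;
                while ((next = pending.poll()) != null)          // drain the queue
                    merged = (merged == null) ? next : merger.merge(merged, next);
                if (merged != null)
                    applier.apply(merged);   // one apply for the whole contended batch
            }
            finally
            {
                applyLock.unlock();
            }
            // Loop again so an update enqueued while we held the lock is not stranded.
        }
    }
}
{code}

The writer that loses the race neither blocks nor retries: it leaves its update in 
the queue, and whichever thread holds the lock folds everything queued into a 
single merged apply, which is where the smaller allocation footprint in the 
contended case would come from.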
 

> Memtable memory allocations may deadlock
> 
>
> Key: CASSANDRA-15367
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15367
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Commit Log, Local/Memtable
>Reporter: Benedict Elliott Smith
>Assignee: Benedict Elliott Smith
>Priority: Normal
> Fix For: 4.0, 2.2.x, 3.0.x, 3.11.x
>
>
> * Under heavy contention, we guard modifications to a partition with a mutex, 
> for the lifetime of the memtable.
> * Memtables block for the completion of all {{OpOrder.Group}} started before 
> their flush began
> * Memtables permit operations from this cohort to fall-through to the 
> following Memtable, in order to guarantee a precise commitLogUpperBound
> * Memtable memory limits may be lifted for operations in the first cohort, 
> since they block flush (and hence block future memory allocation)
> With very unfortunate scheduling
> * A contended partition may rapidly escalate to a mutex
> * The system may reach memory limits that prevent allocations for the new 
> Memtable’s cohort (C2) 
> * An operation from C2 may hold the mutex when this occurs
> * Operations from a prior Memtable’s cohort (C1), for a contended partition, 
> may fall-through to the next Memtable
> * The operations from C1 may execute after the above is encountered by those 
> from C2



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



svn commit: r37804 - in /dev/cassandra/4.0-alpha3: cassandra-4.0~alpha3-1.noarch.rpm cassandra-4.0~alpha3-1.src.rpm cassandra-tools-4.0~alpha3-1.noarch.rpm

2020-01-30 Thread mck
Author: mck
Date: Thu Jan 30 18:35:38 2020
New Revision: 37804

Log:
staging cassandra rpm packages for 4.0-alpha3

Added:
dev/cassandra/4.0-alpha3/cassandra-4.0~alpha3-1.noarch.rpm   (with props)
dev/cassandra/4.0-alpha3/cassandra-4.0~alpha3-1.src.rpm   (with props)
dev/cassandra/4.0-alpha3/cassandra-tools-4.0~alpha3-1.noarch.rpm   (with 
props)

Added: dev/cassandra/4.0-alpha3/cassandra-4.0~alpha3-1.noarch.rpm
==
Binary file - no diff available.

Propchange: dev/cassandra/4.0-alpha3/cassandra-4.0~alpha3-1.noarch.rpm
--
svn:mime-type = application/octet-stream

Added: dev/cassandra/4.0-alpha3/cassandra-4.0~alpha3-1.src.rpm
==
Binary file - no diff available.

Propchange: dev/cassandra/4.0-alpha3/cassandra-4.0~alpha3-1.src.rpm
--
svn:mime-type = application/octet-stream

Added: dev/cassandra/4.0-alpha3/cassandra-tools-4.0~alpha3-1.noarch.rpm
==
Binary file - no diff available.

Propchange: dev/cassandra/4.0-alpha3/cassandra-tools-4.0~alpha3-1.noarch.rpm
--
svn:mime-type = application/octet-stream



-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



svn commit: r37803 - in /dev/cassandra/4.0-alpha3: cassandra-tools_4.0~alpha3_all.deb cassandra_4.0~alpha3.dsc cassandra_4.0~alpha3.tar.gz cassandra_4.0~alpha3_all.deb cassandra_4.0~alpha3_amd64.build

2020-01-30 Thread mck
Author: mck
Date: Thu Jan 30 18:28:27 2020
New Revision: 37803

Log:
staging cassandra debian packages for 4.0-alpha3

Added:
dev/cassandra/4.0-alpha3/cassandra-tools_4.0~alpha3_all.deb   (with props)
dev/cassandra/4.0-alpha3/cassandra_4.0~alpha3.dsc
dev/cassandra/4.0-alpha3/cassandra_4.0~alpha3.tar.gz   (with props)
dev/cassandra/4.0-alpha3/cassandra_4.0~alpha3_all.deb   (with props)
dev/cassandra/4.0-alpha3/cassandra_4.0~alpha3_amd64.buildinfo
dev/cassandra/4.0-alpha3/cassandra_4.0~alpha3_amd64.changes

Added: dev/cassandra/4.0-alpha3/cassandra-tools_4.0~alpha3_all.deb
==
Binary file - no diff available.

Propchange: dev/cassandra/4.0-alpha3/cassandra-tools_4.0~alpha3_all.deb
--
svn:mime-type = application/octet-stream

Added: dev/cassandra/4.0-alpha3/cassandra_4.0~alpha3.dsc
==
--- dev/cassandra/4.0-alpha3/cassandra_4.0~alpha3.dsc (added)
+++ dev/cassandra/4.0-alpha3/cassandra_4.0~alpha3.dsc Thu Jan 30 18:28:27 2020
@@ -0,0 +1,41 @@
+-BEGIN PGP SIGNED MESSAGE-
+Hash: SHA512
+
+Format: 1.0
+Source: cassandra
+Binary: cassandra, cassandra-tools
+Architecture: all
+Version: 4.0~alpha3
+Maintainer: Eric Evans 
+Uploaders: Sylvain Lebresne 
+Homepage: http://cassandra.apache.org
+Standards-Version: 3.8.3
+Vcs-Browser: https://gitbox.apache.org/repos/asf?p=cassandra.git
+Vcs-Git: https://gitbox.apache.org/repos/asf/cassandra.git
+Build-Depends: debhelper (>= 5), openjdk-8-jdk | java8-jdk, ant (>= 1.9), 
ant-optional (>= 1.9), dh-python, python-dev (>= 2.7), quilt, bash-completion
+Package-List:
+ cassandra deb misc extra arch=all
+ cassandra-tools deb misc extra arch=all
+Checksums-Sha1:
+ 08da67b02408658d4926924d8324cc239d51119f 44347622 cassandra_4.0~alpha3.tar.gz
+Checksums-Sha256:
+ 9bf6324864c40af32f4553f206c11d9b78f4a2e0fb9f39b73607e722270ee251 44347622 
cassandra_4.0~alpha3.tar.gz
+Files:
+ 37d60808c2eba4061f0488f8dca1489b 44347622 cassandra_4.0~alpha3.tar.gz
+
+-BEGIN PGP SIGNATURE-
+
+iQIzBAEBCgAdFiEEpMRl/qDFUlYaOSph6RM1134+h8sFAl4zIAcACgkQ6RM1134+
+h8vd9xAArZJx1V9ItAFdwGExzU7pvG2ymxFm21Qe+gCr8X8s8jkwsKxtPu1QhDI2
+PQsaIr0VAVoibtA3h7pm+vC0oth0etsgHWbgYmJ6nUhgo32+KhP4pTsW2D434kri
+Itwn+olcYc8zt38XEKg5drg8nVzcfhm/ghfhYPvVJtQjMn0lOojXvTk/aJJ1zReJ
+Bw+SC95+PviBlEwK76hnNdAZaYsR7byk4FJPFtP7lAd6diGCiiqbFSpydqXNc5Yq
+HWRxT/n+xpinV11IqhrRCA6BPoCUTjQub09C6NV5I7Uppcvh1+FMYY9CwJkB6aLK
+jqmVhX9jDzQ4x67v1GJJWsJTObarFx5SB3nwy2Gr2AtuiYXKh2EJ+2q0awxA7IoZ
+AUm9k87HcT1H4D/XMXtxsQTvJc7XfCTA4Fqty+vpk+sXNMJiue8uxD/JcHEQPsjq
+vMTffkwCUlY0yAWD6oCdPGPZwCUbwcpfvHfPN8wr/0XWJdYpMImsOuK5baHFU55z
+ODOTJM4y/KwlLie7GxgrdPRyS5DQ3bIkb+9xXuMhHClqnbDqdil641jKwCMXHVKe
+Nkew/hQK9KMAidyZg6wTkjOldAsbgaTpjQLsYaapuv69WklxGFzuHohxjt6RJXzZ
+8gDM/7ZYVmxSSrBK2TaW2NrVM4qgwyK5cmWKQmj4L3QXBruRSRY=
+=dEaz
+-END PGP SIGNATURE-

Added: dev/cassandra/4.0-alpha3/cassandra_4.0~alpha3.tar.gz
==
Binary file - no diff available.

Propchange: dev/cassandra/4.0-alpha3/cassandra_4.0~alpha3.tar.gz
--
svn:mime-type = application/octet-stream

Added: dev/cassandra/4.0-alpha3/cassandra_4.0~alpha3_all.deb
==
Binary file - no diff available.

Propchange: dev/cassandra/4.0-alpha3/cassandra_4.0~alpha3_all.deb
--
svn:mime-type = application/octet-stream

Added: dev/cassandra/4.0-alpha3/cassandra_4.0~alpha3_amd64.buildinfo
==
--- dev/cassandra/4.0-alpha3/cassandra_4.0~alpha3_amd64.buildinfo (added)
+++ dev/cassandra/4.0-alpha3/cassandra_4.0~alpha3_amd64.buildinfo Thu Jan 30 
18:28:27 2020
@@ -0,0 +1,361 @@
+-BEGIN PGP SIGNED MESSAGE-
+Hash: SHA512
+
+Format: 1.0
+Source: cassandra
+Binary: cassandra cassandra-tools
+Architecture: all source
+Version: 4.0~alpha3
+Checksums-Md5:
+ af53e256599da0eb4ae51a2f41aeeedd 1803 cassandra_4.0~alpha3.dsc
+ 7f52687759102c229a427a7738ba61e9 4504 cassandra-tools_4.0~alpha3_all.deb
+ ad5890ad4d436c5a9edc24ed3b0c3716 40612992 cassandra_4.0~alpha3_all.deb
+Checksums-Sha1:
+ 52d564215bb850871866224b8e25b1bf128836d0 1803 cassandra_4.0~alpha3.dsc
+ 1172036e29fc467b8d4666b8e603fb513930cdb2 4504 
cassandra-tools_4.0~alpha3_all.deb
+ d3c01db611d16f1c1e8a67a2878af8832feecdf3 40612992 cassandra_4.0~alpha3_all.deb
+Checksums-Sha256:
+ 61fdbe578f06bafc65756168b7eb6c109f9821fad9bf2463729cdaff26bbabd1 1803 
cassandra_4.0~alpha3.dsc
+ 9cc4018b373ae467dfe8f1c1ed25d17d745869e69cfacf127f831e0c353ed363 4504 
cassandra-tools_4.0~alpha3_all.deb
+ 

svn commit: r37802 - /dev/cassandra/4.0-alpha3/

2020-01-30 Thread mck
Author: mck
Date: Thu Jan 30 18:25:38 2020
New Revision: 37802

Log:
staging cassandra 4.0-alpha3

Added:
dev/cassandra/4.0-alpha3/
dev/cassandra/4.0-alpha3/apache-cassandra-4.0-alpha3-bin.tar.gz   (with 
props)
dev/cassandra/4.0-alpha3/apache-cassandra-4.0-alpha3-bin.tar.gz.asc
dev/cassandra/4.0-alpha3/apache-cassandra-4.0-alpha3-bin.tar.gz.sha256
dev/cassandra/4.0-alpha3/apache-cassandra-4.0-alpha3-bin.tar.gz.sha512
dev/cassandra/4.0-alpha3/apache-cassandra-4.0-alpha3-src.tar.gz   (with 
props)
dev/cassandra/4.0-alpha3/apache-cassandra-4.0-alpha3-src.tar.gz.asc
dev/cassandra/4.0-alpha3/apache-cassandra-4.0-alpha3-src.tar.gz.sha256
dev/cassandra/4.0-alpha3/apache-cassandra-4.0-alpha3-src.tar.gz.sha512

Added: dev/cassandra/4.0-alpha3/apache-cassandra-4.0-alpha3-bin.tar.gz
==
Binary file - no diff available.

Propchange: dev/cassandra/4.0-alpha3/apache-cassandra-4.0-alpha3-bin.tar.gz
--
svn:mime-type = application/octet-stream

Added: dev/cassandra/4.0-alpha3/apache-cassandra-4.0-alpha3-bin.tar.gz.asc
==
--- dev/cassandra/4.0-alpha3/apache-cassandra-4.0-alpha3-bin.tar.gz.asc (added)
+++ dev/cassandra/4.0-alpha3/apache-cassandra-4.0-alpha3-bin.tar.gz.asc Thu Jan 
30 18:25:38 2020
@@ -0,0 +1,16 @@
+-BEGIN PGP SIGNATURE-
+
+iQIzBAABCgAdFiEEpMRl/qDFUlYaOSph6RM1134+h8sFAl4zFKUACgkQ6RM1134+
+h8tTZg/7BPVYuxMkYFV0QY7SjQpAoh5U3g/8LMAQWlZS1Sqp+iJNBT2Xsjuh0Ki8
+xqnt2ZWQHpM7AWFk9/CrLRFAkfrFYu+VKyeqXfVTMaoONYZQYuwuvPgMIp4cNQQS
+uR3q79wAW/XQvP8EeNVY3ucl14+fecMEoWSbrQXNThEDzAWvkeTDj+9TIiUJbk21
+bUwNQ1txghUMaAR51cEfr5bIBCf+LGpvVx1+Ics+XmME88SXqpbl2Edo001KDBo4
+q/QJuYAoE3Urk1K5vTbNaFHbOIpewM+KsBh4q/kG2cwAk1SQDo1AT72lvYlAzyfU
+uLuNZRdQatEBnkeNz/LBmcJ35k76TM977Yevv2aRorqRFuHiZKhZHTYKVql1HHnu
+gxVZfCBu/hCVsPC3vevStniMqPX+c8orNzsedEcQ+aI9W5/wGSoRyvqbHtT6v5XW
+9R+mWriY6JyHR7z+4vERm3iGwcCzhsS07dSAzctcXHsfVABqRd9fpqrqQONhTGH8
+U8crRueo7sxyW57MsOForFMV+KpZp9KQ02ppopZdbkDZGUmY0oXGJe41iOWO746w
+xKdu70UBI761Xh2fWLJHxFnirq25BYTv4jyAje3vubafoGiVq+ssT93jJKhYLuos
+A9xy04JewAJyUVDt249vWEGcVkYi9Gp7z3REYpoYnPWPhwPBtRE=
+=0Nur
+-END PGP SIGNATURE-

Added: dev/cassandra/4.0-alpha3/apache-cassandra-4.0-alpha3-bin.tar.gz.sha256
==
--- dev/cassandra/4.0-alpha3/apache-cassandra-4.0-alpha3-bin.tar.gz.sha256 
(added)
+++ dev/cassandra/4.0-alpha3/apache-cassandra-4.0-alpha3-bin.tar.gz.sha256 Thu 
Jan 30 18:25:38 2020
@@ -0,0 +1 @@
+28757dde589f70410f9a6a95c39ee7e6cde63440e2b06b91ae6b200614fa364d

Added: dev/cassandra/4.0-alpha3/apache-cassandra-4.0-alpha3-bin.tar.gz.sha512
==
--- dev/cassandra/4.0-alpha3/apache-cassandra-4.0-alpha3-bin.tar.gz.sha512 
(added)
+++ dev/cassandra/4.0-alpha3/apache-cassandra-4.0-alpha3-bin.tar.gz.sha512 Thu 
Jan 30 18:25:38 2020
@@ -0,0 +1 @@
+ca179e30797f256b69e5bd3d1c015e09988eb3f5caeec36fb16576dc457319a0820af5b21d13a96ecdfb41fd08c5a1e7ca701523b61e60682bb5566e05f00574

Added: dev/cassandra/4.0-alpha3/apache-cassandra-4.0-alpha3-src.tar.gz
==
Binary file - no diff available.

Propchange: dev/cassandra/4.0-alpha3/apache-cassandra-4.0-alpha3-src.tar.gz
--
svn:mime-type = application/octet-stream

Added: dev/cassandra/4.0-alpha3/apache-cassandra-4.0-alpha3-src.tar.gz.asc
==
--- dev/cassandra/4.0-alpha3/apache-cassandra-4.0-alpha3-src.tar.gz.asc (added)
+++ dev/cassandra/4.0-alpha3/apache-cassandra-4.0-alpha3-src.tar.gz.asc Thu Jan 
30 18:25:38 2020
@@ -0,0 +1,16 @@
+-BEGIN PGP SIGNATURE-
+
+iQIzBAABCgAdFiEEpMRl/qDFUlYaOSph6RM1134+h8sFAl4zFKcACgkQ6RM1134+
+h8uD2RAAxLv1gmDsDaKCrKK4mfstg6g8jth+X419epylb21pLUrRhMut5N7pgGh2
+XVbX/t9hfVEtSCc9OqJAv2IofrynXmDgGJY7w8Qyh97kiHLP11M2dyes9A7fhIZP
+A21dqX7nTK7hkvbXmeUZB1n9m2jj/nqZ1o4ifOnz7fWyqedRXlO55V/rTR0QmHz0
+geZc2Mq8aPj8qVBIg0yZsP1VhQ3n1QIL6Fumnmqz9tep8PjmtBtg9MKCqzT/yr3X
+uNIcV7vLUjD6DeFLbGbonsCZH2nz2cSPStF05MDTS96u+8IYHICEHjqE7yy/rYlY
+lnMagSzdi6ztaFem7sodm5bCdnJLbclGmDH6f5snVKQ5dq8FQ3FwwgNkxOfdXE0L
+4b5wvN3Xcs7FjbHdhdcPcSfoqCdisJmfo39cDJHeU40DR7GG/4rWIrMe53nGjJ8Q
+2FWBgjC2xBzJqHJMAo6uFkMXFjQ33ng+iO6N89yE1uRMX/zDSKYBw7a+nSApf1iK
+PYnyx8swa/qZ3WfapzmMSyZxZjkOx0NNLXl4CSWNS7SRU0b6s6L2Fe/CAYFqF51L
+RG3BfxjocACvDRyvp6FopLWsVhbiq2Phh6IfOgnaq1js5pJOrgHv4SZQ/SnKJBiQ
+PY5QbfUg1iqo4WjrZ92cT+l83dsMAY5LZgvtFYMBDvegDXkk6QY=
+=m6Fw
+-END PGP SIGNATURE-

Added: dev/cassandra/4.0-alpha3/apache-cassandra-4.0-alpha3-src.tar.gz.sha256
==
--- 

[jira] [Commented] (CASSANDRA-15520) split circle ci commands into reusable scripts which can be used outside of circle ci

2020-01-30 Thread David Capwell (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026888#comment-17026888
 ] 

David Capwell commented on CASSANDRA-15520:
---

Removed myself as the assignee to reflect that I am not actively working on this.

I can start refactoring circle ci to match this model, but this makes more 
sense if Jenkins can benefit, and I don't have the time to confirm that right 
now.

> split circle ci commands into reusable scripts which can be used outside of 
> circle ci
> -
>
> Key: CASSANDRA-15520
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15520
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Build
>Reporter: David Capwell
>Priority: Normal
>
> CircleCI is one of the main tools we use for build and test, but there is 
> also ASF Jenkins and many people run builds in their own companies as well. 
> It would be nice to refactor the existing CircleCI yaml to delegate to a set 
> of scripts which could be reused by other build systems.
> I feel that we could do the following directory layout
> {code}
> ci - top level directory containing all scripts
>  -  - directory containing the different build 
> steps
>  -  - a single build with 
> the required steps to run it
>  - split.sh - script which takes in a output file to write to and 
> dumps out all test cases (not partitioned)
>  - run_partition.sh - script which takes a partitioned list of tests 
> and executes the build (does not move around artifacts)
> {code}
> This would allow CircleCI and Jenkins to run the same way, but also acts as 
> documentation for how to run some of the tests (jvm-dtest and python dtest 
> upgrade tests take more time to figure out how to run).
> CircleCI would also be simpler as it would mostly be the circle ci specific 
> logic (partition tests, move code/test results around, etc.) calling these 
> scripts.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-15520) split circle ci commands into reusable scripts which can be used outside of circle ci

2020-01-30 Thread David Capwell (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Capwell reassigned CASSANDRA-15520:
-

Assignee: (was: David Capwell)

> split circle ci commands into reusable scripts which can be used outside of 
> circle ci
> -
>
> Key: CASSANDRA-15520
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15520
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Build
>Reporter: David Capwell
>Priority: Normal
>
> CircleCI is one of the main tools we use for build and test, but there is 
> also ASF Jenkins and many people run builds in their own companies as well. 
> It would be nice to refactor the existing CircleCI yaml to delegate to a set 
> of scripts which could be reused by other build systems.
> I feel that we could do the following directory layout
> {code}
> ci - top level directory containing all scripts
>  -  - directory containing the different build 
> steps
>  -  - a single build with 
> the required steps to run it
>  - split.sh - script which takes in a output file to write to and 
> dumps out all test cases (not partitioned)
>  - run_partition.sh - script which takes a partitioned list of tests 
> and executes the build (does not move around artifacts)
> {code}
> This would allow CircleCI and Jenkins to run the same way, but also acts as 
> documentation for how to run some of the tests (jvm-dtest and python dtest 
> upgrade tests take more time to figure out how to run).
> CircleCI would also be simpler as it would mostly be the circle ci specific 
> logic (partition tests, move code/test results around, etc.) calling these 
> scripts.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-12995) update hppc dependency to 0.7

2020-01-30 Thread Ekaterina Dimitrova (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-12995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026866#comment-17026866
 ] 

Ekaterina Dimitrova commented on CASSANDRA-12995:
-

One of the unit tests fails, as it looks like the equals method of LongHashSet 
is not overridden properly in the 0.8 version. I will look into how to file an 
issue with the hppc project and provide an update here later. 

> update hppc dependency to 0.7
> -
>
> Key: CASSANDRA-12995
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12995
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Dependencies, Packaging
>Reporter: Tomas Repik
>Assignee: Ekaterina Dimitrova
>Priority: Normal
>  Labels: easyfix
> Fix For: 4.0
>
> Attachments: Screen Shot 2020-01-29 at 11.14.30 AM.png, Screen Shot 
> 2020-01-29 at 11.20.39 AM.png, Screen Shot 2020-01-29 at 11.20.47 AM.png, 
> cassandra-3.11.0-hppc.patch
>
>
> Cassandra 3.11.0 is about to be included in Fedora. There are some tweaks to 
> the sources we need to do in order to successfully build it. Cassandra 
> depends on hppc 0.5.4, but In Fedora we have the newer version 0.7.1 Upstream 
> released even newer version 0.7.2. I attached a patch updating cassandra 
> sources to depend on the 0.7.1 hppc sources. It should be also compatible 
> with the newest upstream version. The only actual changes are the removal of 
> Open infix in class names. The issue was discussed in here: 
> https://bugzilla.redhat.com/show_bug.cgi?id=1340876 Please consider updating.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14788) Add test coverage workflows to CircleCI config

2020-01-30 Thread David Capwell (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Capwell updated CASSANDRA-14788:
--
Reviewers:   (was: David Capwell)

> Add test coverage workflows to CircleCI config
> --
>
> Key: CASSANDRA-14788
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14788
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Build
>Reporter: Jon Meredith
>Assignee: Jon Meredith
>Priority: Low
>  Labels: pull-request-available
> Fix For: 4.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> To support 4.0 testing efforts it's helpful to know how much of the code is 
> being exercised by unit tests and dtests.
> Add support for running the unit tests and dtests instrumented for test 
> coverage on CircleCI and then combine the results of all tests (unit, dtest 
> with vnodes, dtest without vnodes) into a single coverage report.
> All of the hard work of getting JaCoCo to work with unit tests and dtests has 
> already been done, it just needs wiring up.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14788) Add test coverage workflows to CircleCI config

2020-01-30 Thread David Capwell (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Capwell updated CASSANDRA-14788:
--
Status: Patch Available  (was: Review In Progress)

Spoke to Jon and it turns out this is an old patch. Too much has changed, so the 
patch can't be applied anymore and has to be redone.

If this happens, feel free to poke me and I'll review.

> Add test coverage workflows to CircleCI config
> --
>
> Key: CASSANDRA-14788
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14788
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Build
>Reporter: Jon Meredith
>Assignee: Jon Meredith
>Priority: Low
>  Labels: pull-request-available
> Fix For: 4.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> To support 4.0 testing efforts it's helpful to know how much of the code is 
> being exercised by unit tests and dtests.
> Add support for running the unit tests and dtests instrumented for test 
> coverage on CircleCI and then combine the results of all tests (unit, dtest 
> with vnodes, dtest without vnodes) into a single coverage report.
> All of the hard work of getting JaCoCo to work with unit tests and dtests has 
> already been done, it just needs wiring up.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra] tag 4.0-alpha3-tentative created (now 5f7c886)

2020-01-30 Thread mck
This is an automated email from the ASF dual-hosted git repository.

mck pushed a change to tag 4.0-alpha3-tentative
in repository https://gitbox.apache.org/repos/asf/cassandra.git.


  at 5f7c886  (commit)
No new revisions were added by this update.


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra] tag 4.0-alpha3-tentative deleted (was 5f7c886)

2020-01-30 Thread mck
This is an automated email from the ASF dual-hosted git repository.

mck pushed a change to tag 4.0-alpha3-tentative
in repository https://gitbox.apache.org/repos/asf/cassandra.git.


*** WARNING: tag 4.0-alpha3-tentative was deleted! ***

 was 5f7c886  Merge branch 'cassandra-3.11' into trunk

The revisions that were on this tag are still contained in
other references; therefore, this change does not discard any commits
from the repository.


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14788) Add test coverage workflows to CircleCI config

2020-01-30 Thread David Capwell (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Capwell updated CASSANDRA-14788:
--
Reviewers: David Capwell, David Capwell  (was: David Capwell)
   David Capwell, David Capwell
   Status: Review In Progress  (was: Patch Available)

Started looking now.

Please modify config-2.2.yml and not config.yml* directly; I've linked the readme 
and my notes in GitHub.

> Add test coverage workflows to CircleCI config
> --
>
> Key: CASSANDRA-14788
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14788
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Build
>Reporter: Jon Meredith
>Assignee: Jon Meredith
>Priority: Low
>  Labels: pull-request-available
> Fix For: 4.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> To support 4.0 testing efforts it's helpful to know how much of the code is 
> being exercised by unit tests and dtests.
> Add support for running the unit tests and dtests instrumented for test 
> coverage on CircleCI and then combine the results of all tests (unit, dtest 
> with vnodes, dtest without vnodes) into a single coverage report.
> All of the hard work of getting JaCoCo to work with unit tests and dtests has 
> already been done, it just needs wiring up.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra] tag 4.0-alpha3-tentative created (now 5f7c886)

2020-01-30 Thread mck
This is an automated email from the ASF dual-hosted git repository.

mck pushed a change to tag 4.0-alpha3-tentative
in repository https://gitbox.apache.org/repos/asf/cassandra.git.


  at 5f7c886  (commit)
No new revisions were added by this update.


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-11105) cassandra-stress tool - InvalidQueryException: Batch too large

2020-01-30 Thread Josh McKenzie (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh McKenzie updated CASSANDRA-11105:
--
Resolution: Won't Do
Status: Resolved  (was: Open)

Closing this one out as Won't Do - see [~mck]'s comment for the reasoning. Feel 
free to re-open if there's disagreement there.

> cassandra-stress tool - InvalidQueryException: Batch too large
> --
>
> Key: CASSANDRA-11105
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11105
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Tools
> Environment: Cassandra 2.2.4, Java 8, CentOS 6.5
>Reporter: Ralf Steppacher
>Priority: Normal
> Fix For: 4.0
>
> Attachments: 11105-trunk.txt, batch_too_large.yaml
>
>
> I am using Cassandra 2.2.4 and I am struggling to get the cassandra-stress 
> tool to work for my test scenario. I have followed the example on 
> http://www.datastax.com/dev/blog/improved-cassandra-2-1-stress-tool-benchmark-any-schema
>  to create a yaml file describing my test (attached).
> I am collecting events per user id (text, partition key). Events have a 
> session type (text), event type (text), and creation time (timestamp) 
> (clustering keys, in that order). Plus some more attributes required for 
> rendering the events in a UI. For testing purposes I ended up with the 
> following column spec and insert distribution:
> {noformat}
> columnspec:
>   - name: created_at
> cluster: uniform(10..1)
>   - name: event_type
> size: uniform(5..10)
> population: uniform(1..30)
> cluster: uniform(1..30)
>   - name: session_type
> size: fixed(5)
> population: uniform(1..4)
> cluster: uniform(1..4)
>   - name: user_id
> size: fixed(15)
> population: uniform(1..100)
>   - name: message
> size: uniform(10..100)
> population: uniform(1..100B)
> insert:
>   partitions: fixed(1)
>   batchtype: UNLOGGED
>   select: fixed(1)/120
> {noformat}
> Running stress tool for just the insert prints 
> {noformat}
> Generating batches with [1..1] partitions and [0..1] rows (of [10..120] 
> total rows in the partitions)
> {noformat}
> and then immediately starts flooding me with 
> {{com.datastax.driver.core.exceptions.InvalidQueryException: Batch too 
> large}}. 
> Why I should be exceeding the {{batch_size_fail_threshold_in_kb: 50}} in the 
> {{cassandra.yaml}} I do not understand. My understanding is that the stress 
> tool should generate one row per batch. The size of a single row should not 
> exceed {{8+10*3+5*3+15*3+100*3 = 398 bytes}}. Assuming a worst case of all 
> text characters being 3 byte unicode characters. 
> This is how I start the attached user scenario:
> {noformat}
> [rsteppac@centos bin]$ ./cassandra-stress user 
> profile=../batch_too_large.yaml ops\(insert=1\) -log level=verbose 
> file=~/centos_event_by_patient_session_event_timestamp_insert_only.log -node 
> 10.211.55.8
> INFO  08:00:07 Did not find Netty's native epoll transport in the classpath, 
> defaulting to NIO.
> INFO  08:00:08 Using data-center name 'datacenter1' for 
> DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct 
> datacenter name with DCAwareRoundRobinPolicy constructor)
> INFO  08:00:08 New Cassandra host /10.211.55.8:9042 added
> Connected to cluster: Titan_DEV
> Datatacenter: datacenter1; Host: /10.211.55.8; Rack: rack1
> Created schema. Sleeping 1s for propagation.
> Generating batches with [1..1] partitions and [0..1] rows (of [10..120] 
> total rows in the partitions)
> com.datastax.driver.core.exceptions.InvalidQueryException: Batch too large
>   at 
> com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:35)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:271)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:185)
>   at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:55)
>   at 
> org.apache.cassandra.stress.operations.userdefined.SchemaInsert$JavaDriverRun.run(SchemaInsert.java:87)
>   at 
> org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:159)
>   at 
> org.apache.cassandra.stress.operations.userdefined.SchemaInsert.run(SchemaInsert.java:119)
>   at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:309)
> Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: Batch 
> too large
>   at 
> com.datastax.driver.core.Responses$Error.asException(Responses.java:125)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:120)
>   at 
> 

[jira] [Commented] (CASSANDRA-11105) cassandra-stress tool - InvalidQueryException: Batch too large

2020-01-30 Thread Alexander Dejanovski (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-11105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026822#comment-17026822
 ] 

Alexander Dejanovski commented on CASSANDRA-11105:
--

I agree with [~mck].

The code has evolved too much anyway since my patch was written, and internally 
we've moved our efforts to a cassandra-stress replacement tool.

Happy to have the ticket closed as "won't do".

> cassandra-stress tool - InvalidQueryException: Batch too large
> --
>
> Key: CASSANDRA-11105
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11105
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Tools
> Environment: Cassandra 2.2.4, Java 8, CentOS 6.5
>Reporter: Ralf Steppacher
>Priority: Normal
> Fix For: 4.0
>
> Attachments: 11105-trunk.txt, batch_too_large.yaml
>
>
> I am using Cassandra 2.2.4 and I am struggling to get the cassandra-stress 
> tool to work for my test scenario. I have followed the example on 
> http://www.datastax.com/dev/blog/improved-cassandra-2-1-stress-tool-benchmark-any-schema
>  to create a yaml file describing my test (attached).
> I am collecting events per user id (text, partition key). Events have a 
> session type (text), event type (text), and creation time (timestamp) 
> (clustering keys, in that order). Plus some more attributes required for 
> rendering the events in a UI. For testing purposes I ended up with the 
> following column spec and insert distribution:
> {noformat}
> columnspec:
>   - name: created_at
> cluster: uniform(10..1)
>   - name: event_type
> size: uniform(5..10)
> population: uniform(1..30)
> cluster: uniform(1..30)
>   - name: session_type
> size: fixed(5)
> population: uniform(1..4)
> cluster: uniform(1..4)
>   - name: user_id
> size: fixed(15)
> population: uniform(1..100)
>   - name: message
> size: uniform(10..100)
> population: uniform(1..100B)
> insert:
>   partitions: fixed(1)
>   batchtype: UNLOGGED
>   select: fixed(1)/120
> {noformat}
> Running stress tool for just the insert prints 
> {noformat}
> Generating batches with [1..1] partitions and [0..1] rows (of [10..120] 
> total rows in the partitions)
> {noformat}
> and then immediately starts flooding me with 
> {{com.datastax.driver.core.exceptions.InvalidQueryException: Batch too 
> large}}. 
> Why I should be exceeding the {{batch_size_fail_threshold_in_kb: 50}} in the 
> {{cassandra.yaml}} I do not understand. My understanding is that the stress 
> tool should generate one row per batch. The size of a single row should not 
> exceed {{8+10*3+5*3+15*3+100*3 = 398 bytes}}. Assuming a worst case of all 
> text characters being 3 byte unicode characters. 
> This is how I start the attached user scenario:
> {noformat}
> [rsteppac@centos bin]$ ./cassandra-stress user 
> profile=../batch_too_large.yaml ops\(insert=1\) -log level=verbose 
> file=~/centos_event_by_patient_session_event_timestamp_insert_only.log -node 
> 10.211.55.8
> INFO  08:00:07 Did not find Netty's native epoll transport in the classpath, 
> defaulting to NIO.
> INFO  08:00:08 Using data-center name 'datacenter1' for 
> DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct 
> datacenter name with DCAwareRoundRobinPolicy constructor)
> INFO  08:00:08 New Cassandra host /10.211.55.8:9042 added
> Connected to cluster: Titan_DEV
> Datatacenter: datacenter1; Host: /10.211.55.8; Rack: rack1
> Created schema. Sleeping 1s for propagation.
> Generating batches with [1..1] partitions and [0..1] rows (of [10..120] 
> total rows in the partitions)
> com.datastax.driver.core.exceptions.InvalidQueryException: Batch too large
>   at 
> com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:35)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:271)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:185)
>   at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:55)
>   at 
> org.apache.cassandra.stress.operations.userdefined.SchemaInsert$JavaDriverRun.run(SchemaInsert.java:87)
>   at 
> org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:159)
>   at 
> org.apache.cassandra.stress.operations.userdefined.SchemaInsert.run(SchemaInsert.java:119)
>   at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:309)
> Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: Batch 
> too large
>   at 
> com.datastax.driver.core.Responses$Error.asException(Responses.java:125)
>   at 
> 
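
As an aside for readers of the report above: the reporter's worst-case row estimate 
can be reproduced with a few lines of arithmetic. The sketch below is not part of 
the ticket; it simply takes the maximum field sizes from the columnspec quoted above, 
assumes 3 bytes per character as the worst case, and relates the resulting row size 
to the 50 KB batch_size_fail_threshold_in_kb to show roughly how many such rows fit 
in one batch before the threshold trips.

{noformat}
// Back-of-the-envelope sketch only (not from the ticket): reproduces the
// reporter's worst-case row estimate and relates it to the 50 KB
// batch_size_fail_threshold_in_kb. Field sizes are the maxima from the
// columnspec above; 3 bytes per character is the assumed worst case.
public class BatchSizeEstimate
{
    public static void main(String[] args)
    {
        int bytesPerChar = 3;
        int createdAt    = 8;                   // timestamp
        int eventType    = 10 * bytesPerChar;   // size: uniform(5..10)
        int sessionType  = 5  * bytesPerChar;   // size: fixed(5)
        int userId       = 15 * bytesPerChar;   // size: fixed(15)
        int message      = 100 * bytesPerChar;  // size: uniform(10..100)

        int worstCaseRow = createdAt + eventType + sessionType + userId + message;
        System.out.println("worst-case row ~ " + worstCaseRow + " bytes");          // 398

        int thresholdBytes = 50 * 1024;         // batch_size_fail_threshold_in_kb: 50
        System.out.println("rows per 50 KB batch ~ " + thresholdBytes / worstCaseRow); // ~128
    }
}
{noformat}

So a single row is nowhere near the threshold; the limit is only reached once well 
over a hundred rows of this shape (plus per-mutation overhead, which the estimate 
ignores) land in the same batch, which is consistent with the later comments about 
how many rows cassandra-stress packs into each generated batch.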

[jira] [Updated] (CASSANDRA-11105) cassandra-stress tool - InvalidQueryException: Batch too large

2020-01-30 Thread Josh McKenzie (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh McKenzie updated CASSANDRA-11105:
--
Status: Patch Available  (was: Ready to Commit)

> cassandra-stress tool - InvalidQueryException: Batch too large
> --
>
> Key: CASSANDRA-11105
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11105
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Tools
> Environment: Cassandra 2.2.4, Java 8, CentOS 6.5
>Reporter: Ralf Steppacher
>Priority: Normal
> Fix For: 4.0
>
> Attachments: 11105-trunk.txt, batch_too_large.yaml
>
>
> I am using Cassandra 2.2.4 and I am struggling to get the cassandra-stress 
> tool to work for my test scenario. I have followed the example on 
> http://www.datastax.com/dev/blog/improved-cassandra-2-1-stress-tool-benchmark-any-schema
>  to create a yaml file describing my test (attached).
> I am collecting events per user id (text, partition key). Events have a 
> session type (text), event type (text), and creation time (timestamp) 
> (clustering keys, in that order). Plus some more attributes required for 
> rendering the events in a UI. For testing purposes I ended up with the 
> following column spec and insert distribution:
> {noformat}
> columnspec:
>   - name: created_at
> cluster: uniform(10..1)
>   - name: event_type
> size: uniform(5..10)
> population: uniform(1..30)
> cluster: uniform(1..30)
>   - name: session_type
> size: fixed(5)
> population: uniform(1..4)
> cluster: uniform(1..4)
>   - name: user_id
> size: fixed(15)
> population: uniform(1..100)
>   - name: message
> size: uniform(10..100)
> population: uniform(1..100B)
> insert:
>   partitions: fixed(1)
>   batchtype: UNLOGGED
>   select: fixed(1)/120
> {noformat}
> Running stress tool for just the insert prints 
> {noformat}
> Generating batches with [1..1] partitions and [0..1] rows (of [10..120] 
> total rows in the partitions)
> {noformat}
> and then immediately starts flooding me with 
> {{com.datastax.driver.core.exceptions.InvalidQueryException: Batch too 
> large}}. 
> Why I should be exceeding the {{batch_size_fail_threshold_in_kb: 50}} in the 
> {{cassandra.yaml}} I do not understand. My understanding is that the stress 
> tool should generate one row per batch. The size of a single row should not 
> exceed {{8+10*3+5*3+15*3+100*3 = 398 bytes}}. Assuming a worst case of all 
> text characters being 3 byte unicode characters. 
> This is how I start the attached user scenario:
> {noformat}
> [rsteppac@centos bin]$ ./cassandra-stress user 
> profile=../batch_too_large.yaml ops\(insert=1\) -log level=verbose 
> file=~/centos_event_by_patient_session_event_timestamp_insert_only.log -node 
> 10.211.55.8
> INFO  08:00:07 Did not find Netty's native epoll transport in the classpath, 
> defaulting to NIO.
> INFO  08:00:08 Using data-center name 'datacenter1' for 
> DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct 
> datacenter name with DCAwareRoundRobinPolicy constructor)
> INFO  08:00:08 New Cassandra host /10.211.55.8:9042 added
> Connected to cluster: Titan_DEV
> Datatacenter: datacenter1; Host: /10.211.55.8; Rack: rack1
> Created schema. Sleeping 1s for propagation.
> Generating batches with [1..1] partitions and [0..1] rows (of [10..120] 
> total rows in the partitions)
> com.datastax.driver.core.exceptions.InvalidQueryException: Batch too large
>   at 
> com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:35)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:271)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:185)
>   at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:55)
>   at 
> org.apache.cassandra.stress.operations.userdefined.SchemaInsert$JavaDriverRun.run(SchemaInsert.java:87)
>   at 
> org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:159)
>   at 
> org.apache.cassandra.stress.operations.userdefined.SchemaInsert.run(SchemaInsert.java:119)
>   at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:309)
> Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: Batch 
> too large
>   at 
> com.datastax.driver.core.Responses$Error.asException(Responses.java:125)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:120)
>   at 
> com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:186)
>   at 
> com.datastax.driver.core.RequestHandler.access$2300(RequestHandler.java:45)
> 

[jira] [Updated] (CASSANDRA-11105) cassandra-stress tool - InvalidQueryException: Batch too large

2020-01-30 Thread Josh McKenzie (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh McKenzie updated CASSANDRA-11105:
--
Status: Open  (was: Patch Available)

> cassandra-stress tool - InvalidQueryException: Batch too large
> --
>
> Key: CASSANDRA-11105
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11105
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Tools
> Environment: Cassandra 2.2.4, Java 8, CentOS 6.5
>Reporter: Ralf Steppacher
>Priority: Normal
> Fix For: 4.0
>
> Attachments: 11105-trunk.txt, batch_too_large.yaml
>
>
> I am using Cassandra 2.2.4 and I am struggling to get the cassandra-stress 
> tool to work for my test scenario. I have followed the example on 
> http://www.datastax.com/dev/blog/improved-cassandra-2-1-stress-tool-benchmark-any-schema
>  to create a yaml file describing my test (attached).
> I am collecting events per user id (text, partition key). Events have a 
> session type (text), event type (text), and creation time (timestamp) 
> (clustering keys, in that order). Plus some more attributes required for 
> rendering the events in a UI. For testing purposes I ended up with the 
> following column spec and insert distribution:
> {noformat}
> columnspec:
>   - name: created_at
> cluster: uniform(10..1)
>   - name: event_type
> size: uniform(5..10)
> population: uniform(1..30)
> cluster: uniform(1..30)
>   - name: session_type
> size: fixed(5)
> population: uniform(1..4)
> cluster: uniform(1..4)
>   - name: user_id
> size: fixed(15)
> population: uniform(1..100)
>   - name: message
> size: uniform(10..100)
> population: uniform(1..100B)
> insert:
>   partitions: fixed(1)
>   batchtype: UNLOGGED
>   select: fixed(1)/120
> {noformat}
> Running stress tool for just the insert prints 
> {noformat}
> Generating batches with [1..1] partitions and [0..1] rows (of [10..120] 
> total rows in the partitions)
> {noformat}
> and then immediately starts flooding me with 
> {{com.datastax.driver.core.exceptions.InvalidQueryException: Batch too 
> large}}. 
> Why I should be exceeding the {{batch_size_fail_threshold_in_kb: 50}} in the 
> {{cassandra.yaml}} I do not understand. My understanding is that the stress 
> tool should generate one row per batch. The size of a single row should not 
> exceed {{8+10*3+5*3+15*3+100*3 = 398 bytes}}. Assuming a worst case of all 
> text characters being 3 byte unicode characters. 
> This is how I start the attached user scenario:
> {noformat}
> [rsteppac@centos bin]$ ./cassandra-stress user 
> profile=../batch_too_large.yaml ops\(insert=1\) -log level=verbose 
> file=~/centos_event_by_patient_session_event_timestamp_insert_only.log -node 
> 10.211.55.8
> INFO  08:00:07 Did not find Netty's native epoll transport in the classpath, 
> defaulting to NIO.
> INFO  08:00:08 Using data-center name 'datacenter1' for 
> DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct 
> datacenter name with DCAwareRoundRobinPolicy constructor)
> INFO  08:00:08 New Cassandra host /10.211.55.8:9042 added
> Connected to cluster: Titan_DEV
> Datatacenter: datacenter1; Host: /10.211.55.8; Rack: rack1
> Created schema. Sleeping 1s for propagation.
> Generating batches with [1..1] partitions and [0..1] rows (of [10..120] 
> total rows in the partitions)
> com.datastax.driver.core.exceptions.InvalidQueryException: Batch too large
>   at 
> com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:35)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:271)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:185)
>   at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:55)
>   at 
> org.apache.cassandra.stress.operations.userdefined.SchemaInsert$JavaDriverRun.run(SchemaInsert.java:87)
>   at 
> org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:159)
>   at 
> org.apache.cassandra.stress.operations.userdefined.SchemaInsert.run(SchemaInsert.java:119)
>   at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:309)
> Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: Batch 
> too large
>   at 
> com.datastax.driver.core.Responses$Error.asException(Responses.java:125)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:120)
>   at 
> com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:186)
>   at 
> com.datastax.driver.core.RequestHandler.access$2300(RequestHandler.java:45)
>   at 

[cassandra] tag 4.0-alpha3-tentative deleted (was 729146a)

2020-01-30 Thread mck
This is an automated email from the ASF dual-hosted git repository.

mck pushed a change to tag 4.0-alpha3-tentative
in repository https://gitbox.apache.org/repos/asf/cassandra.git.


*** WARNING: tag 4.0-alpha3-tentative was deleted! ***

 was 729146a  Merge branch 'cassandra-3.11' into trunk

The revisions that were on this tag are still contained in
other references; therefore, this change does not discard any commits
from the repository.


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



svn commit: r37801 - /dev/cassandra/4.0-alpha3/

2020-01-30 Thread mck
Author: mck
Date: Thu Jan 30 17:00:03 2020
New Revision: 37801

Log:
removing staging artifacts for Cassandra 4.0-alpha3, will redo with an RSA (not 
DSA) key

Removed:
dev/cassandra/4.0-alpha3/


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12995) update hppc dependency to 0.7

2020-01-30 Thread Brandon Williams (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-12995:
-
Reviewers: Brandon Williams, Brandon Williams  (was: Brandon Williams)
   Brandon Williams, Brandon Williams
   Status: Review In Progress  (was: Patch Available)

> update hppc dependency to 0.7
> -
>
> Key: CASSANDRA-12995
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12995
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Dependencies, Packaging
>Reporter: Tomas Repik
>Assignee: Ekaterina Dimitrova
>Priority: Normal
>  Labels: easyfix
> Fix For: 4.0
>
> Attachments: Screen Shot 2020-01-29 at 11.14.30 AM.png, Screen Shot 
> 2020-01-29 at 11.20.39 AM.png, Screen Shot 2020-01-29 at 11.20.47 AM.png, 
> cassandra-3.11.0-hppc.patch
>
>
> Cassandra 3.11.0 is about to be included in Fedora. There are some tweaks we 
> need to make to the sources in order to build it successfully. Cassandra 
> depends on hppc 0.5.4, but in Fedora we have the newer version 0.7.1, and 
> upstream has released an even newer version, 0.7.2. I attached a patch updating 
> the Cassandra sources to depend on the 0.7.1 hppc sources. It should also be 
> compatible with the newest upstream version. The only actual changes are the 
> removal of the Open infix in class names. The issue was discussed here: 
> https://bugzilla.redhat.com/show_bug.cgi?id=1340876. Please consider updating.
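
For readers unfamiliar with the hppc change referred to above: the 0.7 line dropped 
the "Open" infix from the collection class names. A minimal, illustrative before/after 
sketch follows, using the IntOpenHashSet/IntHashSet pair as a representative example 
(an assumption for illustration, not taken from the attached patch); the method calls 
shown are unchanged across the upgrade.

{noformat}
// Illustrative sketch only: the kind of rename the hppc 0.5.x -> 0.7.x upgrade implies.
//
// Before (hppc 0.5.x):
//   import com.carrotsearch.hppc.IntOpenHashSet;
//   IntOpenHashSet tokens = new IntOpenHashSet();
//
// After (hppc 0.7.x) -- the "Open" infix is gone; usage is otherwise the same here:
import com.carrotsearch.hppc.IntHashSet;

public class HppcRenameExample
{
    public static void main(String[] args)
    {
        IntHashSet tokens = new IntHashSet();    // was: new IntOpenHashSet()
        tokens.add(42);
        System.out.println(tokens.contains(42)); // prints: true
    }
}
{noformat}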



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-11105) cassandra-stress tool - InvalidQueryException: Batch too large

2020-01-30 Thread Michael Semb Wever (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-11105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026814#comment-17026814
 ] 

Michael Semb Wever commented on CASSANDRA-11105:


I'd prefer to close out the ticket as "won't do", because not only is there a 
workaround, but that workaround is probably closer to what you are actually 
trying to benchmark. That is, big batches are not normal and not recommended. 
That cassandra-stress uses batches by default is unfortunate, and it is even 
more unfortunate that it is so convoluted to make batches consist of only 
single inserts.

> cassandra-stress tool - InvalidQueryException: Batch too large
> --
>
> Key: CASSANDRA-11105
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11105
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Tools
> Environment: Cassandra 2.2.4, Java 8, CentOS 6.5
>Reporter: Ralf Steppacher
>Priority: Normal
> Fix For: 4.0
>
> Attachments: 11105-trunk.txt, batch_too_large.yaml
>
>
> I am using Cassandra 2.2.4 and I am struggling to get the cassandra-stress 
> tool to work for my test scenario. I have followed the example on 
> http://www.datastax.com/dev/blog/improved-cassandra-2-1-stress-tool-benchmark-any-schema
>  to create a yaml file describing my test (attached).
> I am collecting events per user id (text, partition key). Events have a 
> session type (text), event type (text), and creation time (timestamp) 
> (clustering keys, in that order). Plus some more attributes required for 
> rendering the events in a UI. For testing purposes I ended up with the 
> following column spec and insert distribution:
> {noformat}
> columnspec:
>   - name: created_at
> cluster: uniform(10..1)
>   - name: event_type
> size: uniform(5..10)
> population: uniform(1..30)
> cluster: uniform(1..30)
>   - name: session_type
> size: fixed(5)
> population: uniform(1..4)
> cluster: uniform(1..4)
>   - name: user_id
> size: fixed(15)
> population: uniform(1..100)
>   - name: message
> size: uniform(10..100)
> population: uniform(1..100B)
> insert:
>   partitions: fixed(1)
>   batchtype: UNLOGGED
>   select: fixed(1)/120
> {noformat}
> Running stress tool for just the insert prints 
> {noformat}
> Generating batches with [1..1] partitions and [0..1] rows (of [10..120] 
> total rows in the partitions)
> {noformat}
> and then immediately starts flooding me with 
> {{com.datastax.driver.core.exceptions.InvalidQueryException: Batch too 
> large}}. 
> Why I should be exceeding the {{batch_size_fail_threshold_in_kb: 50}} in the 
> {{cassandra.yaml}} I do not understand. My understanding is that the stress 
> tool should generate one row per batch. The size of a single row should not 
> exceed {{8+10*3+5*3+15*3+100*3 = 398 bytes}}. Assuming a worst case of all 
> text characters being 3 byte unicode characters. 
> This is how I start the attached user scenario:
> {noformat}
> [rsteppac@centos bin]$ ./cassandra-stress user 
> profile=../batch_too_large.yaml ops\(insert=1\) -log level=verbose 
> file=~/centos_event_by_patient_session_event_timestamp_insert_only.log -node 
> 10.211.55.8
> INFO  08:00:07 Did not find Netty's native epoll transport in the classpath, 
> defaulting to NIO.
> INFO  08:00:08 Using data-center name 'datacenter1' for 
> DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct 
> datacenter name with DCAwareRoundRobinPolicy constructor)
> INFO  08:00:08 New Cassandra host /10.211.55.8:9042 added
> Connected to cluster: Titan_DEV
> Datatacenter: datacenter1; Host: /10.211.55.8; Rack: rack1
> Created schema. Sleeping 1s for propagation.
> Generating batches with [1..1] partitions and [0..1] rows (of [10..120] 
> total rows in the partitions)
> com.datastax.driver.core.exceptions.InvalidQueryException: Batch too large
>   at 
> com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:35)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:271)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:185)
>   at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:55)
>   at 
> org.apache.cassandra.stress.operations.userdefined.SchemaInsert$JavaDriverRun.run(SchemaInsert.java:87)
>   at 
> org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:159)
>   at 
> org.apache.cassandra.stress.operations.userdefined.SchemaInsert.run(SchemaInsert.java:119)
>   at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:309)
> Caused by: 

[jira] [Comment Edited] (CASSANDRA-15534) add mick's second (RSA) gpg key to the project's KEYS file

2020-01-30 Thread Michael Semb Wever (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026769#comment-17026769
 ] 

Michael Semb Wever edited comment on CASSANDRA-15534 at 1/30/20 4:50 PM:
-

Yes you are correct [~mshuler].

My understanding was that an old DSA key of length 3072 was OK (for now), but 
the RPM signing has made it clear that DSA should simply be avoided.

Given that my DSA key has never been used, the patch is updated (and I'll redo 
the staged release artifacts).


was (Author: michaelsembwever):
Yes you are correct [~mshuler].

My understanding was that an old DSA key of length 3072 was ok (for now). But 
the rpm signing has made it clearly evident that DSA should just be avoided.

Given my DSA key has never been used, the patch is updated (and redo the stage 
release artifacts).

> add mick's second (RSA) gpg key to the project's KEYS file
> --
>
> Key: CASSANDRA-15534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15534
> Project: Cassandra
>  Issue Type: Task
>  Components: Build
>Reporter: Michael Semb Wever
>Assignee: Michael Shuler
>Priority: Normal
> Fix For: 4.0
>
> Attachments: 15534.patch
>
>
> Following on from CASSANDRA-15360... 
> Signing RPMs with `rpmsign` during the release process requires an RSA key, 
> so my existing DSA key from 15360 does not work with `rpmsign`.
> The patch adds my gpg public key to the project's KEYS file found at 
> https://dist.apache.org/repos/dist/release/cassandra/KEYS
> My gpg public key here has the fingerprint
>  A4C4 65FE A0C5 5256 1A39  2A61 E913 35D7 7E3E 87CB



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-11105) cassandra-stress tool - InvalidQueryException: Batch too large

2020-01-30 Thread Josh McKenzie (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-11105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026804#comment-17026804
 ] 

Josh McKenzie commented on CASSANDRA-11105:
---

[~adejanovski] - it's been almost 3 years since your patch on this ticket. Are 
you still active on the project and do you have a desire to move this forward 
by any chance (i.e. should we rebase and drum up a reviewer here)? If not, 
[~mck] - do you have cycles to take this on or perhaps a position on its 
importance to 4.0?

> cassandra-stress tool - InvalidQueryException: Batch too large
> --
>
> Key: CASSANDRA-11105
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11105
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Tools
> Environment: Cassandra 2.2.4, Java 8, CentOS 6.5
>Reporter: Ralf Steppacher
>Priority: Normal
> Fix For: 4.0
>
> Attachments: 11105-trunk.txt, batch_too_large.yaml
>
>
> I am using Cassandra 2.2.4 and I am struggling to get the cassandra-stress 
> tool to work for my test scenario. I have followed the example on 
> http://www.datastax.com/dev/blog/improved-cassandra-2-1-stress-tool-benchmark-any-schema
>  to create a yaml file describing my test (attached).
> I am collecting events per user id (text, partition key). Events have a 
> session type (text), event type (text), and creation time (timestamp) 
> (clustering keys, in that order). Plus some more attributes required for 
> rendering the events in a UI. For testing purposes I ended up with the 
> following column spec and insert distribution:
> {noformat}
> columnspec:
>   - name: created_at
> cluster: uniform(10..1)
>   - name: event_type
> size: uniform(5..10)
> population: uniform(1..30)
> cluster: uniform(1..30)
>   - name: session_type
> size: fixed(5)
> population: uniform(1..4)
> cluster: uniform(1..4)
>   - name: user_id
> size: fixed(15)
> population: uniform(1..100)
>   - name: message
> size: uniform(10..100)
> population: uniform(1..100B)
> insert:
>   partitions: fixed(1)
>   batchtype: UNLOGGED
>   select: fixed(1)/120
> {noformat}
> Running stress tool for just the insert prints 
> {noformat}
> Generating batches with [1..1] partitions and [0..1] rows (of [10..120] 
> total rows in the partitions)
> {noformat}
> and then immediately starts flooding me with 
> {{com.datastax.driver.core.exceptions.InvalidQueryException: Batch too 
> large}}. 
> Why I should be exceeding the {{batch_size_fail_threshold_in_kb: 50}} in the 
> {{cassandra.yaml}} I do not understand. My understanding is that the stress 
> tool should generate one row per batch. The size of a single row should not 
> exceed {{8+10*3+5*3+15*3+100*3 = 398 bytes}}. Assuming a worst case of all 
> text characters being 3 byte unicode characters. 
> This is how I start the attached user scenario:
> {noformat}
> [rsteppac@centos bin]$ ./cassandra-stress user 
> profile=../batch_too_large.yaml ops\(insert=1\) -log level=verbose 
> file=~/centos_event_by_patient_session_event_timestamp_insert_only.log -node 
> 10.211.55.8
> INFO  08:00:07 Did not find Netty's native epoll transport in the classpath, 
> defaulting to NIO.
> INFO  08:00:08 Using data-center name 'datacenter1' for 
> DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct 
> datacenter name with DCAwareRoundRobinPolicy constructor)
> INFO  08:00:08 New Cassandra host /10.211.55.8:9042 added
> Connected to cluster: Titan_DEV
> Datatacenter: datacenter1; Host: /10.211.55.8; Rack: rack1
> Created schema. Sleeping 1s for propagation.
> Generating batches with [1..1] partitions and [0..1] rows (of [10..120] 
> total rows in the partitions)
> com.datastax.driver.core.exceptions.InvalidQueryException: Batch too large
>   at 
> com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:35)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:271)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:185)
>   at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:55)
>   at 
> org.apache.cassandra.stress.operations.userdefined.SchemaInsert$JavaDriverRun.run(SchemaInsert.java:87)
>   at 
> org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:159)
>   at 
> org.apache.cassandra.stress.operations.userdefined.SchemaInsert.run(SchemaInsert.java:119)
>   at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:309)
> Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: Batch 
> too large
>   at 
> 

[jira] [Commented] (CASSANDRA-14761) Rename speculative_retry to match additional_write_policy

2020-01-30 Thread Josh McKenzie (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026801#comment-17026801
 ] 

Josh McKenzie commented on CASSANDRA-14761:
---

[~jolynch] - this has been sitting since Feb of '19. Do you want to take 
assignee on this so we can get another reviewer, should we kick the status back 
to open in the backlog for the release, or is there some other approach? Ariel's 
not active on the project anymore, so it makes sense that this one's somewhat 
stuck.

> Rename speculative_retry to match additional_write_policy
> -
>
> Key: CASSANDRA-14761
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14761
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/Core
>Reporter: Ariel Weisberg
>Priority: Normal
> Fix For: 4.0
>
>
> It's not really speculative. This commit is where it was last named and shows 
> what to update 
> https://github.com/aweisberg/cassandra/commit/e1df8e977d942a1b0da7c2a7554149c781d0e6c3



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-15300) 4.0 rpmbuild spec file is missing auditlogviewer and fqltool

2020-01-30 Thread Josh McKenzie (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh McKenzie reassigned CASSANDRA-15300:
-

Assignee: Yuko Sakanaka

> 4.0 rpmbuild spec file is missing auditlogviewer and fqltool
> 
>
> Key: CASSANDRA-15300
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15300
> Project: Cassandra
>  Issue Type: Bug
>  Components: Packaging
>Reporter: Yuko Sakanaka
>Assignee: Yuko Sakanaka
>Priority: Normal
>  Labels: pull-request-available
> Fix For: 4.0
>
> Attachments: 15300-4.0.txt
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The spec file on the current trunk branch (cassandra 4.0) is missing 
> auditlogviewer and fqltool.
> I tried rpmbuild on the trunk branch, but it failed with an unpackaged files error.
> RPM build errors:
>     Installed (but unpackaged) file(s) found:
>    /usr/bin/auditlogviewer
>    /usr/bin/fqltool
> I guess the committers will modify this file in the future because these are 
> new features, but I suggest that the following lines be added to the spec 
> file.
>  
> %attr(755,root,root) %{_bindir}/auditlogviewer
> %attr(755,root,root) %{_bindir}/fqltool
>  
> thanks.
>  
> [PATCH] Add auditlogviewer and fqltool into rpmbuild spec file. patch
>  by ysakanaka; for CASSANDRA-15300
>  
> ---
>  redhat/cassandra.spec | 2 ++
>  1 file changed, 2 insertions(+)
>  
> diff --git a/redhat/cassandra.spec b/redhat/cassandra.spec
> index eaf7565..0aedbd7 100644
> --- a/redhat/cassandra.spec
> +++ b/redhat/cassandra.spec
> @@ -173,6 +173,8 @@ This package contains extra tools for working with 
> Cassandra clusters.
>  %attr(755,root,root) %{_bindir}/sstableofflinerelevel
>  %attr(755,root,root) %{_bindir}/sstablerepairedset
>  %attr(755,root,root) %{_bindir}/sstablesplit
> +%attr(755,root,root) %{_bindir}/auditlogviewer
> +%attr(755,root,root) %{_bindir}/fqltool
>  
>  
>  %changelog
> -- 
> 1.8.3.1
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15534) add mick's second (RSA) gpg key to the project's KEYS file

2020-01-30 Thread Michael Shuler (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-15534:
---
  Fix Version/s: 4.0
Source Control Link: https://dist.apache.org/repos/dist/release/cassandra/
 Resolution: Fixed
 Status: Resolved  (was: Ready to Commit)

Committed revision 37800.

Thanks, Mick.

> add mick's second (RSA) gpg key to the project's KEYS file
> --
>
> Key: CASSANDRA-15534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15534
> Project: Cassandra
>  Issue Type: Task
>  Components: Build
>Reporter: Michael Semb Wever
>Assignee: Michael Shuler
>Priority: Normal
> Fix For: 4.0
>
> Attachments: 15534.patch
>
>
> Following on from CASSANDRA-15360... 
> Signing RPMs with `rpmsign` during the release process requires an RSA key, 
> so my existing DSA key from 15360 does not work with `rpmsign`.
> The patch adds my gpg public key to the project's KEYS file found at 
> https://dist.apache.org/repos/dist/release/cassandra/KEYS
> My gpg public key here has the fingerprint
>  A4C4 65FE A0C5 5256 1A39  2A61 E913 35D7 7E3E 87CB



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15534) add mick's second (RSA) gpg key to the project's KEYS file

2020-01-30 Thread Michael Shuler (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-15534:
---
Status: Ready to Commit  (was: Review In Progress)

> add mick's second (RSA) gpg key to the project's KEYS file
> --
>
> Key: CASSANDRA-15534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15534
> Project: Cassandra
>  Issue Type: Task
>  Components: Build
>Reporter: Michael Semb Wever
>Assignee: Michael Shuler
>Priority: Normal
> Attachments: 15534.patch
>
>
> Following on from CASSANDRA-15360... 
> Signing RPMs with `rpmsign` during the release process requires an RSA key, 
> so my existing DSA key from 15360 does not work with `rpmsign`.
> The patch adds my gpg public key to the project's KEYS file found at 
> https://dist.apache.org/repos/dist/release/cassandra/KEYS
> My gpg public key here has the fingerprint
>  A4C4 65FE A0C5 5256 1A39  2A61 E913 35D7 7E3E 87CB



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



svn commit: r37800 - /release/cassandra/KEYS

2020-01-30 Thread mshuler
Author: mshuler
Date: Thu Jan 30 15:47:05 2020
New Revision: 37800

Log:
Remove Mick Semb Wever's DSA key and add his RSA key

Patch by Mick Semb Wever, reviewed by Michael Shuler for CASSANDRA-15534

Modified:
release/cassandra/KEYS

Modified: release/cassandra/KEYS
==
--- release/cassandra/KEYS (original)
+++ release/cassandra/KEYS Thu Jan 30 15:47:05 2020
@@ -3864,359 +3864,84 @@ iKh4wsFPQGBh9ssAC3lQrs6T7ccqnRoO6xsmL+Y2
 gbFPnWvcHSSFnKg=
 =GW0U
 -END PGP PUBLIC KEY BLOCK-
-pub   dsa3072 2010-04-26 [SC]
-  ABCD3108336F7CC6567E769FFDD3B769B21C125C
-uid   [ultimate] Mick Semb Wever 
-sig 3FDD3B769B21C125C 2018-06-01  Mick Semb Wever 
-sig  91D3EB78F8AEBAD3 2010-11-04  Michael Semb Wever (Java Engineer) 

-uid   [ultimate] Mick Semb Wever 
-sig 3FDD3B769B21C125C 2018-06-01  Mick Semb Wever 
-sig 3FDD3B769B21C125C 2010-04-26  Mick Semb Wever 
-sig  91D3EB78F8AEBAD3 2010-11-04  Michael Semb Wever (Java Engineer) 

-uid   [ultimate] [jpeg image of size 4671]
-sig 3FDD3B769B21C125C 2010-04-26  Mick Semb Wever 
-sig  91D3EB78F8AEBAD3 2010-11-04  Michael Semb Wever (Java Engineer) 

-uid   [ultimate] Mick Semb Wever 
-sig 3FDD3B769B21C125C 2018-06-01  Mick Semb Wever 
-sub   elg4096 2010-04-26 [E]
-sig  FDD3B769B21C125C 2010-04-26  Mick Semb Wever 
-
-pub   dsa3072 2010-04-26 [SC]
-  ABCD3108336F7CC6567E769FFDD3B769B21C125C
-uid   [ultimate] Mick Semb Wever 
-sig 3FDD3B769B21C125C 2010-04-26  Mick Semb Wever 
-uid   [ultimate] Mick Semb Wever 
-sig 3FDD3B769B21C125C 2010-04-26  Mick Semb Wever 
-uid   [ultimate] [jpeg image of size 4671]
-sig 3FDD3B769B21C125C 2010-04-26  Mick Semb Wever 
-sub   elg4096 2010-04-26 [E]
-sig  FDD3B769B21C125C 2010-04-26  Mick Semb Wever 
+pub   rsa4096 2017-10-31 [SC]
+  A4C465FEA0C552561A392A61E91335D77E3E87CB
+uid   [ unknown] Michael Semb Wever 
+sig 3E91335D77E3E87CB 2018-01-10  Michael Semb Wever 

+sig  FDD3B769B21C125C 2019-10-17  Mick Semb Wever 
+uid   [ unknown] Michael Semb Wever 
+sig 3E91335D77E3E87CB 2018-01-10  Michael Semb Wever 

+sig  FDD3B769B21C125C 2019-10-17  Mick Semb Wever 
+sub   rsa4096 2017-10-31 [E]
+sig  E91335D77E3E87CB 2017-10-31  Michael Semb Wever 

 
 -BEGIN PGP PUBLIC KEY BLOCK-
 
-mQSuBEvV7NIRDAC1ASnxXKXvnbJi6ZaNXgLiU+A9ziX5/xQy7NfnvBwu26v/Xm5g
-OnMTFIpdQBh1YZtCl4zzFdVPOCb0fYBantKYIyYUDZGtWNPJPezd9pPOMxB//O3z
-C2RhPWB2Hoc3Bjgc1IR1VVLGewX/v7+M1qqSx5D7G8QDMLguJxCuisTw45nfY52M
-EO+y0ZT/oCh4iZTbq+PIMKyLfDLnWF0zyXcFK+iMimb+DEOglBctpmpB3kR6bifx
-BlZzkzO65eiuaxFQ0xZjf0R7WYdmfY8piQRyqh/y/kn8Slk6THz0aYRca8Jf6y0L
-avszdiuCAkh4SK74l61SY/J1oXbBKWZkPMAfAxXyKwzh3Nm8eFyNnS0EBuITilvi
-/lWc6MQIoCh/bMNeGCzeoJWehBbCA1o0OxNpsqHjSffE2cM/r0NOQD1weK54osO0
-p+U0AxhXVHPUOuwqPmSUNTm05rVNCLQLvPaVt1M69MTR7bt/mfOJxrIDPvgThpX6
-5Uo4hVAewIyiH48BAOLcCRVSEy+sxKm69qPfps+amvwvNoY3fNm4esGGKIT5DACT
-MrR2lVPY/XGM0F+AOj51XgmCOGn1wmjZiXe1kMHRZBlVzFXuOfZKfZeYDMT8zK0X
-oSfBCy4OOMG2QRb4ICHiIGMRj7XDrS3MrHgJR2vnQ/ZUo+y7pb03Z5a9sMooh/Uk
-BeF9wMkIt5mbtQyxRZYBvG6e6KrlA/ViG5I9QoTjKsjUj4B5s556Po6n3IJqXpW4
-Abtc+FxjhY3SafyQG+nsVZbrsojtityXo6y6R/yTdNB+N0reIgPs5dSWFt/N7SNq
-cwLLvhbTl27cR1afNkxgh6ULVvSZ6I9KMMZIvdLD+HWZijzSi/pYnZlrrp5NRBWw
-ZwRU4TOXuPiMk4eNW4X3YMR/Z3/XqT2JtGex/x/J03YnEDVET5LAcWZ0Nt9QlVNg
-6DwG/cFCgoySszgyIsNiorVslxG62CsDRLF2p6o8lyYix7uAdqnhVEsEMQpa1Ah/
-sYE8JRcVEDTiPL3VCp3ZG/3jPt6A/KDzrEMC2t11ZcoBkBZoIBKfvRPH02yuM3cL
-/ROU5R7FMIv3mqimg+SojUmn6TWhzlDIo/K9p6v+Sj/ujpR5pzbVUGc3SMhQ+us8
-C5BKPDGMgdP155LS2/C1LnSjKQafsrqAiK9rKmLcmIHK/1WoFt04ckjmzqrIJPJd
-bZU8YYA5TTr/ZR3PPkN5n5NXF1K5nckPSVWRf6wyQhcA8Ao7iZO2uwWR07XGUItT
-/FBGPEmgh2S8GXvzFTRhVozNvvL7+WhdQa6OQe5ICM8Wc67u1wUWzuPZguCfmxyA
-LhdnVRUZAoVdbk3IFsovcnquby6vDduIt9CsNhFm2SadKb1JdxJxsgaDNUYwCSpA
-0GKnu5eYA+O8S/vYHnjZVITWr0V7qk81P92W84OKXwVkJ0QwmXkSFfbF8V/Nvjxp
-01+rz0ctndbYf4mXAyzTqv4iC6IWbJ1Zz1GH5JGw+HMx5hk1rcE8jUItDvPWt8+F
-HxO/K8ufoVx1AJEptnZToKe5QtV4EOmzLNyt9jnCPT2qPNKK0Ad4bsDsbqK4Spyz
-lrQgTWljayBTZW1iIFdldmVyIDxtY2tAYXBhY2hlLm9yZz6IkAQTEQoAOAIbAwUL
-CQgHAwUVCgkICwUWAgMBAAIeAQIXgBYhBKvNMQgzb3zGVn52n/3Tt2myHBJcBQJb
-EI9GAAoJEP3Tt2myHBJc1GoA/R3Z/qs3kYuLgIpMF/bIAFHYJErmEa1gkMTSaYvf
-sR65AP9RQqW9Niy0JCwVtlq5gXSjgcYh6Gh1/w0ehidE+Kp7Toh5BBMRCgAhBQJL
-1ezSAhsDBQsJCAcDBRUKCQgLBRYCAwEAAh4BAheAAAoJEP3Tt2myHBJcf6gA/3PW
-HVINw1kWYrfQDIgMqnkFDbG2UFMP5X6FG5mEo4oKAP4jZQchjncch8abmj1lUz7c
-3aB1+rF+9j4FuHatwzcp9ohGBBARCgAGBQJM0hpoAAoJEJHT63j4rrrTgI8An2wr
-J4j6ATOeDMPNSwad3GQV3zsJAJ0QV7dgVYGjRVt+OLJcqa6Wt0yIELQlTWljayBT
-ZW1iIFdldmVyIDxtaWNrQHNlbWIud2V2ZXIub3JnPoiTBBMRCgA7AhsDBQsJCAcD
-BRUKCQgLBRYCAwEAAh4BAheAFiEEq80xCDNvfMZWfnaf/dO3abIcElwFAlsQj0YC
-GQEACgkQ/dO3abIcElyrpAD/W4Ge09788Ks6Qy+ipBewjeUaOC3SMI8To73CLQC5
-p3MBAMH7qqy6SO/s6OxmDWdVN7wjK+e9LY/PhEqb5ReMMekiiEYEEBEKAAYFAkzS

[jira] [Commented] (CASSANDRA-15505) Add message interceptors to in-jvm dtests

2020-01-30 Thread Alex Petrov (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026773#comment-17026773
 ] 

Alex Petrov commented on CASSANDRA-15505:
-

Posting patches for all branches for visibility: 

|2.2|[patch|https://github.com/ifesdjeen/cassandra/tree/CASSANDRA-15505-interceptors-2.2]|[CI|https://circleci.com/gh/ifesdjeen/cassandra/tree/CASSANDRA-15505-interceptors-2.2]|
|3.0|[patch|https://github.com/ifesdjeen/cassandra/tree/CASSANDRA-15505-interceptors-3.0]|[CI|https://circleci.com/gh/ifesdjeen/cassandra/tree/CASSANDRA-15505-interceptors-3.0]|
|3.11|[patch|https://github.com/ifesdjeen/cassandra/tree/CASSANDRA-15505-interceptors-3.11]|[CI|https://circleci.com/gh/ifesdjeen/cassandra/tree/CASSANDRA-15505-interceptors-3.11]|
|trunk|[patch|https://github.com/ifesdjeen/cassandra/tree/CASSANDRA-15505-interceptors-trunk]|[CI|https://circleci.com/gh/ifesdjeen/cassandra/tree/CASSANDRA-15505-interceptors-trunk]|

> Add message interceptors to in-jvm dtests
> -
>
> Key: CASSANDRA-15505
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15505
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Test/dtest
>Reporter: Alex Petrov
>Assignee: Alex Petrov
>Priority: Normal
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Currently we only have means to filter messages in in-jvm tests. We need a 
> facility to intercept and modify the messages between nodes for testing 
> purposes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15307) Fix flakey test_remote_query - cql_test.TestCQLSlowQuery test

2020-01-30 Thread Brandon Williams (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-15307:
-
Fix Version/s: (was: 4.x)
   4.0

> Fix flakey  test_remote_query - cql_test.TestCQLSlowQuery test
> --
>
> Key: CASSANDRA-15307
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15307
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest
>Reporter: Joey Lynch
>Assignee: Ekaterina Dimitrova
>Priority: Normal
> Fix For: 4.0
>
> Attachments: CASSANDRA-15307-fixed.txt, CASSANDRA-15307.txt
>
>
> Example failure: 
> [https://circleci.com/gh/jolynch/cassandra/554#tests/containers/61]
>  
> {noformat}
> Your job ran 959 tests with 1 failure
> - test_remote_query cql_test.TestCQLSlowQuerycql_test.py
> ccmlib.node.TimeoutError: 05 Sep 2019 23:05:07 [node2] Missing: ['operations 
> were slow', 'SELECT \\* FROM ks.test2 WHERE id = 1']: DEBUG [BatchlogTasks:1] 
> 2019-09-05 23:04:24,437 Ba. See debug.log for remainder
> self = 
> def test_remote_query(self):
> """
> Check that a query running on a node other than the coordinator 
> is reported as slow:
> 
> - populate the cluster with 2 nodes
> - start one node without having it join the ring
> - start the other one node with slow_query_log_timeout_in_ms set 
> to a small value
>   and the read request timeouts set to a large value (to ensure 
> the query is not aborted) and
>   read_iteration_delay set to a value big enough for the query to 
> exceed slow_query_log_timeout_in_ms
>   (this will cause read queries to take longer than the slow 
> query timeout)
> - CREATE a table
> - INSERT 5000 rows on a session on the node that is not a member 
> of the ring
> - run SELECT statements and check that the slow query messages 
> are present in the debug logs
>   (we cannot check the logs at info level because the no spam 
> logger has unpredictable results)
> 
> @jira_ticket CASSANDRA-12403
> """
> cluster = self.cluster
> 
> cluster.set_configuration_options(values={'slow_query_log_timeout_in_ms': 10,
>   'request_timeout_in_ms': 
> 12,
>   
> 'read_request_timeout_in_ms': 12,
>   
> 'range_request_timeout_in_ms': 12})
> 
> cluster.populate(2)
> node1, node2 = cluster.nodelist()
> 
> node1.start(wait_for_binary_proto=True, join_ring=False)  # ensure 
> other node executes queries
> node2.start(wait_for_binary_proto=True,
> jvm_args=["-Dcassandra.monitoring_report_interval_ms=10",
>   "-Dcassandra.test.read_iteration_delay_ms=1"])  
> # see above for explanation
> 
> session = self.patient_exclusive_cql_connection(node1)
> 
> create_ks(session, 'ks', 1)
> session.execute("""
> CREATE TABLE test2 (
> id int,
> col int,
> val text,
> PRIMARY KEY(id, col)
> );
> """)
> 
> for i, j in itertools.product(list(range(100)), list(range(10))):
> session.execute("INSERT INTO test2 (id, col, val) VALUES ({}, {}, 
> 'foo')".format(i, j))
> 
> # only check debug logs because at INFO level the no-spam logger has 
> unpredictable results
> mark = node2.mark_log(filename='debug.log')
> session.execute(SimpleStatement("SELECT * from test2",
> 
> consistency_level=ConsistencyLevel.ONE,
> 
> retry_policy=FallthroughRetryPolicy()))
> node2.watch_log_for(["operations were slow", "SELECT \* FROM 
> ks.test2"],
> from_mark=mark, filename='debug.log', timeout=60)
> 
> 
> mark = node2.mark_log(filename='debug.log')
> session.execute(SimpleStatement("SELECT * from test2 where id = 1",
> 
> consistency_level=ConsistencyLevel.ONE,
> 
> retry_policy=FallthroughRetryPolicy()))
> node2.watch_log_for(["operations were slow", "SELECT \* FROM ks.test2 
> WHERE id = 1"],
> >   from_mark=mark, filename='debug.log', timeout=60)
> cql_test.py:1150: 
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> _ 
> self = 
> exprs = ['operations were slow', 'SELECT \\* FROM 

[jira] [Updated] (CASSANDRA-15307) Fix flakey test_remote_query - cql_test.TestCQLSlowQuery test

2020-01-30 Thread Brandon Williams (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-15307:
-
  Since Version: 3.2
Source Control Link: 
https://github.com/apache/cassandra-dtest/commit/986318e6fe027272338ef48da7cb2b86656db94b
 Resolution: Fixed
 Status: Resolved  (was: Ready to Commit)

Committed, thanks!

> Fix flakey  test_remote_query - cql_test.TestCQLSlowQuery test
> --
>
> Key: CASSANDRA-15307
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15307
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest
>Reporter: Joey Lynch
>Assignee: Ekaterina Dimitrova
>Priority: Normal
> Fix For: 4.x
>
> Attachments: CASSANDRA-15307-fixed.txt, CASSANDRA-15307.txt
>
>
> Example failure: 
> [https://circleci.com/gh/jolynch/cassandra/554#tests/containers/61]
>  
> {noformat}
> Your job ran 959 tests with 1 failure
> - test_remote_query cql_test.TestCQLSlowQuerycql_test.py
> ccmlib.node.TimeoutError: 05 Sep 2019 23:05:07 [node2] Missing: ['operations 
> were slow', 'SELECT \\* FROM ks.test2 WHERE id = 1']: DEBUG [BatchlogTasks:1] 
> 2019-09-05 23:04:24,437 Ba. See debug.log for remainder
> self = 
> def test_remote_query(self):
> """
> Check that a query running on a node other than the coordinator 
> is reported as slow:
> 
> - populate the cluster with 2 nodes
> - start one node without having it join the ring
> - start the other one node with slow_query_log_timeout_in_ms set 
> to a small value
>   and the read request timeouts set to a large value (to ensure 
> the query is not aborted) and
>   read_iteration_delay set to a value big enough for the query to 
> exceed slow_query_log_timeout_in_ms
>   (this will cause read queries to take longer than the slow 
> query timeout)
> - CREATE a table
> - INSERT 5000 rows on a session on the node that is not a member 
> of the ring
> - run SELECT statements and check that the slow query messages 
> are present in the debug logs
>   (we cannot check the logs at info level because the no spam 
> logger has unpredictable results)
> 
> @jira_ticket CASSANDRA-12403
> """
> cluster = self.cluster
> 
> cluster.set_configuration_options(values={'slow_query_log_timeout_in_ms': 10,
>   'request_timeout_in_ms': 
> 12,
>   
> 'read_request_timeout_in_ms': 12,
>   
> 'range_request_timeout_in_ms': 12})
> 
> cluster.populate(2)
> node1, node2 = cluster.nodelist()
> 
> node1.start(wait_for_binary_proto=True, join_ring=False)  # ensure 
> other node executes queries
> node2.start(wait_for_binary_proto=True,
> jvm_args=["-Dcassandra.monitoring_report_interval_ms=10",
>   "-Dcassandra.test.read_iteration_delay_ms=1"])  
> # see above for explanation
> 
> session = self.patient_exclusive_cql_connection(node1)
> 
> create_ks(session, 'ks', 1)
> session.execute("""
> CREATE TABLE test2 (
> id int,
> col int,
> val text,
> PRIMARY KEY(id, col)
> );
> """)
> 
> for i, j in itertools.product(list(range(100)), list(range(10))):
> session.execute("INSERT INTO test2 (id, col, val) VALUES ({}, {}, 
> 'foo')".format(i, j))
> 
> # only check debug logs because at INFO level the no-spam logger has 
> unpredictable results
> mark = node2.mark_log(filename='debug.log')
> session.execute(SimpleStatement("SELECT * from test2",
> 
> consistency_level=ConsistencyLevel.ONE,
> 
> retry_policy=FallthroughRetryPolicy()))
> node2.watch_log_for(["operations were slow", "SELECT \* FROM 
> ks.test2"],
> from_mark=mark, filename='debug.log', timeout=60)
> 
> 
> mark = node2.mark_log(filename='debug.log')
> session.execute(SimpleStatement("SELECT * from test2 where id = 1",
> 
> consistency_level=ConsistencyLevel.ONE,
> 
> retry_policy=FallthroughRetryPolicy()))
> node2.watch_log_for(["operations were slow", "SELECT \* FROM ks.test2 
> WHERE id = 1"],
> >   from_mark=mark, 

[jira] [Updated] (CASSANDRA-15307) Fix flakey test_remote_query - cql_test.TestCQLSlowQuery test

2020-01-30 Thread Brandon Williams (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-15307:
-
Reviewers: Brandon Williams, Brandon Williams  (was: Brandon Williams)
   Brandon Williams, Brandon Williams
   Status: Review In Progress  (was: Patch Available)

> Fix flakey  test_remote_query - cql_test.TestCQLSlowQuery test
> --
>
> Key: CASSANDRA-15307
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15307
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest
>Reporter: Joey Lynch
>Assignee: Ekaterina Dimitrova
>Priority: Normal
> Fix For: 4.x
>
> Attachments: CASSANDRA-15307-fixed.txt, CASSANDRA-15307.txt
>
>
> Example failure: 
> [https://circleci.com/gh/jolynch/cassandra/554#tests/containers/61]
>  
> {noformat}
> Your job ran 959 tests with 1 failure
> - test_remote_query cql_test.TestCQLSlowQuerycql_test.py
> ccmlib.node.TimeoutError: 05 Sep 2019 23:05:07 [node2] Missing: ['operations 
> were slow', 'SELECT \\* FROM ks.test2 WHERE id = 1']: DEBUG [BatchlogTasks:1] 
> 2019-09-05 23:04:24,437 Ba. See debug.log for remainder
> self = 
> def test_remote_query(self):
> """
> Check that a query running on a node other than the coordinator 
> is reported as slow:
> 
> - populate the cluster with 2 nodes
> - start one node without having it join the ring
> - start the other one node with slow_query_log_timeout_in_ms set 
> to a small value
>   and the read request timeouts set to a large value (to ensure 
> the query is not aborted) and
>   read_iteration_delay set to a value big enough for the query to 
> exceed slow_query_log_timeout_in_ms
>   (this will cause read queries to take longer than the slow 
> query timeout)
> - CREATE a table
> - INSERT 5000 rows on a session on the node that is not a member 
> of the ring
> - run SELECT statements and check that the slow query messages 
> are present in the debug logs
>   (we cannot check the logs at info level because the no spam 
> logger has unpredictable results)
> 
> @jira_ticket CASSANDRA-12403
> """
> cluster = self.cluster
> 
> cluster.set_configuration_options(values={'slow_query_log_timeout_in_ms': 10,
>   'request_timeout_in_ms': 
> 12,
>   
> 'read_request_timeout_in_ms': 12,
>   
> 'range_request_timeout_in_ms': 12})
> 
> cluster.populate(2)
> node1, node2 = cluster.nodelist()
> 
> node1.start(wait_for_binary_proto=True, join_ring=False)  # ensure 
> other node executes queries
> node2.start(wait_for_binary_proto=True,
> jvm_args=["-Dcassandra.monitoring_report_interval_ms=10",
>   "-Dcassandra.test.read_iteration_delay_ms=1"])  
> # see above for explanation
> 
> session = self.patient_exclusive_cql_connection(node1)
> 
> create_ks(session, 'ks', 1)
> session.execute("""
> CREATE TABLE test2 (
> id int,
> col int,
> val text,
> PRIMARY KEY(id, col)
> );
> """)
> 
> for i, j in itertools.product(list(range(100)), list(range(10))):
> session.execute("INSERT INTO test2 (id, col, val) VALUES ({}, {}, 
> 'foo')".format(i, j))
> 
> # only check debug logs because at INFO level the no-spam logger has 
> unpredictable results
> mark = node2.mark_log(filename='debug.log')
> session.execute(SimpleStatement("SELECT * from test2",
> 
> consistency_level=ConsistencyLevel.ONE,
> 
> retry_policy=FallthroughRetryPolicy()))
> node2.watch_log_for(["operations were slow", "SELECT \* FROM 
> ks.test2"],
> from_mark=mark, filename='debug.log', timeout=60)
> 
> 
> mark = node2.mark_log(filename='debug.log')
> session.execute(SimpleStatement("SELECT * from test2 where id = 1",
> 
> consistency_level=ConsistencyLevel.ONE,
> 
> retry_policy=FallthroughRetryPolicy()))
> node2.watch_log_for(["operations were slow", "SELECT \* FROM ks.test2 
> WHERE id = 1"],
> >   from_mark=mark, filename='debug.log', timeout=60)
> cql_test.py:1150: 
> _ _ _ _ _ _ _ _ _ _ _ _ _ 

[jira] [Updated] (CASSANDRA-15307) Fix flakey test_remote_query - cql_test.TestCQLSlowQuery test

2020-01-30 Thread Brandon Williams (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-15307:
-
Status: Ready to Commit  (was: Review In Progress)

> Fix flakey  test_remote_query - cql_test.TestCQLSlowQuery test
> --
>
> Key: CASSANDRA-15307
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15307
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest
>Reporter: Joey Lynch
>Assignee: Ekaterina Dimitrova
>Priority: Normal
> Fix For: 4.x
>
> Attachments: CASSANDRA-15307-fixed.txt, CASSANDRA-15307.txt
>
>
> Example failure: 
> [https://circleci.com/gh/jolynch/cassandra/554#tests/containers/61]
>  
> {noformat}
> Your job ran 959 tests with 1 failure
> - test_remote_query cql_test.TestCQLSlowQuerycql_test.py
> ccmlib.node.TimeoutError: 05 Sep 2019 23:05:07 [node2] Missing: ['operations 
> were slow', 'SELECT \\* FROM ks.test2 WHERE id = 1']: DEBUG [BatchlogTasks:1] 
> 2019-09-05 23:04:24,437 Ba. See debug.log for remainder
> self = 
> def test_remote_query(self):
> """
> Check that a query running on a node other than the coordinator 
> is reported as slow:
> 
> - populate the cluster with 2 nodes
> - start one node without having it join the ring
> - start the other one node with slow_query_log_timeout_in_ms set 
> to a small value
>   and the read request timeouts set to a large value (to ensure 
> the query is not aborted) and
>   read_iteration_delay set to a value big enough for the query to 
> exceed slow_query_log_timeout_in_ms
>   (this will cause read queries to take longer than the slow 
> query timeout)
> - CREATE a table
> - INSERT 5000 rows on a session on the node that is not a member 
> of the ring
> - run SELECT statements and check that the slow query messages 
> are present in the debug logs
>   (we cannot check the logs at info level because the no spam 
> logger has unpredictable results)
> 
> @jira_ticket CASSANDRA-12403
> """
> cluster = self.cluster
> 
> cluster.set_configuration_options(values={'slow_query_log_timeout_in_ms': 10,
>   'request_timeout_in_ms': 
> 12,
>   
> 'read_request_timeout_in_ms': 12,
>   
> 'range_request_timeout_in_ms': 12})
> 
> cluster.populate(2)
> node1, node2 = cluster.nodelist()
> 
> node1.start(wait_for_binary_proto=True, join_ring=False)  # ensure 
> other node executes queries
> node2.start(wait_for_binary_proto=True,
> jvm_args=["-Dcassandra.monitoring_report_interval_ms=10",
>   "-Dcassandra.test.read_iteration_delay_ms=1"])  
> # see above for explanation
> 
> session = self.patient_exclusive_cql_connection(node1)
> 
> create_ks(session, 'ks', 1)
> session.execute("""
> CREATE TABLE test2 (
> id int,
> col int,
> val text,
> PRIMARY KEY(id, col)
> );
> """)
> 
> for i, j in itertools.product(list(range(100)), list(range(10))):
> session.execute("INSERT INTO test2 (id, col, val) VALUES ({}, {}, 
> 'foo')".format(i, j))
> 
> # only check debug logs because at INFO level the no-spam logger has 
> unpredictable results
> mark = node2.mark_log(filename='debug.log')
> session.execute(SimpleStatement("SELECT * from test2",
> 
> consistency_level=ConsistencyLevel.ONE,
> 
> retry_policy=FallthroughRetryPolicy()))
> node2.watch_log_for(["operations were slow", "SELECT \* FROM 
> ks.test2"],
> from_mark=mark, filename='debug.log', timeout=60)
> 
> 
> mark = node2.mark_log(filename='debug.log')
> session.execute(SimpleStatement("SELECT * from test2 where id = 1",
> 
> consistency_level=ConsistencyLevel.ONE,
> 
> retry_policy=FallthroughRetryPolicy()))
> node2.watch_log_for(["operations were slow", "SELECT \* FROM ks.test2 
> WHERE id = 1"],
> >   from_mark=mark, filename='debug.log', timeout=60)
> cql_test.py:1150: 
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> _ 
> self = 
> exprs = ['operations were slow', 'SELECT \\* FROM 

[jira] [Assigned] (CASSANDRA-15534) add mick's second (RSA) gpg key to the project's KEYS file

2020-01-30 Thread Michael Shuler (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler reassigned CASSANDRA-15534:
--

Assignee: Michael Shuler

> add mick's second (RSA) gpg key to the project's KEYS file
> --
>
> Key: CASSANDRA-15534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15534
> Project: Cassandra
>  Issue Type: Task
>  Components: Build
>Reporter: Michael Semb Wever
>Assignee: Michael Shuler
>Priority: Normal
> Attachments: 15534.patch
>
>
> Following on from CASSANDRA-15360... 
> Signing RPMs with `rpmsign` during the release process requires an RSA key, 
> so my existing DSA key from 15360 does not work with `rpmsign`.
> The patch adds my gpg public key to the project's KEYS file found at 
> https://dist.apache.org/repos/dist/release/cassandra/KEYS
> My gpg public key here has the fingerprint
>  A4C4 65FE A0C5 5256 1A39  2A61 E913 35D7 7E3E 87CB



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-15213) DecayingEstimatedHistogramReservoir Inefficiencies

2020-01-30 Thread Jordan West (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026770#comment-17026770
 ] 

Jordan West edited comment on CASSANDRA-15213 at 1/30/20 3:41 PM:
--

Incorporated your change, rebased/squashed, and pushed. Thanks [~benedict].

[branch | https://github.com/jrwest/cassandra/tree/jwest/15213] [tests | 
https://circleci.com/gh/jrwest/cassandra/tree/jwest%2F15213]


was (Author: jrwest):
Incorporated your change, rebased/squashed, and pushed. Thanks [~benedict].

> DecayingEstimatedHistogramReservoir Inefficiencies
> --
>
> Key: CASSANDRA-15213
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15213
> Project: Cassandra
>  Issue Type: Bug
>  Components: Observability/Metrics
>Reporter: Benedict Elliott Smith
>Assignee: Jordan West
>Priority: Normal
> Fix For: 4.0-beta
>
>
> * {{LongAdder}} introduced to trunk consumes 9MiB of heap without user 
> schemas, and this will grow significantly under contention and user schemas 
> with many tables.  This is because {{LongAdder}} is a very heavy class 
> designed for single contended values.  
>  ** This can likely be improved significantly, without significant loss of 
> performance in the contended case, by simply increasing the size of our 
> primitive backing array and providing multiple buckets, with each thread 
> picking a bucket to increment, or simply multiple backing arrays.  Probably a 
> better way still to do this would be to introduce some competition detection 
> to the update, much like {{LongAdder}} utilises, that increases the number of 
> backing arrays under competition.
>  ** To save memory this approach could partition the space into chunks that 
> are likely to be updated together, so that we do not need to duplicate the 
> entire array under competition.
>  * Similarly, binary search is costly and a measurable cost as a share of the 
> new networking work (without filtering it was > 10% of the CPU used overall). 
>  We can compute an approximation floor(log2 n / log2 1.2) extremely cheaply, 
> to save the random memory access costs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra-dtest] branch master updated: Fix flakey test_remote_query - cql_test.TestCQLSlowQuery test

2020-01-30 Thread brandonwilliams
This is an automated email from the ASF dual-hosted git repository.

brandonwilliams pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/cassandra-dtest.git


The following commit(s) were added to refs/heads/master by this push:
 new 986318e  Fix flakey test_remote_query - cql_test.TestCQLSlowQuery test
986318e is described below

commit 986318e6fe027272338ef48da7cb2b86656db94b
Author: Ekaterina Dimitrova 
AuthorDate: Wed Jan 29 17:18:09 2020 -0500

Fix flakey test_remote_query - cql_test.TestCQLSlowQuery test

Patch by Ekaterina Dimitrova, reviewed by brandonwilliams for CASSANDRA-15307
---
 cql_test.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/cql_test.py b/cql_test.py
index 0f49561..84ded2b 100644
--- a/cql_test.py
+++ b/cql_test.py
@@ -1116,7 +1116,7 @@ class TestCQLSlowQuery(CQLTester):
 node1.start(wait_for_binary_proto=True, join_ring=False)  # ensure 
other node executes queries
 node2.start(wait_for_binary_proto=True,
 jvm_args=["-Dcassandra.monitoring_report_interval_ms=10",
-  "-Dcassandra.test.read_iteration_delay_ms=1"])  
# see above for explanation
+  "-Dcassandra.test.read_iteration_delay_ms=2"])  
# see above for explanation
 
 session = self.patient_exclusive_cql_connection(node1)
 


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15534) add mick's second (RSA) gpg key to the project's KEYS file

2020-01-30 Thread Michael Shuler (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-15534:
---
Reviewers: Michael Shuler, Michael Shuler  (was: Michael Shuler)
   Michael Shuler, Michael Shuler
   Status: Review In Progress  (was: Patch Available)

> add mick's second (RSA) gpg key to the project's KEYS file
> --
>
> Key: CASSANDRA-15534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15534
> Project: Cassandra
>  Issue Type: Task
>  Components: Build
>Reporter: Michael Semb Wever
>Assignee: Michael Shuler
>Priority: Normal
> Attachments: 15534.patch
>
>
> Following on from CASSANDRA-15360... 
> Signing RPMs with `rpmsign` during the release process requires an RSA 
> key, so my existing DSA key from 15360 does not work with `rpmsign`.
> The patch adds my gpg public key to the project's KEYS file found at 
> https://dist.apache.org/repos/dist/release/cassandra/KEYS
> My gpg public key here has the fingerprint
>  A4C4 65FE A0C5 5256 1A39  2A61 E913 35D7 7E3E 87CB



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15213) DecayingEstimatedHistogramReservoir Inefficiencies

2020-01-30 Thread Jordan West (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026770#comment-17026770
 ] 

Jordan West commented on CASSANDRA-15213:
-

Incorporated your change, rebased/squashed, and pushed. Thanks [~benedict].

> DecayingEstimatedHistogramReservoir Inefficiencies
> --
>
> Key: CASSANDRA-15213
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15213
> Project: Cassandra
>  Issue Type: Bug
>  Components: Observability/Metrics
>Reporter: Benedict Elliott Smith
>Assignee: Jordan West
>Priority: Normal
> Fix For: 4.0-beta
>
>
> * {{LongAdder}} introduced to trunk consumes 9MiB of heap without user 
> schemas, and this will grow significantly under contention and user schemas 
> with many tables.  This is because {{LongAdder}} is a very heavy class 
> designed for single contended values.  
>  ** This can likely be improved significantly, without significant loss of 
> performance in the contended case, by simply increasing the size of our 
> primitive backing array and providing multiple buckets, with each thread 
> picking a bucket to increment, or simply multiple backing arrays.  Probably a 
> better way still to do this would be to introduce some competition detection 
> to the update, much like {{LongAdder}} utilises, that increases the number of 
> backing arrays under competition.
>  ** To save memory this approach could partition the space into chunks that 
> are likely to be updated together, so that we do not need to duplicate the 
> entire array under competition.
>  * Similarly, binary search is costly and a measurable cost as a share of the 
> new networking work (without filtering it was > 10% of the CPU used overall). 
>  We can compute an approximation floor(log2 n / log2 1.2) extremely cheaply, 
> to save the random memory access costs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15534) add mick's second (RSA) gpg key to the project's KEYS file

2020-01-30 Thread Michael Semb Wever (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Semb Wever updated CASSANDRA-15534:
---
Attachment: 15534.patch

> add mick's second (RSA) gpg key to the project's KEYS file
> --
>
> Key: CASSANDRA-15534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15534
> Project: Cassandra
>  Issue Type: Task
>  Components: Build
>Reporter: Michael Semb Wever
>Priority: Normal
> Attachments: 15534.patch
>
>
> Following on from CASSANDRA-15360... 
> Signing RPMs with `rpmsign` during the release process requires an RSA 
> key, so my existing DSA key from 15360 does not work with `rpmsign`.
> The patch adds my gpg public key to the project's KEYS file found at 
> https://dist.apache.org/repos/dist/release/cassandra/KEYS
> My gpg public key here has the fingerprint
>  A4C4 65FE A0C5 5256 1A39  2A61 E913 35D7 7E3E 87CB



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15534) add mick's second (RSA) gpg key to the project's KEYS file

2020-01-30 Thread Michael Semb Wever (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026769#comment-17026769
 ] 

Michael Semb Wever commented on CASSANDRA-15534:


Yes, you are correct [~mshuler].

My understanding was that an old DSA key of length 3072 was OK (for now), but the 
RPM signing has made it clearly evident that DSA should just be avoided.

Given my DSA key has never been used, the patch is updated (and I will redo the 
staged release artifacts).

> add mick's second (RSA) gpg key to the project's KEYS file
> --
>
> Key: CASSANDRA-15534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15534
> Project: Cassandra
>  Issue Type: Task
>  Components: Build
>Reporter: Michael Semb Wever
>Priority: Normal
> Attachments: 15534.patch
>
>
> Following on from CASSANDRA-15360... 
> Signing RPMs with `rpmsign` during the release process requires an RSA 
> key, so my existing DSA key from 15360 does not work with `rpmsign`.
> The patch adds my gpg public key to the project's KEYS file found at 
> https://dist.apache.org/repos/dist/release/cassandra/KEYS
> My gpg public key here has the fingerprint
>  A4C4 65FE A0C5 5256 1A39  2A61 E913 35D7 7E3E 87CB



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15534) add mick's second (RSA) gpg key to the project's KEYS file

2020-01-30 Thread Michael Semb Wever (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Semb Wever updated CASSANDRA-15534:
---
Attachment: (was: 15534.patch)

> add mick's second (RSA) gpg key to the project's KEYS file
> --
>
> Key: CASSANDRA-15534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15534
> Project: Cassandra
>  Issue Type: Task
>  Components: Build
>Reporter: Michael Semb Wever
>Priority: Normal
> Attachments: 15534.patch
>
>
> Following on from CASSANDRA-15360... 
> Signing RPMs with `rpmsign` during the release process requires an RSA 
> key, so my existing DSA key from 15360 does not work with `rpmsign`.
> The patch adds my gpg public key to the project's KEYS file found at 
> https://dist.apache.org/repos/dist/release/cassandra/KEYS
> My gpg public key here has the fingerprint
>  A4C4 65FE A0C5 5256 1A39  2A61 E913 35D7 7E3E 87CB



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-15535) Documentation gives the wrong instruction to activate remote jmx

2020-01-30 Thread jean carlo rivera ura (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jean carlo rivera ura reassigned CASSANDRA-15535:
-

Assignee: jean carlo rivera ura

> Documentation gives the wrong instruction to activate remote jmx
> 
>
> Key: CASSANDRA-15535
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15535
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation/Website
>Reporter: jean carlo rivera ura
>Assignee: jean carlo rivera ura
>Priority: Normal
>
> In this section, [jmx access 
> |https://cassandra.apache.org/doc/latest/operating/security.html?highlight=local_jmx#jmx-access],
>  in order to activate remote JMX access, it says to change the value of 
> LOCAL_JMX to *yes*. However, the right configuration is *no*.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-15535) Documentation gives the wrong instruction to activate remote jmx

2020-01-30 Thread jean carlo rivera ura (Jira)
jean carlo rivera ura created CASSANDRA-15535:
-

 Summary: Documentation gives the wrong instruction to activate 
remote jmx
 Key: CASSANDRA-15535
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15535
 Project: Cassandra
  Issue Type: Improvement
  Components: Documentation/Website
Reporter: jean carlo rivera ura


In this section, [jmx access 
|https://cassandra.apache.org/doc/latest/operating/security.html?highlight=local_jmx#jmx-access],
 in order to activate remote JMX access, it says to change the value of 
LOCAL_JMX to *yes*. However, the right configuration is *no*.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15307) Fix flakey test_remote_query - cql_test.TestCQLSlowQuery test

2020-01-30 Thread Ekaterina Dimitrova (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekaterina Dimitrova updated CASSANDRA-15307:

Test and Documentation Plan: 
[Patch|https://github.com/ekaterinadimitrova2/cassandra-dtest/tree/CASSANDRA-15307].
  [Pull request|https://github.com/ekaterinadimitrova2/cassandra-dtest/pull/2]
 Status: Patch Available  (was: In Progress)

> Fix flakey  test_remote_query - cql_test.TestCQLSlowQuery test
> --
>
> Key: CASSANDRA-15307
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15307
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest
>Reporter: Joey Lynch
>Assignee: Ekaterina Dimitrova
>Priority: Normal
> Fix For: 4.x
>
> Attachments: CASSANDRA-15307-fixed.txt, CASSANDRA-15307.txt
>
>
> Example failure: 
> [https://circleci.com/gh/jolynch/cassandra/554#tests/containers/61]
>  
> {noformat}
> Your job ran 959 tests with 1 failure
> - test_remote_query cql_test.TestCQLSlowQuerycql_test.py
> ccmlib.node.TimeoutError: 05 Sep 2019 23:05:07 [node2] Missing: ['operations 
> were slow', 'SELECT \\* FROM ks.test2 WHERE id = 1']: DEBUG [BatchlogTasks:1] 
> 2019-09-05 23:04:24,437 Ba. See debug.log for remainder
> self = 
> def test_remote_query(self):
> """
> Check that a query running on a node other than the coordinator 
> is reported as slow:
> 
> - populate the cluster with 2 nodes
> - start one node without having it join the ring
> - start the other one node with slow_query_log_timeout_in_ms set 
> to a small value
>   and the read request timeouts set to a large value (to ensure 
> the query is not aborted) and
>   read_iteration_delay set to a value big enough for the query to 
> exceed slow_query_log_timeout_in_ms
>   (this will cause read queries to take longer than the slow 
> query timeout)
> - CREATE a table
> - INSERT 5000 rows on a session on the node that is not a member 
> of the ring
> - run SELECT statements and check that the slow query messages 
> are present in the debug logs
>   (we cannot check the logs at info level because the no spam 
> logger has unpredictable results)
> 
> @jira_ticket CASSANDRA-12403
> """
> cluster = self.cluster
> 
> cluster.set_configuration_options(values={'slow_query_log_timeout_in_ms': 10,
>   'request_timeout_in_ms': 
> 12,
>   
> 'read_request_timeout_in_ms': 12,
>   
> 'range_request_timeout_in_ms': 12})
> 
> cluster.populate(2)
> node1, node2 = cluster.nodelist()
> 
> node1.start(wait_for_binary_proto=True, join_ring=False)  # ensure 
> other node executes queries
> node2.start(wait_for_binary_proto=True,
> jvm_args=["-Dcassandra.monitoring_report_interval_ms=10",
>   "-Dcassandra.test.read_iteration_delay_ms=1"])  
> # see above for explanation
> 
> session = self.patient_exclusive_cql_connection(node1)
> 
> create_ks(session, 'ks', 1)
> session.execute("""
> CREATE TABLE test2 (
> id int,
> col int,
> val text,
> PRIMARY KEY(id, col)
> );
> """)
> 
> for i, j in itertools.product(list(range(100)), list(range(10))):
> session.execute("INSERT INTO test2 (id, col, val) VALUES ({}, {}, 
> 'foo')".format(i, j))
> 
> # only check debug logs because at INFO level the no-spam logger has 
> unpredictable results
> mark = node2.mark_log(filename='debug.log')
> session.execute(SimpleStatement("SELECT * from test2",
> 
> consistency_level=ConsistencyLevel.ONE,
> 
> retry_policy=FallthroughRetryPolicy()))
> node2.watch_log_for(["operations were slow", "SELECT \* FROM 
> ks.test2"],
> from_mark=mark, filename='debug.log', timeout=60)
> 
> 
> mark = node2.mark_log(filename='debug.log')
> session.execute(SimpleStatement("SELECT * from test2 where id = 1",
> 
> consistency_level=ConsistencyLevel.ONE,
> 
> retry_policy=FallthroughRetryPolicy()))
> node2.watch_log_for(["operations were slow", "SELECT \* FROM ks.test2 
> WHERE id = 1"],
> >   

[jira] [Updated] (CASSANDRA-15307) Fix flakey test_remote_query - cql_test.TestCQLSlowQuery test

2020-01-30 Thread Ekaterina Dimitrova (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekaterina Dimitrova updated CASSANDRA-15307:

 Bug Category: Parent values: Correctness(12982)
   Complexity: Low Hanging Fruit
Discovered By: User Report
Fix Version/s: (was: 4.0-alpha)
   4.x
 Severity: Low
   Status: Open  (was: Triage Needed)

> Fix flakey  test_remote_query - cql_test.TestCQLSlowQuery test
> --
>
> Key: CASSANDRA-15307
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15307
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest
>Reporter: Joey Lynch
>Assignee: Ekaterina Dimitrova
>Priority: Normal
> Fix For: 4.x
>
> Attachments: CASSANDRA-15307-fixed.txt, CASSANDRA-15307.txt
>
>
> Example failure: 
> [https://circleci.com/gh/jolynch/cassandra/554#tests/containers/61]
>  
> {noformat}
> Your job ran 959 tests with 1 failure
> - test_remote_query cql_test.TestCQLSlowQuerycql_test.py
> ccmlib.node.TimeoutError: 05 Sep 2019 23:05:07 [node2] Missing: ['operations 
> were slow', 'SELECT \\* FROM ks.test2 WHERE id = 1']: DEBUG [BatchlogTasks:1] 
> 2019-09-05 23:04:24,437 Ba. See debug.log for remainder
> self = 
> def test_remote_query(self):
> """
> Check that a query running on a node other than the coordinator 
> is reported as slow:
> 
> - populate the cluster with 2 nodes
> - start one node without having it join the ring
> - start the other one node with slow_query_log_timeout_in_ms set 
> to a small value
>   and the read request timeouts set to a large value (to ensure 
> the query is not aborted) and
>   read_iteration_delay set to a value big enough for the query to 
> exceed slow_query_log_timeout_in_ms
>   (this will cause read queries to take longer than the slow 
> query timeout)
> - CREATE a table
> - INSERT 5000 rows on a session on the node that is not a member 
> of the ring
> - run SELECT statements and check that the slow query messages 
> are present in the debug logs
>   (we cannot check the logs at info level because the no spam 
> logger has unpredictable results)
> 
> @jira_ticket CASSANDRA-12403
> """
> cluster = self.cluster
> 
> cluster.set_configuration_options(values={'slow_query_log_timeout_in_ms': 10,
>   'request_timeout_in_ms': 
> 12,
>   
> 'read_request_timeout_in_ms': 12,
>   
> 'range_request_timeout_in_ms': 12})
> 
> cluster.populate(2)
> node1, node2 = cluster.nodelist()
> 
> node1.start(wait_for_binary_proto=True, join_ring=False)  # ensure 
> other node executes queries
> node2.start(wait_for_binary_proto=True,
> jvm_args=["-Dcassandra.monitoring_report_interval_ms=10",
>   "-Dcassandra.test.read_iteration_delay_ms=1"])  
> # see above for explanation
> 
> session = self.patient_exclusive_cql_connection(node1)
> 
> create_ks(session, 'ks', 1)
> session.execute("""
> CREATE TABLE test2 (
> id int,
> col int,
> val text,
> PRIMARY KEY(id, col)
> );
> """)
> 
> for i, j in itertools.product(list(range(100)), list(range(10))):
> session.execute("INSERT INTO test2 (id, col, val) VALUES ({}, {}, 
> 'foo')".format(i, j))
> 
> # only check debug logs because at INFO level the no-spam logger has 
> unpredictable results
> mark = node2.mark_log(filename='debug.log')
> session.execute(SimpleStatement("SELECT * from test2",
> 
> consistency_level=ConsistencyLevel.ONE,
> 
> retry_policy=FallthroughRetryPolicy()))
> node2.watch_log_for(["operations were slow", "SELECT \* FROM 
> ks.test2"],
> from_mark=mark, filename='debug.log', timeout=60)
> 
> 
> mark = node2.mark_log(filename='debug.log')
> session.execute(SimpleStatement("SELECT * from test2 where id = 1",
> 
> consistency_level=ConsistencyLevel.ONE,
> 
> retry_policy=FallthroughRetryPolicy()))
> node2.watch_log_for(["operations were slow", "SELECT \* FROM ks.test2 
> WHERE id = 1"],
> >   from_mark=mark, 

[jira] [Commented] (CASSANDRA-15307) Fix flakey test_remote_query - cql_test.TestCQLSlowQuery test

2020-01-30 Thread Ekaterina Dimitrova (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026739#comment-17026739
 ] 

Ekaterina Dimitrova commented on CASSANDRA-15307:
-

300 runs completed successfully. Attached is the output.

There are a couple of tests that show longer completion times because my laptop fell 
asleep.

[Patch|https://github.com/ekaterinadimitrova2/cassandra-dtest/tree/CASSANDRA-15307].
  [Pull request|https://github.com/ekaterinadimitrova2/cassandra-dtest/pull/2]

[~brandon.williams], can you, please, review it? Thanks

 

> Fix flakey  test_remote_query - cql_test.TestCQLSlowQuery test
> --
>
> Key: CASSANDRA-15307
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15307
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest
>Reporter: Joey Lynch
>Assignee: Ekaterina Dimitrova
>Priority: Normal
> Fix For: 4.0-alpha
>
> Attachments: CASSANDRA-15307-fixed.txt, CASSANDRA-15307.txt
>
>
> Example failure: 
> [https://circleci.com/gh/jolynch/cassandra/554#tests/containers/61]
>  
> {noformat}
> Your job ran 959 tests with 1 failure
> - test_remote_query cql_test.TestCQLSlowQuerycql_test.py
> ccmlib.node.TimeoutError: 05 Sep 2019 23:05:07 [node2] Missing: ['operations 
> were slow', 'SELECT \\* FROM ks.test2 WHERE id = 1']: DEBUG [BatchlogTasks:1] 
> 2019-09-05 23:04:24,437 Ba. See debug.log for remainder
> self = 
> def test_remote_query(self):
> """
> Check that a query running on a node other than the coordinator 
> is reported as slow:
> 
> - populate the cluster with 2 nodes
> - start one node without having it join the ring
> - start the other one node with slow_query_log_timeout_in_ms set 
> to a small value
>   and the read request timeouts set to a large value (to ensure 
> the query is not aborted) and
>   read_iteration_delay set to a value big enough for the query to 
> exceed slow_query_log_timeout_in_ms
>   (this will cause read queries to take longer than the slow 
> query timeout)
> - CREATE a table
> - INSERT 5000 rows on a session on the node that is not a member 
> of the ring
> - run SELECT statements and check that the slow query messages 
> are present in the debug logs
>   (we cannot check the logs at info level because the no spam 
> logger has unpredictable results)
> 
> @jira_ticket CASSANDRA-12403
> """
> cluster = self.cluster
> 
> cluster.set_configuration_options(values={'slow_query_log_timeout_in_ms': 10,
>   'request_timeout_in_ms': 
> 12,
>   
> 'read_request_timeout_in_ms': 12,
>   
> 'range_request_timeout_in_ms': 12})
> 
> cluster.populate(2)
> node1, node2 = cluster.nodelist()
> 
> node1.start(wait_for_binary_proto=True, join_ring=False)  # ensure 
> other node executes queries
> node2.start(wait_for_binary_proto=True,
> jvm_args=["-Dcassandra.monitoring_report_interval_ms=10",
>   "-Dcassandra.test.read_iteration_delay_ms=1"])  
> # see above for explanation
> 
> session = self.patient_exclusive_cql_connection(node1)
> 
> create_ks(session, 'ks', 1)
> session.execute("""
> CREATE TABLE test2 (
> id int,
> col int,
> val text,
> PRIMARY KEY(id, col)
> );
> """)
> 
> for i, j in itertools.product(list(range(100)), list(range(10))):
> session.execute("INSERT INTO test2 (id, col, val) VALUES ({}, {}, 
> 'foo')".format(i, j))
> 
> # only check debug logs because at INFO level the no-spam logger has 
> unpredictable results
> mark = node2.mark_log(filename='debug.log')
> session.execute(SimpleStatement("SELECT * from test2",
> 
> consistency_level=ConsistencyLevel.ONE,
> 
> retry_policy=FallthroughRetryPolicy()))
> node2.watch_log_for(["operations were slow", "SELECT \* FROM 
> ks.test2"],
> from_mark=mark, filename='debug.log', timeout=60)
> 
> 
> mark = node2.mark_log(filename='debug.log')
> session.execute(SimpleStatement("SELECT * from test2 where id = 1",
> 
> consistency_level=ConsistencyLevel.ONE,
> 
> 

[jira] [Comment Edited] (CASSANDRA-15307) Fix flakey test_remote_query - cql_test.TestCQLSlowQuery test

2020-01-30 Thread Ekaterina Dimitrova (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026739#comment-17026739
 ] 

Ekaterina Dimitrova edited comment on CASSANDRA-15307 at 1/30/20 2:52 PM:
--

300 runs completed successfully. Attached is the output.

There are a couple of tests that show longer completion times because my laptop fell 
asleep.

[Patch|https://github.com/ekaterinadimitrova2/cassandra-dtest/tree/CASSANDRA-15307].
  [Pull request|https://github.com/ekaterinadimitrova2/cassandra-dtest/pull/2]

[~brandon.williams], can you, please, review it? Thanks


was (Author: e.dimitrova):
300 runs completed successfully. Attached is the output.

There are a couple of tests that show longer completion as my laptop fall 
asleep. 

[Patch|https://github.com/ekaterinadimitrova2/cassandra-dtest/tree/CASSANDRA-15307].
  [Pull request|https://github.com/ekaterinadimitrova2/cassandra-dtest/pull/2]

[~brandon.williams], can you, please, review it? Thanks

 

> Fix flakey  test_remote_query - cql_test.TestCQLSlowQuery test
> --
>
> Key: CASSANDRA-15307
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15307
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest
>Reporter: Joey Lynch
>Assignee: Ekaterina Dimitrova
>Priority: Normal
> Fix For: 4.0-alpha
>
> Attachments: CASSANDRA-15307-fixed.txt, CASSANDRA-15307.txt
>
>
> Example failure: 
> [https://circleci.com/gh/jolynch/cassandra/554#tests/containers/61]
>  
> {noformat}
> Your job ran 959 tests with 1 failure
> - test_remote_query cql_test.TestCQLSlowQuerycql_test.py
> ccmlib.node.TimeoutError: 05 Sep 2019 23:05:07 [node2] Missing: ['operations 
> were slow', 'SELECT \\* FROM ks.test2 WHERE id = 1']: DEBUG [BatchlogTasks:1] 
> 2019-09-05 23:04:24,437 Ba. See debug.log for remainder
> self = 
> def test_remote_query(self):
> """
> Check that a query running on a node other than the coordinator 
> is reported as slow:
> 
> - populate the cluster with 2 nodes
> - start one node without having it join the ring
> - start the other one node with slow_query_log_timeout_in_ms set 
> to a small value
>   and the read request timeouts set to a large value (to ensure 
> the query is not aborted) and
>   read_iteration_delay set to a value big enough for the query to 
> exceed slow_query_log_timeout_in_ms
>   (this will cause read queries to take longer than the slow 
> query timeout)
> - CREATE a table
> - INSERT 5000 rows on a session on the node that is not a member 
> of the ring
> - run SELECT statements and check that the slow query messages 
> are present in the debug logs
>   (we cannot check the logs at info level because the no spam 
> logger has unpredictable results)
> 
> @jira_ticket CASSANDRA-12403
> """
> cluster = self.cluster
> 
> cluster.set_configuration_options(values={'slow_query_log_timeout_in_ms': 10,
>   'request_timeout_in_ms': 
> 12,
>   
> 'read_request_timeout_in_ms': 12,
>   
> 'range_request_timeout_in_ms': 12})
> 
> cluster.populate(2)
> node1, node2 = cluster.nodelist()
> 
> node1.start(wait_for_binary_proto=True, join_ring=False)  # ensure 
> other node executes queries
> node2.start(wait_for_binary_proto=True,
> jvm_args=["-Dcassandra.monitoring_report_interval_ms=10",
>   "-Dcassandra.test.read_iteration_delay_ms=1"])  
> # see above for explanation
> 
> session = self.patient_exclusive_cql_connection(node1)
> 
> create_ks(session, 'ks', 1)
> session.execute("""
> CREATE TABLE test2 (
> id int,
> col int,
> val text,
> PRIMARY KEY(id, col)
> );
> """)
> 
> for i, j in itertools.product(list(range(100)), list(range(10))):
> session.execute("INSERT INTO test2 (id, col, val) VALUES ({}, {}, 
> 'foo')".format(i, j))
> 
> # only check debug logs because at INFO level the no-spam logger has 
> unpredictable results
> mark = node2.mark_log(filename='debug.log')
> session.execute(SimpleStatement("SELECT * from test2",
> 
> consistency_level=ConsistencyLevel.ONE,
> 
> retry_policy=FallthroughRetryPolicy()))
> 

[jira] [Commented] (CASSANDRA-15534) add mick's second (RSA) gpg key to the project's KEYS file

2020-01-30 Thread Michael Shuler (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026738#comment-17026738
 ] 

Michael Shuler commented on CASSANDRA-15534:


Shouldn't the DSA key that was added to KEYS be removed?

My understanding of the [Signing 
Releases|https://www.apache.org/dev/release-signing.html] doc suggests the DSA 
key should not have been added to the project KEYS file.

> add mick's second (RSA) gpg key to the project's KEYS file
> --
>
> Key: CASSANDRA-15534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15534
> Project: Cassandra
>  Issue Type: Task
>  Components: Build
>Reporter: Michael Semb Wever
>Priority: Normal
> Attachments: 15534.patch
>
>
> Following on from CASSANDRA-15360... 
> Signing RPMs with `rpmsign` during the release process requires an RSA 
> key, so my existing DSA key from 15360 does not work with `rpmsign`.
> The patch adds my gpg public key to the project's KEYS file found at 
> https://dist.apache.org/repos/dist/release/cassandra/KEYS
> My gpg public key here has the fingerprint
>  A4C4 65FE A0C5 5256 1A39  2A61 E913 35D7 7E3E 87CB



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14941) Expired secondary index sstables are not promptly discarded under TWCS

2020-01-30 Thread Samuel Klock (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026736#comment-17026736
 ] 

Samuel Klock commented on CASSANDRA-14941:
--

Pinging again.  If nothing else, it'd be helpful to have confirmation that the 
tweak in the description is safe or, if not, whether it'd be better to make a 
modification to {{Memtable}} instead.  Thanks!

> Expired secondary index sstables are not promptly discarded under TWCS
> --
>
> Key: CASSANDRA-14941
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14941
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/2i Index
>Reporter: Samuel Klock
>Assignee: Marcus Eriksson
>Priority: Normal
>
> We have a table in a cluster running 3.0.17 storing roughly time-series data 
> using TWCS with a secondary index. We've noticed that while expired sstables 
> for the table are discarded mostly when we expect them to be, the expired 
> sstables for the secondary index would linger for weeks longer than expected 
> – essentially indefinitely. Eventually the sstables would fill disks, which 
> would require manual steps (deleting ancient index sstables) to address. We 
> verified with {{sstableexpiredblockers}} that there wasn't anything on disk 
> blocking the expired sstables from being dropped, so this looks like a bug.
> Through some debugging, we traced the problem to the index's memtables, which 
> were consistently (except _just_ after node restarts) reporting a minimum 
> timestamp from September 2015 – much older than any of our live data – which 
> causes {{CompactionController.getFullyExpiredSSTables()}} to consistently 
> return an empty set. The reason that the index sstables report this minimum 
> timestamp is because of how index updates are created, using 
> {{PartitionUpdate.singleRowUpdate()}}:
> {code:java}
> public static PartitionUpdate singleRowUpdate(CFMetaData metadata, 
> DecoratedKey key, Row row, Row staticRow)
> {
> MutableDeletionInfo deletionInfo = MutableDeletionInfo.live();
> Holder holder = new Holder(
> new PartitionColumns(
> staticRow == null ? Columns.NONE : 
> Columns.from(staticRow.columns()),
> row == null ? Columns.NONE : Columns.from(row.columns())
> ),
> row == null ? BTree.empty() : BTree.singleton(row),
> deletionInfo,
> staticRow == null ? Rows.EMPTY_STATIC_ROW : staticRow,
> EncodingStats.NO_STATS
> );
> return new PartitionUpdate(metadata, key, holder, deletionInfo, 
> false);
> }
> {code}
> The use of {{EncodingStats.NO_STATS}} makes it appear as though the earliest 
> timestamp in the resulting {{PartitionUpdate}} is from September 2015. That 
> timestamp becomes the minimum for the memtable.
> Modifying this version of {{PartitionUpdate.singleRowUpdate()}} to:
> {code:java}
> public static PartitionUpdate singleRowUpdate(CFMetaData metadata, 
> DecoratedKey key, Row row, Row staticRow)
> {
> MutableDeletionInfo deletionInfo = MutableDeletionInfo.live();
> staticRow = (staticRow == null ? Rows.EMPTY_STATIC_ROW : staticRow);
> EncodingStats stats = EncodingStats.Collector.collect(staticRow,
>   (row == null ?
>
> Collections.emptyIterator() :
>
> Iterators.singletonIterator(row)),
>   deletionInfo);
> Holder holder = new Holder(
> new PartitionColumns(
> staticRow == Rows.EMPTY_STATIC_ROW ? Columns.NONE : 
> Columns.from(staticRow.columns()),
> row == null ? Columns.NONE : Columns.from(row.columns())
> ),
> row == null ? BTree.empty() : BTree.singleton(row),
> deletionInfo,
> staticRow,
> stats
> );
> return new PartitionUpdate(metadata, key, holder, deletionInfo, 
> false);
> }
> {code}
> (i.e., computing an {{EncodingStats}} from the contents of the update) seems 
> to fix the problem. However, we're not certain whether A) there's a 
> functional reason the method was using {{EncodingStats.NO_STATS}} previously 
> or B) whether the {{EncodingStats}} the revised version creates is correct 
> (in particular, the use of {{deletionInfo}} feels a little suspect). We're 
> also not sure whether there's a more appropriate fix (e.g., changing how the 
> memtables compute the minimum timestamp, particularly in the {{NO_STATS}} 
> case).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (CASSANDRA-14415) Performance regression in queries for distinct keys

2020-01-30 Thread Samuel Klock (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026732#comment-17026732
 ] 

Samuel Klock commented on CASSANDRA-14415:
--

Pinging again.  Are there any remaining blockers?

> Performance regression in queries for distinct keys
> ---
>
> Key: CASSANDRA-14415
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14415
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Local Write-Read Paths
>Reporter: Samuel Klock
>Assignee: Samuel Klock
>Priority: Normal
>  Labels: performance
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> Running Cassandra 3.0.16, we observed a major performance regression 
> affecting {{SELECT DISTINCT keys}}-style queries against certain tables.  
> Based on some investigation (guided by some helpful feedback from Benjamin on 
> the dev list), we tracked the regression down to two problems.
>  * One is that Cassandra was reading more data from disk than was necessary 
> to satisfy the query.  This was fixed under CASSANDRA-10657 in a later 3.x 
> release.
>  * If the fix for CASSANDRA-10657 is incorporated, the other is this code 
> snippet in {{RebufferingInputStream}}:
> {code:java}
>     @Override
>     public int skipBytes(int n) throws IOException
>     {
>     if (n < 0)
>     return 0;
>     int requested = n;
>     int position = buffer.position(), limit = buffer.limit(), remaining;
>     while ((remaining = limit - position) < n)
>     {
>     n -= remaining;
>     buffer.position(limit);
>     reBuffer();
>     position = buffer.position();
>     limit = buffer.limit();
>     if (position == limit)
>     return requested - n;
>     }
>     buffer.position(position + n);
>     return requested;
>     }
> {code}
> The gist of it is that to skip bytes, the stream needs to read those bytes 
> into memory then throw them away.  In our tests, we were spending a lot of 
> time in this method, so it looked like the chief drag on performance.
> We noticed that the subclass of {{RebufferingInputStream}} in use for our 
> queries, {{RandomAccessReader}} (over compressed sstables), implements a 
> {{seek()}} method.  Overriding {{skipBytes()}} in it to use {{seek()}} 
> instead was sufficient to fix the performance regression.
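
To make the seek-based fix concrete, here is a minimal, self-contained sketch using a plain 
java.io.RandomAccessFile. It illustrates the technique only; it is not the actual 
RandomAccessReader patch, and the file path argument is hypothetical. The point is that the 
cost of skipBytes() becomes independent of how many bytes are skipped, since no data is read 
or decompressed along the way.

{code:java}
import java.io.IOException;
import java.io.RandomAccessFile;

// Sketch only: a seek-capable reader whose skipBytes() repositions the file
// pointer instead of reading (and possibly decompressing) the skipped bytes.
public class SeekingReader implements AutoCloseable
{
    private final RandomAccessFile file;

    public SeekingReader(String path) throws IOException
    {
        this.file = new RandomAccessFile(path, "r");
    }

    // Skip by seeking; returns the number of bytes actually skipped.
    public long skipBytes(long n) throws IOException
    {
        if (n <= 0)
            return 0;
        long position = file.getFilePointer();
        long target = Math.min(position + n, file.length());
        file.seek(target);
        return target - position;
    }

    public int read() throws IOException
    {
        return file.read();
    }

    @Override
    public void close() throws IOException
    {
        file.close();
    }

    // Hypothetical usage: jump past a large value we do not need, then keep reading.
    public static void main(String[] args) throws IOException
    {
        try (SeekingReader reader = new SeekingReader(args[0]))
        {
            long skipped = reader.skipBytes(1 << 20);
            System.out.println("skipped " + skipped + " bytes, next byte: " + reader.read());
        }
    }
}
{code}
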
> The performance difference is significant for tables with large values.  It's 
> straightforward to evaluate with very simple key-value tables, e.g.:
> {{CREATE TABLE testtable (key TEXT PRIMARY KEY, value BLOB);}}
> We did some basic experimentation with the following variations (all in a 
> single-node 3.11.2 cluster with off-the-shelf settings running on a dev 
> workstation):
>  * small values (1 KB, 100,000 entries), somewhat larger values (25 KB, 
> 10,000 entries), and much larger values (1 MB, 10,000 entries);
>  * compressible data (a single byte repeated) and uncompressible data (output 
> from {{openssl rand $bytes}}); and
>  * with and without sstable compression.  (With compression, we use 
> Cassandra's defaults.)
> The difference is most conspicuous for tables with large, uncompressible data 
> and sstable decompression (which happens to describe the use case that 
> triggered our investigation).  It is smaller but still readily apparent for 
> tables with effective compression.  For uncompressible data without 
> compression enabled, there is no appreciable difference.
> Here's what the performance looks like without our patch for the 1-MB entries 
> (times in seconds, five consecutive runs for each data set, all exhausting 
> the results from a {{SELECT DISTINCT key FROM ...}} query with a page size of 
> 24):
> {noformat}
> working on compressible
> 5.21180510521
> 5.10270500183
> 5.22311806679
> 4.6732840538
> 4.84219098091
> working on uncompressible_uncompressed
> 55.0423607826
> 0.769015073776
> 0.850513935089
> 0.713396072388
> 0.62596988678
> working on uncompressible
> 413.292617083
> 231.345913887
> 449.524993896
> 425.135111094
> 243.469946861
> {noformat}
> and with the fix:
> {noformat}
> working on compressible
> 2.86733293533
> 1.24895811081
> 1.108907938
> 1.12742400169
> 1.04647302628
> working on uncompressible_uncompressed
> 56.4146180153
> 0.895509958267
> 0.922824144363
> 0.772884130478
> 0.731923818588
> working on uncompressible
> 64.4587619305
> 1.81325793266
> 1.52577018738
> 1.41769099236
> 1.60442209244
> {noformat}
> The long initial runs for the uncompressible data presumably come from 
> repeatedly hitting the disk.  In contrast to the runs without the fix, the 
> initial runs seem to be effective at warming the page cache (as lots of data 
> is skipped, so the data that's read can fit in memory), so subsequent runs 
> are 

[jira] [Updated] (CASSANDRA-15307) Fix flakey test_remote_query - cql_test.TestCQLSlowQuery test

2020-01-30 Thread Ekaterina Dimitrova (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekaterina Dimitrova updated CASSANDRA-15307:

Attachment: CASSANDRA-15307-fixed.txt

> Fix flakey  test_remote_query - cql_test.TestCQLSlowQuery test
> --
>
> Key: CASSANDRA-15307
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15307
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest
>Reporter: Joey Lynch
>Assignee: Ekaterina Dimitrova
>Priority: Normal
> Fix For: 4.0-alpha
>
> Attachments: CASSANDRA-15307-fixed.txt, CASSANDRA-15307.txt
>
>
> Example failure: 
> [https://circleci.com/gh/jolynch/cassandra/554#tests/containers/61]
>  
> {noformat}
> Your job ran 959 tests with 1 failure
> - test_remote_query cql_test.TestCQLSlowQuerycql_test.py
> ccmlib.node.TimeoutError: 05 Sep 2019 23:05:07 [node2] Missing: ['operations 
> were slow', 'SELECT \\* FROM ks.test2 WHERE id = 1']: DEBUG [BatchlogTasks:1] 
> 2019-09-05 23:04:24,437 Ba. See debug.log for remainder
> self = 
> def test_remote_query(self):
> """
> Check that a query running on a node other than the coordinator 
> is reported as slow:
> 
> - populate the cluster with 2 nodes
> - start one node without having it join the ring
> - start the other one node with slow_query_log_timeout_in_ms set 
> to a small value
>   and the read request timeouts set to a large value (to ensure 
> the query is not aborted) and
>   read_iteration_delay set to a value big enough for the query to 
> exceed slow_query_log_timeout_in_ms
>   (this will cause read queries to take longer than the slow 
> query timeout)
> - CREATE a table
> - INSERT 5000 rows on a session on the node that is not a member 
> of the ring
> - run SELECT statements and check that the slow query messages 
> are present in the debug logs
>   (we cannot check the logs at info level because the no spam 
> logger has unpredictable results)
> 
> @jira_ticket CASSANDRA-12403
> """
> cluster = self.cluster
> 
> cluster.set_configuration_options(values={'slow_query_log_timeout_in_ms': 10,
>   'request_timeout_in_ms': 
> 12,
>   
> 'read_request_timeout_in_ms': 12,
>   
> 'range_request_timeout_in_ms': 12})
> 
> cluster.populate(2)
> node1, node2 = cluster.nodelist()
> 
> node1.start(wait_for_binary_proto=True, join_ring=False)  # ensure 
> other node executes queries
> node2.start(wait_for_binary_proto=True,
> jvm_args=["-Dcassandra.monitoring_report_interval_ms=10",
>   "-Dcassandra.test.read_iteration_delay_ms=1"])  
> # see above for explanation
> 
> session = self.patient_exclusive_cql_connection(node1)
> 
> create_ks(session, 'ks', 1)
> session.execute("""
> CREATE TABLE test2 (
> id int,
> col int,
> val text,
> PRIMARY KEY(id, col)
> );
> """)
> 
> for i, j in itertools.product(list(range(100)), list(range(10))):
> session.execute("INSERT INTO test2 (id, col, val) VALUES ({}, {}, 
> 'foo')".format(i, j))
> 
> # only check debug logs because at INFO level the no-spam logger has 
> unpredictable results
> mark = node2.mark_log(filename='debug.log')
> session.execute(SimpleStatement("SELECT * from test2",
> 
> consistency_level=ConsistencyLevel.ONE,
> 
> retry_policy=FallthroughRetryPolicy()))
> node2.watch_log_for(["operations were slow", "SELECT \* FROM 
> ks.test2"],
> from_mark=mark, filename='debug.log', timeout=60)
> 
> 
> mark = node2.mark_log(filename='debug.log')
> session.execute(SimpleStatement("SELECT * from test2 where id = 1",
> 
> consistency_level=ConsistencyLevel.ONE,
> 
> retry_policy=FallthroughRetryPolicy()))
> node2.watch_log_for(["operations were slow", "SELECT \* FROM ks.test2 
> WHERE id = 1"],
> >   from_mark=mark, filename='debug.log', timeout=60)
> cql_test.py:1150: 
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> _ 
> self = 
> exprs = ['operations were slow', 'SELECT \\* FROM ks.test2 

[jira] [Updated] (CASSANDRA-15529) AbstractLocalAwareExecutorService.java exceptions after upgrade from 2.1.16 to 3.11.4

2020-01-30 Thread Pooja Nair (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pooja Nair updated CASSANDRA-15529:
---
Description: 
Hello Team, 

We have a cluster running on Cassandra 3.11.4.

Following is the schema of the tables being used in our system.
{code:java}
cqlsh> desc KEYSPACE "SAL"
  
  CREATE KEYSPACE "SAL" WITH replication = {'class': 'NetworkTopologyStrategy', 
'DC_EAST': '3', 'DC_WEST': '3'}  AND durable_writes = true;
  
  CREATE TABLE "SAL".sal_purge (
  key text,
  column1 text,
  column2 text,
  value text,
  PRIMARY KEY (key, column1, column2)
  ) WITH COMPACT STORAGE
  AND CLUSTERING ORDER BY (column1 ASC, column2 ASC)
  AND bloom_filter_fp_chance = 0.1
  AND caching = '{"keys":"NONE", "rows_per_partition":"NONE"}'
  AND comment = 'Holds items to be removed as [shardid][salid][timestamp]. 
The table records SALIDs to be deleted along with their deletion times (which 
may be modified)'
  AND compaction = {'class': 
'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
  AND compression = {'chunk_length_kb': '64', 'sstable_compression': 
'org.apache.cassandra.io.compress.SnappyCompressor'}
  AND dclocal_read_repair_chance = 0.0
  AND default_time_to_live = 0
  AND gc_grace_seconds = 864000
  AND max_index_interval = 2048
  AND memtable_flush_period_in_ms = 0
  AND min_index_interval = 128
  AND read_repair_chance = 0.1
  AND speculative_retry = '99.0PERCENTILE';
  
  CREATE TABLE "SAL".sal_ref (
  key text,
  column1 text,
  column2 text,
  value text,
  PRIMARY KEY (key, column1, column2)
  ) WITH COMPACT STORAGE
  AND CLUSTERING ORDER BY (column1 ASC, column2 ASC)
  AND bloom_filter_fp_chance = 0.025
  AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
  AND comment = 'Holds owner references to content as [salid][lcid/opid]'
  AND compaction = {'sstable_size_in_mb': '180', 'class': 
'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
  AND compression = {'chunk_length_kb': '64', 'sstable_compression': 
'org.apache.cassandra.io.compress.SnappyCompressor'}
  AND dclocal_read_repair_chance = 0.0
  AND default_time_to_live = 0
  AND gc_grace_seconds = 864000
  AND max_index_interval = 2048
  AND memtable_flush_period_in_ms = 0
  AND min_index_interval = 128
  AND read_repair_chance = 0.0
  AND speculative_retry = '99.0PERCENTILE';

{code}
Things to note:
 # column2 is always passed a null value during insertion
 # column2 is part of the primary key
 # Range selects and range deletes are done through our app.

Initially the cluster was on Cassandra version 2.1.16 and it was recently 
upgraded to 3.11.4. Post-upgrade, we see that the nodes are going down and 
log the below exceptions during startup and even after the node is up. This one 
node is causing the whole cluster to behave improperly.


{code:java}
WARN [Native-Transport-Requests-47] 2020-01-29 13:49:05,190 
AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread 
Thread[Native-Transport-Requests-47,5,main]: {} java.lang.RuntimeException: 
java.lang.IllegalStateException: UnfilteredRowIterator for SAL.sal_purge has an 
open RT bound as its last item at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2588)
 ~[apache-cassandra-3.11.4.jar:3.11.4] at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[na:1.8.0-internal] at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
 [apache-cassandra-3.11.4.jar:3.11.4] at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:134)
 [apache-cassandra-3.11.4.jar:3.11.4] at 
org.apache.cassandra.concurrent.SEPExecutor.maybeExecuteImmediately(SEPExecutor.java:194)
 [apache-cassandra-3.11.4.jar:3.11.4] at 
org.apache.cassandra.service.AbstractReadExecutor.makeRequests(AbstractReadExecutor.java:117)
 [apache-cassandra-3.11.4.jar:3.11.4] at 
org.apache.cassandra.service.AbstractReadExecutor.makeDataRequests(AbstractReadExecutor.java:85)
 [apache-cassandra-3.11.4.jar:3.11.4] at 
org.apache.cassandra.service.AbstractReadExecutor$SpeculatingReadExecutor.executeAsync(AbstractReadExecutor.java:271)
 [apache-cassandra-3.11.4.jar:3.11.4] at 
org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.doInitialQueries(StorageProxy.java:1778)
 [apache-cassandra-3.11.4.jar:3.11.4] at 
org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1731) 
[apache-cassandra-3.11.4.jar:3.11.4] at 
org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1671) 
[apache-cassandra-3.11.4.jar:3.11.4] at 

[jira] [Commented] (CASSANDRA-15213) DecayingEstimatedHistogramReservoir Inefficiencies

2020-01-30 Thread Benedict Elliott Smith (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026615#comment-17026615
 ] 

Benedict Elliott Smith commented on CASSANDRA-15213:


Awesome.  I think we're ready.  I've pushed one last set of minor suggestions, 
that _should_ permit C2 to produce branchless code for the calculation, at the 
expense only of values 0 through 8 being slower to compute (since these are 
likely extremely uncommon, this is probably preferable).  I think it would 
still be possible to reduce the total work by a few instructions with some time 
to think, but probably not worth it.

Entirely up to you if you prefer to use this suggestion, as it's not 
particularly important, and it's very hard to objectively determine the effect 
(since it will depend on branch predictor pollution).  Once you've decided, 
I'll get this committed.

> DecayingEstimatedHistogramReservoir Inefficiencies
> --
>
> Key: CASSANDRA-15213
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15213
> Project: Cassandra
>  Issue Type: Bug
>  Components: Observability/Metrics
>Reporter: Benedict Elliott Smith
>Assignee: Jordan West
>Priority: Normal
> Fix For: 4.0-beta
>
>
> * {{LongAdder}} introduced to trunk consumes 9MiB of heap without user 
> schemas, and this will grow significantly under contention and user schemas 
> with many tables.  This is because {{LongAdder}} is a very heavy class 
> designed for single contended values.  
>  ** This can likely be improved significantly, without significant loss of 
> performance in the contended case, by simply increasing the size of our 
> primitive backing array and providing multiple buckets, with each thread 
> picking a bucket to increment, or simply multiple backing arrays.  Probably a 
> better way still to do this would be to introduce some competition detection 
> to the update, much like {{LongAdder}} utilises, that increases the number of 
> backing arrays under competition.
>  ** To save memory this approach could partition the space into chunks that 
> are likely to be updated together, so that we do not need to duplicate the 
> entire array under competition.
>  * Similarly, binary search is costly and a measurable cost as a share of the 
> new networking work (without filtering it was > 10% of the CPU used overall). 
>  We can compute an approximation floor(log2 n / log2 1.2) extremely cheaply, 
> to save the random memory access costs.
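
For concreteness, here is a toy sketch of the cheap index approximation mentioned in the 
second bullet above (floor(log2 n / log2 1.2)), computing the integer log2 from the value's 
bit length. This is an assumption-laden illustration, not the change committed for this 
ticket; a real implementation would still need to check and correct the result against the 
actual bucket boundaries.

{code:java}
// Toy sketch of the cheap bucket-index approximation: floor(log2(n) / log2(1.2)).
// Not the committed DecayingEstimatedHistogramReservoir change; real code must
// still verify or adjust the result against the actual bucket offsets.
public final class BucketIndexApprox
{
    private static final double LOG2_1_2 = Math.log(1.2) / Math.log(2.0);

    public static int approxIndex(long value)
    {
        if (value <= 1)
            return 0;
        int log2 = 63 - Long.numberOfLeadingZeros(value); // floor(log2(value)), no FP math
        return (int) (log2 / LOG2_1_2);                   // rescale into base-1.2 buckets
    }

    public static void main(String[] args)
    {
        for (long v : new long[] { 1L, 10L, 1_000L, 1_000_000L })
            System.out.println(v + " -> bucket ~" + approxIndex(v));
    }
}
{code}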



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14740) BlockingReadRepair does not maintain monotonicity during range movements

2020-01-30 Thread Benedict Elliott Smith (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026597#comment-17026597
 ] 

Benedict Elliott Smith commented on CASSANDRA-14740:


[CircleCI|https://circleci.com/workflow-run/1e09aaed-8345-484f-bc9d-a9b018005520]
 looks clean (I think? I'm losing faith in my ability to understand its UX, or 
its ability to understand our output)



> BlockingReadRepair does not maintain monotonicity during range movements
> 
>
> Key: CASSANDRA-14740
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14740
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Coordination
>Reporter: Benedict Elliott Smith
>Assignee: Benedict Elliott Smith
>Priority: Urgent
>  Labels: correctness
> Fix For: 4.0, 4.0-beta
>
>
> The BlockingReadRepair code introduced by CASSANDRA-10726 requires that each 
> of the queried nodes are written to, but pending nodes are not considered.  
> If there is a pending range movement, one of these writes may be ‘lost’ when 
> the range movement completes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15258) Cassandra Windows JDK11 not working

2020-01-30 Thread Yuki Morishita (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026577#comment-17026577
 ] 

Yuki Morishita commented on CASSANDRA-15258:


jvm.options is now split into several files (jvm-server.options, 
jvm-client.options, etc.) for 4.0, so the cassandra.ps1 file needs to be updated in 
order to support running 4.0 on Windows.

Let me work on a patch for that.

[~gus], the original description of this issue is for the Windows start-up script, 
so I will only fix that part. You should open another ticket if that is not already done.

> Cassandra Windows JDK11  not working
> 
>
> Key: CASSANDRA-15258
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15258
> Project: Cassandra
>  Issue Type: Bug
>  Components: Build, Local/Startup and Shutdown
>Reporter: RamyaK
>Assignee: Yuki Morishita
>Priority: Urgent
>  Labels: windows
>
> I'm trying to set up Cassandra 4.0 trunk with OpenJDK 11, but I am getting the below 
> error on startup.
>  
> + $content = Get-Content "$env:CASSANDRA_CONF\jvm.options"
> +    ~
>     + CategoryInfo  : ObjectNotFound: 
> (D:\Stuff\save\C...onf\jvm.options:String) [Get-Content], 
> ItemNotFoundException
>     + FullyQualifiedErrorId : 
> PathNotFound,Microsoft.PowerShell.Commands.GetContentCommand
> Also, JVM_VERSION is 11, yet it is still showing:
> Cassandra 4.0 requires either Java 8 (update 151 or newer) or Java 11 (or 
> newer). Java 11 is not supported.
>  
>   Please suggest.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-15258) Cassandra Windows JDK11 not working

2020-01-30 Thread Yuki Morishita (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita reassigned CASSANDRA-15258:
--

Assignee: Yuki Morishita

> Cassandra Windows JDK11  not working
> 
>
> Key: CASSANDRA-15258
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15258
> Project: Cassandra
>  Issue Type: Bug
>  Components: Build, Local/Startup and Shutdown
>Reporter: RamyaK
>Assignee: Yuki Morishita
>Priority: Urgent
>  Labels: windows
>
> I'm trying to set up Cassandra 4.0 trunk with OpenJDK 11, but I am getting the below 
> error on startup.
>  
> + $content = Get-Content "$env:CASSANDRA_CONF\jvm.options"
> +    ~
>     + CategoryInfo  : ObjectNotFound: 
> (D:\Stuff\save\C...onf\jvm.options:String) [Get-Content], 
> ItemNotFoundException
>     + FullyQualifiedErrorId : 
> PathNotFound,Microsoft.PowerShell.Commands.GetContentCommand
> Also, JVM_VERSION is 11, yet it is still showing:
> Cassandra 4.0 requires either Java 8 (update 151 or newer) or Java 11 (or 
> newer). Java 11 is not supported.
>  
>   Please suggest.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15079) Secondary Index not returning complete data

2020-01-30 Thread Chakravarthi Manepalli (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026570#comment-17026570
 ] 

Chakravarthi Manepalli commented on CASSANDRA-15079:


[~ifesdjeen], I do not have any backups of that state. I will try to recreate it 
from scratch, which may take some time, but I will keep you posted. Please let 
me know if you need any particular information (configuration, strategies, logs, 
or SSTable .db files) so that I can keep track of it, enable the relevant 
properties, and provide it here for easier debugging.

> Secondary Index not returning complete data
> ---
>
> Key: CASSANDRA-15079
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15079
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/2i Index
>Reporter: Chakravarthi Manepalli
>Priority: High
>  Labels: performance
> Attachments: missing_data_cassandra.png
>
>
> Queries that use a secondary index do not return complete data; some of the 
> rows are missing. After dropping the index and creating it again, the query 
> worked fine.
> Observation: the missing data entry was last edited 20 days ago.
> I suspect that older data may not be getting indexed properly through 
> secondary indexes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15079) Secondary Index not returning complete data

2020-01-30 Thread Alex Petrov (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026558#comment-17026558
 ] 

Alex Petrov commented on CASSANDRA-15079:
-

I'm afraid it might be rather difficult to debug or understand what's going on 
if you've already dropped and re-created the index. Do you perhaps have the 
state of the target node or nodes preserved in a backup somewhere?

> Secondary Index not returning complete data
> ---
>
> Key: CASSANDRA-15079
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15079
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/2i Index
>Reporter: Chakravarthi Manepalli
>Priority: High
>  Labels: performance
> Attachments: missing_data_cassandra.png
>
>
> Queries that use a secondary index do not return complete data; some of the 
> rows are missing. After dropping the index and creating it again, the query 
> worked fine.
> Observation: the missing data entry was last edited 20 days ago.
> I suspect that older data may not be getting indexed properly through 
> secondary indexes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15534) add mick's second (RSA) gpg key to the project's KEYS file

2020-01-30 Thread Michael Semb Wever (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Semb Wever updated CASSANDRA-15534:
---
Attachment: 15534.patch

> add mick's second (RSA) gpg key to the project's KEYS file
> --
>
> Key: CASSANDRA-15534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15534
> Project: Cassandra
>  Issue Type: Task
>  Components: Build
>Reporter: Michael Semb Wever
>Priority: Normal
> Attachments: 15534.patch
>
>
> Following on from CASSANDRA-15360... 
> Signing RPMs with `rpmsign` during the release process requires an RSA key, 
> so my existing DSA key from 15360 does not work with `rpmsign`.
> The patch adds my gpg public key to the project's KEYS file found at 
> https://dist.apache.org/repos/dist/release/cassandra/KEYS
> My gpg public key here has the fingerprint
>  A4C4 65FE A0C5 5256 1A39  2A61 E913 35D7 7E3E 87CB



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15534) add mick's second (RSA) gpg key to the project's KEYS file

2020-01-30 Thread Michael Semb Wever (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Semb Wever updated CASSANDRA-15534:
---
Test and Documentation Plan: 
gpg --import KEYS
gpg --list-keys
 Status: Patch Available  (was: In Progress)

> add mick's second (RSA) gpg key to the project's KEYS file
> --
>
> Key: CASSANDRA-15534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15534
> Project: Cassandra
>  Issue Type: Task
>  Components: Build
>Reporter: Michael Semb Wever
>Priority: Normal
>
> Following on from CASSANDRA-15360... 
> Signing RPMs with `rpmsign` during the release process requires an RSA key, 
> so my existing DSA key from 15360 does not work with `rpmsign`.
> The patch adds my gpg public key to the project's KEYS file found at 
> https://dist.apache.org/repos/dist/release/cassandra/KEYS
> My gpg public key here has the fingerprint
>  A4C4 65FE A0C5 5256 1A39  2A61 E913 35D7 7E3E 87CB



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15534) add mick's second (RSA) gpg key to the project's KEYS file

2020-01-30 Thread Michael Semb Wever (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Semb Wever updated CASSANDRA-15534:
---
Change Category: Quality Assurance
 Complexity: Low Hanging Fruit
Component/s: Build
 Status: Open  (was: Triage Needed)

> add mick's second (RSA) gpg key to the project's KEYS file
> --
>
> Key: CASSANDRA-15534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15534
> Project: Cassandra
>  Issue Type: Task
>  Components: Build
>Reporter: Michael Semb Wever
>Priority: Normal
>
> Following on from CASSANDRA-15360... 
> Signing RPMs with `rpmsign` during the release process requires an RSA key, 
> so my existing DSA key from 15360 does not work with `rpmsign`.
> The patch adds my gpg public key to the project's KEYS file found at 
> https://dist.apache.org/repos/dist/release/cassandra/KEYS
> My gpg public key here has the fingerprint
>  A4C4 65FE A0C5 5256 1A39  2A61 E913 35D7 7E3E 87CB



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-15534) add mick's second (RSA) gpg key to the project's KEYS file

2020-01-30 Thread Michael Semb Wever (Jira)
Michael Semb Wever created CASSANDRA-15534:
--

 Summary: add mick's second (RSA) gpg key to the project's KEYS file
 Key: CASSANDRA-15534
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15534
 Project: Cassandra
  Issue Type: Task
Reporter: Michael Semb Wever


Following on from CASSANDRA-15360... 

Signing RPMs with `rpmsign` during the release process requires an RSA key, so 
my existing DSA key from 15360 does not work with `rpmsign`.

The patch adds my gpg public key to the project's KEYS file found at 
https://dist.apache.org/repos/dist/release/cassandra/KEYS

My gpg public key here has the fingerprint
 A4C4 65FE A0C5 5256 1A39  2A61 E913 35D7 7E3E 87CB



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



svn commit: r37796 - in /dev/cassandra/4.0-alpha3: cassandra-4.0~alpha3-1.noarch.rpm cassandra-4.0~alpha3-1.src.rpm cassandra-tools-4.0~alpha3-1.noarch.rpm

2020-01-30 Thread mck
Author: mck
Date: Thu Jan 30 09:11:44 2020
New Revision: 37796

Log:
For staged release of Cassandra 4.0-alpha3 add signed versions of the RPMs

Modified:
dev/cassandra/4.0-alpha3/cassandra-4.0~alpha3-1.noarch.rpm
dev/cassandra/4.0-alpha3/cassandra-4.0~alpha3-1.src.rpm
dev/cassandra/4.0-alpha3/cassandra-tools-4.0~alpha3-1.noarch.rpm

Modified: dev/cassandra/4.0-alpha3/cassandra-4.0~alpha3-1.noarch.rpm
==
Binary files - no diff available.

Modified: dev/cassandra/4.0-alpha3/cassandra-4.0~alpha3-1.src.rpm
==
Binary files - no diff available.

Modified: dev/cassandra/4.0-alpha3/cassandra-tools-4.0~alpha3-1.noarch.rpm
==
Binary files - no diff available.



-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15489) Allow dtest jar directory to be configurable

2020-01-30 Thread Marcus Eriksson (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-15489:

Source Control Link: 
https://github.com/apache/cassandra/commit/37828ee05cb622caaf7366a7cd550544f6cdc14b
  (was: 
https://github.com/apache/cassandra/commit/f7ee96c74f783b42e520d26d278eafaca2a59678)
 Status: Resolved  (was: Ready to Commit)

Committed with a small change: we usually prefix system properties with 
{{cassandra.}} and properties used in tests with {{cassandra.test.}}.

Thanks!
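
(Side note for anyone wiring this into their own test setup: below is a minimal 
sketch of what such a property lookup typically looks like on the Java side. The 
property name cassandra.test.dtest_jar_path and the default of the `build` 
directory are assumptions chosen for illustration, following the naming 
convention mentioned above; check Versions.java in the linked commit for the 
actual name used.)

import java.io.File;

public final class DtestJarDirectorySketch
{
    // Hypothetical property name following the cassandra.test. prefix convention
    // discussed above; not necessarily the exact name used in the commit.
    private static final String DTEST_JAR_PATH_PROPERTY = "cassandra.test.dtest_jar_path";

    /** Resolve the directory containing the dtest jars, defaulting to ./build. */
    public static File dtestJarDirectory()
    {
        return new File(System.getProperty(DTEST_JAR_PATH_PROPERTY, "build"));
    }

    public static void main(String[] args)
    {
        // Override with: java -Dcassandra.test.dtest_jar_path=/path/to/jars DtestJarDirectorySketch
        System.out.println(dtestJarDirectory().getAbsolutePath());
    }
}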

>  Allow dtest jar directory to be configurable
> -
>
> Key: CASSANDRA-15489
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15489
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Test/dtest
>Reporter: Doug Rohrer
>Assignee: Doug Rohrer
>Priority: Normal
>  Labels: pull-request-available
> Fix For: 2.2.16, 3.0.20, 3.11.6, 4.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In some circumstances, we may want to use a non-hard-coded directory as the 
> source for dtest jars. We should allow for a system property to change the 
> default `build` directory.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra] branch trunk updated (729146a -> 5f7c886)

2020-01-30 Thread marcuse
This is an automated email from the ASF dual-hosted git repository.

marcuse pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git.


from 729146a  Merge branch 'cassandra-3.11' into trunk
 new 37828ee  Allow end-user to configure dtest jar path
 new cd82046  Merge branch 'cassandra-2.2' into cassandra-3.0
 new ba17cfd  Merge branch 'cassandra-3.0' into cassandra-3.11
 new 5f7c886  Merge branch 'cassandra-3.11' into trunk

The 4 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../org/apache/cassandra/distributed/impl/Versions.java| 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra] 01/01: Merge branch 'cassandra-2.2' into cassandra-3.0

2020-01-30 Thread marcuse
This is an automated email from the ASF dual-hosted git repository.

marcuse pushed a commit to branch cassandra-3.0
in repository https://gitbox.apache.org/repos/asf/cassandra.git

commit cd820469cbfeea74ccaa35f76733d26894b04139
Merge: 7816301 37828ee
Author: Marcus Eriksson 
AuthorDate: Thu Jan 30 09:24:20 2020 +0100

Merge branch 'cassandra-2.2' into cassandra-3.0

 .../org/apache/cassandra/distributed/impl/Versions.java| 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15489) Allow dtest jar directory to be configurable

2020-01-30 Thread Marcus Eriksson (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-15489:

Status: Ready to Commit  (was: Review In Progress)

>  Allow dtest jar directory to be configurable
> -
>
> Key: CASSANDRA-15489
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15489
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Test/dtest
>Reporter: Doug Rohrer
>Assignee: Doug Rohrer
>Priority: Normal
>  Labels: pull-request-available
> Fix For: 2.2.16, 3.0.20, 3.11.6, 4.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In some circumstances, we may want to use a non-hard-coded directory as the 
> source for dtest jars. We should allow for a system property to change the 
> default `build` directory.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra] 01/01: Merge branch 'cassandra-3.0' into cassandra-3.11

2020-01-30 Thread marcuse
This is an automated email from the ASF dual-hosted git repository.

marcuse pushed a commit to branch cassandra-3.11
in repository https://gitbox.apache.org/repos/asf/cassandra.git

commit ba17cfdf61f886c7a76983b75dd1688a92b3fd4b
Merge: 37ce461 cd82046
Author: Marcus Eriksson 
AuthorDate: Thu Jan 30 09:24:32 2020 +0100

Merge branch 'cassandra-3.0' into cassandra-3.11

 .../org/apache/cassandra/distributed/impl/Versions.java| 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org


