[jira] [Commented] (CASSANDRA-9143) Improving consistency of repairAt field across replicas

2016-11-08 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649189#comment-15649189
 ] 

Blake Eggleston commented on CASSANDRA-9143:


[~krummas] do you have time to review this?

> Improving consistency of repairAt field across replicas 
> 
>
> Key: CASSANDRA-9143
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9143
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Blake Eggleston
>Priority: Minor
>
> We currently send an anticompaction request to all replicas. During this, a 
> node will split sstables and mark the appropriate ones repaired. 
> The problem is that this could fail on some replicas for many reasons, 
> leading to problems in the next repair. 
> Here is what I am suggesting to improve it: 
> 1) Send an anticompaction request to all replicas. This can be done at the 
> session level. 
> 2) During anticompaction, sstables are split but not marked repaired. 
> 3) When we get a positive ack from all replicas, the coordinator will send 
> another message called markRepaired. 
> 4) On getting this message, replicas will mark the appropriate sstables as 
> repaired. 
> This will reduce the window of failure. We can also think of "hinting" the 
> markRepaired message if required. 
> Also, the sstables which are streamed can be marked as repaired, as is 
> done now. 
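
As a hedged illustration of the two-step flow proposed above, here is a minimal Java sketch. The {{Replica}} interface and all method names are hypothetical, not Cassandra's actual messaging API:

{code}
import java.util.List;

// Sketch of the proposed two-step protocol. All names here are
// hypothetical; this is not the actual Cassandra messaging API.
class MarkRepairedSketch
{
    interface Replica
    {
        boolean anticompact(long repairedAt);  // split sstables, do not mark yet
        void markRepaired(long repairedAt);    // flip repairedAt on the split set
    }

    static boolean run(List<Replica> replicas, long repairedAt)
    {
        // Steps 1-2: every replica splits sstables on the repaired range;
        // a failure here leaves all replicas consistently unrepaired.
        for (Replica r : replicas)
            if (!r.anticompact(repairedAt))
                return false;

        // Steps 3-4: only after all positive acks, mark repaired, which
        // shrinks the inconsistency window to this final round.
        for (Replica r : replicas)
            r.markRepaired(repairedAt);
        return true;
    }
}
{code}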



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9143) Improving consistency of repairAt field across replicas

2016-11-08 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-9143:
---
Status: Patch Available  (was: Open)

> Improving consistency of repairAt field across replicas 
> 
>
> Key: CASSANDRA-9143
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9143
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Blake Eggleston
>Priority: Minor
>
> We currently send an anticompaction request to all replicas. During this, a 
> node will split sstables and mark the appropriate ones repaired. 
> The problem is that this could fail on some replicas for many reasons, 
> leading to problems in the next repair. 
> Here is what I am suggesting to improve it: 
> 1) Send an anticompaction request to all replicas. This can be done at the 
> session level. 
> 2) During anticompaction, sstables are split but not marked repaired. 
> 3) When we get a positive ack from all replicas, the coordinator will send 
> another message called markRepaired. 
> 4) On getting this message, replicas will mark the appropriate sstables as 
> repaired. 
> This will reduce the window of failure. We can also think of "hinting" the 
> markRepaired message if required. 
> Also, the sstables which are streamed can be marked as repaired, as is 
> done now. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9143) Improving consistency of repairAt field across replicas

2016-11-08 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649177#comment-15649177
 ] 

Blake Eggleston commented on CASSANDRA-9143:


| [trunk|https://github.com/bdeggleston/cassandra/tree/9143-trunk] | 
[dtest|http://cassci.datastax.com/view/Dev/view/bdeggleston/job/bdeggleston-9143-trunk-dtest/]
 | 
[testall|http://cassci.datastax.com/view/Dev/view/bdeggleston/job/bdeggleston-9143-trunk-testall/]
 |
| [3.0|https://github.com/bdeggleston/cassandra/tree/9143-3.0] | 
[dtest|http://cassci.datastax.com/view/Dev/view/bdeggleston/job/bdeggleston-9143-3.0-dtest/]
 | 
[testall|http://cassci.datastax.com/view/Dev/view/bdeggleston/job/bdeggleston-9143-3.0-testall/]|

[dtest branch|https://github.com/bdeggleston/cassandra-dtest/tree/9143]

I've tried to break this up into logical commits for each component of the 
change to make reviewing easier.

The new incremental repair would work as follows:
# persist session locally on each repair participant
# anti-compact all unrepaired sstables intersecting with the range being 
repaired into a pending repair bucket
# perform validation/sync against the sstables segregated in the pending 
anti-compaction step
# perform 2PC to promote pending repair sstables into repaired
#* If this, or the validation/sync phase, fails, the sstables are moved back 
into unrepaired
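
For illustration, the sstable state transitions implied by the steps above can be sketched as a small state machine. The enum and method names below are mine, not the patch's actual API:

{code}
// Illustrative sstable state transitions for the flow above; the enum
// and method names are assumptions, not the patch's actual API.
enum RepairedState
{
    UNREPAIRED, PENDING_REPAIR, REPAIRED;

    RepairedState onAnticompaction() // step 2: segregate into pending repair
    {
        return this == UNREPAIRED ? PENDING_REPAIR : this;
    }

    RepairedState onCommit()         // step 4: 2PC success promotes to repaired
    {
        return this == PENDING_REPAIR ? REPAIRED : this;
    }

    RepairedState onFailure()        // validation/sync or 2PC failure rolls back
    {
        return this == PENDING_REPAIR ? UNREPAIRED : this;
    }
}
{code}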

Since incremental repair is the default in 3.0, I've also included a patch 
which fixes the consistency problems in 3.0, and is backwards compatible with 
the existing repair. That said, I'm not really convinced that making a change 
like this to repair in 3.0.x is a great idea. 

I'd be more in favor of disabling incremental repair, or at least not making it 
the default in 3.0.x. The compaction that gets kicked off after streamed 
sstables are added to the cfs means that whether repaired data is ultimately 
placed in the repaired or unrepaired bucket by anti-compaction is basically a 
crapshoot.

> Improving consistency of repairAt field across replicas 
> 
>
> Key: CASSANDRA-9143
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9143
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Blake Eggleston
>Priority: Minor
>
> We currently send an anticompaction request to all replicas. During this, a 
> node will split sstables and mark the appropriate ones repaired. 
> The problem is that this could fail on some replicas for many reasons, 
> leading to problems in the next repair. 
> Here is what I am suggesting to improve it: 
> 1) Send an anticompaction request to all replicas. This can be done at the 
> session level. 
> 2) During anticompaction, sstables are split but not marked repaired. 
> 3) When we get a positive ack from all replicas, the coordinator will send 
> another message called markRepaired. 
> 4) On getting this message, replicas will mark the appropriate sstables as 
> repaired. 
> This will reduce the window of failure. We can also think of "hinting" the 
> markRepaired message if required. 
> Also, the sstables which are streamed can be marked as repaired, as is 
> done now. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11720) Changing `max_hint_window_in_ms` at runtime

2016-11-08 Thread mck (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-11720:

Fix Version/s: 3.x
   Status: Open  (was: Patch Available)

> Changing `max_hint_window_in_ms` at runtime
> ---
>
> Key: CASSANDRA-11720
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11720
> Project: Cassandra
>  Issue Type: Wish
>  Components: Coordination
>Reporter: Jens Rantil
>Assignee: Hiroyuki Nishi
>Priority: Minor
>  Labels: lhf
> Fix For: 3.x
>
> Attachments: CASSANDRA-11720-trunk.patch
>
>
> Scenario: A larger node (in terms of data it holds) goes down. You realize 
> that it will take slightly more than `max_hint_window_in_ms` to fix it. You 
> have the disk space to store some additional hints.
> Proposal: Support changing `max_hint_window_in_ms` at runtime. The change 
> doesn't have to be persisted anywhere. I'm thinking of something similar to 
> changing `compactionthroughput` etc. using `nodetool`.
> Workaround: Change the value in the configuration file and do a rolling 
> restart of all the nodes.
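
For illustration, a change along these lines would typically expose the value over JMX, analogous to the existing compaction-throughput knob. A minimal sketch follows; the interface and method names are assumed, not necessarily what the attached patch uses:

{code}
// Hypothetical MBean surface for a runtime-tunable hint window.
// Names are assumed by analogy with the compaction-throughput setter;
// the attached patch may expose this differently.
public interface HintWindowMBean
{
    int getMaxHintWindow();          // current max_hint_window_in_ms
    void setMaxHintWindow(int ms);   // runtime-only; not persisted to cassandra.yaml
}
{code}

nodetool would then wrap the setter, so that something like a (hypothetical) `nodetool setmaxhintwindow 7200000` mirrors how `compactionthroughput` is changed at runtime.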



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11720) Changing `max_hint_window_in_ms` at runtime

2016-11-08 Thread mck (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-11720:

Assignee: Hiroyuki Nishi

> Changing `max_hint_window_in_ms` at runtime
> ---
>
> Key: CASSANDRA-11720
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11720
> Project: Cassandra
>  Issue Type: Wish
>  Components: Coordination
>Reporter: Jens Rantil
>Assignee: Hiroyuki Nishi
>Priority: Minor
>  Labels: lhf
> Attachments: CASSANDRA-11720-trunk.patch
>
>
> Scenario: A larger node (in terms of data it holds) goes down. You realize 
> that it will take slightly more than `max_hint_window_in_ms` to fix it. You 
> have the disk space to store some additional hints.
> Proposal: Support changing `max_hint_window_in_ms` at runtime. The change 
> doesn't have to be persisted anywhere. I'm thinking of something similar to 
> changing `compactionthroughput` etc. using `nodetool`.
> Workaround: Change the value in the configuration file and do a rolling 
> restart of all the nodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11720) Changing `max_hint_window_in_ms` at runtime

2016-11-08 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649054#comment-15649054
 ] 

mck edited comment on CASSANDRA-11720 at 11/8/16 10:53 PM:
---

LGTM.

Some small feedback [~hnishi]:
  • can we add a test?
  ∘  dtest (nodetool_test.py) (or a unit test if something can be created…)
  • the docs need to be updated
  ∘ we'd need an entry in {{doc/build/html/tools/nodetool/nodetool.html}}
  • CHANGES.txt needs an entry


was (Author: michaelsembwever):
LGTM.

Some small feedback [~hnishi]:
  • can we add a test?
  ∘  dtest (nodetool_test.py) (or a unit test if something can be created…)
  • the docs need to be updated
  ∘ we'd need an entry in {{doc/build/html/tools/nodetool/nodetool.html}}
  • CHANGES.txt needs an update again

> Changing `max_hint_window_in_ms` at runtime
> ---
>
> Key: CASSANDRA-11720
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11720
> Project: Cassandra
>  Issue Type: Wish
>  Components: Coordination
>Reporter: Jens Rantil
>Priority: Minor
>  Labels: lhf
> Attachments: CASSANDRA-11720-trunk.patch
>
>
> Scenario: A larger node (in terms of data it holds) goes down. You realize 
> that it will take slightly more than `max_hint_window_in_ms` to fix it. You 
> have the disk space to store some additional hints.
> Proposal: Support changing `max_hint_window_in_ms` at runtime. The change 
> doesn't have to be persisted anywhere. I'm thinking of something similar to 
> changing `compactionthroughput` etc. using `nodetool`.
> Workaround: Change the value in the configuration file and do a rolling 
> restart of all the nodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11720) Changing `max_hint_window_in_ms` at runtime

2016-11-08 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649054#comment-15649054
 ] 

mck edited comment on CASSANDRA-11720 at 11/8/16 10:52 PM:
---

LGTM.

Some small feedback [~hnishi]:
  • can we add a test?
  ∘  dtest (nodetool_test.py) (or a unit test if something can be created…)
  • the docs need to be updated
  ∘ we'd need an entry in {{doc/build/html/tools/nodetool/nodetool.html}}
  • CHANGES.txt needs an update again


was (Author: michaelsembwever):
LGTM.

Some small feedback [~hnishi]:
  • can we add a test?
  ∘  dtest (nodetool_test.py) (or a unit test if something can be created…)
  • the docs need to be updated
  ∘ we'd need an entry in {{doc/build/html/tools/nodetool/nodetool.html}}

> Changing `max_hint_window_in_ms` at runtime
> ---
>
> Key: CASSANDRA-11720
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11720
> Project: Cassandra
>  Issue Type: Wish
>  Components: Coordination
>Reporter: Jens Rantil
>Priority: Minor
>  Labels: lhf
> Attachments: CASSANDRA-11720-trunk.patch
>
>
> Scenario: A larger node (in terms of data it holds) goes down. You realize 
> that it will take slightly more than `max_hint_window_in_ms` to fix it. You 
> have the disk space to store some additional hints.
> Proposal: Support changing `max_hint_window_in_ms` at runtime. The change 
> doesn't have to be persisted anywhere. I'm thinking of something similar to 
> changing `compactionthroughput` etc. using `nodetool`.
> Workaround: Change the value in the configuration file and do a rolling 
> restart of all the nodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11720) Changing `max_hint_window_in_ms` at runtime

2016-11-08 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649054#comment-15649054
 ] 

mck edited comment on CASSANDRA-11720 at 11/8/16 10:51 PM:
---

LGTM.

Some small feedback [~hnishi]:
  • can we add a test?
  ∘  dtest (nodetool_test.py) (or a unit test if something can be created…)
  • the docs need to be updated
  ∘ we'd need an entry in {{doc/build/html/tools/nodetool/nodetool.html}}


was (Author: michaelsembwever):
LGTM.

Some small feedback [~ztyx]:
  • can we add a test?
  ∘  dtest (nodetool_test.py) (or a unit test if something can be created…)
  • the docs need to be updated
  ∘ we'd need an entry in {{doc/build/html/tools/nodetool/nodetool.html}}

> Changing `max_hint_window_in_ms` at runtime
> ---
>
> Key: CASSANDRA-11720
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11720
> Project: Cassandra
>  Issue Type: Wish
>  Components: Coordination
>Reporter: Jens Rantil
>Priority: Minor
>  Labels: lhf
> Attachments: CASSANDRA-11720-trunk.patch
>
>
> Scenario: A larger node (in terms of data it holds) goes down. You realize 
> that it will take slightly more than `max_hint_window_in_ms` to fix it. You 
> have the disk space to store some additional hints.
> Proposal: Support changing `max_hint_window_in_ms` at runtime. The change 
> doesn't have to be persisted anywhere. I'm thinking of something similar to 
> changing `compactionthroughput` etc. using `nodetool`.
> Workaround: Change the value in the configuration file and do a rolling 
> restart of all the nodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11720) Changing `max_hint_window_in_ms` at runtime

2016-11-08 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649054#comment-15649054
 ] 

mck commented on CASSANDRA-11720:
-

LGTM.

Some small feedback [~ztyx]:
  • can we add a test?
  ∘  dtest (nodetool_test.py) (or a unit test if something can be created…)
  • the docs need to be updated
  ∘ we'd need an entry in {{doc/build/html/tools/nodetool/nodetool.html}}

> Changing `max_hint_window_in_ms` at runtime
> ---
>
> Key: CASSANDRA-11720
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11720
> Project: Cassandra
>  Issue Type: Wish
>  Components: Coordination
>Reporter: Jens Rantil
>Priority: Minor
>  Labels: lhf
> Attachments: CASSANDRA-11720-trunk.patch
>
>
> Scenario: A larger node (in terms of data it holds) goes down. You realize 
> that it will take slightly more than `max_hint_window_in_ms` to fix it. You 
> have the disk space to store some additional hints.
> Proposal: Support changing `max_hint_window_in_ms` at runtime. The change 
> doesn't have to be persisted anywhere. I'm thinking of something similar to 
> changing `compactionthroughput` etc. using `nodetool`.
> Workaround: Change the value in the configuration file and do a rolling 
> restart of all the nodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11720) Changing `max_hint_window_in_ms` at runtime

2016-11-08 Thread mck (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-11720:

Reviewer: mck

> Changing `max_hint_window_in_ms` at runtime
> ---
>
> Key: CASSANDRA-11720
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11720
> Project: Cassandra
>  Issue Type: Wish
>  Components: Coordination
>Reporter: Jens Rantil
>Priority: Minor
>  Labels: lhf
> Attachments: CASSANDRA-11720-trunk.patch
>
>
> Scenario: A larger node (in terms of data it holds) goes down. You realize 
> that it will take slightly more than `max_hint_window_in_ms` to fix it. You 
> have the disk space to store some additional hints.
> Proposal: Support changing `max_hint_window_in_ms` at runtime. The change 
> doesn't have to be persisted anywhere. I'm thinking of something similar to 
> changing `compactionthroughput` etc. using `nodetool`.
> Workaround: Change the value in the configuration file and do a rolling 
> restart of all the nodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12858) testall failure in org.apache.cassandra.dht.Murmur3PartitionerTest.testSplitWrapping-compression

2016-11-08 Thread Dikang Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648949#comment-15648949
 ] 

Dikang Gu commented on CASSANDRA-12858:
---

Added a range limit on the ratio in the test, to avoid generating very narrow 
token ranges: 
[trunk_patch|https://github.com/DikangGu/cassandra/commit/e452ffa3ef617c9b0c9aaf2265975ec17159ebef].

Will kick off a unit test.
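
For context, a hedged sketch of the idea (the bounds are illustrative, not the patch's actual values): clamping the random split ratio away from the extremes keeps the split point of a very narrow (left, right] range from collapsing onto the excluded left bound:

{code}
import java.util.concurrent.ThreadLocalRandom;

// Illustrative only: keep the split ratio away from 0 and 1 so that
// splitting a very narrow (left, right] range cannot produce a token
// equal to the left bound, which the range excludes.
class SplitRatioSketch
{
    static double boundedRatio()
    {
        double r = ThreadLocalRandom.current().nextDouble();
        return Math.min(0.9, Math.max(0.1, r)); // clamp to [0.1, 0.9]
    }
}
{code}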

> testall failure in 
> org.apache.cassandra.dht.Murmur3PartitionerTest.testSplitWrapping-compression
> 
>
> Key: CASSANDRA-12858
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12858
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Dikang Gu
>  Labels: test-failure, testall
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.X_testall/49/testReport/org.apache.cassandra.dht/Murmur3PartitionerTest/testSplitWrapping_compression/
> {code}
> Error Message
> For 8833996864316961974,8833996864316961979: range did not contain new 
> token:8833996864316961974
> {code}{code}
> Stacktrace
> junit.framework.AssertionFailedError: For 
> 8833996864316961974,8833996864316961979: range did not contain new 
> token:8833996864316961974
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:138)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:129)
>   at 
> org.apache.cassandra.dht.Murmur3PartitionerTest.testSplitWrapping(Murmur3PartitionerTest.java:50)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12730) Thousands of empty SSTables created during repair - TMOF death

2016-11-08 Thread Kurt Greaves (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648932#comment-15648932
 ] 

Kurt Greaves commented on CASSANDRA-12730:
--

Oh I totally agree, just wanted to point out the underlying issue.

> Thousands of empty SSTables created during repair - TMOF death
> --
>
> Key: CASSANDRA-12730
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12730
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Benjamin Roth
>Priority: Critical
>
> Last night I ran a repair on a keyspace with 7 tables and 4 MVs, each 
> containing a few hundred million records. After a few hours a node died 
> because of "too many open files".
> Normally one would just raise the limit, but: we already set this to 100k. 
> The problem was that the repair created over 100k SSTables for a 
> certain MV. The strange thing is that these SSTables had almost no data (like 
> 53 bytes, 90 bytes, ...). Some of them (<5%) had a few 100 KB; very few (<1%) 
> had normal sizes like >= a few MB. I could understand that SSTables queue up 
> as they are flushed and are not compacted in time, but then they should have 
> at least a few MB (depending on config and available memory), right?
> Of course the node then runs out of FDs, and I guess it is not a good idea to 
> raise the limit even higher, as I expect that this would just create even more 
> empty SSTables before dying in the end.
> Only 1 CF (MV) was affected. All other CFs (also MVs) behave sanely. Empty 
> SSTables have been created evenly over time, 100-150 every minute. Among the 
> empty SSTables there are also tables that look normal, having a few MBs.
> I didn't see any errors or exceptions in the logs until TMOF occurred. Just 
> tons of streams due to the repair (which I actually run via cs-reaper as 
> subrange, full repairs).
> After having restarted that node (and with no more repair running), the number 
> of SSTables went down again as they were slowly compacted away.
> According to [~zznate] this issue may relate to CASSANDRA-10342 + 
> CASSANDRA-8641



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12858) testall failure in org.apache.cassandra.dht.Murmur3PartitionerTest.testSplitWrapping-compression

2016-11-08 Thread Dikang Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648869#comment-15648869
 ] 

Dikang Gu commented on CASSANDRA-12858:
---

looking.

> testall failure in 
> org.apache.cassandra.dht.Murmur3PartitionerTest.testSplitWrapping-compression
> 
>
> Key: CASSANDRA-12858
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12858
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Dikang Gu
>  Labels: test-failure, testall
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.X_testall/49/testReport/org.apache.cassandra.dht/Murmur3PartitionerTest/testSplitWrapping_compression/
> {code}
> Error Message
> For 8833996864316961974,8833996864316961979: range did not contain new 
> token:8833996864316961974
> {code}{code}
> Stacktrace
> junit.framework.AssertionFailedError: For 
> 8833996864316961974,8833996864316961979: range did not contain new 
> token:8833996864316961974
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:138)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:129)
>   at 
> org.apache.cassandra.dht.Murmur3PartitionerTest.testSplitWrapping(Murmur3PartitionerTest.java:50)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12821) testall failure in org.apache.cassandra.service.RemoveTest.testNonmemberId

2016-11-08 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12821:

Assignee: Joel Knighton

> testall failure in org.apache.cassandra.service.RemoveTest.testNonmemberId
> --
>
> Key: CASSANDRA-12821
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12821
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Joel Knighton
>  Labels: test-failure
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.X_testall/41/testReport/org.apache.cassandra.service/RemoveTest/testNonmemberId/
> {code}
> Stacktrace
> java.lang.NullPointerException
>   at org.apache.cassandra.gms.Gossiper.getHostId(Gossiper.java:871)
>   at 
> org.apache.cassandra.service.StorageService.handleStateNormal(StorageService.java:2226)
>   at 
> org.apache.cassandra.service.StorageService.onChange(StorageService.java:1892)
>   at org.apache.cassandra.Util.createInitialRing(Util.java:216)
>   at org.apache.cassandra.service.RemoveTest.setup(RemoveTest.java:88)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12834) testall failure in org.apache.cassandra.index.internal.CassandraIndexTest.indexOnFirstClusteringColumn

2016-11-08 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12834:

Assignee: Sylvain Lebresne

> testall failure in 
> org.apache.cassandra.index.internal.CassandraIndexTest.indexOnFirstClusteringColumn
> --
>
> Key: CASSANDRA-12834
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12834
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Sylvain Lebresne
>  Labels: test-failure
>
> example failure:
> http://cassci.datastax.com/job/trunk_testall/1250/testReport/org.apache.cassandra.index.internal/CassandraIndexTest/indexOnFirstClusteringColumn/
> {code}
> Error Message
> Error setting schema for test (query was: CREATE INDEX c_index ON 
> cql_test_keyspace.table_20(c))
> {code}{code}
> Stacktrace
> java.lang.RuntimeException: Error setting schema for test (query was: CREATE 
> INDEX c_index ON cql_test_keyspace.table_20(c))
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:705)
>   at org.apache.cassandra.cql3.CQLTester.createIndex(CQLTester.java:627)
>   at 
> org.apache.cassandra.index.internal.CassandraIndexTest.access$400(CassandraIndexTest.java:56)
>   at 
> org.apache.cassandra.index.internal.CassandraIndexTest$TestScript.run(CassandraIndexTest.java:626)
>   at 
> org.apache.cassandra.index.internal.CassandraIndexTest.indexOnFirstClusteringColumn(CassandraIndexTest.java:86)
> Caused by: org.apache.cassandra.exceptions.InvalidRequestException: Index 
> c_index already exists
>   at 
> org.apache.cassandra.cql3.statements.CreateIndexStatement.validate(CreateIndexStatement.java:133)
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:696)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12825) testall failure in org.apache.cassandra.db.compaction.CompactionsCQLTest.testTriggerMinorCompactionDTCS-compression

2016-11-08 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12825:

Assignee: Marcus Eriksson

> testall failure in 
> org.apache.cassandra.db.compaction.CompactionsCQLTest.testTriggerMinorCompactionDTCS-compression
> ---
>
> Key: CASSANDRA-12825
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12825
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Marcus Eriksson
>  Labels: test-failure
>
> example failure:
> http://cassci.datastax.com/job/trunk_testall/1243/testReport/org.apache.cassandra.db.compaction/CompactionsCQLTest/testTriggerMinorCompactionDTCS_compression/
> {code}
> Error Message
> No minor compaction triggered in 5000ms
> {code}{code}
> Stacktrace
> junit.framework.AssertionFailedError: No minor compaction triggered in 5000ms
>   at 
> org.apache.cassandra.db.compaction.CompactionsCQLTest.waitForMinor(CompactionsCQLTest.java:247)
>   at 
> org.apache.cassandra.db.compaction.CompactionsCQLTest.testTriggerMinorCompactionDTCS(CompactionsCQLTest.java:72)
> {code}
> Related failure:
> http://cassci.datastax.com/job/cassandra-3.X_testall/47/testReport/org.apache.cassandra.db.compaction/CompactionsCQLTest/testTriggerMinorCompactionDTCS/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12841) testall failure in org.apache.cassandra.db.compaction.NeverPurgeTest.minorNeverPurgeTombstonesTest-compression

2016-11-08 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12841:

Assignee: Marcus Eriksson

> testall failure in 
> org.apache.cassandra.db.compaction.NeverPurgeTest.minorNeverPurgeTombstonesTest-compression
> --
>
> Key: CASSANDRA-12841
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12841
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Marcus Eriksson
>  Labels: test-failure, testall
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_testall/597/testReport/org.apache.cassandra.db.compaction/NeverPurgeTest/minorNeverPurgeTombstonesTest_compression/
> {code}
> Error Message
> Memory was freed by Thread[NonPeriodicTasks:1,5,main]
> {code}{code}
> Stacktrace
> junit.framework.AssertionFailedError: Memory was freed by 
> Thread[NonPeriodicTasks:1,5,main]
>   at 
> org.apache.cassandra.io.util.SafeMemory.checkBounds(SafeMemory.java:103)
>   at org.apache.cassandra.io.util.Memory.getLong(Memory.java:260)
>   at 
> org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:223)
>   at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBufferMmap(CompressedRandomAccessReader.java:168)
>   at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:226)
>   at 
> org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:303)
>   at 
> org.apache.cassandra.io.util.AbstractDataInput.readInt(AbstractDataInput.java:202)
>   at 
> org.apache.cassandra.io.util.AbstractDataInput.readLong(AbstractDataInput.java:264)
>   at 
> org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:131)
>   at 
> org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:92)
>   at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52)
>   at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46)
>   at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>   at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
>   at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.hasNext(SSTableIdentityIterator.java:169)
>   at 
> org.apache.cassandra.db.compaction.NeverPurgeTest.verifyContainsTombstones(NeverPurgeTest.java:114)
>   at 
> org.apache.cassandra.db.compaction.NeverPurgeTest.minorNeverPurgeTombstonesTest(NeverPurgeTest.java:85)
> {code}
> Related failure:
> http://cassci.datastax.com/job/cassandra-2.2_testall/598/testReport/org.apache.cassandra.db.compaction/NeverPurgeTest/minorNeverPurgeTombstonesTest/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12842) testall failure inorg.apache.cassandra.pig.CqlTableTest.testCqlNativeStorageCompositeKeyTable

2016-11-08 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12842:

Assignee: Paulo Motta

> testall failure 
> inorg.apache.cassandra.pig.CqlTableTest.testCqlNativeStorageCompositeKeyTable
> -
>
> Key: CASSANDRA-12842
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12842
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Paulo Motta
>  Labels: test-failure, testall
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_testall/598/testReport/org.apache.cassandra.pig/CqlTableTest/testCqlNativeStorageCompositeKeyTable/
> {code}
> Error Message
> expected:<4> but was:<9>
> {code}{code}
> Stacktrace
> junit.framework.AssertionFailedError: expected:<4> but was:<9>
>   at 
> org.apache.cassandra.pig.CqlTableTest.compositeKeyTableTest(CqlTableTest.java:200)
>   at 
> org.apache.cassandra.pig.CqlTableTest.testCqlNativeStorageCompositeKeyTable(CqlTableTest.java:172)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12858) testall failure in org.apache.cassandra.dht.Murmur3PartitionerTest.testSplitWrapping-compression

2016-11-08 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12858:

Assignee: Dikang Gu

> testall failure in 
> org.apache.cassandra.dht.Murmur3PartitionerTest.testSplitWrapping-compression
> 
>
> Key: CASSANDRA-12858
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12858
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Dikang Gu
>  Labels: test-failure, testall
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.X_testall/49/testReport/org.apache.cassandra.dht/Murmur3PartitionerTest/testSplitWrapping_compression/
> {code}
> Error Message
> For 8833996864316961974,8833996864316961979: range did not contain new 
> token:8833996864316961974
> {code}{code}
> Stacktrace
> junit.framework.AssertionFailedError: For 
> 8833996864316961974,8833996864316961979: range did not contain new 
> token:8833996864316961974
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:138)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:129)
>   at 
> org.apache.cassandra.dht.Murmur3PartitionerTest.testSplitWrapping(Murmur3PartitionerTest.java:50)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12875) testall failure in org.apache.cassandra.net.MessagingServiceTest.testDCLatency-compression

2016-11-08 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648802#comment-15648802
 ] 

Joshua McKenzie commented on CASSANDRA-12875:
-

Assigning to [~cnlwsu]

> testall failure in 
> org.apache.cassandra.net.MessagingServiceTest.testDCLatency-compression
> --
>
> Key: CASSANDRA-12875
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12875
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Chris Lohfink
>  Labels: test-failure, testall
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.X_testall/54/testReport/org.apache.cassandra.net/MessagingServiceTest/testDCLatency_compression/
> {code}
> Error Message
> expected:<107964792> but was:<129557750>
> {code}{code}
> Stacktrace
> junit.framework.AssertionFailedError: expected:<107964792> but was:<129557750>
>   at 
> org.apache.cassandra.net.MessagingServiceTest.testDCLatency(MessagingServiceTest.java:115)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12858) testall failure in org.apache.cassandra.dht.Murmur3PartitionerTest.testSplitWrapping-compression

2016-11-08 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648800#comment-15648800
 ] 

Joshua McKenzie commented on CASSANDRA-12858:
-

Assigning to [~dikanggu]

> testall failure in 
> org.apache.cassandra.dht.Murmur3PartitionerTest.testSplitWrapping-compression
> 
>
> Key: CASSANDRA-12858
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12858
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Dikang Gu
>  Labels: test-failure, testall
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.X_testall/49/testReport/org.apache.cassandra.dht/Murmur3PartitionerTest/testSplitWrapping_compression/
> {code}
> Error Message
> For 8833996864316961974,8833996864316961979: range did not contain new 
> token:8833996864316961974
> {code}{code}
> Stacktrace
> junit.framework.AssertionFailedError: For 
> 8833996864316961974,8833996864316961979: range did not contain new 
> token:8833996864316961974
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:138)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:129)
>   at 
> org.apache.cassandra.dht.Murmur3PartitionerTest.testSplitWrapping(Murmur3PartitionerTest.java:50)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12875) testall failure in org.apache.cassandra.net.MessagingServiceTest.testDCLatency-compression

2016-11-08 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12875:

Assignee: Chris Lohfink

> testall failure in 
> org.apache.cassandra.net.MessagingServiceTest.testDCLatency-compression
> --
>
> Key: CASSANDRA-12875
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12875
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Chris Lohfink
>  Labels: test-failure, testall
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.X_testall/54/testReport/org.apache.cassandra.net/MessagingServiceTest/testDCLatency_compression/
> {code}
> Error Message
> expected:<107964792> but was:<129557750>
> {code}{code}
> Stacktrace
> junit.framework.AssertionFailedError: expected:<107964792> but was:<129557750>
>   at 
> org.apache.cassandra.net.MessagingServiceTest.testDCLatency(MessagingServiceTest.java:115)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12886) Streaming failed due to SSL Socket connection reset

2016-11-08 Thread Bing Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bing Wu updated CASSANDRA-12886:

Description: 
While running "nodetool repair", I see many instances of 
"javax.net.ssl.SSLException: java.net.SocketException: Connection reset" in 
system.logs on some nodes in the cluster. Timestamps correspond to streaming 
source/initiator's error messages of "sync failed between ..."

Setup: 
- Cassandra 3.7.01 
- CentOS 6.7 in AWS (multi-region)
- JDK version: {noformat}
java version "1.8.0_102"
Java(TM) SE Runtime Environment (build 1.8.0_102-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.102-b14, mixed mode)
{noformat}
- cassandra.yaml:
{noformat}
server_encryption_options:
    internode_encryption: all
    keystore: [path]
    keystore_password: [password]
    truststore: [path]
    truststore_password: [password]
    # More advanced defaults below:
    # protocol: TLS
    # algorithm: SunX509
    # store_type: JKS
    # cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
    require_client_auth: false
{noformat}

Error messages in system.log on the target host:
{noformat}
ERROR [STREAM-OUT-/54.247.111.232:7001] 2016-11-07 07:30:56,475 
StreamSession.java:529 - [Stream #e14abcb0-a4bb-11e6-9758-55b9ac38b78e] 
Streaming error occurred on session with peer 54.247.111.232
javax.net.ssl.SSLException: Connection has been shutdown: 
javax.net.ssl.SSLException: java.net.SocketException: Connection reset
at sun.security.ssl.SSLSocketImpl.checkEOF(SSLSocketImpl.java:1541) 
~[na:1.8.0_102]
at sun.security.ssl.SSLSocketImpl.checkWrite(SSLSocketImpl.java:1553) 
~[na:1.8.0_102]
at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:71) 
~[na:1.8.0_102]
at 
java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) 
~[na:1.8.0_102]
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) 
~[na:1.8.0_102]
at 
org.apache.cassandra.io.util.WrappedDataOutputStreamPlus.flush(WrappedDataOutputStreamPlus.java:66)
 ~[apache-cassandra-3.7.0.jar:3.7.0]
at 
org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:371)
 [apache-cassandra-3.7.0.jar:3.7.0]
at 
org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:342)
 [apache-cassandra-3.7.0.jar:3.7.0]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_102]
Caused by: javax.net.ssl.SSLException: java.net.SocketException: Connection 
reset
{noformat}

  was:
While running "nodetool repair", I see many instances of 
"javax.net.ssl.SSLException: java.net.SocketException: Connection reset" in 
system.logs on some nodes in the cluster. Timestamp corresponds to streaming 
source/initiator's error messages of "sync failed between ..."

Setup: 
- Cassandra 3.7.01 
- CentOS 6.7 in AWS (multi-region)
- JDK version: {noformat}
java version "1.8.0_102"
Java(TM) SE Runtime Environment (build 1.8.0_102-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.102-b14, mixed mode)
{noformat}
- cassandra.yaml:
{noformat}
server_encryption_options:
    internode_encryption: all
    keystore: [path]
    keystore_password: [password]
    truststore: [path]
    truststore_password: [password]
    # More advanced defaults below:
    # protocol: TLS
    # algorithm: SunX509
    # store_type: JKS
    # cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
    require_client_auth: false
{noformat}

Error messages in system.log on the target host:
{noformat}
ERROR [STREAM-OUT-/54.247.111.232:7001] 2016-11-07 07:30:56,475 
StreamSession.java:529 - [Stream #e14abcb0-a4bb-11e6-9758-55b9ac38b78e] 
Streaming error occurred on session with peer 54.247.111.232
javax.net.ssl.SSLException: Connection has been shutdown: 
javax.net.ssl.SSLException: java.net.SocketException: Connection reset
at sun.security.ssl.SSLSocketImpl.checkEOF(SSLSocketImpl.java:1541) 
~[na:1.8.0_102]
at sun.security.ssl.SSLSocketImpl.checkWrite(SSLSocketImpl.java:1553) 
~[na:1.8.0_102]
at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:71) 
~[na:1.8.0_102]
at 
java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) 
~[na:1.8.0_102]
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) 
~[na:1.8.0_102]
at 
org.apache.cassandra.io.util.WrappedDataOutputStreamPlus.flush(WrappedDataOutputStreamPlus.java:66)
 ~[apache-cassandra-3.7.0.jar:3.7.0]
at 

[jira] [Updated] (CASSANDRA-12886) Streaming failed due to SSL Socket connection reset

2016-11-08 Thread Bing Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bing Wu updated CASSANDRA-12886:

Description: 
While running "nodetool repair", I see many instances of 
"javax.net.ssl.SSLException: java.net.SocketException: Connection reset" in 
system.logs on some nodes in the cluster. Timestamp corresponds to streaming 
source/initiator's error messages of "sync failed between ..."

Setup: 
- Cassandra 3.7.01 
- CentOS 6.7 in AWS (multi-region)
- JDK version: {noformat}
java version "1.8.0_102"
Java(TM) SE Runtime Environment (build 1.8.0_102-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.102-b14, mixed mode)
{noformat}

cassandra.yaml:
{noformat}
server_encryption_options:
    internode_encryption: all
    keystore: [path]
    keystore_password: [password]
    truststore: [path]
    truststore_password: [password]
    # More advanced defaults below:
    # protocol: TLS
    # algorithm: SunX509
    # store_type: JKS
    # cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
    require_client_auth: false
{noformat}

Error messages in system.log on the target host:
{noformat}
ERROR [STREAM-OUT-/54.247.111.232:7001] 2016-11-07 07:30:56,475 
StreamSession.java:529 - [Stream #e14abcb0-a4bb-11e6-9758-55b9ac38b78e] 
Streaming error occurred on session with peer 54.247.111.232
javax.net.ssl.SSLException: Connection has been shutdown: 
javax.net.ssl.SSLException: java.net.SocketException: Connection reset
at sun.security.ssl.SSLSocketImpl.checkEOF(SSLSocketImpl.java:1541) 
~[na:1.8.0_102]
at sun.security.ssl.SSLSocketImpl.checkWrite(SSLSocketImpl.java:1553) 
~[na:1.8.0_102]
at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:71) 
~[na:1.8.0_102]
at 
java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) 
~[na:1.8.0_102]
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) 
~[na:1.8.0_102]
at 
org.apache.cassandra.io.util.WrappedDataOutputStreamPlus.flush(WrappedDataOutputStreamPlus.java:66)
 ~[apache-cassandra-3.7.0.jar:3.7.0]
at 
org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:371)
 [apache-cassandra-3.7.0.jar:3.7.0]
at 
org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:342)
 [apache-cassandra-3.7.0.jar:3.7.0]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_102]
Caused by: javax.net.ssl.SSLException: java.net.SocketException: Connection 
reset
{noformat}

  was:
While running "nodetool repair", I see many instances of 
"javax.net.ssl.SSLException: java.net.SocketException: Connection reset" in 
system.logs on some nodes in the cluster. Timestamp corresponds to streaming 
source/initiator's error messages of "sync failed between ..."

Setup: we are running 3.7.01 on CentOS 6.7 in AWS (multi-region)
jdk version: {noformat}
java version "1.8.0_102"
Java(TM) SE Runtime Environment (build 1.8.0_102-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.102-b14, mixed mode)
{noformat}

cassandra.yaml:
{noformat}
server_encryption_options:
    internode_encryption: all
    keystore: [path]
    keystore_password: [password]
    truststore: [path]
    truststore_password: [password]
    # More advanced defaults below:
    # protocol: TLS
    # algorithm: SunX509
    # store_type: JKS
    # cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
    require_client_auth: false
{noformat}

Error messages in system.log on the target host:
{noformat}
ERROR [STREAM-OUT-/54.247.111.232:7001] 2016-11-07 07:30:56,475 
StreamSession.java:529 - [Stream #e14abcb0-a4bb-11e6-9758-55b9ac38b78e] 
Streaming error occurred on session with peer 54.247.111.232
javax.net.ssl.SSLException: Connection has been shutdown: 
javax.net.ssl.SSLException: java.net.SocketException: Connection reset
at sun.security.ssl.SSLSocketImpl.checkEOF(SSLSocketImpl.java:1541) 
~[na:1.8.0_102]
at sun.security.ssl.SSLSocketImpl.checkWrite(SSLSocketImpl.java:1553) 
~[na:1.8.0_102]
at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:71) 
~[na:1.8.0_102]
at 
java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) 
~[na:1.8.0_102]
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) 
~[na:1.8.0_102]
at 
org.apache.cassandra.io.util.WrappedDataOutputStreamPlus.flush(WrappedDataOutputStreamPlus.java:66)
 ~[apache-cassandra-3.7.0.jar:3.7.0]
at 

[jira] [Updated] (CASSANDRA-12886) Streaming failed due to SSL Socket connection reset

2016-11-08 Thread Bing Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bing Wu updated CASSANDRA-12886:

Description: 
While running "nodetool repair", I see many instances of 
"javax.net.ssl.SSLException: java.net.SocketException: Connection reset" in 
system.logs on some nodes in the cluster. Timestamp corresponds to streaming 
source/initiator's error messages of "sync failed between ..."

Setup: 
- Cassandra 3.7.01 
- CentOS 6.7 in AWS (multi-region)
- JDK version: {noformat}
java version "1.8.0_102"
Java(TM) SE Runtime Environment (build 1.8.0_102-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.102-b14, mixed mode)
{noformat}
- cassandra.yaml:
{noformat}
server_encryption_options:
    internode_encryption: all
    keystore: [path]
    keystore_password: [password]
    truststore: [path]
    truststore_password: [password]
    # More advanced defaults below:
    # protocol: TLS
    # algorithm: SunX509
    # store_type: JKS
    # cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
    require_client_auth: false
{noformat}

Error messages in system.log on the target host:
{noformat}
ERROR [STREAM-OUT-/54.247.111.232:7001] 2016-11-07 07:30:56,475 
StreamSession.java:529 - [Stream #e14abcb0-a4bb-11e6-9758-55b9ac38b78e] 
Streaming error occurred on session with peer 54.247.111.232
javax.net.ssl.SSLException: Connection has been shutdown: 
javax.net.ssl.SSLException: java.net.SocketException: Connection reset
at sun.security.ssl.SSLSocketImpl.checkEOF(SSLSocketImpl.java:1541) 
~[na:1.8.0_102]
at sun.security.ssl.SSLSocketImpl.checkWrite(SSLSocketImpl.java:1553) 
~[na:1.8.0_102]
at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:71) 
~[na:1.8.0_102]
at 
java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) 
~[na:1.8.0_102]
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) 
~[na:1.8.0_102]
at 
org.apache.cassandra.io.util.WrappedDataOutputStreamPlus.flush(WrappedDataOutputStreamPlus.java:66)
 ~[apache-cassandra-3.7.0.jar:3.7.0]
at 
org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:371)
 [apache-cassandra-3.7.0.jar:3.7.0]
at 
org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:342)
 [apache-cassandra-3.7.0.jar:3.7.0]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_102]
Caused by: javax.net.ssl.SSLException: java.net.SocketException: Connection 
reset
{noformat}

  was:
While running "nodetool repair", I see many instances of 
"javax.net.ssl.SSLException: java.net.SocketException: Connection reset" in 
system.logs on some nodes in the cluster. Timestamp corresponds to streaming 
source/initiator's error messages of "sync failed between ..."

Setup: 
- Cassandra 3.7.01 
- CentOS 6.7 in AWS (multi-region)
- JDK version: {noformat}
java version "1.8.0_102"
Java(TM) SE Runtime Environment (build 1.8.0_102-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.102-b14, mixed mode)
{noformat}

cassandra.yaml:
{noformat}
server_encryption_options:
    internode_encryption: all
    keystore: [path]
    keystore_password: [password]
    truststore: [path]
    truststore_password: [password]
    # More advanced defaults below:
    # protocol: TLS
    # algorithm: SunX509
    # store_type: JKS
    # cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
    require_client_auth: false
{noformat}

Error messages in system.log on the target host:
{noformat}
ERROR [STREAM-OUT-/54.247.111.232:7001] 2016-11-07 07:30:56,475 
StreamSession.java:529 - [Stream #e14abcb0-a4bb-11e6-9758-55b9ac38b78e] 
Streaming error occurred on session with peer 54.247.111.232
javax.net.ssl.SSLException: Connection has been shutdown: 
javax.net.ssl.SSLException: java.net.SocketException: Connection reset
at sun.security.ssl.SSLSocketImpl.checkEOF(SSLSocketImpl.java:1541) 
~[na:1.8.0_102]
at sun.security.ssl.SSLSocketImpl.checkWrite(SSLSocketImpl.java:1553) 
~[na:1.8.0_102]
at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:71) 
~[na:1.8.0_102]
at 
java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) 
~[na:1.8.0_102]
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) 
~[na:1.8.0_102]
at 
org.apache.cassandra.io.util.WrappedDataOutputStreamPlus.flush(WrappedDataOutputStreamPlus.java:66)
 ~[apache-cassandra-3.7.0.jar:3.7.0]
at 

[jira] [Created] (CASSANDRA-12886) Streaming failed due to SSL Socket connection reset

2016-11-08 Thread Bing Wu (JIRA)
Bing Wu created CASSANDRA-12886:
---

 Summary: Streaming failed due to SSL Socket connection reset
 Key: CASSANDRA-12886
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12886
 Project: Cassandra
  Issue Type: Bug
Reporter: Bing Wu


While running "nodetool repair", I see many instances of 
"javax.net.ssl.SSLException: java.net.SocketException: Connection reset" in 
system.logs on some nodes in the cluster. The timestamps correspond to the 
streaming source/initiator's "sync failed between ..." error messages.

Setup: we are running 3.7.01 on CentOS 6.7 in AWS (multi-region)
jdk version: {noformat}
java version "1.8.0_102"
Java(TM) SE Runtime Environment (build 1.8.0_102-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.102-b14, mixed mode)
{noformat}

cassandra.yaml:
{noformat}
server_encryption_options:
internode_encryption: all
keystore: [path]
keystore_password: [password]
truststore: [path]
truststore_password: [password]
# More advanced defaults below:
# protocol: TLS
# algorithm: SunX509
# store_type: JKS
# cipher_suites: 
[TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
require_client_auth: false
{noformat}

Error messages in system.log on the target host:
{noformat}
ERROR [STREAM-OUT-/54.247.111.232:7001] 2016-11-07 07:30:56,475 
StreamSession.java:529 - [Stream #e14abcb0-a4bb-11e6-9758-55b9ac38b78e] 
Streaming error occurred on session with peer 54.247.111.232
javax.net.ssl.SSLException: Connection has been shutdown: 
javax.net.ssl.SSLException: java.net.SocketException: Connection reset
at sun.security.ssl.SSLSocketImpl.checkEOF(SSLSocketImpl.java:1541) 
~[na:1.8.0_102]
at sun.security.ssl.SSLSocketImpl.checkWrite(SSLSocketImpl.java:1553) 
~[na:1.8.0_102]
at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:71) 
~[na:1.8.0_102]
at 
java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) 
~[na:1.8.0_102]
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) 
~[na:1.8.0_102]
at 
org.apache.cassandra.io.util.WrappedDataOutputStreamPlus.flush(WrappedDataOutputStreamPlus.java:66)
 ~[apache-cassandra-3.7.0.jar:3.7.0]
at 
org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:371)
 [apache-cassandra-3.7.0.jar:3.7.0]
at 
org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:342)
 [apache-cassandra-3.7.0.jar:3.7.0]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_102]
Caused by: javax.net.ssl.SSLException: java.net.SocketException: Connection 
reset
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12273) Cassandra stress graph: option to create directory for graph if it doesn't exist

2016-11-08 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton updated CASSANDRA-12273:
--
Assignee: Murukesh Mohanan

> Cassandra stress graph: option to create directory for graph if it doesn't exist
> --
>
> Key: CASSANDRA-12273
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12273
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Christopher Batey
>Assignee: Murukesh Mohanan
>Priority: Minor
>  Labels: lhf
> Attachments: 12273.patch
>
>
> I am running it in CI with ephemeral workspace / build dirs. It would be 
> nice if cassandra-stress would create the directory so my build tool doesn't have to.
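
A minimal sketch of the idea (the file name is a placeholder; the actual 
change is in the attached 12273.patch): create any missing parent directories 
before writing the graph file.
{code}
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

// Sketch: ensure the output directory for the stress graph exists before
// writing to it. Files.createDirectories is a no-op if the path exists.
public class EnsureGraphDir
{
    public static void main(String[] args) throws IOException
    {
        File graphFile = new File("build/graphs/stress-graph.html"); // placeholder
        if (graphFile.toPath().getParent() != null)
            Files.createDirectories(graphFile.toPath().getParent());
    }
}
{code}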



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12820) testall failure in org.apache.cassandra.db.KeyspaceTest.testLimitSSTables-compression

2016-11-08 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12820:

Assignee: Branimir Lambov

> testall failure in 
> org.apache.cassandra.db.KeyspaceTest.testLimitSSTables-compression
> -
>
> Key: CASSANDRA-12820
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12820
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Branimir Lambov
>  Labels: test-failure
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.X_testall/38/testReport/org.apache.cassandra.db/KeyspaceTest/testLimitSSTables_compression/
> {code}
> Error Message
> expected:<5.0> but was:<6.0>
> {code}
> {code}
> Stacktrace
> junit.framework.AssertionFailedError: expected:<5.0> but was:<6.0>
>   at 
> org.apache.cassandra.db.KeyspaceTest.testLimitSSTables(KeyspaceTest.java:421)
> {code}
> {code}
> Standard Output
> ERROR [main] 2016-10-20 05:56:18,156 ?:? - SLF4J: stderr
> INFO  [main] 2016-10-20 05:56:18,516 ?:? - Configuration location: 
> file:/home/automaton/cassandra/test/conf/cassandra.yaml
> DEBUG [main] 2016-10-20 05:56:18,532 ?:? - Loading settings from 
> file:/home/automaton/cassandra/test/conf/cassandra.yaml
> INFO  [main] 2016-10-20 05:56:19,632 ?:? - Node 
> configuration:[allocate_tokens_for_keyspace=null; authenticator=null; 
> authorizer=null; auto_bootstrap=true; auto_snapshot=true; 
> back_pressure_enabled=f
> ...[truncated 453203 chars]...
> ableReader(path='/home/automaton/cassandra/build/test/cassandra/data:108/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-26-big-Data.db')]
>  (1 sstables, 6.278KiB), biggest 6.278KiB, smallest 6.278KiB
> DEBUG [MemtableFlushWriter:2] 2016-10-20 05:56:34,725 ?:? - Flushed to 
> [BigTableReader(path='/home/automaton/cassandra/build/test/cassandra/data:108/system/compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca/mc-22-big-Data.db')]
>  (1 sstables, 5.559KiB), biggest 5.559KiB, smallest 5.559KiB
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12819) testall failure in org.apache.cassandra.index.sasi.SASIIndexTest.testMultiExpressionQueriesWhereRowSplitBetweenSSTables

2016-11-08 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12819:

Assignee: Pavel Yaskevich

> testall failure in 
> org.apache.cassandra.index.sasi.SASIIndexTest.testMultiExpressionQueriesWhereRowSplitBetweenSSTables
> ---
>
> Key: CASSANDRA-12819
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12819
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Pavel Yaskevich
>  Labels: test-failure
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.X_testall/36/testReport/org.apache.cassandra.index.sasi/SASIIndexTest/testMultiExpressionQueriesWhereRowSplitBetweenSSTables/
> {code}
> Error Message
> Forked Java VM exited abnormally. Please note the time in the report does not 
> reflect the time until the VM exit.
> {code}
> {code}
> Stacktrace
> junit.framework.AssertionFailedError: Forked Java VM exited abnormally. 
> Please note the time in the report does not reflect the time until the VM 
> exit.
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12818) testall failure in org.apache.cassandra.db.compaction.LongLeveledCompactionStrategyTest.testParallelLeveledCompaction

2016-11-08 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12818:

Assignee: Yuki Morishita

> testall failure in 
> org.apache.cassandra.db.compaction.LongLeveledCompactionStrategyTest.testParallelLeveledCompaction
> -
>
> Key: CASSANDRA-12818
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12818
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Yuki Morishita
>  Labels: test-failure
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_testall/710/testReport/org.apache.cassandra.db.compaction/LongLeveledCompactionStrategyTest/testParallelLeveledCompaction/
> {code}
> Error Message
> Timeout occurred. Please note the time in the report does not reflect the 
> time until the timeout.
> {code}
> {code}
> Stacktrace
> junit.framework.AssertionFailedError: Timeout occurred. Please note the time 
> in the report does not reflect the time until the timeout.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12817) testall failure in org.apache.cassandra.cql3.validation.entities.UFTest.testAllNativeTypes

2016-11-08 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12817:

Assignee: Robert Stupp

> testall failure in 
> org.apache.cassandra.cql3.validation.entities.UFTest.testAllNativeTypes
> --
>
> Key: CASSANDRA-12817
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12817
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Robert Stupp
>  Labels: test-failure
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_testall/595/testReport/org.apache.cassandra.cql3.validation.entities/UFTest/testAllNativeTypes/
> {code}
> Error Message
> Timeout occurred. Please note the time in the report does not reflect the 
> time until the timeout.
> {code}
> {code}
> Stacktrace
> junit.framework.AssertionFailedError: Timeout occurred. Please note the time 
> in the report does not reflect the time until the timeout.
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12273) Cassandra stress graph: option to create directory for graph if it doesn't exist

2016-11-08 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton updated CASSANDRA-12273:
--
Assignee: (was: Christopher Batey)

> Cassandra stress graph: option to create directory for graph if it doesn't exist
> --
>
> Key: CASSANDRA-12273
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12273
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Christopher Batey
>Priority: Minor
>  Labels: lhf
> Attachments: 12273.patch
>
>
> I am running it in CI with ephemeral workspace / build dirs. It would be 
> nice if cassandra-stress would create the directory so my build tool doesn't have to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12811) testall failure in org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithOneClusteringColumns-compression

2016-11-08 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12811:

Assignee: Alex Petrov

> testall failure in 
> org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithOneClusteringColumns-compression
> 
>
> Key: CASSANDRA-12811
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12811
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Alex Petrov
>  Labels: test-failure
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.X_testall/34/testReport/org.apache.cassandra.cql3.validation.operations/DeleteTest/testDeleteWithOneClusteringColumns_compression/
> {code}
> Error Message
> Expected empty result but got 1 rows
> {code}
> {code}
> Stacktrace
> junit.framework.AssertionFailedError: Expected empty result but got 1 rows
>   at org.apache.cassandra.cql3.CQLTester.assertEmpty(CQLTester.java:1089)
>   at 
> org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithOneClusteringColumns(DeleteTest.java:463)
>   at 
> org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithOneClusteringColumns(DeleteTest.java:427)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12808) testall failure inorg.apache.cassandra.io.sstable.IndexSummaryManagerTest.testCancelIndex

2016-11-08 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12808:

Assignee: Sam Tunnicliffe

> testall failure 
> inorg.apache.cassandra.io.sstable.IndexSummaryManagerTest.testCancelIndex
> -
>
> Key: CASSANDRA-12808
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12808
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Sam Tunnicliffe
>  Labels: test-failure
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_testall/594/testReport/org.apache.cassandra.io.sstable/IndexSummaryManagerTest/testCancelIndex/
> {code}
> Error Message
> Expected compaction interrupted exception
> {code}
> {code}
> Stacktrace
> junit.framework.AssertionFailedError: Expected compaction interrupted 
> exception
>   at 
> org.apache.cassandra.io.sstable.IndexSummaryManagerTest.testCancelIndex(IndexSummaryManagerTest.java:641)
> {code}
> Related failure:
> http://cassci.datastax.com/job/cassandra-2.2_testall/600/testReport/org.apache.cassandra.io.sstable/IndexSummaryManagerTest/testCancelIndex_compression/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12832) SASI index corruption on too many overflow items

2016-11-08 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-12832:

Status: Open  (was: Patch Available)

> SASI index corruption on too many overflow items
> 
>
> Key: CASSANDRA-12832
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12832
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
>Reporter: Alex Petrov
>Assignee: Alex Petrov
>
> When SASI index has too many overflow items, it currently writes a corrupted 
> index file:
> {code}
> java.lang.AssertionError: cannot have more than 8 overflow collisions per 
> leaf, but had: 15
> at 
> org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder$Leaf.createOverflowEntry(AbstractTokenTreeBuilder.java:357)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder$Leaf.createEntry(AbstractTokenTreeBuilder.java:346)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.DynamicTokenTreeBuilder$DynamicLeaf.serializeData(DynamicTokenTreeBuilder.java:180)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder$Leaf.serialize(AbstractTokenTreeBuilder.java:306)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder.write(AbstractTokenTreeBuilder.java:90)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder$MutableDataBlock.flushAndClear(OnDiskIndexBuilder.java:629)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder$MutableLevel.flush(OnDiskIndexBuilder.java:446)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder$MutableLevel.finalFlush(OnDiskIndexBuilder.java:451)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder.finish(OnDiskIndexBuilder.java:296)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder.finish(OnDiskIndexBuilder.java:258)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder.finish(OnDiskIndexBuilder.java:241)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.PerSSTableIndexWriter$Index.lambda$scheduleSegmentFlush$0(PerSSTableIndexWriter.java:267)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.PerSSTableIndexWriter$Index.lambda$complete$1(PerSSTableIndexWriter.java:296)
>  ~[main/:na]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_91]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_91]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_91]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_91]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
> ERROR [MemtableFlushWriter:4] 2016-10-23 23:17:19,920 DataTracker.java:168 - 
> Can't open index file at , skipping.
> java.lang.IllegalArgumentException: position: -524200, limit: 12288
> at 
> org.apache.cassandra.index.sasi.utils.MappedBuffer.position(MappedBuffer.java:106)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndex.<init>(OnDiskIndex.java:155) 
> ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.SSTableIndex.<init>(SSTableIndex.java:62) 
> ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.conf.DataTracker.getIndexes(DataTracker.java:150)
>  [main/:na]
> at 
> org.apache.cassandra.index.sasi.conf.DataTracker.update(DataTracker.java:69) 
> [main/:na]
> at 
> org.apache.cassandra.index.sasi.conf.ColumnIndex.update(ColumnIndex.java:147) 
> [main/:na]
> at 
> org.apache.cassandra.index.sasi.SASIIndex.handleNotification(SASIIndex.java:320)
>  [main/:na]
> at 
> org.apache.cassandra.db.lifecycle.Tracker.notifyAdded(Tracker.java:421) 
> [main/:na]
> at 
> org.apache.cassandra.db.lifecycle.Tracker.replaceFlushed(Tracker.java:356) 
> [main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionStrategyManager.replaceFlushed(CompactionStrategyManager.java:317)
>  [main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.replaceFlushed(ColumnFamilyStore.java:1569)
>  [main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore$Flush.flushMemtable(ColumnFamilyStore.java:1197)
>  [main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1100)
>  [main/:na]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_91]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  

[jira] [Commented] (CASSANDRA-12832) SASI index corruption on too many overflow items

2016-11-08 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648636#comment-15648636
 ] 

Alex Petrov commented on CASSANDRA-12832:
-

True. Although that wouldn't even be a problem if not for the issue described 
in [CASSANDRA-12877]...

> SASI index corruption on too many overflow items
> 
>
> Key: CASSANDRA-12832
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12832
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
>Reporter: Alex Petrov
>Assignee: Alex Petrov
>
> When SASI index has too many overflow items, it currently writes a corrupted 
> index file:
> {code}
> java.lang.AssertionError: cannot have more than 8 overflow collisions per 
> leaf, but had: 15
> at 
> org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder$Leaf.createOverflowEntry(AbstractTokenTreeBuilder.java:357)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder$Leaf.createEntry(AbstractTokenTreeBuilder.java:346)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.DynamicTokenTreeBuilder$DynamicLeaf.serializeData(DynamicTokenTreeBuilder.java:180)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder$Leaf.serialize(AbstractTokenTreeBuilder.java:306)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder.write(AbstractTokenTreeBuilder.java:90)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder$MutableDataBlock.flushAndClear(OnDiskIndexBuilder.java:629)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder$MutableLevel.flush(OnDiskIndexBuilder.java:446)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder$MutableLevel.finalFlush(OnDiskIndexBuilder.java:451)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder.finish(OnDiskIndexBuilder.java:296)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder.finish(OnDiskIndexBuilder.java:258)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder.finish(OnDiskIndexBuilder.java:241)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.PerSSTableIndexWriter$Index.lambda$scheduleSegmentFlush$0(PerSSTableIndexWriter.java:267)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.PerSSTableIndexWriter$Index.lambda$complete$1(PerSSTableIndexWriter.java:296)
>  ~[main/:na]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_91]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_91]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_91]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_91]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
> ERROR [MemtableFlushWriter:4] 2016-10-23 23:17:19,920 DataTracker.java:168 - 
> Can't open index file at , skipping.
> java.lang.IllegalArgumentException: position: -524200, limit: 12288
> at 
> org.apache.cassandra.index.sasi.utils.MappedBuffer.position(MappedBuffer.java:106)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndex.<init>(OnDiskIndex.java:155) 
> ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.SSTableIndex.<init>(SSTableIndex.java:62) 
> ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.conf.DataTracker.getIndexes(DataTracker.java:150)
>  [main/:na]
> at 
> org.apache.cassandra.index.sasi.conf.DataTracker.update(DataTracker.java:69) 
> [main/:na]
> at 
> org.apache.cassandra.index.sasi.conf.ColumnIndex.update(ColumnIndex.java:147) 
> [main/:na]
> at 
> org.apache.cassandra.index.sasi.SASIIndex.handleNotification(SASIIndex.java:320)
>  [main/:na]
> at 
> org.apache.cassandra.db.lifecycle.Tracker.notifyAdded(Tracker.java:421) 
> [main/:na]
> at 
> org.apache.cassandra.db.lifecycle.Tracker.replaceFlushed(Tracker.java:356) 
> [main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionStrategyManager.replaceFlushed(CompactionStrategyManager.java:317)
>  [main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.replaceFlushed(ColumnFamilyStore.java:1569)
>  [main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore$Flush.flushMemtable(ColumnFamilyStore.java:1197)
>  [main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1100)
>  [main/:na]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_91]
>  

[jira] [Assigned] (CASSANDRA-12832) SASI index corruption on too many overflow items

2016-11-08 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov reassigned CASSANDRA-12832:
---

Assignee: (was: Alex Petrov)

> SASI index corruption on too many overflow items
> 
>
> Key: CASSANDRA-12832
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12832
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
>Reporter: Alex Petrov
>
> When SASI index has too many overflow items, it currently writes a corrupted 
> index file:
> {code}
> java.lang.AssertionError: cannot have more than 8 overflow collisions per 
> leaf, but had: 15
> at 
> org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder$Leaf.createOverflowEntry(AbstractTokenTreeBuilder.java:357)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder$Leaf.createEntry(AbstractTokenTreeBuilder.java:346)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.DynamicTokenTreeBuilder$DynamicLeaf.serializeData(DynamicTokenTreeBuilder.java:180)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder$Leaf.serialize(AbstractTokenTreeBuilder.java:306)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder.write(AbstractTokenTreeBuilder.java:90)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder$MutableDataBlock.flushAndClear(OnDiskIndexBuilder.java:629)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder$MutableLevel.flush(OnDiskIndexBuilder.java:446)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder$MutableLevel.finalFlush(OnDiskIndexBuilder.java:451)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder.finish(OnDiskIndexBuilder.java:296)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder.finish(OnDiskIndexBuilder.java:258)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder.finish(OnDiskIndexBuilder.java:241)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.PerSSTableIndexWriter$Index.lambda$scheduleSegmentFlush$0(PerSSTableIndexWriter.java:267)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.PerSSTableIndexWriter$Index.lambda$complete$1(PerSSTableIndexWriter.java:296)
>  ~[main/:na]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_91]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_91]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_91]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_91]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
> ERROR [MemtableFlushWriter:4] 2016-10-23 23:17:19,920 DataTracker.java:168 - 
> Can't open index file at , skipping.
> java.lang.IllegalArgumentException: position: -524200, limit: 12288
> at 
> org.apache.cassandra.index.sasi.utils.MappedBuffer.position(MappedBuffer.java:106)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndex.<init>(OnDiskIndex.java:155) 
> ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.SSTableIndex.<init>(SSTableIndex.java:62) 
> ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.conf.DataTracker.getIndexes(DataTracker.java:150)
>  [main/:na]
> at 
> org.apache.cassandra.index.sasi.conf.DataTracker.update(DataTracker.java:69) 
> [main/:na]
> at 
> org.apache.cassandra.index.sasi.conf.ColumnIndex.update(ColumnIndex.java:147) 
> [main/:na]
> at 
> org.apache.cassandra.index.sasi.SASIIndex.handleNotification(SASIIndex.java:320)
>  [main/:na]
> at 
> org.apache.cassandra.db.lifecycle.Tracker.notifyAdded(Tracker.java:421) 
> [main/:na]
> at 
> org.apache.cassandra.db.lifecycle.Tracker.replaceFlushed(Tracker.java:356) 
> [main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionStrategyManager.replaceFlushed(CompactionStrategyManager.java:317)
>  [main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.replaceFlushed(ColumnFamilyStore.java:1569)
>  [main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore$Flush.flushMemtable(ColumnFamilyStore.java:1197)
>  [main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1100)
>  [main/:na]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_91]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_91]
> at 

[jira] [Commented] (CASSANDRA-10446) Run repair with down replicas

2016-11-08 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648625#comment-15648625
 ] 

Paulo Motta commented on CASSANDRA-10446:
-

bq. doesn't CASSANDRA-6503 handle this issue?

Good point! I didn't recall this, so it is not as bad as I thought initially, 
but there is still at least one hairy scenario where things could go wrong:
{noformat}
A: unrepaired={1} repaired={}
B: unrepaired={2} repaired={}
C: unrepaired={3} repaired={}
{noformat}

During incremental repair, A sends key 1 to B and dies. B and C stream 
successfully. At the end of the failed repair session, things will look like:
{noformat}
A: unrepaired={1} repaired={}
B: unrepaired={2} repaired={1, 2, 3}
C: unrepaired={3} repaired={2, 3}
{noformat}

If A dies permanently before the next repair, key 1 will never be incrementally 
repaired between B and C. Likewise, if C dies, A will never get key 3 from B 
via incremental repair. Maybe this is such an edge case that it wouldn't 
justify a change per se, but if we defer setting repairedAt of streamed 
sstables to the anti-compaction phase, we could make this slightly more correct 
while supporting session-based --force repair without adding a new repairedAt 
field to {{SyncRequest}}.
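
A toy model of that window (plain Java; the node and key names are just the 
ones from the example above) makes the asymmetry explicit:
{code}
import java.util.*;

// Toy model of the failed session above: A streams key 1 to B and dies; B and
// C complete and mark what they received as repaired. Key 1 ends up repaired
// on B only, so B and C will never reconcile it incrementally, and A's copy
// stays unrepaired.
public class FailedSessionModel
{
    public static void main(String[] args)
    {
        Map<String, Set<Integer>> repaired = new HashMap<>();
        repaired.put("A", new HashSet<>());                       // A died mid-session
        repaired.put("B", new HashSet<>(Arrays.asList(1, 2, 3))); // got 1 from A, 3 from C
        repaired.put("C", new HashSet<>(Arrays.asList(2, 3)));    // never saw key 1
        repaired.forEach((node, keys) -> System.out.println(node + " repaired=" + keys));
    }
}
{code}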

> Run repair with down replicas
> -
>
> Key: CASSANDRA-10446
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10446
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Blake Eggleston
>Priority: Minor
> Fix For: 4.0
>
>
> We should have an option of running repair when replicas are down. We can 
> call it -force.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-9512) CqlTableTest.testCqlNativeStorageCollectionColumnTable failed in trunk

2016-11-08 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson resolved CASSANDRA-9512.

   Resolution: Not A Problem
Fix Version/s: (was: 3.x)

I don't even think we have this test anymore. Closing this.

> CqlTableTest.testCqlNativeStorageCollectionColumnTable failed in trunk
> --
>
> Key: CASSANDRA-9512
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9512
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Michael Shuler
>  Labels: test-failure
>
> Error:
> {{expected:<1> but was:<2>}}
> The trace shows:
> {noformat}
> java.io.IOException: java.lang.RuntimeException: failed to prepare cql query 
> update cql3ks.collectiontable set n = ? WHERE "m" = ?
>   at 
> org.apache.cassandra.hadoop.cql3.CqlRecordWriter$RangeClient.run(CqlRecordWriter.java:357)
>  ~[main/:na]
> {noformat}
> http://cassci.datastax.com/view/trunk/job/trunk_testall/123/testReport/junit/org.apache.cassandra.pig/CqlTableTest/testCqlNativeStorageCollectionColumnTable/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9512) CqlTableTest.testCqlNativeStorageCollectionColumnTable failed in trunk

2016-11-08 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648616#comment-15648616
 ] 

Joshua McKenzie commented on CASSANDRA-9512:


[~philipthompson] - is this still an issue and/or a duplicate at this point?

> CqlTableTest.testCqlNativeStorageCollectionColumnTable failed in trunk
> --
>
> Key: CASSANDRA-9512
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9512
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Michael Shuler
>  Labels: test-failure
> Fix For: 3.x
>
>
> Error:
> {{expected:<1> but was:<2>}}
> The trace shows:
> {noformat}
> java.io.IOException: java.lang.RuntimeException: failed to prepare cql query 
> update cql3ks.collectiontable set n = ? WHERE "m" = ?
>   at 
> org.apache.cassandra.hadoop.cql3.CqlRecordWriter$RangeClient.run(CqlRecordWriter.java:357)
>  ~[main/:na]
> {noformat}
> http://cassci.datastax.com/view/trunk/job/trunk_testall/123/testReport/junit/org.apache.cassandra.pig/CqlTableTest/testCqlNativeStorageCollectionColumnTable/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10446) Run repair with down replicas

2016-11-08 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648584#comment-15648584
 ] 

Blake Eggleston commented on CASSANDRA-10446:
-

In the {{--force}} case it doesn't, because {{RepairMessageVerbHandler}} will 
apply the repairedAt value computed at the beginning of the parent session, 
even if some nodes are being left out of the repair.

In the normal case, CASSANDRA-6503 helps, but the inconsistency is still 
possible because {{OnCompletionRunnable}} is run once a node has received all 
the files _it's_ expecting, which may be before other nodes involved in the 
repair have received all their data, so there can still be a failure in that 
window.

> Run repair with down replicas
> -
>
> Key: CASSANDRA-10446
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10446
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Blake Eggleston
>Priority: Minor
> Fix For: 4.0
>
>
> We should have an option of running repair when replicas are down. We can 
> call it -force.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12832) SASI index corruption on too many overflow items

2016-11-08 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648569#comment-15648569
 ] 

Pavel Yaskevich commented on CASSANDRA-12832:
-

[~ifesdjeen] I don't think it's a good idea to log instead of throwing an 
exception there, because throwing an exception gives a clear indication that 
the file is unusable, whereas logging would still make it look "ok" and 
loadable although data is going to be missing...

> SASI index corruption on too many overflow items
> 
>
> Key: CASSANDRA-12832
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12832
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
>Reporter: Alex Petrov
>Assignee: Alex Petrov
>
> When SASI index has too many overflow items, it currently writes a corrupted 
> index file:
> {code}
> java.lang.AssertionError: cannot have more than 8 overflow collisions per 
> leaf, but had: 15
> at 
> org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder$Leaf.createOverflowEntry(AbstractTokenTreeBuilder.java:357)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder$Leaf.createEntry(AbstractTokenTreeBuilder.java:346)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.DynamicTokenTreeBuilder$DynamicLeaf.serializeData(DynamicTokenTreeBuilder.java:180)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder$Leaf.serialize(AbstractTokenTreeBuilder.java:306)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder.write(AbstractTokenTreeBuilder.java:90)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder$MutableDataBlock.flushAndClear(OnDiskIndexBuilder.java:629)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder$MutableLevel.flush(OnDiskIndexBuilder.java:446)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder$MutableLevel.finalFlush(OnDiskIndexBuilder.java:451)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder.finish(OnDiskIndexBuilder.java:296)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder.finish(OnDiskIndexBuilder.java:258)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder.finish(OnDiskIndexBuilder.java:241)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.PerSSTableIndexWriter$Index.lambda$scheduleSegmentFlush$0(PerSSTableIndexWriter.java:267)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.PerSSTableIndexWriter$Index.lambda$complete$1(PerSSTableIndexWriter.java:296)
>  ~[main/:na]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_91]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_91]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_91]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_91]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
> ERROR [MemtableFlushWriter:4] 2016-10-23 23:17:19,920 DataTracker.java:168 - 
> Can't open index file at , skipping.
> java.lang.IllegalArgumentException: position: -524200, limit: 12288
> at 
> org.apache.cassandra.index.sasi.utils.MappedBuffer.position(MappedBuffer.java:106)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndex.<init>(OnDiskIndex.java:155) 
> ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.SSTableIndex.<init>(SSTableIndex.java:62) 
> ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.conf.DataTracker.getIndexes(DataTracker.java:150)
>  [main/:na]
> at 
> org.apache.cassandra.index.sasi.conf.DataTracker.update(DataTracker.java:69) 
> [main/:na]
> at 
> org.apache.cassandra.index.sasi.conf.ColumnIndex.update(ColumnIndex.java:147) 
> [main/:na]
> at 
> org.apache.cassandra.index.sasi.SASIIndex.handleNotification(SASIIndex.java:320)
>  [main/:na]
> at 
> org.apache.cassandra.db.lifecycle.Tracker.notifyAdded(Tracker.java:421) 
> [main/:na]
> at 
> org.apache.cassandra.db.lifecycle.Tracker.replaceFlushed(Tracker.java:356) 
> [main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionStrategyManager.replaceFlushed(CompactionStrategyManager.java:317)
>  [main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.replaceFlushed(ColumnFamilyStore.java:1569)
>  [main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore$Flush.flushMemtable(ColumnFamilyStore.java:1197)
>  [main/:na]
> at 
> 

[jira] [Updated] (CASSANDRA-12651) Failure in SecondaryIndexTest.testAllowFilteringOnPartitionKeyWithSecondaryIndex

2016-11-08 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12651:

Labels: test-failure  (was: )

> Failure in 
> SecondaryIndexTest.testAllowFilteringOnPartitionKeyWithSecondaryIndex
> 
>
> Key: CASSANDRA-12651
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12651
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Joel Knighton
>Assignee: Alex Petrov
>  Labels: test-failure
>
> This has failed with/without compression.
> Stacktrace:
> {code}
> junit.framework.AssertionFailedError: Got less rows than expected. Expected 2 
> but got 0
>   at org.apache.cassandra.cql3.CQLTester.assertRows(CQLTester.java:909)
>   at 
> org.apache.cassandra.cql3.validation.entities.SecondaryIndexTest.lambda$testAllowFilteringOnPartitionKeyWithSecondaryIndex$78(SecondaryIndexTest.java:1228)
>   at 
> org.apache.cassandra.cql3.validation.entities.SecondaryIndexTest$$Lambda$293/218688965.apply(Unknown
>  Source)
>   at 
> org.apache.cassandra.cql3.CQLTester.beforeAndAfterFlush(CQLTester.java:1215)
>   at 
> org.apache.cassandra.cql3.validation.entities.SecondaryIndexTest.testAllowFilteringOnPartitionKeyWithSecondaryIndex(SecondaryIndexTest.java:1218)
> {code}
> Examples:
> http://cassci.datastax.com/job/trunk_testall/1176/testReport/org.apache.cassandra.cql3.validation.entities/SecondaryIndexTest/testAllowFilteringOnPartitionKeyWithSecondaryIndex/
> http://cassci.datastax.com/job/trunk_testall/1176/testReport/org.apache.cassandra.cql3.validation.entities/SecondaryIndexTest/testAllowFilteringOnPartitionKeyWithSecondaryIndex_compression/
> http://cassci.datastax.com/job/trunk_testall/1219/testReport/org.apache.cassandra.cql3.validation.entities/SecondaryIndexTest/testAllowFilteringOnPartitionKeyWithSecondaryIndex/
> http://cassci.datastax.com/job/trunk_testall/1216/testReport/org.apache.cassandra.cql3.validation.entities/SecondaryIndexTest/testAllowFilteringOnPartitionKeyWithSecondaryIndex/
> http://cassci.datastax.com/job/trunk_testall/1208/testReport/org.apache.cassandra.cql3.validation.entities/SecondaryIndexTest/testAllowFilteringOnPartitionKeyWithSecondaryIndex/
> http://cassci.datastax.com/job/trunk_testall/1176/testReport/org.apache.cassandra.cql3.validation.entities/SecondaryIndexTest/testAllowFilteringOnPartitionKeyWithSecondaryIndex/
> http://cassci.datastax.com/job/trunk_testall/1175/testReport/org.apache.cassandra.cql3.validation.entities/SecondaryIndexTest/testAllowFilteringOnPartitionKeyWithSecondaryIndex/
> May or may not be related, but there's a test failure (index duplicate):
> http://cassci.datastax.com/view/Dev/view/carlyeks/job/carlyeks-ticket-11803-3.X-testall/lastCompletedBuild/testReport/org.apache.cassandra.index.internal/CassandraIndexTest/indexOnFirstClusteringColumn_compression/
> http://cassci.datastax.com/job/ifesdjeen-11803-test-fix-trunk-testall/1/testReport/junit/org.apache.cassandra.index.internal/CassandraIndexTest/indexOnFirstClusteringColumn_compression/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[cassandra] Git Push Summary

2016-11-08 Thread mshuler
Repository: cassandra
Updated Tags:  refs/tags/3.10-tentative [created] 072b5271a


[cassandra] Git Push Summary

2016-11-08 Thread mshuler
Repository: cassandra
Updated Tags:  refs/tags/3.0.10-tentative [created] 4e0bced5e


[jira] [Updated] (CASSANDRA-12676) Message coalescing regression

2016-11-08 Thread Nate McCall (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nate McCall updated CASSANDRA-12676:

Reproduced In: 2.2.x, 3.0.x, 3.x

> Message coalescing regression
> -
>
> Key: CASSANDRA-12676
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12676
> Project: Cassandra
>  Issue Type: Bug
>Reporter: T Jake Luciani
>  Labels: docs-impacting
> Fix For: 4.0, 3.x
>
>
> The default in 2.2+ was to enable TIMEHORIZON message coalescing.  After 
> reports of performance regressions after upgrading from 2.1 to 2.2/3.0 we 
> have discovered the issue to be this default.
> We need to re-test our assumptions on this feature but in the meantime we 
> should default back to disabled.
> Here is a performance run [with and without message 
> coalescing|http://cstar.datastax.com/graph?command=one_job=9a26b5f2-7f48-11e6-92e7-0256e416528f=op_rate=2_user=1_aggregates=true=0=508.86=0=91223]
>  
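
Until the default changes, coalescing can be disabled per node via the 
existing cassandra.yaml setting:
{noformat}
otc_coalescing_strategy: DISABLED
{noformat}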



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12676) Message coalescing regression

2016-11-08 Thread Nate McCall (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nate McCall updated CASSANDRA-12676:

Fix Version/s: (was: 3.0.x)
   (was: 2.2.x)
   3.x
   4.0

> Message coalescing regression
> -
>
> Key: CASSANDRA-12676
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12676
> Project: Cassandra
>  Issue Type: Bug
>Reporter: T Jake Luciani
>  Labels: docs-impacting
> Fix For: 4.0, 3.x
>
>
> The default in 2.2+ was to enable TIMEHORIZON message coalescing.  After 
> reports of performance regressions after upgrading from 2.1 to 2.2/3.0 we 
> have discovered the issue to be this default.
> We need to re-test our assumptions on this feature but in the meantime we 
> should default back to disabled.
> Here is a performance run [with and without message 
> coalescing|http://cstar.datastax.com/graph?command=one_job=9a26b5f2-7f48-11e6-92e7-0256e416528f=op_rate=2_user=1_aggregates=true=0=508.86=0=91223]
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10446) Run repair with down replicas

2016-11-08 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648431#comment-15648431
 ] 

Marcus Eriksson commented on CASSANDRA-10446:
-

doesn't CASSANDRA-6503 handle this issue?

> Run repair with down replicas
> -
>
> Key: CASSANDRA-10446
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10446
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Blake Eggleston
>Priority: Minor
> Fix For: 4.0
>
>
> We should have an option of running repair when replicas are down. We can 
> call it -force.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[4/5] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.X

2016-11-08 Thread mshuler
Merge branch 'cassandra-3.0' into cassandra-3.X


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/072b5271
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/072b5271
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/072b5271

Branch: refs/heads/cassandra-3.X
Commit: 072b5271a88328b909b230d0e30df1c7476fdb3f
Parents: 8ae3139 4e0bced
Author: Michael Shuler 
Authored: Tue Nov 8 12:42:44 2016 -0600
Committer: Michael Shuler 
Committed: Tue Nov 8 12:42:44 2016 -0600

--

--




[3/5] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.X

2016-11-08 Thread mshuler
Merge branch 'cassandra-3.0' into cassandra-3.X


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/072b5271
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/072b5271
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/072b5271

Branch: refs/heads/trunk
Commit: 072b5271a88328b909b230d0e30df1c7476fdb3f
Parents: 8ae3139 4e0bced
Author: Michael Shuler 
Authored: Tue Nov 8 12:42:44 2016 -0600
Committer: Michael Shuler 
Committed: Tue Nov 8 12:42:44 2016 -0600

--

--




[1/5] cassandra git commit: Add 3.0.10 NEWS section

2016-11-08 Thread mshuler
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.X 8ae31392d -> 072b5271a
  refs/heads/trunk 1eea75bcc -> 0d813fe88


Add 3.0.10 NEWS section


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4e0bced5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4e0bced5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4e0bced5

Branch: refs/heads/cassandra-3.X
Commit: 4e0bced5e6a82ebd22b074b8ef96d930c5f3159d
Parents: 472f616
Author: Michael Shuler 
Authored: Tue Nov 8 12:21:49 2016 -0600
Committer: Michael Shuler 
Committed: Tue Nov 8 12:21:49 2016 -0600

--
 NEWS.txt | 8 ++++++++
 1 file changed, 8 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4e0bced5/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 0bd3920..8f05c4b 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -13,6 +13,14 @@ restore snapshots created with the previous major version 
using the
 'sstableloader' tool. You can upgrade the file format of your snapshots
 using the provided 'sstableupgrade' tool.
 
+3.0.10
+=
+
+Upgrading
+-
+   - Nothing specific to this release, but please see previous versions 
upgrading section,
+ especially if you are upgrading from 2.2.
+
 3.0.9
 =
 



[5/5] cassandra git commit: Merge branch 'cassandra-3.X' into trunk

2016-11-08 Thread mshuler
Merge branch 'cassandra-3.X' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0d813fe8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0d813fe8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0d813fe8

Branch: refs/heads/trunk
Commit: 0d813fe885b61dbe4e84a04e93a72e0ee798d7f5
Parents: 1eea75b 072b527
Author: Michael Shuler 
Authored: Tue Nov 8 12:43:05 2016 -0600
Committer: Michael Shuler 
Committed: Tue Nov 8 12:43:05 2016 -0600

--

--




[2/5] cassandra git commit: Add 3.0.10 NEWS section

2016-11-08 Thread mshuler
Add 3.0.10 NEWS section


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4e0bced5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4e0bced5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4e0bced5

Branch: refs/heads/trunk
Commit: 4e0bced5e6a82ebd22b074b8ef96d930c5f3159d
Parents: 472f616
Author: Michael Shuler 
Authored: Tue Nov 8 12:21:49 2016 -0600
Committer: Michael Shuler 
Committed: Tue Nov 8 12:21:49 2016 -0600

--
 NEWS.txt | 8 ++++++++
 1 file changed, 8 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4e0bced5/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 0bd3920..8f05c4b 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -13,6 +13,14 @@ restore snapshots created with the previous major version 
using the
 'sstableloader' tool. You can upgrade the file format of your snapshots
 using the provided 'sstableupgrade' tool.
 
+3.0.10
+=
+
+Upgrading
+-
+   - Nothing specific to this release, but please see previous versions 
upgrading section,
+ especially if you are upgrading from 2.2.
+
 3.0.9
 =
 



cassandra git commit: Add 3.0.10 NEWS section

2016-11-08 Thread mshuler
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 472f61613 -> 4e0bced5e


Add 3.0.10 NEWS section


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4e0bced5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4e0bced5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4e0bced5

Branch: refs/heads/cassandra-3.0
Commit: 4e0bced5e6a82ebd22b074b8ef96d930c5f3159d
Parents: 472f616
Author: Michael Shuler 
Authored: Tue Nov 8 12:21:49 2016 -0600
Committer: Michael Shuler 
Committed: Tue Nov 8 12:21:49 2016 -0600

--
 NEWS.txt | 8 ++++++++
 1 file changed, 8 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4e0bced5/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 0bd3920..8f05c4b 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -13,6 +13,14 @@ restore snapshots created with the previous major version 
using the
 'sstableloader' tool. You can upgrade the file format of your snapshots
 using the provided 'sstableupgrade' tool.
 
+3.0.10
+=
+
+Upgrading
+-
+   - Nothing specific to this release, but please see previous versions 
upgrading section,
+ especially if you are upgrading from 2.2.
+
 3.0.9
 =
 



[jira] [Commented] (CASSANDRA-12730) Thousands of empty SSTables created during repair - TMOF death

2016-11-08 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648353#comment-15648353
 ] 

Blake Eggleston commented on CASSANDRA-12730:
-

[~pauloricardomg], CASSANDRA-9143 will properly isolate repaired, 
repair-in-progress, and unrepaired data for normal tables. I'm not familiar 
with the details of how MVs work, but looking at [the relevant parts of 
StreamReceiveTask|https://github.com/bdeggleston/cassandra/blob/de86ccf3a3b21e406a3e337019c2197bf15d8053/src/java/org/apache/cassandra/streaming/StreamReceiveTask.java#L185-L185],
 it _looks_ like repairedAt value on the incoming sstables is basically 
discarded, which would explain why [~brstgt] hasn't had much luck using 
incremental repairs with them. So yeah, for MVs repairedAt values (and 
pendingRepair value added in CASSANDRA-9143)  will probably need to be added to 
the mutation class or something and taken into consideration when flushed.

> Thousands of empty SSTables created during repair - TMOF death
> --
>
> Key: CASSANDRA-12730
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12730
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Benjamin Roth
>Priority: Critical
>
> Last night I ran a repair on a keyspace with 7 tables and 4 MVs, each 
> containing a few hundred million records. After a few hours a node died 
> because of "too many open files".
> Normally one would just raise the limit, but: we already set this to 100k. 
> The problem was that the repair created roughly over 100k SSTables for a 
> certain MV. The strange thing is that these SSTables had almost no data (like 
> 53 bytes, 90 bytes, ...). Some of them (<5%) had a few hundred KB, and very 
> few (<1%) had normal sizes like >= a few MB. I could understand that SSTables 
> queue up as they are flushed and are not compacted in time, but then they 
> should have at least a few MB (depending on config and available memory), 
> right?
> Of course the node then runs out of FDs, and I guess it is not a good idea to 
> raise the limit even higher, as I expect that this would just create even more 
> empty SSTables before dying at last.
> Only 1 CF (MV) was affected. All other CFs (also MVs) behave sanely. Empty 
> SSTables have been created evenly over time, 100-150 every minute. Among the 
> empty SSTables there are also tables that look normal, having a few MBs.
> I didn't see any errors or exceptions in the logs until TMOF occurred, just 
> tons of streams due to the repair (which I actually run via cs-reaper as 
> subrange, full repairs).
> After having restarted that node (and with no more repair running), the 
> number of SSTables went down again as they were compacted away slowly.
> According to [~zznate] this issue may relate to CASSANDRA-10342 + 
> CASSANDRA-8641



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9143) Improving consistency of repairAt field across replicas

2016-11-08 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648298#comment-15648298
 ] 

Blake Eggleston commented on CASSANDRA-9143:


Just wanted to point out that [~pauloricardomg] found another source of 
repaired data inconsistency in CASSANDRA-10446. Since streamed data includes 
the repairedAt value for the in-progress session, if the session fails, it's 
possible that a node will consider data repaired that another node may have 
never seen.

> Improving consistency of repairAt field across replicas 
> 
>
> Key: CASSANDRA-9143
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9143
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Blake Eggleston
>Priority: Minor
>
> We currently send an anticompaction request to all replicas. During this, a 
> node will split sstables and mark the appropriate ones repaired. 
> The problem is that this could fail on some replicas for many reasons, 
> leading to problems in the next repair. 
> This is what I am suggesting to improve it: 
> 1) Send an anticompaction request to all replicas. This can be done at session 
> level. 
> 2) During anticompaction, sstables are split but not marked repaired. 
> 3) When we get a positive ack from all replicas, the coordinator will send another 
> message called markRepaired. 
> 4) On getting this message, replicas will mark the appropriate sstables as 
> repaired. 
> This will reduce the window of failure. We can also think of "hinting" the 
> markRepaired message if required. 
> Also the sstables which are streamed can be marked as repaired like it is 
> done now. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12877) SASI index throwing AssertionError on creation/flush

2016-11-08 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648205#comment-15648205
 ] 

Alex Petrov commented on CASSANDRA-12877:
-

I'm reopening this: after giving it more thought, I realised that the problem 
with overflows is much deeper than I initially thought. In the previous 
version, a collision would happen only when there were multiple offsets per 
token (which would happen on a Murmur hash collision): a long set holds the 
partition positions, making sure we track unique partition positions per 
sstable and create overflow entries only when there is more than one 
partition position per token.

The introduction of the row offset now means that, when there are many equal 
items in the same partition, we reach the overflow limit. Unfortunately, 
there's no quick fix for that, although we have already been discussing one 
of the changes that would solve this problem.
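
Roughly, the accounting looks like this (an illustrative sketch with invented 
structures, not SASI's actual AbstractTokenTreeBuilder; the constant matches 
the assertion in the description below):

{code}
// Illustrative only: why per-row offsets can blow the per-leaf overflow cap.
import java.util.*;

class OverflowSketch
{
    static final int MAX_OVERFLOW_PER_LEAF = 8;

    public static void main(String[] args)
    {
        // Previously, a token rarely had more than one position: that
        // required two partitions hashing to the same Murmur3 token.
        Map<Long, List<Long>> offsetsPerToken = new HashMap<>();

        // With row offsets, 9 rows in one partition share a single token:
        long token = 42L;
        for (long rowOffset = 0; rowOffset < 9; rowOffset++)
            offsetsPerToken.computeIfAbsent(token, t -> new ArrayList<>()).add(rowOffset);

        // Multiple positions per token spill into the leaf's overflow area,
        // which is capped at 8 -- the repro's 9 rows yield "had: 9".
        int overflow = offsetsPerToken.get(token).size();
        if (overflow > 1 && overflow > MAX_OVERFLOW_PER_LEAF)
            throw new AssertionError("cannot have more than " + MAX_OVERFLOW_PER_LEAF +
                                     " overflow collisions per leaf, but had: " + overflow);
    }
}
{code}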

> SASI index throwing AssertionError on creation/flush
> 
>
> Key: CASSANDRA-12877
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12877
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
> Environment: 3.9 and 3.10 tested on both linux and osx
>Reporter: Voytek Jarnot
>Assignee: Alex Petrov
>
> Possibly a 3.10 regression?  The exact test shown below does not error in 3.9.
> I built and installed a 3.10 snapshot (built 04-Nov-2016) to get around 
> CASSANDRA-11670, CASSANDRA-12689, and CASSANDRA-12223 which are holding me 
> back when using 3.9.
> Now I'm able to make nodetool flush (or a scheduled flush) produce an 
> unhandled error easily with a SASI:
> {code}
> CREATE KEYSPACE vjtest WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': '1'};
> use vjtest ;
> create table tester(id1 text, id2 text, id3 text, val1 text, primary 
> key((id1, id2), id3));
> create custom index tester_idx_val1 on tester(val1) using 
> 'org.apache.cassandra.index.sasi.SASIIndex';
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','1-3','asdf');
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','2-3','asdf');
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','3-3','asdf');
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','4-3','asdf');
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','5-3','asdf');
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','6-3','asdf');
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','7-3','asdf');
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','8-3','asdf');
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','9-3','asdf');
> {code}
> Not enough going on here to trigger a flush, so following a manual {{nodetool 
> flush vjtest}} I get the following in {{system.log}}:
> {code}
> INFO  [MemtableFlushWriter:3] 2016-11-04 22:19:35,412 
> PerSSTableIndexWriter.java:284 - Scheduling index flush to 
> /mydir/apache-cassandra-3.10-SNAPSHOT/data/data/vjtest/tester-6f1fdff0a30611e692c087673c5ef8d4/mc-1-big-SI_tester_idx_val1.db
> INFO  [SASI-Memtable:1] 2016-11-04 22:19:35,447 
> PerSSTableIndexWriter.java:335 - Index flush to 
> /mydir/apache-cassandra-3.10-SNAPSHOT/data/data/vjtest/tester-6f1fdff0a30611e692c087673c5ef8d4/mc-1-big-SI_tester_idx_val1.db
>  took 16 ms.
> ERROR [SASI-Memtable:1] 2016-11-04 22:19:35,449 CassandraDaemon.java:229 - 
> Exception in thread Thread[SASI-Memtable:1,5,RMI Runtime]
> java.lang.AssertionError: cannot have more than 8 overflow collisions per 
> leaf, but had: 9
> at 
> org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder$Leaf.createOverflowEntry(AbstractTokenTreeBuilder.java:357)
>  ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
> at 
> org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder$Leaf.createEntry(AbstractTokenTreeBuilder.java:346)
>  ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
> at 
> org.apache.cassandra.index.sasi.disk.DynamicTokenTreeBuilder$DynamicLeaf.serializeData(DynamicTokenTreeBuilder.java:180)
>  ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
> at 
> org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder$Leaf.serialize(AbstractTokenTreeBuilder.java:306)
>  ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
> at 
> org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder.write(AbstractTokenTreeBuilder.java:90)
>  ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder$MutableDataBlock.flushAndClear(OnDiskIndexBuilder.java:629)
>  ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder$MutableLevel.flush(OnDiskIndexBuilder.java:446)
>  

[jira] [Updated] (CASSANDRA-12283) CommitLogSegmentManagerTest.testCompressedCommitLogBackpressure is flaky

2016-11-08 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-12283:
---
Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

The fix for {{Util.spinAssertEquals}} has been committed into 2.2 at 
3de6e9d327fc13cdb1b81cec918ab90a1a524fbe and merged into 3.0 and 3.X. The fix 
for the test has been committed into 3.X at 
8ae31392d66a9004b01bc40a267f0c8b34fc028f and merged into trunk. 

> CommitLogSegmentManagerTest.testCompressedCommitLogBackpressure is flaky
> 
>
> Key: CASSANDRA-12283
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12283
> Project: Cassandra
>  Issue Type: Test
>Reporter: Joshua McKenzie
>Assignee: Benjamin Lerer
>Priority: Minor
>  Labels: unittest
>
> Failed 3 of the last 38 runs.
> [Failure|http://cassci.datastax.com/job/cassandra-3.9_testall/lastCompletedBuild/testReport/org.apache.cassandra.db.commitlog/CommitLogSegmentManagerTest/testCompressedCommitLogBackpressure/]
> Details:
> Error Message
> Timeout occurred. Please note the time in the report does not reflect the 
> time until the timeout.
> Stacktrace
> junit.framework.AssertionFailedError: Timeout occurred. Please note the time 
> in the report does not reflect the time until the timeout.
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/5] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2016-11-08 Thread blerer
Merge branch cassandra-2.2 into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/472f6161
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/472f6161
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/472f6161

Branch: refs/heads/trunk
Commit: 472f61613e4bf2a2f492f49300e6e3d06c5ad728
Parents: 92594d8 3de6e9d
Author: Benjamin Lerer 
Authored: Tue Nov 8 18:01:55 2016 +0100
Committer: Benjamin Lerer 
Committed: Tue Nov 8 18:01:55 2016 +0100

--
 CHANGES.txt  | 1 +
 test/unit/org/apache/cassandra/Util.java | 6 --
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/472f6161/CHANGES.txt
--
diff --cc CHANGES.txt
index d3043b8,b550885..cc5b003
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,35 -1,5 +1,36 @@@
 -2.2.9
 +3.0.10
 + * Batch with multiple conditional updates for the same partition causes 
AssertionError (CASSANDRA-12867)
 + * Make AbstractReplicationStrategy extendable from outside its package 
(CASSANDRA-12788)
 + * Fix CommitLogTest.testDeleteIfNotDirty (CASSANDRA-12854)
 + * Don't tell users to turn off consistent rangemovements during rebuild. 
(CASSANDRA-12296)
 + * Avoid deadlock due to materialized view lock contention (CASSANDRA-12689)
 + * Fix for KeyCacheCqlTest flakiness (CASSANDRA-12801)
 + * Include SSTable filename in compacting large row message (CASSANDRA-12384)
 + * Fix potential socket leak (CASSANDRA-12329, CASSANDRA-12330)
 + * Fix ViewTest.testCompaction (CASSANDRA-12789)
 + * Improve avg aggregate functions (CASSANDRA-12417)
 + * Preserve quoted reserved keyword column names in MV creation 
(CASSANDRA-11803)
 + * nodetool stopdaemon errors out (CASSANDRA-12646)
 + * Split materialized view mutations on build to prevent OOM (CASSANDRA-12268)
 + * mx4j does not work in 3.0.8 (CASSANDRA-12274)
 + * Abort cqlsh copy-from in case of no answer after prolonged period of time 
(CASSANDRA-12740)
 + * Avoid sstable corrupt exception due to dropped static column 
(CASSANDRA-12582)
 + * Make stress use client mode to avoid checking commit log size on startup 
(CASSANDRA-12478)
 + * Fix exceptions with new vnode allocation (CASSANDRA-12715)
 + * Unify drain and shutdown processes (CASSANDRA-12509)
 + * Fix NPE in ComponentOfSlice.isEQ() (CASSANDRA-12706)
 + * Fix failure in LogTransactionTest (CASSANDRA-12632)
 + * Fix potentially incomplete non-frozen UDT values when querying with the
 +   full primary key specified (CASSANDRA-12605)
 + * Skip writing MV mutations to commitlog on mutation.applyUnsafe() 
(CASSANDRA-11670)
 + * Establish consistent distinction between non-existing partition and NULL 
value for LWTs on static columns (CASSANDRA-12060)
 + * Extend ColumnIdentifier.internedInstances key to include the type that 
generated the byte buffer (CASSANDRA-12516)
 + * Backport CASSANDRA-10756 (race condition in NativeTransportService 
shutdown) (CASSANDRA-12472)
 + * If CF has no clustering columns, any row cache is full partition cache 
(CASSANDRA-12499)
 + * Correct log message for statistics of offheap memtable flush 
(CASSANDRA-12776)
 + * Explicitly set locale for string validation 
(CASSANDRA-12541,CASSANDRA-12542,CASSANDRA-12543,CASSANDRA-12545)
 +Merged from 2.2:
+  * Fix Util.spinAssertEquals (CASSANDRA-12283)
   * Fix potential NPE for compactionstats (CASSANDRA-12462)
   * Prepare legacy authenticate statement if credentials table initialised 
after node startup (CASSANDRA-12813)
   * Change cassandra.wait_for_tracing_events_timeout_secs default to 0 
(CASSANDRA-12754)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/472f6161/test/unit/org/apache/cassandra/Util.java
--
diff --cc test/unit/org/apache/cassandra/Util.java
index d04ca9b,f6b4771..e8b42bc
--- a/test/unit/org/apache/cassandra/Util.java
+++ b/test/unit/org/apache/cassandra/Util.java
@@@ -28,29 -27,21 +28,31 @@@ import java.nio.ByteBuffer
  import java.util.*;
  import java.util.concurrent.Callable;
  import java.util.concurrent.Future;
 +import java.util.concurrent.atomic.AtomicBoolean;
 +import java.util.function.Supplier;
  
 -import com.google.common.base.Supplier;
 +import com.google.common.base.Function;
 +import com.google.common.base.Preconditions;
 +import com.google.common.collect.Iterators;
 +import org.apache.commons.lang3.StringUtils;
 +
 +import org.apache.cassandra.config.CFMetaData;
 +import org.apache.cassandra.config.ColumnDefinition;
 +import org.apache.cassandra.config.DatabaseDescriptor;
 +import org.apache.cassandra.cql3.ColumnIdentifier;
+ 
  import org.apache.cassandra.db.*;
 -import 

[1/5] cassandra git commit: Fix Util.spinAssertEquals

2016-11-08 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/trunk b6cb2ab6b -> 1eea75bcc


Fix Util.spinAssertEquals

patch by Benjamin Lerer; reviewed by Joshua McKenzie for CASSANDRA-12283


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3de6e9d3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3de6e9d3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3de6e9d3

Branch: refs/heads/trunk
Commit: 3de6e9d327fc13cdb1b81cec918ab90a1a524fbe
Parents: cbebb29
Author: Benjamin Lerer 
Authored: Tue Nov 8 17:53:27 2016 +0100
Committer: Benjamin Lerer 
Committed: Tue Nov 8 17:53:27 2016 +0100

--
 CHANGES.txt  |  1 +
 test/unit/org/apache/cassandra/Util.java | 23 +++
 2 files changed, 4 insertions(+), 20 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3de6e9d3/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9d328ae..b550885 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.9
+ * Fix Util.spinAssertEquals (CASSANDRA-12283)
  * Fix potential NPE for compactionstats (CASSANDRA-12462)
  * Prepare legacy authenticate statement if credentials table initialised 
after node startup (CASSANDRA-12813)
  * Change cassandra.wait_for_tracing_events_timeout_secs default to 0 
(CASSANDRA-12754)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3de6e9d3/test/unit/org/apache/cassandra/Util.java
--
diff --git a/test/unit/org/apache/cassandra/Util.java 
b/test/unit/org/apache/cassandra/Util.java
index 91aa5fd..f6b4771 100644
--- a/test/unit/org/apache/cassandra/Util.java
+++ b/test/unit/org/apache/cassandra/Util.java
@@ -24,18 +24,12 @@ import java.io.*;
 import java.net.InetAddress;
 import java.net.UnknownHostException;
 import java.nio.ByteBuffer;
-import java.nio.channels.FileChannel;
 import java.util.*;
 import java.util.concurrent.Callable;
 import java.util.concurrent.Future;
 
 import com.google.common.base.Supplier;
-import com.google.common.collect.ImmutableMap;
-import com.google.common.collect.ImmutableSet;
 
-import org.apache.cassandra.cache.CachingOptions;
-import org.apache.cassandra.config.CFMetaData;
-import org.apache.cassandra.config.KSMetaData;
 import org.apache.cassandra.db.*;
 import org.apache.cassandra.db.composites.*;
 import org.apache.cassandra.db.compaction.AbstractCompactionTask;
@@ -46,27 +40,16 @@ import org.apache.cassandra.db.filter.QueryFilter;
 import org.apache.cassandra.db.filter.SliceQueryFilter;
 import org.apache.cassandra.db.filter.NamesQueryFilter;
 import org.apache.cassandra.db.marshal.AbstractType;
-import org.apache.cassandra.db.marshal.UTF8Type;
 import org.apache.cassandra.dht.*;
 import org.apache.cassandra.dht.RandomPartitioner.BigIntegerToken;
 import org.apache.cassandra.gms.ApplicationState;
 import org.apache.cassandra.gms.Gossiper;
 import org.apache.cassandra.gms.VersionedValue;
-import org.apache.cassandra.io.sstable.Component;
 import org.apache.cassandra.io.sstable.Descriptor;
-import org.apache.cassandra.io.sstable.IndexSummary;
 import org.apache.cassandra.io.sstable.format.SSTableReader;
-import org.apache.cassandra.io.sstable.format.big.BigTableReader;
-import org.apache.cassandra.io.sstable.metadata.MetadataCollector;
-import org.apache.cassandra.io.sstable.metadata.MetadataType;
-import org.apache.cassandra.io.sstable.metadata.StatsMetadata;
-import org.apache.cassandra.io.util.*;
-import org.apache.cassandra.locator.SimpleStrategy;
 import org.apache.cassandra.service.StorageService;
-import org.apache.cassandra.utils.AlwaysPresentFilter;
 import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.cassandra.utils.CounterId;
-import org.apache.hadoop.fs.FileUtil;
 
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
@@ -80,7 +63,7 @@ public class Util
 return 
StorageService.getPartitioner().decorateKey(ByteBufferUtil.bytes(key));
 }
 
-public static DecoratedKey dk(String key, AbstractType type)
+public static DecoratedKey dk(String key, AbstractType type)
 {
 return 
StorageService.getPartitioner().decorateKey(type.fromString(key));
 }
@@ -386,8 +369,8 @@ public class Util
 
 public static void spinAssertEquals(Object expected, Supplier s, 
int timeoutInSeconds)
 {
-long now = System.currentTimeMillis();
-while (System.currentTimeMillis() - now < now + (1000 * 
timeoutInSeconds))
+long start = System.currentTimeMillis();
+while (System.currentTimeMillis() < start + (1000 * timeoutInSeconds))
 {
 if (s.get().equals(expected))
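
The diff above is cut off, but the bug sits entirely in the loop condition: 
the old code compared an elapsed duration (left-hand side) against an 
absolute epoch timestamp plus the timeout (right-hand side), so the condition 
was effectively always true, the spin never timed out on its own, and a 
mismatch hung the test until the harness-level timeout reported "Timeout 
occurred". A self-contained variant of the fixed method (the failure 
behaviour at the end is an assumption, since the truncated diff does not show 
how the real method exits):

{code}
import java.util.function.Supplier;

public final class SpinAssertSketch
{
    public static void spinAssertEquals(Object expected, Supplier<Object> s, int timeoutInSeconds)
    {
        long start = System.currentTimeMillis();
        // Fixed condition: compare "now" against start + timeout, both absolute.
        while (System.currentTimeMillis() < start + (1000L * timeoutInSeconds))
        {
            if (s.get().equals(expected))
                return;        // value converged before the timeout
            Thread.yield();    // avoid spinning hot while waiting
        }
        // Assumed failure behaviour for the sketch: fail loudly on timeout.
        throw new AssertionError("expected " + expected + " but was " + s.get());
    }
}
{code}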
  

[4/5] cassandra git commit: Fix CommitLogSegmentManagerTest

2016-11-08 Thread blerer
Fix CommitLogSegmentManagerTest

patch by Benjamin Lerer; reviewed by Joshua McKenzie for CASSANDRA-12283


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8ae31392
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8ae31392
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8ae31392

Branch: refs/heads/trunk
Commit: 8ae31392d66a9004b01bc40a267f0c8b34fc028f
Parents: 875c107
Author: Benjamin Lerer 
Authored: Tue Nov 8 18:19:33 2016 +0100
Committer: Benjamin Lerer 
Committed: Tue Nov 8 18:19:33 2016 +0100

--
 CHANGES.txt |  1 +
 .../commitlog/CommitLogSegmentManagerTest.java  | 93 
 2 files changed, 55 insertions(+), 39 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8ae31392/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 37e38e4..dd9088b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.10
+ * Fix CommitLogSegmentManagerTest (CASSANDRA-12283)
  * Fix cassandra-stress truncate option (CASSANDRA-12695)
  * Fix crossNode value when receiving messages (CASSANDRA-12791)
  * Don't load MX4J beans twice (CASSANDRA-12869)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8ae31392/test/unit/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerTest.java 
b/test/unit/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerTest.java
index af23821..cc31874 100644
--- 
a/test/unit/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerTest.java
+++ 
b/test/unit/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerTest.java
@@ -24,11 +24,9 @@ import java.nio.ByteBuffer;
 import java.util.ArrayList;
 import java.util.Random;
 import java.util.concurrent.Semaphore;
-import javax.naming.ConfigurationException;
 
 import com.google.common.collect.ImmutableMap;
 import org.junit.Assert;
-import org.junit.BeforeClass;
 import org.junit.Test;
 import org.junit.runner.RunWith;
 
@@ -46,23 +44,36 @@ import org.apache.cassandra.db.marshal.AsciiType;
 import org.apache.cassandra.db.marshal.BytesType;
 import org.apache.cassandra.schema.KeyspaceParams;
 import org.jboss.byteman.contrib.bmunit.BMRule;
+import org.jboss.byteman.contrib.bmunit.BMRules;
 import org.jboss.byteman.contrib.bmunit.BMUnitRunner;
 
 @RunWith(BMUnitRunner.class)
 public class CommitLogSegmentManagerTest
 {
 //Block commit log service from syncing
-@SuppressWarnings("unused")
-private static final Semaphore allowSync = new Semaphore(0);
+private static final Semaphore allowSync = new Semaphore(1);
 
 private static final String KEYSPACE1 = "CommitLogTest";
 private static final String STANDARD1 = "Standard1";
 private static final String STANDARD2 = "Standard2";
 
 private final static byte[] entropy = new byte[1024 * 256];
-@BeforeClass
-public static void defineSchema()
+
+@Test
+@BMRules(rules = {@BMRule(name = "Acquire Semaphore before sync",
+  targetClass = "AbstractCommitLogService$1",
+  targetMethod = "run",
+  targetLocation = "AT INVOKE 
org.apache.cassandra.db.commitlog.CommitLog.sync",
+  action = 
"org.apache.cassandra.db.commitlog.CommitLogSegmentManagerTest.allowSync.acquire()"),
+  @BMRule(name = "Release Semaphore after sync",
+  targetClass = "AbstractCommitLogService$1",
+  targetMethod = "run",
+  targetLocation = "AFTER INVOKE 
org.apache.cassandra.db.commitlog.CommitLog.sync",
+  action = 
"org.apache.cassandra.db.commitlog.CommitLogSegmentManagerTest.allowSync.release()")})
+public void testCompressedCommitLogBackpressure() throws Throwable
 {
+// Perform all initialization before making CommitLog.Sync blocking
+// Doing the initialization within the method guarantee that Byteman 
has performed its injections when we start
 new Random().nextBytes(entropy);
 DatabaseDescriptor.daemonInitialization();
 DatabaseDescriptor.setCommitLogCompression(new 
ParameterizedClass("LZ4Compressor", ImmutableMap.of()));
@@ -77,50 +88,54 @@ public class CommitLogSegmentManagerTest
 SchemaLoader.standardCFMD(KEYSPACE1, 
STANDARD2, 0, AsciiType.instance, BytesType.instance));
 
 CompactionManager.instance.disableAutoCompaction();
-}
 
-@Test
-@BMRule(name = 
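
The essence of the fix: schema setup moves from {{@BeforeClass}} into the 
test method itself, so Byteman's injections are already installed by the time 
the commit log starts syncing, and a {{Semaphore(1)}} gates 
{{CommitLog.sync}} so the test can stall syncing on demand. A plain-Java 
sketch of the gating idea (stand-in names; the real test does this through 
the two {{@BMRule}} injections shown above):

{code}
// Hypothetical stand-in for the Byteman-injected acquire()/release() gate.
import java.util.concurrent.Semaphore;

class SyncGateSketch
{
    static final Semaphore allowSync = new Semaphore(1);

    // Stands in for the commit log service's periodic sync loop.
    static void syncOnce(Runnable sync) throws InterruptedException
    {
        allowSync.acquire();      // BMRule "Acquire Semaphore before sync"
        try
        {
            sync.run();           // CommitLog.sync
        }
        finally
        {
            allowSync.release();  // BMRule "Release Semaphore after sync"
        }
    }

    public static void main(String[] args) throws Exception
    {
        allowSync.acquire();      // test drains the permit: sync now blocks,
                                  // so writes pile up and backpressure engages
        Thread syncer = new Thread(() -> {
            try { syncOnce(() -> System.out.println("synced")); }
            catch (InterruptedException ignored) { }
        });
        syncer.start();
        Thread.sleep(100);        // syncer is parked on acquire() here
        allowSync.release();      // un-gate: sync proceeds and the test can assert
        syncer.join();
    }
}
{code}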

[3/5] cassandra git commit: Merge branch cassandra-3.0 into cassandra-3.X

2016-11-08 Thread blerer
Merge branch cassandra-3.0 into cassandra-3.X


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/875c1076
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/875c1076
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/875c1076

Branch: refs/heads/trunk
Commit: 875c1076784932a935874dc880a18030cfd711b0
Parents: bc78a2a 472f616
Author: Benjamin Lerer 
Authored: Tue Nov 8 18:05:20 2016 +0100
Committer: Benjamin Lerer 
Committed: Tue Nov 8 18:05:20 2016 +0100

--
 CHANGES.txt  | 1 +
 test/unit/org/apache/cassandra/Util.java | 6 --
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/875c1076/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/875c1076/test/unit/org/apache/cassandra/Util.java
--



[5/5] cassandra git commit: Merge branch cassandra-3.X into trunk

2016-11-08 Thread blerer
Merge branch cassandra-3.X into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1eea75bc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1eea75bc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1eea75bc

Branch: refs/heads/trunk
Commit: 1eea75bcc4cc88365ea4ac1058a63ad93d139003
Parents: b6cb2ab 8ae3139
Author: Benjamin Lerer 
Authored: Tue Nov 8 18:22:53 2016 +0100
Committer: Benjamin Lerer 
Committed: Tue Nov 8 18:23:15 2016 +0100

--
 CHANGES.txt |  2 +
 test/unit/org/apache/cassandra/Util.java|  6 +-
 .../commitlog/CommitLogSegmentManagerTest.java  | 93 
 3 files changed, 60 insertions(+), 41 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1eea75bc/CHANGES.txt
--
diff --cc CHANGES.txt
index 0332f50,dd9088b..69a05c2
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,12 -1,5 +1,13 @@@
 +4.0
 + * Add column definition kind to dropped columns in schema (CASSANDRA-12705)
 + * Add (automate) Nodetool Documentation (CASSANDRA-12672)
 + * Update bundled cqlsh python driver to 3.7.0 (CASSANDRA-12736)
 + * Reject invalid replication settings when creating or altering a keyspace 
(CASSANDRA-12681)
 + * Clean up the SSTableReader#getScanner API wrt removal of RateLimiter 
(CASSANDRA-12422)
 +
 +
  3.10
+  * Fix CommitLogSegmentManagerTest (CASSANDRA-12283)
   * Fix cassandra-stress truncate option (CASSANDRA-12695)
   * Fix crossNode value when receiving messages (CASSANDRA-12791)
   * Don't load MX4J beans twice (CASSANDRA-12869)



cassandra git commit: Fix CommitLogSegmentManagerTest

2016-11-08 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.X 875c10767 -> 8ae31392d


Fix CommitLogSegmentManagerTest

patch by Benjamin Lerer; reviewed by Joshua McKenzie for CASSANDRA-12283


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8ae31392
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8ae31392
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8ae31392

Branch: refs/heads/cassandra-3.X
Commit: 8ae31392d66a9004b01bc40a267f0c8b34fc028f
Parents: 875c107
Author: Benjamin Lerer 
Authored: Tue Nov 8 18:19:33 2016 +0100
Committer: Benjamin Lerer 
Committed: Tue Nov 8 18:19:33 2016 +0100

--
 CHANGES.txt |  1 +
 .../commitlog/CommitLogSegmentManagerTest.java  | 93 
 2 files changed, 55 insertions(+), 39 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8ae31392/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 37e38e4..dd9088b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.10
+ * Fix CommitLogSegmentManagerTest (CASSANDRA-12283)
  * Fix cassandra-stress truncate option (CASSANDRA-12695)
  * Fix crossNode value when receiving messages (CASSANDRA-12791)
  * Don't load MX4J beans twice (CASSANDRA-12869)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8ae31392/test/unit/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerTest.java 
b/test/unit/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerTest.java
index af23821..cc31874 100644
--- 
a/test/unit/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerTest.java
+++ 
b/test/unit/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerTest.java
@@ -24,11 +24,9 @@ import java.nio.ByteBuffer;
 import java.util.ArrayList;
 import java.util.Random;
 import java.util.concurrent.Semaphore;
-import javax.naming.ConfigurationException;
 
 import com.google.common.collect.ImmutableMap;
 import org.junit.Assert;
-import org.junit.BeforeClass;
 import org.junit.Test;
 import org.junit.runner.RunWith;
 
@@ -46,23 +44,36 @@ import org.apache.cassandra.db.marshal.AsciiType;
 import org.apache.cassandra.db.marshal.BytesType;
 import org.apache.cassandra.schema.KeyspaceParams;
 import org.jboss.byteman.contrib.bmunit.BMRule;
+import org.jboss.byteman.contrib.bmunit.BMRules;
 import org.jboss.byteman.contrib.bmunit.BMUnitRunner;
 
 @RunWith(BMUnitRunner.class)
 public class CommitLogSegmentManagerTest
 {
 //Block commit log service from syncing
-@SuppressWarnings("unused")
-private static final Semaphore allowSync = new Semaphore(0);
+private static final Semaphore allowSync = new Semaphore(1);
 
 private static final String KEYSPACE1 = "CommitLogTest";
 private static final String STANDARD1 = "Standard1";
 private static final String STANDARD2 = "Standard2";
 
 private final static byte[] entropy = new byte[1024 * 256];
-@BeforeClass
-public static void defineSchema()
+
+@Test
+@BMRules(rules = {@BMRule(name = "Acquire Semaphore before sync",
+  targetClass = "AbstractCommitLogService$1",
+  targetMethod = "run",
+  targetLocation = "AT INVOKE 
org.apache.cassandra.db.commitlog.CommitLog.sync",
+  action = 
"org.apache.cassandra.db.commitlog.CommitLogSegmentManagerTest.allowSync.acquire()"),
+  @BMRule(name = "Release Semaphore after sync",
+  targetClass = "AbstractCommitLogService$1",
+  targetMethod = "run",
+  targetLocation = "AFTER INVOKE 
org.apache.cassandra.db.commitlog.CommitLog.sync",
+  action = 
"org.apache.cassandra.db.commitlog.CommitLogSegmentManagerTest.allowSync.release()")})
+public void testCompressedCommitLogBackpressure() throws Throwable
 {
+// Perform all initialization before making CommitLog.Sync blocking
+// Doing the initialization within the method guarantee that Byteman 
has performed its injections when we start
 new Random().nextBytes(entropy);
 DatabaseDescriptor.daemonInitialization();
 DatabaseDescriptor.setCommitLogCompression(new 
ParameterizedClass("LZ4Compressor", ImmutableMap.of()));
@@ -77,50 +88,54 @@ public class CommitLogSegmentManagerTest
 SchemaLoader.standardCFMD(KEYSPACE1, 
STANDARD2, 0, AsciiType.instance, BytesType.instance));
 
  

[3/3] cassandra git commit: Merge branch cassandra-3.0 into cassandra-3.X

2016-11-08 Thread blerer
Merge branch cassandra-3.0 into cassandra-3.X


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/875c1076
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/875c1076
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/875c1076

Branch: refs/heads/cassandra-3.X
Commit: 875c1076784932a935874dc880a18030cfd711b0
Parents: bc78a2a 472f616
Author: Benjamin Lerer 
Authored: Tue Nov 8 18:05:20 2016 +0100
Committer: Benjamin Lerer 
Committed: Tue Nov 8 18:05:20 2016 +0100

--
 CHANGES.txt  | 1 +
 test/unit/org/apache/cassandra/Util.java | 6 --
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/875c1076/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/875c1076/test/unit/org/apache/cassandra/Util.java
--



[2/3] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2016-11-08 Thread blerer
Merge branch cassandra-2.2 into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/472f6161
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/472f6161
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/472f6161

Branch: refs/heads/cassandra-3.X
Commit: 472f61613e4bf2a2f492f49300e6e3d06c5ad728
Parents: 92594d8 3de6e9d
Author: Benjamin Lerer 
Authored: Tue Nov 8 18:01:55 2016 +0100
Committer: Benjamin Lerer 
Committed: Tue Nov 8 18:01:55 2016 +0100

--
 CHANGES.txt  | 1 +
 test/unit/org/apache/cassandra/Util.java | 6 --
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/472f6161/CHANGES.txt
--
diff --cc CHANGES.txt
index d3043b8,b550885..cc5b003
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,35 -1,5 +1,36 @@@
 -2.2.9
 +3.0.10
 + * Batch with multiple conditional updates for the same partition causes 
AssertionError (CASSANDRA-12867)
 + * Make AbstractReplicationStrategy extendable from outside its package 
(CASSANDRA-12788)
 + * Fix CommitLogTest.testDeleteIfNotDirty (CASSANDRA-12854)
 + * Don't tell users to turn off consistent rangemovements during rebuild. 
(CASSANDRA-12296)
 + * Avoid deadlock due to materialized view lock contention (CASSANDRA-12689)
 + * Fix for KeyCacheCqlTest flakiness (CASSANDRA-12801)
 + * Include SSTable filename in compacting large row message (CASSANDRA-12384)
 + * Fix potential socket leak (CASSANDRA-12329, CASSANDRA-12330)
 + * Fix ViewTest.testCompaction (CASSANDRA-12789)
 + * Improve avg aggregate functions (CASSANDRA-12417)
 + * Preserve quoted reserved keyword column names in MV creation 
(CASSANDRA-11803)
 + * nodetool stopdaemon errors out (CASSANDRA-12646)
 + * Split materialized view mutations on build to prevent OOM (CASSANDRA-12268)
 + * mx4j does not work in 3.0.8 (CASSANDRA-12274)
 + * Abort cqlsh copy-from in case of no answer after prolonged period of time 
(CASSANDRA-12740)
 + * Avoid sstable corrupt exception due to dropped static column 
(CASSANDRA-12582)
 + * Make stress use client mode to avoid checking commit log size on startup 
(CASSANDRA-12478)
 + * Fix exceptions with new vnode allocation (CASSANDRA-12715)
 + * Unify drain and shutdown processes (CASSANDRA-12509)
 + * Fix NPE in ComponentOfSlice.isEQ() (CASSANDRA-12706)
 + * Fix failure in LogTransactionTest (CASSANDRA-12632)
 + * Fix potentially incomplete non-frozen UDT values when querying with the
 +   full primary key specified (CASSANDRA-12605)
 + * Skip writing MV mutations to commitlog on mutation.applyUnsafe() 
(CASSANDRA-11670)
 + * Establish consistent distinction between non-existing partition and NULL 
value for LWTs on static columns (CASSANDRA-12060)
 + * Extend ColumnIdentifier.internedInstances key to include the type that 
generated the byte buffer (CASSANDRA-12516)
 + * Backport CASSANDRA-10756 (race condition in NativeTransportService 
shutdown) (CASSANDRA-12472)
 + * If CF has no clustering columns, any row cache is full partition cache 
(CASSANDRA-12499)
 + * Correct log message for statistics of offheap memtable flush 
(CASSANDRA-12776)
 + * Explicitly set locale for string validation 
(CASSANDRA-12541,CASSANDRA-12542,CASSANDRA-12543,CASSANDRA-12545)
 +Merged from 2.2:
+  * Fix Util.spinAssertEquals (CASSANDRA-12283)
   * Fix potential NPE for compactionstats (CASSANDRA-12462)
   * Prepare legacy authenticate statement if credentials table initialised 
after node startup (CASSANDRA-12813)
   * Change cassandra.wait_for_tracing_events_timeout_secs default to 0 
(CASSANDRA-12754)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/472f6161/test/unit/org/apache/cassandra/Util.java
--
diff --cc test/unit/org/apache/cassandra/Util.java
index d04ca9b,f6b4771..e8b42bc
--- a/test/unit/org/apache/cassandra/Util.java
+++ b/test/unit/org/apache/cassandra/Util.java
@@@ -28,29 -27,21 +28,31 @@@ import java.nio.ByteBuffer
  import java.util.*;
  import java.util.concurrent.Callable;
  import java.util.concurrent.Future;
 +import java.util.concurrent.atomic.AtomicBoolean;
 +import java.util.function.Supplier;
  
 -import com.google.common.base.Supplier;
 +import com.google.common.base.Function;
 +import com.google.common.base.Preconditions;
 +import com.google.common.collect.Iterators;
 +import org.apache.commons.lang3.StringUtils;
 +
 +import org.apache.cassandra.config.CFMetaData;
 +import org.apache.cassandra.config.ColumnDefinition;
 +import org.apache.cassandra.config.DatabaseDescriptor;
 +import org.apache.cassandra.cql3.ColumnIdentifier;
+ 
  import org.apache.cassandra.db.*;
 -import 

[1/3] cassandra git commit: Fix Util.spinAssertEquals

2016-11-08 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.X bc78a2afa -> 875c10767


Fix Util.spinAssertEquals

patch by Benjamin Lerer; reviewed by Joshua McKenzie for CASSANDRA-12283


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3de6e9d3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3de6e9d3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3de6e9d3

Branch: refs/heads/cassandra-3.X
Commit: 3de6e9d327fc13cdb1b81cec918ab90a1a524fbe
Parents: cbebb29
Author: Benjamin Lerer 
Authored: Tue Nov 8 17:53:27 2016 +0100
Committer: Benjamin Lerer 
Committed: Tue Nov 8 17:53:27 2016 +0100

--
 CHANGES.txt  |  1 +
 test/unit/org/apache/cassandra/Util.java | 23 +++
 2 files changed, 4 insertions(+), 20 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3de6e9d3/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9d328ae..b550885 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.9
+ * Fix Util.spinAssertEquals (CASSANDRA-12283)
  * Fix potential NPE for compactionstats (CASSANDRA-12462)
  * Prepare legacy authenticate statement if credentials table initialised 
after node startup (CASSANDRA-12813)
  * Change cassandra.wait_for_tracing_events_timeout_secs default to 0 
(CASSANDRA-12754)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3de6e9d3/test/unit/org/apache/cassandra/Util.java
--
diff --git a/test/unit/org/apache/cassandra/Util.java 
b/test/unit/org/apache/cassandra/Util.java
index 91aa5fd..f6b4771 100644
--- a/test/unit/org/apache/cassandra/Util.java
+++ b/test/unit/org/apache/cassandra/Util.java
@@ -24,18 +24,12 @@ import java.io.*;
 import java.net.InetAddress;
 import java.net.UnknownHostException;
 import java.nio.ByteBuffer;
-import java.nio.channels.FileChannel;
 import java.util.*;
 import java.util.concurrent.Callable;
 import java.util.concurrent.Future;
 
 import com.google.common.base.Supplier;
-import com.google.common.collect.ImmutableMap;
-import com.google.common.collect.ImmutableSet;
 
-import org.apache.cassandra.cache.CachingOptions;
-import org.apache.cassandra.config.CFMetaData;
-import org.apache.cassandra.config.KSMetaData;
 import org.apache.cassandra.db.*;
 import org.apache.cassandra.db.composites.*;
 import org.apache.cassandra.db.compaction.AbstractCompactionTask;
@@ -46,27 +40,16 @@ import org.apache.cassandra.db.filter.QueryFilter;
 import org.apache.cassandra.db.filter.SliceQueryFilter;
 import org.apache.cassandra.db.filter.NamesQueryFilter;
 import org.apache.cassandra.db.marshal.AbstractType;
-import org.apache.cassandra.db.marshal.UTF8Type;
 import org.apache.cassandra.dht.*;
 import org.apache.cassandra.dht.RandomPartitioner.BigIntegerToken;
 import org.apache.cassandra.gms.ApplicationState;
 import org.apache.cassandra.gms.Gossiper;
 import org.apache.cassandra.gms.VersionedValue;
-import org.apache.cassandra.io.sstable.Component;
 import org.apache.cassandra.io.sstable.Descriptor;
-import org.apache.cassandra.io.sstable.IndexSummary;
 import org.apache.cassandra.io.sstable.format.SSTableReader;
-import org.apache.cassandra.io.sstable.format.big.BigTableReader;
-import org.apache.cassandra.io.sstable.metadata.MetadataCollector;
-import org.apache.cassandra.io.sstable.metadata.MetadataType;
-import org.apache.cassandra.io.sstable.metadata.StatsMetadata;
-import org.apache.cassandra.io.util.*;
-import org.apache.cassandra.locator.SimpleStrategy;
 import org.apache.cassandra.service.StorageService;
-import org.apache.cassandra.utils.AlwaysPresentFilter;
 import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.cassandra.utils.CounterId;
-import org.apache.hadoop.fs.FileUtil;
 
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
@@ -80,7 +63,7 @@ public class Util
 return 
StorageService.getPartitioner().decorateKey(ByteBufferUtil.bytes(key));
 }
 
-public static DecoratedKey dk(String key, AbstractType type)
+public static DecoratedKey dk(String key, AbstractType type)
 {
 return 
StorageService.getPartitioner().decorateKey(type.fromString(key));
 }
@@ -386,8 +369,8 @@ public class Util
 
 public static void spinAssertEquals(Object expected, Supplier s, 
int timeoutInSeconds)
 {
-long now = System.currentTimeMillis();
-while (System.currentTimeMillis() - now < now + (1000 * 
timeoutInSeconds))
+long start = System.currentTimeMillis();
+while (System.currentTimeMillis() < start + (1000 * timeoutInSeconds))
 {
 if 

[jira] [Commented] (CASSANDRA-12730) Thousands of empty SSTables created during repair - TMOF death

2016-11-08 Thread Benjamin Roth (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648134#comment-15648134
 ] 

Benjamin Roth commented on CASSANDRA-12730:
---

Thanks for that hint! I already stumbled across your fix in CASSANDRA-12580 
and thought I already had it, but indeed this is not the case! I will check 
this, but I probably won't be able to do so before next week. Thanks again!

> Thousands of empty SSTables created during repair - TMOF death
> --
>
> Key: CASSANDRA-12730
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12730
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Benjamin Roth
>Priority: Critical
>
> Last night I ran a repair on a keyspace with 7 tables and 4 MVs, each 
> containing a few hundred million records. After a few hours a node died 
> because of "too many open files".
> Normally one would just raise the limit, but: we already set this to 100k. 
> The problem was that the repair created roughly over 100k SSTables for a 
> certain MV. The strange thing is that these SSTables had almost no data 
> (like 53 bytes, 90 bytes, ...). Some of them (<5%) had a few hundred KB; 
> very few (<1%) had normal sizes like >= a few MB. I could understand that 
> SSTables queue up as they are flushed and not compacted in time, but then 
> they should have at least a few MB (depending on config and available mem), 
> right?
> Of course then the node runs out of FDs, and I guess it is not a good idea 
> to raise the limit even higher, as I expect this would just create even 
> more empty SSTables before eventually dying.
> Only 1 CF (an MV) was affected. All other CFs (also MVs) behave sanely. 
> Empty SSTables have been created evenly over time, 100-150 every minute. 
> Among the empty SSTables there are also tables that look normal, having a 
> few MBs.
> I didn't see any errors or exceptions in the logs until TMOF occurred, just 
> tons of streams due to the repair (which I actually run via cs-reaper as 
> subrange, full repairs).
> After having restarted that node (and no more repair running), the number 
> of SSTables went down again as they are compacted away slowly.
> According to [~zznate] this issue may relate to CASSANDRA-10342 + 
> CASSANDRA-8641



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-12877) SASI index throwing AssertionError on creation/flush

2016-11-08 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov reopened CASSANDRA-12877:
-

> SASI index throwing AssertionError on creation/flush
> 
>
> Key: CASSANDRA-12877
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12877
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
> Environment: 3.9 and 3.10 tested on both linux and osx
>Reporter: Voytek Jarnot
>Assignee: Alex Petrov
>
> Possibly a 3.10 regression?  The exact test shown below does not error in 3.9.
> I built and installed a 3.10 snapshot (built 04-Nov-2016) to get around 
> CASSANDRA-11670, CASSANDRA-12689, and CASSANDRA-12223 which are holding me 
> back when using 3.9.
> Now I'm able to make nodetool flush (or a scheduled flush) produce an 
> unhandled error easily with a SASI:
> {code}
> CREATE KEYSPACE vjtest WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': '1'};
> use vjtest ;
> create table tester(id1 text, id2 text, id3 text, val1 text, primary 
> key((id1, id2), id3));
> create custom index tester_idx_val1 on tester(val1) using 
> 'org.apache.cassandra.index.sasi.SASIIndex';
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','1-3','asdf');
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','2-3','asdf');
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','3-3','asdf');
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','4-3','asdf');
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','5-3','asdf');
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','6-3','asdf');
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','7-3','asdf');
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','8-3','asdf');
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','9-3','asdf');
> {code}
> Not enough going on here to trigger a flush, so following a manual {{nodetool 
> flush vjtest}} I get the following in {{system.log}}:
> {code}
> INFO  [MemtableFlushWriter:3] 2016-11-04 22:19:35,412 
> PerSSTableIndexWriter.java:284 - Scheduling index flush to 
> /mydir/apache-cassandra-3.10-SNAPSHOT/data/data/vjtest/tester-6f1fdff0a30611e692c087673c5ef8d4/mc-1-big-SI_tester_idx_val1.db
> INFO  [SASI-Memtable:1] 2016-11-04 22:19:35,447 
> PerSSTableIndexWriter.java:335 - Index flush to 
> /mydir/apache-cassandra-3.10-SNAPSHOT/data/data/vjtest/tester-6f1fdff0a30611e692c087673c5ef8d4/mc-1-big-SI_tester_idx_val1.db
>  took 16 ms.
> ERROR [SASI-Memtable:1] 2016-11-04 22:19:35,449 CassandraDaemon.java:229 - 
> Exception in thread Thread[SASI-Memtable:1,5,RMI Runtime]
> java.lang.AssertionError: cannot have more than 8 overflow collisions per 
> leaf, but had: 9
> at 
> org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder$Leaf.createOverflowEntry(AbstractTokenTreeBuilder.java:357)
>  ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
> at 
> org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder$Leaf.createEntry(AbstractTokenTreeBuilder.java:346)
>  ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
> at 
> org.apache.cassandra.index.sasi.disk.DynamicTokenTreeBuilder$DynamicLeaf.serializeData(DynamicTokenTreeBuilder.java:180)
>  ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
> at 
> org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder$Leaf.serialize(AbstractTokenTreeBuilder.java:306)
>  ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
> at 
> org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder.write(AbstractTokenTreeBuilder.java:90)
>  ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder$MutableDataBlock.flushAndClear(OnDiskIndexBuilder.java:629)
>  ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder$MutableLevel.flush(OnDiskIndexBuilder.java:446)
>  ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder$MutableLevel.finalFlush(OnDiskIndexBuilder.java:451)
>  ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder.finish(OnDiskIndexBuilder.java:296)
>  ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder.finish(OnDiskIndexBuilder.java:258)
>  ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder.finish(OnDiskIndexBuilder.java:241)
>  ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
> at 
> org.apache.cassandra.index.sasi.disk.PerSSTableIndexWriter$Index.lambda$scheduleSegmentFlush$0(PerSSTableIndexWriter.java:267)
>  

[2/2] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2016-11-08 Thread blerer
Merge branch cassandra-2.2 into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/472f6161
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/472f6161
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/472f6161

Branch: refs/heads/cassandra-3.0
Commit: 472f61613e4bf2a2f492f49300e6e3d06c5ad728
Parents: 92594d8 3de6e9d
Author: Benjamin Lerer 
Authored: Tue Nov 8 18:01:55 2016 +0100
Committer: Benjamin Lerer 
Committed: Tue Nov 8 18:01:55 2016 +0100

--
 CHANGES.txt  | 1 +
 test/unit/org/apache/cassandra/Util.java | 6 --
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/472f6161/CHANGES.txt
--
diff --cc CHANGES.txt
index d3043b8,b550885..cc5b003
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,35 -1,5 +1,36 @@@
 -2.2.9
 +3.0.10
 + * Batch with multiple conditional updates for the same partition causes 
AssertionError (CASSANDRA-12867)
 + * Make AbstractReplicationStrategy extendable from outside its package 
(CASSANDRA-12788)
 + * Fix CommitLogTest.testDeleteIfNotDirty (CASSANDRA-12854)
 + * Don't tell users to turn off consistent rangemovements during rebuild. 
(CASSANDRA-12296)
 + * Avoid deadlock due to materialized view lock contention (CASSANDRA-12689)
 + * Fix for KeyCacheCqlTest flakiness (CASSANDRA-12801)
 + * Include SSTable filename in compacting large row message (CASSANDRA-12384)
 + * Fix potential socket leak (CASSANDRA-12329, CASSANDRA-12330)
 + * Fix ViewTest.testCompaction (CASSANDRA-12789)
 + * Improve avg aggregate functions (CASSANDRA-12417)
 + * Preserve quoted reserved keyword column names in MV creation 
(CASSANDRA-11803)
 + * nodetool stopdaemon errors out (CASSANDRA-12646)
 + * Split materialized view mutations on build to prevent OOM (CASSANDRA-12268)
 + * mx4j does not work in 3.0.8 (CASSANDRA-12274)
 + * Abort cqlsh copy-from in case of no answer after prolonged period of time 
(CASSANDRA-12740)
 + * Avoid sstable corrupt exception due to dropped static column 
(CASSANDRA-12582)
 + * Make stress use client mode to avoid checking commit log size on startup 
(CASSANDRA-12478)
 + * Fix exceptions with new vnode allocation (CASSANDRA-12715)
 + * Unify drain and shutdown processes (CASSANDRA-12509)
 + * Fix NPE in ComponentOfSlice.isEQ() (CASSANDRA-12706)
 + * Fix failure in LogTransactionTest (CASSANDRA-12632)
 + * Fix potentially incomplete non-frozen UDT values when querying with the
 +   full primary key specified (CASSANDRA-12605)
 + * Skip writing MV mutations to commitlog on mutation.applyUnsafe() 
(CASSANDRA-11670)
 + * Establish consistent distinction between non-existing partition and NULL 
value for LWTs on static columns (CASSANDRA-12060)
 + * Extend ColumnIdentifier.internedInstances key to include the type that 
generated the byte buffer (CASSANDRA-12516)
 + * Backport CASSANDRA-10756 (race condition in NativeTransportService 
shutdown) (CASSANDRA-12472)
 + * If CF has no clustering columns, any row cache is full partition cache 
(CASSANDRA-12499)
 + * Correct log message for statistics of offheap memtable flush 
(CASSANDRA-12776)
 + * Explicitly set locale for string validation 
(CASSANDRA-12541,CASSANDRA-12542,CASSANDRA-12543,CASSANDRA-12545)
 +Merged from 2.2:
+  * Fix Util.spinAssertEquals (CASSANDRA-12283)
   * Fix potential NPE for compactionstats (CASSANDRA-12462)
   * Prepare legacy authenticate statement if credentials table initialised 
after node startup (CASSANDRA-12813)
   * Change cassandra.wait_for_tracing_events_timeout_secs default to 0 
(CASSANDRA-12754)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/472f6161/test/unit/org/apache/cassandra/Util.java
--
diff --cc test/unit/org/apache/cassandra/Util.java
index d04ca9b,f6b4771..e8b42bc
--- a/test/unit/org/apache/cassandra/Util.java
+++ b/test/unit/org/apache/cassandra/Util.java
@@@ -28,29 -27,21 +28,31 @@@ import java.nio.ByteBuffer
  import java.util.*;
  import java.util.concurrent.Callable;
  import java.util.concurrent.Future;
 +import java.util.concurrent.atomic.AtomicBoolean;
 +import java.util.function.Supplier;
  
 -import com.google.common.base.Supplier;
 +import com.google.common.base.Function;
 +import com.google.common.base.Preconditions;
 +import com.google.common.collect.Iterators;
 +import org.apache.commons.lang3.StringUtils;
 +
 +import org.apache.cassandra.config.CFMetaData;
 +import org.apache.cassandra.config.ColumnDefinition;
 +import org.apache.cassandra.config.DatabaseDescriptor;
 +import org.apache.cassandra.cql3.ColumnIdentifier;
+ 
  import org.apache.cassandra.db.*;
 -import 

[1/2] cassandra git commit: Fix Util.spinAssertEquals

2016-11-08 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 92594d8b8 -> 472f61613


Fix Util.spinAssertEquals

patch by Benjamin Lerer; reviewed by Joshua McKenzie for CASSANDRA-12283


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3de6e9d3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3de6e9d3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3de6e9d3

Branch: refs/heads/cassandra-3.0
Commit: 3de6e9d327fc13cdb1b81cec918ab90a1a524fbe
Parents: cbebb29
Author: Benjamin Lerer 
Authored: Tue Nov 8 17:53:27 2016 +0100
Committer: Benjamin Lerer 
Committed: Tue Nov 8 17:53:27 2016 +0100

--
 CHANGES.txt  |  1 +
 test/unit/org/apache/cassandra/Util.java | 23 +++
 2 files changed, 4 insertions(+), 20 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3de6e9d3/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9d328ae..b550885 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.9
+ * Fix Util.spinAssertEquals (CASSANDRA-12283)
  * Fix potential NPE for compactionstats (CASSANDRA-12462)
  * Prepare legacy authenticate statement if credentials table initialised 
after node startup (CASSANDRA-12813)
  * Change cassandra.wait_for_tracing_events_timeout_secs default to 0 
(CASSANDRA-12754)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3de6e9d3/test/unit/org/apache/cassandra/Util.java
--
diff --git a/test/unit/org/apache/cassandra/Util.java 
b/test/unit/org/apache/cassandra/Util.java
index 91aa5fd..f6b4771 100644
--- a/test/unit/org/apache/cassandra/Util.java
+++ b/test/unit/org/apache/cassandra/Util.java
@@ -24,18 +24,12 @@ import java.io.*;
 import java.net.InetAddress;
 import java.net.UnknownHostException;
 import java.nio.ByteBuffer;
-import java.nio.channels.FileChannel;
 import java.util.*;
 import java.util.concurrent.Callable;
 import java.util.concurrent.Future;
 
 import com.google.common.base.Supplier;
-import com.google.common.collect.ImmutableMap;
-import com.google.common.collect.ImmutableSet;
 
-import org.apache.cassandra.cache.CachingOptions;
-import org.apache.cassandra.config.CFMetaData;
-import org.apache.cassandra.config.KSMetaData;
 import org.apache.cassandra.db.*;
 import org.apache.cassandra.db.composites.*;
 import org.apache.cassandra.db.compaction.AbstractCompactionTask;
@@ -46,27 +40,16 @@ import org.apache.cassandra.db.filter.QueryFilter;
 import org.apache.cassandra.db.filter.SliceQueryFilter;
 import org.apache.cassandra.db.filter.NamesQueryFilter;
 import org.apache.cassandra.db.marshal.AbstractType;
-import org.apache.cassandra.db.marshal.UTF8Type;
 import org.apache.cassandra.dht.*;
 import org.apache.cassandra.dht.RandomPartitioner.BigIntegerToken;
 import org.apache.cassandra.gms.ApplicationState;
 import org.apache.cassandra.gms.Gossiper;
 import org.apache.cassandra.gms.VersionedValue;
-import org.apache.cassandra.io.sstable.Component;
 import org.apache.cassandra.io.sstable.Descriptor;
-import org.apache.cassandra.io.sstable.IndexSummary;
 import org.apache.cassandra.io.sstable.format.SSTableReader;
-import org.apache.cassandra.io.sstable.format.big.BigTableReader;
-import org.apache.cassandra.io.sstable.metadata.MetadataCollector;
-import org.apache.cassandra.io.sstable.metadata.MetadataType;
-import org.apache.cassandra.io.sstable.metadata.StatsMetadata;
-import org.apache.cassandra.io.util.*;
-import org.apache.cassandra.locator.SimpleStrategy;
 import org.apache.cassandra.service.StorageService;
-import org.apache.cassandra.utils.AlwaysPresentFilter;
 import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.cassandra.utils.CounterId;
-import org.apache.hadoop.fs.FileUtil;
 
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
@@ -80,7 +63,7 @@ public class Util
 return 
StorageService.getPartitioner().decorateKey(ByteBufferUtil.bytes(key));
 }
 
-public static DecoratedKey dk(String key, AbstractType type)
+public static DecoratedKey dk(String key, AbstractType type)
 {
 return 
StorageService.getPartitioner().decorateKey(type.fromString(key));
 }
@@ -386,8 +369,8 @@ public class Util
 
 public static void spinAssertEquals(Object expected, Supplier s, 
int timeoutInSeconds)
 {
-long now = System.currentTimeMillis();
-while (System.currentTimeMillis() - now < now + (1000 * 
timeoutInSeconds))
+long start = System.currentTimeMillis();
+while (System.currentTimeMillis() < start + (1000 * timeoutInSeconds))
 {
 if 

[jira] [Comment Edited] (CASSANDRA-12730) Thousands of empty SSTables created during repair - TMOF death

2016-11-08 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648092#comment-15648092
 ] 

Paulo Motta edited comment on CASSANDRA-12730 at 11/8/16 4:59 PM:
--

bq.  Once the mutations have been applied to the memtable and flushed to 
disk, the resulting sstables will not be flagged with a repairedAt timestamp. 
The next repair process will pick up from there and "repair" the flushed 
sstables again back to the other nodes, as the rows can't be found in the 
unrepaired set there. This will go back and forth, and each repair 
inconsistency found will probably further aggravate the issue.

Good catch [~spo...@gmail.com], I just noticed this yesterday while reviewing 
CASSANDRA-10446 and was going to open a ticket. At the time I didn't think it 
was so critical, because I thought the source node would also not mark the 
data as repaired, so in the next repair round things would be fixed; but the 
fact that at least one replica will mark data as repaired while others will 
not makes mismatches on MV tables bounce forever when running incremental 
repair. Since it seems this is not what is causing the explosion of sstables 
here, would you mind creating a ticket for that and posting your 
findings/repro steps? Although we could probably reuse CASSANDRA-12489, it's 
unclear to me if it's the same issue, since that one affects non-incremental 
subrange repair.

[~brstgt] It seems your fork does not contain CASSANDRA-12580, which should 
help a lot with overstreaming, so could you try applying that and check 
whether it at least mitigates this issue?

bq. Maybe this is offtopic to this issue but for my understanding it sounds 
like doing incremental repairs with MVs always produced crap. In order to 
guarantee a consistent "repairedAt" state, you probably need something like a 
sandboxed write path that is separated from the regular write path, to be 
sure that streaming mutations and regular mutations are completely separated, 
and when streaming has finished, you can flush all tables on all nodes and 
flag the newly created SSTables as repaired. But that again sounds like a 
very complex change.

Perhaps this is something that can be addressed by [~bdeggleston] on 
CASSANDRA-9143, but it will probably be a bit more involved, since to be 
properly fixed it needs the memtable to distinguish between repaired and 
non-repaired mutations, and we don't have this infrastructure now. Perhaps a 
simpler approach would be to skip anti-compaction altogether when there are 
mismatches on an MV table, but we should probably move this discussion to 
another ticket since this is a different issue.


was (Author: pauloricardomg):
bq. Once the mutations have been applied to the memtable and flushed to disk, 
the resulting sstables will not be flagged with a repairedAt timestamp. The 
next repair process will pick up from there and "repair" the flushed sstables 
back to the other nodes again, as the rows can't be found in the unrepaired 
set there. This will go back and forth, and each repair inconsistency found 
will probably further aggravate the issue.

Good catch [~spo...@gmail.com]. I noticed this yesterday while reviewing 
CASSANDRA-10446 and was going to open a ticket, but at the time I didn't 
think it was that critical: I assumed the source node would also leave the 
data unmarked, so the next repair round would fix things. However, since at 
least one replica will mark the data as repaired while the others will not, 
mismatches on MV tables will bounce back and forth forever under incremental 
repair. Since this does not seem to be what is causing the explosion of 
sstables here, would you mind creating a ticket for it and posting your 
findings/repro steps? Although we could probably reuse CASSANDRA-12489, it's 
unclear to me whether it's the same issue, since that one affects 
non-incremental subrange repair.

[~brstgt] It seems your fork does not contain CASSANDRA-12580, which should 
help a lot with overstreaming, so could you try applying that and check 
whether it at least mitigates this issue?

bq. Maybe this is off-topic for this issue, but for my understanding it 
sounds like doing incremental repairs with MVs always produced crap. In order 
to guarantee a consistent "repairedAt" state, you probably need something 
like a sandboxed write path that is separated from the regular write path, to 
be sure that streaming mutations and regular mutations are completely 
separated. And when streaming has finished, you can flush all tables on all 
nodes and flag the newly created SSTables as repaired. But that again sounds 
like a very complex change.

Perhaps this is something that can be addressed by [~bdeggleston] on 
CASSANDRA-10446, but it will probably be a bit more involved, since to be 
fixed properly it needs the memtable to distinguish between repaired and 
non-repaired mutations, and we don't have that infrastructure today. Perhaps 
a simpler approach would be to skip anti-compaction altogether when there are 
mismatches on MV tables, but we should probably move this discussion to 
another ticket, since this is a different issue.

[jira] [Commented] (CASSANDRA-12730) Thousands of empty SSTables created during repair - TMOF death

2016-11-08 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648092#comment-15648092
 ] 

Paulo Motta commented on CASSANDRA-12730:
-

bq. Once the mutations have been applied to the memtable and flushed to disk, 
the resulting sstables will not be flagged with a repairedAt timestamp. The 
next repair process will pick up from there and "repair" the flushed sstables 
back to the other nodes again, as the rows can't be found in the unrepaired 
set there. This will go back and forth, and each repair inconsistency found 
will probably further aggravate the issue.

Good catch [~spo...@gmail.com]. I noticed this yesterday while reviewing 
CASSANDRA-10446 and was going to open a ticket, but at the time I didn't 
think it was that critical: I assumed the source node would also leave the 
data unmarked, so the next repair round would fix things. However, since at 
least one replica will mark the data as repaired while the others will not, 
mismatches on MV tables will bounce back and forth forever under incremental 
repair. Since this does not seem to be what is causing the explosion of 
sstables here, would you mind creating a ticket for it and posting your 
findings/repro steps? Although we could probably reuse CASSANDRA-12489, it's 
unclear to me whether it's the same issue, since that one affects 
non-incremental subrange repair.

[~brstgt] It seems your fork does not contain CASSANDRA-12580, which should 
help a lot with overstreaming, so could you try applying that and check 
whether it at least mitigates this issue?

bq. Maybe this is off-topic for this issue, but for my understanding it 
sounds like doing incremental repairs with MVs always produced crap. In order 
to guarantee a consistent "repairedAt" state, you probably need something 
like a sandboxed write path that is separated from the regular write path, to 
be sure that streaming mutations and regular mutations are completely 
separated. And when streaming has finished, you can flush all tables on all 
nodes and flag the newly created SSTables as repaired. But that again sounds 
like a very complex change.

Perhaps this is something that can be addressed by [~bdeggleston] on 
CASSANDRA-10446, but it will probably be a bit more involved, since to be 
fixed properly it needs the memtable to distinguish between repaired and 
non-repaired mutations, and we don't have that infrastructure today. Perhaps 
a simpler approach would be to skip anti-compaction altogether when there are 
mismatches on MV tables, but we should probably move this discussion to 
another ticket, since this is a different issue.

> Thousands of empty SSTables created during repair - TMOF death
> --
>
> Key: CASSANDRA-12730
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12730
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Benjamin Roth
>Priority: Critical
>
> Last night I ran a repair on a keyspace with 7 tables and 4 MVs, each 
> containing a few hundred million records. After a few hours a node died 
> because of "too many open files".
> Normally one would just raise the limit, but: we already set this to 100k. 
> The problem was that the repair created roughly over 100k SSTables for a 
> certain MV. The strange thing is that these SSTables had almost no data 
> (like 53 bytes, 90 bytes, ...). Some of them (<5%) had a few 100 KB, and 
> very few (<1%) had normal sizes like >= a few MB. I could understand that 
> SSTables queue up as they are flushed and not compacted in time, but then 
> they should have at least a few MB (depending on config and available 
> memory), right?
> Of course the node then runs out of FDs, and I guess it is not a good idea 
> to raise the limit even higher, as I expect that this would just create 
> even more empty SSTables before dying eventually.
> Only 1 CF (an MV) was affected. All other CFs (also MVs) behave sanely. 
> Empty SSTables have been created evenly over time, 100-150 every minute. 
> Among the empty SSTables there are also tables that look normal, having a 
> few MBs.
> I didn't see any errors or exceptions in the logs until TMOF occurred. Just 
> tons of streams due to the repair (which I actually run via cs-reaper as 
> subrange, full repairs).
> After having restarted that node (and no more repairs running), the number 
> of SSTables went down again as they were slowly compacted away.
> According to [~zznate] this issue may relate to CASSANDRA-10342 + 
> CASSANDRA-8641



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Fix Util.spinAssertEquals

2016-11-08 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 cbebb29ad -> 3de6e9d32


Fix Util.spinAssertEquals

patch by Benjamin Lerer; reviewed by Joshua McKenzie for CASSANDRA-12283


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3de6e9d3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3de6e9d3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3de6e9d3

Branch: refs/heads/cassandra-2.2
Commit: 3de6e9d327fc13cdb1b81cec918ab90a1a524fbe
Parents: cbebb29
Author: Benjamin Lerer 
Authored: Tue Nov 8 17:53:27 2016 +0100
Committer: Benjamin Lerer 
Committed: Tue Nov 8 17:53:27 2016 +0100

--
 CHANGES.txt  |  1 +
 test/unit/org/apache/cassandra/Util.java | 23 +++
 2 files changed, 4 insertions(+), 20 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3de6e9d3/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9d328ae..b550885 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.9
+ * Fix Util.spinAssertEquals (CASSANDRA-12283)
  * Fix potential NPE for compactionstats (CASSANDRA-12462)
  * Prepare legacy authenticate statement if credentials table initialised 
after node startup (CASSANDRA-12813)
  * Change cassandra.wait_for_tracing_events_timeout_secs default to 0 
(CASSANDRA-12754)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3de6e9d3/test/unit/org/apache/cassandra/Util.java
--
diff --git a/test/unit/org/apache/cassandra/Util.java 
b/test/unit/org/apache/cassandra/Util.java
index 91aa5fd..f6b4771 100644
--- a/test/unit/org/apache/cassandra/Util.java
+++ b/test/unit/org/apache/cassandra/Util.java
@@ -24,18 +24,12 @@ import java.io.*;
 import java.net.InetAddress;
 import java.net.UnknownHostException;
 import java.nio.ByteBuffer;
-import java.nio.channels.FileChannel;
 import java.util.*;
 import java.util.concurrent.Callable;
 import java.util.concurrent.Future;
 
 import com.google.common.base.Supplier;
-import com.google.common.collect.ImmutableMap;
-import com.google.common.collect.ImmutableSet;
 
-import org.apache.cassandra.cache.CachingOptions;
-import org.apache.cassandra.config.CFMetaData;
-import org.apache.cassandra.config.KSMetaData;
 import org.apache.cassandra.db.*;
 import org.apache.cassandra.db.composites.*;
 import org.apache.cassandra.db.compaction.AbstractCompactionTask;
@@ -46,27 +40,16 @@ import org.apache.cassandra.db.filter.QueryFilter;
 import org.apache.cassandra.db.filter.SliceQueryFilter;
 import org.apache.cassandra.db.filter.NamesQueryFilter;
 import org.apache.cassandra.db.marshal.AbstractType;
-import org.apache.cassandra.db.marshal.UTF8Type;
 import org.apache.cassandra.dht.*;
 import org.apache.cassandra.dht.RandomPartitioner.BigIntegerToken;
 import org.apache.cassandra.gms.ApplicationState;
 import org.apache.cassandra.gms.Gossiper;
 import org.apache.cassandra.gms.VersionedValue;
-import org.apache.cassandra.io.sstable.Component;
 import org.apache.cassandra.io.sstable.Descriptor;
-import org.apache.cassandra.io.sstable.IndexSummary;
 import org.apache.cassandra.io.sstable.format.SSTableReader;
-import org.apache.cassandra.io.sstable.format.big.BigTableReader;
-import org.apache.cassandra.io.sstable.metadata.MetadataCollector;
-import org.apache.cassandra.io.sstable.metadata.MetadataType;
-import org.apache.cassandra.io.sstable.metadata.StatsMetadata;
-import org.apache.cassandra.io.util.*;
-import org.apache.cassandra.locator.SimpleStrategy;
 import org.apache.cassandra.service.StorageService;
-import org.apache.cassandra.utils.AlwaysPresentFilter;
 import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.cassandra.utils.CounterId;
-import org.apache.hadoop.fs.FileUtil;
 
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
@@ -80,7 +63,7 @@ public class Util
 return StorageService.getPartitioner().decorateKey(ByteBufferUtil.bytes(key));
 }
 
-public static DecoratedKey dk(String key, AbstractType type)
+public static DecoratedKey dk(String key, AbstractType<?> type)
 {
 return StorageService.getPartitioner().decorateKey(type.fromString(key));
 }
@@ -386,8 +369,8 @@ public class Util
 
 public static void spinAssertEquals(Object expected, Supplier<Object> s, int timeoutInSeconds)
 {
-long now = System.currentTimeMillis();
-while (System.currentTimeMillis() - now < now + (1000 * timeoutInSeconds))
+long start = System.currentTimeMillis();
+while (System.currentTimeMillis() < start + (1000 * timeoutInSeconds))
 {
 if 
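(The diff above is truncated.) For clarity, a minimal sketch of the fixed 
loop; only the two changed lines are from the actual patch, while the loop 
body is reconstructed from context and may differ from the real Util.java. 
The old condition compared an elapsed duration against an absolute epoch 
timestamp ("elapsed < now + timeout"), which stays true for decades, so a 
failing assertion would spin essentially forever instead of timing out after 
timeoutInSeconds:

{code:java}
// Sketch only: the loop body is an assumption, not the committed code.
public static void spinAssertEquals(Object expected, Supplier<Object> s, int timeoutInSeconds)
{
    long start = System.currentTimeMillis();
    // Spin until the supplier returns the expected value or the deadline passes.
    while (System.currentTimeMillis() < start + (1000L * timeoutInSeconds))
    {
        if (expected.equals(s.get()))
            break;
        Thread.yield();
    }
    // The final check raises the assertion failure if we timed out.
    assertEquals(expected, s.get());
}
{code}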

[jira] [Commented] (CASSANDRA-12730) Thousands of empty SSTables created during repair - TMOF death

2016-11-08 Thread Benjamin Roth (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648065#comment-15648065
 ] 

Benjamin Roth commented on CASSANDRA-12730:
---

Maybe this is off-topic for this issue, but for my understanding it sounds 
like doing incremental repairs with MVs always produced crap. In order to 
guarantee a consistent "repairedAt" state, you probably need something like a 
sandboxed write path that is separated from the regular write path, to be 
sure that streaming mutations and regular mutations are completely separated. 
And when streaming has finished, you can flush all tables on all nodes and 
flag the newly created SSTables as repaired. But that again sounds like a 
very complex change.

If the write path stays local, say you only have views with the same 
partition key, that process could be simplified a bit, e.g. by streaming the 
SSTable directly to disk (as if there were no view) and then building the 
view from that single SSTable. But that would require an "offline" view build 
that does not go through the regular write path. Then you could also flag the 
view SSTable as repaired. Maybe that sounds easier than it actually is, and 
maybe I missed something - I am quite new to the Cassandra codebase. Just 
adding my thoughts.

> Thousands of empty SSTables created during repair - TMOF death
> --
>
> Key: CASSANDRA-12730
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12730
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Benjamin Roth
>Priority: Critical
>
> Last night I ran a repair on a keyspace with 7 tables and 4 MVs, each 
> containing a few hundred million records. After a few hours a node died 
> because of "too many open files".
> Normally one would just raise the limit, but: we already set this to 100k. 
> The problem was that the repair created roughly over 100k SSTables for a 
> certain MV. The strange thing is that these SSTables had almost no data 
> (like 53 bytes, 90 bytes, ...). Some of them (<5%) had a few 100 KB, and 
> very few (<1%) had normal sizes like >= a few MB. I could understand that 
> SSTables queue up as they are flushed and not compacted in time, but then 
> they should have at least a few MB (depending on config and available 
> memory), right?
> Of course the node then runs out of FDs, and I guess it is not a good idea 
> to raise the limit even higher, as I expect that this would just create 
> even more empty SSTables before dying eventually.
> Only 1 CF (an MV) was affected. All other CFs (also MVs) behave sanely. 
> Empty SSTables have been created evenly over time, 100-150 every minute. 
> Among the empty SSTables there are also tables that look normal, having a 
> few MBs.
> I didn't see any errors or exceptions in the logs until TMOF occurred. Just 
> tons of streams due to the repair (which I actually run via cs-reaper as 
> subrange, full repairs).
> After having restarted that node (and no more repairs running), the number 
> of SSTables went down again as they were slowly compacted away.
> According to [~zznate] this issue may relate to CASSANDRA-10342 + 
> CASSANDRA-8641



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12857) Upgrade procedure between 2.1.x and 3.0.x is broken

2016-11-08 Thread Alexander Yasnogor (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Yasnogor updated CASSANDRA-12857:
---
Attachment: cassandra.schema

> Upgrade procedure between 2.1.x and 3.0.x is broken
> ---
>
> Key: CASSANDRA-12857
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12857
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alexander Yasnogor
>Priority: Critical
> Attachments: cassandra.schema
>
>
> It is not possible to safely do an in-place Cassandra upgrade from 2.1.14 
> to 3.0.9.
> Distribution: deb packages from the DataStax community repo.
> The upgrade was performed according to the procedure in this documentation: 
> https://docs.datastax.com/en/upgrade/doc/upgrade/cassandra/upgrdCassandraDetails.html
> Potential reason: the upgrade procedure creates a corrupted system_schema, 
> and this keyspace gets populated across the cluster and kills it.
> We started with one datacenter which contains 19 nodes divided into two 
> racks.
> The first rack was successfully upgraded, and nodetool describecluster 
> reported two schema versions: one for upgraded nodes, another for 
> non-upgraded nodes.
> On starting the new version on the first node from the second rack:
> {code:java}
> INFO  [main] 2016-10-25 13:06:12,103 LegacySchemaMigrator.java:87 - Moving 11 
> keyspaces from legacy schema tables to the new schema keyspace (system_schema)
> INFO  [main] 2016-10-25 13:06:12,104 LegacySchemaMigrator.java:148 - 
> Migrating keyspace 
> org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@7505e6ac
> INFO  [main] 2016-10-25 13:06:12,200 LegacySchemaMigrator.java:148 - 
> Migrating keyspace 
> org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@64414574
> INFO  [main] 2016-10-25 13:06:12,204 LegacySchemaMigrator.java:148 - 
> Migrating keyspace 
> org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@3f2c5f45
> INFO  [main] 2016-10-25 13:06:12,207 LegacySchemaMigrator.java:148 - 
> Migrating keyspace 
> org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@2bc2d64d
> INFO  [main] 2016-10-25 13:06:12,301 LegacySchemaMigrator.java:148 - 
> Migrating keyspace 
> org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@77343846
> INFO  [main] 2016-10-25 13:06:12,305 LegacySchemaMigrator.java:148 - 
> Migrating keyspace 
> org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@19b0b931
> INFO  [main] 2016-10-25 13:06:12,308 LegacySchemaMigrator.java:148 - 
> Migrating keyspace 
> org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@44bb0b35
> INFO  [main] 2016-10-25 13:06:12,311 LegacySchemaMigrator.java:148 - 
> Migrating keyspace 
> org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@79f6cd51
> INFO  [main] 2016-10-25 13:06:12,319 LegacySchemaMigrator.java:148 - 
> Migrating keyspace 
> org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@2fcd363b
> INFO  [main] 2016-10-25 13:06:12,356 LegacySchemaMigrator.java:148 - 
> Migrating keyspace 
> org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@609eead6
> INFO  [main] 2016-10-25 13:06:12,358 LegacySchemaMigrator.java:148 - 
> Migrating keyspace 
> org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@7eb7f5d0
> INFO  [main] 2016-10-25 13:06:13,958 LegacySchemaMigrator.java:97 - 
> Truncating legacy schema tables
> INFO  [main] 2016-10-25 13:06:26,474 LegacySchemaMigrator.java:103 - 
> Completed migration of legacy schema tables
> INFO  [main] 2016-10-25 13:06:26,474 StorageService.java:521 - Populating 
> token metadata from system tables
> INFO  [main] 2016-10-25 13:06:26,796 StorageService.java:528 - Token 
> metadata: Normal Tokens: [HUGE LIST of tokens]
> INFO  [main] 2016-10-25 13:06:29,066 ColumnFamilyStore.java:389 - 
> Initializing ...
> INFO  [main] 2016-10-25 13:06:29,066 ColumnFamilyStore.java:389 - 
> Initializing ...
> INFO  [main] 2016-10-25 13:06:45,894 AutoSavingCache.java:165 - Completed 
> loading (2 ms; 460 keys) KeyCache cache
> INFO  [main] 2016-10-25 13:06:46,982 StorageService.java:521 - Populating 
> token metadata from system tables
> INFO  [main] 2016-10-25 13:06:47,394 StorageService.java:528 - Token 
> metadata: Normal Tokens:[HUGE LIST of tokens]
> INFO  [main] 2016-10-25 13:06:47,420 LegacyHintsMigrator.java:88 - Migrating 
> legacy hints to new storage
> INFO  [main] 2016-10-25 13:06:47,420 LegacyHintsMigrator.java:91 - Forcing a 
> major compaction of system.hints table
> INFO  [main] 2016-10-25 13:06:50,587 LegacyHintsMigrator.java:95 - Writing 
> legacy hints to the new storage
> INFO  [main] 2016-10-25 13:06:53,927 LegacyHintsMigrator.java:99 - Truncating 
> system.hints table
> 
> INFO  [main] 2016-10-25 13:06:56,572 MigrationManager.java:342 - Create new 
> table: 
> 

[jira] [Issue Comment Deleted] (CASSANDRA-12857) Upgrade procedure between 2.1.x and 3.0.x is broken

2016-11-08 Thread Alexander Yasnogor (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Yasnogor updated CASSANDRA-12857:
---
Comment: was deleted

(was: Is there a way to share the schema by not publicly attaching it?)

> Upgrade procedure between 2.1.x and 3.0.x is broken
> ---
>
> Key: CASSANDRA-12857
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12857
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alexander Yasnogor
>Priority: Critical
> Attachments: cassandra.schema
>
>
> It is not possible to safely do an in-place Cassandra upgrade from 2.1.14 
> to 3.0.9.
> Distribution: deb packages from the DataStax community repo.
> The upgrade was performed according to the procedure in this documentation: 
> https://docs.datastax.com/en/upgrade/doc/upgrade/cassandra/upgrdCassandraDetails.html
> Potential reason: the upgrade procedure creates a corrupted system_schema, 
> and this keyspace gets populated across the cluster and kills it.
> We started with one datacenter which contains 19 nodes divided into two 
> racks.
> The first rack was successfully upgraded, and nodetool describecluster 
> reported two schema versions: one for upgraded nodes, another for 
> non-upgraded nodes.
> On starting the new version on the first node from the second rack:
> {code:java}
> INFO  [main] 2016-10-25 13:06:12,103 LegacySchemaMigrator.java:87 - Moving 11 
> keyspaces from legacy schema tables to the new schema keyspace (system_schema)
> INFO  [main] 2016-10-25 13:06:12,104 LegacySchemaMigrator.java:148 - 
> Migrating keyspace 
> org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@7505e6ac
> INFO  [main] 2016-10-25 13:06:12,200 LegacySchemaMigrator.java:148 - 
> Migrating keyspace 
> org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@64414574
> INFO  [main] 2016-10-25 13:06:12,204 LegacySchemaMigrator.java:148 - 
> Migrating keyspace 
> org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@3f2c5f45
> INFO  [main] 2016-10-25 13:06:12,207 LegacySchemaMigrator.java:148 - 
> Migrating keyspace 
> org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@2bc2d64d
> INFO  [main] 2016-10-25 13:06:12,301 LegacySchemaMigrator.java:148 - 
> Migrating keyspace 
> org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@77343846
> INFO  [main] 2016-10-25 13:06:12,305 LegacySchemaMigrator.java:148 - 
> Migrating keyspace 
> org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@19b0b931
> INFO  [main] 2016-10-25 13:06:12,308 LegacySchemaMigrator.java:148 - 
> Migrating keyspace 
> org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@44bb0b35
> INFO  [main] 2016-10-25 13:06:12,311 LegacySchemaMigrator.java:148 - 
> Migrating keyspace 
> org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@79f6cd51
> INFO  [main] 2016-10-25 13:06:12,319 LegacySchemaMigrator.java:148 - 
> Migrating keyspace 
> org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@2fcd363b
> INFO  [main] 2016-10-25 13:06:12,356 LegacySchemaMigrator.java:148 - 
> Migrating keyspace 
> org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@609eead6
> INFO  [main] 2016-10-25 13:06:12,358 LegacySchemaMigrator.java:148 - 
> Migrating keyspace 
> org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@7eb7f5d0
> INFO  [main] 2016-10-25 13:06:13,958 LegacySchemaMigrator.java:97 - 
> Truncating legacy schema tables
> INFO  [main] 2016-10-25 13:06:26,474 LegacySchemaMigrator.java:103 - 
> Completed migration of legacy schema tables
> INFO  [main] 2016-10-25 13:06:26,474 StorageService.java:521 - Populating 
> token metadata from system tables
> INFO  [main] 2016-10-25 13:06:26,796 StorageService.java:528 - Token 
> metadata: Normal Tokens: [HUGE LIST of tokens]
> INFO  [main] 2016-10-25 13:06:29,066 ColumnFamilyStore.java:389 - 
> Initializing ...
> INFO  [main] 2016-10-25 13:06:29,066 ColumnFamilyStore.java:389 - 
> Initializing ...
> INFO  [main] 2016-10-25 13:06:45,894 AutoSavingCache.java:165 - Completed 
> loading (2 ms; 460 keys) KeyCache cache
> INFO  [main] 2016-10-25 13:06:46,982 StorageService.java:521 - Populating 
> token metadata from system tables
> INFO  [main] 2016-10-25 13:06:47,394 StorageService.java:528 - Token 
> metadata: Normal Tokens:[HUGE LIST of tokens]
> INFO  [main] 2016-10-25 13:06:47,420 LegacyHintsMigrator.java:88 - Migrating 
> legacy hints to new storage
> INFO  [main] 2016-10-25 13:06:47,420 LegacyHintsMigrator.java:91 - Forcing a 
> major compaction of system.hints table
> INFO  [main] 2016-10-25 13:06:50,587 LegacyHintsMigrator.java:95 - Writing 
> legacy hints to the new storage
> INFO  [main] 2016-10-25 13:06:53,927 LegacyHintsMigrator.java:99 - Truncating 
> system.hints table
> 
> INFO  [main] 2016-10-25 13:06:56,572 MigrationManager.java:342 - Create new 
> table: 
> 

[jira] [Commented] (CASSANDRA-12730) Thousands of empty SSTables created during repair - TMOF death

2016-11-08 Thread Benjamin Roth (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648009#comment-15648009
 ] 

Benjamin Roth commented on CASSANDRA-12730:
---

Unfortunately the logs are gone, the SSTables have all been compacted away, 
and I don't really remember if it was the MV or the base table. In my initial 
post I stated that only the MV was affected - so if I trust my own post, it 
was the MV ;)

I will verify this as soon as I observe that misbehaviour again.

But what I can tell for sure: there is no MV propagation to other nodes, as I 
ONLY use MVs with the same partition key as the base table. Using MVs with 
different partition keys behaved terribly (especially in the case of 
streaming), so I had to get rid of them! At least that's how I understood 
MVs - view replicas with the same PK as the base table always reside on the 
same node.

> Thousands of empty SSTables created during repair - TMOF death
> --
>
> Key: CASSANDRA-12730
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12730
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Benjamin Roth
>Priority: Critical
>
> Last night I ran a repair on a keyspace with 7 tables and 4 MVs, each 
> containing a few hundred million records. After a few hours a node died 
> because of "too many open files".
> Normally one would just raise the limit, but: we already set this to 100k. 
> The problem was that the repair created roughly over 100k SSTables for a 
> certain MV. The strange thing is that these SSTables had almost no data 
> (like 53 bytes, 90 bytes, ...). Some of them (<5%) had a few 100 KB, and 
> very few (<1%) had normal sizes like >= a few MB. I could understand that 
> SSTables queue up as they are flushed and not compacted in time, but then 
> they should have at least a few MB (depending on config and available 
> memory), right?
> Of course the node then runs out of FDs, and I guess it is not a good idea 
> to raise the limit even higher, as I expect that this would just create 
> even more empty SSTables before dying eventually.
> Only 1 CF (an MV) was affected. All other CFs (also MVs) behave sanely. 
> Empty SSTables have been created evenly over time, 100-150 every minute. 
> Among the empty SSTables there are also tables that look normal, having a 
> few MBs.
> I didn't see any errors or exceptions in the logs until TMOF occurred. Just 
> tons of streams due to the repair (which I actually run via cs-reaper as 
> subrange, full repairs).
> After having restarted that node (and no more repairs running), the number 
> of SSTables went down again as they were slowly compacted away.
> According to [~zznate] this issue may relate to CASSANDRA-10342 + 
> CASSANDRA-8641



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12730) Thousands of empty SSTables created during repair - TMOF death

2016-11-08 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648004#comment-15648004
 ] 

Michael Shuler commented on CASSANDRA-12730:


Great, thanks for the clarification!

> Thousands of empty SSTables created during repair - TMOF death
> --
>
> Key: CASSANDRA-12730
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12730
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Benjamin Roth
>Priority: Critical
>
> Last night I ran a repair on a keyspace with 7 tables and 4 MVs, each 
> containing a few hundred million records. After a few hours a node died 
> because of "too many open files".
> Normally one would just raise the limit, but: we already set this to 100k. 
> The problem was that the repair created roughly over 100k SSTables for a 
> certain MV. The strange thing is that these SSTables had almost no data 
> (like 53 bytes, 90 bytes, ...). Some of them (<5%) had a few 100 KB, and 
> very few (<1%) had normal sizes like >= a few MB. I could understand that 
> SSTables queue up as they are flushed and not compacted in time, but then 
> they should have at least a few MB (depending on config and available 
> memory), right?
> Of course the node then runs out of FDs, and I guess it is not a good idea 
> to raise the limit even higher, as I expect that this would just create 
> even more empty SSTables before dying eventually.
> Only 1 CF (an MV) was affected. All other CFs (also MVs) behave sanely. 
> Empty SSTables have been created evenly over time, 100-150 every minute. 
> Among the empty SSTables there are also tables that look normal, having a 
> few MBs.
> I didn't see any errors or exceptions in the logs until TMOF occurred. Just 
> tons of streams due to the repair (which I actually run via cs-reaper as 
> subrange, full repairs).
> After having restarted that node (and no more repairs running), the number 
> of SSTables went down again as they were slowly compacted away.
> According to [~zznate] this issue may relate to CASSANDRA-10342 + 
> CASSANDRA-8641



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12730) Thousands of empty SSTables created during repair - TMOF death

2016-11-08 Thread Benjamin Roth (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647987#comment-15647987
 ] 

Benjamin Roth commented on CASSANDRA-12730:
---

I get your point, but to be clear: in THIS case we do full repairs, as 
already mentioned. So these are maybe 2 different issues.
The case you describe reminds me very much of that other ticket I created: 
CASSANDRA-12489.

Once again: we use reaper to do parallel, full, subrange repairs.

> Thousands of empty SSTables created during repair - TMOF death
> --
>
> Key: CASSANDRA-12730
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12730
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Benjamin Roth
>Priority: Critical
>
> Last night I ran a repair on a keyspace with 7 tables and 4 MVs, each 
> containing a few hundred million records. After a few hours a node died 
> because of "too many open files".
> Normally one would just raise the limit, but: we already set this to 100k. 
> The problem was that the repair created roughly over 100k SSTables for a 
> certain MV. The strange thing is that these SSTables had almost no data 
> (like 53 bytes, 90 bytes, ...). Some of them (<5%) had a few 100 KB, and 
> very few (<1%) had normal sizes like >= a few MB. I could understand that 
> SSTables queue up as they are flushed and not compacted in time, but then 
> they should have at least a few MB (depending on config and available 
> memory), right?
> Of course the node then runs out of FDs, and I guess it is not a good idea 
> to raise the limit even higher, as I expect that this would just create 
> even more empty SSTables before dying eventually.
> Only 1 CF (an MV) was affected. All other CFs (also MVs) behave sanely. 
> Empty SSTables have been created evenly over time, 100-150 every minute. 
> Among the empty SSTables there are also tables that look normal, having a 
> few MBs.
> I didn't see any errors or exceptions in the logs until TMOF occurred. Just 
> tons of streams due to the repair (which I actually run via cs-reaper as 
> subrange, full repairs).
> After having restarted that node (and no more repairs running), the number 
> of SSTables went down again as they were slowly compacted away.
> According to [~zznate] this issue may relate to CASSANDRA-10342 + 
> CASSANDRA-8641



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12730) Thousands of empty SSTables created during repair - TMOF death

2016-11-08 Thread Benjamin Roth (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647975#comment-15647975
 ] 

Benjamin Roth commented on CASSANDRA-12730:
---

[cqlsh 5.0.1 | Cassandra 3.10 | CQL spec 3.4.3 | Native protocol v4]

But as mentioned, this is a fork whose latest common ancestor is commit 
bddfd643b0d1ccebf129a10fa0e0a60289c9dea0, plus an added fix for 
CASSANDRA-12689.

> Thousands of empty SSTables created during repair - TMOF death
> --
>
> Key: CASSANDRA-12730
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12730
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Benjamin Roth
>Priority: Critical
>
> Last night I ran a repair on a keyspace with 7 tables and 4 MVs, each 
> containing a few hundred million records. After a few hours a node died 
> because of "too many open files".
> Normally one would just raise the limit, but: we already set this to 100k. 
> The problem was that the repair created roughly over 100k SSTables for a 
> certain MV. The strange thing is that these SSTables had almost no data 
> (like 53 bytes, 90 bytes, ...). Some of them (<5%) had a few 100 KB, and 
> very few (<1%) had normal sizes like >= a few MB. I could understand that 
> SSTables queue up as they are flushed and not compacted in time, but then 
> they should have at least a few MB (depending on config and available 
> memory), right?
> Of course the node then runs out of FDs, and I guess it is not a good idea 
> to raise the limit even higher, as I expect that this would just create 
> even more empty SSTables before dying eventually.
> Only 1 CF (an MV) was affected. All other CFs (also MVs) behave sanely. 
> Empty SSTables have been created evenly over time, 100-150 every minute. 
> Among the empty SSTables there are also tables that look normal, having a 
> few MBs.
> I didn't see any errors or exceptions in the logs until TMOF occurred. Just 
> tons of streams due to the repair (which I actually run via cs-reaper as 
> subrange, full repairs).
> After having restarted that node (and no more repairs running), the number 
> of SSTables went down again as they were slowly compacted away.
> According to [~zznate] this issue may relate to CASSANDRA-10342 + 
> CASSANDRA-8641



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12651) Failure in SecondaryIndexTest.testAllowFilteringOnPartitionKeyWithSecondaryIndex

2016-11-08 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-12651:

Status: Open  (was: Patch Available)

> Failure in 
> SecondaryIndexTest.testAllowFilteringOnPartitionKeyWithSecondaryIndex
> 
>
> Key: CASSANDRA-12651
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12651
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Joel Knighton
>Assignee: Alex Petrov
>
> This has failed with/without compression.
> Stacktrace:
> {code}
> junit.framework.AssertionFailedError: Got less rows than expected. Expected 2 
> but got 0
>   at org.apache.cassandra.cql3.CQLTester.assertRows(CQLTester.java:909)
>   at 
> org.apache.cassandra.cql3.validation.entities.SecondaryIndexTest.lambda$testAllowFilteringOnPartitionKeyWithSecondaryIndex$78(SecondaryIndexTest.java:1228)
>   at 
> org.apache.cassandra.cql3.validation.entities.SecondaryIndexTest$$Lambda$293/218688965.apply(Unknown
>  Source)
>   at 
> org.apache.cassandra.cql3.CQLTester.beforeAndAfterFlush(CQLTester.java:1215)
>   at 
> org.apache.cassandra.cql3.validation.entities.SecondaryIndexTest.testAllowFilteringOnPartitionKeyWithSecondaryIndex(SecondaryIndexTest.java:1218)
> {code}
> Examples:
> http://cassci.datastax.com/job/trunk_testall/1176/testReport/org.apache.cassandra.cql3.validation.entities/SecondaryIndexTest/testAllowFilteringOnPartitionKeyWithSecondaryIndex/
> http://cassci.datastax.com/job/trunk_testall/1176/testReport/org.apache.cassandra.cql3.validation.entities/SecondaryIndexTest/testAllowFilteringOnPartitionKeyWithSecondaryIndex_compression/
> http://cassci.datastax.com/job/trunk_testall/1219/testReport/org.apache.cassandra.cql3.validation.entities/SecondaryIndexTest/testAllowFilteringOnPartitionKeyWithSecondaryIndex/
> http://cassci.datastax.com/job/trunk_testall/1216/testReport/org.apache.cassandra.cql3.validation.entities/SecondaryIndexTest/testAllowFilteringOnPartitionKeyWithSecondaryIndex/
> http://cassci.datastax.com/job/trunk_testall/1208/testReport/org.apache.cassandra.cql3.validation.entities/SecondaryIndexTest/testAllowFilteringOnPartitionKeyWithSecondaryIndex/
> http://cassci.datastax.com/job/trunk_testall/1175/testReport/org.apache.cassandra.cql3.validation.entities/SecondaryIndexTest/testAllowFilteringOnPartitionKeyWithSecondaryIndex/
> May or may not be related, but there's a test failure (index duplicate):
> http://cassci.datastax.com/view/Dev/view/carlyeks/job/carlyeks-ticket-11803-3.X-testall/lastCompletedBuild/testReport/org.apache.cassandra.index.internal/CassandraIndexTest/indexOnFirstClusteringColumn_compression/
> http://cassci.datastax.com/job/ifesdjeen-11803-test-fix-trunk-testall/1/testReport/junit/org.apache.cassandra.index.internal/CassandraIndexTest/indexOnFirstClusteringColumn_compression/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12462) NullPointerException in CompactionInfo.getId(CompactionInfo.java:65)

2016-11-08 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-12462:
---
   Resolution: Fixed
Fix Version/s: (was: 3.0.x)
   (was: 2.2.x)
   (was: 3.x)
   3.10
   3.0.10
   2.2.9
   Status: Resolved  (was: Patch Available)

Tests look good, so I committed as 
{{cbebb29adf5d8b13e75fe60c2f7aa312420be35c}}. Thanks!

> NullPointerException in CompactionInfo.getId(CompactionInfo.java:65)
> 
>
> Key: CASSANDRA-12462
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12462
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Jonathan DePrizio
>Assignee: Simon Zhou
> Fix For: 2.2.9, 3.0.10, 3.10
>
> Attachments: 
> 0001-Fix-NPE-when-running-nodetool-compactionstats.patch, 
> CASSANDRA-12462-v2.patch
>
>
> Note: The same trace is cited in the last comment of 
> https://issues.apache.org/jira/browse/CASSANDRA-11961
> I've noticed that some of my nodes in my 2.1 cluster have fallen way behind 
> on compactions, and have huge numbers (thousands) of uncompacted, tiny 
> SSTables (~30MB or so).
> In diagnosing the issue, I've found that "nodetool compactionstats" returns 
> the exception below. Restarting Cassandra on the node causes the 
> pending tasks count to jump to ~2000.  Compactions run properly for about an 
> hour, until this exception occurs again.  Once it occurs, I see the pending 
> tasks value rapidly drop towards zero, but without any compactions actually 
> running (the logs show no compactions finishing).  It would seem that this is 
> causing compactions to fail on this node, which is leading to it running out 
> of space, etc.
> [redacted]# nodetool compactionstats
> xss =  -ea -javaagent:/usr/share/cassandra/lib/jamm-0.3.0.jar 
> -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms12G -Xmx12G 
> -Xmn1000M -Xss255k
> pending tasks: 5
> error: null
> -- StackTrace --
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.db.compaction.CompactionInfo.getId(CompactionInfo.java:65)
>   at 
> org.apache.cassandra.db.compaction.CompactionInfo.asMap(CompactionInfo.java:118)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager.getCompactions(CompactionManager.java:1405)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>   at java.lang.reflect.Method.invoke(Unknown Source)
>   at sun.reflect.misc.Trampoline.invoke(Unknown Source)
>   at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>   at java.lang.reflect.Method.invoke(Unknown Source)
>   at sun.reflect.misc.MethodUtil.invoke(Unknown Source)
>   at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(Unknown 
> Source)
>   at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(Unknown 
> Source)
>   at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(Unknown Source)
>   at com.sun.jmx.mbeanserver.PerInterface.getAttribute(Unknown Source)
>   at com.sun.jmx.mbeanserver.MBeanSupport.getAttribute(Unknown Source)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(Unknown 
> Source)
>   at com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(Unknown Source)
>   at javax.management.remote.rmi.RMIConnectionImpl.doOperation(Unknown 
> Source)
>   at javax.management.remote.rmi.RMIConnectionImpl.access$300(Unknown 
> Source)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(Unknown 
> Source)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(Unknown 
> Source)
>   at javax.management.remote.rmi.RMIConnectionImpl.getAttribute(Unknown 
> Source)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>   at java.lang.reflect.Method.invoke(Unknown Source)
>   at sun.rmi.server.UnicastServerRef.dispatch(Unknown Source)
>   at sun.rmi.transport.Transport$1.run(Unknown Source)
>   at sun.rmi.transport.Transport$1.run(Unknown Source)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at sun.rmi.transport.Transport.serviceCall(Unknown Source)
>   at sun.rmi.transport.tcp.TCPTransport.handleMessages(Unknown Source)
>   at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(Unknown 
> Source)
>   at 

[08/10] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.X

2016-11-08 Thread yukim
Merge branch 'cassandra-3.0' into cassandra-3.X


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bc78a2af
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bc78a2af
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bc78a2af

Branch: refs/heads/cassandra-3.X
Commit: bc78a2afac1bca4cd17ae3e156033ff0b205f3fc
Parents: d582d03 92594d8
Author: Yuki Morishita 
Authored: Tue Nov 8 09:42:54 2016 -0600
Committer: Yuki Morishita 
Committed: Tue Nov 8 09:42:54 2016 -0600

--
 CHANGES.txt  | 1 +
 .../org/apache/cassandra/db/compaction/CompactionInfo.java   | 8 +++-
 2 files changed, 4 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bc78a2af/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bc78a2af/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
--
diff --cc src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
index 535217f,3cd8737..344fa58
--- a/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
@@@ -23,8 -23,8 +23,6 @@@ import java.util.Map
  import java.util.UUID;
  
  import org.apache.cassandra.config.CFMetaData;
--import org.apache.cassandra.metrics.StorageMetrics;
--import org.apache.cassandra.service.StorageService;
  
  /** Implements serializable to allow structured info to be returned via JMX. 
*/
  public final class CompactionInfo implements Serializable



[07/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-11-08 Thread yukim
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/92594d8b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/92594d8b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/92594d8b

Branch: refs/heads/trunk
Commit: 92594d8b89746e91a302e53cf17f1e27891a8913
Parents: 78fdfe2 cbebb29
Author: Yuki Morishita 
Authored: Tue Nov 8 09:41:32 2016 -0600
Committer: Yuki Morishita 
Committed: Tue Nov 8 09:41:32 2016 -0600

--
 CHANGES.txt| 1 +
 .../org/apache/cassandra/db/compaction/CompactionInfo.java | 6 +++---
 2 files changed, 4 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/92594d8b/CHANGES.txt
--
diff --cc CHANGES.txt
index 1d2c8f3,9d328ae..d3043b8
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,35 -1,5 +1,36 @@@
 -2.2.9
 +3.0.10
 + * Batch with multiple conditional updates for the same partition causes 
AssertionError (CASSANDRA-12867)
 + * Make AbstractReplicationStrategy extendable from outside its package 
(CASSANDRA-12788)
 + * Fix CommitLogTest.testDeleteIfNotDirty (CASSANDRA-12854)
 + * Don't tell users to turn off consistent rangemovements during rebuild. 
(CASSANDRA-12296)
 + * Avoid deadlock due to materialized view lock contention (CASSANDRA-12689)
 + * Fix for KeyCacheCqlTest flakiness (CASSANDRA-12801)
 + * Include SSTable filename in compacting large row message (CASSANDRA-12384)
 + * Fix potential socket leak (CASSANDRA-12329, CASSANDRA-12330)
 + * Fix ViewTest.testCompaction (CASSANDRA-12789)
 + * Improve avg aggregate functions (CASSANDRA-12417)
 + * Preserve quoted reserved keyword column names in MV creation 
(CASSANDRA-11803)
 + * nodetool stopdaemon errors out (CASSANDRA-12646)
 + * Split materialized view mutations on build to prevent OOM (CASSANDRA-12268)
 + * mx4j does not work in 3.0.8 (CASSANDRA-12274)
 + * Abort cqlsh copy-from in case of no answer after prolonged period of time 
(CASSANDRA-12740)
 + * Avoid sstable corrupt exception due to dropped static column 
(CASSANDRA-12582)
 + * Make stress use client mode to avoid checking commit log size on startup 
(CASSANDRA-12478)
 + * Fix exceptions with new vnode allocation (CASSANDRA-12715)
 + * Unify drain and shutdown processes (CASSANDRA-12509)
 + * Fix NPE in ComponentOfSlice.isEQ() (CASSANDRA-12706)
 + * Fix failure in LogTransactionTest (CASSANDRA-12632)
 + * Fix potentially incomplete non-frozen UDT values when querying with the
 +   full primary key specified (CASSANDRA-12605)
 + * Skip writing MV mutations to commitlog on mutation.applyUnsafe() 
(CASSANDRA-11670)
 + * Establish consistent distinction between non-existing partition and NULL 
value for LWTs on static columns (CASSANDRA-12060)
 + * Extend ColumnIdentifier.internedInstances key to include the type that 
generated the byte buffer (CASSANDRA-12516)
 + * Backport CASSANDRA-10756 (race condition in NativeTransportService 
shutdown) (CASSANDRA-12472)
 + * If CF has no clustering columns, any row cache is full partition cache 
(CASSANDRA-12499)
 + * Correct log message for statistics of offheap memtable flush 
(CASSANDRA-12776)
 + * Explicitly set locale for string validation 
(CASSANDRA-12541,CASSANDRA-12542,CASSANDRA-12543,CASSANDRA-12545)
 +Merged from 2.2:
+  * Fix potential NPE for compactionstats (CASSANDRA-12462)
   * Prepare legacy authenticate statement if credentials table initialised 
after node startup (CASSANDRA-12813)
   * Change cassandra.wait_for_tracing_events_timeout_secs default to 0 
(CASSANDRA-12754)
   * Clean up permissions when a UDA is dropped (CASSANDRA-12720)



[06/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-11-08 Thread yukim
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/92594d8b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/92594d8b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/92594d8b

Branch: refs/heads/cassandra-3.0
Commit: 92594d8b89746e91a302e53cf17f1e27891a8913
Parents: 78fdfe2 cbebb29
Author: Yuki Morishita 
Authored: Tue Nov 8 09:41:32 2016 -0600
Committer: Yuki Morishita 
Committed: Tue Nov 8 09:41:32 2016 -0600

--
 CHANGES.txt| 1 +
 .../org/apache/cassandra/db/compaction/CompactionInfo.java | 6 +++---
 2 files changed, 4 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/92594d8b/CHANGES.txt
--
diff --cc CHANGES.txt
index 1d2c8f3,9d328ae..d3043b8
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,35 -1,5 +1,36 @@@
 -2.2.9
 +3.0.10
 + * Batch with multiple conditional updates for the same partition causes 
AssertionError (CASSANDRA-12867)
 + * Make AbstractReplicationStrategy extendable from outside its package 
(CASSANDRA-12788)
 + * Fix CommitLogTest.testDeleteIfNotDirty (CASSANDRA-12854)
 + * Don't tell users to turn off consistent rangemovements during rebuild. 
(CASSANDRA-12296)
 + * Avoid deadlock due to materialized view lock contention (CASSANDRA-12689)
 + * Fix for KeyCacheCqlTest flakiness (CASSANDRA-12801)
 + * Include SSTable filename in compacting large row message (CASSANDRA-12384)
 + * Fix potential socket leak (CASSANDRA-12329, CASSANDRA-12330)
 + * Fix ViewTest.testCompaction (CASSANDRA-12789)
 + * Improve avg aggregate functions (CASSANDRA-12417)
 + * Preserve quoted reserved keyword column names in MV creation 
(CASSANDRA-11803)
 + * nodetool stopdaemon errors out (CASSANDRA-12646)
 + * Split materialized view mutations on build to prevent OOM (CASSANDRA-12268)
 + * mx4j does not work in 3.0.8 (CASSANDRA-12274)
 + * Abort cqlsh copy-from in case of no answer after prolonged period of time 
(CASSANDRA-12740)
 + * Avoid sstable corrupt exception due to dropped static column 
(CASSANDRA-12582)
 + * Make stress use client mode to avoid checking commit log size on startup 
(CASSANDRA-12478)
 + * Fix exceptions with new vnode allocation (CASSANDRA-12715)
 + * Unify drain and shutdown processes (CASSANDRA-12509)
 + * Fix NPE in ComponentOfSlice.isEQ() (CASSANDRA-12706)
 + * Fix failure in LogTransactionTest (CASSANDRA-12632)
 + * Fix potentially incomplete non-frozen UDT values when querying with the
 +   full primary key specified (CASSANDRA-12605)
 + * Skip writing MV mutations to commitlog on mutation.applyUnsafe() 
(CASSANDRA-11670)
 + * Establish consistent distinction between non-existing partition and NULL 
value for LWTs on static columns (CASSANDRA-12060)
 + * Extend ColumnIdentifier.internedInstances key to include the type that 
generated the byte buffer (CASSANDRA-12516)
 + * Backport CASSANDRA-10756 (race condition in NativeTransportService 
shutdown) (CASSANDRA-12472)
 + * If CF has no clustering columns, any row cache is full partition cache 
(CASSANDRA-12499)
 + * Correct log message for statistics of offheap memtable flush 
(CASSANDRA-12776)
 + * Explicitly set locale for string validation 
(CASSANDRA-12541,CASSANDRA-12542,CASSANDRA-12543,CASSANDRA-12545)
 +Merged from 2.2:
+  * Fix potential NPE for compactionstats (CASSANDRA-12462)
   * Prepare legacy authenticate statement if credentials table initialised 
after node startup (CASSANDRA-12813)
   * Change cassandra.wait_for_tracing_events_timeout_secs default to 0 
(CASSANDRA-12754)
   * Clean up permissions when a UDA is dropped (CASSANDRA-12720)



[03/10] cassandra git commit: Fix potential NPE for compactionstats

2016-11-08 Thread yukim
Fix potential NPE for compactionstats

patch by Simon Zhou; reviewed by yukim for CASSANDRA-12462


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cbebb29a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cbebb29a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cbebb29a

Branch: refs/heads/cassandra-3.X
Commit: cbebb29adf5d8b13e75fe60c2f7aa312420be35c
Parents: 312e21b
Author: Simon Zhou 
Authored: Thu Sep 22 16:35:52 2016 -0700
Committer: Yuki Morishita 
Committed: Tue Nov 8 09:31:15 2016 -0600

--
 CHANGES.txt| 3 +--
 .../org/apache/cassandra/db/compaction/CompactionInfo.java | 6 +++---
 2 files changed, 4 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cbebb29a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index b33ef8d..9d328ae 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.9
+ * Fix potential NPE for compactionstats (CASSANDRA-12462)
  * Prepare legacy authenticate statement if credentials table initialised 
after node startup (CASSANDRA-12813)
  * Change cassandra.wait_for_tracing_events_timeout_secs default to 0 
(CASSANDRA-12754)
  * Clean up permissions when a UDA is dropped (CASSANDRA-12720)
@@ -9,8 +10,6 @@
  * Better handle invalid system roles table (CASSANDRA-12700)
  * Split consistent range movement flag correction (CASSANDRA-12786)
 Merged from 2.1:
-===
-2.1.17
  * Don't skip sstables based on maxLocalDeletionTime (CASSANDRA-12765)
 
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/cbebb29a/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
index fe81eac..3cd8737 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
@@ -65,17 +65,17 @@ public final class CompactionInfo implements Serializable
 
 public UUID getId()
 {
-return cfm.cfId;
+return cfm != null ? cfm.cfId : null;
 }
 
 public String getKeyspace()
 {
-return cfm.ksName;
+return cfm != null ? cfm.ksName : null;
 }
 
 public String getColumnFamily()
 {
-return cfm.cfName;
+return cfm != null ? cfm.cfName : null;
 }
 
 public CFMetaData getCFMetaData()



[05/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-11-08 Thread yukim
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/92594d8b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/92594d8b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/92594d8b

Branch: refs/heads/cassandra-3.X
Commit: 92594d8b89746e91a302e53cf17f1e27891a8913
Parents: 78fdfe2 cbebb29
Author: Yuki Morishita 
Authored: Tue Nov 8 09:41:32 2016 -0600
Committer: Yuki Morishita 
Committed: Tue Nov 8 09:41:32 2016 -0600

--
 CHANGES.txt| 1 +
 .../org/apache/cassandra/db/compaction/CompactionInfo.java | 6 +++---
 2 files changed, 4 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/92594d8b/CHANGES.txt
--
diff --cc CHANGES.txt
index 1d2c8f3,9d328ae..d3043b8
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,35 -1,5 +1,36 @@@
 -2.2.9
 +3.0.10
 + * Batch with multiple conditional updates for the same partition causes 
AssertionError (CASSANDRA-12867)
 + * Make AbstractReplicationStrategy extendable from outside its package 
(CASSANDRA-12788)
 + * Fix CommitLogTest.testDeleteIfNotDirty (CASSANDRA-12854)
 + * Don't tell users to turn off consistent rangemovements during rebuild. 
(CASSANDRA-12296)
 + * Avoid deadlock due to materialized view lock contention (CASSANDRA-12689)
 + * Fix for KeyCacheCqlTest flakiness (CASSANDRA-12801)
 + * Include SSTable filename in compacting large row message (CASSANDRA-12384)
 + * Fix potential socket leak (CASSANDRA-12329, CASSANDRA-12330)
 + * Fix ViewTest.testCompaction (CASSANDRA-12789)
 + * Improve avg aggregate functions (CASSANDRA-12417)
 + * Preserve quoted reserved keyword column names in MV creation 
(CASSANDRA-11803)
 + * nodetool stopdaemon errors out (CASSANDRA-12646)
 + * Split materialized view mutations on build to prevent OOM (CASSANDRA-12268)
 + * mx4j does not work in 3.0.8 (CASSANDRA-12274)
 + * Abort cqlsh copy-from in case of no answer after prolonged period of time 
(CASSANDRA-12740)
 + * Avoid sstable corrupt exception due to dropped static column 
(CASSANDRA-12582)
 + * Make stress use client mode to avoid checking commit log size on startup 
(CASSANDRA-12478)
 + * Fix exceptions with new vnode allocation (CASSANDRA-12715)
 + * Unify drain and shutdown processes (CASSANDRA-12509)
 + * Fix NPE in ComponentOfSlice.isEQ() (CASSANDRA-12706)
 + * Fix failure in LogTransactionTest (CASSANDRA-12632)
 + * Fix potentially incomplete non-frozen UDT values when querying with the
 +   full primary key specified (CASSANDRA-12605)
 + * Skip writing MV mutations to commitlog on mutation.applyUnsafe() 
(CASSANDRA-11670)
 + * Establish consistent distinction between non-existing partition and NULL 
value for LWTs on static columns (CASSANDRA-12060)
 + * Extend ColumnIdentifier.internedInstances key to include the type that 
generated the byte buffer (CASSANDRA-12516)
 + * Backport CASSANDRA-10756 (race condition in NativeTransportService 
shutdown) (CASSANDRA-12472)
 + * If CF has no clustering columns, any row cache is full partition cache 
(CASSANDRA-12499)
 + * Correct log message for statistics of offheap memtable flush 
(CASSANDRA-12776)
 + * Explicitly set locale for string validation 
(CASSANDRA-12541,CASSANDRA-12542,CASSANDRA-12543,CASSANDRA-12545)
 +Merged from 2.2:
+  * Fix potential NPE for compactionstats (CASSANDRA-12462)
   * Prepare legacy authenticate statement if credentials table initialised 
after node startup (CASSANDRA-12813)
   * Change cassandra.wait_for_tracing_events_timeout_secs default to 0 
(CASSANDRA-12754)
   * Clean up permissions when a UDA is dropped (CASSANDRA-12720)



[04/10] cassandra git commit: Fix potential NPE for compactionstats

2016-11-08 Thread yukim
Fix potential NPE for compactionstats

patch by Simon Zhou; reviewed by yukim for CASSANDRA-12462


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cbebb29a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cbebb29a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cbebb29a

Branch: refs/heads/trunk
Commit: cbebb29adf5d8b13e75fe60c2f7aa312420be35c
Parents: 312e21b
Author: Simon Zhou 
Authored: Thu Sep 22 16:35:52 2016 -0700
Committer: Yuki Morishita 
Committed: Tue Nov 8 09:31:15 2016 -0600

--
 CHANGES.txt| 3 +--
 .../org/apache/cassandra/db/compaction/CompactionInfo.java | 6 +++---
 2 files changed, 4 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cbebb29a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index b33ef8d..9d328ae 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.9
+ * Fix potential NPE for compactionstats (CASSANDRA-12462)
  * Prepare legacy authenticate statement if credentials table initialised 
after node startup (CASSANDRA-12813)
  * Change cassandra.wait_for_tracing_events_timeout_secs default to 0 
(CASSANDRA-12754)
  * Clean up permissions when a UDA is dropped (CASSANDRA-12720)
@@ -9,8 +10,6 @@
  * Better handle invalid system roles table (CASSANDRA-12700)
  * Split consistent range movement flag correction (CASSANDRA-12786)
 Merged from 2.1:
-===
-2.1.17
  * Don't skip sstables based on maxLocalDeletionTime (CASSANDRA-12765)
 
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/cbebb29a/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
index fe81eac..3cd8737 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
@@ -65,17 +65,17 @@ public final class CompactionInfo implements Serializable
 
 public UUID getId()
 {
-return cfm.cfId;
+return cfm != null ? cfm.cfId : null;
 }
 
 public String getKeyspace()
 {
-return cfm.ksName;
+return cfm != null ? cfm.ksName : null;
 }
 
 public String getColumnFamily()
 {
-return cfm.cfName;
+return cfm != null ? cfm.cfName : null;
 }
 
 public CFMetaData getCFMetaData()



[09/10] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.X

2016-11-08 Thread yukim
Merge branch 'cassandra-3.0' into cassandra-3.X


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bc78a2af
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bc78a2af
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bc78a2af

Branch: refs/heads/trunk
Commit: bc78a2afac1bca4cd17ae3e156033ff0b205f3fc
Parents: d582d03 92594d8
Author: Yuki Morishita 
Authored: Tue Nov 8 09:42:54 2016 -0600
Committer: Yuki Morishita 
Committed: Tue Nov 8 09:42:54 2016 -0600

--
 CHANGES.txt  | 1 +
 .../org/apache/cassandra/db/compaction/CompactionInfo.java   | 8 +++-
 2 files changed, 4 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bc78a2af/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bc78a2af/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
--
diff --cc src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
index 535217f,3cd8737..344fa58
--- a/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
@@@ -23,8 -23,8 +23,6 @@@ import java.util.Map
  import java.util.UUID;
  
  import org.apache.cassandra.config.CFMetaData;
--import org.apache.cassandra.metrics.StorageMetrics;
--import org.apache.cassandra.service.StorageService;
  
  /** Implements serializable to allow structured info to be returned via JMX. 
*/
  public final class CompactionInfo implements Serializable



[02/10] cassandra git commit: Fix potential NPE for compactionstats

2016-11-08 Thread yukim
Fix potential NPE for compactionstats

patch by Simon Zhou; reviewed by yukim for CASSANDRA-12462


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cbebb29a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cbebb29a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cbebb29a

Branch: refs/heads/cassandra-3.0
Commit: cbebb29adf5d8b13e75fe60c2f7aa312420be35c
Parents: 312e21b
Author: Simon Zhou 
Authored: Thu Sep 22 16:35:52 2016 -0700
Committer: Yuki Morishita 
Committed: Tue Nov 8 09:31:15 2016 -0600

--
 CHANGES.txt| 3 +--
 .../org/apache/cassandra/db/compaction/CompactionInfo.java | 6 +++---
 2 files changed, 4 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cbebb29a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index b33ef8d..9d328ae 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.9
+ * Fix potential NPE for compactionstats (CASSANDRA-12462)
  * Prepare legacy authenticate statement if credentials table initialised 
after node startup (CASSANDRA-12813)
  * Change cassandra.wait_for_tracing_events_timeout_secs default to 0 
(CASSANDRA-12754)
  * Clean up permissions when a UDA is dropped (CASSANDRA-12720)
@@ -9,8 +10,6 @@
  * Better handle invalid system roles table (CASSANDRA-12700)
  * Split consistent range movement flag correction (CASSANDRA-12786)
 Merged from 2.1:
-===
-2.1.17
  * Don't skip sstables based on maxLocalDeletionTime (CASSANDRA-12765)
 
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/cbebb29a/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
index fe81eac..3cd8737 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
@@ -65,17 +65,17 @@ public final class CompactionInfo implements Serializable
 
 public UUID getId()
 {
-return cfm.cfId;
+return cfm != null ? cfm.cfId : null;
 }
 
 public String getKeyspace()
 {
-return cfm.ksName;
+return cfm != null ? cfm.ksName : null;
 }
 
 public String getColumnFamily()
 {
-return cfm.cfName;
+return cfm != null ? cfm.cfName : null;
 }
 
 public CFMetaData getCFMetaData()



[10/10] cassandra git commit: Merge branch 'cassandra-3.X' into trunk

2016-11-08 Thread yukim
Merge branch 'cassandra-3.X' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b6cb2ab6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b6cb2ab6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b6cb2ab6

Branch: refs/heads/trunk
Commit: b6cb2ab6b4f579c69835bd519343e59401a4dd74
Parents: 138f7b5 bc78a2a
Author: Yuki Morishita 
Authored: Tue Nov 8 09:43:00 2016 -0600
Committer: Yuki Morishita 
Committed: Tue Nov 8 09:43:00 2016 -0600

--
 CHANGES.txt  | 1 +
 .../org/apache/cassandra/db/compaction/CompactionInfo.java   | 8 +++-
 2 files changed, 4 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b6cb2ab6/CHANGES.txt
--



[01/10] cassandra git commit: Fix potential NPE for compactionstats

2016-11-08 Thread yukim
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 312e21bda -> cbebb29ad
  refs/heads/cassandra-3.0 78fdfe233 -> 92594d8b8
  refs/heads/cassandra-3.X d582d0340 -> bc78a2afa
  refs/heads/trunk 138f7b5d0 -> b6cb2ab6b


Fix potential NPE for compactionstats

patch by Simon Zhou; reviewed by yukim for CASSANDRA-12462


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cbebb29a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cbebb29a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cbebb29a

Branch: refs/heads/cassandra-2.2
Commit: cbebb29adf5d8b13e75fe60c2f7aa312420be35c
Parents: 312e21b
Author: Simon Zhou 
Authored: Thu Sep 22 16:35:52 2016 -0700
Committer: Yuki Morishita 
Committed: Tue Nov 8 09:31:15 2016 -0600

--
 CHANGES.txt| 3 +--
 .../org/apache/cassandra/db/compaction/CompactionInfo.java | 6 +++---
 2 files changed, 4 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cbebb29a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index b33ef8d..9d328ae 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.9
+ * Fix potential NPE for compactionstats (CASSANDRA-12462)
  * Prepare legacy authenticate statement if credentials table initialised 
after node startup (CASSANDRA-12813)
  * Change cassandra.wait_for_tracing_events_timeout_secs default to 0 
(CASSANDRA-12754)
  * Clean up permissions when a UDA is dropped (CASSANDRA-12720)
@@ -9,8 +10,6 @@
  * Better handle invalid system roles table (CASSANDRA-12700)
  * Split consistent range movement flag correction (CASSANDRA-12786)
 Merged from 2.1:
-===
-2.1.17
  * Don't skip sstables based on maxLocalDeletionTime (CASSANDRA-12765)
 
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/cbebb29a/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
index fe81eac..3cd8737 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionInfo.java
@@ -65,17 +65,17 @@ public final class CompactionInfo implements Serializable
 
 public UUID getId()
 {
-return cfm.cfId;
+return cfm != null ? cfm.cfId : null;
 }
 
 public String getKeyspace()
 {
-return cfm.ksName;
+return cfm != null ? cfm.ksName : null;
 }
 
 public String getColumnFamily()
 {
-return cfm.cfName;
+return cfm != null ? cfm.cfName : null;
 }
 
 public CFMetaData getCFMetaData()



[jira] [Commented] (CASSANDRA-12730) Thousands of empty SSTables created during repair - TMOF death

2016-11-08 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647854#comment-15647854
 ] 

Yuki Morishita commented on CASSANDRA-12730:


Just to clarify: the number of SSTables increased for the Materialized View, 
not the base table?
As Stefan pointed out, in 3.0+ an MV is updated by applying a Mutation to the 
base table, which then propagates the MV updates to the other nodes.





> Thousands of empty SSTables created during repair - TMOF death
> --
>
> Key: CASSANDRA-12730
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12730
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Benjamin Roth
>Priority: Critical
>
> Last night I ran a repair on a keyspace with 7 tables and 4 MVs, each 
> containing a few hundred million records. After a few hours a node died 
> because of "too many open files".
> Normally one would just raise the limit, but we had already set it to 100k. 
> The problem was that the repair created roughly over 100k SSTables for a 
> certain MV. The strange thing is that these SSTables had almost no data (like 
> 53 bytes, 90 bytes, ...). Some of them (<5%) had a few 100 KB; very few (<1%) 
> had normal sizes like >= a few MB. I could understand that SSTables queue up 
> as they are flushed and not compacted in time, but then they should have at 
> least a few MB (depending on config and available memory), right?
> Of course then the node runs out of FDs, and I guess it is not a good idea to 
> raise the limit even higher, as I expect that this would just create even more 
> empty SSTables before dying at last.
> Only 1 CF (an MV) was affected. All other CFs (also MVs) behave sanely. The 
> empty SSTables were created evenly over time, 100-150 every minute. Among the 
> empty SSTables there are also tables that look normal, having a few MBs.
> I didn't see any errors or exceptions in the logs until TMOF occurred, just 
> tons of streams due to the repair (which I actually run via cs-reaper as 
> subrange, full repairs).
> After restarting that node (with no repair running), the number of SSTables 
> went down again as they were slowly compacted away.
> According to [~zznate] this issue may relate to CASSANDRA-10342 and 
> CASSANDRA-8641



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12730) Thousands of empty SSTables created during repair - TMOF death

2016-11-08 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647828#comment-15647828
 ] 

Stefan Podkowinski commented on CASSANDRA-12730:


The whole mutation-based repair approach seems to be a bit at odds with the 
incremental repair concept. Once the mutations have been applied to the 
memtable and flushed to disk, the resulting sstables will not be flagged with a 
{{repairedAt}} timestamp. The next repair process will pick up from there and 
"repair" the flushed sstables back to the other nodes again, as the rows can't 
be found in the unrepaired set there. This will go back and forth, and each 
repair inconsistency found will probably further aggravate the issue. See 
[here|https://gist.github.com/spodkowinski/2d8e0408516609c7ae701f2bf1e515e8] 
for an example of how to reproduce this locally.
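
A quick way to observe the effect described above is to check the {{repairedAt}} field on a table's live sstables. A minimal sketch against the internal APIs of this era (assuming {{getLiveSSTables()}}, {{getSSTableMetadata()}}, and {{ActiveRepairService.UNREPAIRED_SSTABLE}} as found in the 3.0 tree):

{code}
import org.apache.cassandra.db.ColumnFamilyStore;
import org.apache.cassandra.io.sstable.format.SSTableReader;
import org.apache.cassandra.service.ActiveRepairService;

// Sketch: report which sstables are still unrepaired. SSTables flushed from
// repair-driven MV mutations keep repairedAt == 0 (UNREPAIRED_SSTABLE), so
// the next incremental repair treats their rows as out of sync again.
final class RepairedAtProbe
{
    static void report(ColumnFamilyStore cfs)
    {
        for (SSTableReader sstable : cfs.getLiveSSTables())
        {
            long repairedAt = sstable.getSSTableMetadata().repairedAt;
            boolean repaired = repairedAt != ActiveRepairService.UNREPAIRED_SSTABLE;
            System.out.printf("%s repairedAt=%d repaired=%b%n",
                              sstable.getFilename(), repairedAt, repaired);
        }
    }
}
{code}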

> Thousands of empty SSTables created during repair - TMOF death
> --
>
> Key: CASSANDRA-12730
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12730
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Benjamin Roth
>Priority: Critical
>
> Last night I ran a repair on a keyspace with 7 tables and 4 MVs, each 
> containing a few hundred million records. After a few hours a node died 
> because of "too many open files".
> Normally one would just raise the limit, but we had already set it to 100k. 
> The problem was that the repair created roughly over 100k SSTables for a 
> certain MV. The strange thing is that these SSTables had almost no data (like 
> 53 bytes, 90 bytes, ...). Some of them (<5%) had a few 100 KB; very few (<1%) 
> had normal sizes like >= a few MB. I could understand that SSTables queue up 
> as they are flushed and not compacted in time, but then they should have at 
> least a few MB (depending on config and available memory), right?
> Of course then the node runs out of FDs, and I guess it is not a good idea to 
> raise the limit even higher, as I expect that this would just create even more 
> empty SSTables before dying at last.
> Only 1 CF (an MV) was affected. All other CFs (also MVs) behave sanely. The 
> empty SSTables were created evenly over time, 100-150 every minute. Among the 
> empty SSTables there are also tables that look normal, having a few MBs.
> I didn't see any errors or exceptions in the logs until TMOF occurred, just 
> tons of streams due to the repair (which I actually run via cs-reaper as 
> subrange, full repairs).
> After restarting that node (with no repair running), the number of SSTables 
> went down again as they were slowly compacted away.
> According to [~zznate] this issue may relate to CASSANDRA-10342 and 
> CASSANDRA-8641



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12730) Thousands of empty SSTables created during repair - TMOF death

2016-11-08 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647813#comment-15647813
 ] 

Michael Shuler commented on CASSANDRA-12730:


Could you be a little more precise, please? What's the version of the 
{{apache-cassandra-*.jar}}, or what does the cqlsh banner say? {{[cqlsh 5.0.1 | 
Cassandra 3.10-SNAPSHOT | CQL spec 3.4.3 | Native protocol v4]}}, for example 
in my case. Is this the 3.7++ LTS release someone announced?

> Thousands of empty SSTables created during repair - TMOF death
> --
>
> Key: CASSANDRA-12730
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12730
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Benjamin Roth
>Priority: Critical
>
> Last night I ran a repair on a keyspace with 7 tables and 4 MVs, each 
> containing a few hundred million records. After a few hours a node died 
> because of "too many open files".
> Normally one would just raise the limit, but we had already set it to 100k. 
> The problem was that the repair created roughly over 100k SSTables for a 
> certain MV. The strange thing is that these SSTables had almost no data (like 
> 53 bytes, 90 bytes, ...). Some of them (<5%) had a few 100 KB; very few (<1%) 
> had normal sizes like >= a few MB. I could understand that SSTables queue up 
> as they are flushed and not compacted in time, but then they should have at 
> least a few MB (depending on config and available memory), right?
> Of course then the node runs out of FDs, and I guess it is not a good idea to 
> raise the limit even higher, as I expect that this would just create even more 
> empty SSTables before dying at last.
> Only 1 CF (an MV) was affected. All other CFs (also MVs) behave sanely. The 
> empty SSTables were created evenly over time, 100-150 every minute. Among the 
> empty SSTables there are also tables that look normal, having a few MBs.
> I didn't see any errors or exceptions in the logs until TMOF occurred, just 
> tons of streams due to the repair (which I actually run via cs-reaper as 
> subrange, full repairs).
> After restarting that node (with no repair running), the number of SSTables 
> went down again as they were slowly compacted away.
> According to [~zznate] this issue may relate to CASSANDRA-10342 and 
> CASSANDRA-8641



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-12877) SASI index throwing AssertionError on creation/flush

2016-11-08 Thread Voytek Jarnot (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647722#comment-15647722
 ] 

Voytek Jarnot edited comment on CASSANDRA-12877 at 11/8/16 2:37 PM:


Appreciate the look/triage - apologies for the dupe.

With regard to regression - in 3.9 I get 9 rows from:
{code}
select * from tester where val1='asdf';
{code}

In my 3.10 build, I get 0 rows.  To confirm: 9 rows is expected behavior in 
3.10 as well, right?  Just want to make sure my team and I are not headed down 
a dead-end path based on getting lucky with anomalous indexing behavior in 
3.9...


was (Author: voytek.jarnot):
Appreciate the look/triage - apologies for the dupe.

With regard to regression - in 3.9 I get 9 rows from:
{code}
select * from tester where val1='asdf';
{code}

In my 3.10 build, I get 0 rows.  To confirm: 9 rows is expected behavior in 
3.10 as well, right?  Just want to make sure we're not headed down a dead-end 
path based on getting lucky with anomalous indexing behavior in 3.9...

> SASI index throwing AssertionError on creation/flush
> 
>
> Key: CASSANDRA-12877
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12877
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
> Environment: 3.9 and 3.10 tested on both linux and osx
>Reporter: Voytek Jarnot
>Assignee: Alex Petrov
>
> Possibly a 3.10 regression?  The exact test shown below does not error in 3.9.
> I built and installed a 3.10 snapshot (built 04-Nov-2016) to get around 
> CASSANDRA-11670, CASSANDRA-12689, and CASSANDRA-12223 which are holding me 
> back when using 3.9.
> Now I'm able to make nodetool flush (or a scheduled flush) produce an 
> unhandled error easily with a SASI:
> {code}
> CREATE KEYSPACE vjtest WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': '1'};
> use vjtest ;
> create table tester(id1 text, id2 text, id3 text, val1 text, primary 
> key((id1, id2), id3));
> create custom index tester_idx_val1 on tester(val1) using 
> 'org.apache.cassandra.index.sasi.SASIIndex';
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','1-3','asdf');
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','2-3','asdf');
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','3-3','asdf');
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','4-3','asdf');
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','5-3','asdf');
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','6-3','asdf');
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','7-3','asdf');
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','8-3','asdf');
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','9-3','asdf');
> {code}
> Not enough going on here to trigger a flush, so following a manual {{nodetool 
> flush vjtest}} I get the following in {{system.log}}:
> {code}
> INFO  [MemtableFlushWriter:3] 2016-11-04 22:19:35,412 
> PerSSTableIndexWriter.java:284 - Scheduling index flush to 
> /mydir/apache-cassandra-3.10-SNAPSHOT/data/data/vjtest/tester-6f1fdff0a30611e692c087673c5ef8d4/mc-1-big-SI_tester_idx_val1.db
> INFO  [SASI-Memtable:1] 2016-11-04 22:19:35,447 
> PerSSTableIndexWriter.java:335 - Index flush to 
> /mydir/apache-cassandra-3.10-SNAPSHOT/data/data/vjtest/tester-6f1fdff0a30611e692c087673c5ef8d4/mc-1-big-SI_tester_idx_val1.db
>  took 16 ms.
> ERROR [SASI-Memtable:1] 2016-11-04 22:19:35,449 CassandraDaemon.java:229 - 
> Exception in thread Thread[SASI-Memtable:1,5,RMI Runtime]
> java.lang.AssertionError: cannot have more than 8 overflow collisions per 
> leaf, but had: 9
> at 
> org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder$Leaf.createOverflowEntry(AbstractTokenTreeBuilder.java:357)
>  ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
> at 
> org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder$Leaf.createEntry(AbstractTokenTreeBuilder.java:346)
>  ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
> at 
> org.apache.cassandra.index.sasi.disk.DynamicTokenTreeBuilder$DynamicLeaf.serializeData(DynamicTokenTreeBuilder.java:180)
>  ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
> at 
> org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder$Leaf.serialize(AbstractTokenTreeBuilder.java:306)
>  ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
> at 
> org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder.write(AbstractTokenTreeBuilder.java:90)
>  ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder$MutableDataBlock.flushAndClear(OnDiskIndexBuilder.java:629)
>  ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
> at 
> 

[jira] [Commented] (CASSANDRA-12877) SASI index throwing AssertionError on creation/flush

2016-11-08 Thread Voytek Jarnot (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647722#comment-15647722
 ] 

Voytek Jarnot commented on CASSANDRA-12877:
---

Appreciate the look/triage - apologies for the dupe.

With regard to regression - in 3.9 I get 9 rows from:
{code}
select * from tester where val1='asdf';
{code}

In my 3.10 build, I get 0 rows.  To confirm: 9 rows is expected behavior in 
3.10 as well, right?  Just want to make sure we're not headed down a dead-end 
path based on getting lucky with anomalous indexing behavior in 3.9...

> SASI index throwing AssertionError on creation/flush
> 
>
> Key: CASSANDRA-12877
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12877
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
> Environment: 3.9 and 3.10 tested on both linux and osx
>Reporter: Voytek Jarnot
>Assignee: Alex Petrov
>
> Possibly a 3.10 regression?  The exact test shown below does not error in 3.9.
> I built and installed a 3.10 snapshot (built 04-Nov-2016) to get around 
> CASSANDRA-11670, CASSANDRA-12689, and CASSANDRA-12223 which are holding me 
> back when using 3.9.
> Now I'm able to make nodetool flush (or a scheduled flush) produce an 
> unhandled error easily with a SASI:
> {code}
> CREATE KEYSPACE vjtest WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': '1'};
> use vjtest ;
> create table tester(id1 text, id2 text, id3 text, val1 text, primary 
> key((id1, id2), id3));
> create custom index tester_idx_val1 on tester(val1) using 
> 'org.apache.cassandra.index.sasi.SASIIndex';
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','1-3','asdf');
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','2-3','asdf');
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','3-3','asdf');
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','4-3','asdf');
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','5-3','asdf');
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','6-3','asdf');
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','7-3','asdf');
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','8-3','asdf');
> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','9-3','asdf');
> {code}
> Not enough going on here to trigger a flush, so following a manual {{nodetool 
> flush vjtest}} I get the following in {{system.log}}:
> {code}
> INFO  [MemtableFlushWriter:3] 2016-11-04 22:19:35,412 
> PerSSTableIndexWriter.java:284 - Scheduling index flush to 
> /mydir/apache-cassandra-3.10-SNAPSHOT/data/data/vjtest/tester-6f1fdff0a30611e692c087673c5ef8d4/mc-1-big-SI_tester_idx_val1.db
> INFO  [SASI-Memtable:1] 2016-11-04 22:19:35,447 
> PerSSTableIndexWriter.java:335 - Index flush to 
> /mydir/apache-cassandra-3.10-SNAPSHOT/data/data/vjtest/tester-6f1fdff0a30611e692c087673c5ef8d4/mc-1-big-SI_tester_idx_val1.db
>  took 16 ms.
> ERROR [SASI-Memtable:1] 2016-11-04 22:19:35,449 CassandraDaemon.java:229 - 
> Exception in thread Thread[SASI-Memtable:1,5,RMI Runtime]
> java.lang.AssertionError: cannot have more than 8 overflow collisions per 
> leaf, but had: 9
> at 
> org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder$Leaf.createOverflowEntry(AbstractTokenTreeBuilder.java:357)
>  ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
> at 
> org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder$Leaf.createEntry(AbstractTokenTreeBuilder.java:346)
>  ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
> at 
> org.apache.cassandra.index.sasi.disk.DynamicTokenTreeBuilder$DynamicLeaf.serializeData(DynamicTokenTreeBuilder.java:180)
>  ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
> at 
> org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder$Leaf.serialize(AbstractTokenTreeBuilder.java:306)
>  ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
> at 
> org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder.write(AbstractTokenTreeBuilder.java:90)
>  ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder$MutableDataBlock.flushAndClear(OnDiskIndexBuilder.java:629)
>  ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder$MutableLevel.flush(OnDiskIndexBuilder.java:446)
>  ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder$MutableLevel.finalFlush(OnDiskIndexBuilder.java:451)
>  ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
> at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder.finish(OnDiskIndexBuilder.java:296)
>  ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
> at 
> 
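
A note on the assertion itself: in the repro all nine rows live under the single partition key {{('1-1','1-2')}}, so every index entry for {{val1='asdf'}} lands on the same partition token, and the error message says a token-tree leaf allows at most 8 overflow collisions. A back-of-the-envelope sketch of that arithmetic (plain Java; the limit is taken from the assertion message, not from the SASI source):

{code}
import java.util.HashMap;
import java.util.Map;

// Sketch: nine index entries piling onto one partition token exceed the
// 8-overflow-collisions-per-leaf limit quoted in the AssertionError.
final class OverflowCollisionSketch
{
    static final int MAX_OVERFLOW_COLLISIONS = 8;

    public static void main(String[] args)
    {
        Map<String, Integer> entriesPerToken = new HashMap<>();
        String token = "token(('1-1','1-2'))"; // stand-in for the shared partition token
        for (int i = 1; i <= 9; i++)           // one entry per inserted row
            entriesPerToken.merge(token, 1, Integer::sum);

        int collisions = entriesPerToken.get(token);
        if (collisions > MAX_OVERFLOW_COLLISIONS)
            throw new AssertionError(
                "cannot have more than 8 overflow collisions per leaf, but had: " + collisions);
    }
}
{code}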

[jira] [Comment Edited] (CASSANDRA-12796) Heap exhaustion when rebuilding secondary index over a table with wide partitions

2016-11-08 Thread Milan Majercik (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647229#comment-15647229
 ] 

Milan Majercik edited comment on CASSANDRA-12796 at 11/8/16 2:28 PM:
-

I'll post the formal patch for 2.2 shortly.


was (Author: mmajercik):
I'll post the formal patches shortly

> Heap exhaustion when rebuilding secondary index over a table with wide 
> partitions
> -
>
> Key: CASSANDRA-12796
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12796
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Milan Majercik
>Priority: Critical
>
> We have a table with a rather wide partition and a secondary index defined 
> over it. As soon as we try to rebuild the index we observed exhaustion of the 
> Java heap and an eventual OOM error. After a lengthy investigation we managed 
> to find the culprit, which appears to be the wrong granularity of barrier 
> issuance in method {{org.apache.cassandra.db.Keyspace.indexRow}}:
> {code}
> try (OpOrder.Group opGroup = cfs.keyspace.writeOrder.start())
> {
>     Set<SecondaryIndex> indexes = cfs.indexManager.getIndexesByNames(idxNames);
>     Iterator<ColumnFamily> pager = QueryPagers.pageRowLocally(cfs, key.getKey(), DEFAULT_PAGE_SIZE);
>     while (pager.hasNext())
>     {
>         ColumnFamily cf = pager.next();
>         ColumnFamily cf2 = cf.cloneMeShallow();
>         for (Cell cell : cf)
>         {
>             if (cfs.indexManager.indexes(cell.name(), indexes))
>                 cf2.addColumn(cell);
>         }
>         cfs.indexManager.indexRow(key.getKey(), cf2, opGroup);
>     }
> }
> {code}
> Please note that the operation group's granule is a whole partition of the 
> source table, which poses a problem for wide-partition tables: the flush 
> runnable ({{org.apache.cassandra.db.ColumnFamilyStore.Flush.run()}}) won't 
> proceed with flushing the secondary index memtable before all operations 
> started prior to the most recent issue of the barrier have completed. In our 
> situation the flush runnable waits until the whole wide partition gets 
> indexed into the secondary index memtable before flushing it. This causes 
> exhaustion of the heap and an eventual OOM error.
> After we changed the granule of barrier issuance in method 
> {{org.apache.cassandra.db.Keyspace.indexRow}} from a table partition to a 
> query page (see 
> [https://github.com/mmajercik/cassandra/commit/7e10e5aa97f1de483c2a5faf867315ecbf65f3d6?diff=unified]),
> the secondary index rebuild started to work without heap exhaustion.
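
The change the reporter describes amounts to taking the {{OpOrder.Group}} per page instead of once around the whole partition. A simplified sketch of that shape (context types as in the snippet above; not the exact patch):

{code}
Set<SecondaryIndex> indexes = cfs.indexManager.getIndexesByNames(idxNames);
Iterator<ColumnFamily> pager = QueryPagers.pageRowLocally(cfs, key.getKey(), DEFAULT_PAGE_SIZE);
while (pager.hasNext())
{
    ColumnFamily cf = pager.next();
    ColumnFamily cf2 = cf.cloneMeShallow();
    for (Cell cell : cf)
    {
        if (cfs.indexManager.indexes(cell.name(), indexes))
            cf2.addColumn(cell);
    }
    // One barrier group per page: the flush runnable only has to wait for the
    // current page, so the secondary index memtable can be flushed while a
    // wide partition is still being indexed.
    try (OpOrder.Group opGroup = cfs.keyspace.writeOrder.start())
    {
        cfs.indexManager.indexRow(key.getKey(), cf2, opGroup);
    }
}
{code}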



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-12885) BatchStatement::verifyBatchSize is only called for batch without conditions

2016-11-08 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer resolved CASSANDRA-12885.

   Resolution: Invalid
Reproduced In: 3.9, 3.0.9, 2.2.8  (was: 2.2.8, 3.0.9, 3.9)

> BatchStatement::verifyBatchSize is only called for batch without conditions
> ---
>
> Key: CASSANDRA-12885
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12885
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Benjamin Lerer
>
> While looking at the code, I noticed that {{BatchStatement::verifyBatchSize}} 
> is only called for batches without conditions. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

