[jira] [Commented] (CASSANDRA-13884) sstableloader option to accept target keyspace name

2018-05-01 Thread Jaydeepkumar Chovatia (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16460557#comment-16460557
 ] 

Jaydeepkumar Chovatia commented on CASSANDRA-13884:
---

Thanks [~jay.zhuang] for confirming this. I've filed a separate Jira, 
https://issues.apache.org/jira/browse/CASSANDRA-14434, to track this failure and 
will investigate it tomorrow (PST).

Jaydeep

> sstableloader option to accept target keyspace name
> ---
>
> Key: CASSANDRA-13884
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13884
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Jaydeepkumar Chovatia
>Assignee: Jaydeepkumar Chovatia
>Priority: Minor
> Fix For: 4.0
>
>
> Often, as part of a backup, people store the entire {{data}} directory. When 
> they see some corruption in the data they would like to restore it into the 
> same cluster (for large clusters, e.g. 200 nodes) but under a different 
> keyspace name. 
> Currently {{sstableloader}} uses the parent folder name as the {{keyspace}}; it 
> would be nice to have an option to specify the target keyspace name as part of 
> {{sstableloader}}.






[jira] [Created] (CASSANDRA-14434) ant eclipse-warnings failing on trunk

2018-05-01 Thread Jaydeepkumar Chovatia (JIRA)
Jaydeepkumar Chovatia created CASSANDRA-14434:
-

 Summary: ant eclipse-warnings failing on trunk
 Key: CASSANDRA-14434
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14434
 Project: Cassandra
  Issue Type: Bug
  Components: Build
Reporter: Jaydeepkumar Chovatia
Assignee: Jaydeepkumar Chovatia
 Fix For: 4.x


{{ant eclipse-warnings}} has been failing on the last few builds

e.g.

https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-trunk-test-all/569/console

{quote}
eclipse-warnings:
[mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Cassandra-trunk-test-all/build/ecj
 [echo] Running Eclipse Code Analysis.  Output logged to /home/jenkins/jenkins-slave/workspace/Cassandra-trunk-test-all/build/ecj/eclipse_compiler_checks.txt
 [java] ----------
 [java] 1. ERROR in /home/jenkins/jenkins-slave/workspace/Cassandra-trunk-test-all/src/java/org/apache/cassandra/net/MessageIn.java (at line 120)
 [java] builder.put(type, type.serializer.deserialize(new DataInputBuffer(value), version));
 [java] Potential resource leak: '<unassigned Closeable value>' may not be closed
 [java] ----------
 [java] 2. ERROR in /home/jenkins/jenkins-slave/workspace/Cassandra-trunk-test-all/src/java/org/apache/cassandra/net/async/MessageInHandler.java (at line 216)
 [java] parameters.put(parameterType, parameterType.serializer.deserialize(new DataInputBuffer(value), messagingVersion));
 [java] Potential resource leak: '<unassigned Closeable value>' may not be closed
 [java] ----------
 [java] 3. ERROR in /home/jenkins/jenkins-slave/workspace/Cassandra-trunk-test-all/src/java/org/apache/cassandra/db/CassandraKeyspaceWriteHandler.java (at line 42)
 [java] group = Keyspace.writeOrder.start();
 [java] Resource 'group' should be managed by try-with-resource
 [java] ----------
 [java] 4. ERROR in /home/jenkins/jenkins-slave/workspace/Cassandra-trunk-test-all/src/java/org/apache/cassandra/db/CassandraKeyspaceWriteHandler.java (at line 68)
 [java] group = Keyspace.writeOrder.start();
 [java] Resource 'group' should be managed by try-with-resource
 [java] ----------
 [java] 5. ERROR in /home/jenkins/jenkins-slave/workspace/Cassandra-trunk-test-all/src/java/org/apache/cassandra/db/streaming/CassandraStreamReader.java (at line 159)
 [java] return writer;
 [java] Potential resource leak: 'txn' may not be closed at this location
 [java] ----------
 [java] 6. ERROR in /home/jenkins/jenkins-slave/workspace/Cassandra-trunk-test-all/src/java/org/apache/cassandra/db/streaming/CassandraStreamReceiver.java (at line 106)
 [java] SSTableMultiWriter sstable = file.getSSTable();
 [java] Potential resource leak: 'sstable' may not be closed
 [java] ----------
 [java] 6 problems (6 errors)
{quote}
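
For context, the "Resource 'group' should be managed by try-with-resource" errors above are about binding a closeable's lifetime to a block. Below is a minimal, generic sketch of that pattern; the {{Group}} class here is a hypothetical stand-in, not the actual class returned by {{Keyspace.writeOrder.start()}}.

{code}
import java.io.Closeable;

public class TryWithResourceSketch
{
    // Hypothetical stand-in for a closeable scope; not the real Cassandra class.
    static class Group implements Closeable
    {
        @Override
        public void close() { /* release the scope */ }
    }

    static Group start() { return new Group(); }

    static void write()
    {
        // Binding the resource to a try-with-resources block guarantees close()
        // on every path, which is exactly what the ECJ check asks for.
        try (Group group = start())
        {
            // ... perform the write while the group is open ...
        }
    }
}
{code}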






[jira] [Commented] (CASSANDRA-13884) sstableloader option to accept target keyspace name

2018-05-01 Thread Jaydeepkumar Chovatia (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16460549#comment-16460549
 ] 

Jaydeepkumar Chovatia commented on CASSANDRA-13884:
---

[~michaelsembwever] I've just tried running {{ant test-all}} locally after 
reverting this fix and I see the same problem. I am trying to see the state of 
{{test-all}} before this fix on 
[519|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-trunk-test-all/519/], 
[518|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-trunk-test-all/518/], 
[517|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-trunk-test-all/517/] 
but it seems these links do not exist anymore. I don't have permission to fire 
the build; is it possible for you to fire a build without this fix and share the 
link with me?

We can easily fix these warnings by adding {{@SuppressWarnings("resource")}}, but 
before we do that let me find the exact root cause of this problem. So far my 
investigation has not found a correlation with this fix; I am investigating 
further.
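
For reference, here is a minimal, self-contained sketch of the two usual ways to satisfy ECJ's resource check, using plain JDK streams rather than Cassandra's {{DataInputBuffer}}; the names below are illustrative only.

{code}
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

public class ResourceWarningSketch
{
    // Option 1: try-with-resources — ECJ can prove the stream is always closed.
    static int readInt(byte[] bytes) throws IOException
    {
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes)))
        {
            return in.readInt();
        }
    }

    // Option 2: suppress the warning when the closeable is deliberately handed
    // off to a callee that owns closing it, so closing here would be incorrect.
    @SuppressWarnings("resource")
    static DataInputStream openForCallee(byte[] bytes)
    {
        return new DataInputStream(new ByteArrayInputStream(bytes)); // caller must close
    }
}
{code}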

> sstableloader option to accept target keyspace name
> ---
>
> Key: CASSANDRA-13884
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13884
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Jaydeepkumar Chovatia
>Assignee: Jaydeepkumar Chovatia
>Priority: Minor
> Fix For: 4.0
>
>
> Often, as part of a backup, people store the entire {{data}} directory. When 
> they see some corruption in the data they would like to restore it into the 
> same cluster (for large clusters, e.g. 200 nodes) but under a different 
> keyspace name. 
> Currently {{sstableloader}} uses the parent folder name as the {{keyspace}}; it 
> would be nice to have an option to specify the target keyspace name as part of 
> {{sstableloader}}.






[jira] [Commented] (CASSANDRA-14335) C* nodetool should report the lowest of the highest CQL protocol version supported by all clients connecting to it

2018-05-01 Thread Dinesh Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16460545#comment-16460545
 ] 

Dinesh Joshi commented on CASSANDRA-14335:
--

I have fixed some minor issues that I spotted in the patch. I am not 100% 
certain that this class needs to be thread-safe. Thoughts?
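
As a reference point for the thread-safety question, here is a minimal sketch of such a tracker (hypothetical names and shape, not the actual patch) where {{ConcurrentHashMap}} alone makes concurrent updates from connection threads safe without extra locking.

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ClientVersionTracker
{
    // Highest protocol version observed per client address (hypothetical shape).
    private final ConcurrentMap<String, Integer> highestPerClient = new ConcurrentHashMap<>();

    // Called from the connection handling threads; merge() is atomic per key.
    public void record(String clientAddress, int protocolVersion)
    {
        highestPerClient.merge(clientAddress, protocolVersion, Math::max);
    }

    // Lowest of the per-client highest versions, i.e. the weakest client still seen.
    public int lowestOfHighest()
    {
        return highestPerClient.values().stream()
                               .mapToInt(Integer::intValue)
                               .min()
                               .orElse(-1); // -1: no clients seen yet
    }
}
{code}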

> C* nodetool should report the lowest of the highest CQL protocol version 
> supported by all clients connecting to it
> --
>
> Key: CASSANDRA-14335
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14335
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Major
>
> While planning a C* upgrade, it is hard to tell whether any client will be 
> affected once C* is upgraded. C* should internally store the highest protocol 
> version of each client connecting to it. The lowest of those versions will 
> help determine whether any client will be adversely affected by the upgrade.






[jira] [Commented] (CASSANDRA-14335) C* nodetool should report the lowest of the highest CQL protocol version supported by all clients connecting to it

2018-05-01 Thread Dinesh Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16460542#comment-16460542
 ] 

Dinesh Joshi commented on CASSANDRA-14335:
--

Thank you [~jasobrown]. [~zznate] just so you have more background, I have a 
paid CircleCI account that supports higher quotas. In order to use it, I need 
to update the settings. So my patches are typically two commits. You can cherry 
pick just the patch and ignore the CircleCI commit. If there is a better way to 
do this, I'm happy to discuss it :)

> C* nodetool should report the lowest of the highest CQL protocol version 
> supported by all clients connecting to it
> --
>
> Key: CASSANDRA-14335
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14335
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Major
>
> While planning a C* upgrade, it is hard to tell whether any client will be 
> affected once C* is upgraded. C* should internally store the highest protocol 
> version of each client connecting to it. The lowest of those versions will 
> help determine whether any client will be adversely affected by the upgrade.






[jira] [Commented] (CASSANDRA-13884) sstableloader option to accept target keyspace name

2018-05-01 Thread Jay Zhuang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16460540#comment-16460540
 ] 

Jay Zhuang commented on CASSANDRA-13884:


Hi [~michaelsembwever], it seems this was not introduced by this patch. I tried 
reverting the change and rolling back to the commit before the change; {{$ ant 
eclipse-warnings}} still fails either way. We will take a look tomorrow (PDT).

> sstableloader option to accept target keyspace name
> ---
>
> Key: CASSANDRA-13884
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13884
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Jaydeepkumar Chovatia
>Assignee: Jaydeepkumar Chovatia
>Priority: Minor
> Fix For: 4.0
>
>
> Often, as part of a backup, people store the entire {{data}} directory. When 
> they see some corruption in the data they would like to restore it into the 
> same cluster (for large clusters, e.g. 200 nodes) but under a different 
> keyspace name. 
> Currently {{sstableloader}} uses the parent folder name as the {{keyspace}}; it 
> would be nice to have an option to specify the target keyspace name as part of 
> {{sstableloader}}.






[jira] [Commented] (CASSANDRA-12244) progress in compactionstats is reported wrongly for view builds

2018-05-01 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16460524#comment-16460524
 ] 

mck commented on CASSANDRA-12244:
-

Patch looks good. Note that in trunk it was fixed in a different manner, but the 
clash with the human-readable flag was still there, so I kept the introduction 
of the {{Unit}} enum.

I've put your patch into the relevant branches, and will commit once they go green.
In the meantime [~jasonstack], could you please check that I've applied your patch 
appropriately to each branch and commit?

|| Branch || uTest || dTest ||
|[cassandra-3.0_12244|https://github.com/thelastpickle/cassandra/tree/mck/cassandra-3.0_12244]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.0_12244.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.0_12244]|
 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/540/
 |
|[cassandra-3.11_12244|https://github.com/thelastpickle/cassandra/tree/mck/cassandra-3.11_12244]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.11_12244.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.11_12244]|
 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/541/
 |
|[trunk_12244|https://github.com/thelastpickle/cassandra/tree/mck/trunk_12244]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Ftrunk_12244.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Ftrunk_12244]|
 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/541/
 |

> progress in compactionstats is reported wrongly for view builds
> ---
>
> Key: CASSANDRA-12244
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12244
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Tom van der Woerdt
>Assignee: ZhaoYang
>Priority: Minor
>  Labels: lhf
> Fix For: 3.0.x
>
>
> In the view build progress given by compactionstats, there are several issues:
> {code}
> id                                     compaction type   keyspace     table     completed   total       unit     progress
> 038d3690-4dbe-11e6-b207-21ec388d48e6   View build        mykeyspace   mytable   844 bytes   967 bytes   ranges   87.28%
> Active compaction remaining time :   n/a
> {code}
> 1) Those are ranges, not bytes.
> 2) It's not at 87.28%, it's at ~4%. The method for calculating progress in 
> Cassandra is wrong: it neglects to sort the tokens it's iterating through 
> (ViewBuilder.java) and thus ends up with a random number.
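
To make the second point concrete, here is a small, self-contained illustration (hypothetical token values, not ViewBuilder's actual code) of why estimating progress from an unsorted token list yields an essentially random percentage:

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class ViewBuildProgressDemo
{
    // Progress estimated as the current token's position within the token list.
    static double progress(List<Long> tokens, long current)
    {
        int i = tokens.indexOf(current);
        return 100.0 * (i + 1) / tokens.size();
    }

    public static void main(String[] args)
    {
        // Hypothetical ring tokens in the (unsorted) order they are iterated.
        List<Long> unsorted = Arrays.asList(870L, 40L, 520L, 130L, 990L, 310L, 760L, 220L);
        List<Long> sorted = new ArrayList<>(unsorted);
        Collections.sort(sorted);

        long current = 130L; // only a small slice of the ring has been built so far
        System.out.printf("unsorted: %.2f%%%n", progress(unsorted, current)); // 50.00% (bogus)
        System.out.printf("sorted:   %.2f%%%n", progress(sorted, current));   // 25.00%
    }
}
{code}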






[jira] [Updated] (CASSANDRA-12244) progress in compactionstats is reported wrongly for view builds

2018-05-01 Thread mck (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-12244:

Reviewer: mck

> progress in compactionstats is reported wrongly for view builds
> ---
>
> Key: CASSANDRA-12244
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12244
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Tom van der Woerdt
>Assignee: ZhaoYang
>Priority: Minor
>  Labels: lhf
> Fix For: 3.0.x
>
>
> In the view build progress given by compactionstats, there are several issues:
> {code}
> id                                     compaction type   keyspace     table     completed   total       unit     progress
> 038d3690-4dbe-11e6-b207-21ec388d48e6   View build        mykeyspace   mytable   844 bytes   967 bytes   ranges   87.28%
> Active compaction remaining time :   n/a
> {code}
> 1) Those are ranges, not bytes.
> 2) It's not at 87.28%, it's at ~4%. The method for calculating progress in 
> Cassandra is wrong: it neglects to sort the tokens it's iterating through 
> (ViewBuilder.java) and thus ends up with a random number.






[jira] [Commented] (CASSANDRA-13884) sstableloader option to accept target keyspace name

2018-05-01 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16460487#comment-16460487
 ] 

mck commented on CASSANDRA-13884:
-

Hi [~jay.zhuang] and [~chovatia.jayd...@gmail.com], it looks like this broke 
{{eclipse-warnings}}, as the Jenkins test-all job has been broken since: 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-trunk-test-all/520/



> sstableloader option to accept target keyspace name
> ---
>
> Key: CASSANDRA-13884
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13884
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Jaydeepkumar Chovatia
>Assignee: Jaydeepkumar Chovatia
>Priority: Minor
> Fix For: 4.0
>
>
> Often, as part of a backup, people store the entire {{data}} directory. When 
> they see some corruption in the data they would like to restore it into the 
> same cluster (for large clusters, e.g. 200 nodes) but under a different 
> keyspace name. 
> Currently {{sstableloader}} uses the parent folder name as the {{keyspace}}; it 
> would be nice to have an option to specify the target keyspace name as part of 
> {{sstableloader}}.






[jira] [Comment Edited] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2018-05-01 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459637#comment-16459637
 ] 

mck edited comment on CASSANDRA-10751 at 5/2/18 2:35 AM:
-

Sorry [~cscetbon] that this got completely forgotten. The patch makes sense.

I've put your patch into relevant branches, and will commit once they go green.

|| Branch || uTest || dTest ||
|[cassandra-2.2_10751|https://github.com/thelastpickle/cassandra/tree/mck/cassandra-2.2_10751]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-2.2_10751.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-2.2_10751]|
 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/534/
 |
|[cassandra-3.0_10751|https://github.com/thelastpickle/cassandra/tree/mck/cassandra-3.0_10751]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.0_10751.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.0_10751]|
 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/539/
 |
|[cassandra-3.11_10751|https://github.com/thelastpickle/cassandra/tree/mck/cassandra-3.11_10751]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.11_10751.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.11_10751]|
 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/538/
 |
|[trunk_10751|https://github.com/thelastpickle/cassandra/tree/mck/trunk_10751]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Ftrunk_10751.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Ftrunk_10751]|
 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/537/
 |


was (Author: michaelsembwever):
Sorry [~cscetbon] that this got completely forgotten. The patch makes sense.

I've put your patch into relevant branches, and will commit once they go green.

|| Branch || uTest || dTest ||
|[cassandra-2.2_10751|https://github.com/thelastpickle/cassandra/tree/mck/cassandra-2.2_10751]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-2.2_10751.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-2.2_10751]|
 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/534/
 |
|[cassandra-3.0_10751|https://github.com/thelastpickle/cassandra/tree/mck/cassandra-3.0_10751]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.0_10751.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.0_10751]|
 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/538/
 |
|[cassandra-3.11_10751|https://github.com/thelastpickle/cassandra/tree/mck/cassandra-3.11_10751]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.11_10751.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.11_10751]|
 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/536/
 |
|[trunk_10751|https://github.com/thelastpickle/cassandra/tree/mck/trunk_10751]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Ftrunk_10751.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Ftrunk_10751]|
 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/537/
 |

> "Pool is shutdown" error when running Hadoop jobs on Yarn
> -
>
> Key: CASSANDRA-10751
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10751
> Project: Cassandra
>  Issue Type: Bug
> Environment: Hadoop 2.7.1 (HDP 2.3.2)
> Cassandra 2.1.11
>Reporter: Cyril Scetbon
>Assignee: Cyril Scetbon
>Priority: Major
> Attachments: CASSANDRA-10751-2.2.patch, CASSANDRA-10751-3.0.patch, 
> output.log
>
>
> Trying to execute a Hadoop job on Yarn, I get errors from Cassandra's 
> internal code. It seems that connections are shut down but we can't understand 
> why ...
> Here is an extract of the errors. I have also attached a file with the complete 
> debug logs.
> {code}
> 15/11/22 20:05:54 [main]: DEBUG core.RequestHandler: Error querying 
> node006.internal.net/192.168.12.22:9042, trying next host (error is: 
> com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown)
> Failed with exception java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> 15/11/22 20:05:54 [main]: ERROR CliDriver: Failed with 

[jira] [Updated] (CASSANDRA-14427) Bump jackson version to >= 2.9.5

2018-05-01 Thread Kurt Greaves (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Greaves updated CASSANDRA-14427:
-
Status: Patch Available  (was: Open)

> Bump jackson version to >= 2.9.5
> 
>
> Key: CASSANDRA-14427
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14427
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Lerh Chuan Low
>Assignee: Lerh Chuan Low
>Priority: Major
> Attachments: 2.1-14427.txt, 2.2-14427.txt, 3.0-14427.txt, 
> 3.X-14427.txt, trunk-14427.txt
>
>
> The Jackson version used by Cassandra is really old (1.9.2, which still 
> references codehaus (Jackson 1) instead of fasterxml (Jackson 2)). 
> There have been a few Jackson vulnerabilities recently (mostly around 
> deserialization, which allows arbitrary code execution):
> [https://nvd.nist.gov/vuln/detail/CVE-2017-7525]
>  [https://nvd.nist.gov/vuln/detail/CVE-2017-15095]
>  [https://nvd.nist.gov/vuln/detail/CVE-2018-1327]
>  [https://nvd.nist.gov/vuln/detail/CVE-2018-7489]
> Given that the Jackson in Cassandra is really old and also seems to be used for 
> reading in values, it looks worthwhile to update Jackson to 2.9.5.
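
For illustration, a minimal Jackson 2 (fasterxml) read looks like the sketch below; the {{Point}} class is hypothetical and used only for this example. The listed CVEs revolve around polymorphic/default typing, which stays disabled here.

{code}
import com.fasterxml.jackson.databind.ObjectMapper;

public class Jackson2ReadSketch
{
    // Hypothetical value class, used only for this example.
    public static class Point
    {
        public int x;
        public int y;
    }

    public static void main(String[] args) throws Exception
    {
        // Plain data binding; no default typing is enabled, so the JSON cannot
        // name arbitrary classes to instantiate (the vector behind the CVEs above).
        ObjectMapper mapper = new ObjectMapper();
        Point p = mapper.readValue("{\"x\":1,\"y\":2}", Point.class);
        System.out.println(p.x + "," + p.y);
    }
}
{code}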






[jira] [Updated] (CASSANDRA-14431) Replace deprecated junit.framework.Assert usages with org.junit.Assert

2018-05-01 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-14431:

   Resolution: Fixed
Fix Version/s: 4.0
   Status: Resolved  (was: Patch Available)

{{SASIIndexTest}} needed to be fixed, as well, as it asserted on doubles. 
Committed as {{01439740bc26804b10c4cf6f6061925175598241}}, thanks, 
[~iuri_sitinschi]!
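
For context on the doubles issue: {{junit.framework.Assert}} accepted {{assertEquals(double, double)}}, while {{org.junit.Assert}} requires an explicit tolerance. A minimal illustration of the kind of change (not the actual {{SASIIndexTest}} code):

{code}
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class DoubleAssertExample
{
    @Test
    public void assertOnDoubles()
    {
        double expected = 0.3;
        double actual = 0.1 + 0.2;

        // With org.junit.Assert a delta is required for floating-point comparisons;
        // the deprecated two-argument double overload fails unconditionally.
        assertEquals(expected, actual, 1e-9);
    }
}
{code}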

> Replace deprecated junit.framework.Assert usages with org.junit.Assert
> --
>
> Key: CASSANDRA-14431
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14431
> Project: Cassandra
>  Issue Type: Test
>Reporter: Iuri Sitinschi
>Assignee: Iuri Sitinschi
>Priority: Minor
> Fix For: 4.0
>
> Attachments: 14431-trunk.txt
>
>
> I found a lot of tests which are still using the old, deprecated JUnit class 
> *junit.framework.Assert*. I suggest replacing it with the recommended 
> *org.junit.Assert*. I can prepare a patch as soon as I receive a green light.






cassandra git commit: Replace deprecated junit.framework.Assert usages with org.junit.Assert

2018-05-01 Thread jasobrown
Repository: cassandra
Updated Branches:
  refs/heads/trunk e9418f808 -> 01439740b


Replace deprecated junit.framework.Assert usages with org.junit.Assert

patch by Iuri Sitinschi; reviewed by jasobrown for CASSANDRA-14431


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/01439740
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/01439740
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/01439740

Branch: refs/heads/trunk
Commit: 01439740bc26804b10c4cf6f6061925175598241
Parents: e9418f8
Author: Iuri Sitinschi 
Authored: Tue May 1 23:05:09 2018 +0200
Committer: Jason Brown 
Committed: Tue May 1 19:20:35 2018 -0700

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/cql3/CachingBench.java |  2 +-
 .../cassandra/cql3/GcCompactionBench.java   |  2 +-
 .../NoReplicationTokenAllocatorTest.java|  2 +-
 ...buggableScheduledThreadPoolExecutorTest.java |  2 +-
 .../cassandra/cql3/ColumnIdentifierTest.java|  2 +-
 .../cassandra/cql3/PstmtPersistenceTest.java|  2 +-
 .../cassandra/cql3/ReservedKeywordsTest.java|  2 +-
 .../cassandra/cql3/SerializationMirrorTest.java |  2 +-
 .../cassandra/cql3/ViewFilteringTest.java   |  2 +-
 .../apache/cassandra/cql3/ViewSchemaTest.java   |  2 +-
 .../org/apache/cassandra/cql3/ViewTest.java |  2 +-
 .../cql3/validation/entities/TimestampTest.java |  2 +-
 .../miscellaneous/CrcCheckChanceTest.java   | 42 ++--
 test/unit/org/apache/cassandra/db/CellTest.java |  2 +-
 .../org/apache/cassandra/db/ColumnsTest.java|  2 +-
 .../apache/cassandra/db/TransformerTest.java|  2 +-
 .../db/commitlog/CommitLogUpgradeTest.java  |  2 +-
 .../AbstractCompactionStrategyTest.java |  2 +-
 .../LeveledCompactionStrategyTest.java  |  2 +-
 .../cassandra/db/filter/ColumnFilterTest.java   |  2 +-
 .../cassandra/db/lifecycle/HelpersTest.java |  2 +-
 .../db/lifecycle/LifecycleTransactionTest.java  |  2 +-
 .../db/lifecycle/LogTransactionTest.java|  2 +-
 .../db/lifecycle/RealTransactionsTest.java  |  2 +-
 .../cassandra/db/lifecycle/TrackerTest.java |  2 +-
 .../apache/cassandra/db/lifecycle/ViewTest.java |  2 +-
 .../cassandra/db/marshal/TimeUUIDTypeTest.java  |  2 +-
 .../cassandra/db/marshal/UUIDTypeTest.java  |  2 +-
 .../db/partition/PartitionUpdateTest.java   |  2 +-
 .../db/rows/UnfilteredRowIteratorsTest.java |  2 +-
 .../apache/cassandra/db/view/ViewUtilsTest.java |  2 +-
 .../cassandra/index/sasi/SASICQLTest.java   |  2 +-
 .../cassandra/index/sasi/SASIIndexTest.java | 18 -
 .../index/sasi/disk/OnDiskIndexTest.java|  2 +-
 .../index/sasi/disk/TokenTreeTest.java  |  2 +-
 .../CompressedSequentialWriterTest.java |  2 +-
 .../io/sstable/BigTableWriterTest.java  |  2 +-
 .../format/SSTableFlushObserverTest.java|  2 +-
 .../util/ChecksummedSequentialWriterTest.java   |  2 +-
 .../apache/cassandra/io/util/MemoryTest.java|  2 +-
 .../cassandra/io/util/SequentialWriterTest.java |  2 +-
 .../cassandra/net/WriteCallbackInfoTest.java|  2 +-
 .../streaming/StreamTransferTaskTest.java   |  2 +-
 .../streaming/StreamingTransferTest.java|  2 +-
 .../org/apache/cassandra/utils/BTreeTest.java   |  2 +-
 .../apache/cassandra/utils/TopKSamplerTest.java |  2 +-
 .../concurrent/AbstractTransactionalTest.java   |  2 +-
 .../utils/concurrent/RefCountedTest.java|  2 +-
 .../utils/memory/NativeAllocatorTest.java   |  2 +-
 .../cassandra/utils/vint/VIntCodingTest.java|  2 +-
 51 files changed, 79 insertions(+), 78 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/01439740/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 2545e83..27e69b8 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Replace deprecated junit.framework.Assert usages with org.junit.Assert 
(CASSANDRA-14431)
  * cassandra-stress throws NPE if insert section isn't specified in user 
profile (CASSSANDRA-14426)
  * Improve LatencyMetrics performance by reducing write path processing 
(CASSANDRA-14281)
  * Add network authz (CASSANDRA-13985)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/01439740/test/long/org/apache/cassandra/cql3/CachingBench.java
--
diff --git a/test/long/org/apache/cassandra/cql3/CachingBench.java 
b/test/long/org/apache/cassandra/cql3/CachingBench.java
index a0e4226..0a6657f 100644
--- a/test/long/org/apache/cassandra/cql3/CachingBench.java
+++ b/test/long/org/apache/cassandra/cql3/CachingBench.java
@@ -33,7 +33,7 @@ import org.junit.Before;
 

[jira] [Commented] (CASSANDRA-14335) C* nodetool should report the lowest of the highest CQL protocol version supported by all clients connecting to it

2018-05-01 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16460397#comment-16460397
 ] 

Jason Brown commented on CASSANDRA-14335:
-

[~zznate] the CircleCI changes are just for running the utests/dtests, and are not 
part of the real commit. We make sure not to commit those.

> C* nodetool should report the lowest of the highest CQL protocol version 
> supported by all clients connecting to it
> --
>
> Key: CASSANDRA-14335
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14335
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Major
>
> While planning a C* upgrade, it is hard to tell whether any client will be 
> affected once C* is upgraded. C* should internally store the highest protocol 
> version of each client connecting to it. The lowest of those versions will 
> help determine whether any client will be adversely affected by the upgrade.






[jira] [Commented] (CASSANDRA-14432) Docs container leaves build artifacts behind

2018-05-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16460396#comment-16460396
 ] 

ASF GitHub Bot commented on CASSANDRA-14432:


Github user joaquincasares commented on the issue:

https://github.com/apache/cassandra/pull/222
  
Thanks for cleaning those items up.

+1 from me. Committers, do please merge.


> Docs container leaves build artifacts behind
> 
>
> Key: CASSANDRA-14432
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14432
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: James Lamb
>Priority: Trivial
>
> Hello!
> I was looking at [the 
> repo|https://github.com/apache/cassandra/blob/db81f6bffef1a8215fec28bb0522dc9684870627/doc/Dockerfile]
>  tonight and tried to build the documentation locally. While doing this, I 
> noticed that the container here does not clean up intermediate build 
> artifacts.
> Will you please consider a PR to address that? When I ran locally, removing 
> build artifacts reduced the size of the container by 14 MB. I will post a 
> link to the PR shortly.
>  
> Thank you very much,
>  
> -James
>  






[jira] [Commented] (CASSANDRA-14432) Docs container leaves build artifacts behind

2018-05-01 Thread Joaquin Casares (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16460395#comment-16460395
 ] 

Joaquin Casares commented on CASSANDRA-14432:
-

That looks great! Thanks for the PR!

Committers, that was originally my code and the fix is ideal. Thanks again!

> Docs container leaves build artifacts behind
> 
>
> Key: CASSANDRA-14432
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14432
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: James Lamb
>Priority: Trivial
>
> Hello!
> I was looking at [the 
> repo|https://github.com/apache/cassandra/blob/db81f6bffef1a8215fec28bb0522dc9684870627/doc/Dockerfile]
>  tonight and tried to build the documentation locally. While doing this, I 
> noticed that the container here does not clean up intermediate build 
> artifacts.
> Will you please consider a PR to address that? When I ran locally, removing 
> build artifacts reduced the size of the container by 14 MB. I will post a 
> link to the PR shortly.
>  
> Thank you very much,
>  
> -James
>  






[jira] [Updated] (CASSANDRA-14431) Replace deprecated junit.framework.Assert usages with org.junit.Assert

2018-05-01 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-14431:

Reviewer: Jason Brown

> Replace deprecated junit.framework.Assert usages with org.junit.Assert
> --
>
> Key: CASSANDRA-14431
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14431
> Project: Cassandra
>  Issue Type: Test
>Reporter: Iuri Sitinschi
>Assignee: Iuri Sitinschi
>Priority: Minor
> Attachments: 14431-trunk.txt
>
>
> I found a lot of tests which are still using the old, deprecated JUnit class 
> *junit.framework.Assert*. I suggest replacing it with the recommended 
> *org.junit.Assert*. I can prepare a patch as soon as I receive a green light.






[jira] [Commented] (CASSANDRA-14431) Replace deprecated junit.framework.Assert usages with org.junit.Assert

2018-05-01 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16460382#comment-16460382
 ] 

Jason Brown commented on CASSANDRA-14431:
-

[~iuri_sitinschi] FWIW, we have a guide for new contributors 
[here|http://cassandra.apache.org/doc/latest/development/patches.html#creating-a-patch].
 I can review the attached patch without a problem, though - assuming it 
applies cleanly ;)

> Replace deprecated junit.framework.Assert usages with org.junit.Assert
> --
>
> Key: CASSANDRA-14431
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14431
> Project: Cassandra
>  Issue Type: Test
>Reporter: Iuri Sitinschi
>Assignee: Iuri Sitinschi
>Priority: Minor
> Attachments: 14431-trunk.txt
>
>
> I found a lot of tests which are still using the old, deprecated JUnit class 
> *junit.framework.Assert*. I suggest replacing it with the recommended 
> *org.junit.Assert*. I can prepare a patch as soon as I receive a green light.






[jira] [Assigned] (CASSANDRA-14431) Replace deprecated junit.framework.Assert usages with org.junit.Assert

2018-05-01 Thread Nate McCall (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nate McCall reassigned CASSANDRA-14431:
---

Assignee: Iuri Sitinschi

> Replace deprecated junit.framework.Assert usages with org.junit.Assert
> --
>
> Key: CASSANDRA-14431
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14431
> Project: Cassandra
>  Issue Type: Test
>Reporter: Iuri Sitinschi
>Assignee: Iuri Sitinschi
>Priority: Minor
> Attachments: 14431-trunk.txt
>
>
> I found a lot of tests which are still using the old, deprecated JUnit class 
> *junit.framework.Assert*. I suggest replacing it with the recommended 
> *org.junit.Assert*. I can prepare a patch as soon as I receive a green light.






[jira] [Commented] (CASSANDRA-14335) C* nodetool should report the lowest of the highest CQL protocol version supported by all clients connecting to it

2018-05-01 Thread Nate McCall (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16460378#comment-16460378
 ] 

Nate McCall commented on CASSANDRA-14335:
-

[~djoshi3] You have some config changes to CircleCI as part of this branch - 
was that intentional? Can we put those in a separate ticket if so?

> C* nodetool should report the lowest of the highest CQL protocol version 
> supported by all clients connecting to it
> --
>
> Key: CASSANDRA-14335
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14335
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Major
>
> While planning a C* upgrade, it is hard to tell whether any client will be 
> affected once C* is upgraded. C* should internally store the highest protocol 
> version of each client connecting to it. The lowest of those versions will 
> help determine whether any client will be adversely affected by the upgrade.






[jira] [Updated] (CASSANDRA-12743) Assertion error while running compaction

2018-05-01 Thread Nate McCall (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nate McCall updated CASSANDRA-12743:

Fix Version/s: 3.11.3
   3.0.17
   2.2.13

> Assertion error while running compaction 
> -
>
> Key: CASSANDRA-12743
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12743
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: unix
>Reporter: Jean-Baptiste Le Duigou
>Assignee: Jay Zhuang
>Priority: Major
> Fix For: 4.0, 2.2.13, 3.0.17, 3.11.3
>
>
> While running compaction I sometimes run into an error:
> {noformat}
> nodetool compact
> error: null
> -- StackTrace --
> java.lang.AssertionError
> at 
> org.apache.cassandra.io.compress.CompressionMetadata$Chunk.<init>(CompressionMetadata.java:463)
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:228)
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile.createMappedSegments(CompressedSegmentedFile.java:80)
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile.<init>(CompressedPoolingSegmentedFile.java:38)
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:101)
> at 
> org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:198)
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.openEarly(BigTableWriter.java:315)
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.maybeReopenEarly(SSTableRewriter.java:171)
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:116)
> at 
> org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.append(DefaultCompactionWriter.java:64)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:184)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$8.runMayThrow(CompactionManager.java:599)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Why is that happening?
> Is there any way to provide more details (e.g. which SSTable cannot be 
> compacted)?
> We are using Cassandra 2.2.7
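
A side note on the {{error: null}} output: a bare {{assert}} with no detail expression throws an {{AssertionError}} whose message is null, which is what nodetool prints. A minimal illustration (the {{length > 0}} condition is hypothetical, not the actual assertion in {{CompressionMetadata$Chunk}}):

{code}
public class BareAssertDemo
{
    static class Chunk
    {
        Chunk(long offset, int length)
        {
            // Hypothetical condition; a bare assert carries no message, so the
            // resulting AssertionError prints as "null".
            assert length > 0;
        }
    }

    public static void main(String[] args)
    {
        new Chunk(0, 0); // throws java.lang.AssertionError (run with java -ea)
    }
}
{code}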






[jira] [Updated] (CASSANDRA-12743) Assertion error while running compaction

2018-05-01 Thread Nate McCall (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nate McCall updated CASSANDRA-12743:

Fix Version/s: (was: 3.11.x)
   (was: 4.x)
   (was: 3.0.x)
   (was: 2.2.x)
   4.0

> Assertion error while running compaction 
> -
>
> Key: CASSANDRA-12743
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12743
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: unix
>Reporter: Jean-Baptiste Le Duigou
>Assignee: Jay Zhuang
>Priority: Major
> Fix For: 4.0
>
>
> While running compaction I sometimes run into an error:
> {noformat}
> nodetool compact
> error: null
> -- StackTrace --
> java.lang.AssertionError
> at 
> org.apache.cassandra.io.compress.CompressionMetadata$Chunk.<init>(CompressionMetadata.java:463)
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:228)
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile.createMappedSegments(CompressedSegmentedFile.java:80)
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile.<init>(CompressedPoolingSegmentedFile.java:38)
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:101)
> at 
> org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:198)
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.openEarly(BigTableWriter.java:315)
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.maybeReopenEarly(SSTableRewriter.java:171)
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:116)
> at 
> org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.append(DefaultCompactionWriter.java:64)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:184)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$8.runMayThrow(CompactionManager.java:599)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Why is that happening?
> Is there any way to provide more details (e.g. which SSTable cannot be 
> compacted)?
> We are using Cassandra 2.2.7






[jira] [Commented] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2018-05-01 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16460355#comment-16460355
 ] 

Cyril Scetbon commented on CASSANDRA-10751:
---

[~michaelsembwever] I already have it in my Cassandra package, and I'm happy that 
it's now in the mainstream!

> "Pool is shutdown" error when running Hadoop jobs on Yarn
> -
>
> Key: CASSANDRA-10751
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10751
> Project: Cassandra
>  Issue Type: Bug
> Environment: Hadoop 2.7.1 (HDP 2.3.2)
> Cassandra 2.1.11
>Reporter: Cyril Scetbon
>Assignee: Cyril Scetbon
>Priority: Major
> Attachments: CASSANDRA-10751-2.2.patch, CASSANDRA-10751-3.0.patch, 
> output.log
>
>
> Trying to execute a Hadoop job on Yarn, I get errors from Cassandra's 
> internal code. It seems that connections are shut down but we can't understand 
> why ...
> Here is an extract of the errors. I have also attached a file with the complete 
> debug logs.
> {code}
> 15/11/22 20:05:54 [main]: DEBUG core.RequestHandler: Error querying 
> node006.internal.net/192.168.12.22:9042, trying next host (error is: 
> com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown)
> Failed with exception java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> 15/11/22 20:05:54 [main]: ERROR CliDriver: Failed with exception 
> java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> java.io.IOException: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:508)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:415)
>   at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:140)
>   at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1672)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.cassandra.input.cql.HiveCqlInputFormat.getRecordReader(HiveCqlInputFormat.java:132)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit.getRecordReader(FetchOperator.java:674)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:324)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:446)
>   ... 15 more
> Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All 
> host(s) tried for query failed (tried: 
> node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
>   at 
> com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
>   at 
> 

[jira] [Updated] (CASSANDRA-14433) DoS attack through PagingState

2018-05-01 Thread Yang Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yu updated CASSANDRA-14433:

Description: 
According to [this manual 
page|https://docs.datastax.com/en/developer/java-driver/3.5/manual/paging/], 
the paging state can be returned to and received from end users. This means end 
users can inject malicious content into the paging state in order to attack the 
server.

One way is to forge a paging state with a very large partition key size. The 
forged paging state will be passed through the driver and consumed by the 
server and cause OutOfMemoryError:
{noformat}
java.lang.OutOfMemoryError: Java heap space
at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:401) 
~[apache-cassandra-3.11.2.jar:3.11.2]
at 
org.apache.cassandra.utils.ByteBufferUtil.readWithVIntLength(ByteBufferUtil.java:340)
 ~[apache-cassandra-3.11.2.jar:3.11.2]
at 
org.apache.cassandra.service.pager.PagingState.deserialize(PagingState.java:78) 
~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.cql3.QueryOptions$Codec.decode(QueryOptions.java:432) 
~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.cql3.QueryOptions$Codec.decode(QueryOptions.java:366) 
~[apache-cassandra-3.11.2.jar:3.11.2]
at 
org.apache.cassandra.transport.messages.ExecuteMessage$1.decode(ExecuteMessage.java:46)
 ~[apache-cassandra-3.11.2.jar:3.11.2]
at 
org.apache.cassandra.transport.messages.ExecuteMessage$1.decode(ExecuteMessage.java:42)
 ~[apache-cassandra-3.11.2.jar:3.11.2]
at 
org.apache.cassandra.transport.Message$ProtocolDecoder.decode(Message.java:281) 
~[apache-cassandra-3.11.2.jar:3.11.2]
at 
org.apache.cassandra.transport.Message$ProtocolDecoder.decode(Message.java:262) 
~[apache-cassandra-3.11.2.jar:3.11.2]
at 
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:88)
 [netty-all-4.0.44.Final.jar:4.0.44.Final]
 {noformat}

The paging state used to cause the above exception is shown below. The encoded 
partition key size is 2G.
{noformat}
00180010f077359400736f6d654b6579090002633104002a0a006a66e551aa30a3ac47e693ab43bd29a90004
{noformat}

Essentially, this issue is very similar to the "DoS User Specified Object 
Allocation" example in [this OWASP 
page|https://www.owasp.org/index.php/Denial_of_Service]. It is especially 
serious in a multi-tenant environment, as one malicious tenant can affect all 
other tenants.

  was:
According to this manual 
[page|https://docs.datastax.com/en/developer/java-driver/3.5/manual/paging/], 
the paging state can be returned to and received from end users. This means end 
users can inject malicious content into the paging state in order to attack the 
server.

One way is to forge a paging state with a very large partition key size. The 
forged paging state will be passed through the driver and consumed by the 
server and cause OutOfMemoryError:
{noformat}
java.lang.OutOfMemoryError: Java heap space
at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:401) 
~[apache-cassandra-3.11.2.jar:3.11.2]
at 
org.apache.cassandra.utils.ByteBufferUtil.readWithVIntLength(ByteBufferUtil.java:340)
 ~[apache-cassandra-3.11.2.jar:3.11.2]
at 
org.apache.cassandra.service.pager.PagingState.deserialize(PagingState.java:78) 
~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.cql3.QueryOptions$Codec.decode(QueryOptions.java:432) 
~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.cql3.QueryOptions$Codec.decode(QueryOptions.java:366) 
~[apache-cassandra-3.11.2.jar:3.11.2]
at 
org.apache.cassandra.transport.messages.ExecuteMessage$1.decode(ExecuteMessage.java:46)
 ~[apache-cassandra-3.11.2.jar:3.11.2]
at 
org.apache.cassandra.transport.messages.ExecuteMessage$1.decode(ExecuteMessage.java:42)
 ~[apache-cassandra-3.11.2.jar:3.11.2]
at 
org.apache.cassandra.transport.Message$ProtocolDecoder.decode(Message.java:281) 
~[apache-cassandra-3.11.2.jar:3.11.2]
at 
org.apache.cassandra.transport.Message$ProtocolDecoder.decode(Message.java:262) 
~[apache-cassandra-3.11.2.jar:3.11.2]
at 
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:88)
 [netty-all-4.0.44.Final.jar:4.0.44.Final]
 {noformat}

The paging state used to cause the above exception is shown below. The encoded 
partition key size is 2G.
{noformat}
00180010f077359400736f6d654b6579090002633104002a0a006a66e551aa30a3ac47e693ab43bd29a90004
{noformat}

This issue is especially serious in a multi-tenant environment, as one 
malicious tenant can affect all other tenants.


> DoS attack through PagingState
> --
>
> Key: CASSANDRA-14433
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14433
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Yang Yu
>Priority: Major
>
> According to [this manual 
> 

[jira] [Created] (CASSANDRA-14433) DoS attack through PagingState

2018-05-01 Thread Yang Yu (JIRA)
Yang Yu created CASSANDRA-14433:
---

 Summary: DoS attack through PagingState
 Key: CASSANDRA-14433
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14433
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Yang Yu


According to this manual 
[page|https://docs.datastax.com/en/developer/java-driver/3.5/manual/paging/], 
the paging state can be returned to and received from end users. This means end 
users can inject malicious content into the paging state in order to attack the 
server.

One way is to forge a paging state with a very large partition key size. The 
forged paging state will be passed through the driver and consumed by the 
server and cause OutOfMemoryError:
{noformat}
java.lang.OutOfMemoryError: Java heap space
at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:401) 
~[apache-cassandra-3.11.2.jar:3.11.2]
at 
org.apache.cassandra.utils.ByteBufferUtil.readWithVIntLength(ByteBufferUtil.java:340)
 ~[apache-cassandra-3.11.2.jar:3.11.2]
at 
org.apache.cassandra.service.pager.PagingState.deserialize(PagingState.java:78) 
~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.cql3.QueryOptions$Codec.decode(QueryOptions.java:432) 
~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.cql3.QueryOptions$Codec.decode(QueryOptions.java:366) 
~[apache-cassandra-3.11.2.jar:3.11.2]
at 
org.apache.cassandra.transport.messages.ExecuteMessage$1.decode(ExecuteMessage.java:46)
 ~[apache-cassandra-3.11.2.jar:3.11.2]
at 
org.apache.cassandra.transport.messages.ExecuteMessage$1.decode(ExecuteMessage.java:42)
 ~[apache-cassandra-3.11.2.jar:3.11.2]
at 
org.apache.cassandra.transport.Message$ProtocolDecoder.decode(Message.java:281) 
~[apache-cassandra-3.11.2.jar:3.11.2]
at 
org.apache.cassandra.transport.Message$ProtocolDecoder.decode(Message.java:262) 
~[apache-cassandra-3.11.2.jar:3.11.2]
at 
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:88)
 [netty-all-4.0.44.Final.jar:4.0.44.Final]
 {noformat}

The paging state used to cause the above exception is shown below. The encoded 
partition key size is 2G.
{noformat}
00180010f077359400736f6d654b6579090002633104002a0a006a66e551aa30a3ac47e693ab43bd29a90004
{noformat}

This issue is especially serious in a multi-tenant environment, as one 
malicious tenant can affect all other tenants.
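
The general defence against this class of attack is to validate any length read from untrusted input against what is actually available before allocating. A minimal sketch (hypothetical names, not Cassandra's actual {{PagingState}} code):

{code}
import java.nio.ByteBuffer;

public class BoundedLengthRead
{
    // Read a length-prefixed blob, refusing lengths larger than the bytes present.
    // Without the check, a forged 2 GB length triggers a huge allocation and OOM.
    static byte[] readWithLength(ByteBuffer in)
    {
        int length = in.getInt();
        if (length < 0 || length > in.remaining())
            throw new IllegalArgumentException("corrupt or malicious length: " + length);

        byte[] out = new byte[length];
        in.get(out);
        return out;
    }
}
{code}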






[jira] [Comment Edited] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2018-05-01 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459637#comment-16459637
 ] 

mck edited comment on CASSANDRA-10751 at 5/1/18 11:51 PM:
--

Sorry [~cscetbon] that this got completely forgotten. The patch makes sense.

I've put your patch into relevant branches, and will commit once they go green.

|| Branch || uTest || dTest ||
|[cassandra-2.2_10751|https://github.com/thelastpickle/cassandra/tree/mck/cassandra-2.2_10751]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-2.2_10751.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-2.2_10751]|
 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/534/
 |
|[cassandra-3.0_10751|https://github.com/thelastpickle/cassandra/tree/mck/cassandra-3.0_10751]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.0_10751.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.0_10751]|
 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/538/
 |
|[cassandra-3.11_10751|https://github.com/thelastpickle/cassandra/tree/mck/cassandra-3.11_10751]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.11_10751.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.11_10751]|
 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/536/
 |
|[trunk_10751|https://github.com/thelastpickle/cassandra/tree/mck/trunk_10751]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Ftrunk_10751.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Ftrunk_10751]|
 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/537/
 |


was (Author: michaelsembwever):
Sorry [~cscetbon] that this got completely forgotten. The patch makes sense.

I've put your patch into relevant branches, and will commit once they go green.

|| Branch || uTest || dTest ||
|[cassandra-2.2_10751|https://github.com/thelastpickle/cassandra/tree/mck/cassandra-2.2_10751]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-2.2_10751.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-2.2_10751]|
 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/534/
 |
|[cassandra-3.0_10751|https://github.com/thelastpickle/cassandra/tree/mck/cassandra-3.0_10751]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.0_10751.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.0_10751]|
 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/535/
 |
|[cassandra-3.11_10751|https://github.com/thelastpickle/cassandra/tree/mck/cassandra-3.11_10751]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.11_10751.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.11_10751]|
 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/536/
 |
|[trunk_10751|https://github.com/thelastpickle/cassandra/tree/mck/trunk_10751]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Ftrunk_10751.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Ftrunk_10751]|
 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/537/
 |

> "Pool is shutdown" error when running Hadoop jobs on Yarn
> -
>
> Key: CASSANDRA-10751
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10751
> Project: Cassandra
>  Issue Type: Bug
> Environment: Hadoop 2.7.1 (HDP 2.3.2)
> Cassandra 2.1.11
>Reporter: Cyril Scetbon
>Assignee: Cyril Scetbon
>Priority: Major
> Attachments: CASSANDRA-10751-2.2.patch, CASSANDRA-10751-3.0.patch, 
> output.log
>
>
> Trying to execute a Hadoop job on Yarn, I get errors from Cassandra's 
> internal code. It seems that connections are shut down but we can't understand 
> why ...
> Here is an extract of the errors. I have also attached a file with the 
> complete debug logs.
> {code}
> 15/11/22 20:05:54 [main]: DEBUG core.RequestHandler: Error querying 
> node006.internal.net/192.168.12.22:9042, trying next host (error is: 
> com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown)
> Failed with exception java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> 15/11/22 20:05:54 [main]: ERROR CliDriver: Failed 

[jira] [Commented] (CASSANDRA-14432) Docs container leaves build artifacts behind

2018-05-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16460290#comment-16460290
 ] 

ASF GitHub Bot commented on CASSANDRA-14432:


GitHub user jameslamb opened a pull request:

https://github.com/apache/cassandra/pull/222

CASSANDRA-14432: clean up build artifacts in docs container

Small PR to cut out some unnecessary stuff in the docs container. Please 
see the PR description in 
[CASSANDRA-14432](https://issues.apache.org/jira/browse/CASSANDRA-14432).

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jameslamb/cassandra trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cassandra/pull/222.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #222


commit bf1c2b6d0d320d345b78aee759b7ad67f5835cc5
Author: James Lamb 
Date:   2018-05-01T23:16:21Z

CASSANDRA-14432: clean up build artifacts in docs container




> Docs container leaves build artifacts behind
> 
>
> Key: CASSANDRA-14432
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14432
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: James Lamb
>Priority: Trivial
>
> Hello!
> I was looking at [the 
> repo|https://github.com/apache/cassandra/blob/db81f6bffef1a8215fec28bb0522dc9684870627/doc/Dockerfile]
>  tonight and tried to build the documentation locally. While doing this, I 
> noticed that the container here does not clean up intermediate build 
> artifacts.
> Will you please consider a PR to address that? When I ran the build locally, 
> removing the build artifacts reduced the size of the container by 14 MB. I 
> will post a link to the PR shortly.
>  
> Thank you very much,
>  
> -James
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-14432) Docs container leaves build artifacts behind

2018-05-01 Thread James Lamb (JIRA)
James Lamb created CASSANDRA-14432:
--

 Summary: Docs container leaves build artifacts behind
 Key: CASSANDRA-14432
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14432
 Project: Cassandra
  Issue Type: Improvement
Reporter: James Lamb


Hello!

I was looking at [the 
repo|https://github.com/apache/cassandra/blob/db81f6bffef1a8215fec28bb0522dc9684870627/doc/Dockerfile]
 tonight and tried to build the documentation locally. While doing this, I 
noticed that the container here does not clean up intermediate build artifacts.

Will you please consider a PR to address that? When I ran the build locally, 
removing the build artifacts reduced the size of the container by 14 MB. I will 
post a link to the PR shortly.

 

Thank you very much,

 

-James

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12743) Assertion error while running compaction

2018-05-01 Thread Jay Zhuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Zhuang updated CASSANDRA-12743:
---
Reproduced In: 3.0.14, 2.2.7  (was: 2.2.7, 3.0.14)
   Status: Patch Available  (was: Open)

> Assertion error while running compaction 
> -
>
> Key: CASSANDRA-12743
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12743
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: unix
>Reporter: Jean-Baptiste Le Duigou
>Assignee: Jay Zhuang
>Priority: Major
> Fix For: 2.2.x, 3.0.x, 3.11.x, 4.x
>
>
> While running compaction I sometimes run into an error:
> {noformat}
> nodetool compact
> error: null
> -- StackTrace --
> java.lang.AssertionError
> at 
> org.apache.cassandra.io.compress.CompressionMetadata$Chunk.(CompressionMetadata.java:463)
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:228)
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile.createMappedSegments(CompressedSegmentedFile.java:80)
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile.(CompressedPoolingSegmentedFile.java:38)
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:101)
> at 
> org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:198)
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.openEarly(BigTableWriter.java:315)
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.maybeReopenEarly(SSTableRewriter.java:171)
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:116)
> at 
> org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.append(DefaultCompactionWriter.java:64)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:184)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$8.runMayThrow(CompactionManager.java:599)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Why is that happening?
> Is there any way to provide more details (e.g. which SSTable cannot be 
> compacted)?
> We are using Cassandra 2.2.7



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12743) Assertion error while running compaction

2018-05-01 Thread Jay Zhuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Zhuang updated CASSANDRA-12743:
---
   Resolution: Fixed
Reproduced In: 3.0.14, 2.2.7  (was: 2.2.7, 3.0.14)
   Status: Resolved  (was: Patch Available)

> Assertion error while running compaction 
> -
>
> Key: CASSANDRA-12743
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12743
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: unix
>Reporter: Jean-Baptiste Le Duigou
>Assignee: Jay Zhuang
>Priority: Major
> Fix For: 2.2.x, 3.0.x, 3.11.x, 4.x
>
>
> While running compaction I sometimes run into an error:
> {noformat}
> nodetool compact
> error: null
> -- StackTrace --
> java.lang.AssertionError
> at 
> org.apache.cassandra.io.compress.CompressionMetadata$Chunk.(CompressionMetadata.java:463)
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:228)
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile.createMappedSegments(CompressedSegmentedFile.java:80)
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile.(CompressedPoolingSegmentedFile.java:38)
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:101)
> at 
> org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:198)
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.openEarly(BigTableWriter.java:315)
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.maybeReopenEarly(SSTableRewriter.java:171)
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:116)
> at 
> org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.append(DefaultCompactionWriter.java:64)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:184)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$8.runMayThrow(CompactionManager.java:599)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Why is that happening?
> Is there any way to provide more details (e.g. which SSTable cannot be 
> compacted)?
> We are using Cassandra 2.2.7



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12743) Assertion error while running compaction

2018-05-01 Thread Jay Zhuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Zhuang updated CASSANDRA-12743:
---
Status: In Progress  (was: Ready to Commit)

> Assertion error while running compaction 
> -
>
> Key: CASSANDRA-12743
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12743
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: unix
>Reporter: Jean-Baptiste Le Duigou
>Assignee: Jay Zhuang
>Priority: Major
> Fix For: 2.2.x, 3.0.x, 3.11.x, 4.x
>
>
> While running compaction I sometimes run into an error:
> {noformat}
> nodetool compact
> error: null
> -- StackTrace --
> java.lang.AssertionError
> at 
> org.apache.cassandra.io.compress.CompressionMetadata$Chunk.(CompressionMetadata.java:463)
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:228)
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile.createMappedSegments(CompressedSegmentedFile.java:80)
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile.(CompressedPoolingSegmentedFile.java:38)
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:101)
> at 
> org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:198)
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.openEarly(BigTableWriter.java:315)
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.maybeReopenEarly(SSTableRewriter.java:171)
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:116)
> at 
> org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.append(DefaultCompactionWriter.java:64)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:184)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$8.runMayThrow(CompactionManager.java:599)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Why is that happening?
> Is there any way to provide more details (e.g. which SSTable cannot be 
> compacted)?
> We are using Cassandra 2.2.7



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-12743) Assertion error while running compaction

2018-05-01 Thread Jay Zhuang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16460237#comment-16460237
 ] 

Jay Zhuang commented on CASSANDRA-12743:


Thank you Marcus again for the review. Committed as 
[3a71382|https://github.com/apache/cassandra/commit/3a713827f48399f389ea851a19b8ec8cd2cc5773].
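
For readers following along, here is a toy model of the invariant the patch 
restores; it is not the Cassandra writer itself, just an illustration. The 
flushed offset a writer advertises must be measured in logical (uncompressed) 
bytes actually on disk, and truncating the file must also rewind that offset; 
otherwise a reader that opens the sstable early (as the openEarly path in the 
stack trace does) can attempt to read data that was never flushed.
{code:java}
// Toy model only; not org.apache.cassandra.io.util.SequentialWriter.
public final class FlushOffsetModel
{
    private long logicalBytesWritten; // bytes handed to the writer so far
    private long lastFlushOffset;     // logical bytes known to be on disk

    public void write(int numBytes)
    {
        logicalBytesWritten += numBytes;
    }

    public void flush()
    {
        // advertise logical bytes, not compressed chunk sizes
        lastFlushOffset = logicalBytesWritten;
    }

    public void truncateTo(long logicalOffset)
    {
        // rewinding the file must also rewind the advertised flush offset
        logicalBytesWritten = Math.min(logicalBytesWritten, logicalOffset);
        lastFlushOffset = Math.min(lastFlushOffset, logicalOffset);
    }

    /** Upper bound of what an early-opening reader may safely read. */
    public long readableBytes()
    {
        return lastFlushOffset;
    }
}
{code}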

> Assertion error while running compaction 
> -
>
> Key: CASSANDRA-12743
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12743
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: unix
>Reporter: Jean-Baptiste Le Duigou
>Assignee: Jay Zhuang
>Priority: Major
> Fix For: 2.2.x, 3.0.x, 3.11.x, 4.x
>
>
> While running compaction I sometimes run into an error:
> {noformat}
> nodetool compact
> error: null
> -- StackTrace --
> java.lang.AssertionError
> at 
> org.apache.cassandra.io.compress.CompressionMetadata$Chunk.(CompressionMetadata.java:463)
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:228)
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile.createMappedSegments(CompressedSegmentedFile.java:80)
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile.(CompressedPoolingSegmentedFile.java:38)
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:101)
> at 
> org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:198)
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.openEarly(BigTableWriter.java:315)
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.maybeReopenEarly(SSTableRewriter.java:171)
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:116)
> at 
> org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.append(DefaultCompactionWriter.java:64)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:184)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$8.runMayThrow(CompactionManager.java:599)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Why is that happening?
> Is there any way to provide more details (e.g. which SSTable cannot be 
> compacted)?
> We are using Cassandra 2.2.7



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[05/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2018-05-01 Thread jzhuang
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/733f6b0c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/733f6b0c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/733f6b0c

Branch: refs/heads/cassandra-3.11
Commit: 733f6b0cf8c5f8d89b9a9bf102e9e37548bba601
Parents: e16f0ed 3a71382
Author: Jay Zhuang 
Authored: Tue May 1 15:08:51 2018 -0700
Committer: Jay Zhuang 
Committed: Tue May 1 15:10:13 2018 -0700

--
 CHANGES.txt |   1 +
 .../io/compress/CompressedSequentialWriter.java |  17 ++-
 .../cassandra/io/util/SequentialWriter.java |   2 +
 .../CompressedSequentialWriterReopenTest.java   | 148 +++
 .../CompressedSequentialWriterTest.java |  53 +++
 .../cassandra/io/util/SequentialWriterTest.java |  41 +
 6 files changed, 260 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/733f6b0c/CHANGES.txt
--
diff --cc CHANGES.txt
index 857cf96,22ee346..9992802
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,25 -1,5 +1,26 @@@
 -2.2.13
 +3.0.17
 + * Delay hints store excise by write timeout to avoid race with decommission 
(CASSANDRA-13740)
 + * Deprecate background repair and probablistic read_repair_chance table 
options
 +   (CASSANDRA-13910)
 + * Add missed CQL keywords to documentation (CASSANDRA-14359)
 + * Fix unbounded validation compactions on repair / revert CASSANDRA-13797 
(CASSANDRA-14332)
 + * Avoid deadlock when running nodetool refresh before node is fully up 
(CASSANDRA-14310)
 + * Handle all exceptions when opening sstables (CASSANDRA-14202)
 + * Handle incompletely written hint descriptors during startup 
(CASSANDRA-14080)
 + * Handle repeat open bound from SRP in read repair (CASSANDRA-14330)
 + * Use zero as default score in DynamicEndpointSnitch (CASSANDRA-14252)
 + * Respect max hint window when hinting for LWT (CASSANDRA-14215)
 + * Adding missing WriteType enum values to v3, v4, and v5 spec 
(CASSANDRA-13697)
 + * Don't regenerate bloomfilter and summaries on startup (CASSANDRA-11163)
 + * Fix NPE when performing comparison against a null frozen in LWT 
(CASSANDRA-14087)
 + * Log when SSTables are deleted (CASSANDRA-14302)
 + * Fix batch commitlog sync regression (CASSANDRA-14292)
 + * Write to pending endpoint when view replica is also base replica 
(CASSANDRA-14251)
 + * Chain commit log marker potential performance regression in batch commit 
mode (CASSANDRA-14194)
 + * Fully utilise specified compaction threads (CASSANDRA-14210)
 + * Pre-create deletion log records to finish compactions quicker 
(CASSANDRA-12763)
 +Merged from 2.2:
+  * Fix compaction failure caused by reading un-flushed data (CASSANDRA-12743)
   * Use Bounds instead of Range for sstables in anticompaction 
(CASSANDRA-14411)
   * Fix JSON queries with IN restrictions and ORDER BY clause (CASSANDRA-14286)
   * CQL fromJson(null) throws NullPointerException (CASSANDRA-13891)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/733f6b0c/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
--
diff --cc 
src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
index 74258cf,a7f9bb4..43f1fd0
--- a/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
+++ b/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
@@@ -132,7 -129,10 +132,7 @@@ public class CompressedSequentialWrite
  // write corresponding checksum
  compressed.rewind();
  crcMetadata.appendDirect(compressed, true);
- lastFlushOffset += compressedLength + 4;
+ lastFlushOffset = uncompressedSize;
 -
 -// adjust our bufferOffset to account for the new uncompressed 
data we've now written out
 -resetBuffer();
  }
  catch (IOException e)
  {
@@@ -240,6 -239,19 +240,19 @@@
  metadataWriter.resetAndTruncate(realMark.nextChunkIndex - 1);
  }
  
+ private void truncate(long toFileSize, long toBufferOffset)
+ {
+ try
+ {
 -channel.truncate(toFileSize);
++fchannel.truncate(toFileSize);
+ lastFlushOffset = toBufferOffset;
+ }
+ catch (IOException e)
+ {
+ throw new FSWriteError(e, getPath());
+ }
+ }
+ 
  /**
   * Seek to the offset where next compressed data chunk should be stored.
   */


[09/10] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2018-05-01 Thread jzhuang
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/783bbb3c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/783bbb3c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/783bbb3c

Branch: refs/heads/trunk
Commit: 783bbb3c817e7dbfee8181d210487edc13414ac1
Parents: b67d6fb 733f6b0
Author: Jay Zhuang 
Authored: Tue May 1 15:11:22 2018 -0700
Committer: Jay Zhuang 
Committed: Tue May 1 15:12:14 2018 -0700

--
 CHANGES.txt |   1 +
 .../io/compress/CompressedSequentialWriter.java |  17 ++-
 .../CompressedSequentialWriterReopenTest.java   | 148 +++
 .../CompressedSequentialWriterTest.java |   6 +
 4 files changed, 170 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/783bbb3c/CHANGES.txt
--
diff --cc CHANGES.txt
index c392059,9992802..443c298
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -31,8 -20,10 +31,9 @@@ Merged from 3.0
   * Fully utilise specified compaction threads (CASSANDRA-14210)
   * Pre-create deletion log records to finish compactions quicker 
(CASSANDRA-12763)
  Merged from 2.2:
+  * Fix compaction failure caused by reading un-flushed data (CASSANDRA-12743)
   * Use Bounds instead of Range for sstables in anticompaction 
(CASSANDRA-14411)
   * Fix JSON queries with IN restrictions and ORDER BY clause (CASSANDRA-14286)
 - * CQL fromJson(null) throws NullPointerException (CASSANDRA-13891)
   * Backport circleci yaml (CASSANDRA-14240)
  Merged from 2.1:
   * Check checksum before decompressing data (CASSANDRA-14284)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/783bbb3c/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/783bbb3c/test/unit/org/apache/cassandra/io/compress/CompressedSequentialWriterTest.java
--
diff --cc 
test/unit/org/apache/cassandra/io/compress/CompressedSequentialWriterTest.java
index a088e20,f04439a..52b18a9
--- 
a/test/unit/org/apache/cassandra/io/compress/CompressedSequentialWriterTest.java
+++ 
b/test/unit/org/apache/cassandra/io/compress/CompressedSequentialWriterTest.java
@@@ -26,10 -27,9 +26,11 @@@ import java.util.*
  
  import static org.apache.commons.io.FileUtils.readFileToByteArray;
  import static org.junit.Assert.assertEquals;
+ import static org.junit.Assert.assertTrue;
  
 +import com.google.common.io.Files;
  import org.junit.After;
 +import org.junit.BeforeClass;
  import org.junit.Test;
  
  import junit.framework.Assert;
@@@ -89,42 -88,46 +90,47 @@@ public class CompressedSequentialWriter
  private void testWrite(File f, int bytesToTest) throws IOException
  {
  final String filename = f.getAbsolutePath();
 -final ChannelProxy channel = new ChannelProxy(f);
 -
 -try
 +MetadataCollector sstableMetadataCollector = new 
MetadataCollector(new 
ClusteringComparator(Collections.singletonList(BytesType.instance)));
 +
 +byte[] dataPre = new byte[bytesToTest];
 +byte[] rawPost = new byte[bytesToTest];
 +try (CompressedSequentialWriter writer = new 
CompressedSequentialWriter(f, filename + ".metadata",
 +null, SequentialWriterOption.DEFAULT,
 +compressionParameters,
 +sstableMetadataCollector))
  {
 -MetadataCollector sstableMetadataCollector = new 
MetadataCollector(new 
ClusteringComparator(Arrays.asList(BytesType.instance)));
 +Random r = new Random(42);
 +
 +// Test both write with byte[] and ByteBuffer
 +r.nextBytes(dataPre);
 +r.nextBytes(rawPost);
 +ByteBuffer dataPost = makeBB(bytesToTest);
 +dataPost.put(rawPost);
 +dataPost.flip();
 +
 +writer.write(dataPre);
 +DataPosition mark = writer.mark();
  
 -byte[] dataPre = new byte[bytesToTest];
 -byte[] rawPost = new byte[bytesToTest];
 -try (CompressedSequentialWriter writer = new 
CompressedSequentialWriter(f, filename + ".metadata", compressionParameters, 
sstableMetadataCollector);)
 +// Write enough garbage to transition chunk
 +for (int i = 0; i < CompressionParams.DEFAULT_CHUNK_LENGTH; i++)
  {
 -Random r = new Random(42);
 -
 -// Test both write with byte[] and ByteBuffer
 -r.nextBytes(dataPre);
 -r.nextBytes(rawPost);
 -

[jira] [Updated] (CASSANDRA-14335) C* nodetool should report the lowest of the highest CQL protocol version supported by all clients connecting to it

2018-05-01 Thread Dinesh Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Joshi updated CASSANDRA-14335:
-
Reviewer: Jason Brown
  Status: Patch Available  (was: Open)

> C* nodetool should report the lowest of the highest CQL protocol version 
> supported by all clients connecting to it
> --
>
> Key: CASSANDRA-14335
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14335
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Major
>
> When upgrading C*, it is hard to tell whether any client will be affected by 
> the upgrade. C* should internally store the highest protocol version 
> negotiated by each client connecting to it. The lowest of these versions will 
> help determine whether any client will be adversely affected by the upgrade.
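
Below is a minimal sketch of the bookkeeping the description asks for, assuming 
nothing about how Cassandra would actually wire it in; the class and method 
names are illustrative only. It records, per client, the highest CQL protocol 
version negotiated, and reports the lowest of those maxima, i.e. the most 
constrained client an operator has to worry about before upgrading.
{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only; not the proposed Cassandra implementation.
public final class ClientProtocolVersionTracker
{
    // keyed by client address; a String key is used purely for simplicity
    private final Map<String, Integer> highestVersionByClient = new ConcurrentHashMap<>();

    public void onNegotiated(String clientAddress, int protocolVersion)
    {
        // remember the highest version ever negotiated by this client
        highestVersionByClient.merge(clientAddress, protocolVersion, Math::max);
    }

    /** Lowest of the per-client maxima, or -1 if no client has connected yet. */
    public int lowestOfHighestVersions()
    {
        return highestVersionByClient.values().stream()
                                     .mapToInt(Integer::intValue)
                                     .min()
                                     .orElse(-1);
    }
}
{code}
Exposing lowestOfHighestVersions() through nodetool would then give operators a 
single number to check before an upgrade.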



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[03/10] cassandra git commit: Fix compaction failure caused by reading un-flushed data

2018-05-01 Thread jzhuang
Fix compaction failure caused by reading un-flushed data

patch by Jay Zhuang; reviewed by Marcus Eriksson for CASSANDRA-12743


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3a713827
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3a713827
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3a713827

Branch: refs/heads/cassandra-3.11
Commit: 3a713827f48399f389ea851a19b8ec8cd2cc5773
Parents: 334dca9
Author: Jay Zhuang 
Authored: Sat Apr 21 11:15:06 2018 -0700
Committer: Jay Zhuang 
Committed: Tue May 1 15:07:01 2018 -0700

--
 CHANGES.txt |   1 +
 .../io/compress/CompressedSequentialWriter.java |  17 ++-
 .../cassandra/io/util/SequentialWriter.java |   2 +
 .../CompressedSequentialWriterReopenTest.java   | 153 +++
 .../CompressedSequentialWriterTest.java |  52 +++
 .../cassandra/io/util/SequentialWriterTest.java |  41 +
 6 files changed, 264 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3a713827/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 5f6189f..22ee346 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.13
+ * Fix compaction failure caused by reading un-flushed data (CASSANDRA-12743)
  * Use Bounds instead of Range for sstables in anticompaction (CASSANDRA-14411)
  * Fix JSON queries with IN restrictions and ORDER BY clause (CASSANDRA-14286)
  * CQL fromJson(null) throws NullPointerException (CASSANDRA-13891)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3a713827/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
--
diff --git 
a/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java 
b/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
index 9c7c776..a7f9bb4 100644
--- a/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
+++ b/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
@@ -129,7 +129,7 @@ public class CompressedSequentialWriter extends 
SequentialWriter
 // write corresponding checksum
 compressed.rewind();
 crcMetadata.appendDirect(compressed, true);
-lastFlushOffset += compressedLength + 4;
+lastFlushOffset = uncompressedSize;
 
 // adjust our bufferOffset to account for the new uncompressed 
data we've now written out
 resetBuffer();
@@ -235,10 +235,23 @@ public class CompressedSequentialWriter extends 
SequentialWriter
 chunkCount = realMark.nextChunkIndex - 1;
 
 // truncate data and index file
-truncate(chunkOffset);
+truncate(chunkOffset, bufferOffset);
 metadataWriter.resetAndTruncate(realMark.nextChunkIndex - 1);
 }
 
+private void truncate(long toFileSize, long toBufferOffset)
+{
+try
+{
+channel.truncate(toFileSize);
+lastFlushOffset = toBufferOffset;
+}
+catch (IOException e)
+{
+throw new FSWriteError(e, getPath());
+}
+}
+
 /**
  * Seek to the offset where next compressed data chunk should be stored.
  */

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3a713827/src/java/org/apache/cassandra/io/util/SequentialWriter.java
--
diff --git a/src/java/org/apache/cassandra/io/util/SequentialWriter.java 
b/src/java/org/apache/cassandra/io/util/SequentialWriter.java
index 0c39469..452318e 100644
--- a/src/java/org/apache/cassandra/io/util/SequentialWriter.java
+++ b/src/java/org/apache/cassandra/io/util/SequentialWriter.java
@@ -430,6 +430,7 @@ public class SequentialWriter extends OutputStream 
implements WritableByteChanne
 throw new FSReadError(e, getPath());
 }
 
+bufferOffset = truncateTarget;
 resetBuffer();
 }
 
@@ -443,6 +444,7 @@ public class SequentialWriter extends OutputStream 
implements WritableByteChanne
 try
 {
 channel.truncate(toSize);
+lastFlushOffset = toSize;
 }
 catch (IOException e)
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3a713827/test/unit/org/apache/cassandra/io/compress/CompressedSequentialWriterReopenTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/io/compress/CompressedSequentialWriterReopenTest.java
 
b/test/unit/org/apache/cassandra/io/compress/CompressedSequentialWriterReopenTest.java
new 

[06/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2018-05-01 Thread jzhuang
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/733f6b0c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/733f6b0c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/733f6b0c

Branch: refs/heads/cassandra-3.0
Commit: 733f6b0cf8c5f8d89b9a9bf102e9e37548bba601
Parents: e16f0ed 3a71382
Author: Jay Zhuang 
Authored: Tue May 1 15:08:51 2018 -0700
Committer: Jay Zhuang 
Committed: Tue May 1 15:10:13 2018 -0700

--
 CHANGES.txt |   1 +
 .../io/compress/CompressedSequentialWriter.java |  17 ++-
 .../cassandra/io/util/SequentialWriter.java |   2 +
 .../CompressedSequentialWriterReopenTest.java   | 148 +++
 .../CompressedSequentialWriterTest.java |  53 +++
 .../cassandra/io/util/SequentialWriterTest.java |  41 +
 6 files changed, 260 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/733f6b0c/CHANGES.txt
--
diff --cc CHANGES.txt
index 857cf96,22ee346..9992802
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,25 -1,5 +1,26 @@@
 -2.2.13
 +3.0.17
 + * Delay hints store excise by write timeout to avoid race with decommission 
(CASSANDRA-13740)
 + * Deprecate background repair and probablistic read_repair_chance table 
options
 +   (CASSANDRA-13910)
 + * Add missed CQL keywords to documentation (CASSANDRA-14359)
 + * Fix unbounded validation compactions on repair / revert CASSANDRA-13797 
(CASSANDRA-14332)
 + * Avoid deadlock when running nodetool refresh before node is fully up 
(CASSANDRA-14310)
 + * Handle all exceptions when opening sstables (CASSANDRA-14202)
 + * Handle incompletely written hint descriptors during startup 
(CASSANDRA-14080)
 + * Handle repeat open bound from SRP in read repair (CASSANDRA-14330)
 + * Use zero as default score in DynamicEndpointSnitch (CASSANDRA-14252)
 + * Respect max hint window when hinting for LWT (CASSANDRA-14215)
 + * Adding missing WriteType enum values to v3, v4, and v5 spec 
(CASSANDRA-13697)
 + * Don't regenerate bloomfilter and summaries on startup (CASSANDRA-11163)
 + * Fix NPE when performing comparison against a null frozen in LWT 
(CASSANDRA-14087)
 + * Log when SSTables are deleted (CASSANDRA-14302)
 + * Fix batch commitlog sync regression (CASSANDRA-14292)
 + * Write to pending endpoint when view replica is also base replica 
(CASSANDRA-14251)
 + * Chain commit log marker potential performance regression in batch commit 
mode (CASSANDRA-14194)
 + * Fully utilise specified compaction threads (CASSANDRA-14210)
 + * Pre-create deletion log records to finish compactions quicker 
(CASSANDRA-12763)
 +Merged from 2.2:
+  * Fix compaction failure caused by reading un-flushed data (CASSANDRA-12743)
   * Use Bounds instead of Range for sstables in anticompaction 
(CASSANDRA-14411)
   * Fix JSON queries with IN restrictions and ORDER BY clause (CASSANDRA-14286)
   * CQL fromJson(null) throws NullPointerException (CASSANDRA-13891)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/733f6b0c/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
--
diff --cc 
src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
index 74258cf,a7f9bb4..43f1fd0
--- a/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
+++ b/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
@@@ -132,7 -129,10 +132,7 @@@ public class CompressedSequentialWrite
  // write corresponding checksum
  compressed.rewind();
  crcMetadata.appendDirect(compressed, true);
- lastFlushOffset += compressedLength + 4;
+ lastFlushOffset = uncompressedSize;
 -
 -// adjust our bufferOffset to account for the new uncompressed 
data we've now written out
 -resetBuffer();
  }
  catch (IOException e)
  {
@@@ -240,6 -239,19 +240,19 @@@
  metadataWriter.resetAndTruncate(realMark.nextChunkIndex - 1);
  }
  
+ private void truncate(long toFileSize, long toBufferOffset)
+ {
+ try
+ {
 -channel.truncate(toFileSize);
++fchannel.truncate(toFileSize);
+ lastFlushOffset = toBufferOffset;
+ }
+ catch (IOException e)
+ {
+ throw new FSWriteError(e, getPath());
+ }
+ }
+ 
  /**
   * Seek to the offset where next compressed data chunk should be stored.
   */


[10/10] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2018-05-01 Thread jzhuang
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e9418f80
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e9418f80
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e9418f80

Branch: refs/heads/trunk
Commit: e9418f808c03b82837a1ab7627abe08057c1388f
Parents: 2fe4b9d 783bbb3
Author: Jay Zhuang 
Authored: Tue May 1 15:16:05 2018 -0700
Committer: Jay Zhuang 
Committed: Tue May 1 15:18:26 2018 -0700

--
 CHANGES.txt |   1 +
 .../io/compress/CompressedSequentialWriter.java |  17 ++-
 .../CompressedSequentialWriterReopenTest.java   | 148 +++
 .../CompressedSequentialWriterTest.java |   6 +
 4 files changed, 170 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e9418f80/CHANGES.txt
--
diff --cc CHANGES.txt
index 33c81d1,443c298..2545e83
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -264,11 -31,13 +264,12 @@@ Merged from 3.0
   * Fully utilise specified compaction threads (CASSANDRA-14210)
   * Pre-create deletion log records to finish compactions quicker 
(CASSANDRA-12763)
  Merged from 2.2:
+  * Fix compaction failure caused by reading un-flushed data (CASSANDRA-12743)
   * Use Bounds instead of Range for sstables in anticompaction 
(CASSANDRA-14411)
   * Fix JSON queries with IN restrictions and ORDER BY clause (CASSANDRA-14286)
 - * Backport circleci yaml (CASSANDRA-14240)
 + * CQL fromJson(null) throws NullPointerException (CASSANDRA-13891)
  Merged from 2.1:
   * Check checksum before decompressing data (CASSANDRA-14284)
 - * CVE-2017-5929 Security vulnerability in Logback warning in NEWS.txt 
(CASSANDRA-14183)
  
  
  3.11.2

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e9418f80/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
--
diff --cc 
src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
index 8955d4f,5694616..c35ecc8
--- a/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
+++ b/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
@@@ -165,13 -155,13 +165,13 @@@ public class CompressedSequentialWrite
  chunkCount++;
  
  // write out the compressed data
 -compressed.flip();
 -channel.write(compressed);
 +toWrite.flip();
 +channel.write(toWrite);
  
  // write corresponding checksum
 -compressed.rewind();
 -crcMetadata.appendDirect(compressed, true);
 +toWrite.rewind();
 +crcMetadata.appendDirect(toWrite, true);
- lastFlushOffset += compressedLength + 4;
+ lastFlushOffset = uncompressedSize;
  }
  catch (IOException e)
  {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e9418f80/test/unit/org/apache/cassandra/io/compress/CompressedSequentialWriterTest.java
--


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[08/10] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2018-05-01 Thread jzhuang
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/783bbb3c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/783bbb3c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/783bbb3c

Branch: refs/heads/cassandra-3.11
Commit: 783bbb3c817e7dbfee8181d210487edc13414ac1
Parents: b67d6fb 733f6b0
Author: Jay Zhuang 
Authored: Tue May 1 15:11:22 2018 -0700
Committer: Jay Zhuang 
Committed: Tue May 1 15:12:14 2018 -0700

--
 CHANGES.txt |   1 +
 .../io/compress/CompressedSequentialWriter.java |  17 ++-
 .../CompressedSequentialWriterReopenTest.java   | 148 +++
 .../CompressedSequentialWriterTest.java |   6 +
 4 files changed, 170 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/783bbb3c/CHANGES.txt
--
diff --cc CHANGES.txt
index c392059,9992802..443c298
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -31,8 -20,10 +31,9 @@@ Merged from 3.0
   * Fully utilise specified compaction threads (CASSANDRA-14210)
   * Pre-create deletion log records to finish compactions quicker 
(CASSANDRA-12763)
  Merged from 2.2:
+  * Fix compaction failure caused by reading un-flushed data (CASSANDRA-12743)
   * Use Bounds instead of Range for sstables in anticompaction 
(CASSANDRA-14411)
   * Fix JSON queries with IN restrictions and ORDER BY clause (CASSANDRA-14286)
 - * CQL fromJson(null) throws NullPointerException (CASSANDRA-13891)
   * Backport circleci yaml (CASSANDRA-14240)
  Merged from 2.1:
   * Check checksum before decompressing data (CASSANDRA-14284)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/783bbb3c/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/783bbb3c/test/unit/org/apache/cassandra/io/compress/CompressedSequentialWriterTest.java
--
diff --cc 
test/unit/org/apache/cassandra/io/compress/CompressedSequentialWriterTest.java
index a088e20,f04439a..52b18a9
--- 
a/test/unit/org/apache/cassandra/io/compress/CompressedSequentialWriterTest.java
+++ 
b/test/unit/org/apache/cassandra/io/compress/CompressedSequentialWriterTest.java
@@@ -26,10 -27,9 +26,11 @@@ import java.util.*
  
  import static org.apache.commons.io.FileUtils.readFileToByteArray;
  import static org.junit.Assert.assertEquals;
+ import static org.junit.Assert.assertTrue;
  
 +import com.google.common.io.Files;
  import org.junit.After;
 +import org.junit.BeforeClass;
  import org.junit.Test;
  
  import junit.framework.Assert;
@@@ -89,42 -88,46 +90,47 @@@ public class CompressedSequentialWriter
  private void testWrite(File f, int bytesToTest) throws IOException
  {
  final String filename = f.getAbsolutePath();
 -final ChannelProxy channel = new ChannelProxy(f);
 -
 -try
 +MetadataCollector sstableMetadataCollector = new 
MetadataCollector(new 
ClusteringComparator(Collections.singletonList(BytesType.instance)));
 +
 +byte[] dataPre = new byte[bytesToTest];
 +byte[] rawPost = new byte[bytesToTest];
 +try (CompressedSequentialWriter writer = new 
CompressedSequentialWriter(f, filename + ".metadata",
 +null, SequentialWriterOption.DEFAULT,
 +compressionParameters,
 +sstableMetadataCollector))
  {
 -MetadataCollector sstableMetadataCollector = new 
MetadataCollector(new 
ClusteringComparator(Arrays.asList(BytesType.instance)));
 +Random r = new Random(42);
 +
 +// Test both write with byte[] and ByteBuffer
 +r.nextBytes(dataPre);
 +r.nextBytes(rawPost);
 +ByteBuffer dataPost = makeBB(bytesToTest);
 +dataPost.put(rawPost);
 +dataPost.flip();
 +
 +writer.write(dataPre);
 +DataPosition mark = writer.mark();
  
 -byte[] dataPre = new byte[bytesToTest];
 -byte[] rawPost = new byte[bytesToTest];
 -try (CompressedSequentialWriter writer = new 
CompressedSequentialWriter(f, filename + ".metadata", compressionParameters, 
sstableMetadataCollector);)
 +// Write enough garbage to transition chunk
 +for (int i = 0; i < CompressionParams.DEFAULT_CHUNK_LENGTH; i++)
  {
 -Random r = new Random(42);
 -
 -// Test both write with byte[] and ByteBuffer
 -r.nextBytes(dataPre);
 -r.nextBytes(rawPost);
 

[02/10] cassandra git commit: Fix compaction failure caused by reading un-flushed data

2018-05-01 Thread jzhuang
Fix compaction failure caused by reading un-flushed data

patch by Jay Zhuang; reviewed by Marcus Eriksson for CASSANDRA-12743


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3a713827
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3a713827
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3a713827

Branch: refs/heads/cassandra-3.0
Commit: 3a713827f48399f389ea851a19b8ec8cd2cc5773
Parents: 334dca9
Author: Jay Zhuang 
Authored: Sat Apr 21 11:15:06 2018 -0700
Committer: Jay Zhuang 
Committed: Tue May 1 15:07:01 2018 -0700

--
 CHANGES.txt |   1 +
 .../io/compress/CompressedSequentialWriter.java |  17 ++-
 .../cassandra/io/util/SequentialWriter.java |   2 +
 .../CompressedSequentialWriterReopenTest.java   | 153 +++
 .../CompressedSequentialWriterTest.java |  52 +++
 .../cassandra/io/util/SequentialWriterTest.java |  41 +
 6 files changed, 264 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3a713827/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 5f6189f..22ee346 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.13
+ * Fix compaction failure caused by reading un-flushed data (CASSANDRA-12743)
  * Use Bounds instead of Range for sstables in anticompaction (CASSANDRA-14411)
  * Fix JSON queries with IN restrictions and ORDER BY clause (CASSANDRA-14286)
  * CQL fromJson(null) throws NullPointerException (CASSANDRA-13891)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3a713827/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
--
diff --git 
a/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java 
b/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
index 9c7c776..a7f9bb4 100644
--- a/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
+++ b/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
@@ -129,7 +129,7 @@ public class CompressedSequentialWriter extends 
SequentialWriter
 // write corresponding checksum
 compressed.rewind();
 crcMetadata.appendDirect(compressed, true);
-lastFlushOffset += compressedLength + 4;
+lastFlushOffset = uncompressedSize;
 
 // adjust our bufferOffset to account for the new uncompressed 
data we've now written out
 resetBuffer();
@@ -235,10 +235,23 @@ public class CompressedSequentialWriter extends 
SequentialWriter
 chunkCount = realMark.nextChunkIndex - 1;
 
 // truncate data and index file
-truncate(chunkOffset);
+truncate(chunkOffset, bufferOffset);
 metadataWriter.resetAndTruncate(realMark.nextChunkIndex - 1);
 }
 
+private void truncate(long toFileSize, long toBufferOffset)
+{
+try
+{
+channel.truncate(toFileSize);
+lastFlushOffset = toBufferOffset;
+}
+catch (IOException e)
+{
+throw new FSWriteError(e, getPath());
+}
+}
+
 /**
  * Seek to the offset where next compressed data chunk should be stored.
  */

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3a713827/src/java/org/apache/cassandra/io/util/SequentialWriter.java
--
diff --git a/src/java/org/apache/cassandra/io/util/SequentialWriter.java 
b/src/java/org/apache/cassandra/io/util/SequentialWriter.java
index 0c39469..452318e 100644
--- a/src/java/org/apache/cassandra/io/util/SequentialWriter.java
+++ b/src/java/org/apache/cassandra/io/util/SequentialWriter.java
@@ -430,6 +430,7 @@ public class SequentialWriter extends OutputStream 
implements WritableByteChanne
 throw new FSReadError(e, getPath());
 }
 
+bufferOffset = truncateTarget;
 resetBuffer();
 }
 
@@ -443,6 +444,7 @@ public class SequentialWriter extends OutputStream 
implements WritableByteChanne
 try
 {
 channel.truncate(toSize);
+lastFlushOffset = toSize;
 }
 catch (IOException e)
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3a713827/test/unit/org/apache/cassandra/io/compress/CompressedSequentialWriterReopenTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/io/compress/CompressedSequentialWriterReopenTest.java
 
b/test/unit/org/apache/cassandra/io/compress/CompressedSequentialWriterReopenTest.java
new file 

[jira] [Commented] (CASSANDRA-14335) C* nodetool should report the lowest of the highest CQL protocol version supported by all clients connecting to it

2018-05-01 Thread Dinesh Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16460233#comment-16460233
 ] 

Dinesh Joshi commented on CASSANDRA-14335:
--

||trunk||
|[branch|https://github.com/dineshjoshi/cassandra/tree/14335-trunk]|
|[utests  
dtests|https://circleci.com/gh/dineshjoshi/workflows/cassandra/tree/14335-trunk]|
||

> C* nodetool should report the lowest of the highest CQL protocol version 
> supported by all clients connecting to it
> --
>
> Key: CASSANDRA-14335
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14335
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Major
>
> When upgrading C*, it is hard to tell whether any client will be affected by 
> the upgrade. C* should internally store the highest protocol version 
> negotiated by each client connecting to it. The lowest of these versions will 
> help determine whether any client will be adversely affected by the upgrade.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[07/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2018-05-01 Thread jzhuang
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/733f6b0c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/733f6b0c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/733f6b0c

Branch: refs/heads/trunk
Commit: 733f6b0cf8c5f8d89b9a9bf102e9e37548bba601
Parents: e16f0ed 3a71382
Author: Jay Zhuang 
Authored: Tue May 1 15:08:51 2018 -0700
Committer: Jay Zhuang 
Committed: Tue May 1 15:10:13 2018 -0700

--
 CHANGES.txt |   1 +
 .../io/compress/CompressedSequentialWriter.java |  17 ++-
 .../cassandra/io/util/SequentialWriter.java |   2 +
 .../CompressedSequentialWriterReopenTest.java   | 148 +++
 .../CompressedSequentialWriterTest.java |  53 +++
 .../cassandra/io/util/SequentialWriterTest.java |  41 +
 6 files changed, 260 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/733f6b0c/CHANGES.txt
--
diff --cc CHANGES.txt
index 857cf96,22ee346..9992802
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,25 -1,5 +1,26 @@@
 -2.2.13
 +3.0.17
 + * Delay hints store excise by write timeout to avoid race with decommission 
(CASSANDRA-13740)
 + * Deprecate background repair and probablistic read_repair_chance table 
options
 +   (CASSANDRA-13910)
 + * Add missed CQL keywords to documentation (CASSANDRA-14359)
 + * Fix unbounded validation compactions on repair / revert CASSANDRA-13797 
(CASSANDRA-14332)
 + * Avoid deadlock when running nodetool refresh before node is fully up 
(CASSANDRA-14310)
 + * Handle all exceptions when opening sstables (CASSANDRA-14202)
 + * Handle incompletely written hint descriptors during startup 
(CASSANDRA-14080)
 + * Handle repeat open bound from SRP in read repair (CASSANDRA-14330)
 + * Use zero as default score in DynamicEndpointSnitch (CASSANDRA-14252)
 + * Respect max hint window when hinting for LWT (CASSANDRA-14215)
 + * Adding missing WriteType enum values to v3, v4, and v5 spec 
(CASSANDRA-13697)
 + * Don't regenerate bloomfilter and summaries on startup (CASSANDRA-11163)
 + * Fix NPE when performing comparison against a null frozen in LWT 
(CASSANDRA-14087)
 + * Log when SSTables are deleted (CASSANDRA-14302)
 + * Fix batch commitlog sync regression (CASSANDRA-14292)
 + * Write to pending endpoint when view replica is also base replica 
(CASSANDRA-14251)
 + * Chain commit log marker potential performance regression in batch commit 
mode (CASSANDRA-14194)
 + * Fully utilise specified compaction threads (CASSANDRA-14210)
 + * Pre-create deletion log records to finish compactions quicker 
(CASSANDRA-12763)
 +Merged from 2.2:
+  * Fix compaction failure caused by reading un-flushed data (CASSANDRA-12743)
   * Use Bounds instead of Range for sstables in anticompaction 
(CASSANDRA-14411)
   * Fix JSON queries with IN restrictions and ORDER BY clause (CASSANDRA-14286)
   * CQL fromJson(null) throws NullPointerException (CASSANDRA-13891)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/733f6b0c/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
--
diff --cc 
src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
index 74258cf,a7f9bb4..43f1fd0
--- a/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
+++ b/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
@@@ -132,7 -129,10 +132,7 @@@ public class CompressedSequentialWrite
  // write corresponding checksum
  compressed.rewind();
  crcMetadata.appendDirect(compressed, true);
- lastFlushOffset += compressedLength + 4;
+ lastFlushOffset = uncompressedSize;
 -
 -// adjust our bufferOffset to account for the new uncompressed 
data we've now written out
 -resetBuffer();
  }
  catch (IOException e)
  {
@@@ -240,6 -239,19 +240,19 @@@
  metadataWriter.resetAndTruncate(realMark.nextChunkIndex - 1);
  }
  
+ private void truncate(long toFileSize, long toBufferOffset)
+ {
+ try
+ {
 -channel.truncate(toFileSize);
++fchannel.truncate(toFileSize);
+ lastFlushOffset = toBufferOffset;
+ }
+ catch (IOException e)
+ {
+ throw new FSWriteError(e, getPath());
+ }
+ }
+ 
  /**
   * Seek to the offset where next compressed data chunk should be stored.
   */

http://git-wip-us.apache.org/repos/asf/cassandra/blob/733f6b0c/src/java/org/apache/cassandra/io/util/SequentialWriter.java

[01/10] cassandra git commit: Fix compaction failure caused by reading un-flushed data

2018-05-01 Thread jzhuang
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 334dca9aa -> 3a713827f
  refs/heads/cassandra-3.0 e16f0ed06 -> 733f6b0cf
  refs/heads/cassandra-3.11 b67d6fb60 -> 783bbb3c8
  refs/heads/trunk 2fe4b9dc6 -> e9418f808


Fix compaction failure caused by reading un-flushed data

patch by Jay Zhuang; reviewed by Marcus Eriksson for CASSANDRA-12743


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3a713827
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3a713827
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3a713827

Branch: refs/heads/cassandra-2.2
Commit: 3a713827f48399f389ea851a19b8ec8cd2cc5773
Parents: 334dca9
Author: Jay Zhuang 
Authored: Sat Apr 21 11:15:06 2018 -0700
Committer: Jay Zhuang 
Committed: Tue May 1 15:07:01 2018 -0700

--
 CHANGES.txt |   1 +
 .../io/compress/CompressedSequentialWriter.java |  17 ++-
 .../cassandra/io/util/SequentialWriter.java |   2 +
 .../CompressedSequentialWriterReopenTest.java   | 153 +++
 .../CompressedSequentialWriterTest.java |  52 +++
 .../cassandra/io/util/SequentialWriterTest.java |  41 +
 6 files changed, 264 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3a713827/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 5f6189f..22ee346 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.13
+ * Fix compaction failure caused by reading un-flushed data (CASSANDRA-12743)
  * Use Bounds instead of Range for sstables in anticompaction (CASSANDRA-14411)
  * Fix JSON queries with IN restrictions and ORDER BY clause (CASSANDRA-14286)
  * CQL fromJson(null) throws NullPointerException (CASSANDRA-13891)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3a713827/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
--
diff --git 
a/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java 
b/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
index 9c7c776..a7f9bb4 100644
--- a/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
+++ b/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
@@ -129,7 +129,7 @@ public class CompressedSequentialWriter extends 
SequentialWriter
 // write corresponding checksum
 compressed.rewind();
 crcMetadata.appendDirect(compressed, true);
-lastFlushOffset += compressedLength + 4;
+lastFlushOffset = uncompressedSize;
 
 // adjust our bufferOffset to account for the new uncompressed 
data we've now written out
 resetBuffer();
@@ -235,10 +235,23 @@ public class CompressedSequentialWriter extends 
SequentialWriter
 chunkCount = realMark.nextChunkIndex - 1;
 
 // truncate data and index file
-truncate(chunkOffset);
+truncate(chunkOffset, bufferOffset);
 metadataWriter.resetAndTruncate(realMark.nextChunkIndex - 1);
 }
 
+private void truncate(long toFileSize, long toBufferOffset)
+{
+try
+{
+channel.truncate(toFileSize);
+lastFlushOffset = toBufferOffset;
+}
+catch (IOException e)
+{
+throw new FSWriteError(e, getPath());
+}
+}
+
 /**
  * Seek to the offset where next compressed data chunk should be stored.
  */

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3a713827/src/java/org/apache/cassandra/io/util/SequentialWriter.java
--
diff --git a/src/java/org/apache/cassandra/io/util/SequentialWriter.java 
b/src/java/org/apache/cassandra/io/util/SequentialWriter.java
index 0c39469..452318e 100644
--- a/src/java/org/apache/cassandra/io/util/SequentialWriter.java
+++ b/src/java/org/apache/cassandra/io/util/SequentialWriter.java
@@ -430,6 +430,7 @@ public class SequentialWriter extends OutputStream 
implements WritableByteChanne
 throw new FSReadError(e, getPath());
 }
 
+bufferOffset = truncateTarget;
 resetBuffer();
 }
 
@@ -443,6 +444,7 @@ public class SequentialWriter extends OutputStream 
implements WritableByteChanne
 try
 {
 channel.truncate(toSize);
+lastFlushOffset = toSize;
 }
 catch (IOException e)
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3a713827/test/unit/org/apache/cassandra/io/compress/CompressedSequentialWriterReopenTest.java

[04/10] cassandra git commit: Fix compaction failure caused by reading un-flushed data

2018-05-01 Thread jzhuang
Fix compaction failure caused by reading un-flushed data

patch by Jay Zhuang; reviewed by Marcus Eriksson for CASSANDRA-12743


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3a713827
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3a713827
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3a713827

Branch: refs/heads/trunk
Commit: 3a713827f48399f389ea851a19b8ec8cd2cc5773
Parents: 334dca9
Author: Jay Zhuang 
Authored: Sat Apr 21 11:15:06 2018 -0700
Committer: Jay Zhuang 
Committed: Tue May 1 15:07:01 2018 -0700

--
 CHANGES.txt |   1 +
 .../io/compress/CompressedSequentialWriter.java |  17 ++-
 .../cassandra/io/util/SequentialWriter.java |   2 +
 .../CompressedSequentialWriterReopenTest.java   | 153 +++
 .../CompressedSequentialWriterTest.java |  52 +++
 .../cassandra/io/util/SequentialWriterTest.java |  41 +
 6 files changed, 264 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3a713827/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 5f6189f..22ee346 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.13
+ * Fix compaction failure caused by reading un-flushed data (CASSANDRA-12743)
  * Use Bounds instead of Range for sstables in anticompaction (CASSANDRA-14411)
  * Fix JSON queries with IN restrictions and ORDER BY clause (CASSANDRA-14286)
  * CQL fromJson(null) throws NullPointerException (CASSANDRA-13891)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3a713827/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
--
diff --git 
a/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java 
b/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
index 9c7c776..a7f9bb4 100644
--- a/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
+++ b/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
@@ -129,7 +129,7 @@ public class CompressedSequentialWriter extends 
SequentialWriter
 // write corresponding checksum
 compressed.rewind();
 crcMetadata.appendDirect(compressed, true);
-lastFlushOffset += compressedLength + 4;
+lastFlushOffset = uncompressedSize;
 
 // adjust our bufferOffset to account for the new uncompressed 
data we've now written out
 resetBuffer();
@@ -235,10 +235,23 @@ public class CompressedSequentialWriter extends 
SequentialWriter
 chunkCount = realMark.nextChunkIndex - 1;
 
 // truncate data and index file
-truncate(chunkOffset);
+truncate(chunkOffset, bufferOffset);
 metadataWriter.resetAndTruncate(realMark.nextChunkIndex - 1);
 }
 
+private void truncate(long toFileSize, long toBufferOffset)
+{
+try
+{
+channel.truncate(toFileSize);
+lastFlushOffset = toBufferOffset;
+}
+catch (IOException e)
+{
+throw new FSWriteError(e, getPath());
+}
+}
+
 /**
  * Seek to the offset where next compressed data chunk should be stored.
  */

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3a713827/src/java/org/apache/cassandra/io/util/SequentialWriter.java
--
diff --git a/src/java/org/apache/cassandra/io/util/SequentialWriter.java 
b/src/java/org/apache/cassandra/io/util/SequentialWriter.java
index 0c39469..452318e 100644
--- a/src/java/org/apache/cassandra/io/util/SequentialWriter.java
+++ b/src/java/org/apache/cassandra/io/util/SequentialWriter.java
@@ -430,6 +430,7 @@ public class SequentialWriter extends OutputStream 
implements WritableByteChanne
 throw new FSReadError(e, getPath());
 }
 
+bufferOffset = truncateTarget;
 resetBuffer();
 }
 
@@ -443,6 +444,7 @@ public class SequentialWriter extends OutputStream 
implements WritableByteChanne
 try
 {
 channel.truncate(toSize);
+lastFlushOffset = toSize;
 }
 catch (IOException e)
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3a713827/test/unit/org/apache/cassandra/io/compress/CompressedSequentialWriterReopenTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/io/compress/CompressedSequentialWriterReopenTest.java
 
b/test/unit/org/apache/cassandra/io/compress/CompressedSequentialWriterReopenTest.java
new file mode 

[jira] [Commented] (CASSANDRA-14431) Replace deprecated junit.framework.Assert usages with org.junit.Assert

2018-05-01 Thread Iuri Sitinschi (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16460165#comment-16460165
 ] 

Iuri Sitinschi commented on CASSANDRA-14431:


[~jasobrown] Submitted a patch. This is my first contribution to Cassandra, 
so please correct me if anything is wrong.

> Replace deprecated junit.framework.Assert usages with org.junit.Assert
> --
>
> Key: CASSANDRA-14431
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14431
> Project: Cassandra
>  Issue Type: Test
>Reporter: Iuri Sitinschi
>Priority: Minor
> Attachments: 14431-trunk.txt
>
>
> I found a lot of tests that are still using the old, deprecated JUnit class 
> *junit.framework.Assert*. I suggest replacing it with the recommended 
> *org.junit.Assert*. I can prepare a patch as soon as I get the green light.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14431) Replace deprecated junit.framework.Assert usages with org.junit.Assert

2018-05-01 Thread Iuri Sitinschi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Iuri Sitinschi updated CASSANDRA-14431:
---
Attachment: 14431-trunk.txt
Status: Patch Available  (was: Open)

> Replace deprecated junit.framework.Assert usages with org.junit.Assert
> --
>
> Key: CASSANDRA-14431
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14431
> Project: Cassandra
>  Issue Type: Test
>Reporter: Iuri Sitinschi
>Priority: Minor
> Attachments: 14431-trunk.txt
>
>
> I found a lot of tests that are still using the old, deprecated JUnit class 
> *junit.framework.Assert*. I suggest replacing it with the recommended 
> *org.junit.Assert*. I can prepare a patch as soon as I get the green light.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14428) Run ant eclipse-warnings in circleci

2018-05-01 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16460128#comment-16460128
 ] 

Blake Eggleston commented on CASSANDRA-14428:
-

+1 to your fixes of my code as well

> Run ant eclipse-warnings in circleci
> 
>
> Key: CASSANDRA-14428
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14428
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Major
> Fix For: 4.x
>
>
> We should run ant eclipse-warnings in circle-ci



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14346) Scheduled Repair in Cassandra

2018-05-01 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459936#comment-16459936
 ] 

Blake Eggleston commented on CASSANDRA-14346:
-

Not yet [~michaelsembwever], I'll try to create some in the next few days.

> Scheduled Repair in Cassandra
> -
>
> Key: CASSANDRA-14346
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14346
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Repair
>Reporter: Joseph Lynch
>Priority: Major
>  Labels: CommunityFeedbackRequested
> Fix For: 4.0
>
> Attachments: ScheduledRepairV1_20180327.pdf
>
>
> There have been many attempts to automate repair in Cassandra, which makes 
> sense given that it is necessary to give our users eventual consistency. Most 
> recently CASSANDRA-10070, CASSANDRA-8911 and CASSANDRA-13924 have all looked 
> for ways to solve this problem.
> At Netflix we've built a scheduled repair service within Priam (our sidecar), 
> which we spoke about last year at NGCC. Given the positive feedback at NGCC 
> we focussed on getting it production ready and have now been using it in 
> production to repair hundreds of clusters, tens of thousands of nodes, and 
> petabytes of data for the past six months. Also based on feedback at NGCC we 
> have invested effort in figuring out how to integrate this natively into 
> Cassandra rather than open sourcing it as an external service (e.g. in Priam).
> As such, [~vinaykumarcse] and I would like to re-work and merge our 
> implementation into Cassandra, and have created a [design 
> document|https://docs.google.com/document/d/1RV4rOrG1gwlD5IljmrIq_t45rz7H3xs9GbFSEyGzEtM/edit?usp=sharing]
>  showing how we plan to make it happen, including the user interface.
> As we work on the code migration from Priam to Cassandra, any feedback would 
> be greatly appreciated about the interface or v1 implementation features. I 
> have tried to call out in the document features which we explicitly consider 
> future work (as well as a path forward to implement them in the future) 
> because I would very much like to get this done before the 4.0 merge window 
> closes, and to do that I think aggressively pruning scope is going to be a 
> necessity.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14431) Replace deprecated junit.framework.Assert usages with org.junit.Assert

2018-05-01 Thread Iuri Sitinschi (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459858#comment-16459858
 ] 

Iuri Sitinschi commented on CASSANDRA-14431:


I don't see how to assign it to myself; I probably don't have the necessary 
permissions. Consider it assigned ;)

> Replace deprecated junit.framework.Assert usages with org.junit.Assert
> --
>
> Key: CASSANDRA-14431
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14431
> Project: Cassandra
>  Issue Type: Test
>Reporter: Iuri Sitinschi
>Priority: Minor
>
> I found a lot of tests that are still using the old, deprecated JUnit class 
> *junit.framework.Assert*. I suggest replacing it with the recommended 
> *org.junit.Assert*. I can prepare a patch as soon as I get the green light.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14420) dtests not determining C* version correctly

2018-05-01 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459856#comment-16459856
 ] 

Blake Eggleston commented on CASSANDRA-14420:
-

I have a few notes:

* could we rename parse_dtest_config to dtest_config? While I realize the 
fixture function itself does the parsing, it seems a little strange to pass 
the dtest config into the misc setup methods under that name
* you could probably set the parse_dtest_config fixture scope to something like 
module or session, so it's not reinstantiated for every test
* auth_test:TestAuthRoles.role has a mutable default argument, which can lead to 
difficult-to-diagnose bugs. Its default should be None, then evaluated in the 
function body as {{options = options or {}}}
* I don't feel too strongly about this, but it looks like the 
parse_dtest_config argument is only used in a handful of 
fixture_dtest_setup_override implementations. Maybe it would be better to have 
a separate fixture setup for the places where we need to consult the config? On 
the other hand, passing the config into one of the main setup methods seems like 
a reasonable thing to do. WDYT?
* There's a {{parse_dtest_config}} definition in 
user_functions_test:TestUserFunctions that's just behaving as a pass-through. 
Is this left over from some debugging, or is there a reason it's there? If it's 
doing something, could you add a comment explaining what?

> dtests not determining C* version correctly
> ---
>
> Key: CASSANDRA-14420
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14420
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Major
>
> In the course of CASSANDRA-14134, the means of extracting the C* version 
> under test before starting a cluster became broken. This is necessary in 
> cases where we want to gate values in cassandra.yaml based on version, so a 
> couple of tests are affected. The specifics are that the global 
> {{CASSANDRA_VERSION_FROM_BUILD}} was hardcoded to '4.0' and the ways in which 
> the various tests use it have meant that it was undetected until now.
> Also, the {{fixture_since}} which we use to implement the {{@since}} 
> annotation is broken when a {{--cassandra-version}} is supplied, rather than 
> {{--cassandra-dir}}, meaning testing against released versions from git isn't 
> working right now.
> Tests directly affected:
>  * {{auth_test.py}} - CASSANDRA-13985 added some gating of yaml props and 
> additional checks on CQL results based on the build version. These failed on 
> 3.11, which is how this issue was uncovered, but they're also broken on 2.2 
> on builds.apache.org
>  * {{user_functions_test.py}} - gates setting a yaml property when version < 
> 3.0. Failing on 2.2.
>  * {{upgrade_tests}} - a number of these use the variable, but I don't think 
> they're actually being run at the moment.
>  * {{repair_tests/repair_test.py}}, {{replace_address_test.py}} & 
> {{thrift_test}} all use the global, but only to verify that the version is 
> not 3.9. As we're not running CI for that version, no-one noticed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14431) Replace deprecated junit.framework.Assert usages with org.junit.Assert

2018-05-01 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459847#comment-16459847
 ] 

Jason Brown commented on CASSANDRA-14431:
-

[~iuri_sitinschi] go for it!

> Replace deprecated junit.framework.Assert usages with org.junit.Assert
> --
>
> Key: CASSANDRA-14431
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14431
> Project: Cassandra
>  Issue Type: Test
>Reporter: Iuri Sitinschi
>Priority: Minor
>
> I found a lot of tests that are still using the old, deprecated JUnit class 
> *junit.framework.Assert*. I suggest replacing it with the recommended 
> *org.junit.Assert*. I can prepare a patch as soon as I get the green light.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-14431) Replace deprecated junit.framework.Assert usages with org.junit.Assert

2018-05-01 Thread Iuri Sitinschi (JIRA)
Iuri Sitinschi created CASSANDRA-14431:
--

 Summary: Replace deprecated junit.framework.Assert usages with 
org.junit.Assert
 Key: CASSANDRA-14431
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14431
 Project: Cassandra
  Issue Type: Test
Reporter: Iuri Sitinschi


I found a lot of tests that are still using the old, deprecated JUnit class 
*junit.framework.Assert*. I suggest replacing it with the recommended 
*org.junit.Assert*. I can prepare a patch as soon as I get the green light.
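
For illustration only, the swap is mechanical; a minimal sketch with a 
hypothetical test class (not taken from the Cassandra tree), assuming JUnit 4 
is on the classpath:

{code}
// Before: the deprecated JUnit 3 class
//   import junit.framework.Assert;
// After: the supported JUnit 4 class, ideally via static imports
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

// Hypothetical test class, used only to show the import/usage change
public class AssertMigrationExampleTest
{
    @Test
    public void lowerCasesKeyspaceName()
    {
        String ks = "Keyspace1".toLowerCase();
        assertEquals("keyspace1", ks);
        assertTrue(ks.length() > 0);
    }
}
{code}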



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14420) dtests not determining C* version correctly

2018-05-01 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-14420:

Reviewer: Blake Eggleston

> dtests not determining C* version correctly
> ---
>
> Key: CASSANDRA-14420
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14420
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Major
>
> In the course of CASSANDRA-14134, the means of extracting the C* version 
> under test before starting a cluster became broken. This is necessary in 
> cases where we want to gate values in cassandra.yaml based on version, so a 
> couple of tests are affected. The specifics are that the global 
> {{CASSANDRA_VERSION_FROM_BUILD}} was hardcoded to '4.0' and the ways in which 
> the various tests use it have meant that it was undetected until now.
> Also, the {{fixture_since}} which we use to implement the {{@since}} 
> annotation is broken when a {{--cassandra-version}} is supplied, rather than 
> {{--cassandra-dir}}, meaning testing against released versions from git isn't 
> working right now.
> Tests directly affected:
>  * {{auth_test.py}} - CASSANDRA-13985 added some gating of yaml props and 
> additional checks on CQL results based on the build version. These failed on 
> 3.11, which is how this issue was uncovered, but they're also broken on 2.2 
> on builds.apache.org
>  * {{user_functions_test.py}} - gates setting a yaml property when version < 
> 3.0. Failing on 2.2.
>  * {{upgrade_tests}} - a number of these use the variable, but I don't think 
> they're actually being run at the moment.
>  * {{repair_tests/repair_test.py}}, {{replace_address_test.py}} & 
> {{thrift_test}} all use the global, but only to verify that the version is 
> not 3.9. As we're not running CI for that version, no-one noticed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2018-05-01 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459637#comment-16459637
 ] 

mck edited comment on CASSANDRA-10751 at 5/1/18 11:44 AM:
--

Sorry [~cscetbon] that this got completely forgotten. The patch makes sense.

I've put your patch into relevant branches, and will commit once they go green.

|| Branch || uTest || dTest ||
|[cassandra-2.2_10751|https://github.com/thelastpickle/cassandra/tree/mck/cassandra-2.2_10751]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-2.2_10751.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-2.2_10751]|
 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/534/
 |
|[cassandra-3.0_10751|https://github.com/thelastpickle/cassandra/tree/mck/cassandra-3.0_10751]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.0_10751.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.0_10751]|
 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/535/
 |
|[cassandra-3.11_10751|https://github.com/thelastpickle/cassandra/tree/mck/cassandra-3.11_10751]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.11_10751.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.11_10751]|
 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/536/
 |
|[trunk_10751|https://github.com/thelastpickle/cassandra/tree/mck/trunk_10751]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Ftrunk_10751.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Ftrunk_10751]|
 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/537/
 |


was (Author: michaelsembwever):
Sorry [~cscetbon] that this got completely forgotten. The patch makes sense.

I've put your patch into relevant branches, and will commit once they go green.

Here's the patch for trunk, 3.0 and 3.11:
|| Branch || uTest || dTest ||
|[cassandra-2.2_10751|https://github.com/thelastpickle/cassandra/tree/mck/cassandra-2.2_10751]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-2.2_10751.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-2.2_10751]|
 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/534/
 |
|[cassandra-3.0_10751|https://github.com/thelastpickle/cassandra/tree/mck/cassandra-3.0_10751]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.0_10751.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.0_10751]|
 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/535/
 |
|[cassandra-3.11_10751|https://github.com/thelastpickle/cassandra/tree/mck/cassandra-3.11_10751]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.11_10751.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.11_10751]|
 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/536/
 |
|[trunk_10751|https://github.com/thelastpickle/cassandra/tree/mck/trunk_10751]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Ftrunk_10751.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Ftrunk_10751]|
 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/537/
 |

> "Pool is shutdown" error when running Hadoop jobs on Yarn
> -
>
> Key: CASSANDRA-10751
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10751
> Project: Cassandra
>  Issue Type: Bug
> Environment: Hadoop 2.7.1 (HDP 2.3.2)
> Cassandra 2.1.11
>Reporter: Cyril Scetbon
>Assignee: Cyril Scetbon
>Priority: Major
> Attachments: CASSANDRA-10751-2.2.patch, CASSANDRA-10751-3.0.patch, 
> output.log
>
>
> Trying to execute a Hadoop job on Yarn, I get errors from Cassandra's 
> internal code. It seems that connections are shut down, but we can't 
> understand why ...
> Here is an extract of the errors. I have also attached a file with the 
> complete debug logs.
> {code}
> 15/11/22 20:05:54 [main]: DEBUG core.RequestHandler: Error querying 
> node006.internal.net/192.168.12.22:9042, trying next host (error is: 
> com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown)
> Failed with exception java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> 15/11/22 

[jira] [Commented] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2018-05-01 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459637#comment-16459637
 ] 

mck commented on CASSANDRA-10751:
-

Sorry [~cscetbon] that this got completely forgotten. The patch makes sense.

I've put your patch into relevant branches, and will commit once they go green.

Here's the patch for trunk, 3.0 and 3.11:
|| Branch || uTest || dTest ||
|[cassandra-2.2_10751|https://github.com/thelastpickle/cassandra/tree/mck/cassandra-2.2_10751]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-2.2_10751.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-2.2_10751]|
 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/534/
 |
|[cassandra-3.0_10751|https://github.com/thelastpickle/cassandra/tree/mck/cassandra-3.0_10751]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.0_10751.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.0_10751]|
 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/535/
 |
|[cassandra-3.11_10751|https://github.com/thelastpickle/cassandra/tree/mck/cassandra-3.11_10751]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.11_10751.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.11_10751]|
 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/536/
 |
|[trunk_10751|https://github.com/thelastpickle/cassandra/tree/mck/trunk_10751]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Ftrunk_10751.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Ftrunk_10751]|
 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/537/
 |

> "Pool is shutdown" error when running Hadoop jobs on Yarn
> -
>
> Key: CASSANDRA-10751
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10751
> Project: Cassandra
>  Issue Type: Bug
> Environment: Hadoop 2.7.1 (HDP 2.3.2)
> Cassandra 2.1.11
>Reporter: Cyril Scetbon
>Assignee: Cyril Scetbon
>Priority: Major
> Attachments: CASSANDRA-10751-2.2.patch, CASSANDRA-10751-3.0.patch, 
> output.log
>
>
> Trying to execute a Hadoop job on Yarn, I get errors from Cassandra's 
> internal code. It seems that connections are shut down, but we can't 
> understand why ...
> Here is an extract of the errors. I have also attached a file with the 
> complete debug logs.
> {code}
> 15/11/22 20:05:54 [main]: DEBUG core.RequestHandler: Error querying 
> node006.internal.net/192.168.12.22:9042, trying next host (error is: 
> com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown)
> Failed with exception java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> 15/11/22 20:05:54 [main]: ERROR CliDriver: Failed with exception 
> java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> java.io.IOException: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:508)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:415)
>   at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:140)
>   at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1672)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> 

[jira] [Updated] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2018-05-01 Thread mck (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-10751:

Reviewer: Morten Kuhl  (was: Alex Liu)

> "Pool is shutdown" error when running Hadoop jobs on Yarn
> -
>
> Key: CASSANDRA-10751
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10751
> Project: Cassandra
>  Issue Type: Bug
> Environment: Hadoop 2.7.1 (HDP 2.3.2)
> Cassandra 2.1.11
>Reporter: Cyril Scetbon
>Assignee: Cyril Scetbon
>Priority: Major
> Attachments: CASSANDRA-10751-2.2.patch, CASSANDRA-10751-3.0.patch, 
> output.log
>
>
> Trying to execute a Hadoop job on Yarn, I get errors from Cassandra's 
> internal code. It seems that connections are shut down, but we can't 
> understand why ...
> Here is an extract of the errors. I have also attached a file with the 
> complete debug logs.
> {code}
> 15/11/22 20:05:54 [main]: DEBUG core.RequestHandler: Error querying 
> node006.internal.net/192.168.12.22:9042, trying next host (error is: 
> com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown)
> Failed with exception java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> 15/11/22 20:05:54 [main]: ERROR CliDriver: Failed with exception 
> java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> java.io.IOException: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:508)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:415)
>   at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:140)
>   at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1672)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.cassandra.input.cql.HiveCqlInputFormat.getRecordReader(HiveCqlInputFormat.java:132)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit.getRecordReader(FetchOperator.java:674)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:324)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:446)
>   ... 15 more
> Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All 
> host(s) tried for query failed (tried: 
> node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
>   at 
> com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:214)
>   at 
> 

[jira] [Updated] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2018-05-01 Thread mck (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-10751:

Reviewer: mck  (was: Morten Kuhl)

> "Pool is shutdown" error when running Hadoop jobs on Yarn
> -
>
> Key: CASSANDRA-10751
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10751
> Project: Cassandra
>  Issue Type: Bug
> Environment: Hadoop 2.7.1 (HDP 2.3.2)
> Cassandra 2.1.11
>Reporter: Cyril Scetbon
>Assignee: Cyril Scetbon
>Priority: Major
> Attachments: CASSANDRA-10751-2.2.patch, CASSANDRA-10751-3.0.patch, 
> output.log
>
>
> Trying to execute a Hadoop job on Yarn, I get errors from Cassandra's 
> internal code. It seems that connections are shut down, but we can't 
> understand why ...
> Here is an extract of the errors. I have also attached a file with the 
> complete debug logs.
> {code}
> 15/11/22 20:05:54 [main]: DEBUG core.RequestHandler: Error querying 
> node006.internal.net/192.168.12.22:9042, trying next host (error is: 
> com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown)
> Failed with exception java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> 15/11/22 20:05:54 [main]: ERROR CliDriver: Failed with exception 
> java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> java.io.IOException: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:508)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:415)
>   at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:140)
>   at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1672)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.cassandra.input.cql.HiveCqlInputFormat.getRecordReader(HiveCqlInputFormat.java:132)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit.getRecordReader(FetchOperator.java:674)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:324)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:446)
>   ... 15 more
> Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All 
> host(s) tried for query failed (tried: 
> node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
>   at 
> com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:214)
>   at 
> 

[jira] [Commented] (CASSANDRA-13426) Make all DDL statements idempotent and not dependent on global state

2018-05-01 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459608#comment-16459608
 ] 

Sam Tunnicliffe commented on CASSANDRA-13426:
-

Cool, latest changes lgtm. I'll give it a final pass once the diffing 
optimisations are done, but +1 so far.

> Make all DDL statements idempotent and not dependent on global state
> 
>
> Key: CASSANDRA-13426
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13426
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Distributed Metadata
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Major
> Fix For: 4.0
>
>
> A follow-up to CASSANDRA-9425 and a pre-requisite for CASSANDRA-10699.
> It's necessary for the latter to be able to apply any DDL statement several 
> times without side-effects. As part of the ticket I think we should also 
> clean up validation logic and our error texts. One example is varying 
> treatment of missing keyspace for DROP TABLE/INDEX/etc. statements with IF 
> EXISTS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-11163) Summaries are needlessly rebuilt when the BF FP ratio is changed

2018-05-01 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459598#comment-16459598
 ] 

Aleksey Yeschenko commented on CASSANDRA-11163:
---

bq. Yes, I should have noticed that SSTableReaderTest was a new failure in  
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-3.0-test-all/248/#showFailuresLink
 

Realistically, you shouldn't have. Looking at Circle should be sufficient. I 
just assumed that it would break on Circle just like it broke in our internal 
CI and Jenkins, and that was wrong. My apologies for the somewhat 
passive-aggressive, righteous tone.

If Circle is green, we should be free to commit, and we should be checking with 
ASF Jenkins from time to time, but it should not be required for every commit.

> Summaries are needlessly rebuilt when the BF FP ratio is changed
> 
>
> Key: CASSANDRA-11163
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11163
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Brandon Williams
>Assignee: Kurt Greaves
>Priority: Major
> Fix For: 4.0, 3.0.17, 3.11.3
>
>
> This is from trunk, but I also saw this happen on 2.0:
> Before:
> {noformat}
> root@bw-1:/srv/cassandra# ls -ltr 
> /var/lib/cassandra/data/keyspace1/standard1-071efdc0d11811e590c3413ee28a6c90/
> total 221460
> drwxr-xr-x 2 root root  4096 Feb 11 23:34 backups
> -rw-r--r-- 1 root root80 Feb 11 23:50 ma-6-big-TOC.txt
> -rw-r--r-- 1 root root 26518 Feb 11 23:50 ma-6-big-Summary.db
> -rw-r--r-- 1 root root 10264 Feb 11 23:50 ma-6-big-Statistics.db
> -rw-r--r-- 1 root root   2607705 Feb 11 23:50 ma-6-big-Index.db
> -rw-r--r-- 1 root root192440 Feb 11 23:50 ma-6-big-Filter.db
> -rw-r--r-- 1 root root10 Feb 11 23:50 ma-6-big-Digest.crc32
> -rw-r--r-- 1 root root  35212125 Feb 11 23:50 ma-6-big-Data.db
> -rw-r--r-- 1 root root  2156 Feb 11 23:50 ma-6-big-CRC.db
> -rw-r--r-- 1 root root80 Feb 11 23:50 ma-7-big-TOC.txt
> -rw-r--r-- 1 root root 26518 Feb 11 23:50 ma-7-big-Summary.db
> -rw-r--r-- 1 root root 10264 Feb 11 23:50 ma-7-big-Statistics.db
> -rw-r--r-- 1 root root   2607614 Feb 11 23:50 ma-7-big-Index.db
> -rw-r--r-- 1 root root192432 Feb 11 23:50 ma-7-big-Filter.db
> -rw-r--r-- 1 root root 9 Feb 11 23:50 ma-7-big-Digest.crc32
> -rw-r--r-- 1 root root  35190400 Feb 11 23:50 ma-7-big-Data.db
> -rw-r--r-- 1 root root  2152 Feb 11 23:50 ma-7-big-CRC.db
> -rw-r--r-- 1 root root80 Feb 11 23:50 ma-5-big-TOC.txt
> -rw-r--r-- 1 root root104178 Feb 11 23:50 ma-5-big-Summary.db
> -rw-r--r-- 1 root root 10264 Feb 11 23:50 ma-5-big-Statistics.db
> -rw-r--r-- 1 root root  10289077 Feb 11 23:50 ma-5-big-Index.db
> -rw-r--r-- 1 root root757384 Feb 11 23:50 ma-5-big-Filter.db
> -rw-r--r-- 1 root root 9 Feb 11 23:50 ma-5-big-Digest.crc32
> -rw-r--r-- 1 root root 139201355 Feb 11 23:50 ma-5-big-Data.db
> -rw-r--r-- 1 root root  8508 Feb 11 23:50 ma-5-big-CRC.db
> root@bw-1:/srv/cassandra# md5sum 
> /var/lib/cassandra/data/keyspace1/standard1-071efdc0d11811e590c3413ee28a6c90/ma-5-big-Summary.db
> 5fca154fc790f7cfa37e8ad6d1c7552c
> {noformat}
> BF ratio changed, node restarted:
> {noformat}
> root@bw-1:/srv/cassandra# ls -ltr 
> /var/lib/cassandra/data/keyspace1/standard1-071efdc0d11811e590c3413ee28a6c90/
> total 242168
> drwxr-xr-x 2 root root  4096 Feb 11 23:34 backups
> -rw-r--r-- 1 root root80 Feb 11 23:50 ma-6-big-TOC.txt
> -rw-r--r-- 1 root root 10264 Feb 11 23:50 ma-6-big-Statistics.db
> -rw-r--r-- 1 root root   2607705 Feb 11 23:50 ma-6-big-Index.db
> -rw-r--r-- 1 root root192440 Feb 11 23:50 ma-6-big-Filter.db
> -rw-r--r-- 1 root root10 Feb 11 23:50 ma-6-big-Digest.crc32
> -rw-r--r-- 1 root root  35212125 Feb 11 23:50 ma-6-big-Data.db
> -rw-r--r-- 1 root root  2156 Feb 11 23:50 ma-6-big-CRC.db
> -rw-r--r-- 1 root root80 Feb 11 23:50 ma-7-big-TOC.txt
> -rw-r--r-- 1 root root 10264 Feb 11 23:50 ma-7-big-Statistics.db
> -rw-r--r-- 1 root root   2607614 Feb 11 23:50 ma-7-big-Index.db
> -rw-r--r-- 1 root root192432 Feb 11 23:50 ma-7-big-Filter.db
> -rw-r--r-- 1 root root 9 Feb 11 23:50 ma-7-big-Digest.crc32
> -rw-r--r-- 1 root root  35190400 Feb 11 23:50 ma-7-big-Data.db
> -rw-r--r-- 1 root root  2152 Feb 11 23:50 ma-7-big-CRC.db
> -rw-r--r-- 1 root root80 Feb 11 23:50 ma-5-big-TOC.txt
> -rw-r--r-- 1 root root 10264 Feb 11 23:50 ma-5-big-Statistics.db
> -rw-r--r-- 1 root root  10289077 Feb 11 23:50 ma-5-big-Index.db
> -rw-r--r-- 1 root root757384 Feb 11 23:50 ma-5-big-Filter.db
> -rw-r--r-- 1 root root 9 Feb 11 23:50 ma-5-big-Digest.crc32
> -rw-r--r-- 1 root root 139201355 Feb 11 23:50 ma-5-big-Data.db
> 

[jira] [Commented] (CASSANDRA-14346) Scheduled Repair in Cassandra

2018-05-01 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459583#comment-16459583
 ] 

mck commented on CASSANDRA-14346:
-

{quote}So the jmx thing is not super difficult, and arguably something we 
should do anyway. The visibility into repair state isn’t solved by being in 
process, and is really out of scope for a discussion about the best way to 
coordinate when and where repairs are run.{quote}

+1 [~bdeggleston]! Are there any tickets for this?

> Scheduled Repair in Cassandra
> -
>
> Key: CASSANDRA-14346
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14346
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Repair
>Reporter: Joseph Lynch
>Priority: Major
>  Labels: CommunityFeedbackRequested
> Fix For: 4.0
>
> Attachments: ScheduledRepairV1_20180327.pdf
>
>
> There have been many attempts to automate repair in Cassandra, which makes 
> sense given that it is necessary to give our users eventual consistency. Most 
> recently CASSANDRA-10070, CASSANDRA-8911 and CASSANDRA-13924 have all looked 
> for ways to solve this problem.
> At Netflix we've built a scheduled repair service within Priam (our sidecar), 
> which we spoke about last year at NGCC. Given the positive feedback at NGCC 
> we focussed on getting it production ready and have now been using it in 
> production to repair hundreds of clusters, tens of thousands of nodes, and 
> petabytes of data for the past six months. Also based on feedback at NGCC we 
> have invested effort in figuring out how to integrate this natively into 
> Cassandra rather than open sourcing it as an external service (e.g. in Priam).
> As such, [~vinaykumarcse] and I would like to re-work and merge our 
> implementation into Cassandra, and have created a [design 
> document|https://docs.google.com/document/d/1RV4rOrG1gwlD5IljmrIq_t45rz7H3xs9GbFSEyGzEtM/edit?usp=sharing]
>  showing how we plan to make it happen, including the user interface.
> As we work on the code migration from Priam to Cassandra, any feedback would 
> be greatly appreciated about the interface or v1 implementation features. I 
> have tried to call out in the document features which we explicitly consider 
> future work (as well as a path forward to implement them in the future) 
> because I would very much like to get this done before the 4.0 merge window 
> closes, and to do that I think aggressively pruning scope is going to be a 
> necessity.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-11163) Summaries are needlessly rebuilt when the BF FP ratio is changed

2018-05-01 Thread Dinesh Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459476#comment-16459476
 ] 

Dinesh Joshi edited comment on CASSANDRA-11163 at 5/1/18 7:06 AM:
--

There isn't a clean or standard way to determine if a file system supports 
sub-second date/time resolution for file modifications.
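
To illustrate the point, the best one can do with plain java.nio is a 
heuristic probe along these lines (a sketch, not part of any patch; a zero 
nanosecond component is inconclusive, since the write may simply have landed 
on a second boundary):

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;

public class MtimeResolutionProbe
{
    // Best-effort heuristic: create a file and inspect the nanosecond component
    // of its modification time. Non-zero implies sub-second resolution; zero
    // proves nothing, which is exactly why there is no clean general answer.
    public static boolean looksLikeSubSecondResolution(Path dir) throws IOException
    {
        Path probe = Files.createTempFile(dir, "mtime-probe", ".tmp");
        try
        {
            FileTime mtime = Files.getLastModifiedTime(probe);
            return mtime.toInstant().getNano() != 0;
        }
        finally
        {
            Files.deleteIfExists(probe);
        }
    }
}
{code}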


was (Author: djoshi3):
There isn't a clean or standard way to determine if the file system's date/time 
resolution for file modifications.

> Summaries are needlessly rebuilt when the BF FP ratio is changed
> 
>
> Key: CASSANDRA-11163
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11163
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Brandon Williams
>Assignee: Kurt Greaves
>Priority: Major
> Fix For: 4.0, 3.0.17, 3.11.3
>
>
> This is from trunk, but I also saw this happen on 2.0:
> Before:
> {noformat}
> root@bw-1:/srv/cassandra# ls -ltr 
> /var/lib/cassandra/data/keyspace1/standard1-071efdc0d11811e590c3413ee28a6c90/
> total 221460
> drwxr-xr-x 2 root root  4096 Feb 11 23:34 backups
> -rw-r--r-- 1 root root80 Feb 11 23:50 ma-6-big-TOC.txt
> -rw-r--r-- 1 root root 26518 Feb 11 23:50 ma-6-big-Summary.db
> -rw-r--r-- 1 root root 10264 Feb 11 23:50 ma-6-big-Statistics.db
> -rw-r--r-- 1 root root   2607705 Feb 11 23:50 ma-6-big-Index.db
> -rw-r--r-- 1 root root192440 Feb 11 23:50 ma-6-big-Filter.db
> -rw-r--r-- 1 root root10 Feb 11 23:50 ma-6-big-Digest.crc32
> -rw-r--r-- 1 root root  35212125 Feb 11 23:50 ma-6-big-Data.db
> -rw-r--r-- 1 root root  2156 Feb 11 23:50 ma-6-big-CRC.db
> -rw-r--r-- 1 root root80 Feb 11 23:50 ma-7-big-TOC.txt
> -rw-r--r-- 1 root root 26518 Feb 11 23:50 ma-7-big-Summary.db
> -rw-r--r-- 1 root root 10264 Feb 11 23:50 ma-7-big-Statistics.db
> -rw-r--r-- 1 root root   2607614 Feb 11 23:50 ma-7-big-Index.db
> -rw-r--r-- 1 root root192432 Feb 11 23:50 ma-7-big-Filter.db
> -rw-r--r-- 1 root root 9 Feb 11 23:50 ma-7-big-Digest.crc32
> -rw-r--r-- 1 root root  35190400 Feb 11 23:50 ma-7-big-Data.db
> -rw-r--r-- 1 root root  2152 Feb 11 23:50 ma-7-big-CRC.db
> -rw-r--r-- 1 root root80 Feb 11 23:50 ma-5-big-TOC.txt
> -rw-r--r-- 1 root root104178 Feb 11 23:50 ma-5-big-Summary.db
> -rw-r--r-- 1 root root 10264 Feb 11 23:50 ma-5-big-Statistics.db
> -rw-r--r-- 1 root root  10289077 Feb 11 23:50 ma-5-big-Index.db
> -rw-r--r-- 1 root root757384 Feb 11 23:50 ma-5-big-Filter.db
> -rw-r--r-- 1 root root 9 Feb 11 23:50 ma-5-big-Digest.crc32
> -rw-r--r-- 1 root root 139201355 Feb 11 23:50 ma-5-big-Data.db
> -rw-r--r-- 1 root root  8508 Feb 11 23:50 ma-5-big-CRC.db
> root@bw-1:/srv/cassandra# md5sum 
> /var/lib/cassandra/data/keyspace1/standard1-071efdc0d11811e590c3413ee28a6c90/ma-5-big-Summary.db
> 5fca154fc790f7cfa37e8ad6d1c7552c
> {noformat}
> BF ratio changed, node restarted:
> {noformat}
> root@bw-1:/srv/cassandra# ls -ltr 
> /var/lib/cassandra/data/keyspace1/standard1-071efdc0d11811e590c3413ee28a6c90/
> total 242168
> drwxr-xr-x 2 root root  4096 Feb 11 23:34 backups
> -rw-r--r-- 1 root root80 Feb 11 23:50 ma-6-big-TOC.txt
> -rw-r--r-- 1 root root 10264 Feb 11 23:50 ma-6-big-Statistics.db
> -rw-r--r-- 1 root root   2607705 Feb 11 23:50 ma-6-big-Index.db
> -rw-r--r-- 1 root root192440 Feb 11 23:50 ma-6-big-Filter.db
> -rw-r--r-- 1 root root10 Feb 11 23:50 ma-6-big-Digest.crc32
> -rw-r--r-- 1 root root  35212125 Feb 11 23:50 ma-6-big-Data.db
> -rw-r--r-- 1 root root  2156 Feb 11 23:50 ma-6-big-CRC.db
> -rw-r--r-- 1 root root80 Feb 11 23:50 ma-7-big-TOC.txt
> -rw-r--r-- 1 root root 10264 Feb 11 23:50 ma-7-big-Statistics.db
> -rw-r--r-- 1 root root   2607614 Feb 11 23:50 ma-7-big-Index.db
> -rw-r--r-- 1 root root192432 Feb 11 23:50 ma-7-big-Filter.db
> -rw-r--r-- 1 root root 9 Feb 11 23:50 ma-7-big-Digest.crc32
> -rw-r--r-- 1 root root  35190400 Feb 11 23:50 ma-7-big-Data.db
> -rw-r--r-- 1 root root  2152 Feb 11 23:50 ma-7-big-CRC.db
> -rw-r--r-- 1 root root80 Feb 11 23:50 ma-5-big-TOC.txt
> -rw-r--r-- 1 root root 10264 Feb 11 23:50 ma-5-big-Statistics.db
> -rw-r--r-- 1 root root  10289077 Feb 11 23:50 ma-5-big-Index.db
> -rw-r--r-- 1 root root757384 Feb 11 23:50 ma-5-big-Filter.db
> -rw-r--r-- 1 root root 9 Feb 11 23:50 ma-5-big-Digest.crc32
> -rw-r--r-- 1 root root 139201355 Feb 11 23:50 ma-5-big-Data.db
> -rw-r--r-- 1 root root  8508 Feb 11 23:50 ma-5-big-CRC.db
> -rw-r--r-- 1 root root80 Feb 12 00:03 ma-8-big-TOC.txt
> -rw-r--r-- 1 root root 14902 Feb 12 00:03 ma-8-big-Summary.db
> -rw-r--r-- 1 root root 10264 Feb 12 00:03 ma-8-big-Statistics.db
> -rw-r--r-- 1 root root   

[jira] [Commented] (CASSANDRA-11163) Summaries are needlessly rebuilt when the BF FP ratio is changed

2018-05-01 Thread Dinesh Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459476#comment-16459476
 ] 

Dinesh Joshi commented on CASSANDRA-11163:
--

There isn't a clean or standard way to determine the file system's date/time 
resolution for file modifications.

> Summaries are needlessly rebuilt when the BF FP ratio is changed
> 
>
> Key: CASSANDRA-11163
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11163
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Brandon Williams
>Assignee: Kurt Greaves
>Priority: Major
> Fix For: 4.0, 3.0.17, 3.11.3
>
>
> This is from trunk, but I also saw this happen on 2.0:
> Before:
> {noformat}
> root@bw-1:/srv/cassandra# ls -ltr 
> /var/lib/cassandra/data/keyspace1/standard1-071efdc0d11811e590c3413ee28a6c90/
> total 221460
> drwxr-xr-x 2 root root  4096 Feb 11 23:34 backups
> -rw-r--r-- 1 root root80 Feb 11 23:50 ma-6-big-TOC.txt
> -rw-r--r-- 1 root root 26518 Feb 11 23:50 ma-6-big-Summary.db
> -rw-r--r-- 1 root root 10264 Feb 11 23:50 ma-6-big-Statistics.db
> -rw-r--r-- 1 root root   2607705 Feb 11 23:50 ma-6-big-Index.db
> -rw-r--r-- 1 root root192440 Feb 11 23:50 ma-6-big-Filter.db
> -rw-r--r-- 1 root root10 Feb 11 23:50 ma-6-big-Digest.crc32
> -rw-r--r-- 1 root root  35212125 Feb 11 23:50 ma-6-big-Data.db
> -rw-r--r-- 1 root root  2156 Feb 11 23:50 ma-6-big-CRC.db
> -rw-r--r-- 1 root root80 Feb 11 23:50 ma-7-big-TOC.txt
> -rw-r--r-- 1 root root 26518 Feb 11 23:50 ma-7-big-Summary.db
> -rw-r--r-- 1 root root 10264 Feb 11 23:50 ma-7-big-Statistics.db
> -rw-r--r-- 1 root root   2607614 Feb 11 23:50 ma-7-big-Index.db
> -rw-r--r-- 1 root root192432 Feb 11 23:50 ma-7-big-Filter.db
> -rw-r--r-- 1 root root 9 Feb 11 23:50 ma-7-big-Digest.crc32
> -rw-r--r-- 1 root root  35190400 Feb 11 23:50 ma-7-big-Data.db
> -rw-r--r-- 1 root root  2152 Feb 11 23:50 ma-7-big-CRC.db
> -rw-r--r-- 1 root root80 Feb 11 23:50 ma-5-big-TOC.txt
> -rw-r--r-- 1 root root104178 Feb 11 23:50 ma-5-big-Summary.db
> -rw-r--r-- 1 root root 10264 Feb 11 23:50 ma-5-big-Statistics.db
> -rw-r--r-- 1 root root  10289077 Feb 11 23:50 ma-5-big-Index.db
> -rw-r--r-- 1 root root757384 Feb 11 23:50 ma-5-big-Filter.db
> -rw-r--r-- 1 root root 9 Feb 11 23:50 ma-5-big-Digest.crc32
> -rw-r--r-- 1 root root 139201355 Feb 11 23:50 ma-5-big-Data.db
> -rw-r--r-- 1 root root  8508 Feb 11 23:50 ma-5-big-CRC.db
> root@bw-1:/srv/cassandra# md5sum 
> /var/lib/cassandra/data/keyspace1/standard1-071efdc0d11811e590c3413ee28a6c90/ma-5-big-Summary.db
> 5fca154fc790f7cfa37e8ad6d1c7552c
> {noformat}
> BF ratio changed, node restarted:
> {noformat}
> root@bw-1:/srv/cassandra# ls -ltr 
> /var/lib/cassandra/data/keyspace1/standard1-071efdc0d11811e590c3413ee28a6c90/
> total 242168
> drwxr-xr-x 2 root root  4096 Feb 11 23:34 backups
> -rw-r--r-- 1 root root80 Feb 11 23:50 ma-6-big-TOC.txt
> -rw-r--r-- 1 root root 10264 Feb 11 23:50 ma-6-big-Statistics.db
> -rw-r--r-- 1 root root   2607705 Feb 11 23:50 ma-6-big-Index.db
> -rw-r--r-- 1 root root192440 Feb 11 23:50 ma-6-big-Filter.db
> -rw-r--r-- 1 root root10 Feb 11 23:50 ma-6-big-Digest.crc32
> -rw-r--r-- 1 root root  35212125 Feb 11 23:50 ma-6-big-Data.db
> -rw-r--r-- 1 root root  2156 Feb 11 23:50 ma-6-big-CRC.db
> -rw-r--r-- 1 root root80 Feb 11 23:50 ma-7-big-TOC.txt
> -rw-r--r-- 1 root root 10264 Feb 11 23:50 ma-7-big-Statistics.db
> -rw-r--r-- 1 root root   2607614 Feb 11 23:50 ma-7-big-Index.db
> -rw-r--r-- 1 root root192432 Feb 11 23:50 ma-7-big-Filter.db
> -rw-r--r-- 1 root root 9 Feb 11 23:50 ma-7-big-Digest.crc32
> -rw-r--r-- 1 root root  35190400 Feb 11 23:50 ma-7-big-Data.db
> -rw-r--r-- 1 root root  2152 Feb 11 23:50 ma-7-big-CRC.db
> -rw-r--r-- 1 root root80 Feb 11 23:50 ma-5-big-TOC.txt
> -rw-r--r-- 1 root root 10264 Feb 11 23:50 ma-5-big-Statistics.db
> -rw-r--r-- 1 root root  10289077 Feb 11 23:50 ma-5-big-Index.db
> -rw-r--r-- 1 root root757384 Feb 11 23:50 ma-5-big-Filter.db
> -rw-r--r-- 1 root root 9 Feb 11 23:50 ma-5-big-Digest.crc32
> -rw-r--r-- 1 root root 139201355 Feb 11 23:50 ma-5-big-Data.db
> -rw-r--r-- 1 root root  8508 Feb 11 23:50 ma-5-big-CRC.db
> -rw-r--r-- 1 root root80 Feb 12 00:03 ma-8-big-TOC.txt
> -rw-r--r-- 1 root root 14902 Feb 12 00:03 ma-8-big-Summary.db
> -rw-r--r-- 1 root root 10264 Feb 12 00:03 ma-8-big-Statistics.db
> -rw-r--r-- 1 root root   1458631 Feb 12 00:03 ma-8-big-Index.db
> -rw-r--r-- 1 root root 10808 Feb 12 00:03 ma-8-big-Filter.db
> -rw-r--r-- 1 root root10 Feb 12 00:03 ma-8-big-Digest.crc32
> -rw-r--r-- 1 root root