svn commit: r1709061 - in /cassandra/site: publish/download/index.html publish/index.html src/settings.py

2015-10-16 Thread jake
Author: jake
Date: Fri Oct 16 17:17:11 2015
New Revision: 1709061

URL: http://svn.apache.org/viewvc?rev=1709061&view=rev
Log:
2.2.3 and 2.1.11

Modified:
cassandra/site/publish/download/index.html
cassandra/site/publish/index.html
cassandra/site/src/settings.py

Modified: cassandra/site/publish/download/index.html
URL: 
http://svn.apache.org/viewvc/cassandra/site/publish/download/index.html?rev=1709061&r1=1709060&r2=1709061&view=diff
==
--- cassandra/site/publish/download/index.html (original)
+++ cassandra/site/publish/download/index.html Fri Oct 16 17:17:11 2015
@@ -55,21 +55,21 @@
There are currently two active releases available:


-  The latest release of Apache Cassandra is 2.2.2
-  (released on 2015-10-05).  If you're just
+  The latest release of Apache Cassandra is 2.2.3
+  (released on 2015-10-16).  If you're just
   starting out and not yet in production, download this one.
 
   

  
http://www.apache.org/dyn/closer.lua/cassandra/2.2.2/apache-cassandra-2.2.2-bin.tar.gz;
+ 
href="http://www.apache.org/dyn/closer.lua/cassandra/2.2.3/apache-cassandra-2.2.3-bin.tar.gz;
  onclick="javascript: 
pageTracker._trackPageview('/clicks/binary_download');">
-apache-cassandra-2.2.2-bin.tar.gz
+apache-cassandra-2.2.3-bin.tar.gz

-   [http://www.apache.org/dist/cassandra/2.2.2/apache-cassandra-2.2.2-bin.tar.gz.asc;>PGP]
-   [http://www.apache.org/dist/cassandra/2.2.2/apache-cassandra-2.2.2-bin.tar.gz.md5;>MD5]
-   [http://www.apache.org/dist/cassandra/2.2.2/apache-cassandra-2.2.2-bin.tar.gz.sha1;>SHA1]
+   [http://www.apache.org/dist/cassandra/2.2.3/apache-cassandra-2.2.3-bin.tar.gz.asc;>PGP]
+   [http://www.apache.org/dist/cassandra/2.2.3/apache-cassandra-2.2.3-bin.tar.gz.md5;>MD5]
+   [http://www.apache.org/dist/cassandra/2.2.3/apache-cassandra-2.2.3-bin.tar.gz.sha1;>SHA1]
  
  
http://wiki.apache.org/cassandra/DebianPackaging;>Debian 
installation instructions
@@ -77,16 +77,16 @@

 

- The most stable release of Apache Cassandra is 2.1.10
- (released on 2015-10-05).  If you are in production or planning to be 
soon, download this one.
+ The most stable release of Apache Cassandra is 2.1.11
+ (released on 2015-10-16).  If you are in production or planning to be 
soon, download this one.

 

  
-   http://www.apache.org/dyn/closer.lua/cassandra/2.1.10/apache-cassandra-2.1.10-bin.tar.gz;>apache-cassandra-2.1.10-bin.tar.gz
-   [http://www.apache.org/dist/cassandra/2.1.10/apache-cassandra-2.1.10-bin.tar.gz.asc;>PGP]
-   [http://www.apache.org/dist/cassandra/2.1.10/apache-cassandra-2.1.10-bin.tar.gz.md5;>MD5]
-   [http://www.apache.org/dist/cassandra/2.1.10/apache-cassandra-2.1.10-bin.tar.gz.sha1;>SHA1]
+   http://www.apache.org/dyn/closer.lua/cassandra/2.1.11/apache-cassandra-2.1.11-bin.tar.gz;>apache-cassandra-2.1.11-bin.tar.gz
+   [http://www.apache.org/dist/cassandra/2.1.11/apache-cassandra-2.1.11-bin.tar.gz.asc;>PGP]
+   [http://www.apache.org/dist/cassandra/2.1.11/apache-cassandra-2.1.11-bin.tar.gz.md5;>MD5]
+   [http://www.apache.org/dist/cassandra/2.1.11/apache-cassandra-2.1.11-bin.tar.gz.sha1;>SHA1]
  
 
 http://wiki.apache.org/cassandra/DebianPackaging;>Debian 
installation instructions
@@ -171,20 +171,20 @@
   
 
 http://www.apache.org/dyn/closer.lua/cassandra/2.2.2/apache-cassandra-2.2.2-src.tar.gz;
+   
href="http://www.apache.org/dyn/closer.lua/cassandra/2.2.3/apache-cassandra-2.2.3-src.tar.gz;
onclick="javascript: 
pageTracker._trackPageview('/clicks/source_download');">
-  apache-cassandra-2.2.2-src.tar.gz
+  apache-cassandra-2.2.3-src.tar.gz
 
-[http://www.apache.org/dist/cassandra/2.2.2/apache-cassandra-2.2.2-src.tar.gz.asc;>PGP]
-[http://www.apache.org/dist/cassandra/2.2.2/apache-cassandra-2.2.2-src.tar.gz.md5;>MD5]
-[http://www.apache.org/dist/cassandra/2.2.2/apache-cassandra-2.2.2-src.tar.gz.sha1;>SHA1]
+[http://www.apache.org/dist/cassandra/2.2.3/apache-cassandra-2.2.3-src.tar.gz.asc;>PGP]
+[http://www.apache.org/dist/cassandra/2.2.3/apache-cassandra-2.2.3-src.tar.gz.md5;>MD5]
+[http://www.apache.org/dist/cassandra/2.2.3/apache-cassandra-2.2.3-src.tar.gz.sha1;>SHA1]
 
   
 
-http://www.apache.org/dyn/closer.lua/cassandra/2.1.10/apache-cassandra-2.1.10-src.tar.gz;>apache-cassandra-2.1.10-src.tar.gz
-[http://www.apache.org/dist/cassandra/2.1.10/apache-cassandra-2.1.10-src.tar.gz.asc;>PGP]
-[http://www.apache.org/dist/cassandra/2.1.10/apache-cassandra-2.1.10-src.tar.gz.md5;>MD5]
-[http://www.apache.org/dist/cassandra/2.1.10/apache-cassandra-2.1.10-src.tar.gz.sha1;>SHA1]
+http://www.apache.org/dyn/closer.lua/cassandra/2.1.11/apache-cassandra-2.1.11-src.tar.gz;>apache-cassandra-2.1.11-src.tar.gz
+

[jira] [Commented] (CASSANDRA-10473) fix failing dtest for select distinct with static columns on 2.2->3.0 upgrade path

2015-10-16 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14961202#comment-14961202
 ] 

Aleksey Yeschenko commented on CASSANDRA-10473:
---

+1

> fix failing dtest for select distinct with static columns on 2.2->3.0 upgrade 
> path
> --
>
> Key: CASSANDRA-10473
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10473
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Benjamin Lerer
> Fix For: 3.0.0 rc2
>
>
> {{upgrade_tests/cql_tests.py:TestCQL.static_columns_with_distinct_test}} 
> fails on the upgrade path from 2.2 to 3.0:
> http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/lastCompletedBuild/testReport/upgrade_tests.cql_tests/TestCQL/static_columns_with_distinct_test/
> Once [this dtest PR|https://github.com/riptano/cassandra-dtest/pull/586] is 
> merged, these tests should also run with this upgrade path on normal 3.0 
> jobs. Until then, you can run it with the following command:
> {code}
> SKIP=false CASSANDRA_VERSION=binary:2.2.0 UPGRADE_TO=git:cassandra-3.0 
> nosetests upgrade_tests/cql_tests.py:TestCQL.static_columns_with_distinct_test
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10473) fix failing dtest for select distinct with static columns on 2.2->3.0 upgrade path

2015-10-16 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10473:
--
Reviewer: Aleksey Yeschenko

> fix failing dtest for select distinct with static columns on 2.2->3.0 upgrade 
> path
> --
>
> Key: CASSANDRA-10473
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10473
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Benjamin Lerer
> Fix For: 3.0.0 rc2
>
>
> {{upgrade_tests/cql_tests.py:TestCQL.static_columns_with_distinct_test}} 
> fails on the upgrade path from 2.2 to 3.0:
> http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/lastCompletedBuild/testReport/upgrade_tests.cql_tests/TestCQL/static_columns_with_distinct_test/
> Once [this dtest PR|https://github.com/riptano/cassandra-dtest/pull/586] is 
> merged, these tests should also run with this upgrade path on normal 3.0 
> jobs. Until then, you can run it with the following command:
> {code}
> SKIP=false CASSANDRA_VERSION=binary:2.2.0 UPGRADE_TO=git:cassandra-3.0 
> nosetests upgrade_tests/cql_tests.py:TestCQL.static_columns_with_distinct_test
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-10545) JDK bug from CASSANDRA-8220 makes drain die early also

2015-10-16 Thread Jeremiah Jordan (JIRA)
Jeremiah Jordan created CASSANDRA-10545:
---

 Summary: JDK bug from CASSANDRA-8220 makes drain die early also
 Key: CASSANDRA-10545
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10545
 Project: Cassandra
  Issue Type: Bug
Reporter: Jeremiah Jordan
Priority: Trivial


The JDK bug from CASSANDRA-8220 makes drain die early also.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/4] cassandra git commit: Update current format sstables for LegacySSTableTest

2015-10-16 Thread samt
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 89293efc7 -> aad3ae2cb
  refs/heads/trunk 1af0db509 -> 09c94a667


Update current format sstables for LegacySSTableTest


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/89293efc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/89293efc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/89293efc

Branch: refs/heads/trunk
Commit: 89293efc7d2e099eced7a71821cb23058084befa
Parents: 0c92c52
Author: Sylvain Lebresne 
Authored: Fri Oct 16 17:18:25 2015 +0200
Committer: Sylvain Lebresne 
Committed: Fri Oct 16 17:18:45 2015 +0200

--
 .../legacy_ma_clust/ma-1-big-CompressionInfo.db | Bin 83 -> 83 bytes
 .../legacy_ma_clust/ma-1-big-Data.db| Bin 5036 -> 5218 bytes
 .../legacy_ma_clust/ma-1-big-Digest.crc32   |   2 +-
 .../legacy_ma_clust/ma-1-big-Index.db   | Bin 157553 -> 157553 bytes
 .../legacy_ma_clust/ma-1-big-Statistics.db  | Bin 7048 -> 7045 bytes
 .../legacy_ma_clust/ma-1-big-TOC.txt|  10 +-
 .../ma-1-big-CompressionInfo.db | Bin 75 -> 75 bytes
 .../legacy_ma_clust_counter/ma-1-big-Data.db| Bin 4344 -> 4538 bytes
 .../ma-1-big-Digest.crc32   |   2 +-
 .../legacy_ma_clust_counter/ma-1-big-Index.db   | Bin 157553 -> 157553 bytes
 .../ma-1-big-Statistics.db  | Bin 7057 -> 7054 bytes
 .../legacy_ma_clust_counter/ma-1-big-TOC.txt|  10 +-
 .../ma-1-big-CompressionInfo.db | Bin 43 -> 43 bytes
 .../legacy_ma_simple/ma-1-big-Data.db   | Bin 85 -> 88 bytes
 .../legacy_ma_simple/ma-1-big-Digest.crc32  |   2 +-
 .../legacy_ma_simple/ma-1-big-Index.db  | Bin 26 -> 26 bytes
 .../legacy_ma_simple/ma-1-big-Statistics.db | Bin 4601 -> 4598 bytes
 .../legacy_ma_simple/ma-1-big-TOC.txt   |  10 +-
 .../ma-1-big-CompressionInfo.db | Bin 43 -> 43 bytes
 .../legacy_ma_simple_counter/ma-1-big-Data.db   | Bin 106 -> 111 bytes
 .../ma-1-big-Digest.crc32   |   2 +-
 .../legacy_ma_simple_counter/ma-1-big-Index.db  | Bin 27 -> 27 bytes
 .../ma-1-big-Statistics.db  | Bin 4610 -> 4607 bytes
 .../legacy_ma_simple_counter/ma-1-big-TOC.txt   |  10 +-
 24 files changed, 24 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/89293efc/test/data/legacy-sstables/ma/legacy_tables/legacy_ma_clust/ma-1-big-CompressionInfo.db
--
diff --git 
a/test/data/legacy-sstables/ma/legacy_tables/legacy_ma_clust/ma-1-big-CompressionInfo.db
 
b/test/data/legacy-sstables/ma/legacy_tables/legacy_ma_clust/ma-1-big-CompressionInfo.db
index 20e807d..aae310b 100644
Binary files 
a/test/data/legacy-sstables/ma/legacy_tables/legacy_ma_clust/ma-1-big-CompressionInfo.db
 and 
b/test/data/legacy-sstables/ma/legacy_tables/legacy_ma_clust/ma-1-big-CompressionInfo.db
 differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/89293efc/test/data/legacy-sstables/ma/legacy_tables/legacy_ma_clust/ma-1-big-Data.db
--
diff --git 
a/test/data/legacy-sstables/ma/legacy_tables/legacy_ma_clust/ma-1-big-Data.db 
b/test/data/legacy-sstables/ma/legacy_tables/legacy_ma_clust/ma-1-big-Data.db
index d078596..ad9731c 100644
Binary files 
a/test/data/legacy-sstables/ma/legacy_tables/legacy_ma_clust/ma-1-big-Data.db 
and 
b/test/data/legacy-sstables/ma/legacy_tables/legacy_ma_clust/ma-1-big-Data.db 
differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/89293efc/test/data/legacy-sstables/ma/legacy_tables/legacy_ma_clust/ma-1-big-Digest.crc32
--
diff --git 
a/test/data/legacy-sstables/ma/legacy_tables/legacy_ma_clust/ma-1-big-Digest.crc32
 
b/test/data/legacy-sstables/ma/legacy_tables/legacy_ma_clust/ma-1-big-Digest.crc32
index 7e4d733..f7cb5fb 100644
--- 
a/test/data/legacy-sstables/ma/legacy_tables/legacy_ma_clust/ma-1-big-Digest.crc32
+++ 
b/test/data/legacy-sstables/ma/legacy_tables/legacy_ma_clust/ma-1-big-Digest.crc32
@@ -1 +1 @@
-311755797
\ No newline at end of file
+4135005735
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/cassandra/blob/89293efc/test/data/legacy-sstables/ma/legacy_tables/legacy_ma_clust/ma-1-big-Index.db
--
diff --git 
a/test/data/legacy-sstables/ma/legacy_tables/legacy_ma_clust/ma-1-big-Index.db 
b/test/data/legacy-sstables/ma/legacy_tables/legacy_ma_clust/ma-1-big-Index.db
index c53bc1f..55ee8d5 100644
Binary files 

[jira] [Commented] (CASSANDRA-10539) Different encodings used between nodes can cause inconsistently generated prepared statement ids

2015-10-16 Thread Andy Tolbert (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14961136#comment-14961136
 ] 

Andy Tolbert commented on CASSANDRA-10539:
--

Yep, it appears to be the case (reproduced against 3.0.0-rc1, 2.0.17 and 2.1.9; 
hadn't tried 2.2.2 but assuming it's the same).  I don't consider this too big 
of a problem since it requires different instances to be using different 
encodings, which may cause other problems.

> Different encodings used between nodes can cause inconsistently generated 
> prepared statement ids 
> -
>
> Key: CASSANDRA-10539
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10539
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Andy Tolbert
>Priority: Minor
>
> [From the java-driver mailing 
> list|https://groups.google.com/a/lists.datastax.com/forum/#!topic/java-driver-user/3Aa7s0u2ZrI]
>  / [JAVA-955|https://datastax-oss.atlassian.net/browse/JAVA-955]
> If you have nodes in your cluster that are using a different default 
> character set it's possible for nodes to generate different prepared 
> statement ids for the same 'keyspace + query string' combination.  I imagine 
> this is not a very typical or desired configuration (thus the low severity).
> This is because 
> [MD5Digest.compute(String)|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/utils/MD5Digest.java#L51-L54]
>  uses 
> [String.getBytes()|http://docs.oracle.com/javase/7/docs/api/java/lang/String.html#getBytes()]
>  which relies on the default charset.
> In the general case this is fine, but if you use some characters in your 
> query string such as 
> [Character.MAX_VALUE|http://docs.oracle.com/javase/7/docs/api/java/lang/Character.html#MAX_VALUE]
>  ('\u') the byte representation may vary based on the encoding.
> I was able to reproduce this configuring a 2-node cluster with node1 using 
> file.encoding {{UTF-8}} and node2 using file.encoding {{ISO-8859-1}}.   The 
> java-driver test that demonstrates this can be found 
> [here|https://github.com/datastax/java-driver/blob/java955/driver-core/src/test/java/com/datastax/driver/core/RetryOnUnpreparedTest.java].
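
For illustration only, a minimal standalone sketch of why the default charset 
matters here (this is not the actual {{MD5Digest}} code; the class and method 
names are made up for the example):

{code}
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;

public class CharsetDigestDemo
{
    // Hash the query string with an explicit charset; a node that calls
    // String.getBytes() with no argument effectively picks one of these
    // based on its file.encoding.
    static byte[] md5(String query, Charset charset) throws Exception
    {
        return MessageDigest.getInstance("MD5").digest(query.getBytes(charset));
    }

    public static void main(String[] args) throws Exception
    {
        String query = "SELECT * FROM ks.tbl WHERE k = '" + Character.MAX_VALUE + "'";
        byte[] utf8   = md5(query, StandardCharsets.UTF_8);
        byte[] latin1 = md5(query, StandardCharsets.ISO_8859_1);
        // Prints false: the same query string hashes to two different ids.
        System.out.println(Arrays.equals(utf8, latin1));
    }
}
{code}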



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-10539) Different encodings used between nodes can cause inconsistently generated prepared statement ids

2015-10-16 Thread Andy Tolbert (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14961136#comment-14961136
 ] 

Andy Tolbert edited comment on CASSANDRA-10539 at 10/16/15 6:23 PM:


Yep, it appears to be the case (reproduced against 3.0.0-rc1, 2.0.17 and 2.1.9; 
hadn't tried 2.2.2 but assuming it's the same).  I don't consider this too big 
of a problem since it requires different nodes to be using different encodings, 
which may cause other problems.


was (Author: andrew.tolbert):
Yep, it appears to be the case (reproduced against 3.0.0-rc1, 2.0.17 and 2.1.9; 
hadn't tried 2.2.2 but assuming it's the same).  I don't consider this too big 
of a problem since it requires different instances to be using different 
encodings, which may cause other problems.

> Different encodings used between nodes can cause inconsistently generated 
> prepared statement ids 
> -
>
> Key: CASSANDRA-10539
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10539
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Andy Tolbert
>Priority: Minor
>
> [From the java-driver mailing 
> list|https://groups.google.com/a/lists.datastax.com/forum/#!topic/java-driver-user/3Aa7s0u2ZrI]
>  / [JAVA-955|https://datastax-oss.atlassian.net/browse/JAVA-955]
> If you have nodes in your cluster that are using a different default 
> character set it's possible for nodes to generate different prepared 
> statement ids for the same 'keyspace + query string' combination.  I imagine 
> this is not a very typical or desired configuration (thus the low severity).
> This is because 
> [MD5Digest.compute(String)|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/utils/MD5Digest.java#L51-L54]
>  uses 
> [String.getBytes()|http://docs.oracle.com/javase/7/docs/api/java/lang/String.html#getBytes()]
>  which relies on the default charset.
> In the general case this is fine, but if you use some characters in your 
> query string such as 
> [Character.MAX_VALUE|http://docs.oracle.com/javase/7/docs/api/java/lang/Character.html#MAX_VALUE]
>  ('\u') the byte representation may vary based on the encoding.
> I was able to reproduce this configuring a 2-node cluster with node1 using 
> file.encoding {{UTF-8}} and node2 using file.encoding {{ISO-8859-1}}.   The 
> java-driver test that demonstrates this can be found 
> [here|https://github.com/datastax/java-driver/blob/java955/driver-core/src/test/java/com/datastax/driver/core/RetryOnUnpreparedTest.java].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10545) JDK bug from CASSANDRA-8220 makes drain die early also

2015-10-16 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-10545:

Fix Version/s: 3.0.x
   2.2.x
   2.1.x

> JDK bug from CASSANDRA-8220 makes drain die early also
> --
>
> Key: CASSANDRA-10545
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10545
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jeremiah Jordan
>Assignee: Jeremiah Jordan
>Priority: Trivial
> Fix For: 2.1.x, 2.2.x, 3.0.x
>
>
> The JDK bug from CASSANDRA-8220 makes drain die early also.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/3] cassandra git commit: Partially revert #9839 to remove reference loop

2015-10-16 Thread samt
Partially revert #9839 to remove reference loop

Patch by Sam Tunnicliffe; reviewed by Benedict Elliot Smith for
CASSANDRA-10543


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bc89bc66
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bc89bc66
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bc89bc66

Branch: refs/heads/trunk
Commit: bc89bc66cb762da2be61b92d56b48154d8bd3cbf
Parents: aad3ae2
Author: Sam Tunnicliffe 
Authored: Fri Oct 16 17:39:07 2015 +0100
Committer: Sam Tunnicliffe 
Committed: Fri Oct 16 20:19:37 2015 +0100

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |  4 +++
 .../compress/CompressedRandomAccessReader.java  | 16 +++-
 .../io/sstable/format/SSTableReader.java| 23 +
 .../cassandra/io/util/IChecksummedFile.java | 27 
 .../cassandra/io/util/ICompressedFile.java  |  2 +-
 .../apache/cassandra/io/util/SegmentedFile.java | 14 +-
 .../cassandra/schema/CompressionParams.java | 12 +
 .../miscellaneous/CrcCheckChanceTest.java   | 19 +-
 9 files changed, 64 insertions(+), 54 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bc89bc66/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 77facc4..cb4c2d8 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0-rc2
+ * Remove circular references in SegmentedFile (CASSANDRA-10543)
  * Ensure validation of indexed values only occurs once per-partition 
(CASSANDRA-10536)
  * Fix handling of static columns for range tombstones in thrift 
(CASSANDRA-10174)
  * Support empty ColumnFilter for backward compatility on empty IN 
(CASSANDRA-10471)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bc89bc66/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 4c9fc55..0b838bf 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -2044,7 +2044,11 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 {
 
TableParams.builder().crcCheckChance(crcCheckChance).build().validate();
 for (ColumnFamilyStore cfs : concatWithIndexes())
+{
 cfs.crcCheckChance.set(crcCheckChance);
+for (SSTableReader sstable : cfs.getSSTables(SSTableSet.LIVE))
+sstable.setCrcCheckChance(crcCheckChance);
+}
 }
 catch (ConfigurationException e)
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bc89bc66/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
--
diff --git 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
index b2759e6..329d932 100644
--- 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
+++ 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
@@ -23,6 +23,7 @@ import java.util.concurrent.ThreadLocalRandom;
 import java.util.zip.Checksum;
 import java.util.function.Supplier;
 
+import com.google.common.annotations.VisibleForTesting;
 import com.google.common.primitives.Ints;
 
 import org.apache.cassandra.io.FSReadError;
@@ -46,14 +47,18 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 
 // raw checksum bytes
 private ByteBuffer checksumBytes;
-private final Supplier crcCheckChanceSupplier;
+
+@VisibleForTesting
+public double getCrcCheckChance()
+{
+return metadata.parameters.getCrcCheckChance();
+}
 
 protected CompressedRandomAccessReader(Builder builder)
 {
 super(builder);
 this.metadata = builder.metadata;
 this.checksum = metadata.checksumType.newInstance();
-crcCheckChanceSupplier = builder.crcCheckChanceSupplier;
 
 if (regions == null)
 {
@@ -124,7 +129,7 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 buffer.flip();
 }
 
-if (crcCheckChanceSupplier.get() > 
ThreadLocalRandom.current().nextDouble())
+if (getCrcCheckChance() > ThreadLocalRandom.current().nextDouble())
 {
 compressed.rewind();
 metadata.checksumType.update( 

[3/3] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2015-10-16 Thread samt
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dec76593
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dec76593
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dec76593

Branch: refs/heads/trunk
Commit: dec76593f9259a7b5315c7b45e7433146bba5ec1
Parents: 09c94a6 bc89bc6
Author: Sam Tunnicliffe 
Authored: Fri Oct 16 20:24:47 2015 +0100
Committer: Sam Tunnicliffe 
Committed: Fri Oct 16 20:24:47 2015 +0100

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |  4 +++
 .../compress/CompressedRandomAccessReader.java  | 16 +++-
 .../io/sstable/format/SSTableReader.java| 23 +
 .../cassandra/io/util/IChecksummedFile.java | 27 
 .../cassandra/io/util/ICompressedFile.java  |  2 +-
 .../apache/cassandra/io/util/SegmentedFile.java | 14 +-
 .../cassandra/schema/CompressionParams.java | 12 +
 .../miscellaneous/CrcCheckChanceTest.java   | 19 +-
 9 files changed, 64 insertions(+), 54 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dec76593/CHANGES.txt
--
diff --cc CHANGES.txt
index e6f5733,cb4c2d8..801b2fb
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,9 -1,5 +1,10 @@@
 +3.2
 + * Abort in-progress queries that time out (CASSANDRA-7392)
 + * Add transparent data encryption core classes (CASSANDRA-9945)
 +
 +
  3.0-rc2
+  * Remove circular references in SegmentedFile (CASSANDRA-10543)
   * Ensure validation of indexed values only occurs once per-partition 
(CASSANDRA-10536)
   * Fix handling of static columns for range tombstones in thrift 
(CASSANDRA-10174)
   * Support empty ColumnFilter for backward compatility on empty IN 
(CASSANDRA-10471)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/dec76593/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--



[1/3] cassandra git commit: Partially revert #9839 to remove reference loop

2015-10-16 Thread samt
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 aad3ae2cb -> bc89bc66c
  refs/heads/trunk 09c94a667 -> dec76593f


Partially revert #9839 to remove reference loop

Patch by Sam Tunnicliffe; reviewed by Benedict Elliot Smith for
CASSANDRA-10543


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bc89bc66
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bc89bc66
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bc89bc66

Branch: refs/heads/cassandra-3.0
Commit: bc89bc66cb762da2be61b92d56b48154d8bd3cbf
Parents: aad3ae2
Author: Sam Tunnicliffe 
Authored: Fri Oct 16 17:39:07 2015 +0100
Committer: Sam Tunnicliffe 
Committed: Fri Oct 16 20:19:37 2015 +0100

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |  4 +++
 .../compress/CompressedRandomAccessReader.java  | 16 +++-
 .../io/sstable/format/SSTableReader.java| 23 +
 .../cassandra/io/util/IChecksummedFile.java | 27 
 .../cassandra/io/util/ICompressedFile.java  |  2 +-
 .../apache/cassandra/io/util/SegmentedFile.java | 14 +-
 .../cassandra/schema/CompressionParams.java | 12 +
 .../miscellaneous/CrcCheckChanceTest.java   | 19 +-
 9 files changed, 64 insertions(+), 54 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bc89bc66/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 77facc4..cb4c2d8 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0-rc2
+ * Remove circular references in SegmentedFile (CASSANDRA-10543)
  * Ensure validation of indexed values only occurs once per-partition 
(CASSANDRA-10536)
  * Fix handling of static columns for range tombstones in thrift 
(CASSANDRA-10174)
  * Support empty ColumnFilter for backward compatility on empty IN 
(CASSANDRA-10471)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bc89bc66/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 4c9fc55..0b838bf 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -2044,7 +2044,11 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 {
 
TableParams.builder().crcCheckChance(crcCheckChance).build().validate();
 for (ColumnFamilyStore cfs : concatWithIndexes())
+{
 cfs.crcCheckChance.set(crcCheckChance);
+for (SSTableReader sstable : cfs.getSSTables(SSTableSet.LIVE))
+sstable.setCrcCheckChance(crcCheckChance);
+}
 }
 catch (ConfigurationException e)
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bc89bc66/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
--
diff --git 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
index b2759e6..329d932 100644
--- 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
+++ 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
@@ -23,6 +23,7 @@ import java.util.concurrent.ThreadLocalRandom;
 import java.util.zip.Checksum;
 import java.util.function.Supplier;
 
+import com.google.common.annotations.VisibleForTesting;
 import com.google.common.primitives.Ints;
 
 import org.apache.cassandra.io.FSReadError;
@@ -46,14 +47,18 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 
 // raw checksum bytes
 private ByteBuffer checksumBytes;
-private final Supplier crcCheckChanceSupplier;
+
+@VisibleForTesting
+public double getCrcCheckChance()
+{
+return metadata.parameters.getCrcCheckChance();
+}
 
 protected CompressedRandomAccessReader(Builder builder)
 {
 super(builder);
 this.metadata = builder.metadata;
 this.checksum = metadata.checksumType.newInstance();
-crcCheckChanceSupplier = builder.crcCheckChanceSupplier;
 
 if (regions == null)
 {
@@ -124,7 +129,7 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 buffer.flip();
 }
 
-if (crcCheckChanceSupplier.get() > 
ThreadLocalRandom.current().nextDouble())
+if (getCrcCheckChance() > 

[jira] [Commented] (CASSANDRA-10421) Potential issue with LogTransaction as it only checks in a single directory for files

2015-10-16 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14961022#comment-14961022
 ] 

Ariel Weisberg commented on CASSANDRA-10421:


3.0 actually had a passing run just now. Looks like it started getting better 
very recently. Kudos to the people that got us there. The dtests look good to 
me on your branch.

Some tests look like they might be hard failing like 
org.apache.cassandra.io.sstable.LegacySSTableTest.testLegacyCqlTables. But 
maybe that only just got better on trunk.

I am +1 on the code. My gut says the tests are good, but can you rebase one 
more time and we can see if we can get the utests to match?

> Potential issue with LogTransaction as it only checks in a single directory 
> for files
> -
>
> Key: CASSANDRA-10421
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10421
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Stefania
>Priority: Blocker
> Fix For: 3.0.0 rc2
>
>
> When creating a new LogTransaction we try to create the new logfile in the 
> same directory as the one we are writing to, but as we use 
> {{[directories.getDirectoryForNewSSTables()|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/lifecycle/LogTransaction.java#L125]}}
>  this might end up in "any" of the configured data directories. If it does, 
> we will not be able to clean up leftovers, as we only check for files in the 
> same directory where the logfile was created: 
> https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/lifecycle/LogRecord.java#L163
> cc [~Stefania]
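
As a purely hypothetical illustration of the failure mode described above (none 
of these classes are the real LogTransaction code, which is linked in the 
description), the problem amounts to picking one of several data directories 
for the log file and then scanning only that directory for leftovers:

{code}
import java.io.File;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

public class MultiDirectoryCleanupSketch
{
    // Stand-in for "create the log file in any of the configured directories".
    static File directoryForNewFiles(List<File> dataDirectories)
    {
        return dataDirectories.get(ThreadLocalRandom.current().nextInt(dataDirectories.size()));
    }

    // Cleanup that only looks next to the log file: leftovers belonging to the
    // same transaction but written to the other data directories are missed.
    static void cleanUpLeftovers(File logFile)
    {
        File[] siblings = logFile.getParentFile().listFiles();
        if (siblings == null)
            return;
        for (File f : siblings)
            if (!f.equals(logFile))
                f.delete();
    }

    public static void main(String[] args)
    {
        List<File> dirs = Arrays.asList(new File("/data1"), new File("/data2"));
        File logFile = new File(directoryForNewFiles(dirs), "txn.log");
        cleanUpLeftovers(logFile); // anything under the directory that was not chosen survives
    }
}
{code}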



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10543) Self-reference leak in SegmentedFile

2015-10-16 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14961024#comment-14961024
 ] 

Sam Tunnicliffe commented on CASSANDRA-10543:
-

It seems like the most straightforward way to resolve this is to partially 
revert CASSANDRA-9839. In the branch linked below, I've removed 
{{IChecksummedFile}} and gone back to {{CompressedRandomAccessReader}} getting 
the crc check chance from {{CompressionMetadata.parameters}}. Checksums were 
not being made/checked for non-compressed files anyway, so we can figure out a 
better approach when we add that.

||3.0||
|[branch|https://github.com/beobal/cassandra/tree/10543-3.0]|
|[testall|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-10543-3.0-testall/]|
|[dtests|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-10543-3.0-dtest/]|
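
In condensed form, the reverted approach looks roughly like the sketch below: 
the reader asks its compression parameters for the check chance on every read 
instead of holding a supplier that points back at the owning file. The type 
and member names here are illustrative stand-ins, not the actual Cassandra 
classes.

{code}
import java.util.concurrent.ThreadLocalRandom;
import java.util.zip.CRC32;

class ChecksummedChunkReader
{
    // Illustrative stand-in for the compression parameters object.
    interface Params { double getCrcCheckChance(); }

    private final Params params;

    ChecksummedChunkReader(Params params) { this.params = params; }

    void readChunk(byte[] chunk, long expectedCrc)
    {
        // Verify roughly crcCheckChance of the chunks read; pulling the value
        // from the params each time avoids keeping a reference back to the
        // reader's owner just to supply it.
        if (params.getCrcCheckChance() > ThreadLocalRandom.current().nextDouble())
        {
            CRC32 crc = new CRC32();
            crc.update(chunk, 0, chunk.length);
            if (crc.getValue() != expectedCrc)
                throw new RuntimeException("chunk failed checksum check");
        }
    }
}
{code}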

> Self-reference leak in SegmentedFile
> 
>
> Key: CASSANDRA-10543
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10543
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Critical
> Fix For: 3.0.0 rc2
>
>
> CASSANDRA-9839, which moved {{crc_check_chance}} out of compression params 
> and made it a top-level table property, introduced a reference leak in 
> {{SegmentedFile}}. See [this 
> comment|https://issues.apache.org/jira/browse/CASSANDRA-9839?focusedCommentId=14960528=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14960528
>  ] from [~benedict]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10140) Enable GC logging by default

2015-10-16 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14961040#comment-14961040
 ] 

Ariel Weisberg commented on CASSANDRA-10140:


Chris provided an updated 2.2 patch for this so it is ready to merge.

> Enable GC logging by default
> 
>
> Key: CASSANDRA-10140
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10140
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Config
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Minor
> Attachments: CASSANDRA-10140-2-2.txt, CASSANDRA-10140.txt, 
> cassandra-2.2-10140-v2.txt
>
>
> Overhead for the GC logging is very small (with cycling logs in 7+) and it 
> provides a ton of useful information. This will also give C* diagnostic 
> tools more to work with when providing feedback, without requiring restarts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-10504) Create tests for compactionstats

2015-10-16 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14961084#comment-14961084
 ] 

Philip Thompson edited comment on CASSANDRA-10504 at 10/16/15 5:57 PM:
---

[~krummas] and [~mambocab], this test fails intermittently on 2.2 with a 
compaction hanging at 0 progress. Can you see anything wrong in the test, or 
might this be a bug?

Sometimes the original major compaction doesn't move past 0 progress, and 
other times a second compaction shows up in CompactionSummary despite 
autocompaction being disabled, and that compaction stays at 0 progress.


was (Author: philipthompson):
[~krummas] and [~mambocab], this test fails intermittently on 2.2 with a 
compaction hanging at 0 progress. Can you see anything wrong in the test, or 
might this be a bug?

> Create tests for compactionstats
> 
>
> Key: CASSANDRA-10504
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10504
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Philip Thompson
> Fix For: 3.x, 2.1.x, 2.2.x
>
>
> Creating a new ticket for compactionstats tests to avoid confusion regarding 
> release versions etc, see CASSANDRA-10427



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10524) Add ability to skip TIME_WAIT sockets on port check on Windows startup

2015-10-16 Thread Andy Tolbert (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14961115#comment-14961115
 ] 

Andy Tolbert commented on CASSANDRA-10524:
--

looks good to me!

> Add ability to skip TIME_WAIT sockets on port check on Windows startup
> --
>
> Key: CASSANDRA-10524
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10524
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Joshua McKenzie
>Assignee: Joshua McKenzie
>Priority: Trivial
>  Labels: Windows
> Fix For: 3.0.0 rc2, 2.2.4
>
> Attachments: 10524_v2.txt, win_aggressive_startup.txt
>
>
> C* sockets are often staying TIME_WAIT for up to 120 seconds (2x max segment 
> lifetime) for me in my dev environment on Windows. This is rather obnoxious 
> since it means I can't launch C* for up to 2 minutes after stopping it.
> Attaching a patch that adds a simple -a for aggressive startup to the launch 
> scripts to skip the duplicate port check from netstat when the socket is in TIME_WAIT. Also 
> snuck in some more liberal interpretation of help strings in the .ps1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10504) Create tests for compactionstats

2015-10-16 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14961188#comment-14961188
 ] 

Philip Thompson commented on CASSANDRA-10504:
-

If compaction finished, jmx.read would return an empty set. Instead I see a 
compaction occurring that is at 0 completed.

> Create tests for compactionstats
> 
>
> Key: CASSANDRA-10504
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10504
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Philip Thompson
> Fix For: 3.x, 2.1.x, 2.2.x
>
>
> Creating a new ticket for compactionstats tests to avoid confusion regarding 
> release versions etc, see CASSANDRA-10427



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-10473) fix failing dtest for select distinct with static columns on 2.2->3.0 upgrade path

2015-10-16 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14961192#comment-14961192
 ] 

Benjamin Lerer edited comment on CASSANDRA-10473 at 10/16/15 6:46 PM:
--

The patch is 
[here|https://github.com/apache/cassandra/compare/trunk...blerer:10473-3.0].

* The unit test results are 
[here|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-10473-3.0-testall/2/]
* The dtest results are 
[here|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-10473-3.0-dtest/2/]

The select distinct with static columns test is failing on CI because the 
upgrade is done from 2.2.1 to 3.0 HEAD. The patch fixes the test for the 
upgrade from 2.2.0 or 2.2.2 to 3.0 HEAD. It fails on 2.2.1 with a 
ClassCastException.

I have modified the {{NEWS.txt}} to advise users to only upgrade from 2.2.2+.


was (Author: blerer):
The patch is 
[here|https://github.com/apache/cassandra/compare/trunk...blerer:10473-3.0].

* The unit test results are [here|
* The dtest results are 
[here|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-10473-3.0-dtest/2/]

The select distinct with static columns test is failing on CI because the 
upgrade is done from 2.2.1 to 3.0 HEAD. The patch fixes the test for the 
upgrade from 2.2.0 or 2.2.2 to 3.0 HEAD. It fails on 2.2.1 with a 
ClassCastException.

I have modified the {{NEWS.txt}} to advise users to only upgrade from 2.2.2+.

> fix failing dtest for select distinct with static columns on 2.2->3.0 upgrade 
> path
> --
>
> Key: CASSANDRA-10473
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10473
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Benjamin Lerer
> Fix For: 3.0.0 rc2
>
>
> {{upgrade_tests/cql_tests.py:TestCQL.static_columns_with_distinct_test}} 
> fails on the upgrade path from 2.2 to 3.0:
> http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/lastCompletedBuild/testReport/upgrade_tests.cql_tests/TestCQL/static_columns_with_distinct_test/
> Once [this dtest PR|https://github.com/riptano/cassandra-dtest/pull/586] is 
> merged, these tests should also run with this upgrade path on normal 3.0 
> jobs. Until then, you can run it with the following command:
> {code}
> SKIP=false CASSANDRA_VERSION=binary:2.2.0 UPGRADE_TO=git:cassandra-3.0 
> nosetests upgrade_tests/cql_tests.py:TestCQL.static_columns_with_distinct_test
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10473) fix failing dtest for select distinct with static columns on 2.2->3.0 upgrade path

2015-10-16 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14961192#comment-14961192
 ] 

Benjamin Lerer commented on CASSANDRA-10473:


The patch is 
[here|https://github.com/apache/cassandra/compare/trunk...blerer:10473-3.0].

* The unit test results are [here|
* The dtest results are 
[here|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-10473-3.0-dtest/2/]

The select distinct with static columns test is failing on CI because the 
upgrade is done from 2.2.1 to 3.0 HEAD. The patch fixes the test for the 
upgrade from 2.2.0 or 2.2.2 to 3.0 HEAD. It fails on 2.2.1 with a 
ClassCastException.

I have modified the {{NEWS.txt}} to advise users to only upgrade from 2.2.2+.

> fix failing dtest for select distinct with static columns on 2.2->3.0 upgrade 
> path
> --
>
> Key: CASSANDRA-10473
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10473
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Benjamin Lerer
> Fix For: 3.0.0 rc2
>
>
> {{upgrade_tests/cql_tests.py:TestCQL.static_columns_with_distinct_test}} 
> fails on the upgrade path from 2.2 to 3.0:
> http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/lastCompletedBuild/testReport/upgrade_tests.cql_tests/TestCQL/static_columns_with_distinct_test/
> Once [this dtest PR|https://github.com/riptano/cassandra-dtest/pull/586] is 
> merged, these tests should also run with this upgrade path on normal 3.0 
> jobs. Until then, you can run it with the following command:
> {code}
> SKIP=false CASSANDRA_VERSION=binary:2.2.0 UPGRADE_TO=git:cassandra-3.0 
> nosetests upgrade_tests/cql_tests.py:TestCQL.static_columns_with_distinct_test
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[4/4] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2015-10-16 Thread samt
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/09c94a66
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/09c94a66
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/09c94a66

Branch: refs/heads/trunk
Commit: 09c94a667aa7429e2e207753e5914441552c67dc
Parents: 1af0db5 aad3ae2
Author: Sam Tunnicliffe 
Authored: Fri Oct 16 18:01:11 2015 +0100
Committer: Sam Tunnicliffe 
Committed: Fri Oct 16 18:01:11 2015 +0100

--
 CHANGES.txt |   1 +
 .../apache/cassandra/cql3/UpdateParameters.java |  27 +--
 .../cql3/statements/BatchStatement.java |  10 ++-
 .../cql3/statements/CQL3CasRequest.java |  14 ++--
 .../cql3/statements/DeleteStatement.java|   2 -
 .../cql3/statements/ModificationStatement.java  |   4 +-
 .../cql3/statements/UpdateStatement.java|   2 -
 .../cql3/statements/UpdatesCollector.java   |  20 --
 .../cassandra/io/sstable/CQLSSTableWriter.java  |   8 +--
 .../legacy_ma_clust/ma-1-big-CompressionInfo.db | Bin 83 -> 83 bytes
 .../legacy_ma_clust/ma-1-big-Data.db| Bin 5036 -> 5218 bytes
 .../legacy_ma_clust/ma-1-big-Digest.crc32   |   2 +-
 .../legacy_ma_clust/ma-1-big-Index.db   | Bin 157553 -> 157553 bytes
 .../legacy_ma_clust/ma-1-big-Statistics.db  | Bin 7048 -> 7045 bytes
 .../legacy_ma_clust/ma-1-big-TOC.txt|  10 +--
 .../ma-1-big-CompressionInfo.db | Bin 75 -> 75 bytes
 .../legacy_ma_clust_counter/ma-1-big-Data.db| Bin 4344 -> 4538 bytes
 .../ma-1-big-Digest.crc32   |   2 +-
 .../legacy_ma_clust_counter/ma-1-big-Index.db   | Bin 157553 -> 157553 bytes
 .../ma-1-big-Statistics.db  | Bin 7057 -> 7054 bytes
 .../legacy_ma_clust_counter/ma-1-big-TOC.txt|  10 +--
 .../ma-1-big-CompressionInfo.db | Bin 43 -> 43 bytes
 .../legacy_ma_simple/ma-1-big-Data.db   | Bin 85 -> 88 bytes
 .../legacy_ma_simple/ma-1-big-Digest.crc32  |   2 +-
 .../legacy_ma_simple/ma-1-big-Index.db  | Bin 26 -> 26 bytes
 .../legacy_ma_simple/ma-1-big-Statistics.db | Bin 4601 -> 4598 bytes
 .../legacy_ma_simple/ma-1-big-TOC.txt   |  10 +--
 .../ma-1-big-CompressionInfo.db | Bin 43 -> 43 bytes
 .../legacy_ma_simple_counter/ma-1-big-Data.db   | Bin 106 -> 111 bytes
 .../ma-1-big-Digest.crc32   |   2 +-
 .../legacy_ma_simple_counter/ma-1-big-Index.db  | Bin 27 -> 27 bytes
 .../ma-1-big-Statistics.db  | Bin 4610 -> 4607 bytes
 .../legacy_ma_simple_counter/ma-1-big-TOC.txt   |  10 +--
 .../validation/entities/SecondaryIndexTest.java |  72 +++
 .../index/internal/CassandraIndexTest.java  |  55 ++
 35 files changed, 188 insertions(+), 75 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/09c94a66/CHANGES.txt
--
diff --cc CHANGES.txt
index e98de55,77facc4..e6f5733
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,9 -1,5 +1,10 @@@
 +3.2
 + * Abort in-progress queries that time out (CASSANDRA-7392)
 + * Add transparent data encryption core classes (CASSANDRA-9945)
 +
 +
  3.0-rc2
+  * Ensure validation of indexed values only occurs once per-partition 
(CASSANDRA-10536)
   * Fix handling of static columns for range tombstones in thrift 
(CASSANDRA-10174)
   * Support empty ColumnFilter for backward compatility on empty IN 
(CASSANDRA-10471)
   * Remove Pig support (CASSANDRA-10542)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/09c94a66/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/09c94a66/test/unit/org/apache/cassandra/index/internal/CassandraIndexTest.java
--



[2/4] cassandra git commit: Ensure indexed values are only validated once per partition

2015-10-16 Thread samt
Ensure indexed values are only validated once per partition

Patch by Sam Tunnicliffe; reviewed by Aleksey Yeschenko for
CASSANDRA-10536


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/aad3ae2c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/aad3ae2c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/aad3ae2c

Branch: refs/heads/trunk
Commit: aad3ae2cbec85ca36d3caacbe68aebe1e552f41b
Parents: 89293ef
Author: Sam Tunnicliffe 
Authored: Fri Oct 16 13:53:52 2015 +0100
Committer: Sam Tunnicliffe 
Committed: Fri Oct 16 17:59:51 2015 +0100

--
 CHANGES.txt |  1 +
 .../apache/cassandra/cql3/UpdateParameters.java | 27 +---
 .../cql3/statements/BatchStatement.java | 10 ++-
 .../cql3/statements/CQL3CasRequest.java | 14 ++--
 .../cql3/statements/DeleteStatement.java|  2 -
 .../cql3/statements/ModificationStatement.java  |  4 +-
 .../cql3/statements/UpdateStatement.java|  2 -
 .../cql3/statements/UpdatesCollector.java   | 20 --
 .../cassandra/io/sstable/CQLSSTableWriter.java  |  8 +--
 .../validation/entities/SecondaryIndexTest.java | 72 
 .../index/internal/CassandraIndexTest.java  | 55 +++
 11 files changed, 164 insertions(+), 51 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/aad3ae2c/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index e2d9dd7..77facc4 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0-rc2
+ * Ensure validation of indexed values only occurs once per-partition 
(CASSANDRA-10536)
  * Fix handling of static columns for range tombstones in thrift 
(CASSANDRA-10174)
  * Support empty ColumnFilter for backward compatility on empty IN 
(CASSANDRA-10471)
  * Remove Pig support (CASSANDRA-10542)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/aad3ae2c/src/java/org/apache/cassandra/cql3/UpdateParameters.java
--
diff --git a/src/java/org/apache/cassandra/cql3/UpdateParameters.java 
b/src/java/org/apache/cassandra/cql3/UpdateParameters.java
index 03468f0..572365b 100644
--- a/src/java/org/apache/cassandra/cql3/UpdateParameters.java
+++ b/src/java/org/apache/cassandra/cql3/UpdateParameters.java
@@ -26,10 +26,8 @@ import org.apache.cassandra.db.*;
 import org.apache.cassandra.db.context.CounterContext;
 import org.apache.cassandra.db.filter.ColumnFilter;
 import org.apache.cassandra.db.partitions.Partition;
-import org.apache.cassandra.db.partitions.PartitionUpdate;
 import org.apache.cassandra.db.rows.*;
 import org.apache.cassandra.exceptions.InvalidRequestException;
-import org.apache.cassandra.index.SecondaryIndexManager;
 import org.apache.cassandra.utils.FBUtilities;
 
 /**
@@ -47,8 +45,6 @@ public class UpdateParameters
 
 private final DeletionTime deletionTime;
 
-private final SecondaryIndexManager indexManager;
-
 // For lists operation that require a read-before-write. Will be null 
otherwise.
 private final Map prefetchedRows;
 
@@ -63,8 +59,7 @@ public class UpdateParameters
 QueryOptions options,
 long timestamp,
 int ttl,
-Map prefetchedRows,
-boolean validateIndexedColumns)
+Map prefetchedRows)
 throws InvalidRequestException
 {
 this.metadata = metadata;
@@ -79,32 +74,12 @@ public class UpdateParameters
 
 this.prefetchedRows = prefetchedRows;
 
-// Index column validation triggers a call to Keyspace.open() which we 
want
-// to be able to avoid in some case (e.g. when using CQLSSTableWriter)
-if (validateIndexedColumns)
-{
-SecondaryIndexManager manager = 
Keyspace.openAndGetStore(metadata).indexManager;
-indexManager = manager.hasIndexes() ? manager : null;
-}
-else
-{
-indexManager = null;
-}
-
 // We use MIN_VALUE internally to mean the absence of of timestamp (in 
Selection, in sstable stats, ...), so exclude
 // it to avoid potential confusion.
 if (timestamp == Long.MIN_VALUE)
 throw new InvalidRequestException(String.format("Out of bound 
timestamp, must be in [%d, %d]", Long.MIN_VALUE + 1, Long.MAX_VALUE));
 }
 
-public void validateIndexedColumns(PartitionUpdate update)
-{
-if (indexManager == null)
-return;
-
-indexManager.validate(update);
-}
-
 public void 

[3/4] cassandra git commit: Ensure indexed values are only validated once per partition

2015-10-16 Thread samt
Ensure indexed values are only validated once per partition

Patch by Sam Tunnicliffe; reviewed by Aleksey Yeschenko for
CASSANDRA-10536


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/aad3ae2c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/aad3ae2c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/aad3ae2c

Branch: refs/heads/cassandra-3.0
Commit: aad3ae2cbec85ca36d3caacbe68aebe1e552f41b
Parents: 89293ef
Author: Sam Tunnicliffe 
Authored: Fri Oct 16 13:53:52 2015 +0100
Committer: Sam Tunnicliffe 
Committed: Fri Oct 16 17:59:51 2015 +0100

--
 CHANGES.txt |  1 +
 .../apache/cassandra/cql3/UpdateParameters.java | 27 +---
 .../cql3/statements/BatchStatement.java | 10 ++-
 .../cql3/statements/CQL3CasRequest.java | 14 ++--
 .../cql3/statements/DeleteStatement.java|  2 -
 .../cql3/statements/ModificationStatement.java  |  4 +-
 .../cql3/statements/UpdateStatement.java|  2 -
 .../cql3/statements/UpdatesCollector.java   | 20 --
 .../cassandra/io/sstable/CQLSSTableWriter.java  |  8 +--
 .../validation/entities/SecondaryIndexTest.java | 72 
 .../index/internal/CassandraIndexTest.java  | 55 +++
 11 files changed, 164 insertions(+), 51 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/aad3ae2c/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index e2d9dd7..77facc4 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0-rc2
+ * Ensure validation of indexed values only occurs once per-partition 
(CASSANDRA-10536)
  * Fix handling of static columns for range tombstones in thrift 
(CASSANDRA-10174)
  * Support empty ColumnFilter for backward compatility on empty IN 
(CASSANDRA-10471)
  * Remove Pig support (CASSANDRA-10542)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/aad3ae2c/src/java/org/apache/cassandra/cql3/UpdateParameters.java
--
diff --git a/src/java/org/apache/cassandra/cql3/UpdateParameters.java 
b/src/java/org/apache/cassandra/cql3/UpdateParameters.java
index 03468f0..572365b 100644
--- a/src/java/org/apache/cassandra/cql3/UpdateParameters.java
+++ b/src/java/org/apache/cassandra/cql3/UpdateParameters.java
@@ -26,10 +26,8 @@ import org.apache.cassandra.db.*;
 import org.apache.cassandra.db.context.CounterContext;
 import org.apache.cassandra.db.filter.ColumnFilter;
 import org.apache.cassandra.db.partitions.Partition;
-import org.apache.cassandra.db.partitions.PartitionUpdate;
 import org.apache.cassandra.db.rows.*;
 import org.apache.cassandra.exceptions.InvalidRequestException;
-import org.apache.cassandra.index.SecondaryIndexManager;
 import org.apache.cassandra.utils.FBUtilities;
 
 /**
@@ -47,8 +45,6 @@ public class UpdateParameters
 
 private final DeletionTime deletionTime;
 
-private final SecondaryIndexManager indexManager;
-
 // For lists operation that require a read-before-write. Will be null 
otherwise.
 private final Map prefetchedRows;
 
@@ -63,8 +59,7 @@ public class UpdateParameters
 QueryOptions options,
 long timestamp,
 int ttl,
-Map prefetchedRows,
-boolean validateIndexedColumns)
+Map prefetchedRows)
 throws InvalidRequestException
 {
 this.metadata = metadata;
@@ -79,32 +74,12 @@ public class UpdateParameters
 
 this.prefetchedRows = prefetchedRows;
 
-// Index column validation triggers a call to Keyspace.open() which we 
want
-// to be able to avoid in some case (e.g. when using CQLSSTableWriter)
-if (validateIndexedColumns)
-{
-SecondaryIndexManager manager = 
Keyspace.openAndGetStore(metadata).indexManager;
-indexManager = manager.hasIndexes() ? manager : null;
-}
-else
-{
-indexManager = null;
-}
-
 // We use MIN_VALUE internally to mean the absence of of timestamp (in 
Selection, in sstable stats, ...), so exclude
 // it to avoid potential confusion.
 if (timestamp == Long.MIN_VALUE)
 throw new InvalidRequestException(String.format("Out of bound 
timestamp, must be in [%d, %d]", Long.MIN_VALUE + 1, Long.MAX_VALUE));
 }
 
-public void validateIndexedColumns(PartitionUpdate update)
-{
-if (indexManager == null)
-return;
-
-indexManager.validate(update);
-}
-
 public 

[jira] [Commented] (CASSANDRA-10545) JDK bug from CASSANDRA-8220 makes drain die early also

2015-10-16 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14961236#comment-14961236
 ] 

Jeremiah Jordan commented on CASSANDRA-10545:
-

branch [here|https://github.com/JeremiahDJordan/cassandra/tree/CASSANDRA-10545]

> JDK bug from CASSANDRA-8220 makes drain die early also
> --
>
> Key: CASSANDRA-10545
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10545
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jeremiah Jordan
>Priority: Trivial
>
> The JDK bug from CASSANDRA-8220 makes drain die early also.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Fix SELECT DISTINCT queries between 2.2.2 nodes and 3.0 nodes

2015-10-16 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 bc89bc66c -> 5d6455f29


Fix SELECT DISTINCT queries between 2.2.2 nodes and 3.0 nodes

patch by Benjamin Lerer; reviewed by Aleksey Yeschenko  for CASSANDRA-10473


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5d6455f2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5d6455f2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5d6455f2

Branch: refs/heads/cassandra-3.0
Commit: 5d6455f29c7919d6b08667755f90428984524a22
Parents: bc89bc6
Author: blerer 
Authored: Fri Oct 16 21:36:45 2015 +0200
Committer: blerer 
Committed: Fri Oct 16 21:40:24 2015 +0200

--
 CHANGES.txt   | 1 +
 NEWS.txt  | 2 +-
 src/java/org/apache/cassandra/db/ReadCommand.java | 2 +-
 3 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5d6455f2/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index cb4c2d8..33c360e 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0-rc2
+ * Fix SELECT DISTINCT queries between 2.2.2 nodes and 3.0 nodes 
(CASSANDRA-10473)
  * Remove circular references in SegmentedFile (CASSANDRA-10543)
  * Ensure validation of indexed values only occurs once per-partition 
(CASSANDRA-10536)
  * Fix handling of static columns for range tombstones in thrift 
(CASSANDRA-10174)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5d6455f2/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 48a0733..e8f86b7 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -46,7 +46,7 @@ New features
 Upgrading
 -
- Upgrade to 3.0 is supported from Cassandra 2.1 versions greater or equal 
to 2.1.9,
- or Cassandra 2.2 versions greater or equal to 2.2.1. Upgrade from 
Cassandra 2.0 and
+ or Cassandra 2.2 versions greater or equal to 2.2.2. Upgrade from 
Cassandra 2.0 and
  older versions is not supported.
- The 'memtable_allocation_type: offheap_objects' option has been removed. 
It should
  be re-introduced in a future release and you can follow CASSANDRA-9472 to 
know more.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5d6455f2/src/java/org/apache/cassandra/db/ReadCommand.java
--
diff --git a/src/java/org/apache/cassandra/db/ReadCommand.java 
b/src/java/org/apache/cassandra/db/ReadCommand.java
index 91227cf..f29a009 100644
--- a/src/java/org/apache/cassandra/db/ReadCommand.java
+++ b/src/java/org/apache/cassandra/db/ReadCommand.java
@@ -1060,7 +1060,7 @@ public abstract class ReadCommand implements ReadQuery
 // is what 3.0 does.
 DataRange dataRange = new DataRange(keyRange, filter);
 Slices slices = filter.requestedSlices();
-if (startBound != LegacyLayout.LegacyBound.BOTTOM && 
!startBound.bound.equals(slices.get(0).start()))
+if (!isDistinct && startBound != LegacyLayout.LegacyBound.BOTTOM 
&& !startBound.bound.equals(slices.get(0).start()))
 {
 // pre-3.0 nodes normally expect pages to include the last 
cell from the previous page, but they handle it
 // missing without any problems, so we can safely always set 
"inclusive" to false in the data range



[jira] [Issue Comment Deleted] (CASSANDRA-6412) Custom creation and merge functions for user-defined column types

2015-10-16 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-6412:
--
Comment: was deleted

(was: I'm playing with this, just to understand it conceptually, using 
CASSANDRA-8099 as a base.

{noformat}
cqlsh> create keyspace test2 WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': 2}; use test2;
cqlsh:test2> select column_name, column_resolver from system.schema_columns 
where keyspace_name='test2' and columnfamily_name='table_with_resolvers';

 column_name | column_resolver
-------------+-------------------------------------------------------------
       first | org.apache.cassandra.db.resolvers.ReverseTimestampResolver
        high | org.apache.cassandra.db.resolvers.MaxValueResolver
          id | org.apache.cassandra.db.resolvers.TimestampResolver
        last | org.apache.cassandra.db.resolvers.TimestampResolver
         low | org.apache.cassandra.db.resolvers.MinValueResolver

(5 rows)
cqlsh:test2> create table table_with_resolvers ( id text, low int with resolver 
'org.apache.cassandra.db.resolvers.MinValueResolver', high int with resolver 
'org.apache.cassandra.db.resolvers.MaxValueResolver', last int with resolver 
'org.apache.cassandra.db.resolvers.TimestampResolver', first int with resolver 
'org.apache.cassandra.db.resolvers.ReverseTimestampResolver', PRIMARY KEY(id));
cqlsh:test2> insert into table_with_resolvers (id, low, high, first, last ) 
values ('1', 1, 1, 1, 1);   

cqlsh:test2> insert into table_with_resolvers (id, low, high, first, last ) 
values ('1', 2, 2, 2, 2);
cqlsh:test2> insert into table_with_resolvers (id, low, high, first, last ) 
values ('1', 3, 3, 3, 3);
cqlsh:test2> insert into table_with_resolvers (id, low, high, first, last ) 
values ('1', 5, 5, 5, 5);
cqlsh:test2> insert into table_with_resolvers (id, low, high, first, last ) 
values ('1', 4, 4, 4, 4);
cqlsh:test2> select * from table_with_resolvers;

 id | first | high | last | low
----+-------+------+------+-----
  1 |     1 |    5 |    4 |   1

(1 rows)
{noformat}

My diff/patch isn't fit for sharing at this time but as I'm going through, I 
had some questions: 

1) Given that user types are frozen, does it make sense to allow a resolver per 
field in user types, assuming that eventually user types will become un-frozen?
2) My initial pass disallows custom resolvers on counters and collections - 
does anyone have any strong opinion on whether or not user defined merge 
functions should be allowed for collections? 
3) Still battling through deletion/tombstone reconciliation. Still making sure 
I fully understand all of the problem cases. )
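
For illustration only, here is a minimal sketch of the shape a per-column resolver along the lines of the class names above might take. The {{CellResolver}} interface, its method signature and the int encoding are assumptions made for this sketch, not the prototype's actual API:

{code}
// Hypothetical sketch only: a per-column resolver picks the winning cell value
// on reconcile. The interface name, method signature and the 4-byte int
// encoding are assumptions for illustration, not the actual prototype API.
interface CellResolver
{
    byte[] resolve(byte[] leftValue, long leftTimestamp, byte[] rightValue, long rightTimestamp);
}

// Last-write-wins, i.e. the default Cassandra behaviour.
class TimestampResolver implements CellResolver
{
    public byte[] resolve(byte[] lv, long lts, byte[] rv, long rts)
    {
        return lts >= rts ? lv : rv;
    }
}

// Keeps the numerically smallest value, regardless of write order
// (values assumed to be 4-byte big-endian ints for this sketch).
class MinValueResolver implements CellResolver
{
    public byte[] resolve(byte[] lv, long lts, byte[] rv, long rts)
    {
        int l = java.nio.ByteBuffer.wrap(lv).getInt();
        int r = java.nio.ByteBuffer.wrap(rv).getInt();
        return l <= r ? lv : rv;
    }
}
{code}

Deletions and collections do not fit this two-cell shape cleanly, which is presumably why questions 2) and 3) above are the hard ones.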

> Custom creation and merge functions for user-defined column types
> -
>
> Key: CASSANDRA-6412
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6412
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: Nicolas Favre-Felix
>
> This is a proposal for a new feature, mapping custom types to Cassandra 
> columns.
> These types would provide a creation function and a merge function, to be 
> implemented in Java by the user.
> This feature relates to the concept of CRDTs; the proposal is to replicate 
> "operations" on these types during write, to apply these operations 
> internally during merge (Column.reconcile), and to also merge their values on 
> read.
> The following operations are made possible without reading back any data:
> * MIN or MAX(value) for a column
> * First value for a column
> * Count Distinct
> * HyperLogLog
> * Count-Min
> And any composition of these too, e.g. a Candlestick type includes first, 
> last, min, and max.
> The merge operations exposed by these types need to be commutative; this is 
> the case for many functions used in analytics.
> This feature is incomplete without some integration with CASSANDRA-4775 
> (Counters 2.0) which provides a Read-Modify-Write implementation for 
> distributed counters. Integrating custom creation and merge functions with 
> new counters would let users implement complex CRDTs in Cassandra, including:
> * Averages & related (sum of squares, standard deviation)
> * Graphs
> * Sets
> * Custom registers (even with vector clocks)
> I have a working prototype with implementations for min, max, and Candlestick 
> at https://github.com/acunu/cassandra/tree/crdts - I'd appreciate any 
> feedback on the design and interfaces.
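
As a concrete illustration of the commutativity requirement, here is a minimal sketch of a Candlestick-style merge; the field layout and names are assumptions for this sketch, not the code in the linked branch:

{code}
// Hypothetical sketch: a Candlestick aggregates first/last/min/max and merges
// commutatively and associatively, so replicas can reconcile in any order.
final class Candlestick
{
    final long firstTs, lastTs;
    final double open, close, low, high;

    Candlestick(long firstTs, double open, long lastTs, double close, double low, double high)
    {
        this.firstTs = firstTs; this.open = open;
        this.lastTs = lastTs;   this.close = close;
        this.low = low;         this.high = high;
    }

    // Creation function: a single write becomes a one-point candlestick.
    static Candlestick of(long ts, double value)
    {
        return new Candlestick(ts, value, ts, value, value, value);
    }

    // merge(a, b) equals merge(b, a); ties on timestamp break by value so the
    // result is independent of the order in which replicas reconcile.
    static Candlestick merge(Candlestick a, Candlestick b)
    {
        double open  = a.firstTs < b.firstTs ? a.open
                     : b.firstTs < a.firstTs ? b.open
                     : Math.min(a.open, b.open);
        double close = a.lastTs > b.lastTs ? a.close
                     : b.lastTs > a.lastTs ? b.close
                     : Math.max(a.close, b.close);
        return new Candlestick(Math.min(a.firstTs, b.firstTs), open,
                               Math.max(a.lastTs, b.lastTs), close,
                               Math.min(a.low, b.low),
                               Math.max(a.high, b.high));
    }
}
{code}

Creation maps a single write to {{Candlestick.of(ts, value)}}; reconcile can then apply {{merge}} in any order and any grouping and still arrive at the same value, which is exactly the property the proposal asks of user-supplied merge functions.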



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10473) fix failing dtest for select distinct with static columns on 2.2->3.0 upgrade path

2015-10-16 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14961264#comment-14961264
 ] 

Benjamin Lerer commented on CASSANDRA-10473:


Committed to cassandra-3.0 at 5d6455f29c7919d6b08667755f90428984524a22 and 
merged into trunk 

> fix failing dtest for select distinct with static columns on 2.2->3.0 upgrade 
> path
> --
>
> Key: CASSANDRA-10473
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10473
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Benjamin Lerer
> Fix For: 3.0.0 rc2
>
>
> {{upgrade_tests/cql_tests.py:TestCQL.static_columns_with_distinct_test}} 
> fails on the upgrade path from 2.2 to 3.0:
> http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/lastCompletedBuild/testReport/upgrade_tests.cql_tests/TestCQL/static_columns_with_distinct_test/
> Once [this dtest PR|https://github.com/riptano/cassandra-dtest/pull/586] is 
> merged, these tests should also run with this upgrade path on normal 3.0 
> jobs. Until then, you can run it with the following command:
> {code}
> SKIP=false CASSANDRA_VERSION=binary:2.2.0 UPGRADE_TO=git:cassandra-3.0 
> nosetests upgrade_tests/cql_tests.py:TestCQL.static_columns_with_distinct_test
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: bump versions

2015-10-16 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 5d6455f29 -> 56a06d78f


bump versions


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/56a06d78
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/56a06d78
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/56a06d78

Branch: refs/heads/cassandra-3.0
Commit: 56a06d78f20237c15a2bc7fb79826818173baead
Parents: 5d6455f
Author: T Jake Luciani 
Authored: Fri Oct 16 16:03:16 2015 -0400
Committer: T Jake Luciani 
Committed: Fri Oct 16 16:03:16 2015 -0400

--
 build.xml| 2 +-
 debian/changelog | 6 ++
 2 files changed, 7 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/56a06d78/build.xml
--
diff --git a/build.xml b/build.xml
index e039bd2..c8707cd 100644
--- a/build.xml
+++ b/build.xml
@@ -25,7 +25,7 @@
 
 
 
-
+
 
 
 http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=tree"/>

http://git-wip-us.apache.org/repos/asf/cassandra/blob/56a06d78/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index 7690e1d..2d858fe 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+cassandra (3.0.0~rc2) unstable; urgency=medium
+
+  * New release candidate 
+
+ -- Jake Luciani   Fri, 16 Oct 2015 16:02:24 -0400
+
 cassandra (3.0.0~rc1) unstable; urgency=medium
 
   * New release candidate



[jira] [Updated] (CASSANDRA-10449) OOM on bootstrap after long GC pause

2015-10-16 Thread Robbie Strickland (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robbie Strickland updated CASSANDRA-10449:
--
Attachment: heap_dump.png

I've attached a screen shot of the heap dump.

> OOM on bootstrap after long GC pause
> 
>
> Key: CASSANDRA-10449
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10449
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Ubuntu 14.04, AWS
>Reporter: Robbie Strickland
>  Labels: gc
> Fix For: 2.1.x
>
> Attachments: heap_dump.png, system.log.10-05, thread_dump.log
>
>
> I have a 20-node cluster (i2.4xlarge) with vnodes (default of 256) and 
> 500-700GB per node.  SSTable counts are <10 per table.  I am attempting to 
> provision additional nodes, but bootstrapping OOMs every time after about 10 
> hours with a sudden long GC pause:
> {noformat}
> INFO  [Service Thread] 2015-10-05 23:33:33,373 GCInspector.java:252 - G1 Old 
> Generation GC in 1586126ms.  G1 Old Gen: 49213756976 -> 49072277176;
> ...
> ERROR [MemtableFlushWriter:454] 2015-10-05 23:33:33,380 
> CassandraDaemon.java:223 - Exception in thread 
> Thread[MemtableFlushWriter:454,5,main]
> java.lang.OutOfMemoryError: Java heap space
> {noformat}
> I have tried increasing max heap to 48G just to get through the bootstrap, to 
> no avail.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/2] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2015-10-16 Thread jake
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3c4a3bcd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3c4a3bcd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3c4a3bcd

Branch: refs/heads/trunk
Commit: 3c4a3bcd8c41c2a0435e82252a83c5018ed99645
Parents: 841e485 56a06d7
Author: T Jake Luciani 
Authored: Fri Oct 16 16:08:06 2015 -0400
Committer: T Jake Luciani 
Committed: Fri Oct 16 16:08:06 2015 -0400

--

--




[1/2] cassandra git commit: bump versions

2015-10-16 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/trunk 841e48546 -> 3c4a3bcd8


bump versions


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/56a06d78
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/56a06d78
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/56a06d78

Branch: refs/heads/trunk
Commit: 56a06d78f20237c15a2bc7fb79826818173baead
Parents: 5d6455f
Author: T Jake Luciani 
Authored: Fri Oct 16 16:03:16 2015 -0400
Committer: T Jake Luciani 
Committed: Fri Oct 16 16:03:16 2015 -0400

--
 build.xml| 2 +-
 debian/changelog | 6 ++
 2 files changed, 7 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/56a06d78/build.xml
--
diff --git a/build.xml b/build.xml
index e039bd2..c8707cd 100644
--- a/build.xml
+++ b/build.xml
@@ -25,7 +25,7 @@
 
 
 
-
+
 
 
 http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=tree"/>

http://git-wip-us.apache.org/repos/asf/cassandra/blob/56a06d78/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index 7690e1d..2d858fe 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+cassandra (3.0.0~rc2) unstable; urgency=medium
+
+  * New release candidate 
+
+ -- Jake Luciani   Fri, 16 Oct 2015 16:02:24 -0400
+
 cassandra (3.0.0~rc1) unstable; urgency=medium
 
   * New release candidate



[jira] [Created] (CASSANDRA-10546) Custom MV support

2015-10-16 Thread Matthias Broecheler (JIRA)
Matthias Broecheler created CASSANDRA-10546:
---

 Summary: Custom MV support
 Key: CASSANDRA-10546
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10546
 Project: Cassandra
  Issue Type: New Feature
Reporter: Matthias Broecheler


The MV implementation should be generalized to allow for custom materialized 
view implementations. Like with MV, the logic would be triggered by a mutation 
to some base table on which the custom MV is registered. A custom MV would 
allow for custom logic to determine the "derived" mutations that need to be 
applied as a result of the base table mutation. It would then ensure that those 
derived mutations are applied (to other tables) as the current MV 
implementation does.

Note that a custom MV implementation is responsible for ensuring that any 
tables its derived mutations are written into actually exist. As such, a custom MV 
implementation has initialization logic that can create those tables upon 
registration if needed. There should be no limit on what table a custom MV can 
write derived records to (even existing ones).

Example:
(Note that this example is somewhat contrived for simplicity.)

We have a table in which we track user visits to certain properties with 
timestamp:
{code}
CREATE TABLE visits (
  userId bigint,
  visitAt timestamp,
  property varchar,
  PRIMARY KEY (userId, visitAt)
);
{code}

Every time a user visits a property, a record gets added to this table. Records 
frequently come in out-of-order.
At the same time, we would like to know who is currently visiting a particular 
property (with their last entry time).
For that, we create a custom MV registered against the {{visits}} table which 
upon registration creates the following table:
{code}
CREATE TABLE currentlyVisiting (
  property varchar,
  userId bigint,
  enteredOn timestamp,
  PRIMARY KEY (property, userId)
);
{code}

Now, when a record (u,v,p) gets inserted into the {{visits}} table the custom 
MV logic gets invoked:
# It reads the most recent visit record for user u: (u,v',p').
# If no such record exists, it emits (p,u,v) targeting table {{currentlyVisiting}} as a derived record to be persisted.
# If such a record exists and v' >= v, it emits nothing. But if v' < v, it emits (p,u,v) (and, if p' differs from p, a deletion for (p',u)) so that {{currentlyVisiting}} reflects the latest entry (see the sketch just below).
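
To make the shape of such a hook concrete, a minimal sketch follows; the interface, type names and registration step are assumptions for illustration, not an existing Cassandra API:

{code}
import java.util.List;

// Hypothetical sketch only: none of these type or method names are an actual
// Cassandra API; they just make the shape of the proposal concrete.
record VisitWrite(long userId, long visitAt, String property) {}
record DerivedWrite(String table, Object[] primaryKey, Object[] values) {}

interface CustomViewLogic
{
    // Called once when the custom MV is registered, e.g. to create the
    // currentlyVisiting table if it does not exist yet.
    void initialize();

    // Given a base-table write (and the latest visit already known for that
    // user), return the derived mutations to apply.
    List<DerivedWrite> derive(VisitWrite write, VisitWrite latestKnownForUser);
}

class CurrentlyVisitingView implements CustomViewLogic
{
    public void initialize() { /* CREATE TABLE currentlyVisiting (...) IF NOT EXISTS */ }

    public List<DerivedWrite> derive(VisitWrite w, VisitWrite latest)
    {
        if (latest == null || latest.visitAt() < w.visitAt())
        {
            // This write is now the user's most recent visit; a fuller sketch
            // would also emit a deletion for (latest.property(), userId) when
            // the property changed.
            return List.of(new DerivedWrite("currentlyVisiting",
                                            new Object[]{ w.property(), w.userId() },
                                            new Object[]{ w.visitAt() }));
        }
        // Out-of-order arrival: the stored state is already newer, emit nothing.
        return List.of();
    }
}
{code}

The important part is the contract: given a base-table write, the custom logic returns derived mutations, and the MV machinery takes care of applying them, exactly as described above.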

[1/2] cassandra git commit: Fix SELECT DISTINCT queries between 2.2.2 nodes and 3.0 nodes

2015-10-16 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/trunk dec76593f -> 841e48546


Fix SELECT DISTINCT queries between 2.2.2 nodes and 3.0 nodes

patch by Benjamin Lerer; reviewed by Aleksey Yeschenko  for CASSANDRA-10473


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5d6455f2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5d6455f2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5d6455f2

Branch: refs/heads/trunk
Commit: 5d6455f29c7919d6b08667755f90428984524a22
Parents: bc89bc6
Author: blerer 
Authored: Fri Oct 16 21:36:45 2015 +0200
Committer: blerer 
Committed: Fri Oct 16 21:40:24 2015 +0200

--
 CHANGES.txt   | 1 +
 NEWS.txt  | 2 +-
 src/java/org/apache/cassandra/db/ReadCommand.java | 2 +-
 3 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5d6455f2/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index cb4c2d8..33c360e 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0-rc2
+ * Fix SELECT DISTINCT queries between 2.2.2 nodes and 3.0 nodes 
(CASSANDRA-10473)
  * Remove circular references in SegmentedFile (CASSANDRA-10543)
  * Ensure validation of indexed values only occurs once per-partition 
(CASSANDRA-10536)
  * Fix handling of static columns for range tombstones in thrift 
(CASSANDRA-10174)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5d6455f2/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 48a0733..e8f86b7 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -46,7 +46,7 @@ New features
 Upgrading
 -
- Upgrade to 3.0 is supported from Cassandra 2.1 versions greater or equal 
to 2.1.9,
- or Cassandra 2.2 versions greater or equal to 2.2.1. Upgrade from 
Cassandra 2.0 and
+ or Cassandra 2.2 versions greater or equal to 2.2.2. Upgrade from 
Cassandra 2.0 and
  older versions is not supported.
- The 'memtable_allocation_type: offheap_objects' option has been removed. 
It should
  be re-introduced in a future release and you can follow CASSANDRA-9472 to 
know more.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5d6455f2/src/java/org/apache/cassandra/db/ReadCommand.java
--
diff --git a/src/java/org/apache/cassandra/db/ReadCommand.java 
b/src/java/org/apache/cassandra/db/ReadCommand.java
index 91227cf..f29a009 100644
--- a/src/java/org/apache/cassandra/db/ReadCommand.java
+++ b/src/java/org/apache/cassandra/db/ReadCommand.java
@@ -1060,7 +1060,7 @@ public abstract class ReadCommand implements ReadQuery
 // is what 3.0 does.
 DataRange dataRange = new DataRange(keyRange, filter);
 Slices slices = filter.requestedSlices();
-if (startBound != LegacyLayout.LegacyBound.BOTTOM && 
!startBound.bound.equals(slices.get(0).start()))
+if (!isDistinct && startBound != LegacyLayout.LegacyBound.BOTTOM 
&& !startBound.bound.equals(slices.get(0).start()))
 {
 // pre-3.0 nodes normally expect pages to include the last 
cell from the previous page, but they handle it
 // missing without any problems, so we can safely always set 
"inclusive" to false in the data range



[2/2] cassandra git commit: Merge branch cassandra-3.0 into trunk

2015-10-16 Thread blerer
Merge branch cassandra-3.0 into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/841e4854
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/841e4854
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/841e4854

Branch: refs/heads/trunk
Commit: 841e4854661354d32bd94762a72d292fc1553866
Parents: dec7659 5d6455f
Author: blerer 
Authored: Fri Oct 16 21:42:04 2015 +0200
Committer: blerer 
Committed: Fri Oct 16 21:42:14 2015 +0200

--
 CHANGES.txt   | 1 +
 NEWS.txt  | 2 +-
 src/java/org/apache/cassandra/db/ReadCommand.java | 2 +-
 3 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/841e4854/CHANGES.txt
--
diff --cc CHANGES.txt
index 801b2fb,33c360e..725cc9f
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,9 -1,5 +1,10 @@@
 +3.2
 + * Abort in-progress queries that time out (CASSANDRA-7392)
 + * Add transparent data encryption core classes (CASSANDRA-9945)
 +
 +
  3.0-rc2
+  * Fix SELECT DISTINCT queries between 2.2.2 nodes and 3.0 nodes 
(CASSANDRA-10473)
   * Remove circular references in SegmentedFile (CASSANDRA-10543)
   * Ensure validation of indexed values only occurs once per-partition 
(CASSANDRA-10536)
   * Fix handling of static columns for range tombstones in thrift 
(CASSANDRA-10174)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/841e4854/src/java/org/apache/cassandra/db/ReadCommand.java
--



Git Push Summary

2015-10-16 Thread jake
Repository: cassandra
Updated Tags:  refs/tags/3.0.0-rc2-tentative [created] 56a06d78f


[jira] [Updated] (CASSANDRA-10545) JDK bug from CASSANDRA-8220 makes drain die early also

2015-10-16 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-10545:
-
Reviewer: Robert Stupp

> JDK bug from CASSANDRA-8220 makes drain die early also
> --
>
> Key: CASSANDRA-10545
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10545
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jeremiah Jordan
>Assignee: Jeremiah Jordan
>Priority: Trivial
> Fix For: 2.1.x, 2.2.x, 3.0.x
>
>
> The JDK bug from CASSANDRA-8220 makes drain die early also.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10545) JDK bug from CASSANDRA-8220 makes drain die early also

2015-10-16 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14961319#comment-14961319
 ] 

Robert Stupp commented on CASSANDRA-10545:
--

Looks fine - but can you consolidate the hack into a single method called from 
both catch-blocks?
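
For reference, the kind of consolidation being asked for, sketched with placeholder names rather than the actual patch:

{code}
// Sketch only, with placeholder names: pull the duplicated workaround out of
// the two catch blocks into one helper, as suggested above.
final class DrainExample
{
    // Single place that decides whether the throwable is the known JDK issue
    // and either swallows it or rethrows.
    private static void handlePossibleJdkBug(Throwable t)
    {
        if (isKnownJdkShutdownBug(t))
            return;                      // benign: ignore and keep draining
        throw new RuntimeException(t);   // anything else is a real failure
    }

    private static boolean isKnownJdkShutdownBug(Throwable t)
    {
        // Placeholder for whatever signature the actual workaround matches on.
        return t instanceof IllegalStateException;
    }

    void drain()
    {
        try { stopNativeTransport(); }
        catch (Throwable t) { handlePossibleJdkBug(t); }

        try { flushMemtables(); }
        catch (Throwable t) { handlePossibleJdkBug(t); }
    }

    private void stopNativeTransport() {}
    private void flushMemtables() {}
}
{code}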

> JDK bug from CASSANDRA-8220 makes drain die early also
> --
>
> Key: CASSANDRA-10545
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10545
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jeremiah Jordan
>Assignee: Jeremiah Jordan
>Priority: Trivial
> Fix For: 2.1.x, 2.2.x, 3.0.x
>
>
> The JDK bug from CASSANDRA-8220 makes drain die early also.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10089) NullPointerException in Gossip handleStateNormal

2015-10-16 Thread Joel Knighton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14961373#comment-14961373
 ] 

Joel Knighton commented on CASSANDRA-10089:
---

Unfortunately, the latest CI runs don't look entirely clean yet. In particular, on 
2.2, 
[incremental_repair_test.TestIncRepair.multiple_repair_test|http://cassci.datastax.com/view/Dev/view/jkni/job/jkni-10089-2.2-dtest/2/testReport/junit/incremental_repair_test/TestIncRepair/multiple_repair_test_2/]
 fails due to a node in NORMAL state with no tokens.

I'm trying to reproduce this failure locally with higher logging levels.

> NullPointerException in Gossip handleStateNormal
> 
>
> Key: CASSANDRA-10089
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10089
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 2.1.x, 2.2.x, 3.0.x
>
>
> Whilst comparing dtests for CASSANDRA-9970 I found [this failing 
> dtest|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-9970-dtest/lastCompletedBuild/testReport/consistency_test/TestConsistency/short_read_test/]
>  in 2.2:
> {code}
> Unexpected error in node1 node log: ['ERROR [GossipStage:1] 2015-08-14 
> 15:39:57,873 CassandraDaemon.java:183 - Exception in thread 
> Thread[GossipStage:1,5,main] java.lang.NullPointerException: null \tat 
> org.apache.cassandra.service.StorageService.getApplicationStateValue(StorageService.java:1731)
>  ~[main/:na] \tat 
> org.apache.cassandra.service.StorageService.getTokensFor(StorageService.java:1804)
>  ~[main/:na] \tat 
> org.apache.cassandra.service.StorageService.handleStateNormal(StorageService.java:1857)
>  ~[main/:na] \tat 
> org.apache.cassandra.service.StorageService.onChange(StorageService.java:1629)
>  ~[main/:na] \tat 
> org.apache.cassandra.service.StorageService.onJoin(StorageService.java:2312) 
> ~[main/:na] \tat 
> org.apache.cassandra.gms.Gossiper.handleMajorStateChange(Gossiper.java:1025) 
> ~[main/:na] \tat 
> org.apache.cassandra.gms.Gossiper.applyStateLocally(Gossiper.java:1106) 
> ~[main/:na] \tat 
> org.apache.cassandra.gms.GossipDigestAck2VerbHandler.doVerb(GossipDigestAck2VerbHandler.java:49)
>  ~[main/:na] \tat 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:66) 
> ~[main/:na] \tat 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_80] \tat 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  ~[na:1.7.0_80] \tat java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_80]']
> {code}
> I wasn't able to find it on unpatched branches  but it is clearly not related 
> to CASSANDRA-9970, if anything it could have been a side effect of 
> CASSANDRA-9871.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10449) OOM on bootstrap after long GC pause

2015-10-16 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14961340#comment-14961340
 ] 

Mikhail Stepura commented on CASSANDRA-10449:
-

Any chance to get the dump file itself? Of course, only if it doesn't contain any 
sensitive information.

> OOM on bootstrap after long GC pause
> 
>
> Key: CASSANDRA-10449
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10449
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Ubuntu 14.04, AWS
>Reporter: Robbie Strickland
>  Labels: gc
> Fix For: 2.1.x
>
> Attachments: heap_dump.png, system.log.10-05, thread_dump.log
>
>
> I have a 20-node cluster (i2.4xlarge) with vnodes (default of 256) and 
> 500-700GB per node.  SSTable counts are <10 per table.  I am attempting to 
> provision additional nodes, but bootstrapping OOMs every time after about 10 
> hours with a sudden long GC pause:
> {noformat}
> INFO  [Service Thread] 2015-10-05 23:33:33,373 GCInspector.java:252 - G1 Old 
> Generation GC in 1586126ms.  G1 Old Gen: 49213756976 -> 49072277176;
> ...
> ERROR [MemtableFlushWriter:454] 2015-10-05 23:33:33,380 
> CassandraDaemon.java:223 - Exception in thread 
> Thread[MemtableFlushWriter:454,5,main]
> java.lang.OutOfMemoryError: Java heap space
> {noformat}
> I have tried increasing max heap to 48G just to get through the bootstrap, to 
> no avail.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10449) OOM on bootstrap after long GC pause

2015-10-16 Thread Robbie Strickland (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14961343#comment-14961343
 ] 

Robbie Strickland commented on CASSANDRA-10449:
---

Yes, sorry I was working on getting it to S3.  You can get it 
[here|https://s3.amazonaws.com/twc-analytics-public/java_1445001330.hprof].

> OOM on bootstrap after long GC pause
> 
>
> Key: CASSANDRA-10449
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10449
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Ubuntu 14.04, AWS
>Reporter: Robbie Strickland
>  Labels: gc
> Fix For: 2.1.x
>
> Attachments: heap_dump.png, system.log.10-05, thread_dump.log
>
>
> I have a 20-node cluster (i2.4xlarge) with vnodes (default of 256) and 
> 500-700GB per node.  SSTable counts are <10 per table.  I am attempting to 
> provision additional nodes, but bootstrapping OOMs every time after about 10 
> hours with a sudden long GC pause:
> {noformat}
> INFO  [Service Thread] 2015-10-05 23:33:33,373 GCInspector.java:252 - G1 Old 
> Generation GC in 1586126ms.  G1 Old Gen: 49213756976 -> 49072277176;
> ...
> ERROR [MemtableFlushWriter:454] 2015-10-05 23:33:33,380 
> CassandraDaemon.java:223 - Exception in thread 
> Thread[MemtableFlushWriter:454,5,main]
> java.lang.OutOfMemoryError: Java heap space
> {noformat}
> I have tried increasing max heap to 48G just to get through the bootstrap, to 
> no avail.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10534) CompressionInfo not being fsynced on close

2015-10-16 Thread Sharvanath Pathak (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sharvanath Pathak updated CASSANDRA-10534:
--
Description: 
I was seeing SSTable corruption due to a CompressionInfo.db file of size 0; this happened multiple times in our testing with hard node reboots. After some investigation it seems this file is not being fsynced, which can potentially lead to data corruption. I am working with version 2.1.9.

I checked for fsync calls using strace and found them happening for all but the following components: CompressionInfo, TOC.txt and digest.sha1. All of these but the CompressionInfo seem tolerable. A quick look through the code did not reveal any fsync calls either. Moreover, I suspect the regression was caused by commit 4e95953f29d89a441dfe06d3f0393ed7dd8586df (https://github.com/apache/cassandra/commit/4e95953f29d89a441dfe06d3f0393ed7dd8586df#diff-b7e48a1398e39a936c11d0397d5d1966R344), which removed the line
{noformat}
 getChannel().force(true);
{noformat}
from CompressionMetadata.Writer.close.

Following is the trace I saw in system.log:
{noformat}
INFO  [SSTableBatchOpen:1] 2015-09-29 19:24:39,170 SSTableReader.java:478 - 
Opening 
/var/lib/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-13368
 (79 bytes)
ERROR [SSTableBatchOpen:1] 2015-09-29 19:24:39,177 FileUtils.java:447 - Exiting 
forcefully due to file system exception on startup, disk failure policy "stop"
org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException
at 
org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:131)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:168)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:752) 
~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:703) 
~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:491) 
~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:387) 
~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:534) 
~[apache-cassandra-2.1.9.jar:2.1.9]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
[na:1.7.0_80]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
[na:1.7.0_80]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
[na:1.7.0_80]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_80]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
Caused by: java.io.EOFException: null
at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:340) 
~[na:1.7.0_80]
at java.io.DataInputStream.readUTF(DataInputStream.java:589) 
~[na:1.7.0_80]
at java.io.DataInputStream.readUTF(DataInputStream.java:564) 
~[na:1.7.0_80]
at 
org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:106)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
... 14 common frames omitted
{noformat}

Following is the result of ls on the data directory of a corrupted SSTable after the hard reboot:
{noformat}
$ ls -l 
/var/lib/cassandra/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/
total 60
-rw-r--r-- 1 cassandra cassandra 0 Oct 15 09:31 
system-sstable_activity-ka-1-CompressionInfo.db
-rw-r--r-- 1 cassandra cassandra  9740 Oct 15 09:31 
system-sstable_activity-ka-1-Data.db
-rw-r--r-- 1 cassandra cassandra 0 Oct 15 09:31 
system-sstable_activity-ka-1-Digest.sha1
-rw-r--r-- 1 cassandra cassandra   880 Oct 15 09:31 
system-sstable_activity-ka-1-Filter.db
-rw-r--r-- 1 cassandra cassandra 34000 Oct 15 09:31 
system-sstable_activity-ka-1-Index.db
-rw-r--r-- 1 cassandra cassandra  7338 Oct 15 09:31 
system-sstable_activity-ka-1-Statistics.db
-rw-r--r-- 1 cassandra cassandra 0 Oct 15 09:31 
system-sstable_activity-ka-1-TOC.txt
{noformat}

  was:
I was seeing SSTable corruption due to a CompressionInfo.db file of size 0, 
this happened multiple times in our testing with hard node reboots. After some 

[jira] [Updated] (CASSANDRA-10534) CompressionInfo not being fsynced on close

2015-10-16 Thread Sharvanath Pathak (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sharvanath Pathak updated CASSANDRA-10534:
--
Description: 
I was seeing SSTable corruption due to a CompressionInfo.db file of size 0; this happened multiple times in our testing with hard node reboots. After some investigation it seems this file is not being fsynced, which can potentially lead to data corruption. I am working with version 2.1.9.

I checked for fsync calls using strace and found them happening for all but the following components: CompressionInfo, TOC.txt and digest.sha1. All of these but the CompressionInfo seem tolerable. A quick look through the code did not reveal any fsync calls either. Moreover, I suspect the regression was caused by commit 4e95953f29d89a441dfe06d3f0393ed7dd8586df (https://github.com/apache/cassandra/commit/4e95953f29d89a441dfe06d3f0393ed7dd8586df#diff-b7e48a1398e39a936c11d0397d5d1966R344), which removed the line
{noformat}
 getChannel().force(true);
{noformat}
from CompressionMetadata.Writer.close.

Following is the trace I saw in system.log:
{noformat}
INFO  [SSTableBatchOpen:1] 2015-09-29 19:24:39,170 SSTableReader.java:478 - 
Opening 
/var/lib/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-13368
 (79 bytes)
ERROR [SSTableBatchOpen:1] 2015-09-29 19:24:39,177 FileUtils.java:447 - Exiting 
forcefully due to file system exception on startup, disk failure policy "stop"
org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException
at 
org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:131)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:168)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:752) 
~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:703) 
~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:491) 
~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:387) 
~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:534) 
~[apache-cassandra-2.1.9.jar:2.1.9]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
[na:1.7.0_80]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
[na:1.7.0_80]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
[na:1.7.0_80]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_80]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
Caused by: java.io.EOFException: null
at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:340) 
~[na:1.7.0_80]
at java.io.DataInputStream.readUTF(DataInputStream.java:589) 
~[na:1.7.0_80]
at java.io.DataInputStream.readUTF(DataInputStream.java:564) 
~[na:1.7.0_80]
at 
org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:106)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
... 14 common frames omitted
{noformat}

Following is the result of ls on the data directory of a corrupted SSTable 
after the hard reboot:
{noformat}
$ ls -l 
/var/lib/cassandra/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/
total 60
-rw-r--r-- 1 cassandra cassandra 0 Oct 15 09:31 
system-sstable_activity-ka-1-CompressionInfo.db
-rw-r--r-- 1 cassandra cassandra  9740 Oct 15 09:31 
system-sstable_activity-ka-1-Data.db
-rw-r--r-- 1 cassandra cassandra 0 Oct 15 09:31 
system-sstable_activity-ka-1-Digest.sha1
-rw-r--r-- 1 cassandra cassandra   880 Oct 15 09:31 
system-sstable_activity-ka-1-Filter.db
-rw-r--r-- 1 cassandra cassandra 34000 Oct 15 09:31 
system-sstable_activity-ka-1-Index.db
-rw-r--r-- 1 cassandra cassandra  7338 Oct 15 09:31 
system-sstable_activity-ka-1-Statistics.db
-rw-r--r-- 1 cassandra cassandra 0 Oct 15 09:31 
system-sstable_activity-ka-1-TOC.txt
{noformat}

  was:
I was seeing SSTable corruption due to a CompressionInfo.db file of size 0, 
this happened multiple times in our testing with hard node reboots. After some 

[jira] [Updated] (CASSANDRA-10534) CompressionInfo not being fsynced on close

2015-10-16 Thread Sharvanath Pathak (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sharvanath Pathak updated CASSANDRA-10534:
--
Description: 
I was seeing SSTable corruption due to a CompressionInfo.db file of size 0; this happened multiple times in our testing with hard node reboots. After some investigation it seems this file is not being fsynced, which can potentially lead to data corruption. I am working with version 2.1.9.

I checked for fsync calls using strace and found them happening for all but the following components: CompressionInfo, TOC.txt and digest.sha1. All of these but the CompressionInfo seem tolerable. A quick look through the code did not reveal any fsync calls either. Moreover, I suspect the regression was caused by commit 4e95953f29d89a441dfe06d3f0393ed7dd8586df (https://github.com/apache/cassandra/commit/4e95953f29d89a441dfe06d3f0393ed7dd8586df#diff-b7e48a1398e39a936c11d0397d5d1966R344), which removed the line
{noformat}
 getChannel().force(true);
{noformat}
from CompressionMetadata.Writer.close.

Following is the trace I saw in system.log:
{noformat}
INFO  [SSTableBatchOpen:1] 2015-09-29 19:24:39,170 SSTableReader.java:478 - 
Opening 
/var/lib/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-13368
 (79 bytes)
ERROR [SSTableBatchOpen:1] 2015-09-29 19:24:39,177 FileUtils.java:447 - Exiting 
forcefully due to file system exception on startup, disk failure policy "stop"
org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException
at 
org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:131)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:168)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:752) 
~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:703) 
~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:491) 
~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:387) 
~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:534) 
~[apache-cassandra-2.1.9.jar:2.1.9]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
[na:1.7.0_80]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
[na:1.7.0_80]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
[na:1.7.0_80]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_80]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
Caused by: java.io.EOFException: null
at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:340) 
~[na:1.7.0_80]
at java.io.DataInputStream.readUTF(DataInputStream.java:589) 
~[na:1.7.0_80]
at java.io.DataInputStream.readUTF(DataInputStream.java:564) 
~[na:1.7.0_80]
at 
org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:106)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
... 14 common frames omitted
{noformat}

Following is the result of ls on the data directory of a corrupted SSTable after the hard reboot:
{noformat}
$ ls -l 
/var/lib/cassandra/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/
total 60
-rw-r--r-- 1 cassandra cassandra 0 Oct 15 09:31 
system-sstable_activity-ka-1-CompressionInfo.db
-rw-r--r-- 1 cassandra cassandra  9740 Oct 15 09:31 
system-sstable_activity-ka-1-Data.db
-rw-r--r-- 1 cassandra cassandra 0 Oct 15 09:31 
system-sstable_activity-ka-1-Digest.sha1
-rw-r--r-- 1 cassandra cassandra   880 Oct 15 09:31 
system-sstable_activity-ka-1-Filter.db
-rw-r--r-- 1 cassandra cassandra 34000 Oct 15 09:31 
system-sstable_activity-ka-1-Index.db
-rw-r--r-- 1 cassandra cassandra  7338 Oct 15 09:31 
system-sstable_activity-ka-1-Statistics.db
-rw-r--r-- 1 cassandra cassandra 0 Oct 15 09:31 
system-sstable_activity-ka-1-TOC.txt
{noformat}

  was:
I was seeing SSTable corruption due to a CompressionInfo.db file of size 0, 
this happened multiple times in our testing with hard node reboots. After some 

[jira] [Commented] (CASSANDRA-10529) Channel.size() is costly, mutually exclusive, and on the critical path

2015-10-16 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14960295#comment-14960295
 ] 

Benedict commented on CASSANDRA-10529:
--

No, I'll commit this alongside the other small patches we have queued.

> Channel.size() is costly, mutually exclusive, and on the critical path
> --
>
> Key: CASSANDRA-10529
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10529
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Benedict
>Assignee: Stefania
> Fix For: 3.0.0 rc2
>
>
> [~stefania_alborghetti] mentioned this already on another ticket, but I have 
> lost track of exactly where. While benchmarking it became apparent this was a 
> noticeable bottleneck for small in-memory workloads with few files, 
> especially with RF=1. We should probably fix this soon, since it is trivial 
> to do so, and the call is only to impose an assertion that our requested 
> length is less than the file size. It isn't possible to safely memoize the 
> value anywhere we can guarantee to refer to it safely without some 
> refactoring, so I suggest simply removing the assertion for now.
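
To illustrate why that assertion is expensive, here is a self-contained sketch (names made up, not the actual reader code) of the pattern the ticket describes: an assert on the hot read path that has to ask the channel for its size on every call:

{code}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

// Sketch with made-up names: the assert below forces a FileChannel.size()
// call (a costly, serialized operation per the ticket) on every read, even
// though it only guards against a request running past the end of the file.
final class SegmentReaderSketch
{
    private final FileChannel channel;

    SegmentReaderSketch(FileChannel channel) { this.channel = channel; }

    ByteBuffer read(long offset, int length) throws IOException
    {
        // Sanity check on the critical path; removing it (or checking against
        // a cached length) avoids the per-read size() call.
        assert offset + length <= channel.size();

        ByteBuffer buffer = ByteBuffer.allocate(length);
        channel.read(buffer, offset);
        buffer.flip();
        return buffer;
    }
}
{code}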



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10538) Assertion failed in LogFile when disk is full

2015-10-16 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14960321#comment-14960321
 ] 

Stefania commented on CASSANDRA-10538:
--

bq. Yes, it looks like we did. This only matters for abort, since for commit we 
want to throw either way - but we expect to do this in the caller 
(LifecycleTransaction), so catching and returning them in both is most suitable.

This was my conclusion as well, thanks for checking.
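
A minimal sketch of the catch-and-return pattern being described, using generic names rather than the actual LogFile/LifecycleTransaction code:

{code}
// Sketch only: abort() accumulates failures and returns them to the caller,
// while commit() is allowed to throw. Names are generic, not Cassandra's.
final class TransactionalSketch
{
    // Each cleanup step catches its own failure and chains it onto the
    // accumulator, so abort never dies half-way through.
    Throwable abort(Throwable accumulate)
    {
        try { deleteNewFiles(); }
        catch (Throwable t) { accumulate = merge(accumulate, t); }

        try { releaseReferences(); }
        catch (Throwable t) { accumulate = merge(accumulate, t); }

        return accumulate; // the outer transaction decides what to do with it
    }

    private static Throwable merge(Throwable existing, Throwable t)
    {
        if (existing == null)
            return t;
        existing.addSuppressed(t);
        return existing;
    }

    private void deleteNewFiles() {}
    private void releaseReferences() {}
}
{code}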

bq. Once things quiet down we should really try to introduce fault injection 
tests for this subsystem so we can easily cover this kind of scenario.

Yes we definitely need tests with fault injection for this component.

CI:

http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-10538-3.0-dtest
http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-10538-3.0-testall


> Assertion failed in LogFile when disk is full
> -
>
> Key: CASSANDRA-10538
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10538
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 3.x
>
> Attachments: 
> ma_txn_compaction_67311da0-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_696059b0-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_8ac58b70-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_8be24610-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_95500fc0-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_a41caa90-72b4-11e5-9eb9-b14fa4bbe709.log
>
>
> [~carlyeks] was running a stress job which filled up the disk. At the end of 
> the system logs there are several assertion errors:
> {code}
> ERROR [CompactionExecutor:1] 2015-10-14 20:46:55,467 CassandraDaemon.java:195 
> - Exception in thread Thread[CompactionExecutor:1,1,main]
> java.lang.RuntimeException: Insufficient disk space to write 2097152 bytes
> at 
> org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.getWriteDirectory(CompactionAwareWriter.java:156)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.writers.MaxSSTableSizeWriter.realAppend(MaxSSTableSizeWriter.java:77)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.append(CompactionAwareWriter.java:110)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:182)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:220)
>  ~[main/:na]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_40]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_40]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40]
> INFO  [IndexSummaryManager:1] 2015-10-14 21:10:40,099 
> IndexSummaryManager.java:257 - Redistributing index summaries
> ERROR [IndexSummaryManager:1] 2015-10-14 21:10:42,275 
> CassandraDaemon.java:195 - Exception in thread 
> Thread[IndexSummaryManager:1,1,main]
> java.lang.AssertionError: Already completed!
> at org.apache.cassandra.db.lifecycle.LogFile.abort(LogFile.java:221) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.lifecycle.LogTransaction.doAbort(LogTransaction.java:376)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.abort(Transactional.java:144)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.lifecycle.LifecycleTransaction.doAbort(LifecycleTransaction.java:259)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.abort(Transactional.java:144)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.abort(Transactional.java:193)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.close(Transactional.java:158)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.IndexSummaryManager.redistributeSummaries(IndexSummaryManager.java:242)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.IndexSummaryManager$1.runMayThrow(IndexSummaryManager.java:134)
>  

[jira] [Commented] (CASSANDRA-10534) CompressionInfo not being fsynced on close

2015-10-16 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14960320#comment-14960320
 ] 

Benedict commented on CASSANDRA-10534:
--

In all cases we will need to call flush before calling sync, since we have a 
buffered writer. {{close}} is idempotent, so that should not be a problem.
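
Something along these lines, as a minimal sketch with generic names (not the actual SequentialWriter/CompressionMetadata.Writer code): sync() flushes the in-memory buffer before forcing the channel, and close() syncs once and is a no-op on repeat calls:

{code}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch only: because writes go through an in-memory buffer, sync() must
// flush that buffer to the channel before calling force(), and close() can
// safely be called more than once.
final class BufferedSyncingWriter implements AutoCloseable
{
    private final FileChannel channel;
    private final ByteBuffer buffer = ByteBuffer.allocate(64 * 1024);
    private boolean closed;

    BufferedSyncingWriter(Path path) throws IOException
    {
        channel = FileChannel.open(path, StandardOpenOption.CREATE, StandardOpenOption.WRITE);
    }

    // Assumes individual writes are smaller than the buffer, for brevity.
    void write(byte[] bytes) throws IOException
    {
        if (buffer.remaining() < bytes.length)
            flush();
        buffer.put(bytes);
    }

    void flush() throws IOException
    {
        buffer.flip();
        while (buffer.hasRemaining())
            channel.write(buffer);
        buffer.clear();
    }

    void sync() throws IOException
    {
        flush();               // push buffered bytes to the OS first...
        channel.force(true);   // ...then fsync data and metadata to disk
    }

    @Override
    public void close() throws IOException
    {
        if (closed)            // idempotent: a second close is a no-op
            return;
        sync();
        channel.close();
        closed = true;
    }
}
{code}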

> CompressionInfo not being fsynced on close
> --
>
> Key: CASSANDRA-10534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10534
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Sharvanath Pathak
>Assignee: Stefania
> Fix For: 2.1.x
>
>
> I was seeing SSTable corruption due to a CompressionInfo.db file of size 0; this happened multiple times in our testing with hard node reboots. After some investigation it seems this file is not being fsynced, which can potentially lead to data corruption. I am working with version 2.1.9.
> I checked for fsync calls using strace and found them happening for all but the following components: CompressionInfo, TOC.txt and digest.sha1. All of these but the CompressionInfo seem tolerable. A quick look through the code did not reveal any fsync calls either. Moreover, I suspect the regression was caused by commit 4e95953f29d89a441dfe06d3f0393ed7dd8586df (https://github.com/apache/cassandra/commit/4e95953f29d89a441dfe06d3f0393ed7dd8586df#diff-b7e48a1398e39a936c11d0397d5d1966R344), which removed the line
> {noformat}
>  getChannel().force(true);
> {noformat}
> from CompressionMetadata.Writer.close.
> Following is the trace I saw in system.log:
> {noformat}
> INFO  [SSTableBatchOpen:1] 2015-09-29 19:24:39,170 SSTableReader.java:478 - 
> Opening 
> /var/lib/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-13368
>  (79 bytes)
> ERROR [SSTableBatchOpen:1] 2015-09-29 19:24:39,177 FileUtils.java:447 - 
> Exiting forcefully due to file system exception on startup, disk failure 
> policy "stop"
> org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:131)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:168)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:752) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:703) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:491) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:387) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:534) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> [na:1.7.0_80]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> [na:1.7.0_80]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_80]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_80]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
> Caused by: java.io.EOFException: null
> at 
> java.io.DataInputStream.readUnsignedShort(DataInputStream.java:340) 
> ~[na:1.7.0_80]
> at java.io.DataInputStream.readUTF(DataInputStream.java:589) 
> ~[na:1.7.0_80]
> at java.io.DataInputStream.readUTF(DataInputStream.java:564) 
> ~[na:1.7.0_80]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:106)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> ... 14 common frames omitted
> {noformat}
> Following is the result of ls on the data directory of a corrupted SSTable 
> after the hard reboot:
> {noformat}
> $ ls -l 
> /var/lib/cassandra/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/
> total 60
> -rw-r--r-- 1 cassandra cassandra 0 Oct 15 09:31 
> 

[jira] [Assigned] (CASSANDRA-10536) Batch statements with multiple updates to partition error when table is indexed

2015-10-16 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe reassigned CASSANDRA-10536:
---

Assignee: Sam Tunnicliffe  (was: Sylvain Lebresne)

> Batch statements with multiple updates to partition error when table is 
> indexed
> ---
>
> Key: CASSANDRA-10536
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10536
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Tyler Hobbs
>Assignee: Sam Tunnicliffe
> Fix For: 3.0.0 rc2
>
>
> If a {{BATCH}} statement contains multiple {{UPDATE}} statements that update 
> the same partition, and a secondary index exists on that table, the batch 
> statement will error:
> {noformat}
> ServerError:  message="java.lang.IllegalStateException: An update should not be written 
> again once it has been read">
> {noformat}
> with the following traceback in the logs:
> {noformat}
> ERROR 20:53:46 Unexpected exception during request
> java.lang.IllegalStateException: An update should not be written again once 
> it has been read
>   at 
> org.apache.cassandra.db.partitions.PartitionUpdate.assertNotBuilt(PartitionUpdate.java:504)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.partitions.PartitionUpdate.add(PartitionUpdate.java:535)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.UpdateStatement.addUpdateForKey(UpdateStatement.java:96)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.ModificationStatement.addUpdates(ModificationStatement.java:667)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.BatchStatement.getMutations(BatchStatement.java:234)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:335)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:321)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:316)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:205)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:471)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:448)
>  ~[main/:na]
>   at 
> org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:130)
>  ~[main/:na]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [main/:na]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [main/:na]
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_45]
>   at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  [main/:na]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [main/:na]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
> {noformat}
> This is due to {{SecondaryIndexManager.validate()}} triggering a build of the 
> {{PartitionUpdate}} (stacktrace from debugging the build() call):
> {noformat}
> at 
> org.apache.cassandra.db.partitions.PartitionUpdate.build(PartitionUpdate.java:571)
>  [main/:na]
>   at 
> org.apache.cassandra.db.partitions.PartitionUpdate.maybeBuild(PartitionUpdate.java:561)
>  [main/:na]
>   at 
> org.apache.cassandra.db.partitions.PartitionUpdate.iterator(PartitionUpdate.java:418)
>  [main/:na]
>   at 
> org.apache.cassandra.index.internal.CassandraIndex.validateRows(CassandraIndex.java:560)
>  [main/:na]
>   at 
> org.apache.cassandra.index.internal.CassandraIndex.validate(CassandraIndex.java:314)
>  [main/:na]
>   at 
> org.apache.cassandra.index.SecondaryIndexManager.lambda$validate$75(SecondaryIndexManager.java:642)
>  [main/:na]
>   at 
> org.apache.cassandra.index.SecondaryIndexManager$$Lambda$166/1388080038.accept(Unknown
>  Source) [main/:na]
>   at 
> 
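
The failure described above comes down to build-once semantics: iterating the update "builds" it, and any later write trips the assertion. A minimal illustrative sketch of that behaviour (class and method names are invented stand-ins, not the real {{PartitionUpdate}} API):
{code}
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

class BuildOnceUpdateSketch implements Iterable<String>
{
    private final List<String> rows = new ArrayList<>();
    private boolean built = false;

    void add(String row)
    {
        // Mirrors the assertion in the trace: no writes after the update has been read.
        if (built)
            throw new IllegalStateException("An update should not be written again once it has been read");
        rows.add(row);
    }

    @Override
    public Iterator<String> iterator()
    {
        built = true; // reading the update freezes it against further writes
        return rows.iterator();
    }
}
{code}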

[jira] [Commented] (CASSANDRA-10534) CompressionInfo not being fsynced on close

2015-10-16 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14960354#comment-14960354
 ] 

Stefania commented on CASSANDRA-10534:
--

I've added the call to {{flush}} in a separate commit. 

I tried to abort the CI jobs and start new ones, but it seems the abort only 
removed the jobs in the queue, so the old jobs (without the flush) are the ones 
running at the moment. If they are aborted later I will restart them, or if I 
am offline you can restart them yourself with cassci ({{!build 
stef1927-10534-3.0-dtest}} etc.).

> CompressionInfo not being fsynced on close
> --
>
> Key: CASSANDRA-10534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10534
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Sharvanath Pathak
>Assignee: Stefania
> Fix For: 2.1.x
>
>
> I was seeing SSTable corruption due to a CompressionInfo.db file of size 0, 
> this happened multiple times in our testing with hard node reboots. After 
> some investigation it seems this file is not being fsynced, which can 
> potentially lead to data corruption. I am working with version 2.1.9.
> I checked for fsync calls using strace, and found them happening for all but 
> the following components: CompressionInfo, TOC.txt and digest.sha1. All of 
> these but the CompressionInfo seem tolerable. Also a quick look through the 
> code did not reveal any fsync calls. Moreover, I suspect the commit  
> 4e95953f29d89a441dfe06d3f0393ed7dd8586df 
> (https://github.com/apache/cassandra/commit/4e95953f29d89a441dfe06d3f0393ed7dd8586df#diff-b7e48a1398e39a936c11d0397d5d1966R344)
>  has caused the regression, which removed the line
> {noformat}
>  getChannel().force(true);
> {noformat}
> from CompressionMetadata.Writer.close.
> Following is the trace I saw in system.log:
> {noformat}
> INFO  [SSTableBatchOpen:1] 2015-09-29 19:24:39,170 SSTableReader.java:478 - 
> Opening 
> /var/lib/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-13368
>  (79 bytes)
> ERROR [SSTableBatchOpen:1] 2015-09-29 19:24:39,177 FileUtils.java:447 - 
> Exiting forcefully due to file system exception on startup, disk failure 
> policy "stop"
> org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:131)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:168)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:752) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:703) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:491) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:387) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:534) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> [na:1.7.0_80]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> [na:1.7.0_80]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_80]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_80]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
> Caused by: java.io.EOFException: null
> at 
> java.io.DataInputStream.readUnsignedShort(DataInputStream.java:340) 
> ~[na:1.7.0_80]
> at java.io.DataInputStream.readUTF(DataInputStream.java:589) 
> ~[na:1.7.0_80]
> at java.io.DataInputStream.readUTF(DataInputStream.java:564) 
> ~[na:1.7.0_80]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:106)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> ... 14 common frames omitted
> {noformat}
> Following is the result of ls on the data directory of a corrupted SSTable 
> after the 

[jira] [Commented] (CASSANDRA-10421) Potential issue with LogTransaction as it only checks in a single directory for files

2015-10-16 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14960241#comment-14960241
 ] 

Stefania commented on CASSANDRA-10421:
--

bq. Syncing the directory won't sync the log file. You need to sync the log 
file specifically to have that data be available.

Thanks, I should've read the entire documentation of fsync.

bq. I don't mind opening the file every time. However, to sync it after every 
write you will need to keep it open long enough to do that. Or open it with 
O_SYNC or something.

We can actually append and sync via {{Files.write}} with 
{{StandardOpenOption.SYNC}}. 
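
A minimal sketch of that approach (the path and record contents below are 
illustrative only, not the real transaction log format):
{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class AppendAndSyncSketch
{
    // Appends one record to the log file and forces it to disk in the same call.
    public static void append(Path logFile, String record) throws IOException
    {
        Files.write(logFile,
                    (record + System.lineSeparator()).getBytes(StandardCharsets.UTF_8),
                    StandardOpenOption.CREATE,
                    StandardOpenOption.APPEND,
                    StandardOpenOption.SYNC);
    }

    public static void main(String[] args) throws IOException
    {
        append(Paths.get("/tmp/example-txn.log"), "add:[/tmp/example-Data.db]");
    }
}
{code}
Opening with {{SYNC}} asks that the data and metadata of each write be flushed 
to the underlying device before the call returns, at the cost of opening and 
syncing the file on every append.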

Latest commit is 
[here|https://github.com/stef1927/cassandra/commit/4866ce93328108ab09dcc10596ab1ea0c4b76f9d].

I'm monitoring the dtests; it seems we get lots of timeouts, but when I run the 
tests locally they pass. Let's see if the next batch is better.

> Potential issue with LogTransaction as it only checks in a single directory 
> for files
> -
>
> Key: CASSANDRA-10421
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10421
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Stefania
>Priority: Blocker
> Fix For: 3.0.0 rc2
>
>
> When creating a new LogTransaction we try to create the new logfile in the 
> same directory as the one we are writing to, but as we use 
> {{[directories.getDirectoryForNewSSTables()|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/lifecycle/LogTransaction.java#L125]}}
>  this might end up in "any" of the configured data directories. If it does, 
> we will not be able to clean up leftovers, as we check for files in the same 
> directory where the logfile was created: 
> https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/lifecycle/LogRecord.java#L163
> cc [~Stefania]





[jira] [Updated] (CASSANDRA-10534) CompressionInfo not being fsynced on close

2015-10-16 Thread Sharvanath Pathak (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sharvanath Pathak updated CASSANDRA-10534:
--
Description: 
I was seeing SSTable corruption due to a CompressionInfo.db file of size 0, 
this happened multiple times in our testing with hard node reboots. After some 
investigation it seems this file is not being fsynced, which can 
potentially lead to data corruption. I am working with version 2.1.9.

I checked for fsync calls using strace, and found them happening for all but 
the following components: CompressionInfo, TOC.txt and digest.sha1. All of 
these but the CompressionInfo seem tolerable. Also a quick look through the code 
did not reveal any fsync calls. Moreover, I suspect the commit  
4e95953f29d89a441dfe06d3f0393ed7dd8586df 
(https://github.com/apache/cassandra/commit/4e95953f29d89a441dfe06d3f0393ed7dd8586df#diff-b7e48a1398e39a936c11d0397d5d1966R344)
 has caused the regression, which removed the 
{noformat}
 getChannel().force(true);
{noformat}
from CompressionMetadata.Writer.close.

Following is the trace I saw in system.log:
{noformat}
INFO  [SSTableBatchOpen:1] 2015-09-29 19:24:39,170 SSTableReader.java:478 - 
Opening 
/var/lib/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-13368
 (79 bytes)
ERROR [SSTableBatchOpen:1] 2015-09-29 19:24:39,177 FileUtils.java:447 - Exiting 
forcefully due to file system exception on startup, disk failure policy "stop"
org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException
at 
org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:131)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:168)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:752) 
~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:703) 
~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:491) 
~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:387) 
~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:534) 
~[apache-cassandra-2.1.9.jar:2.1.9]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
[na:1.7.0_80]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
[na:1.7.0_80]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
[na:1.7.0_80]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_80]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
Caused by: java.io.EOFException: null
at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:340) 
~[na:1.7.0_80]
at java.io.DataInputStream.readUTF(DataInputStream.java:589) 
~[na:1.7.0_80]
at java.io.DataInputStream.readUTF(DataInputStream.java:564) 
~[na:1.7.0_80]
at 
org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:106)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
... 14 common frames omitted
{noformat}

  was:
I was seeing SSTable corruption due to a CompressionInfo.db file of size 0, 
this happened multiple times in our testing with hard node reboots. After some 
investigation it seems like these file is not being fsynced, and that can 
potentially lead to data corruption. I am working with version 2.1.9.

I checked for fsync calls using strace, and found them happening for all but 
the following components: CompressionInfo, TOC.txt and digest.sha1. All seem 
tolerable but the  CompressionInfo seem tolerable. Also a quick look through 
the code and did not revealed any fsync calls. Moreover, I suspect the commit  
4e95953f29d89a441dfe06d3f0393ed7dd8586df 
(https://github.com/apache/cassandra/commit/4e95953f29d89a441dfe06d3f0393ed7dd8586df#diff-b7e48a1398e39a936c11d0397d5d1966R344)
 has caused the regression, which removed the 
{noformat}
 getChannel().force(true);
{noformat}
from CompressionMetadata.Writer.close.

Following is the trace I saw in system.log

{noformat}
INFO  [SSTableBatchOpen:1] 2015-09-29 19:24:39,170 

[jira] [Updated] (CASSANDRA-10534) CompressionInfo not being fsynced on close

2015-10-16 Thread Sharvanath Pathak (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sharvanath Pathak updated CASSANDRA-10534:
--
Description: 
I was seeing SSTable corruption due to a CompressionInfo.db file of size 0, 
this happened multiple times in our testing with hard node reboots. After some 
investigation it seems this file is not being fsynced, which can 
potentially lead to data corruption. I am working with version 2.1.9.

I checked for fsync calls using strace, and found them happening for all but 
the following components: CompressionInfo, TOC.txt and digest.sha1. All of 
these but the CompressionInfo seem tolerable. Also a quick look through the 
code did not reveal any fsync calls. Moreover, I suspect the commit  
4e95953f29d89a441dfe06d3f0393ed7dd8586df 
(https://github.com/apache/cassandra/commit/4e95953f29d89a441dfe06d3f0393ed7dd8586df#diff-b7e48a1398e39a936c11d0397d5d1966R344)
 has caused the regression, which removed the line
{noformat}
 getChannel().force(true);
{noformat}
from CompressionMetadata.Writer.close.

Following is the trace I saw in system.log:
{noformat}
INFO  [SSTableBatchOpen:1] 2015-09-29 19:24:39,170 SSTableReader.java:478 - 
Opening 
/var/lib/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-13368
 (79 bytes)
ERROR [SSTableBatchOpen:1] 2015-09-29 19:24:39,177 FileUtils.java:447 - Exiting 
forcefully due to file system exception on startup, disk failure policy "stop"
org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException
at 
org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:131)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:168)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:752) 
~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:703) 
~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:491) 
~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:387) 
~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:534) 
~[apache-cassandra-2.1.9.jar:2.1.9]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
[na:1.7.0_80]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
[na:1.7.0_80]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
[na:1.7.0_80]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_80]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
Caused by: java.io.EOFException: null
at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:340) 
~[na:1.7.0_80]
at java.io.DataInputStream.readUTF(DataInputStream.java:589) 
~[na:1.7.0_80]
at java.io.DataInputStream.readUTF(DataInputStream.java:564) 
~[na:1.7.0_80]
at 
org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:106)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
... 14 common frames omitted
{noformat}

Following is the result of ls on the data directory of a corrupted SSTable 
after the hard reboot:
{noformat}
$ ls -l 
/var/lib/cassandra/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/
total 60
-rw-r--r-- 1 cassandra cassandra 0 Oct 15 09:31 
system-sstable_activity-ka-1-CompressionInfo.db
-rw-r--r-- 1 cassandra cassandra  9740 Oct 15 09:31 
system-sstable_activity-ka-1-Data.db
-rw-r--r-- 1 cassandra cassandra 0 Oct 15 09:31 
system-sstable_activity-ka-1-Digest.sha1
-rw-r--r-- 1 cassandra cassandra   880 Oct 15 09:31 
system-sstable_activity-ka-1-Filter.db
-rw-r--r-- 1 cassandra cassandra 34000 Oct 15 09:31 
system-sstable_activity-ka-1-Index.db
-rw-r--r-- 1 cassandra cassandra  7338 Oct 15 09:31 
system-sstable_activity-ka-1-Statistics.db
-rw-r--r-- 1 cassandra cassandra 0 Oct 15 09:31 
system-sstable_activity-ka-1-TOC.txt
{noformat}

  was:
I was seeing SSTable corruption due to a CompressionInfo.db file of size 0, 
this happened multiple times in our testing with hard node reboots. After some 

[jira] [Assigned] (CASSANDRA-10534) CompressionInfo not being fsynced on close

2015-10-16 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania reassigned CASSANDRA-10534:


Assignee: Stefania

> CompressionInfo not being fsynced on close
> --
>
> Key: CASSANDRA-10534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10534
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Sharvanath Pathak
>Assignee: Stefania
> Fix For: 2.1.x
>
>
> I was seeing SSTable corruption due to a CompressionInfo.db file of size 0, 
> this happened multiple times in our testing with hard node reboots. After 
> some investigation it seems this file is not being fsynced, which can 
> potentially lead to data corruption. I am working with version 2.1.9.
> I checked for fsync calls using strace, and found them happening for all but 
> the following components: CompressionInfo, TOC.txt and digest.sha1. All of 
> these but the CompressionInfo seem tolerable. Also a quick look through the 
> code did not reveal any fsync calls. Moreover, I suspect the commit  
> 4e95953f29d89a441dfe06d3f0393ed7dd8586df 
> (https://github.com/apache/cassandra/commit/4e95953f29d89a441dfe06d3f0393ed7dd8586df#diff-b7e48a1398e39a936c11d0397d5d1966R344)
>  has caused the regression, which removed the line
> {noformat}
>  getChannel().force(true);
> {noformat}
> from CompressionMetadata.Writer.close.
> Following is the trace I saw in system.log:
> {noformat}
> INFO  [SSTableBatchOpen:1] 2015-09-29 19:24:39,170 SSTableReader.java:478 - 
> Opening 
> /var/lib/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-13368
>  (79 bytes)
> ERROR [SSTableBatchOpen:1] 2015-09-29 19:24:39,177 FileUtils.java:447 - 
> Exiting forcefully due to file system exception on startup, disk failure 
> policy "stop"
> org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:131)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:168)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:752) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:703) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:491) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:387) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:534) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> [na:1.7.0_80]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> [na:1.7.0_80]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_80]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_80]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
> Caused by: java.io.EOFException: null
> at 
> java.io.DataInputStream.readUnsignedShort(DataInputStream.java:340) 
> ~[na:1.7.0_80]
> at java.io.DataInputStream.readUTF(DataInputStream.java:589) 
> ~[na:1.7.0_80]
> at java.io.DataInputStream.readUTF(DataInputStream.java:564) 
> ~[na:1.7.0_80]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:106)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> ... 14 common frames omitted
> {noformat}
> Following is the result of ls on the data directory of a corrupted SSTable 
> after the hard reboot:
> {noformat}
> $ ls -l 
> /var/lib/cassandra/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/
> total 60
> -rw-r--r-- 1 cassandra cassandra 0 Oct 15 09:31 
> system-sstable_activity-ka-1-CompressionInfo.db
> -rw-r--r-- 1 cassandra cassandra  9740 Oct 15 09:31 
> system-sstable_activity-ka-1-Data.db
> -rw-r--r-- 1 cassandra cassandra 0 Oct 15 09:31 
> 

[jira] [Comment Edited] (CASSANDRA-10538) Assertion failed in LogFile when disk is full

2015-10-16 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14960291#comment-14960291
 ] 

Benedict edited comment on CASSANDRA-10538 at 10/16/15 7:18 AM:


Yes, it looks like we did. This only matters for abort, since for commit we 
want to throw either way - but we expect to do this in the caller 
({{LifecycleTransaction}}), so catching and returning them in both _is_ most 
suitable.

Once things quiet down we should really try to introduce fault injection tests 
for this subsystem so we can easily cover this kind of scenario.

LGTM, I'll commit once we have clean CI


was (Author: benedict):
Yes, it looks like we did. This only matters for abort, since for commit we 
want to throw either way - but we expect to do this in the caller 
({{LifecycleTransaction}}), so catching and returning them in both _is_ most 
suitable.

Once things quiet down we should really try to introduce fault injection tests 
for this subsystem so we can easily cover this kind of scenario.


> Assertion failed in LogFile when disk is full
> -
>
> Key: CASSANDRA-10538
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10538
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 3.x
>
> Attachments: 
> ma_txn_compaction_67311da0-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_696059b0-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_8ac58b70-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_8be24610-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_95500fc0-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_a41caa90-72b4-11e5-9eb9-b14fa4bbe709.log
>
>
> [~carlyeks] was running a stress job which filled up the disk. At the end of 
> the system logs there are several assertion errors:
> {code}
> ERROR [CompactionExecutor:1] 2015-10-14 20:46:55,467 CassandraDaemon.java:195 
> - Exception in thread Thread[CompactionExecutor:1,1,main]
> java.lang.RuntimeException: Insufficient disk space to write 2097152 bytes
> at 
> org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.getWriteDirectory(CompactionAwareWriter.java:156)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.writers.MaxSSTableSizeWriter.realAppend(MaxSSTableSizeWriter.java:77)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.append(CompactionAwareWriter.java:110)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:182)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:220)
>  ~[main/:na]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_40]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_40]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40]
> INFO  [IndexSummaryManager:1] 2015-10-14 21:10:40,099 
> IndexSummaryManager.java:257 - Redistributing index summaries
> ERROR [IndexSummaryManager:1] 2015-10-14 21:10:42,275 
> CassandraDaemon.java:195 - Exception in thread 
> Thread[IndexSummaryManager:1,1,main]
> java.lang.AssertionError: Already completed!
> at org.apache.cassandra.db.lifecycle.LogFile.abort(LogFile.java:221) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.lifecycle.LogTransaction.doAbort(LogTransaction.java:376)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.abort(Transactional.java:144)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.lifecycle.LifecycleTransaction.doAbort(LifecycleTransaction.java:259)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.abort(Transactional.java:144)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.abort(Transactional.java:193)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.close(Transactional.java:158)
>  ~[main/:na]
> at 
> 

[jira] [Updated] (CASSANDRA-10538) Assertion failed in LogFile when disk is full

2015-10-16 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-10538:
-
Reviewer: Benedict

> Assertion failed in LogFile when disk is full
> -
>
> Key: CASSANDRA-10538
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10538
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 3.x
>
> Attachments: 
> ma_txn_compaction_67311da0-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_696059b0-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_8ac58b70-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_8be24610-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_95500fc0-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_a41caa90-72b4-11e5-9eb9-b14fa4bbe709.log
>
>
> [~carlyeks] was running a stress job which filled up the disk. At the end of 
> the system logs there are several assertion errors:
> {code}
> ERROR [CompactionExecutor:1] 2015-10-14 20:46:55,467 CassandraDaemon.java:195 
> - Exception in thread Thread[CompactionExecutor:1,1,main]
> java.lang.RuntimeException: Insufficient disk space to write 2097152 bytes
> at 
> org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.getWriteDirectory(CompactionAwareWriter.java:156)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.writers.MaxSSTableSizeWriter.realAppend(MaxSSTableSizeWriter.java:77)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.append(CompactionAwareWriter.java:110)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:182)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:220)
>  ~[main/:na]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_40]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_40]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40]
> INFO  [IndexSummaryManager:1] 2015-10-14 21:10:40,099 
> IndexSummaryManager.java:257 - Redistributing index summaries
> ERROR [IndexSummaryManager:1] 2015-10-14 21:10:42,275 
> CassandraDaemon.java:195 - Exception in thread 
> Thread[IndexSummaryManager:1,1,main]
> java.lang.AssertionError: Already completed!
> at org.apache.cassandra.db.lifecycle.LogFile.abort(LogFile.java:221) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.lifecycle.LogTransaction.doAbort(LogTransaction.java:376)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.abort(Transactional.java:144)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.lifecycle.LifecycleTransaction.doAbort(LifecycleTransaction.java:259)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.abort(Transactional.java:144)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.abort(Transactional.java:193)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.close(Transactional.java:158)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.IndexSummaryManager.redistributeSummaries(IndexSummaryManager.java:242)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.IndexSummaryManager$1.runMayThrow(IndexSummaryManager.java:134)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[main/:na]
> at org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolE
> {code}
> We should not have an assertion that can fire when the disk is full; we 
> should rather have a runtime exception.
> I also would like to understand exactly what triggered the assertion. 
> {{LifecycleTransaction}} can throw at the beginning of the commit method if 
> it cannot write the record to disk, in which case all we have to do is ensure 
> we update the records in memory after writing to disk (currently we update 
> them before). However, I am not sure this is what 

[jira] [Comment Edited] (CASSANDRA-10538) Assertion failed in LogFile when disk is full

2015-10-16 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14960136#comment-14960136
 ] 

Stefania edited comment on CASSANDRA-10538 at 10/16/15 8:20 AM:


I've created a patch to ensure we update the in-memory records after updating 
the disk state, to prevent the assertion in case we throw in 
{{LifecycleTransaction.doCommit}}. However, we still need to verify that this 
is what actually happened in the logs.

I've also changed {{LogTransaction.doCommit}} and {{doAbort}} so that they 
catch and return runtime exceptions. [~benedict], this is something we missed, 
right?


was (Author: stefania):
I've created a patch to ensure we update the in memory records after updating 
the disk state, to prevent the assertion in case we throw in 
{{LifecycleTransaction.doCommit}}. However we still need to verify this is what 
actually happened in the logs.

I've also changed {{LogTransaction.doCommit} and {{doAbort}} so that they catch 
and return runtime exceptions. [~benedict] is this something we missed right?
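
A minimal sketch of the "catch and return" pattern discussed above; the names 
below are simplified stand-ins, not the real {{Transactional}} API:
{code}
abstract class TransactionalSketch
{
    // The on-disk part of the abort; may fail, for example when the disk is full.
    protected abstract void abortOnDisk() throws Exception;

    // Accumulates any failure and returns it instead of throwing,
    // leaving the decision about how to react to the caller.
    Throwable doAbort(Throwable accumulate)
    {
        try
        {
            abortOnDisk();
        }
        catch (Throwable t)
        {
            if (accumulate == null)
                accumulate = t;
            else
                accumulate.addSuppressed(t);
        }
        return accumulate;
    }
}
{code}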

> Assertion failed in LogFile when disk is full
> -
>
> Key: CASSANDRA-10538
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10538
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 3.x
>
> Attachments: 
> ma_txn_compaction_67311da0-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_696059b0-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_8ac58b70-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_8be24610-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_95500fc0-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_a41caa90-72b4-11e5-9eb9-b14fa4bbe709.log
>
>
> [~carlyeks] was running a stress job which filled up the disk. At the end of 
> the system logs there are several assertion errors:
> {code}
> ERROR [CompactionExecutor:1] 2015-10-14 20:46:55,467 CassandraDaemon.java:195 
> - Exception in thread Thread[CompactionExecutor:1,1,main]
> java.lang.RuntimeException: Insufficient disk space to write 2097152 bytes
> at 
> org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.getWriteDirectory(CompactionAwareWriter.java:156)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.writers.MaxSSTableSizeWriter.realAppend(MaxSSTableSizeWriter.java:77)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.append(CompactionAwareWriter.java:110)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:182)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:220)
>  ~[main/:na]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_40]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_40]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40]
> INFO  [IndexSummaryManager:1] 2015-10-14 21:10:40,099 
> IndexSummaryManager.java:257 - Redistributing index summaries
> ERROR [IndexSummaryManager:1] 2015-10-14 21:10:42,275 
> CassandraDaemon.java:195 - Exception in thread 
> Thread[IndexSummaryManager:1,1,main]
> java.lang.AssertionError: Already completed!
> at org.apache.cassandra.db.lifecycle.LogFile.abort(LogFile.java:221) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.lifecycle.LogTransaction.doAbort(LogTransaction.java:376)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.abort(Transactional.java:144)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.lifecycle.LifecycleTransaction.doAbort(LifecycleTransaction.java:259)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.abort(Transactional.java:144)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.abort(Transactional.java:193)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.close(Transactional.java:158)
>  ~[main/:na]
> at 

[jira] [Commented] (CASSANDRA-10534) CompressionInfo not being fsynced on close

2015-10-16 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14960272#comment-14960272
 ] 

Stefania commented on CASSANDRA-10534:
--

[~benedict], [~sharvanath]'s analysis is correct: since CASSANDRA-6916 we no 
longer fsync the compression metadata after writing it. I've attached a small 
[patch|https://github.com/stef1927/cassandra/commits/10534-2.1] that should fix 
this; can you take a look?

If the patch is fine I will run CI on the 2.1+ branches. 
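
A hedged sketch of the shape of such a fix (simplified for illustration; this 
is not the actual patch): flush and fsync the metadata stream before closing 
it, restoring the effect of the removed {{getChannel().force(true)}} call.
{code}
import java.io.DataOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class CompressionMetadataWriterSketch
{
    private final FileOutputStream fos;
    private final DataOutputStream out;

    public CompressionMetadataWriterSketch(String path) throws IOException
    {
        this.fos = new FileOutputStream(path);
        this.out = new DataOutputStream(fos);
    }

    public void close() throws IOException
    {
        out.flush();                  // push buffered bytes down to the file descriptor
        fos.getChannel().force(true); // fsync data and metadata to disk
        out.close();
    }
}
{code}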

> CompressionInfo not being fsynced on close
> --
>
> Key: CASSANDRA-10534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10534
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Sharvanath Pathak
>Assignee: Stefania
> Fix For: 2.1.x
>
>
> I was seeing SSTable corruption due to a CompressionInfo.db file of size 0, 
> this happened multiple times in our testing with hard node reboots. After 
> some investigation it seems this file is not being fsynced, which can 
> potentially lead to data corruption. I am working with version 2.1.9.
> I checked for fsync calls using strace, and found them happening for all but 
> the following components: CompressionInfo, TOC.txt and digest.sha1. All of 
> these but the CompressionInfo seem tolerable. Also a quick look through the 
> code did not reveal any fsync calls. Moreover, I suspect the commit  
> 4e95953f29d89a441dfe06d3f0393ed7dd8586df 
> (https://github.com/apache/cassandra/commit/4e95953f29d89a441dfe06d3f0393ed7dd8586df#diff-b7e48a1398e39a936c11d0397d5d1966R344)
>  has caused the regression, which removed the line
> {noformat}
>  getChannel().force(true);
> {noformat}
> from CompressionMetadata.Writer.close.
> Following is the trace I saw in system.log:
> {noformat}
> INFO  [SSTableBatchOpen:1] 2015-09-29 19:24:39,170 SSTableReader.java:478 - 
> Opening 
> /var/lib/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-13368
>  (79 bytes)
> ERROR [SSTableBatchOpen:1] 2015-09-29 19:24:39,177 FileUtils.java:447 - 
> Exiting forcefully due to file system exception on startup, disk failure 
> policy "stop"
> org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:131)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:168)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:752) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:703) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:491) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:387) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:534) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> [na:1.7.0_80]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> [na:1.7.0_80]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_80]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_80]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
> Caused by: java.io.EOFException: null
> at 
> java.io.DataInputStream.readUnsignedShort(DataInputStream.java:340) 
> ~[na:1.7.0_80]
> at java.io.DataInputStream.readUTF(DataInputStream.java:589) 
> ~[na:1.7.0_80]
> at java.io.DataInputStream.readUTF(DataInputStream.java:564) 
> ~[na:1.7.0_80]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:106)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> ... 14 common frames omitted
> {noformat}
> Following is the result of ls on the data directory of a corrupted SSTable 
> after the hard reboot:
> {noformat}
> $ ls -l 
> 

[jira] [Commented] (CASSANDRA-10534) CompressionInfo not being fsynced on close

2015-10-16 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14960278#comment-14960278
 ] 

Benedict commented on CASSANDRA-10534:
--

Hi [~sharvanath]: thanks for taking the time to strace this and find our (my) 
mistake. 

[~stefania]: thanks for providing a patch. LGTM. I'll commit once we have clean 
CI results.

> CompressionInfo not being fsynced on close
> --
>
> Key: CASSANDRA-10534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10534
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Sharvanath Pathak
>Assignee: Stefania
> Fix For: 2.1.x
>
>
> I was seeing SSTable corruption due to a CompressionInfo.db file of size 0, 
> this happened multiple times in our testing with hard node reboots. After 
> some investigation it seems this file is not being fsynced, which can 
> potentially lead to data corruption. I am working with version 2.1.9.
> I checked for fsync calls using strace, and found them happening for all but 
> the following components: CompressionInfo, TOC.txt and digest.sha1. All of 
> these but the CompressionInfo seem tolerable. Also a quick look through the 
> code did not reveal any fsync calls. Moreover, I suspect the commit  
> 4e95953f29d89a441dfe06d3f0393ed7dd8586df 
> (https://github.com/apache/cassandra/commit/4e95953f29d89a441dfe06d3f0393ed7dd8586df#diff-b7e48a1398e39a936c11d0397d5d1966R344)
>  has caused the regression, which removed the line
> {noformat}
>  getChannel().force(true);
> {noformat}
> from CompressionMetadata.Writer.close.
> Following is the trace I saw in system.log:
> {noformat}
> INFO  [SSTableBatchOpen:1] 2015-09-29 19:24:39,170 SSTableReader.java:478 - 
> Opening 
> /var/lib/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-13368
>  (79 bytes)
> ERROR [SSTableBatchOpen:1] 2015-09-29 19:24:39,177 FileUtils.java:447 - 
> Exiting forcefully due to file system exception on startup, disk failure 
> policy "stop"
> org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:131)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:168)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:752) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:703) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:491) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:387) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:534) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> [na:1.7.0_80]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> [na:1.7.0_80]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_80]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_80]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
> Caused by: java.io.EOFException: null
> at 
> java.io.DataInputStream.readUnsignedShort(DataInputStream.java:340) 
> ~[na:1.7.0_80]
> at java.io.DataInputStream.readUTF(DataInputStream.java:589) 
> ~[na:1.7.0_80]
> at java.io.DataInputStream.readUTF(DataInputStream.java:564) 
> ~[na:1.7.0_80]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:106)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> ... 14 common frames omitted
> {noformat}
> Following is the result of ls on the data directory of a corrupted SSTable 
> after the hard reboot:
> {noformat}
> $ ls -l 
> /var/lib/cassandra/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/
> total 60
> -rw-r--r-- 1 cassandra cassandra 0 Oct 15 09:31 
> 

[jira] [Commented] (CASSANDRA-10538) Assertion failed in LogFile when disk is full

2015-10-16 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14960291#comment-14960291
 ] 

Benedict commented on CASSANDRA-10538:
--

Yes, it looks like we did. This only matters for abort, since for commit we 
want to throw either way - but we expect to do this in the caller 
({{LifecycleTransaction}}), so catching and returning them in both _is_ most 
suitable.

Once things quiet down we should really try to introduce fault injection tests 
for this subsystem so we can easily cover this kind of scenario.


> Assertion failed in LogFile when disk is full
> -
>
> Key: CASSANDRA-10538
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10538
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 3.x
>
> Attachments: 
> ma_txn_compaction_67311da0-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_696059b0-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_8ac58b70-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_8be24610-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_95500fc0-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_a41caa90-72b4-11e5-9eb9-b14fa4bbe709.log
>
>
> [~carlyeks] was running a stress job which filled up the disk. At the end of 
> the system logs there are several assertion errors:
> {code}
> ERROR [CompactionExecutor:1] 2015-10-14 20:46:55,467 CassandraDaemon.java:195 
> - Exception in thread Thread[CompactionExecutor:1,1,main]
> java.lang.RuntimeException: Insufficient disk space to write 2097152 bytes
> at 
> org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.getWriteDirectory(CompactionAwareWriter.java:156)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.writers.MaxSSTableSizeWriter.realAppend(MaxSSTableSizeWriter.java:77)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.append(CompactionAwareWriter.java:110)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:182)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:220)
>  ~[main/:na]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_40]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_40]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40]
> INFO  [IndexSummaryManager:1] 2015-10-14 21:10:40,099 
> IndexSummaryManager.java:257 - Redistributing index summaries
> ERROR [IndexSummaryManager:1] 2015-10-14 21:10:42,275 
> CassandraDaemon.java:195 - Exception in thread 
> Thread[IndexSummaryManager:1,1,main]
> java.lang.AssertionError: Already completed!
> at org.apache.cassandra.db.lifecycle.LogFile.abort(LogFile.java:221) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.lifecycle.LogTransaction.doAbort(LogTransaction.java:376)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.abort(Transactional.java:144)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.lifecycle.LifecycleTransaction.doAbort(LifecycleTransaction.java:259)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.abort(Transactional.java:144)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.abort(Transactional.java:193)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.close(Transactional.java:158)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.IndexSummaryManager.redistributeSummaries(IndexSummaryManager.java:242)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.IndexSummaryManager$1.runMayThrow(IndexSummaryManager.java:134)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[main/:na]
> at org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolE
> {code}
> We should not have an assertion that can fire when the disk is full; we 
> should rather 

[jira] [Comment Edited] (CASSANDRA-10534) CompressionInfo not being fsynced on close

2015-10-16 Thread Sharvanath Pathak (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14960309#comment-14960309
 ] 

Sharvanath Pathak edited comment on CASSANDRA-10534 at 10/16/15 7:30 AM:
-

[~benedict] [~Stefania] thanks for taking quick action on it.


was (Author: sharvanath):
@benedict @stefania thanks for taking quick action on it.

> CompressionInfo not being fsynced on close
> --
>
> Key: CASSANDRA-10534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10534
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Sharvanath Pathak
>Assignee: Stefania
> Fix For: 2.1.x
>
>
> I was seeing SSTable corruption due to a CompressionInfo.db file of size 0, 
> this happened multiple times in our testing with hard node reboots. After 
> some investigation it seems this file is not being fsynced, which can 
> potentially lead to data corruption. I am working with version 2.1.9.
> I checked for fsync calls using strace, and found them happening for all but 
> the following components: CompressionInfo, TOC.txt and digest.sha1. All of 
> these but the CompressionInfo seem tolerable. Also a quick look through the 
> code did not reveal any fsync calls. Moreover, I suspect the commit  
> 4e95953f29d89a441dfe06d3f0393ed7dd8586df 
> (https://github.com/apache/cassandra/commit/4e95953f29d89a441dfe06d3f0393ed7dd8586df#diff-b7e48a1398e39a936c11d0397d5d1966R344)
>  has caused the regression, which removed the line
> {noformat}
>  getChannel().force(true);
> {noformat}
> from CompressionMetadata.Writer.close.
> Following is the trace I saw in system.log:
> {noformat}
> INFO  [SSTableBatchOpen:1] 2015-09-29 19:24:39,170 SSTableReader.java:478 - 
> Opening 
> /var/lib/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-13368
>  (79 bytes)
> ERROR [SSTableBatchOpen:1] 2015-09-29 19:24:39,177 FileUtils.java:447 - 
> Exiting forcefully due to file system exception on startup, disk failure 
> policy "stop"
> org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:131)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:168)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:752) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:703) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:491) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:387) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:534) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> [na:1.7.0_80]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> [na:1.7.0_80]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_80]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_80]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
> Caused by: java.io.EOFException: null
> at 
> java.io.DataInputStream.readUnsignedShort(DataInputStream.java:340) 
> ~[na:1.7.0_80]
> at java.io.DataInputStream.readUTF(DataInputStream.java:589) 
> ~[na:1.7.0_80]
> at java.io.DataInputStream.readUTF(DataInputStream.java:564) 
> ~[na:1.7.0_80]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:106)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> ... 14 common frames omitted
> {noformat}
> Following is the result of ls on the data directory of a corrupted SSTable 
> after the hard reboot:
> {noformat}
> $ ls -l 
> /var/lib/cassandra/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/
> total 60
> -rw-r--r-- 1 cassandra cassandra  

[jira] [Commented] (CASSANDRA-10534) CompressionInfo not being fsynced on close

2015-10-16 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14960313#comment-14960313
 ] 

Stefania commented on CASSANDRA-10534:
--

[~Benedict]: I merely wanted to fsync in case of a partial write, but if it 
looks too unusual we can have it in the try block. I amended the commit and 
force pushed. The 2.2 patch is a rewrite because the code has diverged too 
much. I believe the FOS close is idempotent, so we are OK if we close it twice, 
but please double check. The 2.2 patch then merges without conflicts into 3.0.
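
As a minimal sketch of the alternative being discussed (the file name is 
illustrative only): sync inside the try block and close in finally, relying on 
{{FileOutputStream.close()}} being a no-op when called a second time.
{code}
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class SyncThenCloseSketch
{
    public static void writeAndSync(byte[] bytes) throws IOException
    {
        FileOutputStream out = new FileOutputStream(new File("/tmp/example-CompressionInfo.db"));
        try
        {
            out.write(bytes);
            out.flush();
            out.getChannel().force(true); // fsync data and metadata before closing
            out.close();                  // close inside try so a close failure is not masked
        }
        finally
        {
            out.close();                  // idempotent: no effect if already closed
        }
    }
}
{code}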

CI should eventually appear here:

http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-10534-2.1-dtest
http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-10534-2.1-testall

http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-10534-2.2-dtest
http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-10534-2.2-testall

http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-10534-3.0-dtest
http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-10534-3.0-testall


> CompressionInfo not being fsynced on close
> --
>
> Key: CASSANDRA-10534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10534
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Sharvanath Pathak
>Assignee: Stefania
> Fix For: 2.1.x
>
>
> I was seeing SSTable corruption due to a CompressionInfo.db file of size 0, 
> this happened multiple times in our testing with hard node reboots. After 
> some investigation it seems this file is not being fsynced, which can 
> potentially lead to data corruption. I am working with version 2.1.9.
> I checked for fsync calls using strace, and found them happening for all but 
> the following components: CompressionInfo, TOC.txt and digest.sha1. All of 
> these but the CompressionInfo seem tolerable. Also a quick look through the 
> code did not reveal any fsync calls. Moreover, I suspect the commit  
> 4e95953f29d89a441dfe06d3f0393ed7dd8586df 
> (https://github.com/apache/cassandra/commit/4e95953f29d89a441dfe06d3f0393ed7dd8586df#diff-b7e48a1398e39a936c11d0397d5d1966R344)
>  has caused the regression, which removed the line
> {noformat}
>  getChannel().force(true);
> {noformat}
> from CompressionMetadata.Writer.close.
> Following is the trace I saw in system.log:
> {noformat}
> INFO  [SSTableBatchOpen:1] 2015-09-29 19:24:39,170 SSTableReader.java:478 - 
> Opening 
> /var/lib/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-13368
>  (79 bytes)
> ERROR [SSTableBatchOpen:1] 2015-09-29 19:24:39,177 FileUtils.java:447 - 
> Exiting forcefully due to file system exception on startup, disk failure 
> policy "stop"
> org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:131)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:168)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:752) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:703) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:491) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:387) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:534) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> [na:1.7.0_80]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> [na:1.7.0_80]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_80]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_80]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
> Caused by: java.io.EOFException: null
> at 
> 

[jira] [Updated] (CASSANDRA-10534) CompressionInfo not being fsynced on close

2015-10-16 Thread Sharvanath Pathak (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sharvanath Pathak updated CASSANDRA-10534:
--
Description: 
I was seeing SSTable corruption due to a CompressionInfo.db file of size 0; 
this happened multiple times in our testing with hard node reboots. After some 
investigation it seems this file is not being fsynced, and that can 
potentially lead to data corruption. I am working with version 2.1.9.

I checked for fsync calls using strace, and found them happening for all but 
the following components: CompressionInfo, TOC.txt and digest.sha1. All of 
these except the CompressionInfo seem tolerable. A quick look through the 
code did not reveal any fsync calls either. Moreover, I suspect the regression was 
caused by commit 4e95953f29d89a441dfe06d3f0393ed7dd8586df 
(https://github.com/apache/cassandra/commit/4e95953f29d89a441dfe06d3f0393ed7dd8586df#diff-b7e48a1398e39a936c11d0397d5d1966R344),
 which removed the line
{noformat}
 getChannel().force(true);
{noformat}
from CompressionMetadata.Writer.close.

Following is the trace I saw in system.log:
{noformat}
INFO  [SSTableBatchOpen:1] 2015-09-29 19:24:39,170 SSTableReader.java:478 - 
Opening 
/var/lib/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-13368
 (79 bytes)
ERROR [SSTableBatchOpen:1] 2015-09-29 19:24:39,177 FileUtils.java:447 - Exiting 
forcefully due to file system exception on startup, disk failure policy "stop"
org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException
at 
org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:131)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:168)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:752) 
~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:703) 
~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:491) 
~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:387) 
~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:534) 
~[apache-cassandra-2.1.9.jar:2.1.9]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
[na:1.7.0_80]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
[na:1.7.0_80]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
[na:1.7.0_80]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_80]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
Caused by: java.io.EOFException: null
at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:340) 
~[na:1.7.0_80]
at java.io.DataInputStream.readUTF(DataInputStream.java:589) 
~[na:1.7.0_80]
at java.io.DataInputStream.readUTF(DataInputStream.java:564) 
~[na:1.7.0_80]
at 
org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:106)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
... 14 common frames omitted
{noformat}

  was:
I was seeing SSTable corruption due to a CompressionInfo.db file of size 0, 
this happened multiple times in our testing with hard node reboots. After some 
investigation it seems like these file is not being fsynced, and that can 
potentially lead to data corruption. I am working with version 2.1.9.

I checked for fsync calls using strace, and found them happening for all but 
the following components: CompressionInfo, TOC.txt and digest.sha1. All of 
these but the CompressionInfo seem tolerable. Also a quick look through the 
code did not reveal any fsync calls. Moreover, I suspect the commit  
4e95953f29d89a441dfe06d3f0393ed7dd8586df 
(https://github.com/apache/cassandra/commit/4e95953f29d89a441dfe06d3f0393ed7dd8586df#diff-b7e48a1398e39a936c11d0397d5d1966R344)
 has caused the regression, which removed the 
{noformat}
 getChannel().force(true);
{noformat}
from CompressionMetadata.Writer.close.

Following is the trace I saw in system.log:
{noformat}
INFO  [SSTableBatchOpen:1] 2015-09-29 19:24:39,170 

[jira] [Updated] (CASSANDRA-10534) CompressionInfo not being fsynced on close

2015-10-16 Thread Sharvanath Pathak (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sharvanath Pathak updated CASSANDRA-10534:
--
Description: 
I was seeing SSTable corruption due to a CompressionInfo.db file of size 0, 
this happened multiple times in our testing with hard node reboots. After some 
investigation it seems like these file is not being fsynced, and that can 
potentially lead to data corruption. I am working with version 2.1.9.

I checked for fsync calls using strace, and found them happening for all but 
the following components: CompressionInfo, TOC.txt and digest.sha1. All of 
these but the CompressionInfo seem tolerable. Also a quick look through the 
code did not reveal any fsync calls. Moreover, I suspect the commit  
4e95953f29d89a441dfe06d3f0393ed7dd8586df 
(https://github.com/apache/cassandra/commit/4e95953f29d89a441dfe06d3f0393ed7dd8586df#diff-b7e48a1398e39a936c11d0397d5d1966R344)
 has caused the regression, which removed the 
{noformat}
 getChannel().force(true);
{noformat}
from CompressionMetadata.Writer.close.

Following is the trace I saw in system.log:
{noformat}
INFO  [SSTableBatchOpen:1] 2015-09-29 19:24:39,170 SSTableReader.java:478 - 
Opening 
/var/lib/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-13368
 (79 bytes)
ERROR [SSTableBatchOpen:1] 2015-09-29 19:24:39,177 FileUtils.java:447 - Exiting 
forcefully due to file system exception on startup, disk failure policy "stop"
org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException
at 
org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:131)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:168)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:752) 
~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:703) 
~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:491) 
~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:387) 
~[apache-cassandra-2.1.9.jar:2.1.9]
at 
org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:534) 
~[apache-cassandra-2.1.9.jar:2.1.9]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
[na:1.7.0_80]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
[na:1.7.0_80]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
[na:1.7.0_80]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_80]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
Caused by: java.io.EOFException: null
at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:340) 
~[na:1.7.0_80]
at java.io.DataInputStream.readUTF(DataInputStream.java:589) 
~[na:1.7.0_80]
at java.io.DataInputStream.readUTF(DataInputStream.java:564) 
~[na:1.7.0_80]
at 
org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:106)
 ~[apache-cassandra-2.1.9.jar:2.1.9]
... 14 common frames omitted
{noformat}

  was:
I was seeing SSTable corruption due to a CompressionInfo.db file of size 0, 
this happened multiple times in our testing with hard node reboots. After some 
investigation it seems like these file is not being fsynced, and that can 
potentially lead to data corruption. I am working with version 2.1.9.

I checked for fsync calls using strace, and found them happening for all but 
the following components: CompressionInfo, TOC.txt and digest.sha1. All of 
these the CompressionInfo seem tolerable. Also a quick look through the code 
did not reveal any fsync calls. Moreover, I suspect the commit  
4e95953f29d89a441dfe06d3f0393ed7dd8586df 
(https://github.com/apache/cassandra/commit/4e95953f29d89a441dfe06d3f0393ed7dd8586df#diff-b7e48a1398e39a936c11d0397d5d1966R344)
 has caused the regression, which removed the 
{noformat}
 getChannel().force(true);
{noformat}
from CompressionMetadata.Writer.close.

Following is the trace I saw in system.log:
{noformat}
INFO  [SSTableBatchOpen:1] 2015-09-29 19:24:39,170 SSTableReader.java:478 - 

[jira] [Comment Edited] (CASSANDRA-10534) CompressionInfo not being fsynced on close

2015-10-16 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14960278#comment-14960278
 ] 

Benedict edited comment on CASSANDRA-10534 at 10/16/15 7:24 AM:


Hi [~sharvanath]: thanks for taking the time to strace this and find our (my) 
mistake. 

[~stefania]: thanks for providing a patch. -LGTM. I'll commit once we have 
clean CI results.-  Why did you put the {{sync()}} call in the finally block 
before the close, instead of in the try block? At the very least we should 
ensure the close is called after, but it seems easiest to place it in the try 
block.


was (Author: benedict):
Hi [~sharvanath]: thanks for taking the time to strace this and find our (my) 
mistake. 

[~stefania]: thanks for providing a patch. LGTM. I'll commit once we have clean 
CI results.

> CompressionInfo not being fsynced on close
> --
>
> Key: CASSANDRA-10534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10534
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Sharvanath Pathak
>Assignee: Stefania
> Fix For: 2.1.x
>
>
> I was seeing SSTable corruption due to a CompressionInfo.db file of size 0, 
> this happened multiple times in our testing with hard node reboots. After 
> some investigation it seems like these file is not being fsynced, and that 
> can potentially lead to data corruption. I am working with version 2.1.9.
> I checked for fsync calls using strace, and found them happening for all but 
> the following components: CompressionInfo, TOC.txt and digest.sha1. All of 
> these but the CompressionInfo seem tolerable. Also a quick look through the 
> code did not reveal any fsync calls. Moreover, I suspect the commit  
> 4e95953f29d89a441dfe06d3f0393ed7dd8586df 
> (https://github.com/apache/cassandra/commit/4e95953f29d89a441dfe06d3f0393ed7dd8586df#diff-b7e48a1398e39a936c11d0397d5d1966R344)
>  has caused the regression, which removed the line
> {noformat}
>  getChannel().force(true);
> {noformat}
> from CompressionMetadata.Writer.close.
> Following is the trace I saw in system.log:
> {noformat}
> INFO  [SSTableBatchOpen:1] 2015-09-29 19:24:39,170 SSTableReader.java:478 - 
> Opening 
> /var/lib/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-13368
>  (79 bytes)
> ERROR [SSTableBatchOpen:1] 2015-09-29 19:24:39,177 FileUtils.java:447 - 
> Exiting forcefully due to file system exception on startup, disk failure 
> policy "stop"
> org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:131)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:168)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:752) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:703) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:491) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:387) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:534) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> [na:1.7.0_80]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> [na:1.7.0_80]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_80]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_80]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
> Caused by: java.io.EOFException: null
> at 
> java.io.DataInputStream.readUnsignedShort(DataInputStream.java:340) 
> ~[na:1.7.0_80]
> at java.io.DataInputStream.readUTF(DataInputStream.java:589) 
> ~[na:1.7.0_80]
> at java.io.DataInputStream.readUTF(DataInputStream.java:564) 
> ~[na:1.7.0_80]

[jira] [Commented] (CASSANDRA-10534) CompressionInfo not being fsynced on close

2015-10-16 Thread Sharvanath Pathak (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14960309#comment-14960309
 ] 

Sharvanath Pathak commented on CASSANDRA-10534:
---

@benedict @stefania thanks for taking quick action on it.

> CompressionInfo not being fsynced on close
> --
>
> Key: CASSANDRA-10534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10534
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Sharvanath Pathak
>Assignee: Stefania
> Fix For: 2.1.x
>
>
> I was seeing SSTable corruption due to a CompressionInfo.db file of size 0, 
> this happened multiple times in our testing with hard node reboots. After 
> some investigation it seems like these file is not being fsynced, and that 
> can potentially lead to data corruption. I am working with version 2.1.9.
> I checked for fsync calls using strace, and found them happening for all but 
> the following components: CompressionInfo, TOC.txt and digest.sha1. All of 
> these but the CompressionInfo seem tolerable. Also a quick look through the 
> code did not reveal any fsync calls. Moreover, I suspect the commit  
> 4e95953f29d89a441dfe06d3f0393ed7dd8586df 
> (https://github.com/apache/cassandra/commit/4e95953f29d89a441dfe06d3f0393ed7dd8586df#diff-b7e48a1398e39a936c11d0397d5d1966R344)
>  has caused the regression, which removed the line
> {noformat}
>  getChannel().force(true);
> {noformat}
> from CompressionMetadata.Writer.close.
> Following is the trace I saw in system.log:
> {noformat}
> INFO  [SSTableBatchOpen:1] 2015-09-29 19:24:39,170 SSTableReader.java:478 - 
> Opening 
> /var/lib/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-13368
>  (79 bytes)
> ERROR [SSTableBatchOpen:1] 2015-09-29 19:24:39,177 FileUtils.java:447 - 
> Exiting forcefully due to file system exception on startup, disk failure 
> policy "stop"
> org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:131)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:168)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:752) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:703) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:491) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:387) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:534) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> [na:1.7.0_80]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> [na:1.7.0_80]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_80]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_80]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
> Caused by: java.io.EOFException: null
> at 
> java.io.DataInputStream.readUnsignedShort(DataInputStream.java:340) 
> ~[na:1.7.0_80]
> at java.io.DataInputStream.readUTF(DataInputStream.java:589) 
> ~[na:1.7.0_80]
> at java.io.DataInputStream.readUTF(DataInputStream.java:564) 
> ~[na:1.7.0_80]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:106)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> ... 14 common frames omitted
> {noformat}
> Following is the result of ls on the data directory of a corrupted SSTable 
> after the hard reboot:
> {noformat}
> $ ls -l 
> /var/lib/cassandra/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/
> total 60
> -rw-r--r-- 1 cassandra cassandra 0 Oct 15 09:31 
> system-sstable_activity-ka-1-CompressionInfo.db
> -rw-r--r-- 1 cassandra cassandra  9740 Oct 15 09:31 
> 

[jira] [Created] (CASSANDRA-10541) cqlshlib tests cannot run on Windows

2015-10-16 Thread Benjamin Lerer (JIRA)
Benjamin Lerer created CASSANDRA-10541:
--

 Summary: cqlshlib tests cannot run on Windows
 Key: CASSANDRA-10541
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10541
 Project: Cassandra
  Issue Type: Bug
Reporter: Benjamin Lerer
Priority: Minor


When I try to run the {{cqlshlib}} tests on Windows, I get the following error:
{quote}
==
ERROR: Failure: AttributeError ('module' object has no attribute 'symlink')
--
Traceback (most recent call last):
  File "C:\Python27\lib\site-packages\nose\loader.py", line 414, in 
loadTestsFromName
addr.filename, addr.module)
  File "C:\Python27\lib\site-packages\nose\importer.py", line 47, in 
importFromPath
return self.importFromDir(dir_path, fqname)
  File "C:\Python27\lib\site-packages\nose\importer.py", line 94, in 
importFromDir
mod = load_module(part_fqname, fh, filename, desc)
  File "[...]\pylib\cqlshlib\test\__init__.py", line 17, in 
from .cassconnect import create_test_db, remove_test_db
  File "[...]\pylib\cqlshlib\test\cassconnect.py", line 22, in 
from .basecase import cql, cqlsh, cqlshlog, TEST_HOST, TEST_PORT, rundir
  File "[...]\pylib\cqlshlib\test\basecase.py", line 43, in 
os.symlink(path_to_cqlsh, modulepath)
AttributeError: 'module' object has no attribute 'symlink'

--
Ran 1 test in 0.002s

FAILED (errors=1)
{quote}

The problem comes from the fact that Windows has no support for symlinks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Fix cqlsh rules

2015-10-16 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 a61fc01f4 -> 806378c8c


Fix cqlsh rules

patch by Stefania Alborghetti; reviewed by Benjamin Lerer for
CASSANDRA-10415


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/806378c8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/806378c8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/806378c8

Branch: refs/heads/cassandra-2.1
Commit: 806378c8c295fb062f94eb8bf0f719b398d27745
Parents: a61fc01
Author: Stefania Alborghetti 
Authored: Fri Oct 16 11:47:07 2015 +0200
Committer: blerer 
Committed: Fri Oct 16 11:47:07 2015 +0200

--
 pylib/cqlshlib/cqlhandling.py| 10 ++
 pylib/cqlshlib/test/cassconnect.py   |  3 ++-
 pylib/cqlshlib/test/run_cqlsh.py |  2 +-
 pylib/cqlshlib/test/test_cqlsh_completion.py | 11 ---
 pylib/cqlshlib/test/test_cqlsh_output.py |  4 ++--
 pylib/cqlshlib/test/test_keyspace_init.cql   |  2 +-
 6 files changed, 20 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/806378c8/pylib/cqlshlib/cqlhandling.py
--
diff --git a/pylib/cqlshlib/cqlhandling.py b/pylib/cqlshlib/cqlhandling.py
index 5fe311f..9ea30cd 100644
--- a/pylib/cqlshlib/cqlhandling.py
+++ b/pylib/cqlshlib/cqlhandling.py
@@ -18,6 +18,7 @@
 # i.e., stuff that's not necessarily cqlsh-specific
 
 import traceback
+from cassandra.metadata import cql_keywords_reserved
 from . import pylexotron, util
 
 Hint = pylexotron.Hint
@@ -55,6 +56,15 @@ class CqlParsingRuleSet(pylexotron.ParsingRuleSet):
 
 # note: commands_end_with_newline may be extended by callers.
 self.commands_end_with_newline = set()
+self.set_reserved_keywords(cql_keywords_reserved)
+
+def set_reserved_keywords(self, keywords):
+"""
+We cannot let resreved cql keywords be simple 'identifier' since this 
caused
+problems with completion, see CASSANDRA-10415
+"""
+syntax = ' ::= /(' + '|'.join(r'\b{}\b'.format(k) 
for k in keywords) + ')/ ;'
+self.append_rules(syntax)
 
 def completer_for(self, rulename, symname):
 def registrator(f):

http://git-wip-us.apache.org/repos/asf/cassandra/blob/806378c8/pylib/cqlshlib/test/cassconnect.py
--
diff --git a/pylib/cqlshlib/test/cassconnect.py 
b/pylib/cqlshlib/test/cassconnect.py
index 21dddcd..a67407b 100644
--- a/pylib/cqlshlib/test/cassconnect.py
+++ b/pylib/cqlshlib/test/cassconnect.py
@@ -20,6 +20,7 @@ import contextlib
 import tempfile
 import os.path
 from .basecase import cql, cqlsh, cqlshlog, TEST_HOST, TEST_PORT, rundir
+from cassandra.metadata import maybe_escape_name
 from .run_cqlsh import run_cqlsh, call_cqlsh
 
 test_keyspace_init = os.path.join(rundir, 'test_keyspace_init.cql')
@@ -126,7 +127,7 @@ def cql_rule_set():
 return cqlsh.cql3handling.CqlRuleSet
 
 def quote_name(name):
-return cql_rule_set().maybe_escape_name(name)
+return maybe_escape_name(name)
 
 class DEFAULTVAL: pass
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/806378c8/pylib/cqlshlib/test/run_cqlsh.py
--
diff --git a/pylib/cqlshlib/test/run_cqlsh.py b/pylib/cqlshlib/test/run_cqlsh.py
index 6ae295c..88b0ca6 100644
--- a/pylib/cqlshlib/test/run_cqlsh.py
+++ b/pylib/cqlshlib/test/run_cqlsh.py
@@ -231,7 +231,7 @@ class CqlshRunner(ProcRunner):
 self.output_header = self.read_to_next_prompt()
 
 def read_to_next_prompt(self):
-return self.read_until(self.prompt, timeout=4.0)
+return self.read_until(self.prompt, timeout=10.0)
 
 def read_up_to_timeout(self, timeout, blksize=4096):
 output = ProcRunner.read_up_to_timeout(self, timeout, blksize=blksize)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/806378c8/pylib/cqlshlib/test/test_cqlsh_completion.py
--
diff --git a/pylib/cqlshlib/test/test_cqlsh_completion.py 
b/pylib/cqlshlib/test/test_cqlsh_completion.py
index 97bd96b..5f7b6e4 100644
--- a/pylib/cqlshlib/test/test_cqlsh_completion.py
+++ b/pylib/cqlshlib/test/test_cqlsh_completion.py
@@ -142,8 +142,8 @@ class TestCqlshCompletion(CqlshCompletionCase):
 def test_complete_on_empty_string(self):
 self.trycompletions('', choices=('?', 'ALTER', 'BEGIN', 'CAPTURE', 
'CONSISTENCY',
  'COPY', 'CREATE', 'DEBUG', 'DELETE', 
'DESC', 'DESCRIBE',
- 'DROP', 'GRANT', 'HELP', 'INSERT', 
'LIST', 

[1/3] cassandra git commit: Fix cqlsh rules

2015-10-16 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 a52597d81 -> f3143e624


Fix cqlsh rules

patch by Stefania Alborghetti; reviewed by Benjamin Lerer for
CASSANDRA-10415


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/806378c8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/806378c8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/806378c8

Branch: refs/heads/cassandra-3.0
Commit: 806378c8c295fb062f94eb8bf0f719b398d27745
Parents: a61fc01
Author: Stefania Alborghetti 
Authored: Fri Oct 16 11:47:07 2015 +0200
Committer: blerer 
Committed: Fri Oct 16 11:47:07 2015 +0200

--
 pylib/cqlshlib/cqlhandling.py| 10 ++
 pylib/cqlshlib/test/cassconnect.py   |  3 ++-
 pylib/cqlshlib/test/run_cqlsh.py |  2 +-
 pylib/cqlshlib/test/test_cqlsh_completion.py | 11 ---
 pylib/cqlshlib/test/test_cqlsh_output.py |  4 ++--
 pylib/cqlshlib/test/test_keyspace_init.cql   |  2 +-
 6 files changed, 20 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/806378c8/pylib/cqlshlib/cqlhandling.py
--
diff --git a/pylib/cqlshlib/cqlhandling.py b/pylib/cqlshlib/cqlhandling.py
index 5fe311f..9ea30cd 100644
--- a/pylib/cqlshlib/cqlhandling.py
+++ b/pylib/cqlshlib/cqlhandling.py
@@ -18,6 +18,7 @@
 # i.e., stuff that's not necessarily cqlsh-specific
 
 import traceback
+from cassandra.metadata import cql_keywords_reserved
 from . import pylexotron, util
 
 Hint = pylexotron.Hint
@@ -55,6 +56,15 @@ class CqlParsingRuleSet(pylexotron.ParsingRuleSet):
 
 # note: commands_end_with_newline may be extended by callers.
 self.commands_end_with_newline = set()
+self.set_reserved_keywords(cql_keywords_reserved)
+
+def set_reserved_keywords(self, keywords):
+"""
+We cannot let resreved cql keywords be simple 'identifier' since this 
caused
+problems with completion, see CASSANDRA-10415
+"""
+syntax = ' ::= /(' + '|'.join(r'\b{}\b'.format(k) 
for k in keywords) + ')/ ;'
+self.append_rules(syntax)
 
 def completer_for(self, rulename, symname):
 def registrator(f):

http://git-wip-us.apache.org/repos/asf/cassandra/blob/806378c8/pylib/cqlshlib/test/cassconnect.py
--
diff --git a/pylib/cqlshlib/test/cassconnect.py 
b/pylib/cqlshlib/test/cassconnect.py
index 21dddcd..a67407b 100644
--- a/pylib/cqlshlib/test/cassconnect.py
+++ b/pylib/cqlshlib/test/cassconnect.py
@@ -20,6 +20,7 @@ import contextlib
 import tempfile
 import os.path
 from .basecase import cql, cqlsh, cqlshlog, TEST_HOST, TEST_PORT, rundir
+from cassandra.metadata import maybe_escape_name
 from .run_cqlsh import run_cqlsh, call_cqlsh
 
 test_keyspace_init = os.path.join(rundir, 'test_keyspace_init.cql')
@@ -126,7 +127,7 @@ def cql_rule_set():
 return cqlsh.cql3handling.CqlRuleSet
 
 def quote_name(name):
-return cql_rule_set().maybe_escape_name(name)
+return maybe_escape_name(name)
 
 class DEFAULTVAL: pass
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/806378c8/pylib/cqlshlib/test/run_cqlsh.py
--
diff --git a/pylib/cqlshlib/test/run_cqlsh.py b/pylib/cqlshlib/test/run_cqlsh.py
index 6ae295c..88b0ca6 100644
--- a/pylib/cqlshlib/test/run_cqlsh.py
+++ b/pylib/cqlshlib/test/run_cqlsh.py
@@ -231,7 +231,7 @@ class CqlshRunner(ProcRunner):
 self.output_header = self.read_to_next_prompt()
 
 def read_to_next_prompt(self):
-return self.read_until(self.prompt, timeout=4.0)
+return self.read_until(self.prompt, timeout=10.0)
 
 def read_up_to_timeout(self, timeout, blksize=4096):
 output = ProcRunner.read_up_to_timeout(self, timeout, blksize=blksize)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/806378c8/pylib/cqlshlib/test/test_cqlsh_completion.py
--
diff --git a/pylib/cqlshlib/test/test_cqlsh_completion.py 
b/pylib/cqlshlib/test/test_cqlsh_completion.py
index 97bd96b..5f7b6e4 100644
--- a/pylib/cqlshlib/test/test_cqlsh_completion.py
+++ b/pylib/cqlshlib/test/test_cqlsh_completion.py
@@ -142,8 +142,8 @@ class TestCqlshCompletion(CqlshCompletionCase):
 def test_complete_on_empty_string(self):
 self.trycompletions('', choices=('?', 'ALTER', 'BEGIN', 'CAPTURE', 
'CONSISTENCY',
  'COPY', 'CREATE', 'DEBUG', 'DELETE', 
'DESC', 'DESCRIBE',
- 'DROP', 'GRANT', 'HELP', 'INSERT', 
'LIST', 

[3/3] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2015-10-16 Thread blerer
Merge branch cassandra-2.2 into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f3143e62
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f3143e62
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f3143e62

Branch: refs/heads/cassandra-3.0
Commit: f3143e624cb73e86ea11dd2f9994c70008fc26aa
Parents: a52597d 94a7d06
Author: blerer 
Authored: Fri Oct 16 11:56:46 2015 +0200
Committer: blerer 
Committed: Fri Oct 16 11:57:22 2015 +0200

--
 pylib/cqlshlib/cql3handling.py  |   8 +-
 pylib/cqlshlib/cqlhandling.py   |  10 ++
 pylib/cqlshlib/test/run_cqlsh.py|   2 +-
 pylib/cqlshlib/test/test_cql_parsing.py | 240 +--
 4 files changed, 137 insertions(+), 123 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f3143e62/pylib/cqlshlib/cql3handling.py
--



[2/4] cassandra git commit: Merge branch cassandra-2.1 into cassandra-2.2

2015-10-16 Thread blerer
Merge branch cassandra-2.1 into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/94a7d068
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/94a7d068
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/94a7d068

Branch: refs/heads/trunk
Commit: 94a7d0682e037c8fb4226a1c5f1f47918912f2e2
Parents: 3b7ccdf 806378c
Author: blerer 
Authored: Fri Oct 16 11:52:51 2015 +0200
Committer: blerer 
Committed: Fri Oct 16 11:53:52 2015 +0200

--
 pylib/cqlshlib/cql3handling.py   |   8 +-
 pylib/cqlshlib/cqlhandling.py|  10 +
 pylib/cqlshlib/test/run_cqlsh.py |   2 +-
 pylib/cqlshlib/test/test_cql_parsing.py  | 240 +++---
 pylib/cqlshlib/test/test_cqlsh_completion.py |  10 +-
 pylib/cqlshlib/test/test_cqlsh_output.py |   9 +-
 6 files changed, 145 insertions(+), 134 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/94a7d068/pylib/cqlshlib/cql3handling.py
--
diff --cc pylib/cqlshlib/cql3handling.py
index 40b7d6b,5f93003..0ee0a38
--- a/pylib/cqlshlib/cql3handling.py
+++ b/pylib/cqlshlib/cql3handling.py
@@@ -166,10 -162,10 +166,14 @@@ JUNK ::= /([ \t\r\f\v]+|(--|[/][/])[^\n
   | 
   | 
   | 
-- |  
++ |  
   | "NULL"
   ;
  
++ ::= ( ( "."  )?)
++ | "TOKEN"
++ ;
++
   ::= "(" (  ( ","  )* )? ")"
   ;
  
@@@ -1391,21 -1227,6 +1395,21 @@@ def username_name_completer(ctxt, cass)
  session = cass.session
  return [maybe_quote(row.values()[0].replace("'", "''")) for row in 
session.execute("LIST USERS")]
  
 +
 +@completer_for('rolename', 'role')
 +def rolename_completer(ctxt, cass):
 +def maybe_quote(name):
 +if CqlRuleSet.is_valid_cql3_name(name):
 +return name
 +return "'%s'" % name
 +
 +# disable completion for CREATE ROLE.
- if ctxt.matched[0][0] == 'K_CREATE':
++if ctxt.matched[0][1].upper() == 'CREATE':
 +return [Hint('')]
 +
 +session = cass.session
 +return [maybe_quote(row[0].replace("'", "''")) for row in 
session.execute("LIST ROLES")]
 +
  syntax_rules += r'''
   ::= "CREATE" "TRIGGER" ( "IF" "NOT" "EXISTS" )? 

 "ON" cf= "USING" 
class=

http://git-wip-us.apache.org/repos/asf/cassandra/blob/94a7d068/pylib/cqlshlib/cqlhandling.py
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/94a7d068/pylib/cqlshlib/test/run_cqlsh.py
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/94a7d068/pylib/cqlshlib/test/test_cql_parsing.py
--
diff --cc pylib/cqlshlib/test/test_cql_parsing.py
index c011d94,f88b839..ad60c9b
--- a/pylib/cqlshlib/test/test_cql_parsing.py
+++ b/pylib/cqlshlib/test/test_cql_parsing.py
@@@ -17,558 -17,60 +17,558 @@@
  # to configure behavior, define $CQL_TEST_HOST to the destination address
  # for Thrift connections, and $CQL_TEST_PORT to the associated port.
  
 -from .basecase import BaseTestCase, cqlsh
 -from .cassconnect import get_test_keyspace, testrun_cqlsh, testcall_cqlsh
 +from unittest import TestCase
 +from operator import itemgetter
  
 -class TestCqlParsing(BaseTestCase):
 -def setUp(self):
 -self.cqlsh_runner = testrun_cqlsh(cqlver=cqlsh.DEFAULT_CQLVER, 
env={'COLUMNS': '10'})
 -self.cqlsh = self.cqlsh_runner.__enter__()
 +from ..cql3handling import CqlRuleSet
  
 -def tearDown(self):
 -pass
  
 +class TestCqlParsing(TestCase):
  def test_parse_string_literals(self):
 -pass
 +for n in ["'eggs'", "'Sausage 1'", "'spam\nspam\n\tsausage'", "''"]:
 +self.assertSequenceEqual(tokens_with_types(CqlRuleSet.lex(n)),
 + [(n, 'quotedStringLiteral')])
 +self.assertSequenceEqual(tokens_with_types(CqlRuleSet.lex("'eggs'")),
 + [("'eggs'", 'quotedStringLiteral')])
 +
 +tokens = CqlRuleSet.lex("'spam\nspam\n\tsausage'")
 +tokens = CqlRuleSet.cql_massage_tokens(tokens)
 +self.assertEqual(tokens[0][0], "quotedStringLiteral")
 +
 +tokens = CqlRuleSet.lex("'spam\nspam\n")
 +tokens = CqlRuleSet.cql_massage_tokens(tokens)
 +self.assertEqual(tokens[0][0], "unclosedString")
 +
 +tokens = CqlRuleSet.lex("'foo bar' 'spam\nspam\n")
 +tokens = CqlRuleSet.cql_massage_tokens(tokens)
 +self.assertEqual(tokens[1][0], "unclosedString")
 +
 +def 

[4/4] cassandra git commit: Merge branch cassandra-3.0 into trunk

2015-10-16 Thread blerer
Merge branch cassandra-3.0 into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6ef817a5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6ef817a5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6ef817a5

Branch: refs/heads/trunk
Commit: 6ef817a51d0425ae51b5ab16a17ce7a92304fd0d
Parents: 1cb9a02 f3143e6
Author: blerer 
Authored: Fri Oct 16 12:01:18 2015 +0200
Committer: blerer 
Committed: Fri Oct 16 12:01:29 2015 +0200

--
 pylib/cqlshlib/cql3handling.py  |   8 +-
 pylib/cqlshlib/cqlhandling.py   |  10 ++
 pylib/cqlshlib/test/run_cqlsh.py|   2 +-
 pylib/cqlshlib/test/test_cql_parsing.py | 240 +--
 4 files changed, 137 insertions(+), 123 deletions(-)
--




[jira] [Commented] (CASSANDRA-9318) Bound the number of in-flight requests at the coordinator

2015-10-16 Thread Sergio Bossa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14960450#comment-14960450
 ] 

Sergio Bossa commented on CASSANDRA-9318:
-

I'd like to resurrect this one and, if time permits, take it by following 
Jonathan's proposal above, except I'd also like to propose an additional form 
of back-pressure at the coordinator->replica level. Such back-pressure would 
be applied by the coordinator when sending messages to *each* replica if some 
kind of flow control condition is met (e.g. number of in-flight requests or drop 
rate; we can discuss this later or even experiment); that 
is, each replica would have its own flow control, allowing the applied 
back-pressure to be fine-tuned per replica. The memory-based back-pressure would at that point 
work as a kind of circuit breaker: if replicas can't keep up, and the applied 
flow control causes too many requests to accumulate on the coordinator, the 
memory-based limit will kick in and start pushing back to the client by either 
pausing or throwing OverloadedException.

There are obviously details we need to discuss and/or experiment with, e.g.:
1) The flow control algorithm (we could steal from the TCP literature, using 
something like CoDel or Adaptive RED).
2) Whether to put any limit on coordinator-level throttling, i.e. shedding requests 
that have been throttled for too long (I would say no, because the memory 
limit should protect against OOMs and allow the in-flight requests to be 
processed).
3) What to do when the memory limit is reached (we could make this 
policy-based).

I hope this makes sense, and I hope you see the reason behind it: dropped 
mutations are a problem for many C* users, and even more so for C* applications 
that cannot rely on QUORUM reads (e.g. inverted index queries, graph queries). 
The proposal above is not meant to be the definitive solution, but it should 
greatly help reduce the number of dropped mutations on replicas, which 
memory-based back-pressure alone does not (by the time it kicks in, without 
flow control the replicas will already be flooded with requests).
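
To make the per-replica flow control idea a bit more concrete, here is a rough, purely illustrative sketch. The class name and the fixed in-flight window are invented and not tied to any existing Cassandra API; a real implementation would use an adaptive algorithm such as CoDel or RED as noted above:
{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;

// Illustrative only: each replica gets its own in-flight window. The coordinator
// acquires a permit before sending a mutation to that replica and releases it on
// response (or timeout). A global memory-based limit, not shown here, would act
// as the circuit breaker that pushes back on the client.
final class PerReplicaBackPressure
{
    private final int maxInFlightPerReplica;
    private final Map<String, Semaphore> windows = new ConcurrentHashMap<>();

    PerReplicaBackPressure(int maxInFlightPerReplica)
    {
        this.maxInFlightPerReplica = maxInFlightPerReplica;
    }

    private Semaphore windowFor(String replica)
    {
        return windows.computeIfAbsent(replica, r -> new Semaphore(maxInFlightPerReplica));
    }

    // true if the request may be sent now, false if this replica's window is full
    boolean tryAcquire(String replica)
    {
        return windowFor(replica).tryAcquire();
    }

    // called when the replica responds or the request times out
    void release(String replica)
    {
        windowFor(replica).release();
    }
}
{code}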

> Bound the number of in-flight requests at the coordinator
> -
>
> Key: CASSANDRA-9318
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9318
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Ariel Weisberg
>Assignee: Jacek Lewandowski
> Fix For: 2.1.x, 2.2.x
>
>
> It's possible to somewhat bound the amount of load accepted into the cluster 
> by bounding the number of in-flight requests and request bytes.
> An implementation might do something like track the number of outstanding 
> bytes and requests and if it reaches a high watermark disable read on client 
> connections until it goes back below some low watermark.
> Need to make sure that disabling read on the client connection won't 
> introduce other issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/2] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2015-10-16 Thread marcuse
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c20f6b37
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c20f6b37
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c20f6b37

Branch: refs/heads/trunk
Commit: c20f6b374bc93e08cc5e6a65784e25185e386d9d
Parents: 6ef817a e1fb18a
Author: Marcus Eriksson 
Authored: Fri Oct 16 12:31:24 2015 +0200
Committer: Marcus Eriksson 
Committed: Fri Oct 16 12:31:24 2015 +0200

--
 src/java/org/apache/cassandra/db/ColumnFamilyStore.java | 12 
 .../db/compaction/AbstractCompactionStrategy.java   |  6 +-
 2 files changed, 13 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c20f6b37/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--



[1/2] cassandra git commit: Followup to CASSANDRA-8671 - additional data directories

2015-10-16 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/trunk 6ef817a51 -> c20f6b374


Followup to CASSANDRA-8671 - additional data directories

Patch by Blake Eggleston; reviewed by marcuse for CASSANDRA-10518


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e1fb18a0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e1fb18a0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e1fb18a0

Branch: refs/heads/trunk
Commit: e1fb18a00b598431f52b88d12e2eddbe07233e88
Parents: f3143e6
Author: Blake Eggleston 
Authored: Thu Oct 15 14:14:37 2015 +0200
Committer: Marcus Eriksson 
Committed: Fri Oct 16 12:30:59 2015 +0200

--
 src/java/org/apache/cassandra/db/ColumnFamilyStore.java | 12 
 .../db/compaction/AbstractCompactionStrategy.java   |  6 +-
 2 files changed, 13 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e1fb18a0/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 062eb0a..4c9fc55 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -84,12 +84,17 @@ import static 
org.apache.cassandra.utils.Throwables.maybeFail;
 
 public class ColumnFamilyStore implements ColumnFamilyStoreMBean
 {
-// the directories used to load sstables on cfs instantiation
+// The directories which will be searched for sstables on cfs 
instantiation.
 private static volatile Directories.DataDirectory[] initialDirectories = 
Directories.dataDirectories;
 
 /**
- * a hook to add additional directories to initialDirectories.
+ * A hook to add additional directories to initialDirectories.
  * Any additional directories should be added prior to ColumnFamilyStore 
instantiation on startup
+ *
+ * Since the directories used by a given table are determined by the 
compaction strategy,
+ * it's possible for sstables to be written to directories specified 
outside of cassandra.yaml.
+ * By adding additional directories to initialDirectories, sstables in 
these extra locations are
+ * made discoverable on sstable instantiation.
  */
 public static synchronized void 
addInitialDirectories(Directories.DataDirectory[] newDirectories)
 {
@@ -363,7 +368,6 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 
 this.keyspace = keyspace;
 this.metadata = metadata;
-this.directories = directories;
 name = columnFamilyName;
 minCompactionThreshold = new 
DefaultValue<>(metadata.params.compaction.minCompactionThreshold());
 maxCompactionThreshold = new 
DefaultValue<>(metadata.params.compaction.maxCompactionThreshold());
@@ -388,7 +392,7 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 
 // compaction strategy should be created after the CFS has been 
prepared
 compactionStrategyManager = new CompactionStrategyManager(this);
-this.directories = this.compactionStrategyManager.getDirectories();
+this.directories = compactionStrategyManager.getDirectories();
 
 if (maxCompactionThreshold.value() <= 0 || 
minCompactionThreshold.value() <=0)
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e1fb18a0/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java 
b/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
index 721fd70..ae8839e 100644
--- 
a/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
+++ 
b/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
@@ -78,6 +78,8 @@ public abstract class AbstractCompactionStrategy
 protected boolean uncheckedTombstoneCompaction;
 protected boolean disableTombstoneCompactions = false;
 
+private final Directories directories;
+
 /**
  * pause/resume/getNextBackgroundTask must synchronize.  This guarantees 
that after pause completes,
  * no new tasks will be generated; or put another way, pause can't run 
until in-progress tasks are
@@ -117,11 +119,13 @@ public abstract class AbstractCompactionStrategy
 tombstoneCompactionInterval = 
DEFAULT_TOMBSTONE_COMPACTION_INTERVAL;
 uncheckedTombstoneCompaction = 
DEFAULT_UNCHECKED_TOMBSTONE_COMPACTION_OPTION;
 }
+
+directories = new Directories(cfs.metadata, 

cassandra git commit: Followup to CASSANDRA-8671 - additional data directories

2015-10-16 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 f3143e624 -> e1fb18a00


Followup to CASSANDRA-8671 - additional data directories

Patch by Blake Eggleston; reviewed by marcuse for CASSANDRA-10518


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e1fb18a0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e1fb18a0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e1fb18a0

Branch: refs/heads/cassandra-3.0
Commit: e1fb18a00b598431f52b88d12e2eddbe07233e88
Parents: f3143e6
Author: Blake Eggleston 
Authored: Thu Oct 15 14:14:37 2015 +0200
Committer: Marcus Eriksson 
Committed: Fri Oct 16 12:30:59 2015 +0200

--
 src/java/org/apache/cassandra/db/ColumnFamilyStore.java | 12 
 .../db/compaction/AbstractCompactionStrategy.java   |  6 +-
 2 files changed, 13 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e1fb18a0/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 062eb0a..4c9fc55 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -84,12 +84,17 @@ import static 
org.apache.cassandra.utils.Throwables.maybeFail;
 
 public class ColumnFamilyStore implements ColumnFamilyStoreMBean
 {
-// the directories used to load sstables on cfs instantiation
+// The directories which will be searched for sstables on cfs 
instantiation.
 private static volatile Directories.DataDirectory[] initialDirectories = 
Directories.dataDirectories;
 
 /**
- * a hook to add additional directories to initialDirectories.
+ * A hook to add additional directories to initialDirectories.
  * Any additional directories should be added prior to ColumnFamilyStore 
instantiation on startup
+ *
+ * Since the directories used by a given table are determined by the 
compaction strategy,
+ * it's possible for sstables to be written to directories specified 
outside of cassandra.yaml.
+ * By adding additional directories to initialDirectories, sstables in 
these extra locations are
+ * made discoverable on sstable instantiation.
  */
 public static synchronized void 
addInitialDirectories(Directories.DataDirectory[] newDirectories)
 {
@@ -363,7 +368,6 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 
 this.keyspace = keyspace;
 this.metadata = metadata;
-this.directories = directories;
 name = columnFamilyName;
 minCompactionThreshold = new 
DefaultValue<>(metadata.params.compaction.minCompactionThreshold());
 maxCompactionThreshold = new 
DefaultValue<>(metadata.params.compaction.maxCompactionThreshold());
@@ -388,7 +392,7 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 
 // compaction strategy should be created after the CFS has been 
prepared
 compactionStrategyManager = new CompactionStrategyManager(this);
-this.directories = this.compactionStrategyManager.getDirectories();
+this.directories = compactionStrategyManager.getDirectories();
 
 if (maxCompactionThreshold.value() <= 0 || 
minCompactionThreshold.value() <=0)
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e1fb18a0/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java 
b/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
index 721fd70..ae8839e 100644
--- 
a/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
+++ 
b/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
@@ -78,6 +78,8 @@ public abstract class AbstractCompactionStrategy
 protected boolean uncheckedTombstoneCompaction;
 protected boolean disableTombstoneCompactions = false;
 
+private final Directories directories;
+
 /**
  * pause/resume/getNextBackgroundTask must synchronize.  This guarantees 
that after pause completes,
  * no new tasks will be generated; or put another way, pause can't run 
until in-progress tasks are
@@ -117,11 +119,13 @@ public abstract class AbstractCompactionStrategy
 tombstoneCompactionInterval = 
DEFAULT_TOMBSTONE_COMPACTION_INTERVAL;
 uncheckedTombstoneCompaction = 
DEFAULT_UNCHECKED_TOMBSTONE_COMPACTION_OPTION;
 }
+
+directories = new 

[jira] [Created] (CASSANDRA-10540) RangeAwareCompaction

2015-10-16 Thread Marcus Eriksson (JIRA)
Marcus Eriksson created CASSANDRA-10540:
---

 Summary: RangeAwareCompaction
 Key: CASSANDRA-10540
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10540
 Project: Cassandra
  Issue Type: New Feature
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
 Fix For: 3.2


Broken out from CASSANDRA-6696, we should split sstables based on ranges during 
compaction.

Requirements:
* don't create tiny sstables - keep them bunched together until a single vnode 
is big enough (how big that is should be configurable)
* make it possible to run existing compaction strategies on the per-range 
sstables

We should probably add a global compaction strategy parameter that states 
whether this should be enabled or not.

My wip branch is here (broken out from 6696, probably does not build): 
https://github.com/krummas/cassandra/commits/marcuse/vnodeawarecompaction - 
the naming is wrong; we should split based on local ranges even without vnodes.
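
To make the "keep small ranges bunched together" requirement concrete, a rough sketch of the bucketing idea (names and the size-only heuristic are invented; this is not taken from the wip branch):
{code}
import java.util.ArrayList;
import java.util.List;

// Illustrative only: walk the (sorted) local ranges and merge consecutive ranges
// into one compaction bucket until the bucket holds enough data to stand on its own.
final class RangeBucketer
{
    static List<List<Long>> bucketBySize(List<Long> bytesPerLocalRange, long minBucketSizeBytes)
    {
        List<List<Long>> buckets = new ArrayList<>();
        List<Long> current = new ArrayList<>();
        long currentSize = 0;

        for (long rangeSize : bytesPerLocalRange)
        {
            current.add(rangeSize);
            currentSize += rangeSize;
            if (currentSize >= minBucketSizeBytes) // the "big enough" threshold would be configurable
            {
                buckets.add(current);
                current = new ArrayList<>();
                currentSize = 0;
            }
        }
        if (!current.isEmpty())
            buckets.add(current); // leftover small ranges stay bunched together
        return buckets;
    }
}
{code}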



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10415) Fix cqlsh bugs

2015-10-16 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14960403#comment-14960403
 ] 

Benjamin Lerer commented on CASSANDRA-10415:


The cqlshlib tests cannot run on Windows as they seem to rely on symlinks. I opened 
CASSANDRA-10541 for it.
I tested the patch using {{cqlsh}} manually.

The patches look good to me. Thanks for the work.

[~aholmber] I would prefer to address the {{authentication}} issue in another 
ticket, as it is not really part of this one.




> Fix cqlsh bugs
> --
>
> Key: CASSANDRA-10415
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10415
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Stefania
>  Labels: cqlsh
> Fix For: 2.1.x, 2.2.x, 3.0.0 rc2
>
>
> This is followup to CASSANDRA-10289
> The tests currently failing should be:
> * 
> {{cqlshlib.test.test_cqlsh_completion.TestCqlshCompletion.test_complete_in_create_columnfamily}}
> ** uses {{create_columnfamily_table_template}}. Stefania says "the {{(}} 
> after {{CREATE ... IF}} does not look valid to me."
> * 
> {{cqlshlib.test.test_cqlsh_completion.TestCqlshCompletion.test_complete_in_create_table}}
> ** uses {{create_columnfamily_table_template}}, see above.
> * 
> {{cqlshlib.test.test_cqlsh_completion.TestCqlshCompletion.test_complete_in_delete}}
> ** Stefania says: "I don't think keyspaces are a valid completion after 
> {{DELETE a [}} and after {{DELETE FROM twenty_rows_composite_table USING 
> TIMESTAMP 0 WHERE TOKEN(a) >=}}. From a quick analysis of {{cqlhandling.py}} 
> I think it comes from {{}}, which picks up {{}}, which 
> was changed to include {{ks.}} by CASSANDRA-7556.
> * 
> {{cqlshlib.test.test_cqlsh_completion.TestCqlshCompletion.test_complete_in_drop_keyspace}}
> ** Stefania says: "the {{;}} after {{DROP KEYSPACE IF}} is not valid.
> * {{cqlshlib.test.test_cqlsh_output.TestCqlshOutput.test_timestamp_output}}
> ** already documented with CASSANDRA-10313 and CASSANDRA-10397
> I'm happy to break these out into separate tickets if necessary. 
> To run the tests locally, I cd to {{cassandra/pylib/cqlshlib}} and run the 
> following:
> {code}
> ccm create -n 1 --install-dir=../.. test
> ccm start --wait-for-binary-proto
> nosetests test 2>&1
> ccm remove
> {code}
> This requires nose and ccm. Until CASSANDRA-10289 is resolved, you'll have to 
> use my branch here: https://github.com/mambocab/cassandra/tree/fix-cqlsh-tests
> Tests for this branch are run (non-continuously) here:
> http://cassci.datastax.com/job/scratch_mambocab-fix_cqlsh/
> Assigning [~Stefania] for now, since she's already looked at 10289, but feel 
> free to reassign.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-10423) Paxos/LWT failures when moving node

2015-10-16 Thread Roger Schildmeijer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14960477#comment-14960477
 ] 

Roger Schildmeijer edited comment on CASSANDRA-10423 at 10/16/15 10:16 AM:
---

We had to do yet another (nodetool) move. The same thing happened.
We moved a node from 6362172968960304802 to 4611686018427387907.

It used to sit between (node) 6148914691236517208 and (node) 7686143364045646509.
It moved to between (node) 3074457345618258605 and (node) 6148914691236517208.

Some (sorted) tokens (from the lwt queries) that failed:
1550752142907493170
1681261686482955214
1787784122186449673
2206896992809998407
2679778263008234502
3440226803292810454
3551446884592709276

My non-scientific conclusion is that all LWT queries that ended up in a 
certain range failed.


was (Author: rschildmeijer):
We had to do yet another (nodetool) move. Same thing happened.
We move a node from 6362172968960304802 to 4611686018427387907

It used to be between (node) 6148914691236517208 and (node) 7686143364045646509.
It moved between (node) 3074457345618258605 and (node) 6148914691236517208

Some (sorted) tokens (from the lwt queries) that failed:
1550752142907493170
1681261686482955214
1787784122186449673
2206896992809998407
2679778263008234502
3440226803292810454
3551446884592709276

My non scientific conclusion is that all lwt queries, that ended up in a 
certain range, failed. 

> Paxos/LWT failures when moving node
> ---
>
> Key: CASSANDRA-10423
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10423
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Cassandra version: 2.0.14
> Java-driver version: 2.0.11
>Reporter: Roger Schildmeijer
>Assignee: Ryan McGuire
>
> While moving a node (nodetool move ) we noticed that lwt started 
> failing for some (~50%) requests. The java-driver (version 2.0.11) returned 
> com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra timeout 
> during write query at consistency SERIAL (7 replica were required but only 0 
> acknowledged the write). The cluster was not under heavy load.
> I noticed that the failed lwt requests all took just above 1s. That 
> information and the WriteTimeoutException could indicate that this happens:
> https://github.com/apache/cassandra/blob/cassandra-2.0.14/src/java/org/apache/cassandra/service/StorageProxy.java#L268
> I can't explain why though. Why would there be more cas contention just 
> because a node is moving?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10423) Paxos/LWT failures when moving node

2015-10-16 Thread Roger Schildmeijer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14960477#comment-14960477
 ] 

Roger Schildmeijer commented on CASSANDRA-10423:


We had to do yet another (nodetool) move. Same thing happened.
We moved a node from 6362172968960304802 to 4611686018427387907

It used to be between (node) 6148914691236517208 and (node) 7686143364045646509.
It moved between (node) 3074457345618258605 and (node) 6148914691236517208

Some (sorted) tokens (from the lwt queries) that failed:
1550752142907493170
1681261686482955214
1787784122186449673
2206896992809998407
2679778263008234502
3440226803292810454
3551446884592709276

My non-scientific conclusion is that all LWT queries that ended up in a certain range failed.
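
A rough sketch of that kind of range check, in Python, using only the ring positions quoted above (it assumes a Murmur3 ring where a token belongs to the first node token at or above it, wrapping around at the end, and it ignores replica placement and RF):

{code}
from bisect import bisect_left

# Only the ring positions mentioned in this comment; the real ring has many more tokens.
ring = sorted([3074457345618258605,   # left neighbour of the new position
               4611686018427387907,   # new position of the moved node
               6148914691236517208,
               7686143364045646509])

failed_tokens = [1550752142907493170, 1681261686482955214, 1787784122186449673,
                 2206896992809998407, 2679778263008234502, 3440226803292810454,
                 3551446884592709276]

for t in failed_tokens:
    # A token is owned by the first ring position >= t, wrapping past the last one.
    owner = ring[bisect_left(ring, t) % len(ring)]
    print(t, '-> range ending at', owner)
{code}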

> Paxos/LWT failures when moving node
> ---
>
> Key: CASSANDRA-10423
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10423
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Cassandra version: 2.0.14
> Java-driver version: 2.0.11
>Reporter: Roger Schildmeijer
>Assignee: Ryan McGuire
>
> While moving a node (nodetool move ) we noticed that lwt started 
> failing for some (~50%) requests. The java-driver (version 2.0.11) returned 
> com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra timeout 
> during write query at consistency SERIAL (7 replica were required but only 0 
> acknowledged the write). The cluster was not under heavy load.
> I noticed that the failed lwt requests all took just above 1s. That 
> information and the WriteTimeoutException could indicate that this happens:
> https://github.com/apache/cassandra/blob/cassandra-2.0.14/src/java/org/apache/cassandra/service/StorageProxy.java#L268
> I can't explain why though. Why would there be more cas contention just 
> because a node is moving?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-10542) Deprecate Pig support in 2.2 and remove it in 3.0

2015-10-16 Thread Aleksey Yeschenko (JIRA)
Aleksey Yeschenko created CASSANDRA-10542:
-

 Summary: Deprecate Pig support in 2.2 and remove it in 3.0
 Key: CASSANDRA-10542
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10542
 Project: Cassandra
  Issue Type: Task
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 2.2.x, 3.0.0 rc2


Nobody's currently responsible for Pig code. As a result, there is nobody to 
fix the issues, or even fix the failing tests (of which we unfortunately have 
plenty). Those tests take time to run, constantly hang, and fail with cryptic 
errors that we don't know how to fix and don't have enough resources to 
investigate.

Thus I propose we deprecate Pig support in 2.2 and remove it in 3.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10415) Fix cqlsh bugs

2015-10-16 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14960452#comment-14960452
 ] 

Benjamin Lerer commented on CASSANDRA-10415:


Committed to 2.1 as 806378c8c295fb062f94eb8bf0f719b398d27745 and merged into 
2.2, 3.0 and trunk.

> Fix cqlsh bugs
> --
>
> Key: CASSANDRA-10415
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10415
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Stefania
>  Labels: cqlsh
> Fix For: 2.1.x, 2.2.x, 3.0.0 rc2
>
>
> This is followup to CASSANDRA-10289
> The tests currently failing should be:
> * {{cqlshlib.test.test_cqlsh_completion.TestCqlshCompletion.test_complete_in_create_columnfamily}}
> ** uses {{create_columnfamily_table_template}}. Stefania says "the {{(}} 
> after {{CREATE ... IF}} does not look valid to me."
> * {{cqlshlib.test.test_cqlsh_completion.TestCqlshCompletion.test_complete_in_create_table}}
> ** uses {{create_columnfamily_table_template}}, see above.
> * {{cqlshlib.test.test_cqlsh_completion.TestCqlshCompletion.test_complete_in_delete}}
> ** Stefania says: "I don't think keyspaces are a valid completion after 
> {{DELETE a [}} and after {{DELETE FROM twenty_rows_composite_table USING 
> TIMESTAMP 0 WHERE TOKEN(a) >=}}. From a quick analysis of {{cqlhandling.py}} 
> I think it comes from {{}}, which picks up {{}}, which 
> was changed to include {{ks.}} by CASSANDRA-7556.
> * {{cqlshlib.test.test_cqlsh_completion.TestCqlshCompletion.test_complete_in_drop_keyspace}}
> ** Stefania says: "the {{;}} after {{DROP KEYSPACE IF}} is not valid."
> * {{cqlshlib.test.test_cqlsh_output.TestCqlshOutput.test_timestamp_output}}
> ** already documented with CASSANDRA-10313 and CASSANDRA-10397
> I'm happy to break these out into separate tickets if necessary. 
> To run the tests locally, I cd to {{cassandra/pylib/cqlshlib}} and run the 
> following:
> {code}
> ccm create -n 1 --install-dir=../.. test
> ccm start --wait-for-binary-proto
> nosetests test 2>&1
> ccm remove
> {code}
> This requires nose and ccm. Until CASSANDRA-10289 is resolved, you'll have to 
> use my branch here: https://github.com/mambocab/cassandra/tree/fix-cqlsh-tests
> Tests for this branch are run (non-continuously) here:
> http://cassci.datastax.com/job/scratch_mambocab-fix_cqlsh/
> Assigning [~Stefania] for now, since she's already looked at 10289, but feel 
> free to reassign.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/3] cassandra git commit: Merge branch cassandra-2.1 into cassandra-2.2

2015-10-16 Thread blerer
Merge branch cassandra-2.1 into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/94a7d068
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/94a7d068
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/94a7d068

Branch: refs/heads/cassandra-3.0
Commit: 94a7d0682e037c8fb4226a1c5f1f47918912f2e2
Parents: 3b7ccdf 806378c
Author: blerer 
Authored: Fri Oct 16 11:52:51 2015 +0200
Committer: blerer 
Committed: Fri Oct 16 11:53:52 2015 +0200

--
 pylib/cqlshlib/cql3handling.py   |   8 +-
 pylib/cqlshlib/cqlhandling.py|  10 +
 pylib/cqlshlib/test/run_cqlsh.py |   2 +-
 pylib/cqlshlib/test/test_cql_parsing.py  | 240 +++---
 pylib/cqlshlib/test/test_cqlsh_completion.py |  10 +-
 pylib/cqlshlib/test/test_cqlsh_output.py |   9 +-
 6 files changed, 145 insertions(+), 134 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/94a7d068/pylib/cqlshlib/cql3handling.py
--
diff --cc pylib/cqlshlib/cql3handling.py
index 40b7d6b,5f93003..0ee0a38
--- a/pylib/cqlshlib/cql3handling.py
+++ b/pylib/cqlshlib/cql3handling.py
@@@ -166,10 -162,10 +166,14 @@@ JUNK ::= /([ \t\r\f\v]+|(--|[/][/])[^\n
   | 
   | 
   | 
-- |  
++ |  
   | "NULL"
   ;
  
++ ::= ( ( "."  )?)
++ | "TOKEN"
++ ;
++
   ::= "(" (  ( ","  )* )? ")"
   ;
  
@@@ -1391,21 -1227,6 +1395,21 @@@ def username_name_completer(ctxt, cass)
  session = cass.session
  return [maybe_quote(row.values()[0].replace("'", "''")) for row in 
session.execute("LIST USERS")]
  
 +
 +@completer_for('rolename', 'role')
 +def rolename_completer(ctxt, cass):
 +def maybe_quote(name):
 +if CqlRuleSet.is_valid_cql3_name(name):
 +return name
 +return "'%s'" % name
 +
 +# disable completion for CREATE ROLE.
- if ctxt.matched[0][0] == 'K_CREATE':
++if ctxt.matched[0][1].upper() == 'CREATE':
 +return [Hint('')]
 +
 +session = cass.session
 +return [maybe_quote(row[0].replace("'", "''")) for row in 
session.execute("LIST ROLES")]
 +
  syntax_rules += r'''
   ::= "CREATE" "TRIGGER" ( "IF" "NOT" "EXISTS" )? 

 "ON" cf= "USING" 
class=

http://git-wip-us.apache.org/repos/asf/cassandra/blob/94a7d068/pylib/cqlshlib/cqlhandling.py
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/94a7d068/pylib/cqlshlib/test/run_cqlsh.py
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/94a7d068/pylib/cqlshlib/test/test_cql_parsing.py
--
diff --cc pylib/cqlshlib/test/test_cql_parsing.py
index c011d94,f88b839..ad60c9b
--- a/pylib/cqlshlib/test/test_cql_parsing.py
+++ b/pylib/cqlshlib/test/test_cql_parsing.py
@@@ -17,558 -17,60 +17,558 @@@
  # to configure behavior, define $CQL_TEST_HOST to the destination address
  # for Thrift connections, and $CQL_TEST_PORT to the associated port.
  
 -from .basecase import BaseTestCase, cqlsh
 -from .cassconnect import get_test_keyspace, testrun_cqlsh, testcall_cqlsh
 +from unittest import TestCase
 +from operator import itemgetter
  
 -class TestCqlParsing(BaseTestCase):
 -def setUp(self):
 -self.cqlsh_runner = testrun_cqlsh(cqlver=cqlsh.DEFAULT_CQLVER, 
env={'COLUMNS': '10'})
 -self.cqlsh = self.cqlsh_runner.__enter__()
 +from ..cql3handling import CqlRuleSet
  
 -def tearDown(self):
 -pass
  
 +class TestCqlParsing(TestCase):
  def test_parse_string_literals(self):
 -pass
 +for n in ["'eggs'", "'Sausage 1'", "'spam\nspam\n\tsausage'", "''"]:
 +self.assertSequenceEqual(tokens_with_types(CqlRuleSet.lex(n)),
 + [(n, 'quotedStringLiteral')])
 +self.assertSequenceEqual(tokens_with_types(CqlRuleSet.lex("'eggs'")),
 + [("'eggs'", 'quotedStringLiteral')])
 +
 +tokens = CqlRuleSet.lex("'spam\nspam\n\tsausage'")
 +tokens = CqlRuleSet.cql_massage_tokens(tokens)
 +self.assertEqual(tokens[0][0], "quotedStringLiteral")
 +
 +tokens = CqlRuleSet.lex("'spam\nspam\n")
 +tokens = CqlRuleSet.cql_massage_tokens(tokens)
 +self.assertEqual(tokens[0][0], "unclosedString")
 +
 +tokens = CqlRuleSet.lex("'foo bar' 'spam\nspam\n")
 +tokens = CqlRuleSet.cql_massage_tokens(tokens)
 +self.assertEqual(tokens[1][0], "unclosedString")
 +
 +

[jira] [Updated] (CASSANDRA-10537) CONTAINS and CONTAINS KEY support for Lightweight Transactions

2015-10-16 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-10537:
-
Labels: CQL  (was: )

> CONTAINS and CONTAINS KEY support for Lightweight Transactions
> --
>
> Key: CASSANDRA-10537
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10537
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Nimi Wariboko Jr.
>  Labels: CQL
> Fix For: 2.1.x
>
>
> Conditional updates currently do not support CONTAINS and CONTAINS KEY 
> conditions. Queries such as 
> {{UPDATE mytable SET somefield = 4 WHERE pk = 'pkv' IF set_column CONTAINS 
> 5;}}
> are not possible.
> Would it also be possible to support the negation of these (ex. testing that 
> a value does not exist inside a set)?
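
To make the request concrete, a minimal sketch against the Python driver (the keyspace and table definition are invented; the conditional update is the unsupported form quoted above, so current versions are expected to reject it):

{code}
from cassandra.cluster import Cluster

# Assumes a local node and an existing keyspace named 'ks'.
session = Cluster(['127.0.0.1']).connect('ks')
session.execute("CREATE TABLE IF NOT EXISTS mytable "
                "(pk text PRIMARY KEY, somefield int, set_column set<int>)")

try:
    # The conditional form this ticket asks for; without the feature the server rejects it.
    session.execute("UPDATE mytable SET somefield = 4 "
                    "WHERE pk = 'pkv' IF set_column CONTAINS 5")
except Exception as exc:
    print(exc)
{code}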



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10536) Batch statements with multiple updates to partition error when table is indexed

2015-10-16 Thread Bryn Cooke (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14960425#comment-14960425
 ] 

Bryn Cooke commented on CASSANDRA-10536:


Just to add, the following updates cannot be combined into a single statement, so this cannot be worked around:
{noformat}
CREATE TABLE foo (a int, b int, c int, d int, PRIMARY KEY ((a, b), c));
CREATE INDEX ON foo(d);
{noformat}

and the batch of updates:
{noformat}
BEGIN BATCH
UPDATE foo SET d = 0 WHERE a = 0 AND b = 0 AND C = 0;
UPDATE foo SET d = 0 WHERE a = 0 AND b = 0 AND C = 1;
APPLY BATCH
{noformat}
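
For reference, a minimal driver-side sketch that wires the schema and batch above together (contact point, keyspace and index name are assumptions); run against an affected 3.0 build it is expected to hit the IllegalStateException shown in the description below:

{code}
from cassandra.cluster import Cluster
from cassandra.query import BatchStatement, SimpleStatement

# Assumes a local node and an existing keyspace named 'ks'.
session = Cluster(['127.0.0.1']).connect('ks')
session.execute("CREATE TABLE IF NOT EXISTS foo "
                "(a int, b int, c int, d int, PRIMARY KEY ((a, b), c))")
session.execute("CREATE INDEX IF NOT EXISTS foo_d_idx ON foo(d)")

# Two updates to the same partition ((a, b) = (0, 0)) in one batch, as above.
batch = BatchStatement()
batch.add(SimpleStatement("UPDATE foo SET d = 0 WHERE a = 0 AND b = 0 AND c = 0"))
batch.add(SimpleStatement("UPDATE foo SET d = 0 WHERE a = 0 AND b = 0 AND c = 1"))
session.execute(batch)   # fails with the ServerError quoted below on affected builds
{code}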

> Batch statements with multiple updates to partition error when table is 
> indexed
> ---
>
> Key: CASSANDRA-10536
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10536
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Tyler Hobbs
>Assignee: Sam Tunnicliffe
> Fix For: 3.0.0 rc2
>
>
> If a {{BATCH}} statement contains multiple {{UPDATE}} statements that update 
> the same partition, and a secondary index exists on that table, the batch 
> statement will error:
> {noformat}
> ServerError:  message="java.lang.IllegalStateException: An update should not be written 
> again once it has been read">
> {noformat}
> with the following traceback in the logs:
> {noformat}
> ERROR 20:53:46 Unexpected exception during request
> java.lang.IllegalStateException: An update should not be written again once 
> it has been read
>   at 
> org.apache.cassandra.db.partitions.PartitionUpdate.assertNotBuilt(PartitionUpdate.java:504)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.partitions.PartitionUpdate.add(PartitionUpdate.java:535)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.UpdateStatement.addUpdateForKey(UpdateStatement.java:96)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.ModificationStatement.addUpdates(ModificationStatement.java:667)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.BatchStatement.getMutations(BatchStatement.java:234)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:335)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:321)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:316)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:205)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:471)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:448)
>  ~[main/:na]
>   at 
> org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:130)
>  ~[main/:na]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [main/:na]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [main/:na]
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_45]
>   at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  [main/:na]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [main/:na]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
> {noformat}
> This is due to {{SecondaryIndexManager.validate()}} triggering a build of the 
> {{PartitionUpdate}} (stacktrace from debugging the build() call):
> {noformat}
> at 
> org.apache.cassandra.db.partitions.PartitionUpdate.build(PartitionUpdate.java:571)
>  [main/:na]
>   at 
> org.apache.cassandra.db.partitions.PartitionUpdate.maybeBuild(PartitionUpdate.java:561)
>  [main/:na]
>   at 
> org.apache.cassandra.db.partitions.PartitionUpdate.iterator(PartitionUpdate.java:418)
>  [main/:na]
>   at 
> org.apache.cassandra.index.internal.CassandraIndex.validateRows(CassandraIndex.java:560)
>  [main/:na]
>   at 
> 

[2/2] cassandra git commit: Merge branch cassandra-2.1 into cassandra-2.2

2015-10-16 Thread blerer
Merge branch cassandra-2.1 into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/94a7d068
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/94a7d068
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/94a7d068

Branch: refs/heads/cassandra-2.2
Commit: 94a7d0682e037c8fb4226a1c5f1f47918912f2e2
Parents: 3b7ccdf 806378c
Author: blerer 
Authored: Fri Oct 16 11:52:51 2015 +0200
Committer: blerer 
Committed: Fri Oct 16 11:53:52 2015 +0200

--
 pylib/cqlshlib/cql3handling.py   |   8 +-
 pylib/cqlshlib/cqlhandling.py|  10 +
 pylib/cqlshlib/test/run_cqlsh.py |   2 +-
 pylib/cqlshlib/test/test_cql_parsing.py  | 240 +++---
 pylib/cqlshlib/test/test_cqlsh_completion.py |  10 +-
 pylib/cqlshlib/test/test_cqlsh_output.py |   9 +-
 6 files changed, 145 insertions(+), 134 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/94a7d068/pylib/cqlshlib/cql3handling.py
--
diff --cc pylib/cqlshlib/cql3handling.py
index 40b7d6b,5f93003..0ee0a38
--- a/pylib/cqlshlib/cql3handling.py
+++ b/pylib/cqlshlib/cql3handling.py
@@@ -166,10 -162,10 +166,14 @@@ JUNK ::= /([ \t\r\f\v]+|(--|[/][/])[^\n
   | 
   | 
   | 
-- |  
++ |  
   | "NULL"
   ;
  
++ ::= ( ( "."  )?)
++ | "TOKEN"
++ ;
++
   ::= "(" (  ( ","  )* )? ")"
   ;
  
@@@ -1391,21 -1227,6 +1395,21 @@@ def username_name_completer(ctxt, cass)
  session = cass.session
  return [maybe_quote(row.values()[0].replace("'", "''")) for row in 
session.execute("LIST USERS")]
  
 +
 +@completer_for('rolename', 'role')
 +def rolename_completer(ctxt, cass):
 +def maybe_quote(name):
 +if CqlRuleSet.is_valid_cql3_name(name):
 +return name
 +return "'%s'" % name
 +
 +# disable completion for CREATE ROLE.
- if ctxt.matched[0][0] == 'K_CREATE':
++if ctxt.matched[0][1].upper() == 'CREATE':
 +return [Hint('')]
 +
 +session = cass.session
 +return [maybe_quote(row[0].replace("'", "''")) for row in 
session.execute("LIST ROLES")]
 +
  syntax_rules += r'''
   ::= "CREATE" "TRIGGER" ( "IF" "NOT" "EXISTS" )? 

 "ON" cf= "USING" 
class=

http://git-wip-us.apache.org/repos/asf/cassandra/blob/94a7d068/pylib/cqlshlib/cqlhandling.py
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/94a7d068/pylib/cqlshlib/test/run_cqlsh.py
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/94a7d068/pylib/cqlshlib/test/test_cql_parsing.py
--
diff --cc pylib/cqlshlib/test/test_cql_parsing.py
index c011d94,f88b839..ad60c9b
--- a/pylib/cqlshlib/test/test_cql_parsing.py
+++ b/pylib/cqlshlib/test/test_cql_parsing.py
@@@ -17,558 -17,60 +17,558 @@@
  # to configure behavior, define $CQL_TEST_HOST to the destination address
  # for Thrift connections, and $CQL_TEST_PORT to the associated port.
  
 -from .basecase import BaseTestCase, cqlsh
 -from .cassconnect import get_test_keyspace, testrun_cqlsh, testcall_cqlsh
 +from unittest import TestCase
 +from operator import itemgetter
  
 -class TestCqlParsing(BaseTestCase):
 -def setUp(self):
 -self.cqlsh_runner = testrun_cqlsh(cqlver=cqlsh.DEFAULT_CQLVER, 
env={'COLUMNS': '10'})
 -self.cqlsh = self.cqlsh_runner.__enter__()
 +from ..cql3handling import CqlRuleSet
  
 -def tearDown(self):
 -pass
  
 +class TestCqlParsing(TestCase):
  def test_parse_string_literals(self):
 -pass
 +for n in ["'eggs'", "'Sausage 1'", "'spam\nspam\n\tsausage'", "''"]:
 +self.assertSequenceEqual(tokens_with_types(CqlRuleSet.lex(n)),
 + [(n, 'quotedStringLiteral')])
 +self.assertSequenceEqual(tokens_with_types(CqlRuleSet.lex("'eggs'")),
 + [("'eggs'", 'quotedStringLiteral')])
 +
 +tokens = CqlRuleSet.lex("'spam\nspam\n\tsausage'")
 +tokens = CqlRuleSet.cql_massage_tokens(tokens)
 +self.assertEqual(tokens[0][0], "quotedStringLiteral")
 +
 +tokens = CqlRuleSet.lex("'spam\nspam\n")
 +tokens = CqlRuleSet.cql_massage_tokens(tokens)
 +self.assertEqual(tokens[0][0], "unclosedString")
 +
 +tokens = CqlRuleSet.lex("'foo bar' 'spam\nspam\n")
 +tokens = CqlRuleSet.cql_massage_tokens(tokens)
 +self.assertEqual(tokens[1][0], "unclosedString")
 +
 +

[jira] [Commented] (CASSANDRA-10542) Deprecate Pig support in 2.2 and remove it in 3.0

2015-10-16 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14960442#comment-14960442
 ] 

Sylvain Lebresne commented on CASSANDRA-10542:
--

I agree. No one among the core maintainers has any expertise in Pig itself, or in our 
Pig code in particular, and no one else has stepped up to maintain that code 
properly. And it's something that can perfectly well live outside of the main project 
if others are willing to maintain it.

> Deprecate Pig support in 2.2 and remove it in 3.0
> -
>
> Key: CASSANDRA-10542
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10542
> Project: Cassandra
>  Issue Type: Task
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
> Fix For: 2.2.x, 3.0.0 rc2
>
>
> Nobody's currently responsible for Pig code. As a result, there is nobody to 
> fix the issues, or even fix the failing tests (of which we unfortunately have 
> plenty). Those tests take time to run, constantly hang, and fail with cryptic 
> errors that we don't know how to fix and don't have enough resources to 
> investigate.
> Thus I propose we deprecate Pig support in 2.2 and remove it in 3.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/2] cassandra git commit: Fix cqlsh rules

2015-10-16 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 3b7ccdfb1 -> 94a7d0682


Fix cqlsh rules

patch by Stefania Alborghetti; reviewed by Benjamin Lerer for
CASSANDRA-10415


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/806378c8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/806378c8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/806378c8

Branch: refs/heads/cassandra-2.2
Commit: 806378c8c295fb062f94eb8bf0f719b398d27745
Parents: a61fc01
Author: Stefania Alborghetti 
Authored: Fri Oct 16 11:47:07 2015 +0200
Committer: blerer 
Committed: Fri Oct 16 11:47:07 2015 +0200

--
 pylib/cqlshlib/cqlhandling.py| 10 ++
 pylib/cqlshlib/test/cassconnect.py   |  3 ++-
 pylib/cqlshlib/test/run_cqlsh.py |  2 +-
 pylib/cqlshlib/test/test_cqlsh_completion.py | 11 ---
 pylib/cqlshlib/test/test_cqlsh_output.py |  4 ++--
 pylib/cqlshlib/test/test_keyspace_init.cql   |  2 +-
 6 files changed, 20 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/806378c8/pylib/cqlshlib/cqlhandling.py
--
diff --git a/pylib/cqlshlib/cqlhandling.py b/pylib/cqlshlib/cqlhandling.py
index 5fe311f..9ea30cd 100644
--- a/pylib/cqlshlib/cqlhandling.py
+++ b/pylib/cqlshlib/cqlhandling.py
@@ -18,6 +18,7 @@
 # i.e., stuff that's not necessarily cqlsh-specific
 
 import traceback
+from cassandra.metadata import cql_keywords_reserved
 from . import pylexotron, util
 
 Hint = pylexotron.Hint
@@ -55,6 +56,15 @@ class CqlParsingRuleSet(pylexotron.ParsingRuleSet):
 
 # note: commands_end_with_newline may be extended by callers.
 self.commands_end_with_newline = set()
+self.set_reserved_keywords(cql_keywords_reserved)
+
+def set_reserved_keywords(self, keywords):
+"""
+We cannot let resreved cql keywords be simple 'identifier' since this 
caused
+problems with completion, see CASSANDRA-10415
+"""
+syntax = ' ::= /(' + '|'.join(r'\b{}\b'.format(k) 
for k in keywords) + ')/ ;'
+self.append_rules(syntax)
 
 def completer_for(self, rulename, symname):
 def registrator(f):
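
The alternation built by set_reserved_keywords() above is ordinary regex; a standalone sketch of the same construction (plain re and a hand-picked keyword subset standing in for cql_keywords_reserved) shows why the word boundaries matter:

{code}
import re

# Hand-picked subset standing in for cassandra.metadata.cql_keywords_reserved.
keywords = ['select', 'from', 'where', 'token', 'create']

# Same construction as set_reserved_keywords(): one alternation of \b-bounded keywords.
pattern = re.compile('(' + '|'.join(r'\b{}\b'.format(k) for k in keywords) + ')')

print(bool(pattern.search('token')))    # True  - lexes as a reserved word, not an identifier
print(bool(pattern.search('mytoken')))  # False - word boundaries prevent partial matches
{code}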

http://git-wip-us.apache.org/repos/asf/cassandra/blob/806378c8/pylib/cqlshlib/test/cassconnect.py
--
diff --git a/pylib/cqlshlib/test/cassconnect.py 
b/pylib/cqlshlib/test/cassconnect.py
index 21dddcd..a67407b 100644
--- a/pylib/cqlshlib/test/cassconnect.py
+++ b/pylib/cqlshlib/test/cassconnect.py
@@ -20,6 +20,7 @@ import contextlib
 import tempfile
 import os.path
 from .basecase import cql, cqlsh, cqlshlog, TEST_HOST, TEST_PORT, rundir
+from cassandra.metadata import maybe_escape_name
 from .run_cqlsh import run_cqlsh, call_cqlsh
 
 test_keyspace_init = os.path.join(rundir, 'test_keyspace_init.cql')
@@ -126,7 +127,7 @@ def cql_rule_set():
 return cqlsh.cql3handling.CqlRuleSet
 
 def quote_name(name):
-return cql_rule_set().maybe_escape_name(name)
+return maybe_escape_name(name)
 
 class DEFAULTVAL: pass
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/806378c8/pylib/cqlshlib/test/run_cqlsh.py
--
diff --git a/pylib/cqlshlib/test/run_cqlsh.py b/pylib/cqlshlib/test/run_cqlsh.py
index 6ae295c..88b0ca6 100644
--- a/pylib/cqlshlib/test/run_cqlsh.py
+++ b/pylib/cqlshlib/test/run_cqlsh.py
@@ -231,7 +231,7 @@ class CqlshRunner(ProcRunner):
 self.output_header = self.read_to_next_prompt()
 
 def read_to_next_prompt(self):
-return self.read_until(self.prompt, timeout=4.0)
+return self.read_until(self.prompt, timeout=10.0)
 
 def read_up_to_timeout(self, timeout, blksize=4096):
 output = ProcRunner.read_up_to_timeout(self, timeout, blksize=blksize)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/806378c8/pylib/cqlshlib/test/test_cqlsh_completion.py
--
diff --git a/pylib/cqlshlib/test/test_cqlsh_completion.py 
b/pylib/cqlshlib/test/test_cqlsh_completion.py
index 97bd96b..5f7b6e4 100644
--- a/pylib/cqlshlib/test/test_cqlsh_completion.py
+++ b/pylib/cqlshlib/test/test_cqlsh_completion.py
@@ -142,8 +142,8 @@ class TestCqlshCompletion(CqlshCompletionCase):
 def test_complete_on_empty_string(self):
 self.trycompletions('', choices=('?', 'ALTER', 'BEGIN', 'CAPTURE', 
'CONSISTENCY',
  'COPY', 'CREATE', 'DEBUG', 'DELETE', 
'DESC', 'DESCRIBE',
- 'DROP', 'GRANT', 'HELP', 'INSERT', 
'LIST', 

[3/4] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2015-10-16 Thread blerer
Merge branch cassandra-2.2 into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f3143e62
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f3143e62
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f3143e62

Branch: refs/heads/trunk
Commit: f3143e624cb73e86ea11dd2f9994c70008fc26aa
Parents: a52597d 94a7d06
Author: blerer 
Authored: Fri Oct 16 11:56:46 2015 +0200
Committer: blerer 
Committed: Fri Oct 16 11:57:22 2015 +0200

--
 pylib/cqlshlib/cql3handling.py  |   8 +-
 pylib/cqlshlib/cqlhandling.py   |  10 ++
 pylib/cqlshlib/test/run_cqlsh.py|   2 +-
 pylib/cqlshlib/test/test_cql_parsing.py | 240 +--
 4 files changed, 137 insertions(+), 123 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f3143e62/pylib/cqlshlib/cql3handling.py
--



[1/4] cassandra git commit: Fix cqlsh rules

2015-10-16 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/trunk 1cb9a02bd -> 6ef817a51


Fix cqlsh rules

patch by Stefania Alborghetti; reviewed by Benjamin Lerer for
CASSANDRA-10415


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/806378c8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/806378c8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/806378c8

Branch: refs/heads/trunk
Commit: 806378c8c295fb062f94eb8bf0f719b398d27745
Parents: a61fc01
Author: Stefania Alborghetti 
Authored: Fri Oct 16 11:47:07 2015 +0200
Committer: blerer 
Committed: Fri Oct 16 11:47:07 2015 +0200

--
 pylib/cqlshlib/cqlhandling.py| 10 ++
 pylib/cqlshlib/test/cassconnect.py   |  3 ++-
 pylib/cqlshlib/test/run_cqlsh.py |  2 +-
 pylib/cqlshlib/test/test_cqlsh_completion.py | 11 ---
 pylib/cqlshlib/test/test_cqlsh_output.py |  4 ++--
 pylib/cqlshlib/test/test_keyspace_init.cql   |  2 +-
 6 files changed, 20 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/806378c8/pylib/cqlshlib/cqlhandling.py
--
diff --git a/pylib/cqlshlib/cqlhandling.py b/pylib/cqlshlib/cqlhandling.py
index 5fe311f..9ea30cd 100644
--- a/pylib/cqlshlib/cqlhandling.py
+++ b/pylib/cqlshlib/cqlhandling.py
@@ -18,6 +18,7 @@
 # i.e., stuff that's not necessarily cqlsh-specific
 
 import traceback
+from cassandra.metadata import cql_keywords_reserved
 from . import pylexotron, util
 
 Hint = pylexotron.Hint
@@ -55,6 +56,15 @@ class CqlParsingRuleSet(pylexotron.ParsingRuleSet):
 
 # note: commands_end_with_newline may be extended by callers.
 self.commands_end_with_newline = set()
+self.set_reserved_keywords(cql_keywords_reserved)
+
+def set_reserved_keywords(self, keywords):
+"""
+We cannot let resreved cql keywords be simple 'identifier' since this 
caused
+problems with completion, see CASSANDRA-10415
+"""
+syntax = ' ::= /(' + '|'.join(r'\b{}\b'.format(k) 
for k in keywords) + ')/ ;'
+self.append_rules(syntax)
 
 def completer_for(self, rulename, symname):
 def registrator(f):

http://git-wip-us.apache.org/repos/asf/cassandra/blob/806378c8/pylib/cqlshlib/test/cassconnect.py
--
diff --git a/pylib/cqlshlib/test/cassconnect.py 
b/pylib/cqlshlib/test/cassconnect.py
index 21dddcd..a67407b 100644
--- a/pylib/cqlshlib/test/cassconnect.py
+++ b/pylib/cqlshlib/test/cassconnect.py
@@ -20,6 +20,7 @@ import contextlib
 import tempfile
 import os.path
 from .basecase import cql, cqlsh, cqlshlog, TEST_HOST, TEST_PORT, rundir
+from cassandra.metadata import maybe_escape_name
 from .run_cqlsh import run_cqlsh, call_cqlsh
 
 test_keyspace_init = os.path.join(rundir, 'test_keyspace_init.cql')
@@ -126,7 +127,7 @@ def cql_rule_set():
 return cqlsh.cql3handling.CqlRuleSet
 
 def quote_name(name):
-return cql_rule_set().maybe_escape_name(name)
+return maybe_escape_name(name)
 
 class DEFAULTVAL: pass
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/806378c8/pylib/cqlshlib/test/run_cqlsh.py
--
diff --git a/pylib/cqlshlib/test/run_cqlsh.py b/pylib/cqlshlib/test/run_cqlsh.py
index 6ae295c..88b0ca6 100644
--- a/pylib/cqlshlib/test/run_cqlsh.py
+++ b/pylib/cqlshlib/test/run_cqlsh.py
@@ -231,7 +231,7 @@ class CqlshRunner(ProcRunner):
 self.output_header = self.read_to_next_prompt()
 
 def read_to_next_prompt(self):
-return self.read_until(self.prompt, timeout=4.0)
+return self.read_until(self.prompt, timeout=10.0)
 
 def read_up_to_timeout(self, timeout, blksize=4096):
 output = ProcRunner.read_up_to_timeout(self, timeout, blksize=blksize)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/806378c8/pylib/cqlshlib/test/test_cqlsh_completion.py
--
diff --git a/pylib/cqlshlib/test/test_cqlsh_completion.py 
b/pylib/cqlshlib/test/test_cqlsh_completion.py
index 97bd96b..5f7b6e4 100644
--- a/pylib/cqlshlib/test/test_cqlsh_completion.py
+++ b/pylib/cqlshlib/test/test_cqlsh_completion.py
@@ -142,8 +142,8 @@ class TestCqlshCompletion(CqlshCompletionCase):
 def test_complete_on_empty_string(self):
 self.trycompletions('', choices=('?', 'ALTER', 'BEGIN', 'CAPTURE', 
'CONSISTENCY',
  'COPY', 'CREATE', 'DEBUG', 'DELETE', 
'DESC', 'DESCRIBE',
- 'DROP', 'GRANT', 'HELP', 'INSERT', 
'LIST', 'PAGING', 'REVOKE',
-

[jira] [Commented] (CASSANDRA-10542) Deprecate Pig support in 2.2 and remove it in 3.0

2015-10-16 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14960591#comment-14960591
 ] 

Aleksey Yeschenko commented on CASSANDRA-10542:
---

Committed as 
[def58031d54fef239a4051201d097349da05ad50|https://github.com/apache/cassandra/commit/def58031d54fef239a4051201d097349da05ad50]
 to 2.2 and as 
[56cfc6ea35d1410f2f5a8ae711ae33342f286d79|https://github.com/apache/cassandra/commit/56cfc6ea35d1410f2f5a8ae711ae33342f286d79]
 to 3.0, thank you.

> Deprecate Pig support in 2.2 and remove it in 3.0
> -
>
> Key: CASSANDRA-10542
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10542
> Project: Cassandra
>  Issue Type: Task
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
> Fix For: 2.2.x, 3.0.0 rc2
>
> Attachments: 10542-2.2.txt, 10542-3.0.txt
>
>
> Nobody's currently responsible for Pig code. As a result, there is nobody to 
> fix the issues, or even fix the failing tests (of which we unfortunately have 
> plenty). Those tests take time to run, constantly hang, and fail with cryptic 
> errors that we don't know how to fix and don't have enough resources to 
> investigate.
> Thus I propose we deprecate Pig support in 2.2 and remove it in 3.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/2] cassandra git commit: Merge branch cassandra-2.1 into cassandra-2.2

2015-10-16 Thread blerer
Merge branch cassandra-2.1 into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f497c13e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f497c13e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f497c13e

Branch: refs/heads/cassandra-2.2
Commit: f497c13ee33cc76b7c7bd4c6d4d12caf475ca79d
Parents: def5803 f587397
Author: blerer 
Authored: Fri Oct 16 14:44:29 2015 +0200
Committer: blerer 
Committed: Fri Oct 16 14:45:08 2015 +0200

--
 .../org/apache/cassandra/cql3/ResultSet.java|  5 ++
 .../apache/cassandra/cql3/UntypedResultSet.java |  6 +--
 .../cassandra/cql3/selection/Selection.java | 57 +---
 .../cassandra/cql3/selection/Selector.java  | 12 +
 .../cql3/selection/SelectorFactories.java   | 20 +++
 .../cql3/selection/SimpleSelector.java  |  6 +++
 .../cql3/statements/SelectStatement.java|  2 +-
 .../operations/SelectOrderByTest.java   | 52 ++
 8 files changed, 138 insertions(+), 22 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f497c13e/src/java/org/apache/cassandra/cql3/ResultSet.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f497c13e/src/java/org/apache/cassandra/cql3/UntypedResultSet.java
--
diff --cc src/java/org/apache/cassandra/cql3/UntypedResultSet.java
index 49e0d86,a0b6ae7..e8d610d
--- a/src/java/org/apache/cassandra/cql3/UntypedResultSet.java
+++ b/src/java/org/apache/cassandra/cql3/UntypedResultSet.java
@@@ -73,9 -74,9 +73,9 @@@ public abstract class UntypedResultSet 
  
  public Row one()
  {
 -if (cqlRows.rows.size() != 1)
 -throw new IllegalStateException("One row required, " + 
cqlRows.rows.size() + " found");
 +if (cqlRows.size() != 1)
 +throw new IllegalStateException("One row required, " + 
cqlRows.size() + " found");
- return new Row(cqlRows.metadata.names, cqlRows.rows.get(0));
+ return new Row(cqlRows.metadata.requestNames(), 
cqlRows.rows.get(0));
  }
  
  public Iterator iterator()

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f497c13e/src/java/org/apache/cassandra/cql3/selection/Selection.java
--
diff --cc src/java/org/apache/cassandra/cql3/selection/Selection.java
index 13e030f,000..f6925b2
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/cql3/selection/Selection.java
+++ b/src/java/org/apache/cassandra/cql3/selection/Selection.java
@@@ -1,545 -1,0 +1,566 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.cassandra.cql3.selection;
 +
 +import java.nio.ByteBuffer;
 +import java.util.*;
 +
 +import com.google.common.base.Objects;
 +import com.google.common.base.Predicate;
 +import com.google.common.collect.Iterables;
 +import com.google.common.collect.Iterators;
 +
 +import org.apache.cassandra.config.CFMetaData;
 +import org.apache.cassandra.config.ColumnDefinition;
 +import org.apache.cassandra.cql3.*;
 +import org.apache.cassandra.cql3.functions.Function;
 +import org.apache.cassandra.db.Cell;
 +import org.apache.cassandra.db.CounterCell;
 +import org.apache.cassandra.db.ExpiringCell;
 +import org.apache.cassandra.db.context.CounterContext;
 +import org.apache.cassandra.db.marshal.UTF8Type;
 +import org.apache.cassandra.exceptions.InvalidRequestException;
 +import org.apache.cassandra.utils.ByteBufferUtil;
 +
 +public abstract class Selection
 +{
 +/**
 + * A predicate that returns true for static columns.
 + */
 +private static final Predicate STATIC_COLUMN_FILTER = 
new Predicate()
 +{
 +public boolean apply(ColumnDefinition def)
 +{
 +return def.isStatic();
 +}
 + 

[1/2] cassandra git commit: Fix sorting for queries with an IN condition on partition key columns

2015-10-16 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 def58031d -> f497c13ee


Fix sorting for queries with an IN condition on partition key columns

patch by Benjamin Lerer; reviewed by Sam Tunnicliffe for CASSANDRA-10363


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f587397c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f587397c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f587397c

Branch: refs/heads/cassandra-2.2
Commit: f587397c9c41c1a68b4e46fc16bad8d48c975e4d
Parents: 806378c
Author: blerer 
Authored: Fri Oct 16 14:41:54 2015 +0200
Committer: blerer 
Committed: Fri Oct 16 14:41:54 2015 +0200

--
 .../cassandra/cql3/ColumnSpecification.java | 10 
 .../org/apache/cassandra/cql3/ResultSet.java|  5 ++
 .../apache/cassandra/cql3/UntypedResultSet.java |  6 +--
 .../cql3/statements/SelectStatement.java| 18 +--
 .../cassandra/cql3/statements/Selection.java| 45 +++-
 .../org/apache/cassandra/cql3/CQLTester.java|  3 +-
 .../operations/SelectOrderByTest.java   | 54 +++-
 7 files changed, 116 insertions(+), 25 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f587397c/src/java/org/apache/cassandra/cql3/ColumnSpecification.java
--
diff --git a/src/java/org/apache/cassandra/cql3/ColumnSpecification.java 
b/src/java/org/apache/cassandra/cql3/ColumnSpecification.java
index f5f921d..836c6b9 100644
--- a/src/java/org/apache/cassandra/cql3/ColumnSpecification.java
+++ b/src/java/org/apache/cassandra/cql3/ColumnSpecification.java
@@ -55,4 +55,14 @@ public class ColumnSpecification
 {
 return Objects.hashCode(ksName, cfName, name, type);
 }
+
+@Override
+public String toString()
+{
+return Objects.toStringHelper(this)
+  .add("name", name)
+  .add("type", type)
+  .toString();
+}
+
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f587397c/src/java/org/apache/cassandra/cql3/ResultSet.java
--
diff --git a/src/java/org/apache/cassandra/cql3/ResultSet.java 
b/src/java/org/apache/cassandra/cql3/ResultSet.java
index 813fd48..85cba57 100644
--- a/src/java/org/apache/cassandra/cql3/ResultSet.java
+++ b/src/java/org/apache/cassandra/cql3/ResultSet.java
@@ -284,6 +284,11 @@ public class ResultSet
 return names == null ? columnCount : names.size();
 }
 
+/**
+ * Adds the specified column which will not be serialized.
+ *
+ * @param name the column
+ */
 public void addNonSerializedColumn(ColumnSpecification name)
 {
 // See comment above. Because columnCount doesn't account the 
newly added name, it

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f587397c/src/java/org/apache/cassandra/cql3/UntypedResultSet.java
--
diff --git a/src/java/org/apache/cassandra/cql3/UntypedResultSet.java 
b/src/java/org/apache/cassandra/cql3/UntypedResultSet.java
index 81482ef..a0b6ae7 100644
--- a/src/java/org/apache/cassandra/cql3/UntypedResultSet.java
+++ b/src/java/org/apache/cassandra/cql3/UntypedResultSet.java
@@ -76,7 +76,7 @@ public abstract class UntypedResultSet implements 
Iterable
 {
 if (cqlRows.rows.size() != 1)
 throw new IllegalStateException("One row required, " + 
cqlRows.rows.size() + " found");
-return new Row(cqlRows.metadata.names, cqlRows.rows.get(0));
+return new Row(cqlRows.metadata.requestNames(), 
cqlRows.rows.get(0));
 }
 
 public Iterator iterator()
@@ -89,7 +89,7 @@ public abstract class UntypedResultSet implements 
Iterable
 {
 if (!iter.hasNext())
 return endOfData();
-return new Row(cqlRows.metadata.names, iter.next());
+return new Row(cqlRows.metadata.requestNames(), 
iter.next());
 }
 };
 }
@@ -154,7 +154,7 @@ public abstract class UntypedResultSet implements 
Iterable
 this.select = select;
 this.pager = pager;
 this.pageSize = pageSize;
-this.metadata = select.getResultMetadata().names;
+this.metadata = select.getResultMetadata().requestNames();
 }
 
 public int size()

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f587397c/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java

[jira] [Comment Edited] (CASSANDRA-10515) Commit logs back up with move to 2.1.10

2015-10-16 Thread Jeff Griffith (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14960606#comment-14960606
 ] 

Jeff Griffith edited comment on CASSANDRA-10515 at 10/16/15 12:47 PM:
--

Thanks [~krummas], see cfstats-clean.txt, which I obfuscated and uploaded. We 
didn't actually name them CF001 ;-)

For your convenience I grabbed the sstable counts > 500:
SSTable count: 3454
SSTable count: 55392 <---
SSTable count: 687



was (Author: jeffery.griffith):
thanks [~krummas] see cfstats-clean.txt which i obfuscated and uploaded. we 
didn't actually name them CF001 ;-)

> Commit logs back up with move to 2.1.10
> ---
>
> Key: CASSANDRA-10515
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10515
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: redhat 6.5, cassandra 2.1.10
>Reporter: Jeff Griffith
>Assignee: Branimir Lambov
>Priority: Critical
>  Labels: commitlog, triage
> Attachments: CommitLogProblem.jpg, CommitLogSize.jpg, 
> RUN3tpstats.jpg, cfstats-clean.txt, stacktrace.txt, system.log.clean
>
>
> After upgrading from cassandra 2.0.x to 2.1.10, we began seeing problems 
> where some nodes break the 12G commit log max we configured and go as high as 
> 65G or more before it restarts. Once it reaches the state of more than 12G 
> commit log files, "nodetool compactionstats" hangs. Eventually C* restarts 
> without errors (not sure yet whether it is crashing but I'm checking into it) 
> and the cleanup occurs and the commit logs shrink back down again. Here is 
> the nodetool compactionstats immediately after restart.
> {code}
> jgriffith@prod1xc1.c2.bf1:~$ ndc
> pending tasks: 2185
> compaction type   keyspace   table   completed     total          unit    progress
> Compaction        SyncCore   *cf1*   61251208033   170643574558   bytes   35.89%
> Compaction        SyncCore   *cf2*   19262483904   19266079916    bytes   99.98%
> Compaction        SyncCore   *cf3*   6592197093    6592316682     bytes   100.00%
> Compaction        SyncCore   *cf4*   3411039555    3411039557     bytes   100.00%
> Compaction        SyncCore   *cf5*   2879241009    2879487621     bytes   99.99%
> Compaction        SyncCore   *cf6*   21252493623   21252635196    bytes   100.00%
> Compaction        SyncCore   *cf7*   81009853587   81009854438    bytes   100.00%
> Compaction        SyncCore   *cf8*   3005734580    3005768582     bytes   100.00%
> Active compaction remaining time : n/a
> {code}
> I was also doing periodic "nodetool tpstats" which were working but not being 
> logged in system.log on the StatusLogger thread until after the compaction 
> started working again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/5] cassandra git commit: Fix sorting for queries with an IN condition on partition key columns

2015-10-16 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/trunk 6f2b855c6 -> 737a3385c


Fix sorting for queries with an IN condition on partition key columns

patch by Benjamin Lerer; reviewed by Sam Tunnicliffe for CASSANDRA-10363


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f587397c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f587397c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f587397c

Branch: refs/heads/trunk
Commit: f587397c9c41c1a68b4e46fc16bad8d48c975e4d
Parents: 806378c
Author: blerer 
Authored: Fri Oct 16 14:41:54 2015 +0200
Committer: blerer 
Committed: Fri Oct 16 14:41:54 2015 +0200

--
 .../cassandra/cql3/ColumnSpecification.java | 10 
 .../org/apache/cassandra/cql3/ResultSet.java|  5 ++
 .../apache/cassandra/cql3/UntypedResultSet.java |  6 +--
 .../cql3/statements/SelectStatement.java| 18 +--
 .../cassandra/cql3/statements/Selection.java| 45 +++-
 .../org/apache/cassandra/cql3/CQLTester.java|  3 +-
 .../operations/SelectOrderByTest.java   | 54 +++-
 7 files changed, 116 insertions(+), 25 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f587397c/src/java/org/apache/cassandra/cql3/ColumnSpecification.java
--
diff --git a/src/java/org/apache/cassandra/cql3/ColumnSpecification.java 
b/src/java/org/apache/cassandra/cql3/ColumnSpecification.java
index f5f921d..836c6b9 100644
--- a/src/java/org/apache/cassandra/cql3/ColumnSpecification.java
+++ b/src/java/org/apache/cassandra/cql3/ColumnSpecification.java
@@ -55,4 +55,14 @@ public class ColumnSpecification
 {
 return Objects.hashCode(ksName, cfName, name, type);
 }
+
+@Override
+public String toString()
+{
+return Objects.toStringHelper(this)
+  .add("name", name)
+  .add("type", type)
+  .toString();
+}
+
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f587397c/src/java/org/apache/cassandra/cql3/ResultSet.java
--
diff --git a/src/java/org/apache/cassandra/cql3/ResultSet.java 
b/src/java/org/apache/cassandra/cql3/ResultSet.java
index 813fd48..85cba57 100644
--- a/src/java/org/apache/cassandra/cql3/ResultSet.java
+++ b/src/java/org/apache/cassandra/cql3/ResultSet.java
@@ -284,6 +284,11 @@ public class ResultSet
 return names == null ? columnCount : names.size();
 }
 
+/**
+ * Adds the specified column which will not be serialized.
+ *
+ * @param name the column
+ */
 public void addNonSerializedColumn(ColumnSpecification name)
 {
 // See comment above. Because columnCount doesn't account the 
newly added name, it

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f587397c/src/java/org/apache/cassandra/cql3/UntypedResultSet.java
--
diff --git a/src/java/org/apache/cassandra/cql3/UntypedResultSet.java 
b/src/java/org/apache/cassandra/cql3/UntypedResultSet.java
index 81482ef..a0b6ae7 100644
--- a/src/java/org/apache/cassandra/cql3/UntypedResultSet.java
+++ b/src/java/org/apache/cassandra/cql3/UntypedResultSet.java
@@ -76,7 +76,7 @@ public abstract class UntypedResultSet implements 
Iterable
 {
 if (cqlRows.rows.size() != 1)
 throw new IllegalStateException("One row required, " + 
cqlRows.rows.size() + " found");
-return new Row(cqlRows.metadata.names, cqlRows.rows.get(0));
+return new Row(cqlRows.metadata.requestNames(), 
cqlRows.rows.get(0));
 }
 
 public Iterator iterator()
@@ -89,7 +89,7 @@ public abstract class UntypedResultSet implements 
Iterable
 {
 if (!iter.hasNext())
 return endOfData();
-return new Row(cqlRows.metadata.names, iter.next());
+return new Row(cqlRows.metadata.requestNames(), 
iter.next());
 }
 };
 }
@@ -154,7 +154,7 @@ public abstract class UntypedResultSet implements 
Iterable
 this.select = select;
 this.pager = pager;
 this.pageSize = pageSize;
-this.metadata = select.getResultMetadata().names;
+this.metadata = select.getResultMetadata().requestNames();
 }
 
 public int size()

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f587397c/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java

[jira] [Commented] (CASSANDRA-10471) fix flapping empty_in_test dtest

2015-10-16 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14960618#comment-14960618
 ] 

Sylvain Lebresne commented on CASSANDRA-10471:
--

bq. I can't tell if the dtests were harmed. There are 20 failures on the 
branch. The 3.0 branch hasn't had 20 failures in the past few builds.

I noticed that some dtest builds are bad, with a lot of tests timing out (tests 
that usually pass). For instance, this morning we had a build with [37 
failures|http://cassci.datastax.com/job/cassandra-3.0_dtest/261/] (and this 
wasn't committed). Anyway, I'm relatively confident the patch doesn't break 
anything unrelated, as the only code paths added are clearly ones that weren't 
allowed before, so committed. We can always revert if it surprisingly turns out 
that it breaks dtests consistently.

> fix flapping empty_in_test dtest
> 
>
> Key: CASSANDRA-10471
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10471
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Sylvain Lebresne
> Fix For: 3.0.0 rc2
>
>
> {{upgrade_tests/cql_tests.py:TestCQL.empty_in_test}} fails about half the 
> time on the upgrade path from 2.2 to 3.0:
> http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/42/testReport/upgrade_tests.cql_tests/TestCQL/empty_in_test/history/
> Once [this dtest PR|https://github.com/riptano/cassandra-dtest/pull/586] is 
> merged, these tests should also run with this upgrade path on normal 3.0 
> jobs. Until then, you can run it with the following command:
> {code}
> SKIP=false CASSANDRA_VERSION=binary:2.2.0 UPGRADE_TO=git:cassandra-3.0 
> nosetests 2>&1 upgrade_tests/cql_tests.py:TestCQL.empty_in_test
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


svn commit: r10831 - in /release/cassandra: 2.1.11/ 2.1.9/ 2.2.1/ 2.2.3/ debian/dists/21x/ debian/dists/21x/main/binary-amd64/ debian/dists/21x/main/binary-i386/ debian/dists/21x/main/source/ debian/d

2015-10-16 Thread jake
Author: jake
Date: Fri Oct 16 12:57:36 2015
New Revision: 10831

Log:
2.2.3 and 2.1.11

Added:
release/cassandra/2.1.11/
release/cassandra/2.1.11/apache-cassandra-2.1.11-bin.tar.gz   (with props)
release/cassandra/2.1.11/apache-cassandra-2.1.11-bin.tar.gz.asc
release/cassandra/2.1.11/apache-cassandra-2.1.11-bin.tar.gz.asc.md5
release/cassandra/2.1.11/apache-cassandra-2.1.11-bin.tar.gz.asc.sha1
release/cassandra/2.1.11/apache-cassandra-2.1.11-bin.tar.gz.md5
release/cassandra/2.1.11/apache-cassandra-2.1.11-bin.tar.gz.sha1
release/cassandra/2.1.11/apache-cassandra-2.1.11-src.tar.gz   (with props)
release/cassandra/2.1.11/apache-cassandra-2.1.11-src.tar.gz.asc
release/cassandra/2.1.11/apache-cassandra-2.1.11-src.tar.gz.asc.md5
release/cassandra/2.1.11/apache-cassandra-2.1.11-src.tar.gz.asc.sha1
release/cassandra/2.1.11/apache-cassandra-2.1.11-src.tar.gz.md5
release/cassandra/2.1.11/apache-cassandra-2.1.11-src.tar.gz.sha1
release/cassandra/2.2.3/
release/cassandra/2.2.3/apache-cassandra-2.2.3-bin.tar.gz   (with props)
release/cassandra/2.2.3/apache-cassandra-2.2.3-bin.tar.gz.asc
release/cassandra/2.2.3/apache-cassandra-2.2.3-bin.tar.gz.asc.md5
release/cassandra/2.2.3/apache-cassandra-2.2.3-bin.tar.gz.asc.sha1
release/cassandra/2.2.3/apache-cassandra-2.2.3-bin.tar.gz.md5
release/cassandra/2.2.3/apache-cassandra-2.2.3-bin.tar.gz.sha1
release/cassandra/2.2.3/apache-cassandra-2.2.3-src.tar.gz   (with props)
release/cassandra/2.2.3/apache-cassandra-2.2.3-src.tar.gz.asc
release/cassandra/2.2.3/apache-cassandra-2.2.3-src.tar.gz.asc.md5
release/cassandra/2.2.3/apache-cassandra-2.2.3-src.tar.gz.asc.sha1
release/cassandra/2.2.3/apache-cassandra-2.2.3-src.tar.gz.md5
release/cassandra/2.2.3/apache-cassandra-2.2.3-src.tar.gz.sha1

release/cassandra/debian/pool/main/c/cassandra/cassandra-tools_2.1.11_all.deb   
(with props)

release/cassandra/debian/pool/main/c/cassandra/cassandra-tools_2.2.3_all.deb   
(with props)
release/cassandra/debian/pool/main/c/cassandra/cassandra_2.1.11.diff.gz   
(with props)
release/cassandra/debian/pool/main/c/cassandra/cassandra_2.1.11.dsc
release/cassandra/debian/pool/main/c/cassandra/cassandra_2.1.11.orig.tar.gz 
  (with props)

release/cassandra/debian/pool/main/c/cassandra/cassandra_2.1.11.orig.tar.gz.asc
release/cassandra/debian/pool/main/c/cassandra/cassandra_2.1.11_all.deb   
(with props)
release/cassandra/debian/pool/main/c/cassandra/cassandra_2.2.3.diff.gz   
(with props)
release/cassandra/debian/pool/main/c/cassandra/cassandra_2.2.3.dsc
release/cassandra/debian/pool/main/c/cassandra/cassandra_2.2.3.orig.tar.gz  
 (with props)

release/cassandra/debian/pool/main/c/cassandra/cassandra_2.2.3.orig.tar.gz.asc
release/cassandra/debian/pool/main/c/cassandra/cassandra_2.2.3_all.deb   
(with props)
Removed:
release/cassandra/2.1.9/
release/cassandra/2.2.1/
release/cassandra/debian/pool/main/c/cassandra/cassandra-tools_2.1.9_all.deb
release/cassandra/debian/pool/main/c/cassandra/cassandra-tools_2.2.1_all.deb
release/cassandra/debian/pool/main/c/cassandra/cassandra_2.1.9.diff.gz
release/cassandra/debian/pool/main/c/cassandra/cassandra_2.1.9.dsc
release/cassandra/debian/pool/main/c/cassandra/cassandra_2.1.9.orig.tar.gz

release/cassandra/debian/pool/main/c/cassandra/cassandra_2.1.9.orig.tar.gz.asc
release/cassandra/debian/pool/main/c/cassandra/cassandra_2.1.9_all.deb
release/cassandra/debian/pool/main/c/cassandra/cassandra_2.2.1.diff.gz
release/cassandra/debian/pool/main/c/cassandra/cassandra_2.2.1.dsc
release/cassandra/debian/pool/main/c/cassandra/cassandra_2.2.1.orig.tar.gz

release/cassandra/debian/pool/main/c/cassandra/cassandra_2.2.1.orig.tar.gz.asc
release/cassandra/debian/pool/main/c/cassandra/cassandra_2.2.1_all.deb
Modified:
release/cassandra/debian/dists/21x/InRelease
release/cassandra/debian/dists/21x/Release
release/cassandra/debian/dists/21x/Release.gpg
release/cassandra/debian/dists/21x/main/binary-amd64/Packages
release/cassandra/debian/dists/21x/main/binary-amd64/Packages.gz
release/cassandra/debian/dists/21x/main/binary-i386/Packages
release/cassandra/debian/dists/21x/main/binary-i386/Packages.gz
release/cassandra/debian/dists/21x/main/source/Sources.gz
release/cassandra/debian/dists/22x/InRelease
release/cassandra/debian/dists/22x/Release
release/cassandra/debian/dists/22x/Release.gpg
release/cassandra/debian/dists/22x/main/binary-amd64/Packages
release/cassandra/debian/dists/22x/main/binary-amd64/Packages.gz
release/cassandra/debian/dists/22x/main/binary-i386/Packages
release/cassandra/debian/dists/22x/main/binary-i386/Packages.gz
release/cassandra/debian/dists/22x/main/source/Sources.gz

Added: release/cassandra/2.1.11/apache-cassandra-2.1.11-bin.tar.gz

[jira] [Updated] (CASSANDRA-10542) Deprecate Pig support in 2.2 and remove it in 3.0

2015-10-16 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10542:
--
Attachment: 10542-2.2.txt

> Deprecate Pig support in 2.2 and remove it in 3.0
> -
>
> Key: CASSANDRA-10542
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10542
> Project: Cassandra
>  Issue Type: Task
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
> Fix For: 2.2.x, 3.0.0 rc2
>
> Attachments: 10542-2.2.txt
>
>
> Nobody's currently responsible for Pig code. As a result, there is nobody to 
> fix the issues, or even fix the failing tests (of which we unfortunately have 
> plenty). Those tests take time to run, constantly hang, and fail with cryptic 
> errors that we don't know how to fix and don't have enough resources to 
> investigate.
> Thus I propose we deprecate Pig support in 2.2 and remove it in 3.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Accept empty selections in ColumnFilter builder

2015-10-16 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 56cfc6ea3 -> 9aefe13ab


Accept empty selections in ColumnFilter builder

patch by slebresne; reviewed by aweisberg for CASSANDRA-10471

The builder for ColumnFilter was asserting that the built selection
selected at least one column. But some empty IN queries actually select
nothing, so that assertion was triggered by some tests. The patch
modifies the builder so that it accepts that case and returns an empty
filter as expected.
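
A minimal standalone sketch of the fallback described above (FilterBuilder
and NO_COLUMNS are illustrative stand-ins chosen for this example, not the
real org.apache.cassandra.db.filter.ColumnFilter API):

import java.util.Collections;
import java.util.LinkedHashSet;
import java.util.Set;

// Sketch of the builder behaviour described in this commit message.
public class EmptySelectionSketch
{
    // Plays the role of an explicitly empty selection (PartitionColumns.NONE in the patch).
    static final Set<String> NO_COLUMNS = Collections.emptySet();

    static class FilterBuilder
    {
        private final boolean fetchAll;  // true when the whole row is fetched regardless of the selection
        private Set<String> selection;   // stays null until a column is added

        FilterBuilder(boolean fetchAll)
        {
            this.fetchAll = fetchAll;
        }

        FilterBuilder add(String column)
        {
            if (selection == null)
                selection = new LinkedHashSet<>();
            selection.add(column);
            return this;
        }

        Set<String> build()
        {
            // Old behaviour (sketched): assert fetchAll || selection != null;
            // New behaviour: an empty IN can legitimately leave nothing selected,
            // so fall back to an explicitly empty selection instead of asserting.
            if (!fetchAll && selection == null)
                return NO_COLUMNS;
            return selection == null ? null : Collections.unmodifiableSet(selection);
        }
    }

    public static void main(String[] args)
    {
        // A non-fetch-all builder with nothing added now builds cleanly:
        Set<String> selected = new FilterBuilder(false).build();
        System.out.println("selected columns: " + selected);  // prints: selected columns: []
    }
}

With -ea enabled, the old assert turned this case into an AssertionError in
tests; the fallback lets the filter behave as an empty selection, which is
what an empty IN query should produce.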


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9aefe13a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9aefe13a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9aefe13a

Branch: refs/heads/cassandra-3.0
Commit: 9aefe13abd2fae9067e58027079e1959bf897e9b
Parents: 56cfc6e
Author: Sylvain Lebresne 
Authored: Tue Oct 13 12:10:33 2015 +0200
Committer: Sylvain Lebresne 
Committed: Fri Oct 16 14:45:21 2015 +0200

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/db/filter/ColumnFilter.java | 9 +++--
 2 files changed, 8 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9aefe13a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6bdaa04..a53a299 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0-rc2
+ * Support empty ColumnFilter for backward compatibility on empty IN (CASSANDRA-10471)
  * Remove Pig support (CASSANDRA-10542)
  * Fix LogFile throws Exception when assertion is disabled (CASSANDRA-10522)
  * Revert CASSANDRA-7486, make CMS default GC, move GC config to

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9aefe13a/src/java/org/apache/cassandra/db/filter/ColumnFilter.java
--
diff --git a/src/java/org/apache/cassandra/db/filter/ColumnFilter.java 
b/src/java/org/apache/cassandra/db/filter/ColumnFilter.java
index 1a4573e..62329ab 100644
--- a/src/java/org/apache/cassandra/db/filter/ColumnFilter.java
+++ b/src/java/org/apache/cassandra/db/filter/ColumnFilter.java
@@ -289,7 +289,12 @@ public class ColumnFilter
         public ColumnFilter build()
         {
             boolean isFetchAll = metadata != null;
-            assert isFetchAll || selection != null;
+
+            PartitionColumns selectedColumns = selection == null ? null : selection.build();
+            // It's only ok to have selection == null in ColumnFilter if isFetchAll. So deal with the case of a "selection" builder
+            // with nothing selected (which can happen at least on some backward compatible queries - CASSANDRA-10471).
+            if (!isFetchAll && selectedColumns == null)
+                selectedColumns = PartitionColumns.NONE;
 
             SortedSetMultimap<ColumnIdentifier, ColumnSubselection> s = null;
             if (subSelections != null)
@@ -299,7 +304,7 @@ public class ColumnFilter
                     s.put(subSelection.column().name, subSelection);
             }
 
-            return new ColumnFilter(isFetchAll, metadata, selection == null ? null : selection.build(), s);
+            return new ColumnFilter(isFetchAll, metadata, selectedColumns, s);
         }
     }
 



[5/5] cassandra git commit: Merge branch cassandra-3.0 into trunk

2015-10-16 Thread blerer
Merge branch cassandra-3.0 into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/737a3385
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/737a3385
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/737a3385

Branch: refs/heads/trunk
Commit: 737a3385ce7aea275a4c8234a4ff43d360c57a84
Parents: 6f2b855 1b93eb4
Author: blerer 
Authored: Fri Oct 16 14:51:02 2015 +0200
Committer: blerer 
Committed: Fri Oct 16 14:51:54 2015 +0200

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/cql3/ResultSet.java|  5 ++
 .../apache/cassandra/cql3/UntypedResultSet.java |  6 +--
 .../cassandra/cql3/selection/Selection.java | 57 +---
 .../cassandra/cql3/selection/Selector.java  | 12 +
 .../cql3/selection/SelectorFactories.java   | 20 +++
 .../cql3/selection/SimpleSelector.java  |  6 +++
 .../cql3/statements/SelectStatement.java|  2 +-
 .../cassandra/db/filter/ColumnFilter.java   |  9 +++-
 .../operations/SelectOrderByTest.java   | 52 ++
 10 files changed, 146 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/737a3385/CHANGES.txt
--
diff --cc CHANGES.txt
index b1feeab,a53a299..eb59885
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,9 -1,5 +1,10 @@@
 +3.2
 + * Abort in-progress queries that time out (CASSANDRA-7392)
 + * Add transparent data encryption core classes (CASSANDRA-9945)
 +
 +
  3.0-rc2
+  * Support empty ColumnFilter for backward compatibility on empty IN (CASSANDRA-10471)
   * Remove Pig support (CASSANDRA-10542)
   * Fix LogFile throws Exception when assertion is disabled (CASSANDRA-10522)
   * Revert CASSANDRA-7486, make CMS default GC, move GC config to

http://git-wip-us.apache.org/repos/asf/cassandra/blob/737a3385/src/java/org/apache/cassandra/cql3/UntypedResultSet.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/737a3385/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/737a3385/src/java/org/apache/cassandra/db/filter/ColumnFilter.java
--



[2/5] cassandra git commit: Merge branch cassandra-2.1 into cassandra-2.2

2015-10-16 Thread blerer
Merge branch cassandra-2.1 into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f497c13e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f497c13e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f497c13e

Branch: refs/heads/trunk
Commit: f497c13ee33cc76b7c7bd4c6d4d12caf475ca79d
Parents: def5803 f587397
Author: blerer 
Authored: Fri Oct 16 14:44:29 2015 +0200
Committer: blerer 
Committed: Fri Oct 16 14:45:08 2015 +0200

--
 .../org/apache/cassandra/cql3/ResultSet.java|  5 ++
 .../apache/cassandra/cql3/UntypedResultSet.java |  6 +--
 .../cassandra/cql3/selection/Selection.java | 57 +---
 .../cassandra/cql3/selection/Selector.java  | 12 +
 .../cql3/selection/SelectorFactories.java   | 20 +++
 .../cql3/selection/SimpleSelector.java  |  6 +++
 .../cql3/statements/SelectStatement.java|  2 +-
 .../operations/SelectOrderByTest.java   | 52 ++
 8 files changed, 138 insertions(+), 22 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f497c13e/src/java/org/apache/cassandra/cql3/ResultSet.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f497c13e/src/java/org/apache/cassandra/cql3/UntypedResultSet.java
--
diff --cc src/java/org/apache/cassandra/cql3/UntypedResultSet.java
index 49e0d86,a0b6ae7..e8d610d
--- a/src/java/org/apache/cassandra/cql3/UntypedResultSet.java
+++ b/src/java/org/apache/cassandra/cql3/UntypedResultSet.java
@@@ -73,9 -74,9 +73,9 @@@ public abstract class UntypedResultSet
  
      public Row one()
      {
 -        if (cqlRows.rows.size() != 1)
 -            throw new IllegalStateException("One row required, " + cqlRows.rows.size() + " found");
 +        if (cqlRows.size() != 1)
 +            throw new IllegalStateException("One row required, " + cqlRows.size() + " found");
-         return new Row(cqlRows.metadata.names, cqlRows.rows.get(0));
+         return new Row(cqlRows.metadata.requestNames(), cqlRows.rows.get(0));
      }
  
      public Iterator<Row> iterator()

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f497c13e/src/java/org/apache/cassandra/cql3/selection/Selection.java
--
diff --cc src/java/org/apache/cassandra/cql3/selection/Selection.java
index 13e030f,000..f6925b2
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/cql3/selection/Selection.java
+++ b/src/java/org/apache/cassandra/cql3/selection/Selection.java
@@@ -1,545 -1,0 +1,566 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.cassandra.cql3.selection;
 +
 +import java.nio.ByteBuffer;
 +import java.util.*;
 +
 +import com.google.common.base.Objects;
 +import com.google.common.base.Predicate;
 +import com.google.common.collect.Iterables;
 +import com.google.common.collect.Iterators;
 +
 +import org.apache.cassandra.config.CFMetaData;
 +import org.apache.cassandra.config.ColumnDefinition;
 +import org.apache.cassandra.cql3.*;
 +import org.apache.cassandra.cql3.functions.Function;
 +import org.apache.cassandra.db.Cell;
 +import org.apache.cassandra.db.CounterCell;
 +import org.apache.cassandra.db.ExpiringCell;
 +import org.apache.cassandra.db.context.CounterContext;
 +import org.apache.cassandra.db.marshal.UTF8Type;
 +import org.apache.cassandra.exceptions.InvalidRequestException;
 +import org.apache.cassandra.utils.ByteBufferUtil;
 +
 +public abstract class Selection
 +{
 +/**
 + * A predicate that returns true for static columns.
 + */
 +    private static final Predicate<ColumnDefinition> STATIC_COLUMN_FILTER = new Predicate<ColumnDefinition>()
 +    {
 +        public boolean apply(ColumnDefinition def)
 +        {
 +            return def.isStatic();
 +        }
 +    };
 

[3/5] cassandra git commit: Accept empty selections in ColumnFilter builder

2015-10-16 Thread blerer
Accept empty selections in ColumnFilter builder

patch by slebresne; reviewed by aweisberg for CASSANDRA-10471

The builder for ColumnFilter was asserting that the built selection
selected at least one column. But some empty IN queries actually select
nothing, so that assertion was triggered by some tests. The patch
modifies the builder so that it accepts that case and returns an empty
filter as expected.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9aefe13a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9aefe13a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9aefe13a

Branch: refs/heads/trunk
Commit: 9aefe13abd2fae9067e58027079e1959bf897e9b
Parents: 56cfc6e
Author: Sylvain Lebresne 
Authored: Tue Oct 13 12:10:33 2015 +0200
Committer: Sylvain Lebresne 
Committed: Fri Oct 16 14:45:21 2015 +0200

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/db/filter/ColumnFilter.java | 9 +++--
 2 files changed, 8 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9aefe13a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6bdaa04..a53a299 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0-rc2
+ * Support empty ColumnFilter for backward compatibility on empty IN (CASSANDRA-10471)
  * Remove Pig support (CASSANDRA-10542)
  * Fix LogFile throws Exception when assertion is disabled (CASSANDRA-10522)
  * Revert CASSANDRA-7486, make CMS default GC, move GC config to

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9aefe13a/src/java/org/apache/cassandra/db/filter/ColumnFilter.java
--
diff --git a/src/java/org/apache/cassandra/db/filter/ColumnFilter.java 
b/src/java/org/apache/cassandra/db/filter/ColumnFilter.java
index 1a4573e..62329ab 100644
--- a/src/java/org/apache/cassandra/db/filter/ColumnFilter.java
+++ b/src/java/org/apache/cassandra/db/filter/ColumnFilter.java
@@ -289,7 +289,12 @@ public class ColumnFilter
         public ColumnFilter build()
         {
             boolean isFetchAll = metadata != null;
-            assert isFetchAll || selection != null;
+
+            PartitionColumns selectedColumns = selection == null ? null : selection.build();
+            // It's only ok to have selection == null in ColumnFilter if isFetchAll. So deal with the case of a "selection" builder
+            // with nothing selected (which can happen at least on some backward compatible queries - CASSANDRA-10471).
+            if (!isFetchAll && selectedColumns == null)
+                selectedColumns = PartitionColumns.NONE;
 
             SortedSetMultimap<ColumnIdentifier, ColumnSubselection> s = null;
             if (subSelections != null)
@@ -299,7 +304,7 @@ public class ColumnFilter
                     s.put(subSelection.column().name, subSelection);
             }
 
-            return new ColumnFilter(isFetchAll, metadata, selection == null ? null : selection.build(), s);
+            return new ColumnFilter(isFetchAll, metadata, selectedColumns, s);
         }
     }
 



[4/5] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2015-10-16 Thread blerer
Merge branch cassandra-2.2 into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1b93eb40
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1b93eb40
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1b93eb40

Branch: refs/heads/trunk
Commit: 1b93eb403b5358a1f2a6cbebdc52bb9069e91a76
Parents: 9aefe13 f497c13
Author: blerer 
Authored: Fri Oct 16 14:48:00 2015 +0200
Committer: blerer 
Committed: Fri Oct 16 14:48:12 2015 +0200

--
 .../org/apache/cassandra/cql3/ResultSet.java|  5 ++
 .../apache/cassandra/cql3/UntypedResultSet.java |  6 +--
 .../cassandra/cql3/selection/Selection.java | 57 +---
 .../cassandra/cql3/selection/Selector.java  | 12 +
 .../cql3/selection/SelectorFactories.java   | 20 +++
 .../cql3/selection/SimpleSelector.java  |  6 +++
 .../cql3/statements/SelectStatement.java|  2 +-
 .../operations/SelectOrderByTest.java   | 52 ++
 8 files changed, 138 insertions(+), 22 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b93eb40/src/java/org/apache/cassandra/cql3/ResultSet.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b93eb40/src/java/org/apache/cassandra/cql3/UntypedResultSet.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b93eb40/src/java/org/apache/cassandra/cql3/selection/Selection.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b93eb40/src/java/org/apache/cassandra/cql3/selection/Selector.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b93eb40/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b93eb40/test/unit/org/apache/cassandra/cql3/validation/operations/SelectOrderByTest.java
--


