[09/10] cassandra git commit: Merge branch 'cassandra-3.11' into cassandra-3.X

2016-12-05 Thread slebresne
Merge branch 'cassandra-3.11' into cassandra-3.X

* cassandra-3.11:
  Reject default_time_to_live option when creating or altering MVs


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/918a0621
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/918a0621
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/918a0621

Branch: refs/heads/trunk
Commit: 918a062122d108f7f8055cf287b2508e70d38e7a
Parents: 36a3ba0 a06b469
Author: Sylvain Lebresne 
Authored: Mon Dec 5 12:11:33 2016 +0100
Committer: Sylvain Lebresne 
Committed: Mon Dec 5 12:11:33 2016 +0100

--
 CHANGES.txt |  4 ++
 NEWS.txt|  9 +
 .../cql3/statements/AlterViewStatement.java |  8 
 .../cql3/statements/CreateViewStatement.java| 11 -
 .../org/apache/cassandra/cql3/ViewTest.java | 42 
 5 files changed, 73 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/918a0621/CHANGES.txt
--
diff --cc CHANGES.txt
index b75a1e4,e69a67a..5a3fedf
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,9 -1,7 +1,13 @@@
 +3.12
 + * NoReplicationTokenAllocator should work with zero replication factor 
(CASSANDRA-12983)
 + * cqlsh auto completion: refactor definition of compaction strategy options 
(CASSANDRA-12946)
 + * Add support for arithmetic operators (CASSANDRA-11935)
 + * Tables in system_distributed should not use gcgs of 0 (CASSANDRA-12954)
 +
+ 3.11
+ Merged from 3.0:
+  * Reject default_time_to_live option when creating or altering MVs 
(CASSANDRA-12868)
+ 
  3.10
   * Use correct bounds for all-data range when filtering (CASSANDRA-12666)
   * Remove timing window in test case (CASSANDRA-12875)



[01/10] cassandra git commit: Reject default_time_to_live option when creating or altering MVs

2016-12-05 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 255505ea7 -> 4d5a53e9b
  refs/heads/cassandra-3.11 b63c03bc5 -> a06b469c6
  refs/heads/cassandra-3.X 36a3ba0d0 -> 918a06212
  refs/heads/trunk ce631bdd1 -> 7e668271a


Reject default_time_to_live option when creating or altering MVs

patch by Sundar Srinivasan; reviewed by Sylvain Lebresne for CASSANDRA-12868


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4d5a53e9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4d5a53e9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4d5a53e9

Branch: refs/heads/cassandra-3.0
Commit: 4d5a53e9b7008c1159164f1fb2107511df015332
Parents: 255505e
Author: Sundar Srinivasan 
Authored: Mon Nov 21 16:29:43 2016 +0100
Committer: Sylvain Lebresne 
Committed: Mon Dec 5 12:04:48 2016 +0100

--
 CHANGES.txt |  1 +
 NEWS.txt|  3 ++
 .../cql3/statements/AlterViewStatement.java |  8 
 .../cql3/statements/CreateViewStatement.java| 11 -
 .../org/apache/cassandra/cql3/ViewTest.java | 46 
 5 files changed, 68 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4d5a53e9/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index fff1d54..8cdca57 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.11
+ * Reject default_time_to_live option when creating or altering MVs 
(CASSANDRA-12868)
  * Nodetool should use a more sane max heap size (CASSANDRA-12739)
  * LocalToken ensures token values are cloned on heap (CASSANDRA-12651)
  * AnticompactionRequestSerializer serializedSize is incorrect 
(CASSANDRA-12934)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4d5a53e9/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index d5f4f06..32b5084 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -20,6 +20,9 @@ Upgrading
 -
- Nothing specific to this release, but please see previous versions 
upgrading section,
  especially if you are upgrading from 2.2.
+   - Specifying the default_time_to_live option when creating or altering a
+ materialized view was erroneously accepted (and ignored). It is now
+ properly rejected.
 
 3.0.10
 =

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4d5a53e9/src/java/org/apache/cassandra/cql3/statements/AlterViewStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/AlterViewStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/AlterViewStatement.java
index 5b1699b..ba077c7 100644
--- a/src/java/org/apache/cassandra/cql3/statements/AlterViewStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/AlterViewStatement.java
@@ -75,6 +75,14 @@ public class AlterViewStatement extends 
SchemaAlteringStatement
   "value is used to TTL 
undelivered updates. Setting gc_grace_seconds too " +
   "low might cause undelivered 
updates to expire before being replayed.");
 }
+
+if (params.defaultTimeToLive > 0)
+{
+throw new InvalidRequestException("Cannot set or alter 
default_time_to_live for a materialized view. " +
+  "Data in a materialized view 
always expire at the same time than " +
+  "the corresponding data in the 
parent table.");
+}
+
 viewCopy.metadata.params(params);
 
 MigrationManager.announceViewUpdate(viewCopy, isLocalOnly);
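
For readers following along, the effect of this check is visible from any client: creating (or altering) a materialized view with default_time_to_live now fails with an InvalidRequest error instead of being silently ignored. Below is a minimal, purely illustrative sketch using the DataStax Java driver; the keyspace, table and view names are made up and a local contact point is assumed.

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.exceptions.InvalidQueryException;

public class MvTtlRejectionSketch
{
    public static void main(String[] args)
    {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect())
        {
            session.execute("CREATE KEYSPACE IF NOT EXISTS ks WITH replication = " +
                            "{'class': 'SimpleStrategy', 'replication_factor': 1}");
            session.execute("CREATE TABLE IF NOT EXISTS ks.t (k int PRIMARY KEY, v int)");
            try
            {
                // With this patch the server rejects the TTL option on a view.
                session.execute("CREATE MATERIALIZED VIEW ks.mv AS " +
                                "SELECT v, k FROM ks.t WHERE v IS NOT NULL AND k IS NOT NULL " +
                                "PRIMARY KEY (v, k) WITH default_time_to_live = 60");
            }
            catch (InvalidQueryException e)
            {
                System.out.println("rejected as expected: " + e.getMessage());
            }
        }
    }
}
{code}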

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4d5a53e9/src/java/org/apache/cassandra/cql3/statements/CreateViewStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/CreateViewStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/CreateViewStatement.java
index 13e528c..30e55a0 100644
--- a/src/java/org/apache/cassandra/cql3/statements/CreateViewStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/CreateViewStatement.java
@@ -275,12 +275,21 @@ public class CreateViewStatement extends 
SchemaAlteringStatement
 if (targetClusteringColumns.isEmpty())
 throw new InvalidRequestException("No columns are defined for 
Materialized View other than primary key");
 
+TableParams params = properties.properties.asNewTableParams();
+
+if (params.defaultTimeToLive > 0)
+{
+throw new 

[3/3] cassandra git commit: Merge branch 'cassandra-3.X' into trunk

2016-12-05 Thread samt
Merge branch 'cassandra-3.X' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bc70e490
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bc70e490
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bc70e490

Branch: refs/heads/trunk
Commit: bc70e49037b17298f9273da3ae23220209c3eeca
Parents: 7e66827 afbc2e8
Author: Sam Tunnicliffe 
Authored: Mon Dec 5 11:21:04 2016 +
Committer: Sam Tunnicliffe 
Committed: Mon Dec 5 11:21:04 2016 +

--
 CHANGES.txt  | 1 +
 src/java/org/apache/cassandra/db/SystemKeyspace.java | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bc70e490/CHANGES.txt
--
diff --cc CHANGES.txt
index 14e45be,3d27690..f428e31
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,13 -1,5 +1,14 @@@
 +4.0
 + * Remove pre-3.0 compatibility code for 4.0 (CASSANDRA-12716)
 + * Add column definition kind to dropped columns in schema (CASSANDRA-12705)
 + * Add (automate) Nodetool Documentation (CASSANDRA-12672)
 + * Update bundled cqlsh python driver to 3.7.0 (CASSANDRA-12736)
 + * Reject invalid replication settings when creating or altering a keyspace 
(CASSANDRA-12681)
 + * Clean up the SSTableReader#getScanner API wrt removal of RateLimiter 
(CASSANDRA-12422)
 +
 +
  3.12
+  * Conditionally update index built status to avoid unnecessary flushes 
(CASSANDRA-12969)
   * NoReplicationTokenAllocator should work with zero replication factor 
(CASSANDRA-12983)
   * cqlsh auto completion: refactor definition of compaction strategy options 
(CASSANDRA-12946)
   * Add support for arithmetic operators (CASSANDRA-11935)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bc70e490/src/java/org/apache/cassandra/db/SystemKeyspace.java
--



[1/3] cassandra git commit: Conditionally update index build status

2016-12-05 Thread samt
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.X 918a06212 -> afbc2e850
  refs/heads/trunk 7e668271a -> bc70e4903


Conditionally update index build status

Patch by Corentin Chary; reviewed by Sam Tunnicliffe for CASSANDRA-12969


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/afbc2e85
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/afbc2e85
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/afbc2e85

Branch: refs/heads/cassandra-3.X
Commit: afbc2e8502a8a8d1d6a319017dfc3c2a45bebaca
Parents: 918a062
Author: Corentin Chary 
Authored: Mon Nov 28 16:23:01 2016 +0100
Committer: Sam Tunnicliffe 
Committed: Mon Dec 5 11:20:51 2016 +

--
 CHANGES.txt  | 1 +
 src/java/org/apache/cassandra/db/SystemKeyspace.java | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/afbc2e85/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 5a3fedf..3d27690 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.12
+ * Conditionally update index built status to avoid unnecessary flushes 
(CASSANDRA-12969)
  * NoReplicationTokenAllocator should work with zero replication factor 
(CASSANDRA-12983)
  * cqlsh auto completion: refactor definition of compaction strategy options 
(CASSANDRA-12946)
  * Add support for arithmetic operators (CASSANDRA-11935)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/afbc2e85/src/java/org/apache/cassandra/db/SystemKeyspace.java
--
diff --git a/src/java/org/apache/cassandra/db/SystemKeyspace.java 
b/src/java/org/apache/cassandra/db/SystemKeyspace.java
index 31a461b..aac424d 100644
--- a/src/java/org/apache/cassandra/db/SystemKeyspace.java
+++ b/src/java/org/apache/cassandra/db/SystemKeyspace.java
@@ -1043,7 +1043,7 @@ public final class SystemKeyspace
 
 public static void setIndexBuilt(String keyspaceName, String indexName)
 {
-String req = "INSERT INTO %s.\"%s\" (table_name, index_name) VALUES 
(?, ?)";
+String req = "INSERT INTO %s.\"%s\" (table_name, index_name) VALUES 
(?, ?) IF NOT EXISTS;";
 executeInternal(String.format(req, 
SchemaConstants.SYSTEM_KEYSPACE_NAME, BUILT_INDEXES), keyspaceName, indexName);
 forceBlockingFlush(BUILT_INDEXES);
 }
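
For context, the IF NOT EXISTS clause turns this into a conditional insert: when the row is already present, no mutation is applied, which is what avoids re-writing (and then flushing) an entry for an index that is already marked built. A minimal, purely illustrative sketch of those semantics from a client's perspective, using the DataStax Java driver and a hypothetical ks.built_flags table with the same two text columns:

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;

public class ConditionalInsertSketch
{
    public static void main(String[] args)
    {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect())
        {
            String insert = "INSERT INTO ks.built_flags (table_name, index_name) " +
                            "VALUES ('ks1', 'idx1') IF NOT EXISTS";

            // First attempt: the row does not exist yet, so the write is applied.
            ResultSet first = session.execute(insert);
            System.out.println("first applied: " + first.wasApplied());   // true

            // Second attempt: the row already exists, so nothing is written.
            ResultSet second = session.execute(insert);
            System.out.println("second applied: " + second.wasApplied()); // false
        }
    }
}
{code}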



[2/3] cassandra git commit: Conditionally update index build status

2016-12-05 Thread samt
Conditionally update index build status

Patch by Corentin Chary; reviewed by Sam Tunnicliffe for CASSANDRA-12969


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/afbc2e85
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/afbc2e85
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/afbc2e85

Branch: refs/heads/trunk
Commit: afbc2e8502a8a8d1d6a319017dfc3c2a45bebaca
Parents: 918a062
Author: Corentin Chary 
Authored: Mon Nov 28 16:23:01 2016 +0100
Committer: Sam Tunnicliffe 
Committed: Mon Dec 5 11:20:51 2016 +

--
 CHANGES.txt  | 1 +
 src/java/org/apache/cassandra/db/SystemKeyspace.java | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/afbc2e85/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 5a3fedf..3d27690 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.12
+ * Conditionally update index built status to avoid unnecessary flushes 
(CASSANDRA-12969)
  * NoReplicationTokenAllocator should work with zero replication factor 
(CASSANDRA-12983)
  * cqlsh auto completion: refactor definition of compaction strategy options 
(CASSANDRA-12946)
  * Add support for arithmetic operators (CASSANDRA-11935)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/afbc2e85/src/java/org/apache/cassandra/db/SystemKeyspace.java
--
diff --git a/src/java/org/apache/cassandra/db/SystemKeyspace.java 
b/src/java/org/apache/cassandra/db/SystemKeyspace.java
index 31a461b..aac424d 100644
--- a/src/java/org/apache/cassandra/db/SystemKeyspace.java
+++ b/src/java/org/apache/cassandra/db/SystemKeyspace.java
@@ -1043,7 +1043,7 @@ public final class SystemKeyspace
 
 public static void setIndexBuilt(String keyspaceName, String indexName)
 {
-String req = "INSERT INTO %s.\"%s\" (table_name, index_name) VALUES 
(?, ?)";
+String req = "INSERT INTO %s.\"%s\" (table_name, index_name) VALUES 
(?, ?) IF NOT EXISTS;";
 executeInternal(String.format(req, 
SchemaConstants.SYSTEM_KEYSPACE_NAME, BUILT_INDEXES), keyspaceName, indexName);
 forceBlockingFlush(BUILT_INDEXES);
 }



[jira] [Updated] (CASSANDRA-12969) Index: index can significantly slow down boot

2016-12-05 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-12969:

   Resolution: Fixed
Fix Version/s: (was: 3.x)
   3.12
   4.0
   Status: Resolved  (was: Patch Available)

bq.Hello, did it go well ?
Yes, thanks for the patch and sorry that I forgot to include the link to the CI 
jobs in my previous comment:

||branch||testall||dtest||
|[12969-3.X|https://github.com/beobal/cassandra/tree/12969-3.X]|[testall|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-12969-3.X-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-12969-3.X-dtest]|
|[12969-trunk|https://github.com/beobal/cassandra/tree/12969-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-12969-trunk-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-12969-trunk-dtest]|

The CI looks good relative to upstream (there are some dtest failures also 
present in trunk), so I've committed to 3.X in 
{{afbc2e8502a8a8d1d6a319017dfc3c2a45bebaca}} and merged to trunk. 

bq.Another sensible optimization will be CASSANDRA-12962
I'm sure someone will take a look at that as soon as they have a chance. In the 
meantime, I quickly glanced at the ticket and I'm afraid I don't really 
understand the problem you're describing. Perhaps the description could use a 
little work to make it a bit clearer?

> Index: index can significantly slow down boot
> -
>
> Key: CASSANDRA-12969
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12969
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Corentin Chary
> Fix For: 4.0, 3.12
>
> Attachments: 0004-index-do-not-re-insert-values-in-IndexInfo.patch
>
>
> During startup, each existing index is opened and marked as built by adding 
> an entry in "IndexInfo" and forcing a flush. Because of that we end up 
> flushing one sstable per index. On systems with HDDs this can take minutes for 
> nothing.
> The following patch makes it possible to avoid creating useless new sstables if the 
> index was already marked as built and will greatly reduce the startup time 
> (and improve availability during restarts).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12649) Add BATCH metrics

2016-12-05 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15722250#comment-15722250
 ] 

Benjamin Lerer commented on CASSANDRA-12649:


@Alwyn Davis any concern with the changes I did to your patch? 

[~iamaleksey] could you have a look at the patch to check that I did not miss 
anything?

> Add BATCH metrics
> -
>
> Key: CASSANDRA-12649
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12649
> Project: Cassandra
>  Issue Type: Wish
>Reporter: Alwyn Davis
>Assignee: Alwyn Davis
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 12649-3.x-v2.patch, 12649-3.x.patch, 
> stress-batch-metrics.tar.gz, stress-trunk.tar.gz, trunk-12649.txt
>
>
> To identify causes of load on a cluster, it would be useful to have some 
> additional metrics:
> * *Mutation size distribution:* I believe this would be relevant when 
> tracking the performance of unlogged batches.
> * *Logged / Unlogged Partitions per batch distribution:* This would also give 
> a count of batch types processed. Multiple distinct tables in batch would 
> just be considered as separate partitions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-12649) Add BATCH metrics

2016-12-05 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15722250#comment-15722250
 ] 

Benjamin Lerer edited comment on CASSANDRA-12649 at 12/5/16 1:22 PM:
-

[~alwyn] any concern with the changes I did to your patch? 

[~iamaleksey] could you have a look at the patch to check that I did not miss 
anything?


was (Author: blerer):
@Alwyn Davis any concern with the changes I did to your patch? 

[~iamaleksey] could you have a look at the patch to check that I did not miss 
anything?

> Add BATCH metrics
> -
>
> Key: CASSANDRA-12649
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12649
> Project: Cassandra
>  Issue Type: Wish
>Reporter: Alwyn Davis
>Assignee: Alwyn Davis
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 12649-3.x-v2.patch, 12649-3.x.patch, 
> stress-batch-metrics.tar.gz, stress-trunk.tar.gz, trunk-12649.txt
>
>
> To identify causes of load on a cluster, it would be useful to have some 
> additional metrics:
> * *Mutation size distribution:* I believe this would be relevant when 
> tracking the performance of unlogged batches.
> * *Logged / Unlogged Partitions per batch distribution:* This would also give 
> a count of batch types processed. Multiple distinct tables in batch would 
> just be considered as separate partitions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12991) Inter-node race condition in validation compaction

2016-12-05 Thread Benjamin Roth (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Roth updated CASSANDRA-12991:
--
Description: 
Problem:
When a validation compaction is triggered by a repair, it may happen that due to 
in-flight mutations the Merkle trees differ even though the data is actually consistent.

Example:
t = 1: 
Repair starts, triggers validations
Node A starts validation
t = 10001:
Mutation arrives at Node A
t = 10002:
Mutation arrives at Node B
t = 10003:
Node B starts validation

Hashes of nodes A and B will differ, but the data is consistent when viewed as of a 
snapshot at t = 1.

Impact:
Unnecessary streaming happens. This may not have a big impact on low-traffic CFs and 
partitions, but on high-traffic CFs and very big partitions it can have a bigger 
impact and is a waste of resources.

Possible solution:
Build hashes based upon a snapshot timestamp.
This requires SSTables created after that timestamp to be filtered when doing a 
validation compaction:
- Cells with timestamp > snapshot time have to be removed
- Tombstone range markers have to be handled
 - Bounds have to be removed if delete timestamp > snapshot time
 - Boundary markers have to be either changed to a bound or completely removed, 
depending if start and/or end are both affected or not

Probably this is a known behaviour. Have there been any discussions about this 
in the past? I did not find a matching issue, so I created this one.

I am happy about any feedback, whatsoever.

  was:
Problem:
When a validation compaction is triggered by a repair it may happen that due to 
flying in mutations the merkle trees differ but the data is consistent however.

Example:
t = 1: 
Repair starts validation
Node A starts validation
t = 10001:
Mutation arrives at Node A
t = 10002:
Mutation arrives at Node B
t = 10003:
Node B starts validation

Hashes of node A+B will differ but data is consistent from a view (think of it 
like a snapshot) t = 1.

Impact:
Unnecessary streaming happens. This may not a big impact on low traffic CFs, 
partitions but on high traffic CFs and maybe very big partitions, this may have 
a bigger impact and is a waste of resources.

Possible solution:
Build hashes based upon a snapshot timestamp.
This requires SSTables created after that timestamp to be filtered when doing a 
validation compaction:
- Cells with timestamp > snapshot time have to be removed
- Tombstone range markers have to be handled
 - Bounds have to be removed if delete timestamp > snapshot time
 - Boundary markers have to be either changed to a bound or completely removed, 
depending if start and/or end are both affected or not

Probably this is a known behaviour. Have there been any discussions about this 
in the past? Did not find an matching issue, so I created this one.

I am happy about any feedback, whatsoever.


> Inter-node race condition in validation compaction
> --
>
> Key: CASSANDRA-12991
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12991
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benjamin Roth
>Priority: Minor
>
> Problem:
> When a validation compaction is triggered by a repair it may happen that due 
> to flying in mutations the merkle trees differ but the data is consistent 
> however.
> Example:
> t = 1: 
> Repair starts, triggers validations
> Node A starts validation
> t = 10001:
> Mutation arrives at Node A
> t = 10002:
> Mutation arrives at Node B
> t = 10003:
> Node B starts validation
> Hashes of node A+B will differ but data is consistent from a view (think of 
> it like a snapshot) t = 1.
> Impact:
> Unnecessary streaming happens. This may not a big impact on low traffic CFs, 
> partitions but on high traffic CFs and maybe very big partitions, this may 
> have a bigger impact and is a waste of resources.
> Possible solution:
> Build hashes based upon a snapshot timestamp.
> This requires SSTables created after that timestamp to be filtered when doing 
> a validation compaction:
> - Cells with timestamp > snapshot time have to be removed
> - Tombstone range markers have to be handled
>  - Bounds have to be removed if delete timestamp > snapshot time
>  - Boundary markers have to be either changed to a bound or completely 
> removed, depending if start and/or end are both affected or not
> Probably this is a known behaviour. Have there been any discussions about 
> this in the past? Did not find an matching issue, so I created this one.
> I am happy about any feedback, whatsoever.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12994) update airline dependency to 0.7

2016-12-05 Thread Tomas Repik (JIRA)
Tomas Repik created CASSANDRA-12994:
---

 Summary: update airline dependency to 0.7
 Key: CASSANDRA-12994
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12994
 Project: Cassandra
  Issue Type: Improvement
Reporter: Tomas Repik
 Attachments: cassandra-3.9-airline0.7.patch

We want to include Cassandra in Fedora, and there are some tweaks to the 
Cassandra sources we need to make. The io.airlift:airline dependency is one of 
those tweaks. Cassandra depends on airline 0.6, but in Fedora we have the latest 
upstream version, 0.7, released quite some time ago, on Nov 6 2014. I attached a 
patch updating the Cassandra sources to depend on the newest airline sources. 
The only actual changes are in the imports. Please consider updating. 
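
If it helps reviewers, the import-only nature of the change would look roughly like the sketch below; the command class is made up, and the exact package names should be checked against the attached patch (airline 0.6 used io.airlift.command, while 0.7 moved to io.airlift.airline):

{code}
// airline 0.6: the annotations lived in io.airlift.command
// import io.airlift.command.Command;
// import io.airlift.command.Option;

// airline 0.7: same annotations, new package
import io.airlift.airline.Command;
import io.airlift.airline.Option;

@Command(name = "status", description = "Print a short status line")
public class StatusCommand implements Runnable
{
    @Option(name = "--verbose", description = "Verbose output")
    public boolean verbose;

    public void run()
    {
        System.out.println(verbose ? "status: everything in detail" : "status: ok");
    }
}
{code}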



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12991) Inter-node race condition in validation compaction

2016-12-05 Thread Benjamin Roth (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Roth updated CASSANDRA-12991:
--
Description: 
Problem:
When a validation compaction is triggered by a repair it may happen that due to 
flying in mutations the merkle trees differ but the data is consistent however.

Example:
t = 1: 
Repair starts validation
Node A starts validation
t = 10001:
Mutation arrives at Node A
t = 10002:
Mutation arrives at Node B
t = 10003:
Node B starts validation

Hashes of node A+B will differ but data is consistent from a view (think of it 
like a snapshot) t = 1.

Impact:
Unnecessary streaming happens. This may not a big impact on low traffic CFs, 
partitions but on high traffic CFs and maybe very big partitions, this may have 
a bigger impact and is a waste of resources.

Possible solution:
Build hashes based upon a snapshot timestamp.
This requires SSTables created after that timestamp to be filtered when doing a 
validation compaction:
- Cells with timestamp > snapshot time have to be removed
- Tombstone range markers have to be handled
 - Bounds have to be removed if delete timestamp > snapshot time
 - Boundary markers have to be either changed to a bound or completely removed, 
depending if start and/or end are both affected or not

Probably this is a known behaviour. Have there been any discussions about this 
in the past? Did not find an matching issue, so I created this one.

I am happy about any feedback, whatsoever.

  was:
Problem:
When a validation compaction is triggered by a repair it may happen that due to 
flying in mutations the merkle trees differ but the data is not consistent.

Example:
t = 1: 
Repair starts validation
Node A starts validation
t = 10001:
Mutation arrives at Node A
t = 10002:
Mutation arrives at Node B
t = 10003:
Node B starts validation

Hashes of node A+B will differ but data is consistent from a view (think of it 
like a snapshot) t = 1.

Impact:
Unnecessary streaming happens. This may not a big impact on low traffic CFs, 
partitions but on high traffic CFs and maybe very big partitions, this may have 
a bigger impact and is a waste of resources.

Possible solution:
Build hashes based upon a snapshot timestamp.
This requires SSTables created after that timestamp to be filtered when doing a 
validation compaction:
- Cells with timestamp > snapshot time have to be removed
- Tombstone range markers have to be handled
 - Bounds have to be removed if delete timestamp > snapshot time
 - Boundary markers have to be either changed to a bound or completely removed, 
depending if start and/or end are both affected or not

Probably this is a known behaviour. Have there been any discussions about this 
in the past? Did not find an matching issue, so I created this one.

I am happy about any feedback, whatsoever.


> Inter-node race condition in validation compaction
> --
>
> Key: CASSANDRA-12991
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12991
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benjamin Roth
>Priority: Minor
>
> Problem:
> When a validation compaction is triggered by a repair it may happen that due 
> to flying in mutations the merkle trees differ but the data is consistent 
> however.
> Example:
> t = 1: 
> Repair starts validation
> Node A starts validation
> t = 10001:
> Mutation arrives at Node A
> t = 10002:
> Mutation arrives at Node B
> t = 10003:
> Node B starts validation
> Hashes of node A+B will differ but data is consistent from a view (think of 
> it like a snapshot) t = 1.
> Impact:
> Unnecessary streaming happens. This may not a big impact on low traffic CFs, 
> partitions but on high traffic CFs and maybe very big partitions, this may 
> have a bigger impact and is a waste of resources.
> Possible solution:
> Build hashes based upon a snapshot timestamp.
> This requires SSTables created after that timestamp to be filtered when doing 
> a validation compaction:
> - Cells with timestamp > snapshot time have to be removed
> - Tombstone range markers have to be handled
>  - Bounds have to be removed if delete timestamp > snapshot time
>  - Boundary markers have to be either changed to a bound or completely 
> removed, depending if start and/or end are both affected or not
> Probably this is a known behaviour. Have there been any discussions about 
> this in the past? Did not find an matching issue, so I created this one.
> I am happy about any feedback, whatsoever.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12991) Inter-node race condition in validation compaction

2016-12-05 Thread Benjamin Roth (JIRA)
Benjamin Roth created CASSANDRA-12991:
-

 Summary: Inter-node race condition in validation compaction
 Key: CASSANDRA-12991
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12991
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benjamin Roth
Priority: Minor


Problem:
When a validation compaction is triggered by a repair it may happen that due to 
flying in mutations the merkle trees differ but the data is not consistent.

Example:
t = 1: 
Repair starts validation
Node A starts validation
t = 10001:
Mutation arrives at Node A
t = 10002:
Mutation arrives at Node B
t = 10003:
Node B starts validation

Hashes of nodes A and B will differ, but the data is consistent when viewed as of a 
snapshot at t = 1.

Impact:
Unnecessary streaming happens. This may not have a big impact on low-traffic CFs and 
partitions, but on high-traffic CFs and very big partitions it can have a bigger 
impact and is a waste of resources.

Possible solution:
Build hashes based upon a snapshot timestamp.
This requires SSTables created after that timestamp to be filtered when doing a 
validation compaction:
- Cells with timestamp > snapshot time have to be removed
- Tombstone range markers have to be handled
 - Bounds have to be removed if delete timestamp > snapshot time
 - Boundary markers have to be either changed to a bound or completely removed, 
depending if start and/or end are both affected or not

Probably this is a known behaviour. Have there been any discussions about this 
in the past? I did not find a matching issue, so I created this one.

I am happy about any feedback, whatsoever.
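
To make the proposed filtering step concrete, here is a minimal sketch that drops everything written after the agreed snapshot timestamp before hashing. It deliberately uses a made-up Cell class rather than Cassandra's real storage-engine types, and it ignores the tombstone range markers discussed above, which would need analogous handling:

{code}
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class SnapshotFilterSketch
{
    // Simplified stand-in for a cell; not Cassandra's real Cell API.
    static final class Cell
    {
        final String name;
        final long timestampMicros;

        Cell(String name, long timestampMicros)
        {
            this.name = name;
            this.timestampMicros = timestampMicros;
        }
    }

    // Keep only cells written at or before the snapshot timestamp, so that
    // replicas hash the same logical view of the data regardless of when
    // each of them started its validation.
    static List<Cell> visibleAt(List<Cell> cells, long snapshotMicros)
    {
        return cells.stream()
                    .filter(c -> c.timestampMicros <= snapshotMicros)
                    .collect(Collectors.toList());
    }

    public static void main(String[] args)
    {
        List<Cell> cells = Arrays.asList(new Cell("a", 9_000), new Cell("b", 10_002));
        // Only "a" survives a snapshot taken at t = 10000.
        System.out.println(visibleAt(cells, 10_000).size()); // 1
    }
}
{code}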



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12992) when mapreduce create sstables and load to cassandra cluster,then drop the table there are much data file not moved to snapshot

2016-12-05 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

翟玉勇 updated CASSANDRA-12992:

Description: 
When MapReduce creates sstables and bulk-loads them into the Cassandra cluster, and 
the table is then dropped, many data files are not moved to the snapshot.

nodetool clearsnapshot cannot free the disk space,

so we must delete the files manually.


cassandra table schema:

CREATE TABLE test.st_platform_api_restaurant_export (
id_date text PRIMARY KEY,
dt text,
eleme_order_total double,
order_amt bigint,
order_date text,
restaurant_id int,
total double
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = 'restaurant'
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 2592000
AND gc_grace_seconds = 1800
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';


mapreduce job:
CREATE EXTERNAL TABLE st_platform_api_restaurant_export_h2c_sstable
(
id_date string,
order_amt bigint,
total double,
eleme_order_total double,
order_date string,
restaurant_id int,
dt string)  STORED BY 
'org.apache.hadoop.hive.cassandra.bulkload.CqlBulkStorageHandler'
TBLPROPERTIES (
'cassandra.output.keyspace.username' = 'cassandra',
'cassandra.output.keyspace'='test',
'cassandra.output.partitioner.class'='org.apache.cassandra.dht.Murmur3Partitioner',
'cassandra.output.keyspace.passwd'='cassandra',
'mapreduce.output.basename'='st_platform_api_restaurant_export',
'cassandra.output.thrift.address'='casandra cluster ips',
'cassandra.output.delete.source'='true',
'cassandra.columnfamily.insert.st_platform_api_restaurant_export'='insert into 
test.st_platform_api_restaurant_export(id_date,order_amt,total,eleme_order_total,order_date,restaurant_id,dt)values(?,?,?,?,?,?,?)',
'cassandra.columnfamily.schema.st_platform_api_restaurant_export'='CREATE TABLE 
test.st_platform_api_restaurant_export (id_date text PRIMARY KEY,dt 
text,eleme_order_total double,order_amt bigint,order_date text,restaurant_id 
int,total double)');


  was:
when mapreduce create sstables and load to cassandra cluster,then drop the 
table there are much data file not move to snapshot,

nodetool clearsnapshot can not free the disk,

wo must Manual delete the files 


Summary: when mapreduce create sstables and load to cassandra 
cluster,then drop the table there are much data file not moved to snapshot  
(was: when mapreduce create sstables and load to cassandra cluster,then drop 
the table there are much data file not move to snapshot)

> when mapreduce create sstables and load to cassandra cluster,then drop the 
> table there are much data file not moved to snapshot
> ---
>
> Key: CASSANDRA-12992
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12992
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: cassandra 2.1.15
>Reporter: 翟玉勇
>Priority: Minor
> Attachments: after-droptable.png, before-droptable.png
>
>
> when mapreduce create sstables and load to cassandra cluster,then drop the 
> table there are much data file not move to snapshot,
> nodetool clearsnapshot can not free the disk,
> wo must Manual delete the files 
> cassandra table schema:
> CREATE TABLE test.st_platform_api_restaurant_export (
> id_date text PRIMARY KEY,
> dt text,
> eleme_order_total double,
> order_amt bigint,
> order_date text,
> restaurant_id int,
> total double
> ) WITH bloom_filter_fp_chance = 0.01
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = 'restaurant'
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
> AND compression = {'sstable_compression': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 2592000
> AND gc_grace_seconds = 1800
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99.0PERCENTILE';
> mapreduce job:
> CREATE EXTERNAL TABLE st_platform_api_restaurant_export_h2c_sstable
> (
> id_date string,
> order_amt bigint,
> total double,
> eleme_order_total double,
> order_date string,
> restaurant_id int,
> dt string)  STORED BY 
> 

[jira] [Created] (CASSANDRA-12993) License headers missing in some source files

2016-12-05 Thread Tomas Repik (JIRA)
Tomas Repik created CASSANDRA-12993:
---

 Summary: License headers missing in some source files
 Key: CASSANDRA-12993
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12993
 Project: Cassandra
  Issue Type: Improvement
Reporter: Tomas Repik


The following source files are without license headers:
  doc/source/_static/extra.css
  src/java/org/apache/cassandra/db/commitlog/IntervalSet.java
  src/java/org/apache/cassandra/utils/IntegerInterval.java
  test/unit/org/apache/cassandra/db/commitlog/CommitLogCQLTest.java
  test/unit/org/apache/cassandra/utils/IntegerIntervalsTest.java
  tools/stress/src/org/apache/cassandra/stress/WorkManager.java

Could you please confirm the licensing of the code and/or contents, and add 
license headers?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12995) update hppc dependency to 0.7

2016-12-05 Thread Tomas Repik (JIRA)
Tomas Repik created CASSANDRA-12995:
---

 Summary: update hppc dependency to 0.7
 Key: CASSANDRA-12995
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12995
 Project: Cassandra
  Issue Type: Improvement
Reporter: Tomas Repik
 Attachments: cassandra-3.9-hppc.patch

We want to include Cassandra in Fedora, and there are some tweaks to the 
Cassandra sources we need to make. The com.carrotsearch:hppc dependency is one of 
those tweaks. Cassandra depends on hppc 0.5.4, but in Fedora we have the newer 
version 0.7.1; upstream recently released the even newer version 0.7.2. I attached 
a patch updating the Cassandra sources to depend on the 0.7.1 hppc sources. It 
should also be compatible with the newest upstream version. The only actual change 
is the removal of the Open infix in class names. The issue was discussed here: 
https://bugzilla.redhat.com/show_bug.cgi?id=1340876 
Please consider updating.
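
For illustration, the rename described above would look roughly like this in calling code; the snippet is a sketch using a commonly used primitive hash set, and the exact classes touched should be taken from the attached patch:

{code}
// hppc 0.5.x: class names carry an "Open" infix
// import com.carrotsearch.hppc.IntOpenHashSet;
// IntOpenHashSet set = new IntOpenHashSet();

// hppc 0.7.x: the infix is gone; the API is otherwise unchanged for this usage
import com.carrotsearch.hppc.IntHashSet;

public class HppcRenameSketch
{
    public static void main(String[] args)
    {
        IntHashSet set = new IntHashSet();
        set.add(42);
        System.out.println(set.contains(42)); // true
    }
}
{code}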



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12996) update slf4j dependency to 1.7.21

2016-12-05 Thread Tomas Repik (JIRA)
Tomas Repik created CASSANDRA-12996:
---

 Summary: update slf4j dependency to 1.7.21
 Key: CASSANDRA-12996
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12996
 Project: Cassandra
  Issue Type: Improvement
Reporter: Tomas Repik
 Attachments: cassandra-3.9-slf4j.patch

We want to include Cassandra in Fedora, and there are some tweaks to the 
Cassandra sources we need to make. The slf4j dependency is one of those tweaks. 
Cassandra depends on slf4j 1.7.7, but in Fedora we have the latest upstream 
version, 1.7.21, released some time ago on April 6 2016. I attached a patch 
updating the Cassandra sources to depend on the newest slf4j sources. The only 
actual change is the number of parameters accepted by the SubstituteLogger class. 
Please consider updating.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12992) when mapreduce create sstables and load to cassandra cluster,then drop the table there are much data file not move

2016-12-05 Thread JIRA
翟玉勇 created CASSANDRA-12992:
---

 Summary: when mapreduce create sstables and load to cassandra 
cluster,then drop the table there are much data file not move
 Key: CASSANDRA-12992
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12992
 Project: Cassandra
  Issue Type: Bug
Reporter: 翟玉勇






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12671) Support changing hinted handoff throttle in real time

2016-12-05 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-12671:
--
Status: Open  (was: Patch Available)

> Support changing hinted handoff throttle in real time 
> --
>
> Key: CASSANDRA-12671
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12671
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Dikang Gu
>Assignee: Dikang Gu
>Priority: Minor
> Fix For: 3.x
>
>
> Problem: currently the sethintedhandoffthrottlekb takes effect when current 
> hints handoff tasks finish, and then applies to next task, which could take 
> hours for big node. 
> I think it would be great to change the hinted handoff throttle in real time, 
> which means it takes effect immediately. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12671) Support changing hinted handoff throttle in real time

2016-12-05 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15721721#comment-15721721
 ] 

Aleksey Yeschenko commented on CASSANDRA-12671:
---

Sorry, but I don't think this particular implementation will do.

1. {{getHintedHandoffThrottleBytesPerNode()}} does not belong to 
{{DatabaseDescriptor}}. That class should be kept as close to a dumb wrapper 
over {{Config}} as possible.

2. That calculation is not cheap, and as such should not be performed on *every 
hint read*. Hint:

{code}
public Set<InetAddress> getAllEndpoints()
{
lock.readLock().lock();
try
{
return ImmutableSet.copyOf(endpointToHostIdMap.keySet());
}
finally
{
lock.readLock().unlock();
}
}
{code}

Generally I would recommend reading the source code of every method you call, 
to know what each of them does under the hood. Can't always move code from one 
place to another: what's acceptable as a one-off call might not be acceptable 
on a hot path.

3. How are you even updating {{conf.hinted_handoff_throttle_in_kb}} live? Am I 
missing something? Are you using a custom configuration provider?

4. Either way, this change, I assume, is not a frequent event. As such changes 
to the rate limit should be push-based, not pull-based.
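
To spell out the push-based alternative: the hot path only acquires permits from a shared rate limiter, and the limiter's rate is changed once, from the setter, when the operator updates the throttle. A minimal sketch using Guava's RateLimiter; the class and method names are illustrative, not Cassandra's actual ones:

{code}
import com.google.common.util.concurrent.RateLimiter;

public class HintThrottleSketch
{
    // Shared limiter used on the hint dispatch path; reads never recompute the rate.
    private final RateLimiter limiter = RateLimiter.create(1024.0 * 1024.0);

    // Hot path: just acquire permits for the hint about to be sent.
    public void onHintDispatch(int hintSizeInBytes)
    {
        limiter.acquire(hintSizeInBytes);
    }

    // Push-based update: called once from the JMX/nodetool setter when the throttle
    // changes, instead of recomputing the per-node rate on every hint read.
    public void setThrottleInKb(int throttleInKb, int endpointCount)
    {
        double bytesPerSecond = Math.max(1.0, throttleInKb * 1024.0 / Math.max(1, endpointCount));
        limiter.setRate(bytesPerSecond);
    }
}
{code}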

> Support changing hinted handoff throttle in real time 
> --
>
> Key: CASSANDRA-12671
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12671
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Dikang Gu
>Assignee: Dikang Gu
>Priority: Minor
> Fix For: 3.x
>
>
> Problem: currently the sethintedhandoffthrottlekb takes effect when current 
> hints handoff tasks finish, and then applies to next task, which could take 
> hours for big node. 
> I think it would be great to change the hinted handoff throttle in real time, 
> which means it takes effect immediately. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12991) Inter-node race condition in validation compaction

2016-12-05 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15721760#comment-15721760
 ] 

Stefan Podkowinski commented on CASSANDRA-12991:


My assumption is that validation compaction works as follows:
* involved nodes receive a ValidationRequest message
* affected keyspace is being flushed
* validation is started using sstables candidates determined right after the 
flush

I don't see why you'd have to "SSTables created after that timestamp to be 
filtered when doing a validation compaction". Any SSTable created after the 
validation compaction was started should not be involved in the validation 
process anyways. 


> Inter-node race condition in validation compaction
> --
>
> Key: CASSANDRA-12991
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12991
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benjamin Roth
>Priority: Minor
>
> Problem:
> When a validation compaction is triggered by a repair it may happen that due 
> to flying in mutations the merkle trees differ but the data is consistent 
> however.
> Example:
> t = 1: 
> Repair starts, triggers validations
> Node A starts validation
> t = 10001:
> Mutation arrives at Node A
> t = 10002:
> Mutation arrives at Node B
> t = 10003:
> Node B starts validation
> Hashes of node A+B will differ but data is consistent from a view (think of 
> it like a snapshot) t = 1.
> Impact:
> Unnecessary streaming happens. This may not a big impact on low traffic CFs, 
> partitions but on high traffic CFs and maybe very big partitions, this may 
> have a bigger impact and is a waste of resources.
> Possible solution:
> Build hashes based upon a snapshot timestamp.
> This requires SSTables created after that timestamp to be filtered when doing 
> a validation compaction:
> - Cells with timestamp > snapshot time have to be removed
> - Tombstone range markers have to be handled
>  - Bounds have to be removed if delete timestamp > snapshot time
>  - Boundary markers have to be either changed to a bound or completely 
> removed, depending if start and/or end are both affected or not
> Probably this is a known behaviour. Have there been any discussions about 
> this in the past? Did not find an matching issue, so I created this one.
> I am happy about any feedback, whatsoever.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (CASSANDRA-12991) Inter-node race condition in validation compaction

2016-12-05 Thread Stefan Podkowinski (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Podkowinski updated CASSANDRA-12991:
---
Comment: was deleted

(was: My assumption is that validation compaction works as follows:
* involved nodes receive a ValidationRequest message
* affected keyspace is being flushed
* validation is started using sstables candidates determined right after the 
flush

I don't see why you'd have to "SSTables created after that timestamp to be 
filtered when doing a validation compaction". Any SSTable created after the 
validation compaction was started should not be involved in the validation 
process anyways. 
)

> Inter-node race condition in validation compaction
> --
>
> Key: CASSANDRA-12991
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12991
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benjamin Roth
>Priority: Minor
>
> Problem:
> When a validation compaction is triggered by a repair it may happen that due 
> to flying in mutations the merkle trees differ but the data is consistent 
> however.
> Example:
> t = 1: 
> Repair starts, triggers validations
> Node A starts validation
> t = 10001:
> Mutation arrives at Node A
> t = 10002:
> Mutation arrives at Node B
> t = 10003:
> Node B starts validation
> Hashes of node A+B will differ but data is consistent from a view (think of 
> it like a snapshot) t = 1.
> Impact:
> Unnecessary streaming happens. This may not a big impact on low traffic CFs, 
> partitions but on high traffic CFs and maybe very big partitions, this may 
> have a bigger impact and is a waste of resources.
> Possible solution:
> Build hashes based upon a snapshot timestamp.
> This requires SSTables created after that timestamp to be filtered when doing 
> a validation compaction:
> - Cells with timestamp > snapshot time have to be removed
> - Tombstone range markers have to be handled
>  - Bounds have to be removed if delete timestamp > snapshot time
>  - Boundary markers have to be either changed to a bound or completely 
> removed, depending if start and/or end are both affected or not
> Probably this is a known behaviour. Have there been any discussions about 
> this in the past? Did not find an matching issue, so I created this one.
> I am happy about any feedback, whatsoever.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12991) Inter-node race condition in validation compaction

2016-12-05 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15721759#comment-15721759
 ] 

Stefan Podkowinski commented on CASSANDRA-12991:


My assumption is that validation compaction works as follows:
* involved nodes receive a ValidationRequest message
* affected keyspace is being flushed
* validation is started using sstables candidates determined right after the 
flush

I don't see why you'd have to "SSTables created after that timestamp to be 
filtered when doing a validation compaction". Any SSTable created after the 
validation compaction was started should not be involved in the validation 
process anyways. 


> Inter-node race condition in validation compaction
> --
>
> Key: CASSANDRA-12991
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12991
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benjamin Roth
>Priority: Minor
>
> Problem:
> When a validation compaction is triggered by a repair it may happen that due 
> to flying in mutations the merkle trees differ but the data is consistent 
> however.
> Example:
> t = 1: 
> Repair starts, triggers validations
> Node A starts validation
> t = 10001:
> Mutation arrives at Node A
> t = 10002:
> Mutation arrives at Node B
> t = 10003:
> Node B starts validation
> Hashes of node A+B will differ but data is consistent from a view (think of 
> it like a snapshot) t = 1.
> Impact:
> Unnecessary streaming happens. This may not a big impact on low traffic CFs, 
> partitions but on high traffic CFs and maybe very big partitions, this may 
> have a bigger impact and is a waste of resources.
> Possible solution:
> Build hashes based upon a snapshot timestamp.
> This requires SSTables created after that timestamp to be filtered when doing 
> a validation compaction:
> - Cells with timestamp > snapshot time have to be removed
> - Tombstone range markers have to be handled
>  - Bounds have to be removed if delete timestamp > snapshot time
>  - Boundary markers have to be either changed to a bound or completely 
> removed, depending if start and/or end are both affected or not
> Probably this is a known behaviour. Have there been any discussions about 
> this in the past? Did not find an matching issue, so I created this one.
> I am happy about any feedback, whatsoever.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12969) Index: index can significantly slow down boot

2016-12-05 Thread Corentin Chary (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15721571#comment-15721571
 ] 

Corentin Chary commented on CASSANDRA-12969:


Hello, did it go well ?

Another sensible optimization will be CASSANDRA-12962

> Index: index can significantly slow down boot
> -
>
> Key: CASSANDRA-12969
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12969
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Corentin Chary
> Fix For: 3.x
>
> Attachments: 0004-index-do-not-re-insert-values-in-IndexInfo.patch
>
>
> During startup, each existing index is opened and marked as built by adding 
> an entry in "IndexInfo" and forcing a flush. Because of that we end up 
> flushing one sstable per index. On systems with HDDs this can take minutes for 
> nothing.
> The following patch makes it possible to avoid creating useless new sstables if the 
> index was already marked as built and will greatly reduce the startup time 
> (and improve availability during restarts).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12992) when mapreduce create sstables and load to cassandra cluster,then drop the table there are much data file not move to snapshot

2016-12-05 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

翟玉勇 updated CASSANDRA-12992:

   Attachment: after-droptable.png
   before-droptable.png
  Environment: cassandra 2.1.15
Since Version: 2.1.15
 Priority: Minor  (was: Major)
  Description: 
When MapReduce creates sstables and bulk-loads them into the Cassandra cluster, and 
the table is then dropped, many data files are not moved to the snapshot.

nodetool clearsnapshot cannot free the disk space,

so we must delete the files manually.

  Component/s: Compaction
  Summary: when mapreduce create sstables and load to cassandra 
cluster,then drop the table there are much data file not move to snapshot  
(was: when mapreduce create sstables and load to cassandra cluster,then drop 
the table there are much data file not move)

> when mapreduce create sstables and load to cassandra cluster,then drop the 
> table there are much data file not move to snapshot
> --
>
> Key: CASSANDRA-12992
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12992
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: cassandra 2.1.15
>Reporter: 翟玉勇
>Priority: Minor
> Attachments: after-droptable.png, before-droptable.png
>
>
> when mapreduce create sstables and load to cassandra cluster,then drop the 
> table there are much data file not move to snapshot,
> nodetool clearsnapshot can not free the disk,
> wo must Manual delete the files 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12966) Gossip thread slows down when using batch commit log

2016-12-05 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15721705#comment-15721705
 ] 

Stefan Podkowinski commented on CASSANDRA-12966:


My personal preference for this kind of API design is to provide the executor 
as a method parameter. This makes it obvious, while reading the code on the 
caller side, that the function is asynchronous, as well as which executor context 
will be used to run the code. In the case of the discussed {{updateTokens}} method, 
I'd probably just add the {{Stage}} enum as another parameter, so you'd call 
{{SystemKeyspace.updateTokens(endpoint, tokensToUpdateInSystemKeyspace, Stage.MUTATION)}}.
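
The general shape of that API, in a small self-contained sketch: the caller passes the executor, so the hand-off to another thread is visible at the call site. The names below are illustrative, not Cassandra's actual signatures.

{code}
import java.util.Collection;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;

public final class PeerUpdatesSketch
{
    // The executor (e.g. the mutation stage) is supplied by the caller, which makes
    // the asynchronous nature of the call obvious where it is invoked.
    public static CompletableFuture<Void> updateTokens(String endpoint,
                                                       Collection<String> tokens,
                                                       Executor executor)
    {
        return CompletableFuture.runAsync(() -> writeToPeersTable(endpoint, tokens), executor);
    }

    private static void writeToPeersTable(String endpoint, Collection<String> tokens)
    {
        // Placeholder for the actual system-table write.
        System.out.println("updating " + endpoint + " with " + tokens.size() + " tokens");
    }
}
{code}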

> Gossip thread slows down when using batch commit log
> 
>
> Key: CASSANDRA-12966
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12966
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jason Brown
>Assignee: Jason Brown
>Priority: Minor
>
> When using batch commit log mode, the Gossip thread slows down when updating peers 
> after a node bounces. This is because we perform a bunch of updates to the 
> peers table via {{SystemKeyspace.updatePeerInfo}}, which is a synchronized 
> method. How quickly each one of those individual updates takes depends on how 
> busy the system is at the time wrt write traffic. If the system is largely 
> quiescent, each update will be relatively quick (just waiting for the fsync). 
> If the system is getting a lot of writes, and depending on the 
> commitlog_sync_batch_window_in_ms, each of the Gossip thread's updates can 
> get stuck in the backlog, which causes the Gossip thread to stop processing. 
> We have observed in large clusters that a rolling restart triggers and 
> exacerbates this behavior. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12997) dtest failure in org.apache.cassandra.cql3.validation.operations.AlterTest.testDropListAndAddListWithSameName

2016-12-05 Thread Sean McCarthy (JIRA)
Sean McCarthy created CASSANDRA-12997:
-

 Summary: dtest failure in 
org.apache.cassandra.cql3.validation.operations.AlterTest.testDropListAndAddListWithSameName
 Key: CASSANDRA-12997
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12997
 Project: Cassandra
  Issue Type: Test
Reporter: Sean McCarthy


example failure:

http://cassci.datastax.com/job/trunk_testall/1298/testReport/org.apache.cassandra.cql3.validation.operations/AlterTest/testDropListAndAddListWithSameName

{code}
Error Message

Invalid value for row 0 column 2 (mycollection of type list), expected 
 but got <[first element]>
{code}{code}Stacktrace

junit.framework.AssertionFailedError: Invalid value for row 0 column 2 
(mycollection of type list), expected  but got <[first element]>
at org.apache.cassandra.cql3.CQLTester.assertRows(CQLTester.java:908)
at 
org.apache.cassandra.cql3.validation.operations.AlterTest.testDropListAndAddListWithSameName(AlterTest.java:87)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


svn commit: r1772680 - in /cassandra/site: publish/download/index.html src/download.md

2016-12-05 Thread mshuler
Author: mshuler
Date: Mon Dec  5 14:35:47 2016
New Revision: 1772680

URL: http://svn.apache.org/viewvc?rev=1772680&view=rev
Log:
Change EOLs to "after 4.0 release (date TBD)"

Modified:
cassandra/site/publish/download/index.html
cassandra/site/src/download.md

Modified: cassandra/site/publish/download/index.html
URL: 
http://svn.apache.org/viewvc/cassandra/site/publish/download/index.html?rev=1772680&r1=1772679&r2=1772680&view=diff
==
--- cassandra/site/publish/download/index.html (original)
+++ cassandra/site/publish/download/index.html Mon Dec  5 14:35:47 2016
@@ -110,9 +110,9 @@ released against the most recent bug fix
 The following older Cassandra releases are still supported:
 
 
-  Apache Cassandra 3.0 is supported until May 2017. The 
latest release is http://www.apache.org/dyn/closer.lua/cassandra/3.0.10/apache-cassandra-3.0.10-bin.tar.gz;>3.0.10
 (http://www.apache.org/dist/cassandra/3.0.10/apache-cassandra-3.0.10-bin.tar.gz.asc;>pgp,
 http://www.apache.org/dist/cassandra/3.0.10/apache-cassandra-3.0.10-bin.tar.gz.md5;>md5
 and http://www.apache.org/dist/cassandra/3.0.10/apache-cassandra-3.0.10-bin.tar.gz.sha1;>sha1),
 released on 2016-11-16.
-  Apache Cassandra 2.2 is supported until November 2016. 
The latest release is http://www.apache.org/dyn/closer.lua/cassandra/2.2.8/apache-cassandra-2.2.8-bin.tar.gz;>2.2.8
 (http://www.apache.org/dist/cassandra/2.2.8/apache-cassandra-2.2.8-bin.tar.gz.asc;>pgp,
 http://www.apache.org/dist/cassandra/2.2.8/apache-cassandra-2.2.8-bin.tar.gz.md5;>md5
 and http://www.apache.org/dist/cassandra/2.2.8/apache-cassandra-2.2.8-bin.tar.gz.sha1;>sha1),
 released on 2016-09-28.
-  Apache Cassandra 2.1 is supported until November 2016 
with critical fixes only. The latest release is
+  Apache Cassandra 3.0 is supported until 6 months after 4.0 
release (date TBD). The latest release is http://www.apache.org/dyn/closer.lua/cassandra/3.0.10/apache-cassandra-3.0.10-bin.tar.gz;>3.0.10
 (http://www.apache.org/dist/cassandra/3.0.10/apache-cassandra-3.0.10-bin.tar.gz.asc;>pgp,
 http://www.apache.org/dist/cassandra/3.0.10/apache-cassandra-3.0.10-bin.tar.gz.md5;>md5
 and http://www.apache.org/dist/cassandra/3.0.10/apache-cassandra-3.0.10-bin.tar.gz.sha1;>sha1),
 released on 2016-11-16.
+  Apache Cassandra 2.2 is supported until 4.0 release (date 
TBD). The latest release is http://www.apache.org/dyn/closer.lua/cassandra/2.2.8/apache-cassandra-2.2.8-bin.tar.gz;>2.2.8
 (http://www.apache.org/dist/cassandra/2.2.8/apache-cassandra-2.2.8-bin.tar.gz.asc;>pgp,
 http://www.apache.org/dist/cassandra/2.2.8/apache-cassandra-2.2.8-bin.tar.gz.md5;>md5
 and http://www.apache.org/dist/cassandra/2.2.8/apache-cassandra-2.2.8-bin.tar.gz.sha1;>sha1),
 released on 2016-09-28.
+  Apache Cassandra 2.1 is supported until 4.0 release (date 
TBD) with critical fixes only. The latest release is
 http://www.apache.org/dyn/closer.lua/cassandra/2.1.16/apache-cassandra-2.1.16-bin.tar.gz;>2.1.16
 (http://www.apache.org/dist/cassandra/2.1.16/apache-cassandra-2.1.16-bin.tar.gz.asc;>pgp,
 http://www.apache.org/dist/cassandra/2.1.16/apache-cassandra-2.1.16-bin.tar.gz.md5;>md5
 and http://www.apache.org/dist/cassandra/2.1.16/apache-cassandra-2.1.16-bin.tar.gz.sha1;>sha1),
 released on 2016-10-10.
 
 

Modified: cassandra/site/src/download.md
URL: 
http://svn.apache.org/viewvc/cassandra/site/src/download.md?rev=1772680&r1=1772679&r2=1772680&view=diff
==
--- cassandra/site/src/download.md (original)
+++ cassandra/site/src/download.md Mon Dec  5 14:35:47 2016
@@ -21,9 +21,9 @@ Download the latest Cassandra release: {
 
 The following older Cassandra releases are still supported:
 
-* Apache Cassandra 3.0 is supported until **May 2017**. The latest release is 
{{ "3.0" | full_release_link }}.
-* Apache Cassandra 2.2 is supported until **November 2016**. The latest 
release is {{ "2.2" | full_release_link }}.
-* Apache Cassandra 2.1 is supported until **November 2016** with **critical 
fixes only**. The latest release is
+* Apache Cassandra 3.0 is supported until **6 months after 4.0 release (date 
TBD)**. The latest release is {{ "3.0" | full_release_link }}.
+* Apache Cassandra 2.2 is supported until **4.0 release (date TBD)**. The 
latest release is {{ "2.2" | full_release_link }}.
+* Apache Cassandra 2.1 is supported until **4.0 release (date TBD)** with 
**critical fixes only**. The latest release is
   {{ "2.1" | full_release_link }}.
 
 Older (unsupported) versions of Cassandra are [archived 
here](http://archive.apache.org/dist/cassandra/).




[jira] [Updated] (CASSANDRA-12997) dtest failure in org.apache.cassandra.cql3.validation.operations.AlterTest.testDropListAndAddListWithSameName

2016-12-05 Thread Sean McCarthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean McCarthy updated CASSANDRA-12997:
--
Issue Type: Bug  (was: Test)

> dtest failure in 
> org.apache.cassandra.cql3.validation.operations.AlterTest.testDropListAndAddListWithSameName
> -
>
> Key: CASSANDRA-12997
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12997
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>  Labels: test-failure, testall
>
> example failure:
> http://cassci.datastax.com/job/trunk_testall/1298/testReport/org.apache.cassandra.cql3.validation.operations/AlterTest/testDropListAndAddListWithSameName
> {code}
> Error Message
> Invalid value for row 0 column 2 (mycollection of type list), expected 
>  but got <[first element]>
> {code}{code}Stacktrace
> junit.framework.AssertionFailedError: Invalid value for row 0 column 2 
> (mycollection of type list), expected  but got <[first element]>
>   at org.apache.cassandra.cql3.CQLTester.assertRows(CQLTester.java:908)
>   at 
> org.apache.cassandra.cql3.validation.operations.AlterTest.testDropListAndAddListWithSameName(AlterTest.java:87)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12829) DELETE query with an empty IN clause can delete more than expected

2016-12-05 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-12829:

Status: Patch Available  (was: Open)

> DELETE query with an empty IN clause can delete more than expected
> --
>
> Key: CASSANDRA-12829
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12829
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Arch Linux x64, kernel 4.7.6, Cassandra 3.9 downloaded 
> from the website
>Reporter: Jason T. Bradshaw
>Assignee: Alex Petrov
>
> When deleting from a table with a certain structure and using an *in* clause 
> with an empty list, the *in* clause with an empty list can be ignored, 
> resulting in deleting more than is expected.
> *Setup:*
> {code}
> cqlsh> create table test (a text, b text, id uuid, primary key ((a, b), id));
> cqlsh> insert into test (a, b, id) values ('a', 'b', 
> ----);
> cqlsh> insert into test (a, b, id) values ('b', 'c', 
> ----);
> cqlsh> insert into test (a, b, id) values ('a', 'c', 
> ----);
> cqlsh> select * from test;
>  a | b | id
> ---+---+--
>  a | c | ----
>  b | c | ----
>  a | b | ----
> (3 rows)
> {code}
> *Expected:*
> {code}
> cqlsh> delete from test where a = 'a' and b in ('a', 'b', 'c') and id in ();
> cqlsh> select * from test;
>  a | b | id
> ---+---+--
>  a | c | ----
>  b | c | ----
>  a | b | ----
> (3 rows)
> {code}
> *Actual:*
> {code}
> cqlsh> delete from test where a = 'a' and b in ('a', 'b', 'c') and id in ();
> cqlsh> select * from test;
>  a | b | id
> ---+---+--
>  b | c | ----
> (1 rows)
> {code}
> Instead of deleting nothing, as the final empty *in* clause would imply, it 
> deletes everything that matches the first two clauses, acting as if 
> the following query had been issued instead:
> {code}
> cqlsh> delete from test where a = 'a' and b in ('a', 'b', 'c');
> {code}
> This seems to be related to the presence of a tuple clustering key, as I 
> could not reproduce it without one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8398) Expose time spent waiting in thread pool queue

2016-12-05 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15722595#comment-15722595
 ] 

T Jake Luciani commented on CASSANDRA-8398:
---

Looks good, thanks. I think we should include the reporting of this (p50, p99) 
in nodetool tablestats.
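
As a rough illustration of the metric being discussed, here is a hedged sketch 
(illustrative class and field names only, not the Cassandra patch; it assumes 
the Dropwizard/Codahale metrics library on the classpath) that records how long 
a task sits in an executor queue before it starts running and reports the 
p50/p99 from the resulting histogram:

{code}
import com.codahale.metrics.Snapshot;
import com.codahale.metrics.Timer;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class QueueWaitTimerExample
{
    private static final Timer QUEUE_WAIT = new Timer();

    static Runnable measured(Runnable task)
    {
        final long enqueuedAtNanos = System.nanoTime();
        return () ->
        {
            // the gap between submission and the first instruction of the task
            // is the queue wait; feed it to the histogram-backed timer
            QUEUE_WAIT.update(System.nanoTime() - enqueuedAtNanos, TimeUnit.NANOSECONDS);
            task.run();
        };
    }

    public static void main(String[] args) throws Exception
    {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        for (int i = 0; i < 100; i++)
            pool.submit(measured(() -> { /* simulated work */ }));
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);

        Snapshot s = QUEUE_WAIT.getSnapshot();
        System.out.printf("queue wait p50=%.0fns p99=%.0fns%n",
                          s.getMedian(), s.get99thPercentile());
    }
}
{code}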

> Expose time spent waiting in thread pool queue 
> ---
>
> Key: CASSANDRA-8398
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8398
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: T Jake Luciani
>Assignee: Dikang Gu
>Priority: Minor
>  Labels: lhf
> Fix For: 3.12
>
>
> We are missing an important source of latency in our system, the time waiting 
> to be processed by thread pools.  We should add a metric for this so someone 
> can easily see how much time is spent just waiting to be processed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12620) Resurrected empty rows on update to 3.x

2016-12-05 Thread Brice Dutheil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15722612#comment-15722612
 ] 

Brice Dutheil commented on CASSANDRA-12620:
---

[~blerer] We have rolled back an environment. I'm not sure I can upload the dump 
(maybe in private). But here's another primary key that is *expired* both before 
and after the upgrade:

For reference, 1479813209 is Tue, 22 Nov 2016 11:13:29 GMT.

{code:title=sstable2json / DSE 4.8.6}
{"key": "1c5598b3-70de-4ba5-a9ec-a57a61d3f494",
 "cells": [["",1479813209,147981320982,"d"],
   ["field1",1479813209,147981320982,"d"],
   ["field2",1479813209,147981320982,"d"],
   ["field3",1479813209,147981320982,"d"],
   ["field4",1479813209,147981320982,"d"],
   ["field5",1479813209,147981320982,"d"],
   ["field7:_","scopes:!",147981320981,"t",1479813209],
   
["field7:41444d494e5f4150504c49434154494f4e",1479813209,147981320982,"d"],
   
["field7:41444d494e5f434154414c4f475f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f434154414c4f475f5752495445",1479813209,147981320982,"d"],
   
["field7:41444d494e5f435245444954535f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f435245444954535f5452414e53414354494f4e535f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f435245444954535f5752495445",1479813209,147981320982,"d"],
   
["field7:41444d494e5f454e41424c494e475f5041434b",1479813209,147981320982,"d"],
   
["field7:41444d494e5f464f524249445f4d534953444e5f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f464f524249445f4d534953444e5f5752495445",1479813209,147981320982,"d"],
   
["field7:41444d494e5f484f54425f434845434b",1479813209,147981320982,"d"],
   
["field7:41444d494e5f484f54425f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f484f54425f5752495445",1479813209,147981320982,"d"],
   
["field7:41444d494e5f4f464645525f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f4f464645525f5752495445",1479813209,147981320982,"d"],
   
["field7:41444d494e5f5041434b5f50524943494e475f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f5041434b5f50524943494e475f5752495445",1479813209,147981320982,"d"],
   
["field7:41444d494e5f504152544e45525f50524f564953494f4e494e475f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f504152544e45525f50524f564953494f4e494e475f5752495445",1479813209,147981320982,"d"],
   
["field7:41444d494e5f50524f4d4f54494f4e5f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f50524f4d4f54494f4e5f5752495445",1479813209,147981320982,"d"],
   
["field7:41444d494e5f5245504f52545f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f53434f5045535f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f534d535f53454e44",1479813209,147981320982,"d"],
   
["field7:41444d494e5f555345525f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f555345525f5752495445",1479813209,147981320982,"d"],
   
["field7:41444d494e5f56414c49444154494f4e5f53454e44",1479813209,147981320982,"d"],
   
["field7:41444d494e5f564f4943455f44455354494e4154494f4e5f42554e444c45535f52454144",1479813209,147981320982,"d"],
   
["field7:415554485f41444d494e5f434c49454e545f52454144",1479813209,147981320982,"d"],
   
["field7:415554485f41444d494e5f434c49454e545f5752495445",1479813209,147981320982,"d"],
   
["field7:415554485f41444d494e5f47524f55505f52454144",1479813209,147981320982,"d"],
   
["field7:415554485f41444d494e5f47524f55505f5752495445",1479813209,147981320982,"d"],
   
["field7:415554485f41444d494e5f555345525f52454144",1479813209,147981320982,"d"],
   
["field7:415554485f41444d494e5f555345525f5752495445",1479813209,147981320982,"d"],
   ["field6",1479813209,147981320982,"d"],
   ["field8",1479813209,147981320982,"d"]]},
{code}

{code:title=sstabledump / DSE 5.0.4}
{
  "partition" : {
"key" : [ "1c5598b3-70de-4ba5-a9ec-a57a61d3f494" ],
"position" : 0
  },
  "rows" : [
{
  "type" : "row",
  "position" : 50,
  "liveness_info" : { "tstamp" : "2016-11-22T11:13:29.820Z" },
  "cells" : [
{ "name" : "field1", "deletion_info" : { "local_delete_time" : 
"2016-11-22T11:13:29Z" } },
{ "name" : "field2", "deletion_info" : { "local_delete_time" : 
"2016-11-22T11:13:29Z" } },
{ "name" : "field3", "deletion_info" : { "local_delete_time" : 
"2016-11-22T11:13:29Z" } },
{ "name" : "field4", "deletion_info" : { 

[jira] [Commented] (CASSANDRA-12620) Resurrected empty rows on update to 3.x

2016-12-05 Thread Brice Dutheil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15722517#comment-15722517
 ] 

Brice Dutheil commented on CASSANDRA-12620:
---

Hi,

We just had the same problem, upgrading from DSE 4.8.6 to DSE 5.0.4. The 
problem has been identified on 3 different environments.

Dead / tombstoned rows with an expired TTL show up after the upgrade. Only the 
primary key is non-null; all other fields are null. We only insert non-null 
values, always with a TTL, ranging from 60s to 1 day.

{code}
select * from ttl_entries where key='d7c10084-724c-4117-9927-b927a972203b';

key | field1 | field2 | field3 | field4 | field5 | field6 | field7 | field8
--+--+---+---+--+--+--++---
d7c10084-724c-4117-9927-b927a972203b | null | null | null | null | null | null 
| null | null
{code}


Here's an {{sstabledump}} of one of the impacted rows. Note that the 
{{liveness_info}} is missing the TTL information:

{code:title=sstabledump row d7c10084-724c-4117-9927-b927a972203b}
{
  "partition" : { 
"key" : [ "d7c10084-724c-4117-9927-b927a972203b" ], 
"position" : 6557684 
  }, 
  "rows" : [ 
{ 
  "type" : "row", 
  "position" : 6557734, 
  "liveness_info" : { "tstamp" : "2016-11-25T02:24:06.237Z" }, 
  "cells" : [ 
  { "name" : "field1", "deletion_info" : { "local_delete_time" : 
"2016-11-25T02:24:06Z" } }, 
  { "name" : "field2", "deletion_info" : { "local_delete_time" : 
"2016-11-25T02:24:06Z" } }, 
  { "name" : "field3", "deletion_info" : { "local_delete_time" : 
"2016-11-25T02:24:06Z" } }, 
  { "name" : "field4", "deletion_info" : { "local_delete_time" : 
"2016-11-25T02:24:06Z" } }, 
  { "name" : "field5", "deletion_info" : { "local_delete_time" : 
"2016-11-25T02:24:06Z" } }, 
  { "name" : "field6", "deletion_info" : { "local_delete_time" : 
"2016-11-25T02:24:06Z" } }, 
  { "name" : "field8", "deletion_info" : { "local_delete_time" : 
"2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "deletion_info" : { "marked_deleted" : 
"2016-11-25T02:24:06.236999Z", "local_delete_time" : "2016-11-25T02:24:06Z" } 
}, 
  { "name" : "field7", "path" : [ "VALUE_01" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_02" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_03" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_04" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_05" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_06" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_07" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_08" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_09" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_10" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_11" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_12" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_13" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_14" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_15" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_16" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_17" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_18" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_19" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_20" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_21" ], "deletion_info" : { 
"local_delete_time" : 

[jira] [Commented] (CASSANDRA-12620) Resurrected empty rows on update to 3.x

2016-12-05 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15722542#comment-15722542
 ] 

Benjamin Lerer commented on CASSANDRA-12620:


[~bric3] do you have a dump of the original data (before upgrade)?

> Resurrected empty rows on update to 3.x
> ---
>
> Key: CASSANDRA-12620
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12620
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Collin Sauve
>Assignee: Benjamin Lerer
>
> We had the below table on C* 2.x (dse 4.8.4, we assume was 2.1.15.1423 
> according to documentation), and were entering TTLs at write-time using the 
> DataStax C# Driver (using the POCO mapper).
> Upon upgrade to 3.0.8.1293 (DSE 5.0.2), we are seeing a lot of rows that:
> * should have been TTL'd
> * have no non-primary-key column data
> {code}
> CREATE TABLE applicationservices.aggregate_bucket_event_v3 (
> bucket_type int,
> bucket_id text,
> date timestamp,
> aggregate_id text,
> event_type int,
> event_id text,
> entities list>>,
> identity_sid text,
> PRIMARY KEY ((bucket_type, bucket_id), date, aggregate_id, event_type, 
> event_id)
> ) WITH CLUSTERING ORDER BY (date DESC, aggregate_id ASC, event_type ASC, 
> event_id ASC)
> AND bloom_filter_fp_chance = 0.1
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> {code}
> {code}
> {
> "partition" : {
>   "key" : [ "0", "26492" ],
>   "position" : 54397932
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 54397961,
> "clustering" : [ "2016-09-07 23:33Z", "3651664", "0", 
> "773665449947099136" ],
> "liveness_info" : { "tstamp" : "2016-09-07T23:34:09.758Z", "ttl" : 
> 172741, "expires_at" : "2016-09-09T23:33:10Z", "expired" : false },
> "cells" : [
>   { "name" : "identity_sid", "value" : "p_tw_zahidana" },
>   { "name" : "entities", "deletion_info" : { "marked_deleted" : 
> "2016-09-07T23:34:09.757999Z", "local_delete_time" : "2016-09-07T23:34:09Z" } 
> },
>   { "name" : "entities", "path" : [ 
> "936e17e1-7553-11e6-9b92-29a33b5827c3" ], "value" : 
> "0:https\\://www.youtube.com/watch?v=pwAJAssv6As" },
>   { "name" : "entities", "path" : [ 
> "936e17e2-7553-11e6-9b92-29a33b5827c3" ], "value" : "2:youtube" }
> ]
>   },
>   {
> "type" : "row",
>},
>   {
> "type" : "row",
> "position" : 54397177,
> "clustering" : [ "2016-08-17 10:00Z", "6387376", "0", 
> "765850666296225792" ],
> "liveness_info" : { "tstamp" : "2016-08-17T11:26:15.917001Z" },
> "cells" : [ ]
>   },
>   {
> "type" : "row",
> "position" : 54397227,
> "clustering" : [ "2016-08-17 07:00Z", "6387376", "0", 
> "765805367347601409" ],
> "liveness_info" : { "tstamp" : "2016-08-17T08:11:17.587Z" },
> "cells" : [ ]
>   },
>   {
> "type" : "row",
> "position" : 54397276,
> "clustering" : [ "2016-08-17 04:00Z", "6387376", "0", 
> "765760069858365441" ],
> "liveness_info" : { "tstamp" : "2016-08-17T05:58:11.228Z" },
> "cells" : [ ]
>   },
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12829) DELETE query with an empty IN clause can delete more than expected

2016-12-05 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15722593#comment-15722593
 ] 

Alex Petrov commented on CASSANDRA-12829:
-

I've simplified the code as you suggested and fixed the inconsistency with 
{{IN}}; it was related to the fact that we were only checking for {{> 1}} 
clustering. I've found the same problem with batch statements and with 
partition keys and fixed it there, too. Tests are refactored as well.

|[3.0|https://github.com/ifesdjeen/cassandra/tree/12829-3.0]|[dtest|http://cassci.datastax.com/job/ifesdjeen-12829-3.0-dtest/]|[utest|http://cassci.datastax.com/job/ifesdjeen-12829-3.0-testall/]|
|[3.x|https://github.com/ifesdjeen/cassandra/tree/12829-3.x]|[dtest|http://cassci.datastax.com/job/ifesdjeen-12829-3.x-dtest/]|[utest|http://cassci.datastax.com/job/ifesdjeen-12829-3.x-testall/]|
|[trunk|https://github.com/ifesdjeen/cassandra/tree/12829-trunk]|[dtest|http://cassci.datastax.com/job/ifesdjeen-12829-trunk-dtest/]|[utest|http://cassci.datastax.com/job/ifesdjeen-12829-trunk-testall/]|
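
To make the intended semantics concrete, here is a minimal, hypothetical sketch 
(not the actual patch; {{hasEmptyInRestriction}} and the list-of-lists shape are 
illustrative, not Cassandra's restriction classes) of the kind of guard the fix 
needs: a statement whose IN restriction carries an empty value list should 
short-circuit and delete nothing instead of silently dropping the restriction:

{code}
import java.util.Collections;
import java.util.List;

public class EmptyInGuardExample
{
    /**
     * Returns true when any IN restriction carries zero values; callers can
     * then skip building the deletion entirely rather than ignoring the
     * empty restriction.
     */
    static boolean hasEmptyInRestriction(List<List<Object>> inValuesPerColumn)
    {
        for (List<Object> values : inValuesPerColumn)
            if (values.isEmpty())
                return true;
        return false;
    }

    public static void main(String[] args)
    {
        // b in ('a', 'b', 'c') and id in () -> the empty list means: delete nothing
        List<List<Object>> restrictions =
                List.of(List.of("a", "b", "c"), Collections.emptyList());
        System.out.println(hasEmptyInRestriction(restrictions)); // prints: true
    }
}
{code}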

> DELETE query with an empty IN clause can delete more than expected
> --
>
> Key: CASSANDRA-12829
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12829
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Arch Linux x64, kernel 4.7.6, Cassandra 3.9 downloaded 
> from the website
>Reporter: Jason T. Bradshaw
>Assignee: Alex Petrov
>
> When deleting from a table with a certain structure and using an *in* clause 
> with an empty list, the *in* clause with an empty list can be ignored, 
> resulting in deleting more than is expected.
> *Setup:*
> {code}
> cqlsh> create table test (a text, b text, id uuid, primary key ((a, b), id));
> cqlsh> insert into test (a, b, id) values ('a', 'b', 
> ----);
> cqlsh> insert into test (a, b, id) values ('b', 'c', 
> ----);
> cqlsh> insert into test (a, b, id) values ('a', 'c', 
> ----);
> cqlsh> select * from test;
>  a | b | id
> ---+---+--
>  a | c | ----
>  b | c | ----
>  a | b | ----
> (3 rows)
> {code}
> *Expected:*
> {code}
> cqlsh> delete from test where a = 'a' and b in ('a', 'b', 'c') and id in ();
> cqlsh> select * from test;
>  a | b | id
> ---+---+--
>  a | c | ----
>  b | c | ----
>  a | b | ----
> (3 rows)
> {code}
> *Actual:*
> {code}
> cqlsh> delete from test where a = 'a' and b in ('a', 'b', 'c') and id in ();
> cqlsh> select * from test;
>  a | b | id
> ---+---+--
>  b | c | ----
> (1 rows)
> {code}
> Instead of deleting nothing, as the final empty *in* clause would imply, it 
> deletes everything that matches the first two clauses, acting as if 
> the following query had been issued instead:
> {code}
> cqlsh> delete from test where a = 'a' and b in ('a', 'b', 'c');
> {code}
> This seems to be related to the presence of a tuple clustering key, as I 
> could not reproduce it without one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12956) CL is not replayed on custom 2i exception

2016-12-05 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15722609#comment-15722609
 ] 

Alex Petrov commented on CASSANDRA-12956:
-

Great, thank you.
I've removed the duplicate {{await}}, thanks for catching that. The {{truncate}} 
check for {{false}} is done in [flush 
memtable|https://github.com/apache/cassandra/compare/trunk...ifesdjeen:12956-3.X#diff-98f5acb96aa6d684781936c141132e2aR1097].

I've applied the change to all branches and ran CI.

> CL is not replayed on custom 2i exception
> -
>
> Key: CASSANDRA-12956
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12956
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alex Petrov
>Assignee: Alex Petrov
>Priority: Critical
>
> If during the node shutdown / drain the custom (non-cf) 2i throws an 
> exception, CommitLog will get correctly preserved (segments won't get 
> discarded because segment tracking is correct). 
> However, when it gets replayed on node startup, we make a decision 
> whether or not to replay the commit log. The CL segment starts getting replayed, 
> since there are non-discarded segments, and during this process we check 
> whether every [individual 
> mutation|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java#L215]
>  in the commit log is already committed or not. Information about the sstables is 
> taken from the [live sstables on 
> disk|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java#L250-L256].
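
For readers following along, here is a minimal, hypothetical sketch of that 
replay decision (the {{Position}} and {{FlushedInterval}} types are stand-ins, 
not Cassandra's actual CommitLogReplayer/CommitLogPosition classes): a mutation 
read back from a segment is replayed only if its commit log position is not 
already covered by the positions persisted with flushed sstables:

{code}
import java.util.List;

public class ReplayDecisionExample
{
    record Position(long segmentId, int offset) implements Comparable<Position>
    {
        public int compareTo(Position o)
        {
            int c = Long.compare(segmentId, o.segmentId);
            return c != 0 ? c : Integer.compare(offset, o.offset);
        }
    }

    // commit log interval already covered by data flushed to an sstable
    record FlushedInterval(Position start, Position end)
    {
        boolean covers(Position p)
        {
            return start.compareTo(p) <= 0 && p.compareTo(end) <= 0;
        }
    }

    /** Replay the mutation unless some flushed interval already covers it. */
    static boolean shouldReplay(Position mutationPosition, List<FlushedInterval> flushed)
    {
        return flushed.stream().noneMatch(i -> i.covers(mutationPosition));
    }

    public static void main(String[] args)
    {
        List<FlushedInterval> flushed =
                List.of(new FlushedInterval(new Position(1, 0), new Position(1, 5000)));
        System.out.println(shouldReplay(new Position(1, 4096), flushed)); // false: skip
        System.out.println(shouldReplay(new Position(2, 128), flushed));  // true: replay
    }
}
{code}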



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-12620) Resurrected empty rows on update to 3.x

2016-12-05 Thread Brice Dutheil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15722612#comment-15722612
 ] 

Brice Dutheil edited comment on CASSANDRA-12620 at 12/5/16 4:00 PM:


[~blerer] We have rolled back an environment. I'm not sure I can upload the dump 
(maybe in private). But here's another primary key that is *expired* both before 
and after the upgrade:

For reference, 1479813209 is Tue, 22 Nov 2016 11:13:29 GMT.

{code:title=sstable2json / DSE 4.8.6 / Cassandra 2.1.13}
{"key": "1c5598b3-70de-4ba5-a9ec-a57a61d3f494",
 "cells": [["",1479813209,147981320982,"d"],
   ["field1",1479813209,147981320982,"d"],
   ["field2",1479813209,147981320982,"d"],
   ["field3",1479813209,147981320982,"d"],
   ["field4",1479813209,147981320982,"d"],
   ["field5",1479813209,147981320982,"d"],
   ["field7:_","scopes:!",147981320981,"t",1479813209],
   
["field7:41444d494e5f4150504c49434154494f4e",1479813209,147981320982,"d"],
   
["field7:41444d494e5f434154414c4f475f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f434154414c4f475f5752495445",1479813209,147981320982,"d"],
   
["field7:41444d494e5f435245444954535f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f435245444954535f5452414e53414354494f4e535f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f435245444954535f5752495445",1479813209,147981320982,"d"],
   
["field7:41444d494e5f454e41424c494e475f5041434b",1479813209,147981320982,"d"],
   
["field7:41444d494e5f464f524249445f4d534953444e5f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f464f524249445f4d534953444e5f5752495445",1479813209,147981320982,"d"],
   
["field7:41444d494e5f484f54425f434845434b",1479813209,147981320982,"d"],
   
["field7:41444d494e5f484f54425f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f484f54425f5752495445",1479813209,147981320982,"d"],
   
["field7:41444d494e5f4f464645525f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f4f464645525f5752495445",1479813209,147981320982,"d"],
   
["field7:41444d494e5f5041434b5f50524943494e475f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f5041434b5f50524943494e475f5752495445",1479813209,147981320982,"d"],
   
["field7:41444d494e5f504152544e45525f50524f564953494f4e494e475f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f504152544e45525f50524f564953494f4e494e475f5752495445",1479813209,147981320982,"d"],
   
["field7:41444d494e5f50524f4d4f54494f4e5f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f50524f4d4f54494f4e5f5752495445",1479813209,147981320982,"d"],
   
["field7:41444d494e5f5245504f52545f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f53434f5045535f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f534d535f53454e44",1479813209,147981320982,"d"],
   
["field7:41444d494e5f555345525f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f555345525f5752495445",1479813209,147981320982,"d"],
   
["field7:41444d494e5f56414c49444154494f4e5f53454e44",1479813209,147981320982,"d"],
   
["field7:41444d494e5f564f4943455f44455354494e4154494f4e5f42554e444c45535f52454144",1479813209,147981320982,"d"],
   
["field7:415554485f41444d494e5f434c49454e545f52454144",1479813209,147981320982,"d"],
   
["field7:415554485f41444d494e5f434c49454e545f5752495445",1479813209,147981320982,"d"],
   
["field7:415554485f41444d494e5f47524f55505f52454144",1479813209,147981320982,"d"],
   
["field7:415554485f41444d494e5f47524f55505f5752495445",1479813209,147981320982,"d"],
   
["field7:415554485f41444d494e5f555345525f52454144",1479813209,147981320982,"d"],
   
["field7:415554485f41444d494e5f555345525f5752495445",1479813209,147981320982,"d"],
   ["field6",1479813209,147981320982,"d"],
   ["field8",1479813209,147981320982,"d"]]},
{code}

{code:title=sstabledump / DSE 5.0.4 / Cassandra 3.0.10}
{
  "partition" : {
"key" : [ "1c5598b3-70de-4ba5-a9ec-a57a61d3f494" ],
"position" : 0
  },
  "rows" : [
{
  "type" : "row",
  "position" : 50,
  "liveness_info" : { "tstamp" : "2016-11-22T11:13:29.820Z" },
  "cells" : [
{ "name" : "field1", "deletion_info" : { "local_delete_time" : 
"2016-11-22T11:13:29Z" } },
{ "name" : "field2", "deletion_info" : { "local_delete_time" : 
"2016-11-22T11:13:29Z" } },
{ "name" : "field3", "deletion_info" : { "local_delete_time" : 

[jira] [Comment Edited] (CASSANDRA-12620) Resurrected empty rows on update to 3.x

2016-12-05 Thread Brice Dutheil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15722517#comment-15722517
 ] 

Brice Dutheil edited comment on CASSANDRA-12620 at 12/5/16 4:01 PM:


Hi,

We just had the same problem, upgrading from Cassandra 2.1.13 / DSE 4.8.6 to 
Cassandra 3.0.10 / DSE 5.0.4. The problem has been identified on 3 different 
environments.

Dead / tombstoned rows with an expired TTL show up after the upgrade. Only the 
primary key is non-null; all other fields are null. We only insert non-null 
values, always with a TTL, ranging from 60s to 1 day.

{code}
select * from ttl_entries where key='d7c10084-724c-4117-9927-b927a972203b';

key | field1 | field2 | field3 | field4 | field5 | field6 | field7 | field8
--+--+---+---+--+--+--++---
d7c10084-724c-4117-9927-b927a972203b | null | null | null | null | null | null 
| null | null
{code}


Here's an {{sstabledump}} of one of the impacted rows. Note that the 
{{liveness_info}} is missing the TTL information:

{code:title=sstabledump row d7c10084-724c-4117-9927-b927a972203b}
{
  "partition" : { 
"key" : [ "d7c10084-724c-4117-9927-b927a972203b" ], 
"position" : 6557684 
  }, 
  "rows" : [ 
{ 
  "type" : "row", 
  "position" : 6557734, 
  "liveness_info" : { "tstamp" : "2016-11-25T02:24:06.237Z" }, 
  "cells" : [ 
  { "name" : "field1", "deletion_info" : { "local_delete_time" : 
"2016-11-25T02:24:06Z" } }, 
  { "name" : "field2", "deletion_info" : { "local_delete_time" : 
"2016-11-25T02:24:06Z" } }, 
  { "name" : "field3", "deletion_info" : { "local_delete_time" : 
"2016-11-25T02:24:06Z" } }, 
  { "name" : "field4", "deletion_info" : { "local_delete_time" : 
"2016-11-25T02:24:06Z" } }, 
  { "name" : "field5", "deletion_info" : { "local_delete_time" : 
"2016-11-25T02:24:06Z" } }, 
  { "name" : "field6", "deletion_info" : { "local_delete_time" : 
"2016-11-25T02:24:06Z" } }, 
  { "name" : "field8", "deletion_info" : { "local_delete_time" : 
"2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "deletion_info" : { "marked_deleted" : 
"2016-11-25T02:24:06.236999Z", "local_delete_time" : "2016-11-25T02:24:06Z" } 
}, 
  { "name" : "field7", "path" : [ "VALUE_01" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_02" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_03" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_04" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_05" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_06" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_07" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_08" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_09" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_10" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_11" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_12" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_13" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_14" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_15" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_16" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_17" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_18" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_19" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  { "name" : "field7", "path" : [ "VALUE_20" ], "deletion_info" : { 
"local_delete_time" : "2016-11-25T02:24:06Z" } }, 
  

[jira] [Comment Edited] (CASSANDRA-12620) Resurrected empty rows on update to 3.x

2016-12-05 Thread Brice Dutheil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15722612#comment-15722612
 ] 

Brice Dutheil edited comment on CASSANDRA-12620 at 12/5/16 4:11 PM:


[~blerer] We have rolled back an environment. I'm not sure I can upload the dump 
(maybe in private). But here's another primary key that is *expired* both before 
and after the upgrade:

For reference, 1479813209 is Tue, 22 Nov 2016 11:13:29 GMT.

{code:title=sstable2json / DSE 4.8.6 / Cassandra 2.1.13}
{"key": "1c5598b3-70de-4ba5-a9ec-a57a61d3f494",
 "cells": [["",1479813209,147981320982,"d"],
   ["field1",1479813209,147981320982,"d"],
   ["field2",1479813209,147981320982,"d"],
   ["field3",1479813209,147981320982,"d"],
   ["field4",1479813209,147981320982,"d"],
   ["field5",1479813209,147981320982,"d"],
   ["field7:_","scopes:!",147981320981,"t",1479813209],
   
["field7:41444d494e5f4150504c49434154494f4e",1479813209,147981320982,"d"],
   
["field7:41444d494e5f434154414c4f475f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f434154414c4f475f5752495445",1479813209,147981320982,"d"],
   
["field7:41444d494e5f435245444954535f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f435245444954535f5452414e53414354494f4e535f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f435245444954535f5752495445",1479813209,147981320982,"d"],
   
["field7:41444d494e5f454e41424c494e475f5041434b",1479813209,147981320982,"d"],
   
["field7:41444d494e5f464f524249445f4d534953444e5f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f464f524249445f4d534953444e5f5752495445",1479813209,147981320982,"d"],
   
["field7:41444d494e5f484f54425f434845434b",1479813209,147981320982,"d"],
   
["field7:41444d494e5f484f54425f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f484f54425f5752495445",1479813209,147981320982,"d"],
   
["field7:41444d494e5f4f464645525f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f4f464645525f5752495445",1479813209,147981320982,"d"],
   
["field7:41444d494e5f5041434b5f50524943494e475f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f5041434b5f50524943494e475f5752495445",1479813209,147981320982,"d"],
   
["field7:41444d494e5f504152544e45525f50524f564953494f4e494e475f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f504152544e45525f50524f564953494f4e494e475f5752495445",1479813209,147981320982,"d"],
   
["field7:41444d494e5f50524f4d4f54494f4e5f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f50524f4d4f54494f4e5f5752495445",1479813209,147981320982,"d"],
   
["field7:41444d494e5f5245504f52545f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f53434f5045535f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f534d535f53454e44",1479813209,147981320982,"d"],
   
["field7:41444d494e5f555345525f52454144",1479813209,147981320982,"d"],
   
["field7:41444d494e5f555345525f5752495445",1479813209,147981320982,"d"],
   
["field7:41444d494e5f56414c49444154494f4e5f53454e44",1479813209,147981320982,"d"],
   
["field7:41444d494e5f564f4943455f44455354494e4154494f4e5f42554e444c45535f52454144",1479813209,147981320982,"d"],
   
["field7:415554485f41444d494e5f434c49454e545f52454144",1479813209,147981320982,"d"],
   
["field7:415554485f41444d494e5f434c49454e545f5752495445",1479813209,147981320982,"d"],
   
["field7:415554485f41444d494e5f47524f55505f52454144",1479813209,147981320982,"d"],
   
["field7:415554485f41444d494e5f47524f55505f5752495445",1479813209,147981320982,"d"],
   
["field7:415554485f41444d494e5f555345525f52454144",1479813209,147981320982,"d"],
   
["field7:415554485f41444d494e5f555345525f5752495445",1479813209,147981320982,"d"],
   ["field6",1479813209,147981320982,"d"],
   ["field8",1479813209,147981320982,"d"]]},
{code}

{code:title=sstabledump / DSE 5.0.4 / Cassandra 3.0.10}
{
  "partition" : {
"key" : [ "1c5598b3-70de-4ba5-a9ec-a57a61d3f494" ],
"position" : 0
  },
  "rows" : [
{
  "type" : "row",
  "position" : 50,
  "liveness_info" : { "tstamp" : "2016-11-22T11:13:29.820Z" },
  "cells" : [
{ "name" : "field1", "deletion_info" : { "local_delete_time" : 
"2016-11-22T11:13:29Z" } },
{ "name" : "field2", "deletion_info" : { "local_delete_time" : 
"2016-11-22T11:13:29Z" } },
{ "name" : "field3", "deletion_info" : { "local_delete_time" : 

[jira] [Commented] (CASSANDRA-12620) Resurrected empty rows on update to 3.x

2016-12-05 Thread Brice Dutheil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15722709#comment-15722709
 ] 

Brice Dutheil commented on CASSANDRA-12620:
---

Not sure, but the liveness info looks wrong; it seems the conversion from the 
_ka_ format to the new _mc_ format _forgot_ to report the expired / deleted row.

If I read it correctly, {{"cells": [["",1479813209,147981320982,"d"],}} is 
the row marker, and it shows the row has the status deleted. However the 
upgraded entry doesn't show this deleted status ({{"liveness_info" : { "tstamp" : 
"2016-11-22T11:13:29.820Z" },}}).



> Resurrected empty rows on update to 3.x
> ---
>
> Key: CASSANDRA-12620
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12620
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Collin Sauve
>Assignee: Benjamin Lerer
>
> We had the below table on C* 2.x (dse 4.8.4, we assume was 2.1.15.1423 
> according to documentation), and were entering TTLs at write-time using the 
> DataStax C# Driver (using the POCO mapper).
> Upon upgrade to 3.0.8.1293 (DSE 5.0.2), we are seeing a lot of rows that:
> * should have been TTL'd
> * have no non-primary-key column data
> {code}
> CREATE TABLE applicationservices.aggregate_bucket_event_v3 (
> bucket_type int,
> bucket_id text,
> date timestamp,
> aggregate_id text,
> event_type int,
> event_id text,
> entities list>>,
> identity_sid text,
> PRIMARY KEY ((bucket_type, bucket_id), date, aggregate_id, event_type, 
> event_id)
> ) WITH CLUSTERING ORDER BY (date DESC, aggregate_id ASC, event_type ASC, 
> event_id ASC)
> AND bloom_filter_fp_chance = 0.1
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> {code}
> {code}
> {
> "partition" : {
>   "key" : [ "0", "26492" ],
>   "position" : 54397932
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 54397961,
> "clustering" : [ "2016-09-07 23:33Z", "3651664", "0", 
> "773665449947099136" ],
> "liveness_info" : { "tstamp" : "2016-09-07T23:34:09.758Z", "ttl" : 
> 172741, "expires_at" : "2016-09-09T23:33:10Z", "expired" : false },
> "cells" : [
>   { "name" : "identity_sid", "value" : "p_tw_zahidana" },
>   { "name" : "entities", "deletion_info" : { "marked_deleted" : 
> "2016-09-07T23:34:09.757999Z", "local_delete_time" : "2016-09-07T23:34:09Z" } 
> },
>   { "name" : "entities", "path" : [ 
> "936e17e1-7553-11e6-9b92-29a33b5827c3" ], "value" : 
> "0:https\\://www.youtube.com/watch?v=pwAJAssv6As" },
>   { "name" : "entities", "path" : [ 
> "936e17e2-7553-11e6-9b92-29a33b5827c3" ], "value" : "2:youtube" }
> ]
>   },
>   {
> "type" : "row",
>},
>   {
> "type" : "row",
> "position" : 54397177,
> "clustering" : [ "2016-08-17 10:00Z", "6387376", "0", 
> "765850666296225792" ],
> "liveness_info" : { "tstamp" : "2016-08-17T11:26:15.917001Z" },
> "cells" : [ ]
>   },
>   {
> "type" : "row",
> "position" : 54397227,
> "clustering" : [ "2016-08-17 07:00Z", "6387376", "0", 
> "765805367347601409" ],
> "liveness_info" : { "tstamp" : "2016-08-17T08:11:17.587Z" },
> "cells" : [ ]
>   },
>   {
> "type" : "row",
> "position" : 54397276,
> "clustering" : [ "2016-08-17 04:00Z", "6387376", "0", 
> "765760069858365441" ],
> "liveness_info" : { "tstamp" : "2016-08-17T05:58:11.228Z" },
> "cells" : [ ]
>   },
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-12620) Resurrected empty rows on update to 3.x

2016-12-05 Thread Brice Dutheil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15722709#comment-15722709
 ] 

Brice Dutheil edited comment on CASSANDRA-12620 at 12/5/16 4:46 PM:


Not sure, but the liveness info looks wrong; it seems the conversion from the 
_ka_ format to the new _mc_ format _forgot_ to report the expired / deleted row.

If I read it correctly, {{"cells": \[\["",1479813209,147981320982,"d"],}} is 
the row marker, and it shows the row has the status deleted. However the 
upgraded entry doesn't show this deleted status ({{"liveness_info" : \{ "tstamp" 
: "2016-11-22T11:13:29.820Z" \},}}).




was (Author: bric3):
Not sure but the liveness info looks wrong, it seems the conversion for the the 
_ka_ format to the new _mc_ format _forgot_ to report the expired / delete row.

If I read it correctly {{"cells": [["",1479813209,147981320982,"d"],}} is 
the row marker, and it shows the row has the status deleted. However the 
upgraded entry don't show this deleted status ({{"liveness_info" : { "tstamp" : 
"2016-11-22T11:13:29.820Z" },}}).



> Resurrected empty rows on update to 3.x
> ---
>
> Key: CASSANDRA-12620
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12620
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Collin Sauve
>Assignee: Benjamin Lerer
>
> We had the below table on C* 2.x (dse 4.8.4, we assume was 2.1.15.1423 
> according to documentation), and were entering TTLs at write-time using the 
> DataStax C# Driver (using the POCO mapper).
> Upon upgrade to 3.0.8.1293 (DSE 5.0.2), we are seeing a lot of rows that:
> * should have been TTL'd
> * have no non-primary-key column data
> {code}
> CREATE TABLE applicationservices.aggregate_bucket_event_v3 (
> bucket_type int,
> bucket_id text,
> date timestamp,
> aggregate_id text,
> event_type int,
> event_id text,
> entities list>>,
> identity_sid text,
> PRIMARY KEY ((bucket_type, bucket_id), date, aggregate_id, event_type, 
> event_id)
> ) WITH CLUSTERING ORDER BY (date DESC, aggregate_id ASC, event_type ASC, 
> event_id ASC)
> AND bloom_filter_fp_chance = 0.1
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> {code}
> {code}
> {
> "partition" : {
>   "key" : [ "0", "26492" ],
>   "position" : 54397932
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 54397961,
> "clustering" : [ "2016-09-07 23:33Z", "3651664", "0", 
> "773665449947099136" ],
> "liveness_info" : { "tstamp" : "2016-09-07T23:34:09.758Z", "ttl" : 
> 172741, "expires_at" : "2016-09-09T23:33:10Z", "expired" : false },
> "cells" : [
>   { "name" : "identity_sid", "value" : "p_tw_zahidana" },
>   { "name" : "entities", "deletion_info" : { "marked_deleted" : 
> "2016-09-07T23:34:09.757999Z", "local_delete_time" : "2016-09-07T23:34:09Z" } 
> },
>   { "name" : "entities", "path" : [ 
> "936e17e1-7553-11e6-9b92-29a33b5827c3" ], "value" : 
> "0:https\\://www.youtube.com/watch?v=pwAJAssv6As" },
>   { "name" : "entities", "path" : [ 
> "936e17e2-7553-11e6-9b92-29a33b5827c3" ], "value" : "2:youtube" }
> ]
>   },
>   {
> "type" : "row",
>},
>   {
> "type" : "row",
> "position" : 54397177,
> "clustering" : [ "2016-08-17 10:00Z", "6387376", "0", 
> "765850666296225792" ],
> "liveness_info" : { "tstamp" : "2016-08-17T11:26:15.917001Z" },
> "cells" : [ ]
>   },
>   {
> "type" : "row",
> "position" : 54397227,
> "clustering" : [ "2016-08-17 07:00Z", "6387376", "0", 
> "765805367347601409" ],
> "liveness_info" : { "tstamp" : "2016-08-17T08:11:17.587Z" },
> "cells" : [ ]
>   },
>   {
> "type" : "row",
> "position" : 54397276,
> "clustering" : [ "2016-08-17 04:00Z", "6387376", "0", 
> "765760069858365441" ],
> "liveness_info" : { "tstamp" : "2016-08-17T05:58:11.228Z" },
> "cells" : [ ]
>   },
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12998) Remove a silly hint.isLive() check in HintsService.write()

2016-12-05 Thread Aleksey Yeschenko (JIRA)
Aleksey Yeschenko created CASSANDRA-12998:
-

 Summary: Remove a silly hint.isLive() check in HintsService.write()
 Key: CASSANDRA-12998
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12998
 Project: Cassandra
  Issue Type: Bug
  Components: Local Write-Read Paths
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
Priority: Trivial
 Fix For: 3.0.x, 3.x


Somehow this made it into the final version of the codebase; the check can 
practically never return false. The {{bufferPool.write()}} call should be 
unconditional.

{code}
public void write(Iterable<UUID> hostIds, Hint hint)
{
if (isShutDown)
throw new IllegalStateException("HintsService is shut down and 
can't accept new hints");

// we have to make sure that the HintsStore instances get properly 
initialized - otherwise dispatch will not trigger
catalog.maybeLoadStores(hostIds);

if (hint.isLive())
bufferPool.write(hostIds, hint);

StorageMetrics.totalHints.inc(size(hostIds));
}
{code}
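
For clarity, the method as the ticket proposes it would look roughly like this 
(a hedged sketch derived from the snippet above, not the committed patch; the 
{{isShutDown}}, {{catalog}} and {{bufferPool}} fields are as quoted there):

{code}
public void write(Iterable<UUID> hostIds, Hint hint)
{
    if (isShutDown)
        throw new IllegalStateException("HintsService is shut down and can't accept new hints");

    // the HintsStore instances must get properly initialized - otherwise dispatch will not trigger
    catalog.maybeLoadStores(hostIds);

    // unconditional: every accepted hint is handed to the buffer pool
    bufferPool.write(hostIds, hint);

    StorageMetrics.totalHints.inc(size(hostIds));
}
{code}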



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12998) Remove a silly hint.isLive() check in HintsService.write()

2016-12-05 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15722745#comment-15722745
 ] 

Aleksey Yeschenko commented on CASSANDRA-12998:
---

I think it was a misguided attempt at optimisation. Anyways,

||branch||testall||dtest||
|[12998-3.0|https://github.com/iamaleksey/cassandra/tree/12998-3.0]|[testall|http://cassci.datastax.com/view/Dev/view/iamaleksey/job/iamaleksey-12998-3.0-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/iamaleksey/job/iamaleksey-12998-3.0-dtest]|

> Remove a silly hint.isLive() check in HintsService.write()
> --
>
> Key: CASSANDRA-12998
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12998
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Trivial
> Fix For: 3.0.x, 3.x
>
>
> Somehow this made it into the final version of the codebase; the check can 
> practically never return false. The {{bufferPool.write()}} call should be 
> unconditional.
> {code}
> public void write(Iterable<UUID> hostIds, Hint hint)
> {
> if (isShutDown)
> throw new IllegalStateException("HintsService is shut down and 
> can't accept new hints");
> // we have to make sure that the HintsStore instances get properly 
> initialized - otherwise dispatch will not trigger
> catalog.maybeLoadStores(hostIds);
> if (hint.isLive())
> bufferPool.write(hostIds, hint);
> StorageMetrics.totalHints.inc(size(hostIds));
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12986) dtest failure in upgrade_internal_auth_test.TestAuthUpgrade.test_upgrade_legacy_table

2016-12-05 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12986:

Assignee: (was: DS Test Eng)

> dtest failure in 
> upgrade_internal_auth_test.TestAuthUpgrade.test_upgrade_legacy_table
> -
>
> Key: CASSANDRA-12986
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12986
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>  Labels: dtest, test-failure
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1437/testReport/upgrade_internal_auth_test/TestAuthUpgrade/test_upgrade_legacy_table
> {code}
> Standard Output
> Unexpected error in node1 log, error: 
> ERROR [main] 2016-12-01 03:08:30,985 CassandraDaemon.java:724 - Detected 
> unreadable sstables 
> 

[jira] [Updated] (CASSANDRA-12987) dtest failure in paxos_tests.TestPaxos.contention_test_many_threads

2016-12-05 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12987:

Assignee: (was: DS Test Eng)

> dtest failure in paxos_tests.TestPaxos.contention_test_many_threads
> ---
>
> Key: CASSANDRA-12987
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12987
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>  Labels: dtest, test-failure
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1437/testReport/paxos_tests/TestPaxos/contention_test_many_threads
> {code}
> Error Message
> value=299, errors=0, retries=25559
> {code}{code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/paxos_tests.py", line 88, in 
> contention_test_many_threads
> self._contention_test(300, 1)
>   File "/home/automaton/cassandra-dtest/paxos_tests.py", line 192, in 
> _contention_test
> self.assertTrue((value == threads * iterations) and (errors == 0), 
> "value={}, errors={}, retries={}".format(value, errors, retries))
>   File "/usr/lib/python2.7/unittest/case.py", line 422, in assertTrue
> raise self.failureException(msg)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12987) dtest failure in paxos_tests.TestPaxos.contention_test_many_threads

2016-12-05 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12987:

Issue Type: Bug  (was: Test)

> dtest failure in paxos_tests.TestPaxos.contention_test_many_threads
> ---
>
> Key: CASSANDRA-12987
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12987
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest, test-failure
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1437/testReport/paxos_tests/TestPaxos/contention_test_many_threads
> {code}
> Error Message
> value=299, errors=0, retries=25559
> {code}{code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/paxos_tests.py", line 88, in 
> contention_test_many_threads
> self._contention_test(300, 1)
>   File "/home/automaton/cassandra-dtest/paxos_tests.py", line 192, in 
> _contention_test
> self.assertTrue((value == threads * iterations) and (errors == 0), 
> "value={}, errors={}, retries={}".format(value, errors, retries))
>   File "/usr/lib/python2.7/unittest/case.py", line 422, in assertTrue
> raise self.failureException(msg)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

