[jira] [Comment Edited] (CASSANDRA-10130) Node failure during 2i update after streaming can have incomplete 2i when restarted

2017-06-05 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038189#comment-16038189
 ] 

Paulo Motta edited comment on CASSANDRA-10130 at 6/6/17 5:05 AM:
-----------------------------------------------------------------

Sorry for being picky here, but while we are fixing the original limitation we 
are introducing a new one: if there's ever a non-fatal index build failure, a 
successful full index rebuild will not mark the index as built until the node 
is restarted and the index is unnecessarily rebuilt.

We can probably lift this limitation fairly simply by marking the index as 
built (and clearing the pending counters) if there was no other index build 
submission since the start of the full rebuild, even if there are (likely 
failed) pending builds. We also should probably log a warning when there is an 
index build failure, instructing the user to run a full index rebuild to fix it.
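The proposed check can be sketched as follows. This is an illustrative Python model only (Cassandra itself is Java, and the class and method names here are hypothetical, not Cassandra's actual API): a full rebuild snapshots the submission counter when it starts, and on success the index is marked as built only if no further build was submitted in the meantime.

```python
# Hypothetical sketch of the proposed "mark built" rule; names are
# illustrative, not taken from the Cassandra codebase.
class IndexBuildTracker:
    def __init__(self):
        self.submissions = 0       # total index build submissions so far
        self.pending_failures = 0  # pending (likely failed) builds

    def submit_build(self):
        self.submissions += 1

    def start_full_rebuild(self):
        # The full rebuild is itself a submission; snapshot the counter.
        self.submit_build()
        return self.submissions

    def finish_full_rebuild(self, snapshot):
        # Mark the index built (and clear the pending counters) only if
        # nothing else was submitted after the rebuild started, even if
        # there are pending failed builds from before.
        if self.submissions == snapshot:
            self.pending_failures = 0
            return True   # safe to mark the index as built
        return False      # a newer submission may still be in flight
```

Under this rule, a successful full rebuild clears earlier non-fatal failures without requiring a node restart.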


was (Author: pauloricardomg):
Sorry for being picky here, but while we are fixing the original limitation we 
are introducing a new one: if there's ever a non-fatal index build failure, a 
successful full index rebuild will not mark the index as built until the node 
is restarted and the index is unnecessarily rebuilt.

We can probably lift this limitation fairly simply by marking the index as 
built (and clearing the pending counters) if there was no other index build 
submission since the start of the full rebuild, even if there are (likely 
failed) pending builds. We should probably log a warning when there is an index 
build failure, instructing the user to run a full index rebuild to fix it.

> Node failure during 2i update after streaming can have incomplete 2i when 
> restarted
> ---
>
> Key: CASSANDRA-10130
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10130
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Yuki Morishita
>Assignee: Andrés de la Peña
>Priority: Minor
>
> Since the MV/2i update happens after SSTables are received, a node failure 
> during the MV/2i update can leave the received SSTables live when the node 
> restarts, while the MV/2i are only partially up to date.
> We can add some kind of tracking mechanism to automatically rebuild at 
> startup, or at least warn the user when the node restarts.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-10130) Node failure during 2i update after streaming can have incomplete 2i when restarted

2017-06-05 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038189#comment-16038189
 ] 

Paulo Motta commented on CASSANDRA-10130:
-----------------------------------------------------------------

Sorry for being picky here, but while we are fixing the original limitation we 
are introducing a new one: if there's ever a non-fatal index build failure, a 
successful full index rebuild will not mark the index as built until the node 
is restarted and the index is unnecessarily rebuilt.

We can probably lift this limitation fairly simply by marking the index as 
built (and clearing the pending counters) if there was no other index build 
submission since the start of the full rebuild, even if there are (likely 
failed) pending builds. We should probably log a warning when there is an index 
build failure, instructing the user to run a full index rebuild to fix it.

> Node failure during 2i update after streaming can have incomplete 2i when 
> restarted
> ---
>
> Key: CASSANDRA-10130
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10130
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Yuki Morishita
>Assignee: Andrés de la Peña
>Priority: Minor
>
> Since the MV/2i update happens after SSTables are received, a node failure 
> during the MV/2i update can leave the received SSTables live when the node 
> restarts, while the MV/2i are only partially up to date.
> We can add some kind of tracking mechanism to automatically rebuild at 
> startup, or at least warn the user when the node restarts.






[jira] [Commented] (CASSANDRA-13078) Increase unittest test.runners to speed up the test

2017-06-05 Thread Jay Zhuang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038130#comment-16038130
 ] 

Jay Zhuang commented on CASSANDRA-13078:


Here is the unit test running time with 1 to 20 
[{{test.runners}}|https://github.com/apache/cassandra/blob/cassandra-3.0/build.xml#L62]:
!unittest_time.png!
With 6 runners it could reduce the time from about 40 minutes to 10 minutes.
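The proposed heuristic from the ticket ({{runners = num_cores / 4}}) can be sketched as a small helper. This is an illustrative Python sketch, not the Ant mechanism the ticket links to; the function name and the divisor default are assumptions taken from the proposal.

```python
import os

# Sketch of the proposed heuristic: derive test.runners from the CPU
# count instead of the hard-coded default of 1. The divisor of 4 is the
# value suggested in the ticket; it may need tuning per machine.
def proposed_runners(num_cores=None, divisor=4):
    cores = num_cores if num_cores is not None else (os.cpu_count() or 1)
    # Never go below one runner, even on small machines.
    return max(1, cores // divisor)
```

For example, a 24-core server would get 6 runners under this formula, matching the sweet spot observed in the attached timing chart.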

> Increase unittest test.runners to speed up the test
> ---
>
> Key: CASSANDRA-13078
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13078
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Testing
>Reporter: Jay Zhuang
> Attachments: unittest_time.png
>
>
> The unit tests take a very long time to run (about 40 minutes on a MacBook). By 
> overriding 
> [{{test.runners}}|https://github.com/apache/cassandra/blob/cassandra-3.0/build.xml#L62],
>  the tests could be sped up considerably, especially on powerful servers. Currently, it's 
> set to 1 by default. I would like to propose setting {{test.runners}} 
> [dynamically by the number of 
> CPUs|http://www.iliachemodanov.ru/en/blog-en/15-tools/ant/48-get-number-of-processors-in-ant-en].
>  For example, {{runners = num_cores / 4}}. What do you guys think?






[jira] [Updated] (CASSANDRA-13078) Increase unittest test.runners to speed up the test

2017-06-05 Thread Jay Zhuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Zhuang updated CASSANDRA-13078:
-----------------------------------
Attachment: unittest_time.png

> Increase unittest test.runners to speed up the test
> ---
>
> Key: CASSANDRA-13078
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13078
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Testing
>Reporter: Jay Zhuang
> Attachments: unittest_time.png
>
>
> The unit tests take a very long time to run (about 40 minutes on a MacBook). By 
> overriding 
> [{{test.runners}}|https://github.com/apache/cassandra/blob/cassandra-3.0/build.xml#L62],
>  the tests could be sped up considerably, especially on powerful servers. Currently, it's 
> set to 1 by default. I would like to propose setting {{test.runners}} 
> [dynamically by the number of 
> CPUs|http://www.iliachemodanov.ru/en/blog-en/15-tools/ant/48-get-number-of-processors-in-ant-en].
>  For example, {{runners = num_cores / 4}}. What do you guys think?






[jira] [Commented] (CASSANDRA-12744) Randomness of stress distributions is not good

2017-06-05 Thread Ben Slater (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038091#comment-16038091
 ] 

Ben Slater commented on CASSANDRA-12744:


Looks like the test failures are unrelated? Are we OK to commit?

> Randomness of stress distributions is not good
> --
>
> Key: CASSANDRA-12744
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12744
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: T Jake Luciani
>Assignee: Ben Slater
>Priority: Minor
>  Labels: stress
> Fix For: 4.0
>
> Attachments: CASSANDRA_12744_SeedManager_changes-trunk.patch
>
>
> The randomness of our distributions is pretty bad.  We are using 
> JDKRandomGenerator(), but when testing uniform(1..3) we see that for 100 
> iterations it only ever outputs 3.  If you bump it to 10k iterations it hits 
> all 3 values. 
> I made a change to just use the default commons-math random generator and now 
> see all 3 values for n=10
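The kind of sanity check described in the report can be sketched as follows. This is an illustrative Python stand-in (the stress tool itself is Java and uses commons-math generators); it draws n samples from uniform(1..3) and verifies that every value in the range actually appears.

```python
import random

# Coverage check for a small uniform distribution: with a healthy RNG,
# even a modest sample should hit every value in the range. A generator
# that returns only one value, as described in the report, fails this.
def covers_all_values(n, lo=1, hi=3, seed=42):
    rng = random.Random(seed)  # fixed seed keeps the check deterministic
    seen = {rng.randint(lo, hi) for _ in range(n)}
    return seen == set(range(lo, hi + 1))
```

A check like this makes the regression observable without eyeballing stress output.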






[jira] [Updated] (CASSANDRA-13078) Increase unittest test.runners to speed up the test

2017-06-05 Thread Jay Zhuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Zhuang updated CASSANDRA-13078:
-----------------------------------
Description: The unit tests take a very long time to run (about 40 minutes on 
a MacBook). By overriding 
[{{test.runners}}|https://github.com/apache/cassandra/blob/cassandra-3.0/build.xml#L62],
 the tests could be sped up considerably, especially on powerful servers. Currently, it's 
set to 1 by default. I would like to propose setting {{test.runners}} 
[dynamically by the number of 
CPUs|http://www.iliachemodanov.ru/en/blog-en/15-tools/ant/48-get-number-of-processors-in-ant-en].
 For example, {{runners = num_cores / 4}}. What do you guys think?  (was: The 
unit tests take a very long time to run (about 50 minutes on a MacBook). By 
overriding 
[{{test.runners}}|https://github.com/apache/cassandra/blob/cassandra-3.0/build.xml#L62],
 the tests could be sped up considerably, especially on powerful servers. Currently, it's 
set to 1 by default. I would like to propose setting {{test.runners}} 
[dynamically by the number of 
CPUs|http://www.iliachemodanov.ru/en/blog-en/15-tools/ant/48-get-number-of-processors-in-ant-en].
 For example, {{runners = num_cores / 4}}. What do you guys think?)

> Increase unittest test.runners to speed up the test
> ---
>
> Key: CASSANDRA-13078
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13078
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Testing
>Reporter: Jay Zhuang
>
> The unit tests take a very long time to run (about 40 minutes on a MacBook). By 
> overriding 
> [{{test.runners}}|https://github.com/apache/cassandra/blob/cassandra-3.0/build.xml#L62],
>  the tests could be sped up considerably, especially on powerful servers. Currently, it's 
> set to 1 by default. I would like to propose setting {{test.runners}} 
> [dynamically by the number of 
> CPUs|http://www.iliachemodanov.ru/en/blog-en/15-tools/ant/48-get-number-of-processors-in-ant-en].
>  For example, {{runners = num_cores / 4}}. What do you guys think?






[jira] [Updated] (CASSANDRA-13209) test failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_bulk_round_trip_blogposts_with_max_connections

2017-06-05 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-13209:
---------------------------------
   Resolution: Fixed
Fix Version/s: 3.11.0
   3.0.14
   2.2.10
   Status: Resolved  (was: Ready to Commit)

Committed to 2.2 as 
[5807ec|https://github.com/apache/cassandra/commit/5807eca8bbb30091b632e031441939c763b3e054]
 and merged upwards.

Thank you for the patch!

> test failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_bulk_round_trip_blogposts_with_max_connections
> --
>
> Key: CASSANDRA-13209
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13209
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Michael Shuler
>Assignee: Kurt Greaves
>  Labels: dtest, test-failure
> Fix For: 2.2.10, 3.0.14, 3.11.0
>
> Attachments: 13209.patch, node1.log, node2.log, node3.log, node4.log, 
> node5.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_dtest/528/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_bulk_round_trip_blogposts_with_max_connections
> {noformat}
> Error Message
> errors={'127.0.0.4': 'Client request timeout. See 
> Session.execute[_async](timeout)'}, last_host=127.0.0.4
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-792s6j
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> dtest: DEBUG: removing ccm cluster test at: /tmp/dtest-792s6j
> dtest: DEBUG: clearing ssl stores from [/tmp/dtest-792s6j] directory
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-uNMsuW
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> cassandra.policies: INFO: Using datacenter 'datacenter1' for 
> DCAwareRoundRobinPolicy (via host '127.0.0.1'); if incorrect, please specify 
> a local_dc to the constructor, or limit contact points to local cluster nodes
> cassandra.cluster: INFO: New Cassandra host  
> discovered
> cassandra.cluster: INFO: New Cassandra host  
> discovered
> cassandra.cluster: INFO: New Cassandra host  
> discovered
> cassandra.cluster: INFO: New Cassandra host  
> discovered
> dtest: DEBUG: Running stress with user profile 
> /home/automaton/cassandra-dtest/cqlsh_tests/blogposts.yaml
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/dtest.py", line 1090, in wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", 
> line 2571, in test_bulk_round_trip_blogposts_with_max_connections
> copy_from_options={'NUMPROCESSES': 2})
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", 
> line 2500, in _test_bulk_round_trip
> num_records = create_records()
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", 
> line 2473, in create_records
> ret = rows_to_list(self.session.execute(count_statement))[0][0]
>   File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line 
> 1998, in execute
> return self.execute_async(query, parameters, trace, custom_payload, 
> timeout, execution_profile, paging_state).result()
>   File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line 
> 3784, in result
> raise self._final_exception
> "errors={'127.0.0.4': 'Client request timeout. See 
> Session.execute[_async](timeout)'}, last_host=127.0.0.4\n 
> >> begin captured logging << \ndtest: DEBUG: cluster ccm 
> directory: /tmp/dtest-792s6j\ndtest: DEBUG: Done setting configuration 
> options:\n{   'initial_token': None,\n'num_tokens': '32',\n
> 'phi_convict_threshold': 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ndtest: DEBUG: removing ccm cluster test at: /tmp/dtest-792s6j\ndtest: 
> DEBUG: clearing ssl stores from [/tmp/dtest-792s6j] 

[03/10] cassandra git commit: cqlsh COPY FROM: increment error count only for failures, not for attempts

2017-06-05 Thread stefania
cqlsh COPY FROM: increment error count only for failures, not for attempts

patch by Kurt Greaves; reviewed by Stefania Alborghetti for CASSANDRA-13209


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5807eca8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5807eca8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5807eca8

Branch: refs/heads/cassandra-3.11
Commit: 5807eca8bbb30091b632e031441939c763b3e054
Parents: 8405f73
Author: Kurt 
Authored: Fri Jun 2 09:26:16 2017 +0800
Committer: Stefania Alborghetti 
Committed: Tue Jun 6 08:49:17 2017 +0800

--
 CHANGES.txt| 1 +
 pylib/cqlshlib/copyutil.py | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5807eca8/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index ef2e12c..d00ddcb 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.10
+ * cqlsh COPY FROM: increment error count only for failures, not for attempts 
(CASSANDRA-13209)
  * nodetool upgradesstables should upgrade system tables (CASSANDRA-13119)
  * Avoid starting gossiper in RemoveTest (CASSANDRA-13407)
  * Fix weightedSize() for row-cache reported by JMX and NodeTool 
(CASSANDRA-13393)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5807eca8/pylib/cqlshlib/copyutil.py
--
diff --git a/pylib/cqlshlib/copyutil.py b/pylib/cqlshlib/copyutil.py
index 83e1742..b72b517 100644
--- a/pylib/cqlshlib/copyutil.py
+++ b/pylib/cqlshlib/copyutil.py
@@ -1068,11 +1068,11 @@ class ImportErrorHandler(object):
 shell.printerr("Failed to import %d rows: %s - %s,  given up 
without retries"
% (len(err.rows), err.name, err.msg))
 else:
-self.insert_errors += len(err.rows)
 if not err.final:
 shell.printerr("Failed to import %d rows: %s - %s,  will retry 
later, attempt %d of %d"
% (len(err.rows), err.name, err.msg, 
err.attempts, self.max_attempts))
 else:
+self.insert_errors += len(err.rows)
 self.add_failed_rows(err.rows)
 shell.printerr("Failed to import %d rows: %s - %s,  given up 
after %d attempts"
% (len(err.rows), err.name, err.msg, 
err.attempts))
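The behavioural change in the diff above can be exercised in isolation with a minimal stand-in for the patched class. This is a simplified illustrative model, not the real `ImportErrorHandler`: after the patch, the error count grows only when rows are finally given up on, never for retry attempts.

```python
# Minimal stand-in mirroring the patched copyutil logic: insert_errors
# is incremented only for final failures, not for each retry attempt.
class ErrorCounter:
    def __init__(self):
        self.insert_errors = 0
        self.failed_rows = []

    def handle(self, rows, final):
        if final:
            # Given up on these rows: count them as errors once.
            self.insert_errors += len(rows)
            self.failed_rows.extend(rows)
        # Otherwise the rows will be retried later, so nothing is
        # counted yet -- this is exactly what the patch changes.
```

Before the patch, a batch retried several times inflated the reported error count by its size on every attempt; with this ordering, each failed row is counted exactly once.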





[07/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2017-06-05 Thread stefania
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/17a7a806
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/17a7a806
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/17a7a806

Branch: refs/heads/trunk
Commit: 17a7a806c8a5dd7f92500b17c0d7810608078ce5
Parents: 6b36d9f 5807eca
Author: Stefania Alborghetti 
Authored: Tue Jun 6 08:52:19 2017 +0800
Committer: Stefania Alborghetti 
Committed: Tue Jun 6 08:52:19 2017 +0800

--
 CHANGES.txt| 3 ++-
 pylib/cqlshlib/copyutil.py | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/17a7a806/CHANGES.txt
--
diff --cc CHANGES.txt
index 8ab8422,d00ddcb..0076b2c
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,48 -1,9 +1,49 @@@
 -2.2.10
 +3.0.14
 + * nodetool scrub/cleanup/upgradesstables exit code is wrong (CASSANDRA-13542)
 + * Fix the reported number of sstable data files accessed per read 
(CASSANDRA-13120)
 + * Fix schema digest mismatch during rolling upgrades from versions before 
3.0.12 (CASSANDRA-13559)
 + * Upgrade JNA version to 4.4.0 (CASSANDRA-13072)
 + * Interned ColumnIdentifiers should use minimal ByteBuffers (CASSANDRA-13533)
 + * ReverseIndexedReader may drop rows during 2.1 to 3.0 upgrade 
(CASSANDRA-13525)
 + * Fix repair process violating start/end token limits for small ranges 
(CASSANDRA-13052)
 + * Add storage port options to sstableloader (CASSANDRA-13518)
 + * Properly handle quoted index names in cqlsh DESCRIBE output 
(CASSANDRA-12847)
 + * Avoid reading static row twice from old format sstables (CASSANDRA-13236)
 + * Fix NPE in StorageService.excise() (CASSANDRA-13163)
 + * Expire OutboundTcpConnection messages by a single Thread (CASSANDRA-13265)
 + * Fail repair if insufficient responses received (CASSANDRA-13397)
 + * Fix SSTableLoader fail when the loaded table contains dropped columns 
(CASSANDRA-13276)
 + * Avoid name clashes in CassandraIndexTest (CASSANDRA-13427)
 + * Handling partially written hint files (CASSANDRA-12728)
 + * Interrupt replaying hints on decommission (CASSANDRA-13308)
 + * Fix schema version calculation for rolling upgrades (CASSANDRA-13441)
- 
++Merged from 2.2:
+  * cqlsh COPY FROM: increment error count only for failures, not for attempts 
(CASSANDRA-13209)
 - * nodetool upgradesstables should upgrade system tables (CASSANDRA-13119)
 +
 +3.0.13
 + * Make reading of range tombstones more reliable (CASSANDRA-12811)
 + * Fix startup problems due to schema tables not completely flushed 
(CASSANDRA-12213)
 + * Fix view builder bug that can filter out data on restart (CASSANDRA-13405)
 + * Fix 2i page size calculation when there are no regular columns 
(CASSANDRA-13400)
 + * Fix the conversion of 2.X expired rows without regular column data 
(CASSANDRA-13395)
 + * Fix hint delivery when using ext+internal IPs with prefer_local enabled 
(CASSANDRA-13020)
 + * Fix possible NPE on upgrade to 3.0/3.X in case of IO errors 
(CASSANDRA-13389)
 + * Legacy deserializer can create empty range tombstones (CASSANDRA-13341)
 + * Use the Kernel32 library to retrieve the PID on Windows and fix startup 
checks (CASSANDRA-1)
 + * Fix code to not exchange schema across major versions (CASSANDRA-13274)
 + * Dropping column results in "corrupt" SSTable (CASSANDRA-13337)
 + * Bugs handling range tombstones in the sstable iterators (CASSANDRA-13340)
 + * Fix CONTAINS filtering for null collections (CASSANDRA-13246)
 + * Applying: Use a unique metric reservoir per test run when using 
Cassandra-wide metrics residing in MBeans (CASSANDRA-13216)
 + * Propagate row deletions in 2i tables on upgrade (CASSANDRA-13320)
 + * Slice.isEmpty() returns false for some empty slices (CASSANDRA-13305)
 + * Add formatted row output to assertEmpty in CQL Tester (CASSANDRA-13238)
 + * Legacy caching options can prevent 3.0 upgrade (CASSANDRA-13384)
 + * Nodetool upgradesstables/scrub/compact ignores system tables 
(CASSANDRA-13410)
 + * Fix NPE issue in StorageService (CASSANDRA-13060)
 +Merged from 2.2:
   * Avoid starting gossiper in RemoveTest (CASSANDRA-13407)
   * Fix weightedSize() for row-cache reported by JMX and NodeTool 
(CASSANDRA-13393)
 - * Fix JVM metric paths (CASSANDRA-13103)
   * Honor truststore-password parameter in cassandra-stress (CASSANDRA-12773)
   * Discard in-flight shadow round responses (CASSANDRA-12653)
   * Don't anti-compact repaired data to avoid inconsistencies (CASSANDRA-13153)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/17a7a806/pylib/cqlshlib/copyutil.py
--



[08/10] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-06-05 Thread stefania
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/92e30427
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/92e30427
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/92e30427

Branch: refs/heads/cassandra-3.11
Commit: 92e304277c1b4d43049c59d5511f398e15c40b04
Parents: d8a3aa4 17a7a80
Author: Stefania Alborghetti 
Authored: Tue Jun 6 08:53:34 2017 +0800
Committer: Stefania Alborghetti 
Committed: Tue Jun 6 08:53:34 2017 +0800

--
 CHANGES.txt| 1 +
 pylib/cqlshlib/copyutil.py | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/92e30427/CHANGES.txt
--
diff --cc CHANGES.txt
index 5202428,0076b2c..61bff07
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -73,33 -38,12 +73,34 @@@ Merged from 3.0
   * Propagate row deletions in 2i tables on upgrade (CASSANDRA-13320)
   * Slice.isEmpty() returns false for some empty slices (CASSANDRA-13305)
   * Add formatted row output to assertEmpty in CQL Tester (CASSANDRA-13238)
 - * Legacy caching options can prevent 3.0 upgrade (CASSANDRA-13384)
 + * Prevent data loss on upgrade 2.1 - 3.0 by adding component separator to 
LogRecord absolute path (CASSANDRA-13294)
 + * Improve testing on macOS by eliminating sigar logging (CASSANDRA-13233)
 + * Cqlsh copy-from should error out when csv contains invalid data for 
collections (CASSANDRA-13071)
 + * Fix "multiple versions of ant detected..." when running ant test 
(CASSANDRA-13232)
 + * Coalescing strategy sleeps too much (CASSANDRA-13090)
 + * Faster StreamingHistogram (CASSANDRA-13038)
 + * Legacy deserializer can create unexpected boundary range tombstones 
(CASSANDRA-13237)
 + * Remove unnecessary assertion from AntiCompactionTest (CASSANDRA-13070)
 + * Fix cqlsh COPY for dates before 1900 (CASSANDRA-13185)
 + * Use keyspace replication settings on system.size_estimates table 
(CASSANDRA-9639)
 + * Add vm.max_map_count StartupCheck (CASSANDRA-13008)
 + * Hint related logging should include the IP address of the destination in 
addition to
 +   host ID (CASSANDRA-13205)
 + * Reloading logback.xml does not work (CASSANDRA-13173)
 + * Lightweight transactions temporarily fail after upgrade from 2.1 to 3.0 
(CASSANDRA-13109)
 + * Duplicate rows after upgrading from 2.1.16 to 3.0.10/3.9 (CASSANDRA-13125)
 + * Fix UPDATE queries with empty IN restrictions (CASSANDRA-13152)
 + * Fix handling of partition with partition-level deletion plus
 +   live rows in sstabledump (CASSANDRA-13177)
 + * Provide user workaround when system_schema.columns does not contain entries
 +   for a table that's in system_schema.tables (CASSANDRA-13180)
   * Nodetool upgradesstables/scrub/compact ignores system tables 
(CASSANDRA-13410)
 - * Fix NPE issue in StorageService (CASSANDRA-13060)
 + * Fix schema version calculation for rolling upgrades (CASSANDRA-13441)
  Merged from 2.2:
++ * cqlsh COPY FROM: increment error count only for failures, not for attempts 
(CASSANDRA-13209)
   * Avoid starting gossiper in RemoveTest (CASSANDRA-13407)
   * Fix weightedSize() for row-cache reported by JMX and NodeTool 
(CASSANDRA-13393)
 + * Fix JVM metric names (CASSANDRA-13103)
   * Honor truststore-password parameter in cassandra-stress (CASSANDRA-12773)
   * Discard in-flight shadow round responses (CASSANDRA-12653)
   * Don't anti-compact repaired data to avoid inconsistencies (CASSANDRA-13153)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/92e30427/pylib/cqlshlib/copyutil.py
--





[05/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2017-06-05 Thread stefania
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/17a7a806
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/17a7a806
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/17a7a806

Branch: refs/heads/cassandra-3.11
Commit: 17a7a806c8a5dd7f92500b17c0d7810608078ce5
Parents: 6b36d9f 5807eca
Author: Stefania Alborghetti 
Authored: Tue Jun 6 08:52:19 2017 +0800
Committer: Stefania Alborghetti 
Committed: Tue Jun 6 08:52:19 2017 +0800

--
 CHANGES.txt| 3 ++-
 pylib/cqlshlib/copyutil.py | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/17a7a806/CHANGES.txt
--
diff --cc CHANGES.txt
index 8ab8422,d00ddcb..0076b2c
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,48 -1,9 +1,49 @@@
 -2.2.10
 +3.0.14
 + * nodetool scrub/cleanup/upgradesstables exit code is wrong (CASSANDRA-13542)
 + * Fix the reported number of sstable data files accessed per read 
(CASSANDRA-13120)
 + * Fix schema digest mismatch during rolling upgrades from versions before 
3.0.12 (CASSANDRA-13559)
 + * Upgrade JNA version to 4.4.0 (CASSANDRA-13072)
 + * Interned ColumnIdentifiers should use minimal ByteBuffers (CASSANDRA-13533)
 + * ReverseIndexedReader may drop rows during 2.1 to 3.0 upgrade 
(CASSANDRA-13525)
 + * Fix repair process violating start/end token limits for small ranges 
(CASSANDRA-13052)
 + * Add storage port options to sstableloader (CASSANDRA-13518)
 + * Properly handle quoted index names in cqlsh DESCRIBE output 
(CASSANDRA-12847)
 + * Avoid reading static row twice from old format sstables (CASSANDRA-13236)
 + * Fix NPE in StorageService.excise() (CASSANDRA-13163)
 + * Expire OutboundTcpConnection messages by a single Thread (CASSANDRA-13265)
 + * Fail repair if insufficient responses received (CASSANDRA-13397)
 + * Fix SSTableLoader fail when the loaded table contains dropped columns 
(CASSANDRA-13276)
 + * Avoid name clashes in CassandraIndexTest (CASSANDRA-13427)
 + * Handling partially written hint files (CASSANDRA-12728)
 + * Interrupt replaying hints on decommission (CASSANDRA-13308)
 + * Fix schema version calculation for rolling upgrades (CASSANDRA-13441)
- 
++Merged from 2.2:
+  * cqlsh COPY FROM: increment error count only for failures, not for attempts 
(CASSANDRA-13209)
 - * nodetool upgradesstables should upgrade system tables (CASSANDRA-13119)
 +
 +3.0.13
 + * Make reading of range tombstones more reliable (CASSANDRA-12811)
 + * Fix startup problems due to schema tables not completely flushed 
(CASSANDRA-12213)
 + * Fix view builder bug that can filter out data on restart (CASSANDRA-13405)
 + * Fix 2i page size calculation when there are no regular columns 
(CASSANDRA-13400)
 + * Fix the conversion of 2.X expired rows without regular column data 
(CASSANDRA-13395)
 + * Fix hint delivery when using ext+internal IPs with prefer_local enabled 
(CASSANDRA-13020)
 + * Fix possible NPE on upgrade to 3.0/3.X in case of IO errors 
(CASSANDRA-13389)
 + * Legacy deserializer can create empty range tombstones (CASSANDRA-13341)
 + * Use the Kernel32 library to retrieve the PID on Windows and fix startup 
checks (CASSANDRA-1)
 + * Fix code to not exchange schema across major versions (CASSANDRA-13274)
 + * Dropping column results in "corrupt" SSTable (CASSANDRA-13337)
 + * Bugs handling range tombstones in the sstable iterators (CASSANDRA-13340)
 + * Fix CONTAINS filtering for null collections (CASSANDRA-13246)
 + * Applying: Use a unique metric reservoir per test run when using 
Cassandra-wide metrics residing in MBeans (CASSANDRA-13216)
 + * Propagate row deletions in 2i tables on upgrade (CASSANDRA-13320)
 + * Slice.isEmpty() returns false for some empty slices (CASSANDRA-13305)
 + * Add formatted row output to assertEmpty in CQL Tester (CASSANDRA-13238)
 + * Legacy caching options can prevent 3.0 upgrade (CASSANDRA-13384)
 + * Nodetool upgradesstables/scrub/compact ignores system tables 
(CASSANDRA-13410)
 + * Fix NPE issue in StorageService (CASSANDRA-13060)
 +Merged from 2.2:
   * Avoid starting gossiper in RemoveTest (CASSANDRA-13407)
   * Fix weightedSize() for row-cache reported by JMX and NodeTool 
(CASSANDRA-13393)
 - * Fix JVM metric paths (CASSANDRA-13103)
   * Honor truststore-password parameter in cassandra-stress (CASSANDRA-12773)
   * Discard in-flight shadow round responses (CASSANDRA-12653)
   * Don't anti-compact repaired data to avoid inconsistencies (CASSANDRA-13153)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/17a7a806/pylib/cqlshlib/copyutil.py

[01/10] cassandra git commit: cqlsh COPY FROM: increment error count only for failures, not for attempts

2017-06-05 Thread stefania
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 8405f7366 -> 5807eca8b
  refs/heads/cassandra-3.0 6b36d9f05 -> 17a7a806c
  refs/heads/cassandra-3.11 d8a3aa46a -> 92e304277
  refs/heads/trunk 3d2f06547 -> 106691b2f


cqlsh COPY FROM: increment error count only for failures, not for attempts

patch by Kurt Greaves; reviewed by Stefania Alborghetti for CASSANDRA-13209


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5807eca8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5807eca8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5807eca8

Branch: refs/heads/cassandra-2.2
Commit: 5807eca8bbb30091b632e031441939c763b3e054
Parents: 8405f73
Author: Kurt 
Authored: Fri Jun 2 09:26:16 2017 +0800
Committer: Stefania Alborghetti 
Committed: Tue Jun 6 08:49:17 2017 +0800

--
 CHANGES.txt| 1 +
 pylib/cqlshlib/copyutil.py | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5807eca8/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index ef2e12c..d00ddcb 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.10
+ * cqlsh COPY FROM: increment error count only for failures, not for attempts 
(CASSANDRA-13209)
  * nodetool upgradesstables should upgrade system tables (CASSANDRA-13119)
  * Avoid starting gossiper in RemoveTest (CASSANDRA-13407)
  * Fix weightedSize() for row-cache reported by JMX and NodeTool 
(CASSANDRA-13393)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5807eca8/pylib/cqlshlib/copyutil.py
--
diff --git a/pylib/cqlshlib/copyutil.py b/pylib/cqlshlib/copyutil.py
index 83e1742..b72b517 100644
--- a/pylib/cqlshlib/copyutil.py
+++ b/pylib/cqlshlib/copyutil.py
@@ -1068,11 +1068,11 @@ class ImportErrorHandler(object):
             shell.printerr("Failed to import %d rows: %s - %s,  given up without retries"
                            % (len(err.rows), err.name, err.msg))
         else:
-            self.insert_errors += len(err.rows)
             if not err.final:
                 shell.printerr("Failed to import %d rows: %s - %s,  will retry later, attempt %d of %d"
                                % (len(err.rows), err.name, err.msg, err.attempts, self.max_attempts))
             else:
+                self.insert_errors += len(err.rows)
                 self.add_failed_rows(err.rows)
                 shell.printerr("Failed to import %d rows: %s - %s,  given up after %d attempts"
                                % (len(err.rows), err.name, err.msg, err.attempts))
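The effect of the patch above can be sketched in isolation: a minimal, hypothetical error handler (class and attribute names here are illustrative stand-ins, not the real cqlsh `ImportErrorHandler`) in which rows are counted as errors only once retries are exhausted, rather than once per attempt.

```python
class ErrorInfo:
    """Hypothetical stand-in for cqlsh's import error object."""
    def __init__(self, rows, attempts, final=False):
        self.rows = rows          # rows that failed in this batch
        self.attempts = attempts  # how many times the batch has been tried
        self.final = final        # True when the error is not retryable

class Handler:
    def __init__(self, max_attempts=5):
        self.max_attempts = max_attempts
        self.insert_errors = 0    # counts rows given up on, not attempts made

    def handle(self, err):
        if err.final or err.attempts >= self.max_attempts:
            # Only now do the rows count as errors: before the fix the
            # counter was bumped on every attempt, inflating the reported
            # error count for batches that were retried and then succeeded.
            self.insert_errors += len(err.rows)
        # otherwise the batch will be retried and is not yet an error

h = Handler(max_attempts=3)
h.handle(ErrorInfo(rows=[1, 2], attempts=1))   # will retry: not counted
h.handle(ErrorInfo(rows=[1, 2], attempts=2))   # will retry: not counted
h.handle(ErrorInfo(rows=[1, 2], attempts=3))   # gave up: counted once
print(h.insert_errors)  # 2
```

With the pre-patch placement of the increment, the same three calls would have reported 6 failed rows for a batch of 2.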


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[06/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2017-06-05 Thread stefania
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/17a7a806
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/17a7a806
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/17a7a806

Branch: refs/heads/cassandra-3.0
Commit: 17a7a806c8a5dd7f92500b17c0d7810608078ce5
Parents: 6b36d9f 5807eca
Author: Stefania Alborghetti 
Authored: Tue Jun 6 08:52:19 2017 +0800
Committer: Stefania Alborghetti 
Committed: Tue Jun 6 08:52:19 2017 +0800

--
 CHANGES.txt| 3 ++-
 pylib/cqlshlib/copyutil.py | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/17a7a806/CHANGES.txt
--
diff --cc CHANGES.txt
index 8ab8422,d00ddcb..0076b2c
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,48 -1,9 +1,49 @@@
 -2.2.10
 +3.0.14
 + * nodetool scrub/cleanup/upgradesstables exit code is wrong (CASSANDRA-13542)
 + * Fix the reported number of sstable data files accessed per read 
(CASSANDRA-13120)
 + * Fix schema digest mismatch during rolling upgrades from versions before 
3.0.12 (CASSANDRA-13559)
 + * Upgrade JNA version to 4.4.0 (CASSANDRA-13072)
 + * Interned ColumnIdentifiers should use minimal ByteBuffers (CASSANDRA-13533)
 + * ReverseIndexedReader may drop rows during 2.1 to 3.0 upgrade 
(CASSANDRA-13525)
 + * Fix repair process violating start/end token limits for small ranges 
(CASSANDRA-13052)
 + * Add storage port options to sstableloader (CASSANDRA-13518)
 + * Properly handle quoted index names in cqlsh DESCRIBE output 
(CASSANDRA-12847)
 + * Avoid reading static row twice from old format sstables (CASSANDRA-13236)
 + * Fix NPE in StorageService.excise() (CASSANDRA-13163)
 + * Expire OutboundTcpConnection messages by a single Thread (CASSANDRA-13265)
 + * Fail repair if insufficient responses received (CASSANDRA-13397)
 + * Fix SSTableLoader fail when the loaded table contains dropped columns 
(CASSANDRA-13276)
 + * Avoid name clashes in CassandraIndexTest (CASSANDRA-13427)
 + * Handling partially written hint files (CASSANDRA-12728)
 + * Interrupt replaying hints on decommission (CASSANDRA-13308)
 + * Fix schema version calculation for rolling upgrades (CASSANDRA-13441)
- 
++Merged from 2.2:
+  * cqlsh COPY FROM: increment error count only for failures, not for attempts 
(CASSANDRA-13209)
 - * nodetool upgradesstables should upgrade system tables (CASSANDRA-13119)
 +
 +3.0.13
 + * Make reading of range tombstones more reliable (CASSANDRA-12811)
 + * Fix startup problems due to schema tables not completely flushed 
(CASSANDRA-12213)
 + * Fix view builder bug that can filter out data on restart (CASSANDRA-13405)
 + * Fix 2i page size calculation when there are no regular columns 
(CASSANDRA-13400)
 + * Fix the conversion of 2.X expired rows without regular column data 
(CASSANDRA-13395)
 + * Fix hint delivery when using ext+internal IPs with prefer_local enabled 
(CASSANDRA-13020)
 + * Fix possible NPE on upgrade to 3.0/3.X in case of IO errors 
(CASSANDRA-13389)
 + * Legacy deserializer can create empty range tombstones (CASSANDRA-13341)
 + * Use the Kernel32 library to retrieve the PID on Windows and fix startup 
checks (CASSANDRA-1)
 + * Fix code to not exchange schema across major versions (CASSANDRA-13274)
 + * Dropping column results in "corrupt" SSTable (CASSANDRA-13337)
 + * Bugs handling range tombstones in the sstable iterators (CASSANDRA-13340)
 + * Fix CONTAINS filtering for null collections (CASSANDRA-13246)
 + * Applying: Use a unique metric reservoir per test run when using 
Cassandra-wide metrics residing in MBeans (CASSANDRA-13216)
 + * Propagate row deletions in 2i tables on upgrade (CASSANDRA-13320)
 + * Slice.isEmpty() returns false for some empty slices (CASSANDRA-13305)
 + * Add formatted row output to assertEmpty in CQL Tester (CASSANDRA-13238)
 + * Legacy caching options can prevent 3.0 upgrade (CASSANDRA-13384)
 + * Nodetool upgradesstables/scrub/compact ignores system tables 
(CASSANDRA-13410)
 + * Fix NPE issue in StorageService (CASSANDRA-13060)
 +Merged from 2.2:
   * Avoid starting gossiper in RemoveTest (CASSANDRA-13407)
   * Fix weightedSize() for row-cache reported by JMX and NodeTool 
(CASSANDRA-13393)
 - * Fix JVM metric paths (CASSANDRA-13103)
   * Honor truststore-password parameter in cassandra-stress (CASSANDRA-12773)
   * Discard in-flight shadow round responses (CASSANDRA-12653)
   * Don't anti-compact repaired data to avoid inconsistencies (CASSANDRA-13153)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/17a7a806/pylib/cqlshlib/copyutil.py

[09/10] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-06-05 Thread stefania
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/92e30427
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/92e30427
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/92e30427

Branch: refs/heads/trunk
Commit: 92e304277c1b4d43049c59d5511f398e15c40b04
Parents: d8a3aa4 17a7a80
Author: Stefania Alborghetti 
Authored: Tue Jun 6 08:53:34 2017 +0800
Committer: Stefania Alborghetti 
Committed: Tue Jun 6 08:53:34 2017 +0800

--
 CHANGES.txt| 1 +
 pylib/cqlshlib/copyutil.py | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/92e30427/CHANGES.txt
--
diff --cc CHANGES.txt
index 5202428,0076b2c..61bff07
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -73,33 -38,12 +73,34 @@@ Merged from 3.0
   * Propagate row deletions in 2i tables on upgrade (CASSANDRA-13320)
   * Slice.isEmpty() returns false for some empty slices (CASSANDRA-13305)
   * Add formatted row output to assertEmpty in CQL Tester (CASSANDRA-13238)
 - * Legacy caching options can prevent 3.0 upgrade (CASSANDRA-13384)
 + * Prevent data loss on upgrade 2.1 - 3.0 by adding component separator to 
LogRecord absolute path (CASSANDRA-13294)
 + * Improve testing on macOS by eliminating sigar logging (CASSANDRA-13233)
 + * Cqlsh copy-from should error out when csv contains invalid data for 
collections (CASSANDRA-13071)
 + * Fix "multiple versions of ant detected..." when running ant test 
(CASSANDRA-13232)
 + * Coalescing strategy sleeps too much (CASSANDRA-13090)
 + * Faster StreamingHistogram (CASSANDRA-13038)
 + * Legacy deserializer can create unexpected boundary range tombstones 
(CASSANDRA-13237)
 + * Remove unnecessary assertion from AntiCompactionTest (CASSANDRA-13070)
 + * Fix cqlsh COPY for dates before 1900 (CASSANDRA-13185)
 + * Use keyspace replication settings on system.size_estimates table 
(CASSANDRA-9639)
 + * Add vm.max_map_count StartupCheck (CASSANDRA-13008)
 + * Hint related logging should include the IP address of the destination in 
addition to
 +   host ID (CASSANDRA-13205)
 + * Reloading logback.xml does not work (CASSANDRA-13173)
 + * Lightweight transactions temporarily fail after upgrade from 2.1 to 3.0 
(CASSANDRA-13109)
 + * Duplicate rows after upgrading from 2.1.16 to 3.0.10/3.9 (CASSANDRA-13125)
 + * Fix UPDATE queries with empty IN restrictions (CASSANDRA-13152)
 + * Fix handling of partition with partition-level deletion plus
 +   live rows in sstabledump (CASSANDRA-13177)
 + * Provide user workaround when system_schema.columns does not contain entries
 +   for a table that's in system_schema.tables (CASSANDRA-13180)
   * Nodetool upgradesstables/scrub/compact ignores system tables 
(CASSANDRA-13410)
 - * Fix NPE issue in StorageService (CASSANDRA-13060)
 + * Fix schema version calculation for rolling upgrades (CASSANDRA-13441)
  Merged from 2.2:
++ * cqlsh COPY FROM: increment error count only for failures, not for attempts 
(CASSANDRA-13209)
   * Avoid starting gossiper in RemoveTest (CASSANDRA-13407)
   * Fix weightedSize() for row-cache reported by JMX and NodeTool 
(CASSANDRA-13393)
 + * Fix JVM metric names (CASSANDRA-13103)
   * Honor truststore-password parameter in cassandra-stress (CASSANDRA-12773)
   * Discard in-flight shadow round responses (CASSANDRA-12653)
   * Don't anti-compact repaired data to avoid inconsistencies (CASSANDRA-13153)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/92e30427/pylib/cqlshlib/copyutil.py
--





[04/10] cassandra git commit: cqlsh COPY FROM: increment error count only for failures, not for attempts

2017-06-05 Thread stefania
cqlsh COPY FROM: increment error count only for failures, not for attempts

patch by Kurt Greaves; reviewed by Stefania Alborghetti for CASSANDRA-13209


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5807eca8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5807eca8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5807eca8

Branch: refs/heads/trunk
Commit: 5807eca8bbb30091b632e031441939c763b3e054
Parents: 8405f73
Author: Kurt 
Authored: Fri Jun 2 09:26:16 2017 +0800
Committer: Stefania Alborghetti 
Committed: Tue Jun 6 08:49:17 2017 +0800

--
 CHANGES.txt| 1 +
 pylib/cqlshlib/copyutil.py | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5807eca8/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index ef2e12c..d00ddcb 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.10
+ * cqlsh COPY FROM: increment error count only for failures, not for attempts 
(CASSANDRA-13209)
  * nodetool upgradesstables should upgrade system tables (CASSANDRA-13119)
  * Avoid starting gossiper in RemoveTest (CASSANDRA-13407)
  * Fix weightedSize() for row-cache reported by JMX and NodeTool 
(CASSANDRA-13393)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5807eca8/pylib/cqlshlib/copyutil.py
--
diff --git a/pylib/cqlshlib/copyutil.py b/pylib/cqlshlib/copyutil.py
index 83e1742..b72b517 100644
--- a/pylib/cqlshlib/copyutil.py
+++ b/pylib/cqlshlib/copyutil.py
@@ -1068,11 +1068,11 @@ class ImportErrorHandler(object):
             shell.printerr("Failed to import %d rows: %s - %s,  given up without retries"
                            % (len(err.rows), err.name, err.msg))
         else:
-            self.insert_errors += len(err.rows)
             if not err.final:
                 shell.printerr("Failed to import %d rows: %s - %s,  will retry later, attempt %d of %d"
                                % (len(err.rows), err.name, err.msg, err.attempts, self.max_attempts))
             else:
+                self.insert_errors += len(err.rows)
                 self.add_failed_rows(err.rows)
                 shell.printerr("Failed to import %d rows: %s - %s,  given up after %d attempts"
                                % (len(err.rows), err.name, err.msg, err.attempts))





[02/10] cassandra git commit: cqlsh COPY FROM: increment error count only for failures, not for attempts

2017-06-05 Thread stefania
cqlsh COPY FROM: increment error count only for failures, not for attempts

patch by Kurt Greaves; reviewed by Stefania Alborghetti for CASSANDRA-13209


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5807eca8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5807eca8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5807eca8

Branch: refs/heads/cassandra-3.0
Commit: 5807eca8bbb30091b632e031441939c763b3e054
Parents: 8405f73
Author: Kurt 
Authored: Fri Jun 2 09:26:16 2017 +0800
Committer: Stefania Alborghetti 
Committed: Tue Jun 6 08:49:17 2017 +0800

--
 CHANGES.txt| 1 +
 pylib/cqlshlib/copyutil.py | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5807eca8/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index ef2e12c..d00ddcb 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.10
+ * cqlsh COPY FROM: increment error count only for failures, not for attempts 
(CASSANDRA-13209)
  * nodetool upgradesstables should upgrade system tables (CASSANDRA-13119)
  * Avoid starting gossiper in RemoveTest (CASSANDRA-13407)
  * Fix weightedSize() for row-cache reported by JMX and NodeTool 
(CASSANDRA-13393)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5807eca8/pylib/cqlshlib/copyutil.py
--
diff --git a/pylib/cqlshlib/copyutil.py b/pylib/cqlshlib/copyutil.py
index 83e1742..b72b517 100644
--- a/pylib/cqlshlib/copyutil.py
+++ b/pylib/cqlshlib/copyutil.py
@@ -1068,11 +1068,11 @@ class ImportErrorHandler(object):
             shell.printerr("Failed to import %d rows: %s - %s,  given up without retries"
                            % (len(err.rows), err.name, err.msg))
         else:
-            self.insert_errors += len(err.rows)
             if not err.final:
                 shell.printerr("Failed to import %d rows: %s - %s,  will retry later, attempt %d of %d"
                                % (len(err.rows), err.name, err.msg, err.attempts, self.max_attempts))
             else:
+                self.insert_errors += len(err.rows)
                 self.add_failed_rows(err.rows)
                 shell.printerr("Failed to import %d rows: %s - %s,  given up after %d attempts"
                                % (len(err.rows), err.name, err.msg, err.attempts))





[10/10] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-06-05 Thread stefania
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/106691b2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/106691b2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/106691b2

Branch: refs/heads/trunk
Commit: 106691b2ff479582fa5b44a01f077d04ff39bf50
Parents: 3d2f065 92e3042
Author: Stefania Alborghetti 
Authored: Tue Jun 6 08:53:47 2017 +0800
Committer: Stefania Alborghetti 
Committed: Tue Jun 6 08:53:47 2017 +0800

--
 CHANGES.txt| 1 +
 pylib/cqlshlib/copyutil.py | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/106691b2/CHANGES.txt
--





[jira] [Commented] (CASSANDRA-13346) Failed unregistering mbean during drop keyspace

2017-06-05 Thread Lerh Chuan Low (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037831#comment-16037831
 ] 

Lerh Chuan Low commented on CASSANDRA-13346:


[~jjirsa] The ones attached to the JIRA :) 

> Failed unregistering mbean during drop keyspace
> ---
>
> Key: CASSANDRA-13346
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13346
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
> Environment: Cassandra 3.9
>Reporter: Gábor Auth
>Assignee: Lerh Chuan Low
>Priority: Minor
>  Labels: lhf
> Fix For: 3.10, 3.0.x
>
> Attachments: 13346-3.0.X.txt, 13346-3.X.txt, 13346-trunk.txt
>
>
> All node throw exceptions about materialized views during drop keyspace:
> {code}
> WARN  [MigrationStage:1] 2017-03-16 16:54:25,016 ColumnFamilyStore.java:535 - 
> Failed unregistering mbean: 
> org.apache.cassandra.db:type=Tables,keyspace=test20160810,table=unit_by_account
> java.lang.NullPointerException: null
> at 
> java.util.concurrent.ConcurrentHashMap.replaceNode(ConcurrentHashMap.java:1106)
>  ~[na:1.8.0_121]
> at 
> java.util.concurrent.ConcurrentHashMap.remove(ConcurrentHashMap.java:1097) 
> ~[na:1.8.0_121]
> at 
> java.util.concurrent.ConcurrentHashMap$KeySetView.remove(ConcurrentHashMap.java:4569)
>  ~[na:1.8.0_121]
> at 
> org.apache.cassandra.metrics.TableMetrics.release(TableMetrics.java:712) 
> ~[apache-cassandra-3.9.0.jar:3.9.0]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.unregisterMBean(ColumnFamilyStore.java:570)
>  [apache-cassandra-3.9.0.jar:3.9.0]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.invalidate(ColumnFamilyStore.java:527)
>  [apache-cassandra-3.9.0.jar:3.9.0]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.invalidate(ColumnFamilyStore.java:517)
>  [apache-cassandra-3.9.0.jar:3.9.0]
> at org.apache.cassandra.db.Keyspace.unloadCf(Keyspace.java:365) 
> [apache-cassandra-3.9.0.jar:3.9.0]
> at org.apache.cassandra.db.Keyspace.dropCf(Keyspace.java:358) 
> [apache-cassandra-3.9.0.jar:3.9.0]
> at org.apache.cassandra.config.Schema.dropView(Schema.java:744) 
> [apache-cassandra-3.9.0.jar:3.9.0]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.lambda$mergeSchema$373(SchemaKeyspace.java:1287)
>  [apache-cassandra-3.9.0.jar:3.9.0]
> at java.lang.Iterable.forEach(Iterable.java:75) ~[na:1.8.0_121]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchema(SchemaKeyspace.java:1287)
>  [apache-cassandra-3.9.0.jar:3.9.0]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchemaAndAnnounceVersion(SchemaKeyspace.java:1256)
>  [apache-cassandra-3.9.0.jar:3.9.0]
> at 
> org.apache.cassandra.db.DefinitionsUpdateVerbHandler$1.runMayThrow(DefinitionsUpdateVerbHandler.java:51)
>  ~[apache-cassandra-3.9.0.jar:3.9.0]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-3.9.0.jar:3.9.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_121]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_121]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_121]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[na:1.8.0_121]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_121]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)




[jira] [Commented] (CASSANDRA-13346) Failed unregistering mbean during drop keyspace

2017-06-05 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037817#comment-16037817
 ] 

Jeff Jirsa commented on CASSANDRA-13346:


[~cnlwsu] / [~Lerh Low] - just to be clear, which patches here are ready to 
commit? The ones attached to the JIRA, or the github URL 
https://github.com/apache/cassandra/compare/trunk...juiceblender:cassandra-13346
 ? 

 

> Failed unregistering mbean during drop keyspace
> ---
>
> Key: CASSANDRA-13346
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13346
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
> Environment: Cassandra 3.9
>Reporter: Gábor Auth
>Assignee: Lerh Chuan Low
>Priority: Minor
>  Labels: lhf
> Fix For: 3.10, 3.0.x
>
> Attachments: 13346-3.0.X.txt, 13346-3.X.txt, 13346-trunk.txt
>
>






[jira] [Commented] (CASSANDRA-10876) Alter behavior of batch WARN and fail on single partition batches

2017-06-05 Thread Simon Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037786#comment-16037786
 ] 

Simon Zhou commented on CASSANDRA-10876:


I intended to backport this (see CASSANDRA-13467) but may need a committer to 
review it.

> Alter behavior of batch WARN and fail on single partition batches
> -
>
> Key: CASSANDRA-10876
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10876
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Patrick McFadin
>Assignee: Sylvain Lebresne
>Priority: Minor
>  Labels: lhf
> Fix For: 3.6
>
> Attachments: 10876.txt
>
>
> In an attempt to give operator insight into potentially harmful batch usage, 
> Jiras were created to log WARN or fail on certain batch sizes. This ignores 
> the single partition batch, which doesn't create the same issues as a 
> multi-partition batch. 
> The proposal is to ignore size on single partition batch statements. 
> Reference:
> [CASSANDRA-6487|https://issues.apache.org/jira/browse/CASSANDRA-6487]
> [CASSANDRA-8011|https://issues.apache.org/jira/browse/CASSANDRA-8011]






[jira] [Commented] (CASSANDRA-13574) mx4j default listening configuration comment is not correct

2017-06-05 Thread Jay Zhuang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037623#comment-16037623
 ] 

Jay Zhuang commented on CASSANDRA-13574:


Please review the patch: 
https://github.com/apache/cassandra/compare/cassandra-3.0...cooldoger:13574-3.0?expand=1

> mx4j default listening configuration comment is not correct
> ---
>
> Key: CASSANDRA-13574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13574
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
>Priority: Minor
>  Labels: jmx_interface
> Fix For: 3.0.14
>
>
> {noformat}
> By default mx4j listens on 0.0.0.0:8081.
> {noformat}
> https://github.com/apache/cassandra/blob/cassandra-2.2/conf/cassandra-env.sh#L302
> It's actually set to Cassandra broadcast_address and it will never be 
> {{0.0.0.0}}:
> https://github.com/apache/cassandra/blob/cassandra-2.2/src/java/org/apache/cassandra/utils/Mx4jTool.java#L79






[jira] [Updated] (CASSANDRA-13574) mx4j default listening configuration comment is not correct

2017-06-05 Thread Jay Zhuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Zhuang updated CASSANDRA-13574:
---
Fix Version/s: 3.0.14
   Status: Patch Available  (was: Open)

> mx4j default listening configuration comment is not correct
> ---
>
> Key: CASSANDRA-13574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13574
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
>Priority: Minor
>  Labels: jmx_interface
> Fix For: 3.0.14
>
>
> {noformat}
> By default mx4j listens on 0.0.0.0:8081.
> {noformat}
> https://github.com/apache/cassandra/blob/cassandra-2.2/conf/cassandra-env.sh#L302
> It's actually set to Cassandra broadcast_address and it will never be 
> {{0.0.0.0}}:
> https://github.com/apache/cassandra/blob/cassandra-2.2/src/java/org/apache/cassandra/utils/Mx4jTool.java#L79






[jira] [Created] (CASSANDRA-13574) mx4j default listening configuration comment is not correct

2017-06-05 Thread Jay Zhuang (JIRA)
Jay Zhuang created CASSANDRA-13574:
--

 Summary: mx4j default listening configuration comment is not 
correct
 Key: CASSANDRA-13574
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13574
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Jay Zhuang
Assignee: Jay Zhuang
Priority: Minor


{noformat}
By default mx4j listens on 0.0.0.0:8081.
{noformat}
https://github.com/apache/cassandra/blob/cassandra-2.2/conf/cassandra-env.sh#L302

It's actually set to Cassandra broadcast_address and it will never be 
{{0.0.0.0}}:
https://github.com/apache/cassandra/blob/cassandra-2.2/src/java/org/apache/cassandra/utils/Mx4jTool.java#L79
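The fallback described above — an explicit mx4j address wins, otherwise the node's broadcast_address is used, so the listener is never actually bound to 0.0.0.0 as the comment claims — can be sketched with a hypothetical helper (not the actual Mx4jTool code; the sample address is made up):

```python
def effective_mx4j_address(mx4j_address=None, broadcast_address="10.0.0.5"):
    """Mimic the reported fallback: prefer an explicitly configured
    mx4j listen address, otherwise fall back to broadcast_address.
    Neither branch ever yields the wildcard 0.0.0.0."""
    return mx4j_address if mx4j_address else broadcast_address

print(effective_mx4j_address())             # falls back to broadcast_address
print(effective_mx4j_address("127.0.0.1"))  # explicit setting wins
```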






[jira] [Commented] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables

2017-06-05 Thread Jonathan Owens (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037380#comment-16037380
 ] 

Jonathan Owens commented on CASSANDRA-13418:


We're chasing what may be a gotcha in our implementation of this. We have one 
cluster that does regular incremental repairs and is ending up with a whole lot 
of duplicated data across sstables, which we suspect is due to overstreaming. 
Explicitly ignoring overlap is great for compacting away tombstones, but it does 
nothing to detect duplicate partitions across tables on disk. And because TWCS 
buckets by largest timestamp, tables holding older data that was streamed later 
will never appear in the same compaction operation as the table that data 
"should have" been written to the first time. CASSANDRA-10496 would eventually 
resolve this by pushing that older data into the correct bucket, but we need a 
workaround sooner.

We're contemplating a few options:
* I remember, or imagined, a ticket to try to suss out overlapping sstables and 
include them in the current compaction operation if found, rather than 
cancelling the operation. That seems good here, because in TWCS you should not 
have many overlaps, and if you do they need to be addressed somehow or you end 
up with duplicates.
* We could switch to cassandra-reaper or something similar and do 
higher-precision repairs to reduce overstreaming, though that's a lot of work 
to fix what seems really like a compaction artifact.
* Reverting the change would put us back in the world where tombstones don't 
expire due to overlap checks failing, so that's out.
* We can write an external tool to detect overlaps and issue user-defined 
compactions against them, but that seems really yucky. 
* We could never run incremental repairs and rely only on higher consistency 
levels on write/read, and let read repair do the work. This fixes the problem 
only by decreasing the magnitude.

I still believe this patch is a good idea, as optimizing for tombstone expiry 
is essential with TWCS, but the repair interaction here is worth pointing out.
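The bucketing interaction described above can be sketched as follows (the window size, sstable names, and timestamps are illustrative assumptions, not Cassandra's actual TimeWindowCompactionStrategy code): because the window is keyed off an sstable's largest data timestamp, a repair-streamed sstable whose max timestamp falls in a newer window lands in a different bucket than the table it overlaps, so the duplicated partitions are never candidates for the same compaction.

```python
from collections import defaultdict

WINDOW = 3600  # hypothetical 1-hour compaction window, in seconds

def bucket_by_max_timestamp(sstables):
    """Group (name, max_timestamp) pairs into TWCS-style windows keyed
    by each sstable's largest data timestamp (illustrative sketch)."""
    buckets = defaultdict(list)
    for name, max_ts in sstables:
        buckets[max_ts // WINDOW].append(name)
    return dict(buckets)

# A streamed sstable can hold mostly old rows plus a few newer ones; its
# *max* timestamp pulls it into a newer window than the sstable holding
# the bulk of the overlapping data, so the two never compact together.
sstables = [
    ("original-a",   1000),  # window 0
    ("streamed-dup", 4000),  # window 1, but overlaps original-a's data
]
print(bucket_by_max_timestamp(sstables))
# {0: ['original-a'], 1: ['streamed-dup']}
```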


> Allow TWCS to ignore overlaps when dropping fully expired sstables
> --
>
> Key: CASSANDRA-13418
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13418
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Corentin Chary
>  Labels: twcs
>
> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If 
> you really want read-repairs you're going to have sstables blocking the 
> expiration of other fully expired SSTables because they overlap.
> You can set unchecked_tombstone_compaction = true or tombstone_threshold to a 
> very low value and that will purge the blockers of old data that should 
> already have expired, thus removing the overlaps and allowing the other 
> SSTables to expire.
> The thing is that this is rather CPU intensive and not optimal. If you have 
> time series, you might not care if all your data doesn't exactly expire at 
> the right time, or if data re-appears for some time, as long as it gets 
> deleted as soon as it can. And in this situation I believe it would be really 
> beneficial to allow users to simply ignore overlapping SSTables when looking 
> for fully expired ones.
> To the question: why would you need read-repairs?
> - Full repairs basically take longer than the TTL of the data on my dataset, 
> so this isn't really effective.
> - Even with a 10% chance of doing a repair, we found out that this would be 
> enough to greatly reduce entropy of the most used data (and if you have 
> timeseries, you're likely to have a dashboard doing the same important 
> queries over and over again).
> - LOCAL_QUORUM is too expensive (need >3 replicas), QUORUM is too slow.
> I'll try to come up with a patch demonstrating how this would work, try it on 
> our system and report the effects.
> cc: [~adejanovski], [~rgerard] as I know you worked on similar issues already.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-9608) Support Java 9

2017-06-05 Thread Mandy Chung (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037369#comment-16037369
 ] 

Mandy Chung commented on CASSANDRA-9608:


https://bugs.openjdk.java.net/browse/JDK-6760712 defines the property 
"jmx.remote.x.daemon". If it is specified as "true" in the environment map when 
starting a connector server, the connector server will run as a daemon and will 
not prevent the VM from exiting.
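For reference, a minimal sketch of passing that property (class and method names 
are illustrative; whether the property is honored depends on the JDK's connector 
server implementation, per JDK-6760712):

{code:java}
import java.lang.management.ManagementFactory;
import java.util.HashMap;
import java.util.Map;
import javax.management.MBeanServer;
import javax.management.remote.JMXConnectorServer;
import javax.management.remote.JMXConnectorServerFactory;
import javax.management.remote.JMXServiceURL;

public class DaemonJmxServer
{
    // Starts a connector server with the daemon property set and
    // returns its service URL.
    static String startAndGetAddress() throws Exception
    {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        Map<String, Object> env = new HashMap<>();
        // Assumption: per JDK-6760712, "true" makes the connector server's
        // threads daemons, so they do not keep the VM alive.
        env.put("jmx.remote.x.daemon", "true");
        JMXConnectorServer server = JMXConnectorServerFactory.newJMXConnectorServer(
                new JMXServiceURL("service:jmx:rmi://"), env, mbs);
        server.start();
        String address = server.getAddress().toString();
        server.stop();
        return address;
    }

    public static void main(String[] args) throws Exception
    {
        System.out.println(startAndGetAddress());
    }
}
{code}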





> Support Java 9
> --
>
> Key: CASSANDRA-9608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Priority: Minor
>
> This ticket is intended to group all issues found to support Java 9 in the 
> future.
> From what I've found out so far:
> * Maven dependency {{com.sun:tools:jar:0}} via cobertura cannot be resolved. 
> It can be easily solved using this patch:
> {code}
> -    <dependency groupId="net.sourceforge.cobertura" artifactId="cobertura"/>
> +    <dependency groupId="net.sourceforge.cobertura" artifactId="cobertura">
> +      <exclusion groupId="com.sun" artifactId="tools"/>
> +    </dependency>
> {code}
> * Another issue is that {{sun.misc.Unsafe}} no longer contains the methods 
> {{monitorEnter}} + {{monitorExit}}. These methods are used by 
> {{o.a.c.utils.concurrent.Locks}} which is only used by 
> {{o.a.c.db.AtomicBTreeColumns}}.
> I don't mind to start working on this yet since Java 9 is in a too early 
> development phase.






[jira] [Comment Edited] (CASSANDRA-9608) Support Java 9

2017-06-05 Thread Mandy Chung (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037369#comment-16037369
 ] 

Mandy Chung edited comment on CASSANDRA-9608 at 6/5/17 6:45 PM:


https://bugs.openjdk.java.net/browse/JDK-6760712 defines the property 
"jmx.remote.x.daemon". If it is specified as "true" in the environment map when 
starting a connector server, the connector server will run as a daemon and will 
not prevent the VM from exiting. This property should be used instead, replacing 
the dependency on JDK-internal API.





was (Author: mandy.ch...@oracle.com):
https://bugs.openjdk.java.net/browse/JDK-6760712 defines a property 
"jmx.remote.x.daemon", if specified as "true" in the environment map when 
starting a connector server, the connector server will run as a daemon and will 
not prevent the VM from exiting.





> Support Java 9
> --
>
> Key: CASSANDRA-9608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Priority: Minor
>
> This ticket is intended to group all issues found to support Java 9 in the 
> future.
> From what I've found out so far:
> * Maven dependency {{com.sun:tools:jar:0}} via cobertura cannot be resolved. 
> It can be easily solved using this patch:
> {code}
> -    <dependency groupId="net.sourceforge.cobertura" artifactId="cobertura"/>
> +    <dependency groupId="net.sourceforge.cobertura" artifactId="cobertura">
> +      <exclusion groupId="com.sun" artifactId="tools"/>
> +    </dependency>
> {code}
> * Another issue is that {{sun.misc.Unsafe}} no longer contains the methods 
> {{monitorEnter}} + {{monitorExit}}. These methods are used by 
> {{o.a.c.utils.concurrent.Locks}} which is only used by 
> {{o.a.c.db.AtomicBTreeColumns}}.
> I don't mind to start working on this yet since Java 9 is in a too early 
> development phase.






[jira] [Updated] (CASSANDRA-13545) Exception in CompactionExecutor leading to tmplink files not being removed

2017-06-05 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-13545:
---
Priority: Critical  (was: Major)

> Exception in CompactionExecutor leading to tmplink files not being removed
> --
>
> Key: CASSANDRA-13545
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13545
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Dmitry Erokhin
>Priority: Critical
>
> We are facing an issue where compactions fail on a few nodes with the 
> following message
> {code}
> ERROR [CompactionExecutor:1248] 2017-05-22 15:32:55,390 
> CassandraDaemon.java:185 - Exception in thread 
> Thread[CompactionExecutor:1248,1,main]
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.io.sstable.IndexSummary.<init>(IndexSummary.java:86) 
> ~[apache-cassandra-2.2.5.jar:2.2.5]
>   at 
> org.apache.cassandra.io.sstable.IndexSummaryBuilder.build(IndexSummaryBuilder.java:235)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.openEarly(BigTableWriter.java:316)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
>   at 
> org.apache.cassandra.io.sstable.SSTableRewriter.maybeReopenEarly(SSTableRewriter.java:170)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
>   at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:115)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
>   at 
> org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.append(DefaultCompactionWriter.java:64)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:184)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.2.5.jar:2.2.5]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:256)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_121]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_121]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_121]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_121]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
> {code}
> Also, the number of tmplink files in /var/lib/cassandra/data/<keyspace 
> name>/blocks/tmplink* is growing constantly until the node runs out of space. 
> Restarting cassandra removes all tmplink files, but the issue still continues.
> We are using Cassandra 2.2.5 on Debian 8 with Oracle Java 8
> {code}
> root@cassandra-p10:/var/lib/cassandra/data/mugenstorage/blocks-33167ef0447a11e68f3e5b42fc45b62f#
>  dpkg -l | grep -E "java|cassandra"
> ii  cassandra  2.2.5all  
> distributed storage system for structured data
> ii  cassandra-tools2.2.5all  
> distributed storage system for structured data
> ii  java-common0.52 all  
> Base of all Java packages
> ii  javascript-common  11   all  
> Base support for JavaScript library packages
> ii  oracle-java8-installer 8u121-1~webupd8~0all  
> Oracle Java(TM) Development Kit (JDK) 8
> ii  oracle-java8-set-default   8u121-1~webupd8~0all  
> Set Oracle JDK 8 as default Java
> {code}






[jira] [Commented] (CASSANDRA-13545) Exception in CompactionExecutor leading to tmplink files not being removed

2017-06-05 Thread Dmitry Erokhin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037233#comment-16037233
 ] 

Dmitry Erokhin commented on CASSANDRA-13545:


One of our engineers has been able to find at least one issue which leads to 
this condition. His findings are below.
---

With a consistent reproduction outside of the production cluster, I downloaded 
the Cassandra source code, set up a remote debugger (Eclipse) and connected it 
to the Cassandra process running on my node.
 
At this point I was able to set breakpoints and examine a live system, 
starting at the last frame in the traceback 
(org.apache.cassandra.io.sstable.IndexSummary.<init>(IndexSummary.java:86)). 
Stepping through the code during a live compaction, I was able to determine that 
the issue is indeed a bug in Cassandra that occurs when it tries to run a 
compaction job with a very large number of partitions.
 
The SafeMemoryWriter class is used to build the index summary for the new 
sstable.
{code:java}
public class SafeMemoryWriter extends DataOutputBuffer
{
private SafeMemory memory;
 
@SuppressWarnings("resource")
public SafeMemoryWriter(long initialCapacity)
{
this(new SafeMemory(initialCapacity));
}
 
private SafeMemoryWriter(SafeMemory memory)
{
super(tailBuffer(memory).order(ByteOrder.BIG_ENDIAN));
this.memory = memory;
}
 
public SafeMemory currentBuffer()
{
return memory;
}
 
@Override
protected void reallocate(long count)
{
long newCapacity = calculateNewSize(count);
if (newCapacity != capacity())
{
long position = length();
ByteOrder order = buffer.order();
 
SafeMemory oldBuffer = memory;
memory = this.memory.copy(newCapacity);
buffer = tailBuffer(memory);
 
int newPosition = (int) (position - tailOffset(memory));
buffer.position(newPosition);
buffer.order(order);
 
oldBuffer.free();
}
}
 
public void setCapacity(long newCapacity)
{
reallocate(newCapacity);
}
 
public void close()
{
memory.close();
}
 
public Throwable close(Throwable accumulate)
{
return memory.close(accumulate);
}
 
public long length()
{
return tailOffset(memory) +  buffer.position();
}
 
public long capacity()
{
return memory.size();
}
 
@Override
public SafeMemoryWriter order(ByteOrder order)
{
super.order(order);
return this;
}
 
@Override
public long validateReallocation(long newSize)
{
return newSize;
}
 
private static long tailOffset(Memory memory)
{
return Math.max(0, memory.size - Integer.MAX_VALUE);
}
 
private static ByteBuffer tailBuffer(Memory memory)
{
return memory.asByteBuffer(tailOffset(memory), (int) 
Math.min(memory.size, Integer.MAX_VALUE));
}
}
{code}
The class appears to be intended to work with buffers larger than 
Integer.MAX_VALUE. However, if the initial size of the buffer is larger than 
that, the initial value of length() will be incorrect (it won't be zero) and 
writing via the DataOutputBuffer will write to the wrong location (it won't 
start at offset 0).
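To make that concrete, the arithmetic can be reproduced in isolation (class and 
method names are illustrative; the constant mirrors tailOffset() in the class 
above):

{code:java}
public class TailOffsetDemo
{
    static final long IMAX = Integer.MAX_VALUE; // 2147483647

    // Mirrors SafeMemoryWriter.tailOffset(): how far the tail ByteBuffer
    // sits from the start of the backing memory.
    static long tailOffset(long memorySize)
    {
        return Math.max(0, memorySize - IMAX);
    }

    // length() immediately after construction: buffer.position() is 0,
    // so length() == tailOffset(initialCapacity) + 0.
    static long initialLength(long initialCapacity)
    {
        return tailOffset(initialCapacity);
    }

    public static void main(String[] args)
    {
        System.out.println(initialLength(1L << 30)); // small buffer: length 0, correct
        System.out.println(initialLength(3L << 30)); // > 2 GiB buffer: non-zero, so
                                                     // writes start past offset 0
    }
}
{code}

For any initial capacity above Integer.MAX_VALUE the writer believes it already 
holds data, which matches the corrupt index summaries seen in the assertion.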
 
 
{code:java}
public IndexSummaryBuilder(long expectedKeys, int minIndexInterval, int 
samplingLevel)
{
this.samplingLevel = samplingLevel;
this.startPoints = Downsampling.getStartPoints(BASE_SAMPLING_LEVEL, 
samplingLevel);
 
long maxExpectedEntries = expectedKeys / minIndexInterval;
if (maxExpectedEntries > Integer.MAX_VALUE)
{
// that's a _lot_ of keys, and a very low min index interval
int effectiveMinInterval = (int) Math.ceil((double) 
Integer.MAX_VALUE / expectedKeys);
maxExpectedEntries = expectedKeys / effectiveMinInterval;
assert maxExpectedEntries <= Integer.MAX_VALUE : maxExpectedEntries;
logger.warn("min_index_interval of {} is too low for {} expected 
keys; using interval of {} instead",
minIndexInterval, expectedKeys, effectiveMinInterval);
this.minIndexInterval = effectiveMinInterval;
}
else
{
this.minIndexInterval = minIndexInterval;
}
 
// for initializing data structures, adjust our estimates based on the 
sampling level
maxExpectedEntries = Math.max(1, (maxExpectedEntries * samplingLevel) / 
BASE_SAMPLING_LEVEL);
offsets = new SafeMemoryWriter(4 * 
maxExpectedEntries).order(ByteOrder.nativeOrder());
entries = new SafeMemoryWriter(40 * 
maxExpectedEntries).order(ByteOrder.nativeOrder());
 
// the summary will always contain the first index entry (downsampling 
will never remove it)
nextSamplePosition = 0;
  

[jira] [Updated] (CASSANDRA-13572) describecluster shows sub-snitch for DynamicEndpointSnitch

2017-06-05 Thread Jay Zhuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Zhuang updated CASSANDRA-13572:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> describecluster shows sub-snitch for DynamicEndpointSnitch
> --
>
> Key: CASSANDRA-13572
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13572
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
>Priority: Minor
>  Labels: nodetool
> Fix For: 4.0
>
>
> {{nodetool describecluster}} only shows the first-level snitch name. If 
> DynamicSnitch is enabled, it doesn't give the sub-snitch name, which is also 
> very useful. For example:
> {noformat}
> Cluster Information:
> Name: Test Cluster
> Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
> Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
> Schema versions:
> 59a1610b-0384-337c-a2c5-9c8efaba12be: [127.0.0.1]
> {noformat}
> It would be better to show sub-snitch name if it's DynamicSnitch.






[jira] [Commented] (CASSANDRA-9608) Support Java 9

2017-06-05 Thread Alan Bateman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037074#comment-16037074
 ] 

Alan Bateman commented on CASSANDRA-9608:
-

It looks like JMXServerUtils has copied the approach used internally in the JMX 
agent. This should not be necessary, and I hope it can be re-examined. If needed, 
the jmx-...@openjdk.java.net mailing list is the place where the JDK's JMX agent 
is maintained.

> Support Java 9
> --
>
> Key: CASSANDRA-9608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Priority: Minor
>
> This ticket is intended to group all issues found to support Java 9 in the 
> future.
> From what I've found out so far:
> * Maven dependency {{com.sun:tools:jar:0}} via cobertura cannot be resolved. 
> It can be easily solved using this patch:
> {code}
> -    <dependency groupId="net.sourceforge.cobertura" artifactId="cobertura"/>
> +    <dependency groupId="net.sourceforge.cobertura" artifactId="cobertura">
> +      <exclusion groupId="com.sun" artifactId="tools"/>
> +    </dependency>
> {code}
> * Another issue is that {{sun.misc.Unsafe}} no longer contains the methods 
> {{monitorEnter}} + {{monitorExit}}. These methods are used by 
> {{o.a.c.utils.concurrent.Locks}} which is only used by 
> {{o.a.c.db.AtomicBTreeColumns}}.
> I don't mind to start working on this yet since Java 9 is in a too early 
> development phase.






[jira] [Commented] (CASSANDRA-13403) nodetool repair breaks SASI index

2017-06-05 Thread Igor Novgorodov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037049#comment-16037049
 ] 

Igor Novgorodov commented on CASSANDRA-13403:
-

Any updates? Thanks in advance

> nodetool repair breaks SASI index
> -
>
> Key: CASSANDRA-13403
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13403
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
> Environment: 3.10
>Reporter: Igor Novgorodov
>Assignee: Alex Petrov
>
> I've got table:
> {code}
> CREATE TABLE cservice.bulks_recipients (
> recipient text,
> bulk_id uuid,
> datetime_final timestamp,
> datetime_sent timestamp,
> request_id uuid,
> status int,
> PRIMARY KEY (recipient, bulk_id)
> ) WITH CLUSTERING ORDER BY (bulk_id ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'ALL'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> CREATE CUSTOM INDEX bulk_recipients_bulk_id ON cservice.bulks_recipients 
> (bulk_id) USING 'org.apache.cassandra.index.sasi.SASIIndex';
> {code}
> There are 11 rows in it:
> {code}
> > select * from bulks_recipients;
> ...
> (11 rows)
> {code}
> Let's query by index (all rows have the same *bulk_id*):
> {code}
> > select * from bulks_recipients where bulk_id = 
> > baa94815-e276-4ca4-adda-5b9734e6c4a5;   
> >   
> ...
> (11 rows)
> {code}
> Ok, everything is fine.
> Now I'm doing *nodetool repair --partitioner-range --job-threads 4 --full* on 
> each node in the cluster sequentially.
> After it finished:
> {code}
> > select * from bulks_recipients where bulk_id = 
> > baa94815-e276-4ca4-adda-5b9734e6c4a5;
> ...
> (2 rows)
> {code}
> Only two rows.
> While the rows are actually there:
> {code}
> > select * from bulks_recipients;
> ...
> (11 rows)
> {code}
> If I issue an incremental repair on a random node, I can get, say, 7 rows 
> from the index query.
> Dropping the index and recreating it fixes the issue. Is it a bug, or am I 
> doing the repair the wrong way?
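For reference, the drop-and-recreate workaround described in the report, as CQL 
(index and table names taken from the schema above):

{code}
DROP INDEX cservice.bulk_recipients_bulk_id;
CREATE CUSTOM INDEX bulk_recipients_bulk_id ON cservice.bulks_recipients (bulk_id)
    USING 'org.apache.cassandra.index.sasi.SASIIndex';
{code}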






[jira] [Commented] (CASSANDRA-13067) Integer overflows with file system size reported by Amazon Elastic File System (EFS)

2017-06-05 Thread Matt Wringe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037017#comment-16037017
 ] 

Matt Wringe commented on CASSANDRA-13067:
-

That seems like an acceptable solution to me. I am fine with whatever allows us 
to get this patch committed upstream so that we don't have to continue to carry 
it ourselves.

[~blerer] Is this something you can take care of yourself, or are you expecting 
an updated patch from me?

> Integer overflows with file system size reported by Amazon Elastic File 
> System (EFS)
> 
>
> Key: CASSANDRA-13067
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13067
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra in OpenShift running on Amazon EC2 instance 
> with EFS mounted for data
>Reporter: Michael Hanselmann
>Assignee: Matt Wringe
> Attachments: 0001-Handle-exabyte-sized-filesystems.patch
>
>
> When not explicitly configured Cassandra uses 
> [{{nio.FileStore.getTotalSpace}}|https://docs.oracle.com/javase/7/docs/api/java/nio/file/FileStore.html]
>  to determine the total amount of available space in order to [calculate the 
> preferred commit log 
> size|https://github.com/apache/cassandra/blob/cassandra-3.9/src/java/org/apache/cassandra/config/DatabaseDescriptor.java#L553].
>  [Amazon EFS|https://aws.amazon.com/efs/] instances report a filesystem size 
> of 8 EiB when empty. [{{getTotalSpace}} causes an integer overflow 
> (JDK-8162520)|https://bugs.openjdk.java.net/browse/JDK-8162520] and returns a 
> negative number, resulting in a negative preferred size and causing the 
> checked integer to throw.
> Overriding {{commitlog_total_space_in_mb}} is not sufficient as 
> [{{DataDirectory.getAvailableSpace}}|https://github.com/apache/cassandra/blob/cassandra-3.9/src/java/org/apache/cassandra/db/Directories.java#L550]
>  makes use of {{nio.FileStore.getUsableSpace}}.
> [AMQ-6441] is a comparable issue in ActiveMQ.
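The attached patch isn't reproduced in this thread; a minimal sketch of the kind 
of guard it implies (the clampSize() helper is illustrative, not taken from the 
patch):

{code:java}
import java.io.IOException;
import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Paths;

public class SafeTotalSpace
{
    // Assumption: negative sizes come from the JDK-8162520 overflow on
    // exabyte-scale filesystems such as EFS; clamp them to a large sentinel
    // instead of letting downstream size math go negative.
    static long clampSize(long reported)
    {
        return reported < 0 ? Long.MAX_VALUE : reported;
    }

    public static void main(String[] args) throws IOException
    {
        FileStore store = Files.getFileStore(Paths.get("."));
        System.out.println(clampSize(store.getTotalSpace()));
    }
}
{code}

The same clamp would be needed for getUsableSpace() and getUnallocatedSpace(), 
since Directories.getAvailableSpace goes through the same JDK call.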






[jira] [Created] (CASSANDRA-13573) sstabledump doesn't print out tombstone information for frozen set collection

2017-06-05 Thread Stefano Ortolani (JIRA)
Stefano Ortolani created CASSANDRA-13573:


 Summary: sstabledump doesn't print out tombstone information for 
frozen set collection
 Key: CASSANDRA-13573
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13573
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Stefano Ortolani


Schema and data:
{noformat}
CREATE TABLE ks.cf (
    hash blob,
    report_id timeuuid,
    subject_ids frozen<set<int>>,
    PRIMARY KEY (hash, report_id)
) WITH CLUSTERING ORDER BY (report_id DESC);

INSERT INTO ks.cf (hash, report_id, subject_ids) VALUES (0x1213, now(), 
{1,2,4,5});
{noformat}

sstabledump output is:

{noformat}
sstabledump mc-1-big-Data.db 
[
  {
"partition" : {
  "key" : [ "1213" ],
  "position" : 0
},
"rows" : [
  {
"type" : "row",
"position" : 16,
"clustering" : [ "ec01eed0-49d9-11e7-b39a-97a96f529c02" ],
"liveness_info" : { "tstamp" : "2017-06-05T10:29:57.434856Z" },
"cells" : [
  { "name" : "subject_ids", "value" : "" }
]
  }
]
  }
]
{noformat}

While the values are really there:

{noformat}
cqlsh:ks> select * from cf ;

 hash   | report_id| subject_ids
+--+-
 0x1213 | 02bafff0-49d9-11e7-b39a-97a96f529c02 |   {1, 2, 4}
{noformat}







[jira] [Commented] (CASSANDRA-13043) Exceptions when booting up node (state jump)

2017-06-05 Thread Stefano Ortolani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16036783#comment-16036783
 ] 

Stefano Ortolani commented on CASSANDRA-13043:
--

Maybe a dev should rename the title as it's quite misleading right now :S

> Exceptions when booting up node (state jump)
> 
>
> Key: CASSANDRA-13043
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13043
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Debian
>Reporter: Catalin Alexandru Zamfir
>
> In version 3.9 of Cassandra, we get the following exceptions in the 
> system.log whenever booting an agent. They seem to grow in number with each 
> reboot. Any idea where they come from, or what we can do about them? Note that 
> the cluster is healthy (it has sufficient live nodes).
> {noformat}
> 12/14/2016 12:39:47 PMINFO  10:39:47 Updating topology for /10.136.64.120
> 12/14/2016 12:39:47 PMINFO  10:39:47 Updating topology for /10.136.64.120
> 12/14/2016 12:39:47 PMWARN  10:39:47 Uncaught exception on thread 
> Thread[CounterMutationStage-111,5,main]: {}
> 12/14/2016 12:39:47 PMorg.apache.cassandra.exceptions.UnavailableException: 
> Cannot achieve consistency level LOCAL_QUORUM
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.db.ConsistencyLevel.assureSufficientLiveNodes(ConsistencyLevel.java:313)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.AbstractWriteResponseHandler.assureSufficientLiveNodes(AbstractWriteResponseHandler.java:146)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:1054)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.StorageProxy.applyCounterMutationOnLeader(StorageProxy.java:1450)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.db.CounterMutationVerbHandler.doVerb(CounterMutationVerbHandler.java:48)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:64) 
> ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_111]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat java.lang.Thread.run(Thread.java:745) 
> [na:1.8.0_111]
> 12/14/2016 12:39:47 PMWARN  10:39:47 Uncaught exception on thread 
> Thread[CounterMutationStage-118,5,main]: {}
> 12/14/2016 12:39:47 PMorg.apache.cassandra.exceptions.UnavailableException: 
> Cannot achieve consistency level LOCAL_QUORUM
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.db.ConsistencyLevel.assureSufficientLiveNodes(ConsistencyLevel.java:313)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.AbstractWriteResponseHandler.assureSufficientLiveNodes(AbstractWriteResponseHandler.java:146)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:1054)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.StorageProxy.applyCounterMutationOnLeader(StorageProxy.java:1450)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.db.CounterMutationVerbHandler.doVerb(CounterMutationVerbHandler.java:48)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:64) 
> ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_111]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> 

[jira] [Comment Edited] (CASSANDRA-13043) Exceptions when booting up node (state jump)

2017-06-05 Thread Stefano Ortolani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16036707#comment-16036707
 ] 

Stefano Ortolani edited comment on CASSANDRA-13043 at 6/5/17 9:24 AM:
--

Hi, I can confirm I see the same issue on C* 3.0.13.
As far as I understand, it is connected to a substantial number of counter 
mutations in the commit log, and to C*'s attempt to apply those mutations before 
all nodes are flagged as alive.

Note that I am having this after restarting the node via {{nodetool drain}}

{noformat}
INFO  [HANDSHAKE-/10.12.33.4] 2017-06-02 22:43:53,908 
OutboundTcpConnection.java:537 - Handshaking version with /10.12.33.4
INFO  [GossipStage:1] 2017-06-02 22:43:54,345 StorageService.java:1991 - Node 
/10.12.33.8 state jump to NORMAL
WARN  [SharedPool-Worker-129] 2017-06-02 22:43:54,366 
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-129,5,main]: {}
org.apache.cassandra.exceptions.UnavailableException: Cannot achieve 
consistency level QUORUM
at 
org.apache.cassandra.db.ConsistencyLevel.assureSufficientLiveNodes(ConsistencyLevel.java:334)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.service.AbstractWriteResponseHandler.assureSufficientLiveNodes(AbstractWriteResponseHandler.java:146)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:1051) 
~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.service.StorageProxy.applyCounterMutationOnLeader(StorageProxy.java:1447)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.db.CounterMutationVerbHandler.doVerb(CounterMutationVerbHandler.java:48)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67) 
~[apache-cassandra-3.0.13.jar:3.0.13]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_91]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
 [apache-cassandra-3.0.13.jar:3.0.13]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-3.0.13.jar:3.0.13]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
WARN  [SharedPool-Worker-132] 2017-06-02 22:43:54,366 
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-132,5,main]: {}
org.apache.cassandra.exceptions.UnavailableException: Cannot achieve 
consistency level QUORUM
at 
org.apache.cassandra.db.ConsistencyLevel.assureSufficientLiveNodes(ConsistencyLevel.java:334)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.service.AbstractWriteResponseHandler.assureSufficientLiveNodes(AbstractWriteResponseHandler.java:146)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:1051) 
~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.service.StorageProxy.applyCounterMutationOnLeader(StorageProxy.java:1447)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.db.CounterMutationVerbHandler.doVerb(CounterMutationVerbHandler.java:48)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67) 
~[apache-cassandra-3.0.13.jar:3.0.13]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_91]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
 [apache-cassandra-3.0.13.jar:3.0.13]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-3.0.13.jar:3.0.13]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
WARN  [SharedPool-Worker-130] 2017-06-02 22:43:54,367 
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-130,5,main]: {}
{noformat}



was (Author: ostefano):
Hi, I can confirm I have the same on C* 3.0.13.
As far as I understand it is connected to a substantial number of counter 
mutations in the commit log, and C* attempt to apply those mutations before all 
nodes are flagged as alive.

{noformat}
INFO  [HANDSHAKE-/10.12.33.4] 2017-06-02 22:43:53,908 
OutboundTcpConnection.java:537 - Handshaking version with /10.12.33.4
INFO  

[jira] [Comment Edited] (CASSANDRA-13043) Exceptions when booting up node (state jump)

2017-06-05 Thread Stefano Ortolani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16036707#comment-16036707
 ] 

Stefano Ortolani edited comment on CASSANDRA-13043 at 6/5/17 9:23 AM:
--

Hi, I can confirm I see the same issue on C* 3.0.13.
As far as I understand, it is connected to a substantial number of counter 
mutations in the commit log, and to C*'s attempt to apply those mutations 
before all nodes are flagged as alive.

{noformat}
INFO  [HANDSHAKE-/10.12.33.4] 2017-06-02 22:43:53,908 
OutboundTcpConnection.java:537 - Handshaking version with /10.12.33.4
INFO  [GossipStage:1] 2017-06-02 22:43:54,345 StorageService.java:1991 - Node 
/10.12.33.8 state jump to NORMAL
WARN  [SharedPool-Worker-129] 2017-06-02 22:43:54,366 
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-129,5,main]: {}
org.apache.cassandra.exceptions.UnavailableException: Cannot achieve 
consistency level QUORUM
at 
org.apache.cassandra.db.ConsistencyLevel.assureSufficientLiveNodes(ConsistencyLevel.java:334)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.service.AbstractWriteResponseHandler.assureSufficientLiveNodes(AbstractWriteResponseHandler.java:146)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:1051) 
~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.service.StorageProxy.applyCounterMutationOnLeader(StorageProxy.java:1447)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.db.CounterMutationVerbHandler.doVerb(CounterMutationVerbHandler.java:48)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67) 
~[apache-cassandra-3.0.13.jar:3.0.13]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_91]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
 [apache-cassandra-3.0.13.jar:3.0.13]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-3.0.13.jar:3.0.13]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
WARN  [SharedPool-Worker-132] 2017-06-02 22:43:54,366 
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-132,5,main]: {}
org.apache.cassandra.exceptions.UnavailableException: Cannot achieve 
consistency level QUORUM
at 
org.apache.cassandra.db.ConsistencyLevel.assureSufficientLiveNodes(ConsistencyLevel.java:334)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.service.AbstractWriteResponseHandler.assureSufficientLiveNodes(AbstractWriteResponseHandler.java:146)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:1051) 
~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.service.StorageProxy.applyCounterMutationOnLeader(StorageProxy.java:1447)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.db.CounterMutationVerbHandler.doVerb(CounterMutationVerbHandler.java:48)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67) 
~[apache-cassandra-3.0.13.jar:3.0.13]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_91]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
 [apache-cassandra-3.0.13.jar:3.0.13]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-3.0.13.jar:3.0.13]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
WARN  [SharedPool-Worker-130] 2017-06-02 22:43:54,367 
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-130,5,main]: {}
{noformat}
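The UnavailableException above comes down to simple arithmetic: QUORUM on a replication factor of 3 needs 3/2 + 1 = 2 live replicas, but during commit log replay gossip may still flag only the local node as alive. A self-contained sketch of that liveness check (class and method names are illustrative stand-ins, not Cassandra's actual API):

```java
// Illustrative model of the QUORUM liveness check that fails during replay.
// Class and method names are stand-ins, not Cassandra's real classes.
public class QuorumCheck {
    // Replicas a QUORUM write must block for at a given replication factor.
    static int blockFor(int replicationFactor) {
        return replicationFactor / 2 + 1;
    }

    // Mirrors the spirit of ConsistencyLevel.assureSufficientLiveNodes:
    // refuse the write if fewer endpoints are alive than QUORUM demands.
    static void assureSufficientLiveNodes(int liveEndpoints, int replicationFactor) {
        int required = blockFor(replicationFactor);
        if (liveEndpoints < required)
            throw new IllegalStateException("Cannot achieve consistency level QUORUM: need "
                    + required + " live replicas, have " + liveEndpoints);
    }

    public static void main(String[] args) {
        try {
            // Right after startup, gossip may report only the local node alive.
            assureSufficientLiveNodes(1, 3);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Once gossip marks a second replica alive the same check passes, which is consistent with the warnings stopping after the peers' state jumps to NORMAL.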



was (Author: ostefano):
Hi, I can confirm I have the same on C* 3.0.13.
As far as I understand it is connected to many counter mutations in the commit 
log, and C* attempt to apply those mutations before all nodes are flagged as 
alive.

{noformat}
INFO  [HANDSHAKE-/10.12.33.4] 2017-06-02 22:43:53,908 
OutboundTcpConnection.java:537 - Handshaking version with /10.12.33.4
INFO  [GossipStage:1] 2017-06-02 22:43:54,345 StorageService.java:1991 - Node 
/10.12.33.8 state jump 

[jira] [Comment Edited] (CASSANDRA-13043) Exceptions when booting up node (state jump)

2017-06-05 Thread Stefano Ortolani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16036707#comment-16036707
 ] 

Stefano Ortolani edited comment on CASSANDRA-13043 at 6/5/17 9:22 AM:
--

Hi, I can confirm I see the same issue on C* 3.0.13.
As far as I understand, it is connected to many counter mutations in the 
commit log, and to C*'s attempt to apply those mutations before all nodes are 
flagged as alive.

{noformat}
INFO  [HANDSHAKE-/10.12.33.4] 2017-06-02 22:43:53,908 
OutboundTcpConnection.java:537 - Handshaking version with /10.12.33.4
INFO  [GossipStage:1] 2017-06-02 22:43:54,345 StorageService.java:1991 - Node 
/10.12.33.8 state jump to NORMAL
WARN  [SharedPool-Worker-129] 2017-06-02 22:43:54,366 
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-129,5,main]: {}
org.apache.cassandra.exceptions.UnavailableException: Cannot achieve 
consistency level QUORUM
at 
org.apache.cassandra.db.ConsistencyLevel.assureSufficientLiveNodes(ConsistencyLevel.java:334)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.service.AbstractWriteResponseHandler.assureSufficientLiveNodes(AbstractWriteResponseHandler.java:146)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:1051) 
~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.service.StorageProxy.applyCounterMutationOnLeader(StorageProxy.java:1447)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.db.CounterMutationVerbHandler.doVerb(CounterMutationVerbHandler.java:48)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67) 
~[apache-cassandra-3.0.13.jar:3.0.13]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_91]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
 [apache-cassandra-3.0.13.jar:3.0.13]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-3.0.13.jar:3.0.13]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
WARN  [SharedPool-Worker-132] 2017-06-02 22:43:54,366 
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-132,5,main]: {}
org.apache.cassandra.exceptions.UnavailableException: Cannot achieve 
consistency level QUORUM
at 
org.apache.cassandra.db.ConsistencyLevel.assureSufficientLiveNodes(ConsistencyLevel.java:334)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.service.AbstractWriteResponseHandler.assureSufficientLiveNodes(AbstractWriteResponseHandler.java:146)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:1051) 
~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.service.StorageProxy.applyCounterMutationOnLeader(StorageProxy.java:1447)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.db.CounterMutationVerbHandler.doVerb(CounterMutationVerbHandler.java:48)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67) 
~[apache-cassandra-3.0.13.jar:3.0.13]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_91]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
 [apache-cassandra-3.0.13.jar:3.0.13]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-3.0.13.jar:3.0.13]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
WARN  [SharedPool-Worker-130] 2017-06-02 22:43:54,367 
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-130,5,main]: {}
{noformat}



was (Author: ostefano):
Hi, I can confirm I have the same on C* 3.0.13.
As far as I understand it is connected to many counter mutations in the commit 
log, and C* attempt to apply those mutations before all nodes are flagged as 
alive.

{{
INFO  [HANDSHAKE-/10.12.33.4] 2017-06-02 22:43:53,908 
OutboundTcpConnection.java:537 - Handshaking version with /10.12.33.4
INFO  [GossipStage:1] 2017-06-02 22:43:54,345 StorageService.java:1991 - Node 
/10.12.33.8 state jump to NORMAL
WARN  

[jira] [Commented] (CASSANDRA-13043) Exceptions when booting up node (state jump)

2017-06-05 Thread Stefano Ortolani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16036707#comment-16036707
 ] 

Stefano Ortolani commented on CASSANDRA-13043:
--

Hi, I can confirm I see the same issue on C* 3.0.13.
As far as I understand, it is connected to many counter mutations in the 
commit log, and to C*'s attempt to apply those mutations before all nodes are 
flagged as alive.

{{
INFO  [HANDSHAKE-/10.12.33.4] 2017-06-02 22:43:53,908 
OutboundTcpConnection.java:537 - Handshaking version with /10.12.33.4
INFO  [GossipStage:1] 2017-06-02 22:43:54,345 StorageService.java:1991 - Node 
/10.12.33.8 state jump to NORMAL
WARN  [SharedPool-Worker-129] 2017-06-02 22:43:54,366 
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-129,5,main]: {}
org.apache.cassandra.exceptions.UnavailableException: Cannot achieve 
consistency level QUORUM
at 
org.apache.cassandra.db.ConsistencyLevel.assureSufficientLiveNodes(ConsistencyLevel.java:334)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.service.AbstractWriteResponseHandler.assureSufficientLiveNodes(AbstractWriteResponseHandler.java:146)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:1051) 
~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.service.StorageProxy.applyCounterMutationOnLeader(StorageProxy.java:1447)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.db.CounterMutationVerbHandler.doVerb(CounterMutationVerbHandler.java:48)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67) 
~[apache-cassandra-3.0.13.jar:3.0.13]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_91]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
 [apache-cassandra-3.0.13.jar:3.0.13]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-3.0.13.jar:3.0.13]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
WARN  [SharedPool-Worker-132] 2017-06-02 22:43:54,366 
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-132,5,main]: {}
org.apache.cassandra.exceptions.UnavailableException: Cannot achieve 
consistency level QUORUM
at 
org.apache.cassandra.db.ConsistencyLevel.assureSufficientLiveNodes(ConsistencyLevel.java:334)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.service.AbstractWriteResponseHandler.assureSufficientLiveNodes(AbstractWriteResponseHandler.java:146)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:1051) 
~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.service.StorageProxy.applyCounterMutationOnLeader(StorageProxy.java:1447)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.db.CounterMutationVerbHandler.doVerb(CounterMutationVerbHandler.java:48)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67) 
~[apache-cassandra-3.0.13.jar:3.0.13]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_91]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 ~[apache-cassandra-3.0.13.jar:3.0.13]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
 [apache-cassandra-3.0.13.jar:3.0.13]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-3.0.13.jar:3.0.13]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
WARN  [SharedPool-Worker-130] 2017-06-02 22:43:54,367 
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-130,5,main]: {}
}}


> Exceptions when booting up node (state jump)
> 
>
> Key: CASSANDRA-13043
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13043
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Debian
>Reporter: Catalin Alexandru Zamfir
>
> In version 3.9 of Cassandra, we get the following exceptions on the 
> system.log whenever booting an agent. They seem to grow in number with each 
> reboot. Any idea where 

[jira] [Comment Edited] (CASSANDRA-13068) Fully expired sstable not dropped when running out of disk space

2017-06-05 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16036678#comment-16036678
 ] 

Marcus Eriksson edited comment on CASSANDRA-13068 at 6/5/17 8:37 AM:
-

Instead of that {{fullyExpiredSSTables.isEmpty()}} check, could we just 
subtract the size of the {{fullyExpiredSSTables}} from {{expectedWriteSize}} 
in {{CompactionTask#checkAvailableDiskSpace}}?
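
The suggestion amounts to excluding the expired sstables' on-disk size from the space the compaction must reserve, since fully expired sstables are dropped rather than rewritten. A hedged sketch with stand-in names (the real logic lives in {{CompactionTask#checkAvailableDiskSpace}}):

```java
// Sketch of subtracting fully expired sstables from the expected write size
// before the free-disk-space check. Names are stand-ins, not Cassandra's
// actual API.
import java.util.List;

public class DiskSpaceCheck {
    static long totalOnDiskSize(List<Long> sstableSizes) {
        return sstableSizes.stream().mapToLong(Long::longValue).sum();
    }

    // Expired sstables are simply dropped, never rewritten, so their size
    // should not count toward the space the compaction reserves.
    static long adjustedWriteSize(long expectedWriteSize, List<Long> fullyExpiredSizes) {
        return Math.max(0, expectedWriteSize - totalOnDiskSize(fullyExpiredSizes));
    }

    static boolean hasEnoughSpace(long availableBytes, long expectedWriteSize,
                                  List<Long> fullyExpiredSizes) {
        return availableBytes >= adjustedWriteSize(expectedWriteSize, fullyExpiredSizes);
    }
}
```

With this adjustment a compaction whose inputs are entirely expired needs essentially no free space, so the drop can proceed even on a nearly full disk.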


was (Author: krummas):
instead of that {{fullyExpiredSSTables.isEmpty()}} could we just subtract the 
size of the {{fullyExpiredSSTables}} from {{expectedWriteSize}} in 
{{CompactionTask#checkAvailableDiskSpace}}?

> Fully expired sstable not dropped when running out of disk space
> 
>
> Key: CASSANDRA-13068
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13068
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Marcus Eriksson
>Assignee: Lerh Chuan Low
>  Labels: lhf
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> If a fully expired sstable is larger than the remaining disk space we won't 
> run the compaction that can drop the sstable (i.e., our disk space check 
> should not include the fully expired sstables).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13068) Fully expired sstable not dropped when running out of disk space

2017-06-05 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16036678#comment-16036678
 ] 

Marcus Eriksson commented on CASSANDRA-13068:
-

Instead of that {{fullyExpiredSSTables.isEmpty()}} check, could we just 
subtract the size of the {{fullyExpiredSSTables}} from {{expectedWriteSize}} 
in {{CompactionTask#checkAvailableDiskSpace}}?

> Fully expired sstable not dropped when running out of disk space
> 
>
> Key: CASSANDRA-13068
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13068
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Marcus Eriksson
>Assignee: Lerh Chuan Low
>  Labels: lhf
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> If a fully expired sstable is larger than the remaining disk space we won't 
> run the compaction that can drop the sstable (i.e., our disk space check 
> should not include the fully expired sstables).






[jira] [Commented] (CASSANDRA-13572) describecluster shows sub-snitch for DynamicEndpointSnitch

2017-06-05 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16036653#comment-16036653
 ] 

Romain Hardouin commented on CASSANDRA-13572:
-

Duplicate of CASSANDRA-13528

> describecluster shows sub-snitch for DynamicEndpointSnitch
> --
>
> Key: CASSANDRA-13572
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13572
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
>Priority: Minor
>  Labels: nodetool
> Fix For: 4.0
>
>
> {{nodetool describecluster}} only shows the first-level snitch name; if 
> DynamicSnitch is enabled, it doesn't give the sub-snitch name, which is also 
> very useful. For example:
> {noformat}
> Cluster Information:
> Name: Test Cluster
> Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
> Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
> Schema versions:
> 59a1610b-0384-337c-a2c5-9c8efaba12be: [127.0.0.1]
> {noformat}
> It would be better to show the sub-snitch name if it's DynamicSnitch.
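
One way to surface the wrapped snitch is to special-case the dynamic wrapper when building the cluster description. A self-contained sketch (the interface and field names here are illustrative, not Cassandra's actual locator API):

```java
// Sketch of reporting the snitch a dynamic wrapper delegates to.
// Types and fields are illustrative stand-ins, not Cassandra's classes.
interface EndpointSnitch {}

class SimpleSnitch implements EndpointSnitch {}

class DynamicEndpointSnitch implements EndpointSnitch {
    final EndpointSnitch subsnitch;
    DynamicEndpointSnitch(EndpointSnitch subsnitch) { this.subsnitch = subsnitch; }
}

public class DescribeCluster {
    static String snitchName(EndpointSnitch snitch) {
        if (snitch instanceof DynamicEndpointSnitch)
            // Report both the wrapper and the snitch it delegates to.
            return snitch.getClass().getSimpleName() + " (underlying: "
                 + ((DynamicEndpointSnitch) snitch).subsnitch.getClass().getSimpleName() + ")";
        return snitch.getClass().getSimpleName();
    }
}
```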






[jira] [Commented] (CASSANDRA-13530) GroupCommitLogService

2017-06-05 Thread Yuji Ito (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16036648#comment-16036648
 ] 

Yuji Ito commented on CASSANDRA-13530:
--

I measured latency by fixing concurrency at 256 and using a Guava rate limiter 
to generate requests at a fixed rate ([^GuavaRequestThread.java]).
The environment and Cassandra configuration are the same as before.

The latency of GroupCommitLog is less than half that of BatchCommitLog for 
SELECT and UPDATE.
I attached the result file ([^groupCommitLog_result.xlsx]), including latency 
histograms.

- SELECT - Batch 2ms
||Throughput [ops]||Avg. latency [ms]||
|100|2.6|
|200|12.6|
|500|20.3|
|1000|27.0|

- SELECT - Group 10ms
||Throughput [ops]||Avg. latency [ms]||
|100|8.1|
|200|8.3|
|500|9.3|
|1000|26.2|

- SELECT - Group 15ms
||Throughput [ops]||Avg. latency [ms]||
|100|10.8|
|200|11.0|
|500|13.5|
|1000|15.0|

- UPDATE - Batch 2ms
||Throughput [ops]||Avg. latency [ms]||
|100|86.1|
|200|108.6|
|500|121.6|
|1000|133.9|

- UPDATE - Group 10ms
||Throughput [ops]||Avg. latency [ms]||
|100|37.8|
|200|104.0|
|500|118.7|
|1000|131.4|

- UPDATE - Group 15ms
||Throughput [ops]||Avg. latency [ms]||
|100|54.6|
|200|56.2|
|500|115.2|
|1000|122.1|

> GroupCommitLogService
> -
>
> Key: CASSANDRA-13530
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13530
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Yuji Ito
>Assignee: Yuji Ito
> Fix For: 2.2.x, 3.0.x, 3.11.x
>
> Attachments: groupCommit22.patch, groupCommit30.patch, 
> groupCommit3x.patch, groupCommitLog_result.xlsx, GuavaRequestThread.java, 
> MicroRequestThread.java
>
>
> I propose a new CommitLogService, GroupCommitLogService, to improve 
> throughput when many requests are received.
> It improved throughput by up to 94%.
> I'd like to discuss this CommitLogService.
> Currently, we can select one of two CommitLog services: Periodic and Batch.
> With Periodic, we might lose commit log entries that haven't been written to 
> the disk.
> With Batch, we write the commit log to the disk every time. The size of each 
> commit log write is very small (< 4KB). Under high concurrency, these writes 
> are gathered and persisted to the disk at once. But under insufficient 
> concurrency, many small writes are issued and performance decreases due to 
> the latency of the disk. Even on an SSD, issuing many IO commands decreases 
> performance.
> GroupCommitLogService writes several commit log entries to the disk at once.
> The patch adds GroupCommitLogService (it is enabled by setting 
> `commitlog_sync` and `commitlog_sync_group_window_in_ms` in cassandra.yaml).
> The only difference from Batch is waiting for the semaphore.
> By waiting for the semaphore, several commit log writes are executed at the 
> same time.
> With GroupCommitLogService, latency becomes worse when there is no 
> concurrency.
> I measured performance with my microbenchmark (MicroRequestThread.java) by 
> increasing the number of threads. The cluster has 3 nodes (replication 
> factor: 3). Each node is an AWS EC2 m4.large instance with a 200 IOPS io1 
> volume.
> The results are below. GroupCommitLogService with a 10ms window improved 
> UPDATE with Paxos by 94% and SELECT with Paxos by 76%.
> h6. SELECT / sec
> ||\# of threads||Batch 2ms||Group 10ms||
> |1|192|103|
> |2|163|212|
> |4|264|416|
> |8|454|800|
> |16|744|1311|
> |32|1151|1481|
> |64|1767|1844|
> |128|2949|3011|
> |256|4723|5000|
> h6. UPDATE / sec
> ||\# of threads||Batch 2ms||Group 10ms||
> |1|45|26|
> |2|39|51|
> |4|58|102|
> |8|102|198|
> |16|167|213|
> |32|289|295|
> |64|544|548|
> |128|1046|1058|
> |256|2020|2061|
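
The window-based grouping described in this ticket (writers blocking until one shared sync covers everything queued during the window) can be modeled in a few lines of plain Java; the class names and completion mechanism here are illustrative, not the patch's actual implementation:

```java
// Minimal model of group commit: each writer blocks until one shared sync
// covers every write queued during the window. Names are illustrative,
// not the patch's actual classes.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class GroupCommitLog implements AutoCloseable {
    private final BlockingQueue<CompletableFuture<Void>> waiting = new LinkedBlockingQueue<>();
    private final ScheduledExecutorService syncer = Executors.newSingleThreadScheduledExecutor();
    private int syncCount = 0;

    public GroupCommitLog(long windowMillis) {
        // One disk sync per window, standing in for commitlog_sync_group_window_in_ms.
        syncer.scheduleAtFixedRate(this::syncGroup, windowMillis, windowMillis,
                                   TimeUnit.MILLISECONDS);
    }

    private synchronized void syncGroup() {
        if (waiting.isEmpty())
            return;
        syncCount++; // stands in for a single fsync of the commit log segment
        List<CompletableFuture<Void>> group = new ArrayList<>();
        waiting.drainTo(group);
        group.forEach(f -> f.complete(null)); // release every writer in the group
    }

    // A writer blocks here until its mutation is covered by a group sync.
    public void waitForSync() throws InterruptedException, ExecutionException {
        CompletableFuture<Void> f = new CompletableFuture<>();
        waiting.add(f);
        f.get();
    }

    public synchronized int syncs() {
        return syncCount;
    }

    @Override
    public void close() {
        syncer.shutdownNow();
    }
}
```

With many writers arriving inside one window this performs a single sync instead of one per writer, which is where the throughput gain at high concurrency comes from; a lone writer instead waits out the window, matching the worse single-thread latency in the tables above.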






[jira] [Updated] (CASSANDRA-13530) GroupCommitLogService

2017-06-05 Thread Yuji Ito (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuji Ito updated CASSANDRA-13530:
-
Attachment: GuavaRequestThread.java
groupCommitLog_result.xlsx

> GroupCommitLogService
> -
>
> Key: CASSANDRA-13530
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13530
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Yuji Ito
>Assignee: Yuji Ito
> Fix For: 2.2.x, 3.0.x, 3.11.x
>
> Attachments: groupCommit22.patch, groupCommit30.patch, 
> groupCommit3x.patch, groupCommitLog_result.xlsx, GuavaRequestThread.java, 
> MicroRequestThread.java
>
>
> I propose a new CommitLogService, GroupCommitLogService, to improve 
> throughput when many requests are received.
> It improved throughput by up to 94%.
> I'd like to discuss this CommitLogService.
> Currently, we can select one of two CommitLog services: Periodic and Batch.
> With Periodic, we might lose commit log entries that haven't been written to 
> the disk.
> With Batch, we write the commit log to the disk every time. The size of each 
> commit log write is very small (< 4KB). Under high concurrency, these writes 
> are gathered and persisted to the disk at once. But under insufficient 
> concurrency, many small writes are issued and performance decreases due to 
> the latency of the disk. Even on an SSD, issuing many IO commands decreases 
> performance.
> GroupCommitLogService writes several commit log entries to the disk at once.
> The patch adds GroupCommitLogService (it is enabled by setting 
> `commitlog_sync` and `commitlog_sync_group_window_in_ms` in cassandra.yaml).
> The only difference from Batch is waiting for the semaphore.
> By waiting for the semaphore, several commit log writes are executed at the 
> same time.
> With GroupCommitLogService, latency becomes worse when there is no 
> concurrency.
> I measured performance with my microbenchmark (MicroRequestThread.java) by 
> increasing the number of threads. The cluster has 3 nodes (replication 
> factor: 3). Each node is an AWS EC2 m4.large instance with a 200 IOPS io1 
> volume.
> The results are below. GroupCommitLogService with a 10ms window improved 
> UPDATE with Paxos by 94% and SELECT with Paxos by 76%.
> h6. SELECT / sec
> ||\# of threads||Batch 2ms||Group 10ms||
> |1|192|103|
> |2|163|212|
> |4|264|416|
> |8|454|800|
> |16|744|1311|
> |32|1151|1481|
> |64|1767|1844|
> |128|2949|3011|
> |256|4723|5000|
> h6. UPDATE / sec
> ||\# of threads||Batch 2ms||Group 10ms||
> |1|45|26|
> |2|39|51|
> |4|58|102|
> |8|102|198|
> |16|167|213|
> |32|289|295|
> |64|544|548|
> |128|1046|1058|
> |256|2020|2061|






[jira] [Comment Edited] (CASSANDRA-13547) Filtered materialized views missing data

2017-06-05 Thread Krishna Dattu Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16036632#comment-16036632
 ] 

Krishna Dattu Koneru edited comment on CASSANDRA-13547 at 6/5/17 7:30 AM:
--

This seems related to https://issues.apache.org/jira/browse/CASSANDRA-13409, 
which has a fix 
[branch|https://github.com/jasonstack/cassandra/commits/CASSANDRA-13409-3.11].
I ran the above test on [commit | 
https://github.com/jasonstack/cassandra/commit/26f3e7b96001f6148eab7cc3589b2d496c5830c5]
 from that branch and it still fails. 

I believe the reason for this 

{code:title=tombstone|borderStyle=solid}

cqlsh> SELECT * FROM test.table1_mv2;

 name | id | enabled | foo
--++-+--
  One |  1 |True | null

(1 rows)

{code}

is explained 
[here|https://issues.apache.org/jira/browse/CASSANDRA-10261?focusedCommentId=14731266=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14731266]
 and 
[here|https://issues.apache.org/jira/browse/CASSANDRA-9664?focusedCommentId=14724150=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14724150].


was (Author: krishna.koneru):
Seems related to https://issues.apache.org/jira/browse/CASSANDRA-13409 which 
has a fix 
[branch|https://github.com/jasonstack/cassandra/commits/CASSANDRA-13409-3.11].
I ran above test on [commit | 
https://github.com/jasonstack/cassandra/commit/26f3e7b96001f6148eab7cc3589b2d496c5830c5]
 from this and it still fails. 

I believe the reason for this 

{code:title=tombstone|borderStyle=solid}

cqlsh> SELECT * FROM test.table1_mv2;

 name | id | enabled | foo
--++-+--
  One |  1 |True | null

(1 rows)

{code}

is explained 
[here|https://issues.apache.org/jira/browse/CASSANDRA-10261?focusedCommentId=14731266=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14731266]
 and 
[here|https://issues.apache.org/jira/browse/CASSANDRA-9664?focusedCommentId=14724150=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14724150].

> Filtered materialized views missing data
> 
>
> Key: CASSANDRA-13547
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13547
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
> Environment: Official Cassandra 3.10 Docker image (ID 154b919bf8ce).
>Reporter: Craig Nicholson
>Assignee: Krishna Dattu Koneru
>Priority: Blocker
>  Labels: materializedviews
> Fix For: 3.11.x
>
>
> When creating a materialized view against a base table the materialized view 
> does not always reflect the correct data.
> Using the following test schema:
> {code:title=Schema|language=sql}
> DROP KEYSPACE IF EXISTS test;
> CREATE KEYSPACE test
>   WITH REPLICATION = { 
>'class' : 'SimpleStrategy', 
>'replication_factor' : 1 
>   };
> CREATE TABLE test.table1 (
> id int,
> name text,
> enabled boolean,
> foo text,
> PRIMARY KEY (id, name));
> CREATE MATERIALIZED VIEW test.table1_mv1 AS SELECT id, name, foo
> FROM test.table1
> WHERE id IS NOT NULL 
> AND name IS NOT NULL 
> AND enabled = TRUE
> PRIMARY KEY ((name), id);
> CREATE MATERIALIZED VIEW test.table1_mv2 AS SELECT id, name, foo, enabled
> FROM test.table1
> WHERE id IS NOT NULL 
> AND name IS NOT NULL 
> AND enabled = TRUE
> PRIMARY KEY ((name), id);
> {code}
> When I insert a row into the base table the materialized views are updated 
> appropriately. (+)
> {code:title=Insert row|language=sql}
> cqlsh> INSERT INTO test.table1 (id, name, enabled, foo) VALUES (1, 'One', 
> TRUE, 'Bar');
> cqlsh> SELECT * FROM test.table1;
>  id | name | enabled | foo
> +--+-+-
>   1 |  One |True | Bar
> (1 rows)
> cqlsh> SELECT * FROM test.table1_mv1;
>  name | id | foo
> --++-
>   One |  1 | Bar
> (1 rows)
> cqlsh> SELECT * FROM test.table1_mv2;
>  name | id | enabled | foo
> --++-+-
>   One |  1 |True | Bar
> (1 rows)
> {code}
> Updating the record in the base table and setting enabled to FALSE will 
> filter the record from both materialized views. (+)
> {code:title=Disable the row|language=sql}
> cqlsh> UPDATE test.table1 SET enabled = FALSE WHERE id = 1 AND name = 'One';
> cqlsh> SELECT * FROM test.table1;
>  id | name | enabled | foo
> +--+-+-
>   1 |  One |   False | Bar
> (1 rows)
> cqlsh> SELECT * FROM test.table1_mv1;
>  name | id | foo
> --++-
> (0 rows)
> cqlsh> SELECT * FROM test.table1_mv2;
>  name | id | enabled | foo
> --++-+-
> 

[jira] [Commented] (CASSANDRA-13547) Filtered materialized views missing data

2017-06-05 Thread Krishna Dattu Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16036632#comment-16036632
 ] 

Krishna Dattu Koneru commented on CASSANDRA-13547:
--

This seems related to https://issues.apache.org/jira/browse/CASSANDRA-13409, 
which has a fix 
[branch|https://github.com/jasonstack/cassandra/commits/CASSANDRA-13409-3.11].
I ran the above test on [commit | 
https://github.com/jasonstack/cassandra/commit/26f3e7b96001f6148eab7cc3589b2d496c5830c5]
 from that branch and it still fails. 

I believe the reason for this 

{code:title=tombstone|borderStyle=solid}

cqlsh> SELECT * FROM test.table1_mv2;

 name | id | enabled | foo
--++-+--
  One |  1 |True | null

(1 rows)

{code}

is explained 
[here|https://issues.apache.org/jira/browse/CASSANDRA-10261?focusedCommentId=14731266=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14731266]
 and 
[here|https://issues.apache.org/jira/browse/CASSANDRA-9664?focusedCommentId=14724150=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14724150].

> Filtered materialized views missing data
> ----------------------------------------
>
> Key: CASSANDRA-13547
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13547
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
> Environment: Official Cassandra 3.10 Docker image (ID 154b919bf8ce).
>Reporter: Craig Nicholson
>Assignee: Krishna Dattu Koneru
>Priority: Blocker
>  Labels: materializedviews
> Fix For: 3.11.x
>
>
> When creating a materialized view against a base table the materialized view 
> does not always reflect the correct data.
> Using the following test schema:
> {code:title=Schema|language=sql}
> DROP KEYSPACE IF EXISTS test;
> CREATE KEYSPACE test
>   WITH REPLICATION = { 
>'class' : 'SimpleStrategy', 
>'replication_factor' : 1 
>   };
> CREATE TABLE test.table1 (
> id int,
> name text,
> enabled boolean,
> foo text,
> PRIMARY KEY (id, name));
> CREATE MATERIALIZED VIEW test.table1_mv1 AS SELECT id, name, foo
> FROM test.table1
> WHERE id IS NOT NULL 
> AND name IS NOT NULL 
> AND enabled = TRUE
> PRIMARY KEY ((name), id);
> CREATE MATERIALIZED VIEW test.table1_mv2 AS SELECT id, name, foo, enabled
> FROM test.table1
> WHERE id IS NOT NULL 
> AND name IS NOT NULL 
> AND enabled = TRUE
> PRIMARY KEY ((name), id);
> {code}
> When I insert a row into the base table the materialized views are updated 
> appropriately. (+)
> {code:title=Insert row|language=sql}
> cqlsh> INSERT INTO test.table1 (id, name, enabled, foo) VALUES (1, 'One', 
> TRUE, 'Bar');
> cqlsh> SELECT * FROM test.table1;
>  id | name | enabled | foo
> ----+------+---------+-----
>   1 |  One |    True | Bar
> (1 rows)
> cqlsh> SELECT * FROM test.table1_mv1;
>  name | id | foo
> ------+----+-----
>   One |  1 | Bar
> (1 rows)
> cqlsh> SELECT * FROM test.table1_mv2;
>  name | id | enabled | foo
> ------+----+---------+-----
>   One |  1 |    True | Bar
> (1 rows)
> {code}
> Updating the record in the base table and setting enabled to FALSE will 
> filter the record from both materialized views. (+)
> {code:title=Disable the row|language=sql}
> cqlsh> UPDATE test.table1 SET enabled = FALSE WHERE id = 1 AND name = 'One';
> cqlsh> SELECT * FROM test.table1;
>  id | name | enabled | foo
> ----+------+---------+-----
>   1 |  One |   False | Bar
> (1 rows)
> cqlsh> SELECT * FROM test.table1_mv1;
>  name | id | foo
> ------+----+-----
> (0 rows)
> cqlsh> SELECT * FROM test.table1_mv2;
>  name | id | enabled | foo
> ------+----+---------+-----
> (0 rows)
> {code}
> However, a further update to the base table setting enabled back to TRUE 
> should include the record in both materialized views, yet only one view 
> (table1_mv2) gets updated. (-)
> It appears that only the view (table1_mv2) that returns the filtered column 
> (enabled) is updated. (-)
> Additionally, columns that are not part of the partition or clustering key 
> are not updated. You can see that the foo column has a null value in 
> table1_mv2. (-)
> {code:title=Enable the row|language=sql}
> cqlsh> UPDATE test.table1 SET enabled = TRUE WHERE id = 1 AND name = 'One';
> cqlsh> SELECT * FROM test.table1;
>  id | name | enabled | foo
> ----+------+---------+-----
>   1 |  One |    True | Bar
> (1 rows)
> cqlsh> SELECT * FROM test.table1_mv1;
>  name | id | foo
> ------+----+-----
> (0 rows)
> cqlsh> SELECT * FROM test.table1_mv2;
>  name | id | enabled | foo
> ------+----+---------+------
>   One |  1 |    True | null
> (1 rows)
> {code}
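The failing re-enable step above can be modelled with a toy sketch. This is plain Python illustrating the reported behaviour, not Cassandra's actual view-update code; the `rebuild_view` helper, its parameters, and the dict-based row model are all hypothetical, with column and view names taken from the reproduction:

```python
def rebuild_view(update, view_selects, filter_col):
    """Toy model of the view row produced by a partial base-table UPDATE.

    `update` maps column -> value for only the columns present in the write.
    Returns the resulting view row as a dict, or None if no row is produced.
    """
    if filter_col not in view_selects:
        # table1_mv1: the view does not select `enabled`, so the write that
        # flips it generates no view update and the row stays missing.
        return None
    if not update.get(filter_col):
        # Row fails the WHERE enabled = TRUE filter.
        return None
    # table1_mv2: a view row is produced, but only from columns carried by
    # the write, so the unselected regular column `foo` surfaces as null.
    return {c: update.get(c) for c in view_selects}

# The re-enable step from the report: UPDATE ... SET enabled = TRUE
update = {'id': 1, 'name': 'One', 'enabled': True}

mv1 = rebuild_view(update, ['id', 'name', 'foo'], 'enabled')
mv2 = rebuild_view(update, ['id', 'name', 'foo', 'enabled'], 'enabled')

assert mv1 is None  # matches the observed (0 rows) in table1_mv1
assert mv2 == {'id': 1, 'name': 'One', 'foo': None, 'enabled': True}
```

Under these assumptions the sketch reproduces both symptoms at once: the view without the filtered column misses the row entirely, while the view with it regains the row but with a null `foo`.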




[jira] [Commented] (CASSANDRA-13209) test failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_bulk_round_trip_blogposts_with_max_connections

2017-06-05 Thread Kurt Greaves (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16036582#comment-16036582
 ] 

Kurt Greaves commented on CASSANDRA-13209:
--

Yeah, I'm not sure why the counts are flaky on ASF infra. I rarely ever got the 
client request timeouts running on a standalone machine, even with a smaller 
heap. Anyway, those tests are probably best fixed in their own tickets (or in 
dtest where applicable), so we may as well finish up this ticket.

Thanks for the review Stefania, and yes I need you to commit.



> test failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_bulk_round_trip_blogposts_with_max_connections
> --
>
> Key: CASSANDRA-13209
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13209
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Michael Shuler
>Assignee: Kurt Greaves
>  Labels: dtest, test-failure
> Attachments: 13209.patch, node1.log, node2.log, node3.log, node4.log, 
> node5.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_dtest/528/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_bulk_round_trip_blogposts_with_max_connections
> {noformat}
> Error Message
> errors={'127.0.0.4': 'Client request timeout. See 
> Session.execute[_async](timeout)'}, last_host=127.0.0.4
> -------------------- >> begin captured logging << --------------------
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-792s6j
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> dtest: DEBUG: removing ccm cluster test at: /tmp/dtest-792s6j
> dtest: DEBUG: clearing ssl stores from [/tmp/dtest-792s6j] directory
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-uNMsuW
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> cassandra.policies: INFO: Using datacenter 'datacenter1' for 
> DCAwareRoundRobinPolicy (via host '127.0.0.1'); if incorrect, please specify 
> a local_dc to the constructor, or limit contact points to local cluster nodes
> cassandra.cluster: INFO: New Cassandra host  
> discovered
> cassandra.cluster: INFO: New Cassandra host  
> discovered
> cassandra.cluster: INFO: New Cassandra host  
> discovered
> cassandra.cluster: INFO: New Cassandra host  
> discovered
> dtest: DEBUG: Running stress with user profile 
> /home/automaton/cassandra-dtest/cqlsh_tests/blogposts.yaml
> -------------------- >> end captured logging << --------------------
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/dtest.py", line 1090, in wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", 
> line 2571, in test_bulk_round_trip_blogposts_with_max_connections
> copy_from_options={'NUMPROCESSES': 2})
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", 
> line 2500, in _test_bulk_round_trip
> num_records = create_records()
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", 
> line 2473, in create_records
> ret = rows_to_list(self.session.execute(count_statement))[0][0]
>   File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line 
> 1998, in execute
> return self.execute_async(query, parameters, trace, custom_payload, 
> timeout, execution_profile, paging_state).result()
>   File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line 
> 3784, in result
> raise self._final_exception
> "errors={'127.0.0.4': 'Client request timeout. See 
> Session.execute[_async](timeout)'}, last_host=127.0.0.4\n 
> >> begin captured logging << \ndtest: DEBUG: cluster ccm 
> directory: /tmp/dtest-792s6j\ndtest: DEBUG: Done setting configuration 
> options:\n{   'initial_token': None,\n'num_tokens': '32',\n
> 'phi_convict_threshold': 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ndtest: DEBUG: removing ccm cluster test at: /tmp/dtest-792s6j\ndtest: 
> DEBUG: clearing