[jira] [Commented] (CASSANDRA-11343) Fix bloom filter sizing with LCS

2016-03-21 Thread Wei Deng (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15205764#comment-15205764
 ] 

Wei Deng commented on CASSANDRA-11343:
--

duplicate of CASSANDRA-11344

> Fix bloom filter sizing with LCS
> 
>
> Key: CASSANDRA-11343
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11343
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> Since CASSANDRA-7272 we most often over-allocate the bloom filter size with 
> LCS.
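As background for the quoted description, the textbook bloom filter sizing formula can be sketched as follows. This is illustrative only, not Cassandra's actual implementation (its sizing logic lives in `BloomCalculations.java` and differs in detail); the point is that an over-estimated key count inflates the allocated filter proportionally.

```python
import math

def bloom_filter_bits(num_keys: int, fp_rate: float) -> int:
    """Classic bloom filter sizing: m = -n * ln(p) / (ln 2)^2 bits.

    Illustrative sketch only -- not Cassandra's BloomCalculations code.
    """
    if not (0.0 < fp_rate < 1.0):
        raise ValueError("fp_rate must be in (0, 1)")
    return math.ceil(-num_keys * math.log(fp_rate) / (math.log(2) ** 2))

# Over-estimating the key count (as described for LCS) inflates the filter:
exact = bloom_filter_bits(1_000_000, 0.01)   # bits for the real key count
padded = bloom_filter_bits(2_000_000, 0.01)  # bits for a 2x over-estimate
```

At a 1% false-positive rate this works out to roughly 9.6 bits per key, so every spuriously counted key wastes that many bits of memory.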



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11396) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_2_2_UpTo_3_0_HEAD.collection_flush_test

2016-03-21 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15205726#comment-15205726
 ] 

Russ Hatch commented on CASSANDRA-11396:


trying to repro here, with the test that has the most failures in recent 
history: 
http://cassci.datastax.com/view/Parameterized/job/parameterized_dtest_multiplexer/31/

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_2_2_UpTo_3_0_HEAD.collection_flush_test
> -
>
> Key: CASSANDRA-11396
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11396
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Philip Thompson
>Assignee: Russ Hatch
>  Labels: dtest
>
> example failure:
> {code}
> 
> {code}
> http://cassci.datastax.com/job/upgrade_tests-all/25/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_2_2_UpTo_3_0_HEAD/collection_flush_test
> Failed on CassCI build upgrade_tests-all #25
> It's an inconsistent failure that happens across a number of tests. I'll 
> include all the ones I find here.
> http://cassci.datastax.com/job/upgrade_tests-all/26/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_2_1_UpTo_2_2_HEAD/static_with_limit_test/
> http://cassci.datastax.com/job/upgrade_tests-all/28/testReport/upgrade_tests.paging_test/TestPagingDatasetChangesNodes3RF3_2_2_HEAD_UpTo_Trunk/test_data_change_impacting_later_page/
> http://cassci.datastax.com/job/upgrade_tests-all/29/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_2_1_UpTo_Trunk/range_key_ordered_test/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11396) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_2_2_UpTo_3_0_HEAD.collection_flush_test

2016-03-21 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15205723#comment-15205723
 ] 

Russ Hatch commented on CASSANDRA-11396:


doesn't seem to repro locally.

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_2_2_UpTo_3_0_HEAD.collection_flush_test
> -
>
> Key: CASSANDRA-11396
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11396
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Philip Thompson
>Assignee: Russ Hatch
>  Labels: dtest
>
> example failure:
> {code}
> 
> {code}
> http://cassci.datastax.com/job/upgrade_tests-all/25/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_2_2_UpTo_3_0_HEAD/collection_flush_test
> Failed on CassCI build upgrade_tests-all #25
> It's an inconsistent failure that happens across a number of tests. I'll 
> include all the ones I find here.
> http://cassci.datastax.com/job/upgrade_tests-all/26/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_2_1_UpTo_2_2_HEAD/static_with_limit_test/
> http://cassci.datastax.com/job/upgrade_tests-all/28/testReport/upgrade_tests.paging_test/TestPagingDatasetChangesNodes3RF3_2_2_HEAD_UpTo_Trunk/test_data_change_impacting_later_page/
> http://cassci.datastax.com/job/upgrade_tests-all/29/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_2_1_UpTo_Trunk/range_key_ordered_test/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11333) cqlsh: COPY FROM should check that explicit column names are valid

2016-03-21 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15205717#comment-15205717
 ] 

Stefania edited comment on CASSANDRA-11333 at 3/22/16 3:38 AM:
---

The dtest pull request to merge on commit: 
https://github.com/riptano/cassandra-dtest/pull/880


was (Author: stefania):
The pull request to merge on commit: 
https://github.com/riptano/cassandra-dtest/pull/880

> cqlsh: COPY FROM should check that explicit column names are valid
> --
>
> Key: CASSANDRA-11333
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11333
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
>Priority: Minor
>  Labels: lhf
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> If an invalid column is specified in the COPY FROM command, then it fails 
> without an appropriate error notification.
> For example using this schema:
> {code}
> CREATE TABLE bulk_read.value500k_cluster1 (
> pk int,
> c1 int,
> v1 text,
> v2 text,
> PRIMARY KEY (pk, c1)
> );
> {code}
> and this COPY FROM command (note the third column name is wrong):
> {code}
> COPY bulk_read.value500k_cluster1 (pk, c1, vv, v2) FROM 'test.csv';
> {code}
> we get the following error:
> {code}
> Starting copy of bulk_read.value500k_cluster1 with columns ['pk', 'c1', 'vv', 
> 'v2'].
> 1 child process(es) died unexpectedly, aborting
> Processed: 0 rows; Rate:   0 rows/s; Avg. rate:   0 rows/s
> 0 rows imported from 0 files in 0.109 seconds (0 skipped).
> {code}
> Running cqlsh with {{--debug}} reveals where the problem is:
> {code}
> Starting copy of bulk_read.value500k_cluster1 with columns ['pk', 'c1', 'vv', 
> 'v2'].
> Traceback (most recent call last):
>   File "/home/automaton/cassandra-src/bin/../pylib/cqlshlib/copyutil.py", 
> line 2005, in run
> self.inner_run(*self.make_params())
>   File "/home/automaton/cassandra-src/bin/../pylib/cqlshlib/copyutil.py", 
> line 2027, in make_params
> is_counter = ("counter" in [table_meta.columns[name].cql_type for name in 
> self.valid_columns])
> {code}
> The parent process should check that all column names are valid and output an 
> appropriate error message rather than letting worker processes crash.
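The pre-check suggested in the last paragraph of the quoted description can be sketched as below. The function and variable names here are hypothetical for illustration, not the actual code in `copyutil.py`:

```python
def validate_copy_columns(requested, table_columns):
    """Fail fast with a clear message instead of letting worker processes
    crash on an unknown column name.

    requested: the column list given to COPY FROM
    table_columns: the set of column names from the table metadata
    """
    invalid = [name for name in requested if name not in table_columns]
    if invalid:
        raise ValueError("Invalid column name(s): %s" % ", ".join(invalid))
    return requested

# Columns from the schema in the ticket:
table = {"pk", "c1", "v1", "v2"}
ok = validate_copy_columns(["pk", "c1", "v1", "v2"], table)
```

Run in the parent process before spawning workers, a check like this would turn the opaque "child process(es) died unexpectedly" failure into an immediate, readable error naming the bad column.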



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11333) cqlsh: COPY FROM should check that explicit column names are valid

2016-03-21 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15205717#comment-15205717
 ] 

Stefania commented on CASSANDRA-11333:
--

The pull request to merge on commit: 
https://github.com/riptano/cassandra-dtest/pull/880

> cqlsh: COPY FROM should check that explicit column names are valid
> --
>
> Key: CASSANDRA-11333
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11333
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
>Priority: Minor
>  Labels: lhf
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> If an invalid column is specified in the COPY FROM command, then it fails 
> without an appropriate error notification.
> For example using this schema:
> {code}
> CREATE TABLE bulk_read.value500k_cluster1 (
> pk int,
> c1 int,
> v1 text,
> v2 text,
> PRIMARY KEY (pk, c1)
> );
> {code}
> and this COPY FROM command (note the third column name is wrong):
> {code}
> COPY bulk_read.value500k_cluster1 (pk, c1, vv, v2) FROM 'test.csv';
> {code}
> we get the following error:
> {code}
> Starting copy of bulk_read.value500k_cluster1 with columns ['pk', 'c1', 'vv', 
> 'v2'].
> 1 child process(es) died unexpectedly, aborting
> Processed: 0 rows; Rate:   0 rows/s; Avg. rate:   0 rows/s
> 0 rows imported from 0 files in 0.109 seconds (0 skipped).
> {code}
> Running cqlsh with {{--debug}} reveals where the problem is:
> {code}
> Starting copy of bulk_read.value500k_cluster1 with columns ['pk', 'c1', 'vv', 
> 'v2'].
> Traceback (most recent call last):
>   File "/home/automaton/cassandra-src/bin/../pylib/cqlshlib/copyutil.py", 
> line 2005, in run
> self.inner_run(*self.make_params())
>   File "/home/automaton/cassandra-src/bin/../pylib/cqlshlib/copyutil.py", 
> line 2027, in make_params
> is_counter = ("counter" in [table_meta.columns[name].cql_type for name in 
> self.valid_columns])
> {code}
> The parent process should check that all column names are valid and output an 
> appropriate error message rather than letting worker processes crash.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11053) COPY FROM on large datasets: fix progress report and debug performance

2016-03-21 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-11053:
-
Attachment: bisect_test.py

> COPY FROM on large datasets: fix progress report and debug performance
> --
>
> Key: CASSANDRA-11053
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11053
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
>  Labels: doc-impacting
> Fix For: 2.1.14, 2.2.6, 3.0.5, 3.5
>
> Attachments: bisect_test.py, copy_from_large_benchmark.txt, 
> copy_from_large_benchmark_2.txt, parent_profile.txt, parent_profile_2.txt, 
> worker_profiles.txt, worker_profiles_2.txt
>
>
> h5. Description
> Running COPY FROM on a large dataset (20G divided into 20M records) revealed 
> two issues:
> * The progress report is incorrect: it is very slow until almost the end of 
> the test, at which point it catches up extremely quickly.
> * The performance in rows per second is similar to that of smaller tests run 
> against a smaller cluster locally (approx. 35,000 rows per second). As a 
> comparison, cassandra-stress manages 50,000 rows per second under the same 
> set-up, i.e. roughly 1.5 times faster. 
> See attached file _copy_from_large_benchmark.txt_ for the benchmark details.
> h5. Doc-impacting changes to COPY FROM options
> * A new option was added: PREPAREDSTATEMENTS - it indicates if prepared 
> statements should be used; it defaults to true.
> * The default value of CHUNKSIZE changed from 1000 to 5000.
> * The default value of MINBATCHSIZE changed from 2 to 10.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11057) move_single_node_localhost_test is failing

2016-03-21 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-11057:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Merged PR with the addition of localhost in cassandra.yaml - passed just fine 
for me in a test job in CI.

> move_single_node_localhost_test is failing
> --
>
> Key: CASSANDRA-11057
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11057
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Philip Thompson
>Assignee: Michael Shuler
>  Labels: dtest
>
> {{pushed_notifications_test.TestPushedNotifications.move_single_node_localhost_test}}
>  is failing across all tested versions. Example failure is 
> [here|http://cassci.datastax.com/job/cassandra-2.1_novnode_dtest/194/testReport/pushed_notifications_test/TestPushedNotifications/move_single_node_localhost_test/].
>  
> We need to debug this failure, as it is entirely likely it is a test issue 
> and not a bug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11053) COPY FROM on large datasets: fix progress report and debug performance

2016-03-21 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15205629#comment-15205629
 ] 

Stefania commented on CASSANDRA-11053:
--

Thank you for the review; I've applied all 3 suggestions in [this 
commit|https://github.com/stef1927/cassandra/commit/07854f803e42f4a2afceff3585cdb27c16aad958].
 It merges cleanly upwards to all branches.

I wasn't aware of the Queue module, that's why I chose a lower level approach 
based on deque+Event. There aren't any recoverable exceptions as far as I can 
see, but I've added exception handling anyway to be on the safe side.
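For comparison, the standard-library Queue mentioned here provides the blocking hand-off that a hand-rolled deque+Event combination implements manually. A minimal sketch (generic illustration, not the copyutil.py code) of the pattern:

```python
import queue
import threading

def producer(q):
    """Feed chunks to the consumer; put() blocks when the queue is full,
    giving built-in backpressure with no explicit Event signalling."""
    for chunk in range(5):
        q.put(chunk)
    q.put(None)  # sentinel: no more chunks

def consume_all(q):
    """Drain chunks; get() blocks until an item is available."""
    out = []
    while True:
        item = q.get()
        if item is None:
            break
        out.append(item)
    return out

q = queue.Queue(maxsize=2)  # bounded, like a deque guarded by an Event
t = threading.Thread(target=producer, args=(q,))
t.start()
result = consume_all(q)
t.join()
```

The trade-off is that `queue.Queue` takes a lock per operation, whereas a raw `deque` (whose append/popleft are thread-safe) plus an `Event` can shave some overhead, which may be why the lower-level approach was chosen.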

> COPY FROM on large datasets: fix progress report and debug performance
> --
>
> Key: CASSANDRA-11053
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11053
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
>  Labels: doc-impacting
> Fix For: 2.1.14, 2.2.6, 3.0.5, 3.5
>
> Attachments: copy_from_large_benchmark.txt, 
> copy_from_large_benchmark_2.txt, parent_profile.txt, parent_profile_2.txt, 
> worker_profiles.txt, worker_profiles_2.txt
>
>
> h5. Description
> Running COPY from on a large dataset (20G divided in 20M records) revealed 
> two issues:
> * The progress report is incorrect, it is very slow until almost the end of 
> the test at which point it catches up extremely quickly.
> * The performance in rows per second is similar to running smaller tests with 
> a smaller cluster locally (approx 35,000 rows per second). As a comparison, 
> cassandra-stress manages 50,000 rows per second under the same set-up, 
> therefore resulting 1.5 times faster. 
> See attached file _copy_from_large_benchmark.txt_ for the benchmark details.
> h5. Doc-impacting changes to COPY FROM options
> * A new option was added: PREPAREDSTATEMENTS - it indicates if prepared 
> statements should be used; it defaults to true.
> * The default value of CHUNKSIZE changed from 1000 to 5000.
> * The default value of MINBATCHSIZE changed from 2 to 10.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9348) Nodetool move output should be more user friendly if bad token is supplied

2016-03-21 Thread Abhishek Verma (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15205490#comment-15205490
 ] 

Abhishek Verma commented on CASSANDRA-9348:
---

I am new to Cassandra and came across this while looking for low-hanging 
fruit. 

After digging in, I found that the reason you get this error is that 
-9223372036854775809 = -2^63 - 1 is just outside the range of a Long, so 
parsing it throws a NumberFormatException 
(https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/dht/Murmur3Partitioner.java#L283),
 which is rethrown as a ConfigurationException and finally surfaces as an 
IOException carrying the string "For input string: ". 
I also found that StorageService.move performs out-of-range checks robustly 
(https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/StorageService.java#L3776).

However, I agree that the displayed error in this case is not very clear. I 
intend to change the exception message to: "Invalid token: For input string: 
".
If everyone agrees that this is acceptable, I will go ahead and create a patch.
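The failure mode described above (a token one past the 64-bit boundary rejected with an unhelpful message) can be sketched in Python as follows. This is an illustrative re-implementation with a clearer error, not the actual Java code in Murmur3Partitioner or StorageService:

```python
# Murmur3 tokens are Java longs: [-2^63, 2^63 - 1]
LONG_MIN, LONG_MAX = -2**63, 2**63 - 1

def parse_murmur3_token(s: str) -> int:
    """Parse a token string, making explicit the range check that Java's
    Long.parseLong performs implicitly (raising NumberFormatException).
    Hypothetical helper for illustration only."""
    try:
        token = int(s)
    except ValueError:
        raise ValueError("Invalid token: not a number: %r" % s)
    if not (LONG_MIN <= token <= LONG_MAX):
        raise ValueError(
            "Invalid token: %s is outside the Murmur3 range [%d, %d]"
            % (s, LONG_MIN, LONG_MAX))
    return token
```

So `-9223372036854775808` (exactly -2^63) parses fine, while `-9223372036854775809` fails the range check, which is precisely the one-off boundary hit in the original report.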

> Nodetool move output should be more user friendly if bad token is supplied
> --
>
> Key: CASSANDRA-9348
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9348
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sequoyha pelletier
>Priority: Trivial
>  Labels: lhf
>
> If you put a token into nodetool move that is out of range for the 
> partitioner you get the following error:
> {noformat}
> [architect@md03-gcsarch-lapp33 11:01:06 ]$ nodetool -h 10.11.48.229 -u 
> cassandra -pw cassandra move \\-9223372036854775809 
> Exception in thread "main" java.io.IOException: For input string: 
> "-9223372036854775809" 
> at org.apache.cassandra.service.StorageService.move(StorageService.java:3104) 
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  
> at java.lang.reflect.Method.invoke(Method.java:606) 
> at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75) 
> at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  
> at java.lang.reflect.Method.invoke(Method.java:606) 
> at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279) 
> at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
>  
> at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
>  
> at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237) 
> at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138) 
> at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252) 
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
>  
> at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801) 
> at 
> com.sun.jmx.remote.security.MBeanServerAccessController.invoke(MBeanServerAccessController.java:468)
>  
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
>  
> at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
>  
> at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
>  
> at java.security.AccessController.doPrivileged(Native Method) 
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1427)
>  
> at 
> javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
>  
> at sun.reflect.GeneratedMethodAccessor52.invoke(Unknown Source) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  
> at java.lang.reflect.Method.invoke(Method.java:606) 
> at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322) 
> at sun.rmi.transport.Transport$1.run(Transport.java:177) 
> at sun.rmi.transport.Transport$1.run(Transport.java:174) 
> at java.security.AccessController.doPrivileged(Native Method) 
> at sun.rmi.transport.Transport.serviceCall(Transport.java:173) 
> at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556) 
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
>  
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  
> at 
> 

[jira] [Updated] (CASSANDRA-11320) Improve backoff policy for cqlsh COPY FROM

2016-03-21 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-11320:
-
Labels: doc-impacting  (was: )

> Improve backoff policy for cqlsh COPY FROM
> --
>
> Key: CASSANDRA-11320
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11320
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
>  Labels: doc-impacting
> Fix For: 3.x
>
>
> Currently we have an exponential back-off policy in COPY FROM that kicks in 
> when timeouts are received. However, there are two limitations:
> * it does not cover new requests, and therefore we may not back off 
> sufficiently to give an overloaded server time to recover
> * the pause is performed in the receiving thread, and therefore we may not 
> process server messages quickly enough
> There is a static throttling mechanism in rows per second from feeder to 
> worker processes (the INGESTRATE), but the feeder has no idea of the load on 
> each worker process. However, it's easy to keep track of how many chunks a 
> worker process has yet to read by introducing a bounded semaphore.
> The idea is to move the back-off pauses to the worker process's main thread 
> so as to include all messages, new and retries, not just the retries that 
> timed out. The worker process will not read new chunks during the back-off 
> pauses, and the feeder process can then look at the number of pending chunks 
> before sending new chunks to a worker process.
> [~aholmber], [~aweisberg] what do you think?  
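The bounded-semaphore idea in the quoted description can be sketched as below. This is a hypothetical illustration of the mechanism, not the actual patch; class and method names are invented:

```python
import threading

class WorkerChannel:
    """Feeder-side view of one worker process: the semaphore caps how many
    unread chunks the worker may have outstanding at once."""

    def __init__(self, max_pending: int):
        self._slots = threading.BoundedSemaphore(max_pending)
        self.sent = 0

    def try_send(self, chunk) -> bool:
        # Feeder: send only if the worker has a free slot; otherwise the
        # feeder can pick a less loaded worker instead of piling on.
        if not self._slots.acquire(blocking=False):
            return False
        self.sent += 1
        return True

    def chunk_done(self):
        # Worker: release a slot once a chunk is fully processed (i.e. after
        # any back-off pauses in the worker's main thread).
        self._slots.release()

ch = WorkerChannel(max_pending=2)
```

Because a worker that is backing off stops releasing slots, the feeder naturally stops sending it new chunks, which extends the back-off to new requests as proposed.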



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11392) Add auto import java.util for UDF code block

2016-03-21 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15205286#comment-15205286
 ] 

Robert Stupp commented on CASSANDRA-11392:
--

CASSANDRA-10818 requires this anyway (at least for {{List}}, {{Set}} and 
{{Map}}), and it's already included in 10818.

I've no objections against importing a common choice of collection interfaces 
and implementations or doing a wildcard-import of {{java.util.*}}.
Feel free to provide a patch ;)

> Add auto import java.util for UDF code block
> 
>
> Key: CASSANDRA-11392
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11392
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
> Environment: C* 3.4
>Reporter: DOAN DuyHai
>Priority: Minor
>
> Right now, when creating Java source code for UDF, since we cannot define 
> import, we need to use fully qualified class name, ex:
> {noformat}
> CREATE FUNCTION toSet(li list<text>)
> CALLED ON NULL INPUT
> RETURNS set<text>
> LANGUAGE java
> AS $$
> java.util.Set<String> set = new java.util.HashSet<String>();
> for(String txt: li) {
> set.add(txt);
> }
> return set;
> $$;
> {noformat}
> Classes from the {{java.util}} package are so commonly used that it would 
> make developers' lives easier to automatically import {{java.util.*}} in the 
> {{JavaUDF}} base class, so that developers don't need to use FQCNs for 
> common classes.
>  The only drawback I can see is the risk of class name clashes, but since:
> 1. it is not allowed to create new classes
> 2. the classes that can be used in UDFs are restricted
>  I don't see serious name clash issues either.
> [~snazy] WDYT?
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11391) "class declared as inner class" error when using UDF

2016-03-21 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-11391:
-
Fix Version/s: 3.x
   Status: Patch Available  (was: In Progress)

> "class declared as inner class" error when using UDF
> 
>
> Key: CASSANDRA-11391
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11391
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: C* 3.4
>Reporter: DOAN DuyHai
>Assignee: Robert Stupp
>Priority: Critical
> Fix For: 3.x
>
>
> {noformat}
> cqlsh:music> CREATE FUNCTION testMapEntry(my_map map<text, text>)
>  ... CALLED ON NULL INPUT
>  ... RETURNS text
>  ... LANGUAGE java
>  ... AS $$
>  ... String buffer = "";
>  ... for(java.util.Map.Entry<String, String> entry: 
> my_map.entrySet()) {
>  ... buffer = buffer + entry.getKey() + ": " + 
> entry.getValue() + ", ";
>  ... }
>  ... return buffer;
>  ... $$;
> InvalidRequest: code=2200 [Invalid query] 
> message="Could not compile function 'music.testmapentry' from Java source: 
> org.apache.cassandra.exceptions.InvalidRequestException: 
> Java UDF validation failed: [class declared as inner class]"
> {noformat}
> When I try to decompile the source code into byte code, below is the result:
> {noformat}
>   public java.lang.String test(java.util.Map<java.lang.String, java.lang.String>);
> Code:
>0: ldc   #2  // String
>2: astore_2
>3: aload_1
>4: invokeinterface #3,  1// InterfaceMethod 
> java/util/Map.entrySet:()Ljava/util/Set;
>9: astore_3
>   10: aload_3
>   11: invokeinterface #4,  1// InterfaceMethod 
> java/util/Set.iterator:()Ljava/util/Iterator;
>   16: astore4
>   18: aload 4
>   20: invokeinterface #5,  1// InterfaceMethod 
> java/util/Iterator.hasNext:()Z
>   25: ifeq  94
>   28: aload 4
>   30: invokeinterface #6,  1// InterfaceMethod 
> java/util/Iterator.next:()Ljava/lang/Object;
>   35: checkcast #7  // class java/util/Map$Entry
>   38: astore5
>   40: new   #8  // class java/lang/StringBuilder
>   43: dup
>   44: invokespecial #9  // Method 
> java/lang/StringBuilder."":()V
>   47: aload_2
>   48: invokevirtual #10 // Method 
> java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
>   51: aload 5
>   53: invokeinterface #11,  1   // InterfaceMethod 
> java/util/Map$Entry.getKey:()Ljava/lang/Object;
>   58: checkcast #12 // class java/lang/String
>   61: invokevirtual #10 // Method 
> java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
>   64: ldc   #13 // String :
>   66: invokevirtual #10 // Method 
> java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
>   69: aload 5
>   71: invokeinterface #14,  1   // InterfaceMethod 
> java/util/Map$Entry.getValue:()Ljava/lang/Object;
>   76: checkcast #12 // class java/lang/String
>   79: invokevirtual #10 // Method 
> java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
>   82: ldc   #15 // String ,
>   84: invokevirtual #10 // Method 
> java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
>   87: invokevirtual #16 // Method 
> java/lang/StringBuilder.toString:()Ljava/lang/String;
>   90: astore_2
>   91: goto  18
>   94: aload_2
>   95: areturn
> {noformat}
>  There is nothing that could trigger inner class creation ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11391) "class declared as inner class" error when using UDF

2016-03-21 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15205266#comment-15205266
 ] 

Robert Stupp commented on CASSANDRA-11391:
--

It's caused by a superfluous sandbox check. asm reports uses of inner classes 
(like {{java.util.Map$Entry}}) and the check triggers a byte-code validation 
error in that case. We don't need that check, since we check for use and 
instantiation of "malicious" classes anyway.

The fix is quite simple: remove that superfluous check + add a regression utest.

CassCI is currently working on the CI results.

> "class declared as inner class" error when using UDF
> 
>
> Key: CASSANDRA-11391
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11391
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: C* 3.4
>Reporter: DOAN DuyHai
>Assignee: Robert Stupp
>Priority: Critical
>
> {noformat}
> cqlsh:music> CREATE FUNCTION testMapEntry(my_map map<text, text>)
>  ... CALLED ON NULL INPUT
>  ... RETURNS text
>  ... LANGUAGE java
>  ... AS $$
>  ... String buffer = "";
>  ... for(java.util.Map.Entry<String, String> entry: 
> my_map.entrySet()) {
>  ... buffer = buffer + entry.getKey() + ": " + 
> entry.getValue() + ", ";
>  ... }
>  ... return buffer;
>  ... $$;
> InvalidRequest: code=2200 [Invalid query] 
> message="Could not compile function 'music.testmapentry' from Java source: 
> org.apache.cassandra.exceptions.InvalidRequestException: 
> Java UDF validation failed: [class declared as inner class]"
> {noformat}
> When I try to decompile the source code into byte code, below is the result:
> {noformat}
>   public java.lang.String test(java.util.Map<java.lang.String, java.lang.String>);
> Code:
>0: ldc   #2  // String
>2: astore_2
>3: aload_1
>4: invokeinterface #3,  1// InterfaceMethod 
> java/util/Map.entrySet:()Ljava/util/Set;
>9: astore_3
>   10: aload_3
>   11: invokeinterface #4,  1// InterfaceMethod 
> java/util/Set.iterator:()Ljava/util/Iterator;
>   16: astore4
>   18: aload 4
>   20: invokeinterface #5,  1// InterfaceMethod 
> java/util/Iterator.hasNext:()Z
>   25: ifeq  94
>   28: aload 4
>   30: invokeinterface #6,  1// InterfaceMethod 
> java/util/Iterator.next:()Ljava/lang/Object;
>   35: checkcast #7  // class java/util/Map$Entry
>   38: astore5
>   40: new   #8  // class java/lang/StringBuilder
>   43: dup
>   44: invokespecial #9  // Method 
> java/lang/StringBuilder."":()V
>   47: aload_2
>   48: invokevirtual #10 // Method 
> java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
>   51: aload 5
>   53: invokeinterface #11,  1   // InterfaceMethod 
> java/util/Map$Entry.getKey:()Ljava/lang/Object;
>   58: checkcast #12 // class java/lang/String
>   61: invokevirtual #10 // Method 
> java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
>   64: ldc   #13 // String :
>   66: invokevirtual #10 // Method 
> java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
>   69: aload 5
>   71: invokeinterface #14,  1   // InterfaceMethod 
> java/util/Map$Entry.getValue:()Ljava/lang/Object;
>   76: checkcast #12 // class java/lang/String
>   79: invokevirtual #10 // Method 
> java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
>   82: ldc   #15 // String ,
>   84: invokevirtual #10 // Method 
> java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
>   87: invokevirtual #16 // Method 
> java/lang/StringBuilder.toString:()Ljava/lang/String;
>   90: astore_2
>   91: goto  18
>   94: aload_2
>   95: areturn
> {noformat}
>  There is nothing that could trigger inner class creation ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-11391) "class declared as inner class" error when using UDF

2016-03-21 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp reassigned CASSANDRA-11391:


Assignee: Robert Stupp

> "class declared as inner class" error when using UDF
> 
>
> Key: CASSANDRA-11391
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11391
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: C* 3.4
>Reporter: DOAN DuyHai
>Assignee: Robert Stupp
>Priority: Critical
>
> {noformat}
> cqlsh:music> CREATE FUNCTION testMapEntry(my_map map<text, text>)
>  ... CALLED ON NULL INPUT
>  ... RETURNS text
>  ... LANGUAGE java
>  ... AS $$
>  ... String buffer = "";
>  ... for(java.util.Map.Entry<String, String> entry: 
> my_map.entrySet()) {
>  ... buffer = buffer + entry.getKey() + ": " + 
> entry.getValue() + ", ";
>  ... }
>  ... return buffer;
>  ... $$;
> InvalidRequest: code=2200 [Invalid query] 
> message="Could not compile function 'music.testmapentry' from Java source: 
> org.apache.cassandra.exceptions.InvalidRequestException: 
> Java UDF validation failed: [class declared as inner class]"
> {noformat}
> When I try to decompile the source code into byte code, below is the result:
> {noformat}
>   public java.lang.String test(java.util.Map<java.lang.String, java.lang.String>);
> Code:
>0: ldc   #2  // String
>2: astore_2
>3: aload_1
>4: invokeinterface #3,  1// InterfaceMethod 
> java/util/Map.entrySet:()Ljava/util/Set;
>9: astore_3
>   10: aload_3
>   11: invokeinterface #4,  1// InterfaceMethod 
> java/util/Set.iterator:()Ljava/util/Iterator;
>   16: astore4
>   18: aload 4
>   20: invokeinterface #5,  1// InterfaceMethod 
> java/util/Iterator.hasNext:()Z
>   25: ifeq  94
>   28: aload 4
>   30: invokeinterface #6,  1// InterfaceMethod 
> java/util/Iterator.next:()Ljava/lang/Object;
>   35: checkcast #7  // class java/util/Map$Entry
>   38: astore5
>   40: new   #8  // class java/lang/StringBuilder
>   43: dup
>   44: invokespecial #9  // Method 
> java/lang/StringBuilder."":()V
>   47: aload_2
>   48: invokevirtual #10 // Method 
> java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
>   51: aload 5
>   53: invokeinterface #11,  1   // InterfaceMethod 
> java/util/Map$Entry.getKey:()Ljava/lang/Object;
>   58: checkcast #12 // class java/lang/String
>   61: invokevirtual #10 // Method 
> java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
>   64: ldc   #13 // String :
>   66: invokevirtual #10 // Method 
> java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
>   69: aload 5
>   71: invokeinterface #14,  1   // InterfaceMethod 
> java/util/Map$Entry.getValue:()Ljava/lang/Object;
>   76: checkcast #12 // class java/lang/String
>   79: invokevirtual #10 // Method 
> java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
>   82: ldc   #15 // String ,
>   84: invokevirtual #10 // Method 
> java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
>   87: invokevirtual #16 // Method 
> java/lang/StringBuilder.toString:()Ljava/lang/String;
>   90: astore_2
>   91: goto  18
>   94: aload_2
>   95: areturn
> {noformat}
>  There is nothing that could trigger inner class creation ...
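A hedged workaround sketch (hypothetical, assuming the validation trips on the compiled {{java/util/Map$Entry}} reference that the for-each over {{entrySet()}} produces, visible at instruction 35 above): iterating over {{keySet()}} keeps the same output while leaving no nested-class reference in the byte code.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class UdfWorkaround {
    // Same output as the failing UDF body, but iterating over keySet()
    // so the compiled byte code never references the nested class
    // java.util.Map$Entry.
    public static String buildString(Map<String, String> myMap) {
        StringBuilder buffer = new StringBuilder();
        for (String key : myMap.keySet()) {
            buffer.append(key).append(": ").append(myMap.get(key)).append(", ");
        }
        return buffer.toString();
    }

    public static void main(String[] args) {
        // LinkedHashMap keeps insertion order, so the output is deterministic.
        Map<String, String> m = new LinkedHashMap<>();
        m.put("artist", "The Beatles");
        m.put("album", "Abbey Road");
        System.out.println(buildString(m)); // artist: The Beatles, album: Abbey Road, 
    }
}
```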



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10612) Protocol v3 upgrade tests on 2.1->3.0 path fail when compaction is interrupted

2016-03-21 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15205221#comment-15205221
 ] 

Russ Hatch commented on CASSANDRA-10612:


not completely sure [~mambocab]. The impression I had was that it was maybe 
benign to begin with (and so I thought maybe I had coded the tests to ignore 
it, but that doesn't seem to be the case).

> Protocol v3 upgrade tests on 2.1->3.0 path fail when compaction is interrupted
> --
>
> Key: CASSANDRA-10612
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10612
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Russ Hatch
>  Labels: dtest
> Fix For: 3.0.x
>
>
> The following tests in the upgrade_through_versions dtest suite fail:
> * 
> upgrade_through_versions_test.py:TestRandomPartitionerUpgrade.rolling_upgrade_test
> * 
> upgrade_through_versions_test.py:TestRandomPartitionerUpgrade.rolling_upgrade_with_internode_ssl_test
> * 
> upgrade_through_versions_test.py:TestUpgradeThroughVersions.rolling_upgrade_with_internode_ssl_test
> * 
> upgrade_through_versions_test.py:TestUpgradeThroughVersions.rolling_upgrade_test
> * 
> upgrade_through_versions_test.py:TestUpgrade_from_2_1_latest_tag_to_cassandra_3_0_HEAD.rolling_upgrade_test
> * 
> upgrade_through_versions_test.py:TestUpgrade_from_2_1_latest_tag_to_cassandra_3_0_HEAD.rolling_upgrade_with_internode_ssl_test
> * 
> upgrade_through_versions_test.py:TestUpgrade_from_cassandra_2_1_HEAD_to_cassandra_3_0_latest_tag.rolling_upgrade_test
> * 
> upgrade_through_versions_test.py:TestUpgrade_from_cassandra_2_1_HEAD_to_cassandra_3_0_latest_tag.rolling_upgrade_with_internode_ssl_test
> * 
> upgrade_through_versions_test.py:TestUpgrade_from_cassandra_2_1_HEAD_to_cassandra_3_0_HEAD.rolling_upgrade_test
> See this report:
> http://cassci.datastax.com/view/Upgrades/job/cassandra_upgrade_2.1_to_3.0_proto_v3/10/testReport/
> They fail with the following error:
> {code}
> A subprocess has terminated early. Subprocess statuses: Process-41 (is_alive: 
> True), Process-42 (is_alive: False), Process-43 (is_alive: True), Process-44 
> (is_alive: False), attempting to terminate remaining subprocesses now.
> {code}
> and with logs that look like this:
> {code}
> Unexpected error in node1 node log: ['ERROR [SecondaryIndexManagement:1] 
> 2015-10-27 00:06:52,335 CassandraDaemon.java:195 - Exception in thread 
> Thread[SecondaryIndexManagement:1,5,main] java.lang.RuntimeException: 
> java.util.concurrent.ExecutionException: 
> org.apache.cassandra.db.compaction.CompactionInterruptedException: Compaction 
> interrupted: Secondary index 
> build@41202370-7c3e-11e5-9331-6bb6e58f8b1b(upgrade, cf, 578160/1663620)bytes
> at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:368) 
> ~[main/:na]
> at 
> org.apache.cassandra.index.internal.CassandraIndex.buildBlocking(CassandraIndex.java:688)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.internal.CassandraIndex.lambda$getBuildIndexTask$206(CassandraIndex.java:658)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.internal.CassandraIndex$$Lambda$151/1841229245.call(Unknown
>  Source) ~[na:na]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_51]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_51]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_51]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_51] Caused by: 
> java.util.concurrent.ExecutionException: 
> org.apache.cassandra.db.compaction.CompactionInterruptedException: Compaction 
> interrupted: Secondary index 
> build@41202370-7c3e-11e5-9331-6bb6e58f8b1b(upgrade, cf, 
> 578160/1663620)bytes
> at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[na:1.8.0_51]
> at java.util.concurrent.FutureTask.get(FutureTask.java:192) ~[na:1.8.0_51]
> at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:364) 
> ~[main/:na]
> ... 7 common frames omitted Caused by: 
> org.apache.cassandra.db.compaction.CompactionInterruptedException: Compaction 
> interrupted: Secondary index 
> build@41202370-7c3e-11e5-9331-6bb6e58f8b1b(upgrade, cf, 578160/1663620)bytes
> at 
> org.apache.cassandra.index.SecondaryIndexBuilder.build(SecondaryIndexBuilder.java:67)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$11.run(CompactionManager.java:1269)
>  ~[main/:na]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_51]
> ... 4 common frames omitted', 'ERROR [HintsDispatcher:2] 2015-10-27 
> 00:08:48,520 CassandraDaemon.java:195 - Exception in thread 
> Thread[HintsDispatcher:2,1,main]', 'ERROR [HintsDispatcher:2] 

[jira] [Commented] (CASSANDRA-11396) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_2_2_UpTo_3_0_HEAD.collection_flush_test

2016-03-21 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15205208#comment-15205208
 ] 

Russ Hatch commented on CASSANDRA-11396:


not the right fix above, will keep investigating.

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_2_2_UpTo_3_0_HEAD.collection_flush_test
> -
>
> Key: CASSANDRA-11396
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11396
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Philip Thompson
>Assignee: Russ Hatch
>  Labels: dtest
>
> example failure:
> {code}
> 
> {code}
> http://cassci.datastax.com/job/upgrade_tests-all/25/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_2_2_UpTo_3_0_HEAD/collection_flush_test
> Failed on CassCI build upgrade_tests-all #25
> It's an inconsistent failure that happens across a number of tests. I'll 
> include all the ones I find here.
> http://cassci.datastax.com/job/upgrade_tests-all/26/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_2_1_UpTo_2_2_HEAD/static_with_limit_test/
> http://cassci.datastax.com/job/upgrade_tests-all/28/testReport/upgrade_tests.paging_test/TestPagingDatasetChangesNodes3RF3_2_2_HEAD_UpTo_Trunk/test_data_change_impacting_later_page/
> http://cassci.datastax.com/job/upgrade_tests-all/29/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_2_1_UpTo_Trunk/range_key_ordered_test/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-11256) dtest failure in repair_test.TestRepair.simple_sequential_repair_test

2016-03-21 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch resolved CASSANDRA-11256.

Resolution: Cannot Reproduce

The UA didn't repro in 100 runs of the test. I did see an assertion error 
(1/100) but that looks unrelated to this.

> dtest failure in repair_test.TestRepair.simple_sequential_repair_test
> -
>
> Key: CASSANDRA-11256
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11256
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: Russ Hatch
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_offheap_dtest/275/testReport/repair_test/TestRepair/simple_sequential_repair_test
> Failed on CassCI build cassandra-2.2_offheap_dtest #275



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11256) dtest failure in repair_test.TestRepair.simple_sequential_repair_test

2016-03-21 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch updated CASSANDRA-11256:
---
Status: In Progress  (was: Patch Available)

> dtest failure in repair_test.TestRepair.simple_sequential_repair_test
> -
>
> Key: CASSANDRA-11256
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11256
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: Russ Hatch
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_offheap_dtest/275/testReport/repair_test/TestRepair/simple_sequential_repair_test
> Failed on CassCI build cassandra-2.2_offheap_dtest #275



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11396) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_2_2_UpTo_3_0_HEAD.collection_flush_test

2016-03-21 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15205110#comment-15205110
 ] 

Russ Hatch commented on CASSANDRA-11396:


trying it out here: 
http://cassci.datastax.com/view/Parameterized/job/upgrade_tests-all-custom_branch_runs/5/

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_2_2_UpTo_3_0_HEAD.collection_flush_test
> -
>
> Key: CASSANDRA-11396
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11396
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Philip Thompson
>Assignee: Russ Hatch
>  Labels: dtest
>
> example failure:
> {code}
> 
> {code}
> http://cassci.datastax.com/job/upgrade_tests-all/25/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_2_2_UpTo_3_0_HEAD/collection_flush_test
> Failed on CassCI build upgrade_tests-all #25
> It's an inconsistent failure that happens across a number of tests. I'll 
> include all the ones I find here.
> http://cassci.datastax.com/job/upgrade_tests-all/26/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_2_1_UpTo_2_2_HEAD/static_with_limit_test/
> http://cassci.datastax.com/job/upgrade_tests-all/28/testReport/upgrade_tests.paging_test/TestPagingDatasetChangesNodes3RF3_2_2_HEAD_UpTo_Trunk/test_data_change_impacting_later_page/
> http://cassci.datastax.com/job/upgrade_tests-all/29/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_2_1_UpTo_Trunk/range_key_ordered_test/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11396) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_2_2_UpTo_3_0_HEAD.collection_flush_test

2016-03-21 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15205102#comment-15205102
 ] 

Russ Hatch commented on CASSANDRA-11396:


this might help: https://github.com/riptano/cassandra-dtest/pull/877

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_2_2_UpTo_3_0_HEAD.collection_flush_test
> -
>
> Key: CASSANDRA-11396
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11396
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Philip Thompson
>Assignee: Russ Hatch
>  Labels: dtest
>
> example failure:
> {code}
> 
> {code}
> http://cassci.datastax.com/job/upgrade_tests-all/25/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_2_2_UpTo_3_0_HEAD/collection_flush_test
> Failed on CassCI build upgrade_tests-all #25
> It's an inconsistent failure that happens across a number of tests. I'll 
> include all the ones I find here.
> http://cassci.datastax.com/job/upgrade_tests-all/26/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_2_1_UpTo_2_2_HEAD/static_with_limit_test/
> http://cassci.datastax.com/job/upgrade_tests-all/28/testReport/upgrade_tests.paging_test/TestPagingDatasetChangesNodes3RF3_2_2_HEAD_UpTo_Trunk/test_data_change_impacting_later_page/
> http://cassci.datastax.com/job/upgrade_tests-all/29/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_2_1_UpTo_Trunk/range_key_ordered_test/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-11396) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_2_2_UpTo_3_0_HEAD.collection_flush_test

2016-03-21 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch reassigned CASSANDRA-11396:
--

Assignee: Russ Hatch  (was: DS Test Eng)

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_2_2_UpTo_3_0_HEAD.collection_flush_test
> -
>
> Key: CASSANDRA-11396
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11396
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Philip Thompson
>Assignee: Russ Hatch
>  Labels: dtest
>
> example failure:
> {code}
> 
> {code}
> http://cassci.datastax.com/job/upgrade_tests-all/25/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_2_2_UpTo_3_0_HEAD/collection_flush_test
> Failed on CassCI build upgrade_tests-all #25
> It's an inconsistent failure that happens across a number of tests. I'll 
> include all the ones I find here.
> http://cassci.datastax.com/job/upgrade_tests-all/26/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_2_1_UpTo_2_2_HEAD/static_with_limit_test/
> http://cassci.datastax.com/job/upgrade_tests-all/28/testReport/upgrade_tests.paging_test/TestPagingDatasetChangesNodes3RF3_2_2_HEAD_UpTo_Trunk/test_data_change_impacting_later_page/
> http://cassci.datastax.com/job/upgrade_tests-all/29/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_2_1_UpTo_Trunk/range_key_ordered_test/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11256) dtest failure in repair_test.TestRepair.simple_sequential_repair_test

2016-03-21 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15205030#comment-15205030
 ] 

Russ Hatch commented on CASSANDRA-11256:


ccm pr: https://github.com/pcmanus/ccm/pull/475
dtest pr: https://github.com/riptano/cassandra-dtest/pull/876

> dtest failure in repair_test.TestRepair.simple_sequential_repair_test
> -
>
> Key: CASSANDRA-11256
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11256
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: Russ Hatch
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_offheap_dtest/275/testReport/repair_test/TestRepair/simple_sequential_repair_test
> Failed on CassCI build cassandra-2.2_offheap_dtest #275



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11256) dtest failure in repair_test.TestRepair.simple_sequential_repair_test

2016-03-21 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch updated CASSANDRA-11256:
---
Reviewer: Philip Thompson
  Status: Patch Available  (was: In Progress)

> dtest failure in repair_test.TestRepair.simple_sequential_repair_test
> -
>
> Key: CASSANDRA-11256
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11256
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: Russ Hatch
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_offheap_dtest/275/testReport/repair_test/TestRepair/simple_sequential_repair_test
> Failed on CassCI build cassandra-2.2_offheap_dtest #275



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11256) dtest failure in repair_test.TestRepair.simple_sequential_repair_test

2016-03-21 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15205021#comment-15205021
 ] 

Russ Hatch commented on CASSANDRA-11256:


It looks like the problem might be that on the initial cluster startup we were 
not waiting for the binary protocol, so the test was potentially trying to 
write at CL.ALL before all nodes were ready for that.

Additionally, the test performs a flush then an immediate shutdown on one node -- 
the code should probably block on the flush just in case.

> dtest failure in repair_test.TestRepair.simple_sequential_repair_test
> -
>
> Key: CASSANDRA-11256
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11256
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: Russ Hatch
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_offheap_dtest/275/testReport/repair_test/TestRepair/simple_sequential_repair_test
> Failed on CassCI build cassandra-2.2_offheap_dtest #275



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-11349) MerkleTree mismatch when multiple range tombstones exists for the same partition and interval

2016-03-21 Thread Stefan Podkowinski (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Podkowinski reassigned CASSANDRA-11349:
--

Assignee: Stefan Podkowinski

> MerkleTree mismatch when multiple range tombstones exists for the same 
> partition and interval
> -
>
> Key: CASSANDRA-11349
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11349
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Fabien Rousseau
>Assignee: Stefan Podkowinski
>
> We observed that repair, for some of our clusters, streamed a lot of data and 
> many partitions were "out of sync".
> Moreover, the read repair mismatch ratio is around 3% on those clusters, 
> which is really high.
> After investigation, it appears that, if two range tombstones exist for a 
> partition for the same range/interval, they're both included in the merkle 
> tree computation.
> But, if for some reason, on another node, the two range tombstones were 
> already compacted into a single range tombstone, this will result in a merkle 
> tree difference.
> Currently, this is clearly bad because MerkleTree differences are dependent 
> on compactions (and if a partition is deleted and created multiple times, the 
> only way to ensure that repair "works correctly"/"don't overstream data" is 
> to major compact before each repair... which is not really feasible).
> Below is a list of steps to easily reproduce this case:
> {noformat}
> ccm create test -v 2.1.13 -n 2 -s
> ccm node1 cqlsh
> CREATE KEYSPACE test_rt WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 2};
> USE test_rt;
> CREATE TABLE IF NOT EXISTS table1 (
> c1 text,
> c2 text,
> c3 float,
> c4 float,
> PRIMARY KEY ((c1), c2)
> );
> INSERT INTO table1 (c1, c2, c3, c4) VALUES ( 'a', 'b', 1, 2);
> DELETE FROM table1 WHERE c1 = 'a' AND c2 = 'b';
> ctrl ^d
> # now flush only one of the two nodes
> ccm node1 flush 
> ccm node1 cqlsh
> USE test_rt;
> INSERT INTO table1 (c1, c2, c3, c4) VALUES ( 'a', 'b', 1, 3);
> DELETE FROM table1 WHERE c1 = 'a' AND c2 = 'b';
> ctrl ^d
> ccm node1 repair
> # now grep the log and observe that there were some inconsistencies detected 
> between nodes (while it shouldn't have detected any)
> ccm node1 showlog | grep "out of sync"
> {noformat}
> Consequences of this are a costly repair, accumulating many small SSTables 
> (up to thousands for a rather short period of time when using VNodes, the 
> time for compaction to absorb those small files), but also an increased size 
> on disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11349) MerkleTree mismatch when multiple range tombstones exists for the same partition and interval

2016-03-21 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204971#comment-15204971
 ] 

Stefan Podkowinski commented on CASSANDRA-11349:


Looks like the {{MergeIterator.ManyToOne}} logic gets in the way of 
{{LazilyCompactedRow.Reducer}} doing its job. The iterator will stop adding 
atoms to the reducer and continue to advance, once two range tombstones with 
different deletion times are about to be merged.

> MerkleTree mismatch when multiple range tombstones exists for the same 
> partition and interval
> -
>
> Key: CASSANDRA-11349
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11349
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Fabien Rousseau
>
> We observed that repair, for some of our clusters, streamed a lot of data and 
> many partitions were "out of sync".
> Moreover, the read repair mismatch ratio is around 3% on those clusters, 
> which is really high.
> After investigation, it appears that, if two range tombstones exist for a 
> partition for the same range/interval, they're both included in the merkle 
> tree computation.
> But, if for some reason, on another node, the two range tombstones were 
> already compacted into a single range tombstone, this will result in a merkle 
> tree difference.
> Currently, this is clearly bad because MerkleTree differences are dependent 
> on compactions (and if a partition is deleted and created multiple times, the 
> only way to ensure that repair "works correctly"/"don't overstream data" is 
> to major compact before each repair... which is not really feasible).
> Below is a list of steps to easily reproduce this case:
> {noformat}
> ccm create test -v 2.1.13 -n 2 -s
> ccm node1 cqlsh
> CREATE KEYSPACE test_rt WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 2};
> USE test_rt;
> CREATE TABLE IF NOT EXISTS table1 (
> c1 text,
> c2 text,
> c3 float,
> c4 float,
> PRIMARY KEY ((c1), c2)
> );
> INSERT INTO table1 (c1, c2, c3, c4) VALUES ( 'a', 'b', 1, 2);
> DELETE FROM table1 WHERE c1 = 'a' AND c2 = 'b';
> ctrl ^d
> # now flush only one of the two nodes
> ccm node1 flush 
> ccm node1 cqlsh
> USE test_rt;
> INSERT INTO table1 (c1, c2, c3, c4) VALUES ( 'a', 'b', 1, 3);
> DELETE FROM table1 WHERE c1 = 'a' AND c2 = 'b';
> ctrl ^d
> ccm node1 repair
> # now grep the log and observe that there were some inconsistencies detected 
> between nodes (while it shouldn't have detected any)
> ccm node1 showlog | grep "out of sync"
> {noformat}
> Consequences of this are a costly repair, accumulating many small SSTables 
> (up to thousands for a rather short period of time when using VNodes, the 
> time for compaction to absorb those small files), but also an increased size 
> on disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11388) FailureDetector.java:456 - Ignoring interval time of

2016-03-21 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204877#comment-15204877
 ] 

Paulo Motta edited comment on CASSANDRA-11388 at 3/21/16 7:35 PM:
--

On CASSANDRA-10241 we downgraded all DEBUG level logging to TRACE but probably 
forgot to reduce this specific statement. Now that debug.log is being shipped 
by default, this might be a bit verbose/scary, so we should maybe decrease the 
level to TRACE or log only if it happened above a certain threshold in the last 
period.
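A minimal sketch of the threshold idea (hypothetical class and names, not Cassandra's actual FailureDetector code): count ignored intervals and emit a single summary line once per reporting window, instead of one DEBUG line per ignored interval.

```java
// Hedged sketch (hypothetical, not Cassandra's FailureDetector code):
// aggregate ignored-interval events and report once per window.
public class IgnoredIntervalReporter {
    private final long windowNanos;   // length of the reporting window
    private long windowStart;         // start of the current window
    private long ignoredCount;        // ignored intervals seen in this window

    public IgnoredIntervalReporter(long windowNanos, long startNanos) {
        this.windowNanos = windowNanos;
        this.windowStart = startNanos;
    }

    /** Records one ignored interval; returns a summary once per window, else null. */
    public String recordIgnored(long nowNanos) {
        ignoredCount++;
        if (nowNanos - windowStart >= windowNanos) {
            String summary = "Ignored " + ignoredCount + " interval times in the last window";
            ignoredCount = 0;       // reset for the next window
            windowStart = nowNanos;
            return summary;
        }
        return null;                // stay quiet inside the window
    }

    public static void main(String[] args) {
        IgnoredIntervalReporter r = new IgnoredIntervalReporter(1_000_000_000L, 0L);
        System.out.println(r.recordIgnored(500_000_000L));   // window not over yet
        System.out.println(r.recordIgnored(1_000_000_000L)); // summary for 2 ignored intervals
    }
}
```

The caller would pass {{System.nanoTime()}} in production; taking the clock value as a parameter keeps the class testable.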


was (Author: pauloricardomg):
On CASSANDRA-10241 we downgraded all DEBUG level logging to TRACE but probably 
forgot to reduce this specific statement. Now that debug.log is being shipped 
by default this might be a bit verbose/scary so we should maybe decrease the 
level to TRACE.

> FailureDetector.java:456 - Ignoring interval time of
> 
>
> Key: CASSANDRA-11388
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11388
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Cassandra 2.2.4
>Reporter: Relish Chackochan
>Priority: Minor
>
> We have a Cassandra cluster of 8 nodes on v2.2.4 using jdk1.8.0_65 (RHEL 6.5 
> 64-bit), and I am seeing the following error in the Cassandra debug.log file. 
> All the nodes are UP and running per
> "nodetool status". NTP is configured on all nodes and time is syncing well.
> DEBUG [GossipStage:1] 2016-03-21 06:45:30,700 FailureDetector.java:456 - 
> Ignoring interval time of 3069815316 for /192.168.1.153
> DEBUG [GossipStage:1] 2016-03-21 06:45:30,700 FailureDetector.java:456 - 
> Ignoring interval time of 2076119905 for /192.168.1.135
> DEBUG [GossipStage:1] 2016-03-21 06:45:30,700 FailureDetector.java:456 - 
> Ignoring interval time of 2683887772 for /192.168.1.151



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11395) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_2_1_UpTo_2_2_HEAD.cas_and_list_index_test

2016-03-21 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204912#comment-15204912
 ] 

Philip Thompson edited comment on CASSANDRA-11395 at 3/21/16 7:20 PM:
--

Looking at the test code, I really think this is a consistency problem. So I'm 
going to throw other tests with the same problem under this ticket.

http://cassci.datastax.com/job/upgrade_tests-all/25/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_2_2_HEAD_UpTo_Trunk/bug_6069_test/

http://cassci.datastax.com/job/upgrade_tests-all/29/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_2_2_HEAD_UpTo_Trunk/whole_map_conditional_test/


was (Author: philipthompson):
Looking at the test code, I really think this is a consistency problem. So I'm 
going to throw other tests with the same problem under this ticket.

http://cassci.datastax.com/job/upgrade_tests-all/25/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_2_2_HEAD_UpTo_Trunk/bug_6069_test/

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_2_1_UpTo_2_2_HEAD.cas_and_list_index_test
> ---
>
> Key: CASSANDRA-11395
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11395
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Philip Thompson
>Assignee: DS Test Eng
>  Labels: dtest
>
> {code}
> Expected [[0, ['foo', 'bar'], 'foobar']] from SELECT * FROM test, but got 
> [[0, [u'foi', u'bar'], u'foobar']]
> {code}
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all/24/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_2_1_UpTo_2_2_HEAD/cas_and_list_index_test
> Failed on CassCI build upgrade_tests-all #24
> Probably a consistency issue in the test code, but I haven't looked into it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11396) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_2_2_UpTo_3_0_HEAD.collection_flush_test

2016-03-21 Thread Philip Thompson (JIRA)
Philip Thompson created CASSANDRA-11396:
---

 Summary: dtest failure in 
upgrade_tests.cql_tests.TestCQLNodes3RF3_2_2_UpTo_3_0_HEAD.collection_flush_test
 Key: CASSANDRA-11396
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11396
 Project: Cassandra
  Issue Type: Test
  Components: Testing
Reporter: Philip Thompson
Assignee: DS Test Eng


example failure:

{code}

{code}

http://cassci.datastax.com/job/upgrade_tests-all/25/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_2_2_UpTo_3_0_HEAD/collection_flush_test

Failed on CassCI build upgrade_tests-all #25

It's an inconsistent failure that happens across a number of tests. I'll 
include all the ones I find here.

http://cassci.datastax.com/job/upgrade_tests-all/26/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_2_1_UpTo_2_2_HEAD/static_with_limit_test/

http://cassci.datastax.com/job/upgrade_tests-all/28/testReport/upgrade_tests.paging_test/TestPagingDatasetChangesNodes3RF3_2_2_HEAD_UpTo_Trunk/test_data_change_impacting_later_page/

http://cassci.datastax.com/job/upgrade_tests-all/29/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_2_1_UpTo_Trunk/range_key_ordered_test/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-11257) dtest failure in consistency_test.TestConsistency.short_read_test

2016-03-21 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch resolved CASSANDRA-11257.

Resolution: Cannot Reproduce

> dtest failure in consistency_test.TestConsistency.short_read_test
> -
>
> Key: CASSANDRA-11257
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11257
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: Russ Hatch
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_offheap_dtest/279/testReport/consistency_test/TestConsistency/short_read_test
> Failed on CassCI build cassandra-2.2_offheap_dtest #279
> from offheap tests job, 2 flaps in recent history:
> {noformat}
> code=1000 [Unavailable exception] message="Cannot achieve consistency level 
> QUORUM" info={'required_replicas': 2, 'alive_replicas': 1, 'consistency': 
> 'QUORUM'}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11257) dtest failure in consistency_test.TestConsistency.short_read_test

2016-03-21 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204923#comment-15204923
 ] 

Russ Hatch commented on CASSANDRA-11257:


no failures out of 100 runs, closing.

> dtest failure in consistency_test.TestConsistency.short_read_test
> -
>
> Key: CASSANDRA-11257
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11257
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: Russ Hatch
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_offheap_dtest/279/testReport/consistency_test/TestConsistency/short_read_test
> Failed on CassCI build cassandra-2.2_offheap_dtest #279
> from offheap tests job, 2 flaps in recent history:
> {noformat}
> code=1000 [Unavailable exception] message="Cannot achieve consistency level 
> QUORUM" info={'required_replicas': 2, 'alive_replicas': 1, 'consistency': 
> 'QUORUM'}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11383) SASI index build leads to massive OOM

2016-03-21 Thread DOAN DuyHai (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204917#comment-15204917
 ] 

DOAN DuyHai commented on CASSANDRA-11383:
-

bq. we are probably not going to remove SPARSE but rather we are just going to fail 
index build if SPARSE is set but its requirements are not met, so operators 
will be able to manually change the schema and trigger index rebuild

 I prefer this alternative. I believe there is a real need for {{SPARSE}} 
indices. 

> SASI index build leads to massive OOM
> -
>
> Key: CASSANDRA-11383
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11383
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: C* 3.4
>Reporter: DOAN DuyHai
> Attachments: CASSANDRA-11383.patch, 
> SASI_Index_build_LCS_1G_Max_SSTable_Size_logs.tar.gz, 
> new_system_log_CMS_8GB_OOM.log, system.log_sasi_build_oom
>
>
> 13 bare metal machines
> - 6 cores CPU (12 HT)
> - 64Gb RAM
> - 4 SSD in RAID0
>  JVM settings:
> - G1 GC
> - Xms32G, Xmx32G
> Data set:
>  - ≈ 100Gb/per node
>  - 1.3 Tb cluster-wide
>  - ≈ 20Gb for all SASI indices
> C* settings:
> - concurrent_compactors: 1
> - compaction_throughput_mb_per_sec: 256
> - memtable_heap_space_in_mb: 2048
> - memtable_offheap_space_in_mb: 2048
> I created 9 SASI indices
>  - 8 indices with text field, NonTokenizingAnalyser,  PREFIX mode, 
> case-insensitive
>  - 1 index with numeric field, SPARSE mode
>  After a while, the nodes just go OOM.
>  I attach log files. You can see a lot of GC happening while index segments
> are flushed to disk. At some point the nodes OOM.
> /cc [~xedin]





[jira] [Commented] (CASSANDRA-11395) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_2_1_UpTo_2_2_HEAD.cas_and_list_index_test

2016-03-21 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204912#comment-15204912
 ] 

Philip Thompson commented on CASSANDRA-11395:
-

Looking at the test code, I really think this is a consistency problem. So I'm 
going to throw other tests with the same problem under this ticket.

http://cassci.datastax.com/job/upgrade_tests-all/25/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_2_2_HEAD_UpTo_Trunk/bug_6069_test/

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_2_1_UpTo_2_2_HEAD.cas_and_list_index_test
> ---
>
> Key: CASSANDRA-11395
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11395
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Philip Thompson
>Assignee: DS Test Eng
>  Labels: dtest
>
> {code}
> Expected [[0, ['foo', 'bar'], 'foobar']] from SELECT * FROM test, but got 
> [[0, [u'foi', u'bar'], u'foobar']]
> {code}
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all/24/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_2_1_UpTo_2_2_HEAD/cas_and_list_index_test
> Failed on CassCI build upgrade_tests-all #24
> Probably a consistency issue in the test code, but I haven't looked into it.





[jira] [Created] (CASSANDRA-11395) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_2_1_UpTo_2_2_HEAD.cas_and_list_index_test

2016-03-21 Thread Philip Thompson (JIRA)
Philip Thompson created CASSANDRA-11395:
---

 Summary: dtest failure in 
upgrade_tests.cql_tests.TestCQLNodes3RF3_2_1_UpTo_2_2_HEAD.cas_and_list_index_test
 Key: CASSANDRA-11395
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11395
 Project: Cassandra
  Issue Type: Test
  Components: Testing
Reporter: Philip Thompson
Assignee: DS Test Eng


{code}
Expected [[0, ['foo', 'bar'], 'foobar']] from SELECT * FROM test, but got [[0, 
[u'foi', u'bar'], u'foobar']]
{code}

example failure:

http://cassci.datastax.com/job/upgrade_tests-all/24/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_2_1_UpTo_2_2_HEAD/cas_and_list_index_test

Failed on CassCI build upgrade_tests-all #24

Probably a consistency issue in the test code, but I haven't looked into it.





[jira] [Commented] (CASSANDRA-11383) SASI index build leads to massive OOM

2016-03-21 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204881#comment-15204881
 ] 

Pavel Yaskevich commented on CASSANDRA-11383:
-

[~doanduyhai] We are currently working on fixing the stitching step's memory 
footprint. It looks like we are probably not going to remove SPARSE; rather, we 
are just going to fail the index build if SPARSE is set but its requirements 
are not met, so operators will be able to manually change the schema and 
trigger an index rebuild. Also, it's not necessary to explicitly set the mode 
at all; it will be PREFIX by default, which works fine for both text and 
numeric fields.
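A hedged sketch of the validation described here, failing fast when the data is too dense for SPARSE. The per-term row limit, the names, and the shape of the input are illustrative assumptions, not the actual SASI check:

```python
# Hedged sketch: fail a SPARSE index build when a term maps to too many rows.
# The threshold (5) and all names are illustrative, not SASI internals.

MAX_ROWS_PER_SPARSE_TERM = 5  # assumed limit for illustration

def validate_sparse_terms(term_to_rows: dict) -> None:
    for term, rows in term_to_rows.items():
        if len(rows) > MAX_ROWS_PER_SPARSE_TERM:
            raise ValueError(
                f"SPARSE mode unsuitable: term {term!r} maps to "
                f"{len(rows)} rows (limit {MAX_ROWS_PER_SPARSE_TERM}); "
                "change the schema to PREFIX mode and rebuild the index")

validate_sparse_terms({42: [1, 2, 3]})  # sparse data: passes

failed = False
try:
    validate_sparse_terms({7: list(range(100))})  # dense term: rejected
except ValueError:
    failed = True
assert failed
```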

> SASI index build leads to massive OOM
> -
>
> Key: CASSANDRA-11383
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11383
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: C* 3.4
>Reporter: DOAN DuyHai
> Attachments: CASSANDRA-11383.patch, 
> SASI_Index_build_LCS_1G_Max_SSTable_Size_logs.tar.gz, 
> new_system_log_CMS_8GB_OOM.log, system.log_sasi_build_oom
>
>
> 13 bare metal machines
> - 6 cores CPU (12 HT)
> - 64Gb RAM
> - 4 SSD in RAID0
>  JVM settings:
> - G1 GC
> - Xms32G, Xmx32G
> Data set:
>  - ≈ 100Gb/per node
>  - 1.3 Tb cluster-wide
>  - ≈ 20Gb for all SASI indices
> C* settings:
> - concurrent_compactors: 1
> - compaction_throughput_mb_per_sec: 256
> - memtable_heap_space_in_mb: 2048
> - memtable_offheap_space_in_mb: 2048
> I created 9 SASI indices
>  - 8 indices with text field, NonTokenizingAnalyser,  PREFIX mode, 
> case-insensitive
>  - 1 index with numeric field, SPARSE mode
>  After a while, the nodes just go OOM.
>  I attach log files. You can see a lot of GC happening while index segments
> are flushed to disk. At some point the nodes OOM.
> /cc [~xedin]





[jira] [Commented] (CASSANDRA-11388) FailureDetector.java:456 - Ignoring interval time of

2016-03-21 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204877#comment-15204877
 ] 

Paulo Motta commented on CASSANDRA-11388:
-

On CASSANDRA-10241 we downgraded all DEBUG level logging to TRACE but probably 
forgot to reduce this specific statement. Now that debug.log is being shipped 
by default this might be a bit verbose/scary so we should maybe decrease the 
level to TRACE.
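The proposed fix is only a severity change. A small Python analogy of why it quiets debug.log (Cassandra itself uses logback, so this is purely illustrative):

```python
import logging

# Illustration of the proposed change: if debug.log captures DEBUG and above,
# moving the statement down to a TRACE level silences it by default.
TRACE = 5  # custom level below DEBUG (10)
logging.addLevelName(TRACE, "TRACE")

records = []

class Capture(logging.Handler):
    def emit(self, record):
        records.append(record.getMessage())

log = logging.getLogger("FailureDetector")
log.setLevel(logging.DEBUG)  # the debug.log default
log.addHandler(Capture())

# Logged at DEBUG: appears. Logged at TRACE: filtered out.
log.debug("Ignoring interval time of %d for %s", 3069815316, "/192.168.1.153")
log.log(TRACE, "Ignoring interval time of %d for %s", 2076119905, "/192.168.1.135")

assert len(records) == 1  # only the DEBUG line reaches the appender
```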

> FailureDetector.java:456 - Ignoring interval time of
> 
>
> Key: CASSANDRA-11388
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11388
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Cassandra 2.2.4
>Reporter: Relish Chackochan
>Priority: Minor
>
> We have a Cassandra cluster of 8 nodes on 2.2.4 using jdk1.8.0_65 (RHEL 6.5
> 64-bit), and I am seeing the following messages in the Cassandra debug.log
> file. All the nodes are UP and running according to "nodetool status". NTP is
> configured on all nodes and time is syncing well.
> DEBUG [GossipStage:1] 2016-03-21 06:45:30,700 FailureDetector.java:456 - 
> Ignoring interval time of 3069815316 for /192.168.1.153
> DEBUG [GossipStage:1] 2016-03-21 06:45:30,700 FailureDetector.java:456 - 
> Ignoring interval time of 2076119905 for /192.168.1.135
> DEBUG [GossipStage:1] 2016-03-21 06:45:30,700 FailureDetector.java:456 - 
> Ignoring interval time of 2683887772 for /192.168.1.151
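These messages come from the failure detector dropping heartbeat intervals it considers too long to be statistically meaningful. A hedged sketch of that filtering (the 2-second cutoff and all names are assumptions for illustration, not the actual FailureDetector code):

```python
# Hedged sketch: heartbeat gaps above a cutoff (assumed 2 s, in nanoseconds)
# are dropped from the failure detector's arrival window instead of skewing it.
MAX_INTERVAL_NANOS = 2_000_000_000  # assumed default cutoff

def record_interval(window: list, interval_nanos: int) -> bool:
    """Append a heartbeat interval; return False (ignored) if it is too long."""
    if interval_nanos > MAX_INTERVAL_NANOS:
        return False  # surfaces as "Ignoring interval time of ..." in the log
    window.append(interval_nanos)
    return True

window = []
assert not record_interval(window, 3069815316)   # the value from the log: ~3.07 s
assert record_interval(window, 1_500_000_000)    # a normal ~1.5 s gap
assert window == [1_500_000_000]
```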





[jira] [Created] (CASSANDRA-11394) dtest failure in upgrade_tests.upgrade_through_versions_test.ProtoV1Upgrade_2_1_UpTo_2_2_HEAD.bootstrap_multidc_test

2016-03-21 Thread Philip Thompson (JIRA)
Philip Thompson created CASSANDRA-11394:
---

 Summary: dtest failure in 
upgrade_tests.upgrade_through_versions_test.ProtoV1Upgrade_2_1_UpTo_2_2_HEAD.bootstrap_multidc_test
 Key: CASSANDRA-11394
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11394
 Project: Cassandra
  Issue Type: Test
  Components: Testing
Reporter: Philip Thompson
Assignee: DS Test Eng
 Attachments: node1.log, node1_debug.log, node2.log, node2_debug.log, 
node3.log, node3_debug.log, node4.log, node4_debug.log, node5.log, 
node5_debug.log

The other nodes fail to notice node5 come up. If we check the logs, they're 
complaining about long interval times for that node. Might just be an issue 
with how many nodes we're starting? I don't actually see any errors.

Logs for this failure are attached. This is flaky, and has happened on a few 
bootstrap_multidc_tests.

example failure:

http://cassci.datastax.com/job/upgrade_tests-all/24/testReport/upgrade_tests.upgrade_through_versions_test/ProtoV1Upgrade_2_1_UpTo_2_2_HEAD/bootstrap_multidc_test

Failed on CassCI build upgrade_tests-all #24





[jira] [Commented] (CASSANDRA-9967) Determine if a Materialized View is finished building, without having to query each node

2016-03-21 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204795#comment-15204795
 ] 

Jonathan Ellis commented on CASSANDRA-9967:
---

Carl, how is this looking?

> Determine if a Materialized View is finished building, without having to 
> query each node
> 
>
> Key: CASSANDRA-9967
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9967
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Alan Boudreault
>Assignee: Carl Yeksigian
>Priority: Minor
>  Labels: lhf
> Fix For: 3.x
>
>
> Since an MV is eventually consistent with its base table, it would be nice if
> we could easily know the state of the MV after its creation, so we could wait
> until the MV is built before doing some operations.
> // cc [~mbroecheler] [~tjake] [~carlyeks] [~enigmacurry]
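A sketch of the workflow this feature would enable: wait on a single cluster-wide build-status check instead of querying each node. The `wait_until_built` helper and the stub check are hypothetical, not a driver API:

```python
import time

# Hedged sketch: poll one "is the view built?" check until it succeeds.
# `check` is a stub; with the feature, it would be a single query against a
# cluster-wide status table rather than one query per node.

def wait_until_built(check, timeout=60.0, interval=0.01):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Stand-in for the status query: reports "built" on the third poll.
state = {"polls": 0}
def fake_check():
    state["polls"] += 1
    return state["polls"] >= 3

assert wait_until_built(fake_check, timeout=5.0)
assert state["polls"] == 3
```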





[jira] [Assigned] (CASSANDRA-11361) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_round_trip_with_rate_file

2016-03-21 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch reassigned CASSANDRA-11361:
--

Assignee: Russ Hatch  (was: DS Test Eng)

> dtest failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_round_trip_with_rate_file
> --
>
> Key: CASSANDRA-11361
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11361
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: Russ Hatch
>  Labels: dtest
>
> This test has started failing recently:
> http://cassci.datastax.com/job/cassandra-2.1_offheap_dtest/318/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_round_trip_with_rate_file
> In addition to failing on the offheap-memtable job, it also fails on the 
> vanilla 2.1 job:
> http://cassci.datastax.com/job/cassandra-2.1_dtest/lastCompletedBuild/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_round_trip_with_rate_file/
> which makes me think this isn't just a weird flap. I have not yet seen this 
> failure on versions higher than 2.1.
> Test Eng should take the first crack at debugging this, but if you don't make 
> headway, Stefania added the test in November, so she's probably the person to 
> escalate this to.





[jira] [Resolved] (CASSANDRA-11326) dtest failure in thrift_tests.TestMutations.test_bad_calls

2016-03-21 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch resolved CASSANDRA-11326.

Resolution: Cannot Reproduce

100 test runs were clean. Resolving.

> dtest failure in thrift_tests.TestMutations.test_bad_calls
> --
>
> Key: CASSANDRA-11326
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11326
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: Russ Hatch
>  Labels: dtest
>
> This test failed on two consecutive CassCI builds on trunk, but hasn't failed 
> otherwise:
> http://cassci.datastax.com/job/trunk_dtest/1041/testReport/thrift_tests/TestMutations/test_bad_calls
> I can't see anything obvious in C* itself that would have caused this error, 
> and when I ran it locally on the 2 C* SHAs on which the test failed on CassCI 
> (615d0e15551cbb7e8f5100b33723562c31876889 and 
> e017f9494844234fa73848890347f59c622cea40), it passed.





[jira] [Updated] (CASSANDRA-11393) dtest failure in upgrade_tests.upgrade_through_versions_test.ProtoV3Upgrade_2_1_UpTo_3_0_HEAD.rolling_upgrade_test

2016-03-21 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-11393:

Issue Type: Bug  (was: Test)

> dtest failure in 
> upgrade_tests.upgrade_through_versions_test.ProtoV3Upgrade_2_1_UpTo_3_0_HEAD.rolling_upgrade_test
> --
>
> Key: CASSANDRA-11393
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11393
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Philip Thompson
>  Labels: dtest
> Fix For: 3.0.x
>
>
> We are seeing a failure in the upgrade tests that go from 2.1 to 3.0
> {code}
> node2: ERROR [SharedPool-Worker-2] 2016-03-10 20:05:17,865 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0xeb79b477, 
> /127.0.0.1:39613 => /127.0.0.2:9042]
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.db.ReadCommand$LegacyReadCommandSerializer.serializedSize(ReadCommand.java:1208)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadCommand$LegacyReadCommandSerializer.serializedSize(ReadCommand.java:1155)
>  ~[main/:na]
>   at org.apache.cassandra.net.MessageOut.payloadSize(MessageOut.java:166) 
> ~[main/:na]
>   at 
> org.apache.cassandra.net.OutboundTcpConnectionPool.getConnection(OutboundTcpConnectionPool.java:72)
>  ~[main/:na]
>   at 
> org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:609)
>  ~[main/:na]
>   at 
> org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:758)
>  ~[main/:na]
>   at 
> org.apache.cassandra.net.MessagingService.sendRR(MessagingService.java:701) 
> ~[main/:na]
>   at 
> org.apache.cassandra.net.MessagingService.sendRRWithFailure(MessagingService.java:684)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor.makeRequests(AbstractReadExecutor.java:110)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor.makeDataRequests(AbstractReadExecutor.java:85)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor$AlwaysSpeculatingReadExecutor.executeAsync(AbstractReadExecutor.java:330)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.doInitialQueries(StorageProxy.java:1699)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1654) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1601) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1520) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.SinglePartitionReadCommand.execute(SinglePartitionReadCommand.java:302)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:67)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.pager.SinglePartitionPager.fetchPage(SinglePartitionPager.java:34)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement$Pager$NormalPager.fetchPage(SelectStatement.java:297)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:333)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:209)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:76)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:206)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:472)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:449)
>  ~[main/:na]
>   at 
> org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:130)
>  ~[main/:na]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [main/:na]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [main/:na]
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> 

[jira] [Created] (CASSANDRA-11393) dtest failure in upgrade_tests.upgrade_through_versions_test.ProtoV3Upgrade_2_1_UpTo_3_0_HEAD.rolling_upgrade_test

2016-03-21 Thread Philip Thompson (JIRA)
Philip Thompson created CASSANDRA-11393:
---

 Summary: dtest failure in 
upgrade_tests.upgrade_through_versions_test.ProtoV3Upgrade_2_1_UpTo_3_0_HEAD.rolling_upgrade_test
 Key: CASSANDRA-11393
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11393
 Project: Cassandra
  Issue Type: Test
Reporter: Philip Thompson
 Fix For: 3.0.x


We are seeing a failure in the upgrade tests that go from 2.1 to 3.0

{code}
node2: ERROR [SharedPool-Worker-2] 2016-03-10 20:05:17,865 Message.java:611 - 
Unexpected exception during request; channel = [id: 0xeb79b477, 
/127.0.0.1:39613 => /127.0.0.2:9042]
java.lang.AssertionError: null
at 
org.apache.cassandra.db.ReadCommand$LegacyReadCommandSerializer.serializedSize(ReadCommand.java:1208)
 ~[main/:na]
at 
org.apache.cassandra.db.ReadCommand$LegacyReadCommandSerializer.serializedSize(ReadCommand.java:1155)
 ~[main/:na]
at org.apache.cassandra.net.MessageOut.payloadSize(MessageOut.java:166) 
~[main/:na]
at 
org.apache.cassandra.net.OutboundTcpConnectionPool.getConnection(OutboundTcpConnectionPool.java:72)
 ~[main/:na]
at 
org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:609)
 ~[main/:na]
at 
org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:758) 
~[main/:na]
at 
org.apache.cassandra.net.MessagingService.sendRR(MessagingService.java:701) 
~[main/:na]
at 
org.apache.cassandra.net.MessagingService.sendRRWithFailure(MessagingService.java:684)
 ~[main/:na]
at 
org.apache.cassandra.service.AbstractReadExecutor.makeRequests(AbstractReadExecutor.java:110)
 ~[main/:na]
at 
org.apache.cassandra.service.AbstractReadExecutor.makeDataRequests(AbstractReadExecutor.java:85)
 ~[main/:na]
at 
org.apache.cassandra.service.AbstractReadExecutor$AlwaysSpeculatingReadExecutor.executeAsync(AbstractReadExecutor.java:330)
 ~[main/:na]
at 
org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.doInitialQueries(StorageProxy.java:1699)
 ~[main/:na]
at 
org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1654) 
~[main/:na]
at 
org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1601) 
~[main/:na]
at 
org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1520) 
~[main/:na]
at 
org.apache.cassandra.db.SinglePartitionReadCommand.execute(SinglePartitionReadCommand.java:302)
 ~[main/:na]
at 
org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:67)
 ~[main/:na]
at 
org.apache.cassandra.service.pager.SinglePartitionPager.fetchPage(SinglePartitionPager.java:34)
 ~[main/:na]
at 
org.apache.cassandra.cql3.statements.SelectStatement$Pager$NormalPager.fetchPage(SelectStatement.java:297)
 ~[main/:na]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:333)
 ~[main/:na]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:209)
 ~[main/:na]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:76)
 ~[main/:na]
at 
org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:206)
 ~[main/:na]
at 
org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:472)
 ~[main/:na]
at 
org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:449)
 ~[main/:na]
at 
org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:130)
 ~[main/:na]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
 [main/:na]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
 [main/:na]
at 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[na:1.8.0_51]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 [main/:na]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[main/:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_51]
{code}

example failure:


[jira] [Commented] (CASSANDRA-10563) Integrate new upgrade test into dtest upgrade suite

2016-03-21 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204757#comment-15204757
 ] 

Jim Witschey commented on CASSANDRA-10563:
--

What you did (https://github.com/riptano/cassandra-dtest/pull/858) looks good 
to me. I'll be working on merging 8099_upgrade_tests and reviewing your PR 
today.

> Integrate new upgrade test into dtest upgrade suite
> ---
>
> Key: CASSANDRA-10563
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10563
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Testing
>Reporter: Jim Witschey
>Assignee: Jim Witschey
>Priority: Critical
> Fix For: 3.0.x
>
>
> This is a follow-up ticket for CASSANDRA-10360, specifically [~slebresne]'s 
> comment here:
> https://issues.apache.org/jira/browse/CASSANDRA-10360?focusedCommentId=14966539=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14966539
> These tests should be incorporated into the [{{upgrade_tests}} in 
> dtest|https://github.com/riptano/cassandra-dtest/tree/master/upgrade_tests]. 
> I'll take this on; [~nutbunnies] is also a good person for it, but I'll 
> likely get to it first.





[jira] [Commented] (CASSANDRA-11256) dtest failure in repair_test.TestRepair.simple_sequential_repair_test

2016-03-21 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204684#comment-15204684
 ] 

Russ Hatch commented on CASSANDRA-11256:


Bad settings on the job above; repeating here: 
http://cassci.datastax.com/view/Parameterized/job/parameterized_dtest_multiplexer/30/

> dtest failure in repair_test.TestRepair.simple_sequential_repair_test
> -
>
> Key: CASSANDRA-11256
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11256
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: Russ Hatch
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_offheap_dtest/275/testReport/repair_test/TestRepair/simple_sequential_repair_test
> Failed on CassCI build cassandra-2.2_offheap_dtest #275





[jira] [Commented] (CASSANDRA-11326) dtest failure in thrift_tests.TestMutations.test_bad_calls

2016-03-21 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204677#comment-15204677
 ] 

Russ Hatch commented on CASSANDRA-11326:


Bulk run here to see if this repros on 100 runs: 
http://cassci.datastax.com/view/Parameterized/job/parameterized_dtest_multiplexer/29/
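The multiplexer jobs linked in these tickets essentially rerun one test many times and count flaps. A minimal sketch with a stand-in flaky test (names and failure rate are illustrative):

```python
import random

# Hedged sketch of a test multiplexer: run the same test N times and report
# which runs failed. `flaky_test` is a stand-in that fails roughly 2% of runs.

def multiplex(test, runs=100):
    return [i for i in range(runs) if not test()]

random.seed(11326)
def flaky_test():
    return random.random() > 0.02

failures = multiplex(flaky_test, runs=100)
print(f"{len(failures)} failures out of 100 runs")
```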

> dtest failure in thrift_tests.TestMutations.test_bad_calls
> --
>
> Key: CASSANDRA-11326
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11326
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: Russ Hatch
>  Labels: dtest
>
> This test failed on two consecutive CassCI builds on trunk, but hasn't failed 
> otherwise:
> http://cassci.datastax.com/job/trunk_dtest/1041/testReport/thrift_tests/TestMutations/test_bad_calls
> I can't see anything obvious in C* itself that would have caused this error, 
> and when I ran it locally on the 2 C* SHAs on which the test failed on CassCI 
> (615d0e15551cbb7e8f5100b33723562c31876889 and 
> e017f9494844234fa73848890347f59c622cea40), it passed.





[jira] [Commented] (CASSANDRA-10612) Protocol v3 upgrade tests on 2.1->3.0 path fail when compaction is interrupted

2016-03-21 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204673#comment-15204673
 ] 

Jim Witschey commented on CASSANDRA-10612:
--

bq. This does not appear to be manifesting any longer.

Any idea why?

> Protocol v3 upgrade tests on 2.1->3.0 path fail when compaction is interrupted
> --
>
> Key: CASSANDRA-10612
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10612
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Russ Hatch
>  Labels: dtest
> Fix For: 3.0.x
>
>
> The following tests in the upgrade_through_versions dtest suite fail:
> * 
> upgrade_through_versions_test.py:TestRandomPartitionerUpgrade.rolling_upgrade_test
> * 
> upgrade_through_versions_test.py:TestRandomPartitionerUpgrade.rolling_upgrade_with_internode_ssl_test
> * 
> upgrade_through_versions_test.py:TestUpgradeThroughVersions.rolling_upgrade_with_internode_ssl_test
> * 
> upgrade_through_versions_test.py:TestUpgradeThroughVersions.rolling_upgrade_test
> * 
> upgrade_through_versions_test.py:TestUpgrade_from_2_1_latest_tag_to_cassandra_3_0_HEAD.rolling_upgrade_test
> * 
> upgrade_through_versions_test.py:TestUpgrade_from_2_1_latest_tag_to_cassandra_3_0_HEAD.rolling_upgrade_with_internode_ssl_test
> * 
> upgrade_through_versions_test.py:TestUpgrade_from_cassandra_2_1_HEAD_to_cassandra_3_0_latest_tag.rolling_upgrade_test
> * 
> upgrade_through_versions_test.py:TestUpgrade_from_cassandra_2_1_HEAD_to_cassandra_3_0_latest_tag.rolling_upgrade_with_internode_ssl_test
> * 
> upgrade_through_versions_test.py:TestUpgrade_from_cassandra_2_1_HEAD_to_cassandra_3_0_HEAD.rolling_upgrade_test
> See this report:
> http://cassci.datastax.com/view/Upgrades/job/cassandra_upgrade_2.1_to_3.0_proto_v3/10/testReport/
> They fail with the following error:
> {code}
> A subprocess has terminated early. Subprocess statuses: Process-41 (is_alive: 
> True), Process-42 (is_alive: False), Process-43 (is_alive: True), Process-44 
> (is_alive: False), attempting to terminate remaining subprocesses now.
> {code}
> and with logs that look like this:
> {code}
> Unexpected error in node1 node log: ['ERROR [SecondaryIndexManagement:1] 
> 2015-10-27 00:06:52,335 CassandraDaemon.java:195 - Exception in thread 
> Thread[SecondaryIndexManagement:1,5,main] java.lang.RuntimeException: 
> java.util.concurrent.ExecutionException: 
> org.apache.cassandra.db.compaction.CompactionInterruptedException: Compaction 
> interrupted: Secondary index 
> build@41202370-7c3e-11e5-9331-6bb6e58f8b1b(upgrade, cf, 578160/1663620)bytes
> at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:368) 
> ~[main/:na]
> at 
> org.apache.cassandra.index.internal.CassandraIndex.buildBlocking(CassandraIndex.java:688)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.internal.CassandraIndex.lambda$getBuildIndexTask$206(CassandraIndex.java:658)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.internal.CassandraIndex$$Lambda$151/1841229245.call(Unknown
>  Source) ~[na:na]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_51]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_51]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_51]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_51] Caused by: 
> java.util.concurrent.ExecutionException: 
> org.apache.cassandra.db.compaction.CompactionInterruptedException: Compaction 
> interrupted: Secondary index 
> build@41202370-7c3e-11e5-9331-6bb6e58f8b1b(upgrade, cf, 
> 578160/1663620)bytes
> at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[na:1.8.0_51]
> at java.util.concurrent.FutureTask.get(FutureTask.java:192) ~[na:1.8.0_51]
> at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:364) 
> ~[main/:na]
> ... 7 common frames omitted Caused by: 
> org.apache.cassandra.db.compaction.CompactionInterruptedException: Compaction 
> interrupted: Secondary index 
> build@41202370-7c3e-11e5-9331-6bb6e58f8b1b(upgrade, cf, 578160/1663620)bytes
> at 
> org.apache.cassandra.index.SecondaryIndexBuilder.build(SecondaryIndexBuilder.java:67)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$11.run(CompactionManager.java:1269)
>  ~[main/:na]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_51]
> ... 4 common frames omitted', 'ERROR [HintsDispatcher:2] 2015-10-27 
> 00:08:48,520 CassandraDaemon.java:195 - Exception in thread 
> Thread[HintsDispatcher:2,1,main]', 'ERROR [HintsDispatcher:2] 2015-10-27 
> 00:11:58,336 CassandraDaemon.java:195 - Exception in thread 
> Thread[HintsDispatcher:2,1,main]']
> {code}




[jira] [Assigned] (CASSANDRA-11326) dtest failure in thrift_tests.TestMutations.test_bad_calls

2016-03-21 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch reassigned CASSANDRA-11326:
--

Assignee: Russ Hatch  (was: DS Test Eng)

> dtest failure in thrift_tests.TestMutations.test_bad_calls
> --
>
> Key: CASSANDRA-11326
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11326
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: Russ Hatch
>  Labels: dtest
>
> This test failed on two consecutive CassCI builds on trunk, but hasn't failed 
> otherwise:
> http://cassci.datastax.com/job/trunk_dtest/1041/testReport/thrift_tests/TestMutations/test_bad_calls
> I can't see anything obvious in C* itself that would have caused this error, 
> and when I ran it locally on the 2 C* SHAs on which the test failed on CassCI 
> (615d0e15551cbb7e8f5100b33723562c31876889 and 
> e017f9494844234fa73848890347f59c622cea40), it passed.





[jira] [Updated] (CASSANDRA-11276) (windows) dtest failure in commitlog_test.TestCommitLog.test_compression_error

2016-03-21 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch updated CASSANDRA-11276:
---
Assignee: DS Test Eng  (was: Russ Hatch)

> (windows) dtest failure in commitlog_test.TestCommitLog.test_compression_error
> --
>
> Key: CASSANDRA-11276
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11276
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_dtest_win32/178/testReport/commitlog_test/TestCommitLog/test_compression_error
> Failed on CassCI build cassandra-3.0_dtest_win32 #178
> Intermittent failures of this test on windows, error:
> {noformat}
> 11 Feb 2016 20:00:22 [node1] Missing: ['Could not create Compression for type 
> org.apache.cassandra.io.compress.LZ5Compressor']
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11276) (windows) dtest failure in commitlog_test.TestCommitLog.test_compression_error

2016-03-21 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch updated CASSANDRA-11276:
---
Summary: (windows) dtest failure in 
commitlog_test.TestCommitLog.test_compression_error  (was: dtest failure in 
commitlog_test.TestCommitLog.test_compression_error)

> (windows) dtest failure in commitlog_test.TestCommitLog.test_compression_error
> --
>
> Key: CASSANDRA-11276
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11276
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: Russ Hatch
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_dtest_win32/178/testReport/commitlog_test/TestCommitLog/test_compression_error
> Failed on CassCI build cassandra-3.0_dtest_win32 #178
> Intermittent failures of this test on windows, error:
> {noformat}
> 11 Feb 2016 20:00:22 [node1] Missing: ['Could not create Compression for type 
> org.apache.cassandra.io.compress.LZ5Compressor']
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11381) Node running with join_ring=false and authentication can not serve requests

2016-03-21 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204660#comment-15204660
 ] 

Joshua McKenzie commented on CASSANDRA-11381:
-

bq. trying to figure out how to write a dtest for this
I say we hold off until you've worked out a dtest for it. I just wanted to make 
sure there wasn't a patch waiting on it that could slip through the cracks.

> Node running with join_ring=false and authentication can not serve requests
> ---
>
> Key: CASSANDRA-11381
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11381
> Project: Cassandra
>  Issue Type: Bug
>Reporter: mck
>Assignee: mck
> Attachments: 11381-2.0.txt, 11381-2.1.txt, 11381-2.2.txt, 
> 11381-3.0.txt, 11381-trunk.txt
>
>
> Starting up a node with {{-Dcassandra.join_ring=false}} in a cluster that has 
> authentication configured, eg PasswordAuthenticator, won't be able to serve 
> requests. This is because {{Auth.setup()}} never gets called during the 
> startup.
> Without {{Auth.setup()}} having been called in {{StorageService}} clients 
> connecting to the node fail with the node throwing
> {noformat}
> java.lang.NullPointerException
> at 
> org.apache.cassandra.auth.PasswordAuthenticator.authenticate(PasswordAuthenticator.java:119)
> at 
> org.apache.cassandra.thrift.CassandraServer.login(CassandraServer.java:1471)
> at 
> org.apache.cassandra.thrift.Cassandra$Processor$login.getResult(Cassandra.java:3505)
> at 
> org.apache.cassandra.thrift.Cassandra$Processor$login.getResult(Cassandra.java:3489)
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
> at com.thinkaurelius.thrift.Message.invoke(Message.java:314)
> at 
> com.thinkaurelius.thrift.Message$Invocation.execute(Message.java:90)
> at 
> com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:695)
> at 
> com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:689)
> at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:112)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> The exception thrown from the 
> [code|https://github.com/apache/cassandra/blob/cassandra-2.0.16/src/java/org/apache/cassandra/auth/PasswordAuthenticator.java#L119]
> {code}
> ResultMessage.Rows rows = 
> authenticateStatement.execute(QueryState.forInternalCalls(), new 
> QueryOptions(consistencyForUser(username),
>   
>Lists.newArrayList(ByteBufferUtil.bytes(username;
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-11276) dtest failure in commitlog_test.TestCommitLog.test_compression_error

2016-03-21 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch reassigned CASSANDRA-11276:
--

Assignee: Russ Hatch  (was: DS Test Eng)

> dtest failure in commitlog_test.TestCommitLog.test_compression_error
> 
>
> Key: CASSANDRA-11276
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11276
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: Russ Hatch
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_dtest_win32/178/testReport/commitlog_test/TestCommitLog/test_compression_error
> Failed on CassCI build cassandra-3.0_dtest_win32 #178
> Intermittent failures of this test on windows, error:
> {noformat}
> 11 Feb 2016 20:00:22 [node1] Missing: ['Could not create Compression for type 
> org.apache.cassandra.io.compress.LZ5Compressor']
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8928) Add downgradesstables

2016-03-21 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204623#comment-15204623
 ] 

Yuki Morishita commented on CASSANDRA-8928:
---

Thanks Paulo, +1 for these.

> Add downgradesstables
> -
>
> Key: CASSANDRA-8928
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8928
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Tools
>Reporter: Jeremy Hanna
>Priority: Minor
>  Labels: gsoc2016, mentor
>
> As mentioned in other places such as CASSANDRA-8047 and in the wild, 
> sometimes you need to go back.  A downgrade sstables utility would be nice 
> for a lot of reasons and I don't know that supporting going back to the 
> previous major version format would be too much code since we already support 
> reading the previous version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11257) dtest failure in consistency_test.TestConsistency.short_read_test

2016-03-21 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204622#comment-15204622
 ] 

Russ Hatch commented on CASSANDRA-11257:


running 100x here to see if this repros presently 
http://cassci.datastax.com/view/Parameterized/job/parameterized_dtest_multiplexer/28/

> dtest failure in consistency_test.TestConsistency.short_read_test
> -
>
> Key: CASSANDRA-11257
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11257
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: Russ Hatch
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_offheap_dtest/279/testReport/consistency_test/TestConsistency/short_read_test
> Failed on CassCI build cassandra-2.2_offheap_dtest #279
> from offheap tests job, 2 flaps in recent history:
> {noformat}
> code=1000 [Unavailable exception] message="Cannot achieve consistency level 
> QUORUM" info={'required_replicas': 2, 'alive_replicas': 1, 'consistency': 
> 'QUORUM'}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
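The quorum arithmetic behind that Unavailable error is simple to sketch (a generic illustration, not dtest or project code; the replication factor of 3 is assumed here for the example):

```python
# Generic illustration of Cassandra's QUORUM requirement (not project
# code): QUORUM needs floor(RF / 2) + 1 live replicas, so with
# required_replicas=2, a single alive replica cannot satisfy the level.
def quorum_replicas(replication_factor: int) -> int:
    return replication_factor // 2 + 1

print(quorum_replicas(3))           # -> 2
print(quorum_replicas(3) <= 1)      # -> False, i.e. Unavailable with 1 alive
```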


[jira] [Commented] (CASSANDRA-11256) dtest failure in repair_test.TestRepair.simple_sequential_repair_test

2016-03-21 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204589#comment-15204589
 ] 

Russ Hatch commented on CASSANDRA-11256:


running a 100x job to see if this repros at all 
http://cassci.datastax.com/view/Parameterized/job/parameterized_dtest_multiplexer/27/

> dtest failure in repair_test.TestRepair.simple_sequential_repair_test
> -
>
> Key: CASSANDRA-11256
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11256
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: Russ Hatch
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_offheap_dtest/275/testReport/repair_test/TestRepair/simple_sequential_repair_test
> Failed on CassCI build cassandra-2.2_offheap_dtest #275



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-11322) dtest failure in compaction_test.TestCompaction_with_LeveledCompactionStrategy.data_size_test

2016-03-21 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson resolved CASSANDRA-11322.
-
Resolution: Cannot Reproduce

Nope, nothing obvious or unobvious. Removed the annotation.

> dtest failure in 
> compaction_test.TestCompaction_with_LeveledCompactionStrategy.data_size_test
> -
>
> Key: CASSANDRA-11322
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11322
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: Philip Thompson
>  Labels: dtest
>
> This dtest has failed once:
> http://cassci.datastax.com/job/cassandra-3.0_dtest/597/testReport/compaction_test/TestCompaction_with_LeveledCompactionStrategy/data_size_test
> Here's the history for this test:
> http://cassci.datastax.com/job/cassandra-3.0_dtest/597/testReport/compaction_test/TestCompaction_with_LeveledCompactionStrategy/data_size_test/history/
> It failed at this line:
> https://github.com/riptano/cassandra-dtest/blob/88a74d7/compaction_test.py#L86
> Basically, it ran compaction over the default stress tables, but timed out 
> waiting to see the line {{Compacted }} in the log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11322) dtest failure in compaction_test.TestCompaction_with_LeveledCompactionStrategy.data_size_test

2016-03-21 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204601#comment-15204601
 ] 

Jim Witschey commented on CASSANDRA-11322:
--

Ok. As long as there's nothing semi-obvious to do, like starting a cluster with 
{{wait_for_binary_proto=True}} or something, I'm +1 to closing as Cannot Repro.

> dtest failure in 
> compaction_test.TestCompaction_with_LeveledCompactionStrategy.data_size_test
> -
>
> Key: CASSANDRA-11322
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11322
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: Philip Thompson
>  Labels: dtest
>
> This dtest has failed once:
> http://cassci.datastax.com/job/cassandra-3.0_dtest/597/testReport/compaction_test/TestCompaction_with_LeveledCompactionStrategy/data_size_test
> Here's the history for this test:
> http://cassci.datastax.com/job/cassandra-3.0_dtest/597/testReport/compaction_test/TestCompaction_with_LeveledCompactionStrategy/data_size_test/history/
> It failed at this line:
> https://github.com/riptano/cassandra-dtest/blob/88a74d7/compaction_test.py#L86
> Basically, it ran compaction over the default stress tables, but timed out 
> waiting to see the line {{Compacted }} in the log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-11257) dtest failure in consistency_test.TestConsistency.short_read_test

2016-03-21 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch reassigned CASSANDRA-11257:
--

Assignee: Russ Hatch  (was: DS Test Eng)

> dtest failure in consistency_test.TestConsistency.short_read_test
> -
>
> Key: CASSANDRA-11257
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11257
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: Russ Hatch
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_offheap_dtest/279/testReport/consistency_test/TestConsistency/short_read_test
> Failed on CassCI build cassandra-2.2_offheap_dtest #279
> from offheap tests job, 2 flaps in recent history:
> {noformat}
> code=1000 [Unavailable exception] message="Cannot achieve consistency level 
> QUORUM" info={'required_replicas': 2, 'alive_replicas': 1, 'consistency': 
> 'QUORUM'}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8928) Add downgradesstables

2016-03-21 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204582#comment-15204582
 ] 

Paulo Motta commented on CASSANDRA-8928:


Overall looks good. A few comments to improve your proposal:
* We should probably limit the scope from "ma" to "la" and "ka", and leave "jb" 
as an extra if there's time. Given that "ka" is pretty much the same as "la", 
except for the file name format, then the bulk of the work would be on 
downgrading from "ma" to "la".
* I think we can divide this proposal into two major milestones: PoC and 
Productization. In the PoC milestone (midterm) you could deliver a basic 
sstabledowngrader tool supporting downgrade from "ma" to "la", but without much 
attention to the framework aspect, basically hacking the SStableScrubber tool 
and reusing existing code from previous versions to have a basic functional 
version along with tests. In the second phase, after you're more familiar 
with the problem you would refactor your initial PoC to deal with more complex 
scenarios (if there are any), and extract version-independent structures and 
interfaces to make it easier to add downgrade support for newer formats in the 
future, along with adding downgrade support for "ka", which should be easy enough 
after you have downgrade to "la" in place. With that said, you would have the 
following deliverables:
** sstabledowngrader tool with hard-coded downgrade support from "ma" to "la" 
(midterm)
** sstabledowngrader tool with extension points/flexible support to other 
sstable formats + documentation (final)
** comprehensive dtest suite for "ma" and "la" downgrade support with double 
cycle of upgrade/downgrade based on CASSANDRA-10563  (final)
* You can probably move the code reading/familiarization from the coding period 
to the community bonding period, to focus on your first deliverable in the 
coding period.

WDYT [~yukim]? Any other suggestion?

> Add downgradesstables
> -
>
> Key: CASSANDRA-8928
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8928
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Tools
>Reporter: Jeremy Hanna
>Priority: Minor
>  Labels: gsoc2016, mentor
>
> As mentioned in other places such as CASSANDRA-8047 and in the wild, 
> sometimes you need to go back.  A downgrade sstables utility would be nice 
> for a lot of reasons and I don't know that supporting going back to the 
> previous major version format would be too much code since we already support 
> reading the previous version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-11256) dtest failure in repair_test.TestRepair.simple_sequential_repair_test

2016-03-21 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch reassigned CASSANDRA-11256:
--

Assignee: Russ Hatch  (was: DS Test Eng)

> dtest failure in repair_test.TestRepair.simple_sequential_repair_test
> -
>
> Key: CASSANDRA-11256
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11256
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: Russ Hatch
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_offheap_dtest/275/testReport/repair_test/TestRepair/simple_sequential_repair_test
> Failed on CassCI build cassandra-2.2_offheap_dtest #275



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11365) Recovering failed from a single disk failure using JBOD

2016-03-21 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-11365:

Assignee: Paulo Motta

> Recovering failed from a single disk failure using JBOD
> ---
>
> Key: CASSANDRA-11365
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11365
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: cassandra 2.1.11
> jdk 1.7
>Reporter: zhaoyan
>Assignee: Paulo Motta
>
> One disk on a Cassandra node failed, so the node is down.
> I tried recovering the node by following:
> https://docs.datastax.com/en/cassandra/2.1/cassandra/operations/opsRecoverUsingJBOD.html
> but I get the following error when restarting the node:
> ERROR 02:58:00 Exception encountered during startup
> java.lang.RuntimeException: A node with address /192.168.xx.xx already 
> exists, cancelling join. Use cassandra.replace_address if you want to replace 
> this node.
> at 
> org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:543)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:788)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:720)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:611)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:387) 
> [apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:562)
>  [apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:651) 
> [apache-cassandra-2.1.11.jar:2.1.11]
> java.lang.RuntimeException: A node with address /192.168.xx.xx already 
> exists, cancelling join. Use cassandra.replace_address if you want to replace 
> this node.
> at 
> org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:543)
> at 
> org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:788)
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:720)
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:611)
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:387)
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:562)
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:651)
> Exception encountered during startup: A node with address /192.168.xx. 
> already exists, cancelling join. Use cassandra.replace_address if you want to 
> replace this node.
> INFO  02:58:00 Announcing shutdown
> INFO  02:58:02 Waiting for messaging service to quiesce



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11317) dtest failure in repair_tests.incremental_repair_test.TestIncRepair.sstable_repairedset_test

2016-03-21 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204443#comment-15204443
 ] 

Philip Thompson commented on CASSANDRA-11317:
-

Should this be set to Patch Available with [~jkni]'s patch?

> dtest failure in 
> repair_tests.incremental_repair_test.TestIncRepair.sstable_repairedset_test
> 
>
> Key: CASSANDRA-11317
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11317
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>  Labels: dtest
> Fix For: 2.2.x
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest/536/testReport/repair_tests.incremental_repair_test/TestIncRepair/sstable_repairedset_test
> Here's the failure and stack trace:
> {code}
> [' 0']
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-4WjpOf
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'start_rpc': 'true'}
> dtest: DEBUG: Repair timestamps are: [' 0', ' 0']
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File 
> "/home/automaton/cassandra-dtest/repair_tests/incremental_repair_test.py", 
> line 198, in sstable_repairedset_test
> self.assertGreaterEqual(len(uniquematches), 2, uniquematches)
>   File "/usr/lib/python2.7/unittest/case.py", line 948, in assertGreaterEqual
> self.fail(self._formatMessage(msg, standardMsg))
>   File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
> raise self.failureException(msg)
> "[' 0']\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-4WjpOf\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'start_rpc': 'true'}\ndtest: DEBUG: Repair timestamps are: [' 0', ' 
> 0']\n- >> end captured logging << -"
> {code}
> This has failed in this way on CassCI build cassandra-2.2_dtest 536-9.
> [~philipthompson] Could you have a first look at this? You had a recent look 
> at this test in CASSANDRA-11220.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-11317) dtest failure in repair_tests.incremental_repair_test.TestIncRepair.sstable_repairedset_test

2016-03-21 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton reassigned CASSANDRA-11317:
-

Assignee: Joel Knighton

> dtest failure in 
> repair_tests.incremental_repair_test.TestIncRepair.sstable_repairedset_test
> 
>
> Key: CASSANDRA-11317
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11317
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Assignee: Joel Knighton
>  Labels: dtest
> Fix For: 2.2.x
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest/536/testReport/repair_tests.incremental_repair_test/TestIncRepair/sstable_repairedset_test
> Here's the failure and stack trace:
> {code}
> [' 0']
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-4WjpOf
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'start_rpc': 'true'}
> dtest: DEBUG: Repair timestamps are: [' 0', ' 0']
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File 
> "/home/automaton/cassandra-dtest/repair_tests/incremental_repair_test.py", 
> line 198, in sstable_repairedset_test
> self.assertGreaterEqual(len(uniquematches), 2, uniquematches)
>   File "/usr/lib/python2.7/unittest/case.py", line 948, in assertGreaterEqual
> self.fail(self._formatMessage(msg, standardMsg))
>   File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
> raise self.failureException(msg)
> "[' 0']\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-4WjpOf\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'start_rpc': 'true'}\ndtest: DEBUG: Repair timestamps are: [' 0', ' 
> 0']\n- >> end captured logging << -"
> {code}
> This has failed in this way on CassCI build cassandra-2.2_dtest 536-9.
> [~philipthompson] Could you have a first look at this? You had a recent look 
> at this test in CASSANDRA-11220.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11317) dtest failure in repair_tests.incremental_repair_test.TestIncRepair.sstable_repairedset_test

2016-03-21 Thread Joel Knighton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204449#comment-15204449
 ] 

Joel Knighton commented on CASSANDRA-11317:
---

Not quite yet - [CASSANDRA-10412] made similar changes in a bunch of other 
offline tools so I'd like to do a spot check that similar problems aren't 
present elsewhere.

I'll assign to myself and submit a patch officially once I've checked the other 
tools.

> dtest failure in 
> repair_tests.incremental_repair_test.TestIncRepair.sstable_repairedset_test
> 
>
> Key: CASSANDRA-11317
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11317
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>  Labels: dtest
> Fix For: 2.2.x
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest/536/testReport/repair_tests.incremental_repair_test/TestIncRepair/sstable_repairedset_test
> Here's the failure and stack trace:
> {code}
> [' 0']
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-4WjpOf
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'start_rpc': 'true'}
> dtest: DEBUG: Repair timestamps are: [' 0', ' 0']
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File 
> "/home/automaton/cassandra-dtest/repair_tests/incremental_repair_test.py", 
> line 198, in sstable_repairedset_test
> self.assertGreaterEqual(len(uniquematches), 2, uniquematches)
>   File "/usr/lib/python2.7/unittest/case.py", line 948, in assertGreaterEqual
> self.fail(self._formatMessage(msg, standardMsg))
>   File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
> raise self.failureException(msg)
> "[' 0']\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-4WjpOf\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'start_rpc': 'true'}\ndtest: DEBUG: Repair timestamps are: [' 0', ' 
> 0']\n- >> end captured logging << -"
> {code}
> This has failed in this way on CassCI build cassandra-2.2_dtest 536-9.
> [~philipthompson] Could you have a first look at this? You had a recent look 
> at this test in CASSANDRA-11220.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11392) Add auto import java.util for UDF code block

2016-03-21 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11392:
-
Priority: Minor  (was: Major)

> Add auto import java.util for UDF code block
> 
>
> Key: CASSANDRA-11392
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11392
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
> Environment: C* 3.4
>Reporter: DOAN DuyHai
>Priority: Minor
>
> Right now, when creating Java source code for UDF, since we cannot define 
> import, we need to use fully qualified class name, ex:
> {noformat}
> CREATE FUNCTION toSet(li list<text>)
> CALLED ON NULL INPUT
> RETURNS text
> LANGUAGE java
> AS $$
> java.util.Set<String> set = new java.util.HashSet<String>();
> for(String txt: li) {
> set.add(txt);
> }
> return set;
> $$;
> {noformat}
> Classes from the {{java.util}} package are so commonly used that it would make 
> developers' lives easier to automatically import {{java.util.*}} in the 
> {{JavaUDF}} base class, so that developers don't need to use the FQCN for common 
> classes.
>  The only drawback I can see is the risk of class name clash but since:
> 1. it is not allowed to create new classes
> 2. classes that can be used in UDFs are restricted
>  I don't see serious class name clash issues either
> [~snazy] WDYT ?
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11388) FailureDetector.java:456 - Ignoring interval time of

2016-03-21 Thread Joel Knighton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204425#comment-15204425
 ] 

Joel Knighton commented on CASSANDRA-11388:
---

Are you seeing any signs of abnormal behavior in the cluster? Do similar lines 
show up in the debug.log to an extreme degree?

These log messages indicate that the node declined to add an interval time to 
the failure detector history for an endpoint because the interval was above some 
maximum threshold. This could be due to local GC pauses or other causes. These log 
messages are flagged DEBUG because they may be helpful in tracking down an 
issue but don't indicate a problem on their own.
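Loosely, the behavior described above can be sketched as follows (an illustrative model only, not Cassandra's actual FailureDetector code; the threshold value is made up for the example):

```python
from collections import deque

# Illustrative sketch: heartbeat inter-arrival times feed a bounded
# history window, and intervals above a maximum threshold are skipped
# and logged rather than recorded, so outliers (e.g. GC pauses) don't
# skew the statistics. The threshold here is an assumed value.
MAX_INTERVAL_NANOS = 2_000_000_000

class ArrivalWindow:
    def __init__(self, size=1000):
        self.intervals = deque(maxlen=size)

    def add(self, interval_nanos, endpoint):
        if interval_nanos > MAX_INTERVAL_NANOS:
            # corresponds in spirit to the DEBUG line in the report
            print(f"Ignoring interval time of {interval_nanos} for {endpoint}")
        else:
            self.intervals.append(interval_nanos)

w = ArrivalWindow()
w.add(500_000_000, "/192.168.1.153")    # normal heartbeat gap: recorded
w.add(3_069_815_316, "/192.168.1.153")  # outlier, e.g. a GC pause: ignored
print(len(w.intervals))  # -> 1
```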

> FailureDetector.java:456 - Ignoring interval time of
> 
>
> Key: CASSANDRA-11388
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11388
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Cassandra 2.2.4
>Reporter: Relish Chackochan
>Priority: Minor
>
> We have a Cassandra cluster of 8 nodes on 2.2.4 using jdk1.8.0_65 (RHEL 6.5 
> 64-bit), and I am seeing the following messages in the Cassandra debug.log file. 
> All the nodes are UP and running according to
> "nodetool status". NTP is configured on all nodes and time is syncing well.
> DEBUG [GossipStage:1] 2016-03-21 06:45:30,700 FailureDetector.java:456 - 
> Ignoring interval time of 3069815316 for /192.168.1.153
> DEBUG [GossipStage:1] 2016-03-21 06:45:30,700 FailureDetector.java:456 - 
> Ignoring interval time of 2076119905 for /192.168.1.135
> DEBUG [GossipStage:1] 2016-03-21 06:45:30,700 FailureDetector.java:456 - 
> Ignoring interval time of 2683887772 for /192.168.1.151



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11392) Add auto import java.util for UDF code block

2016-03-21 Thread DOAN DuyHai (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DOAN DuyHai updated CASSANDRA-11392:

Summary: Add auto import java.util for UDF code block  (was: Add IMPORT 
block or auto import java.util for UDF code block)

> Add auto import java.util for UDF code block
> 
>
> Key: CASSANDRA-11392
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11392
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
> Environment: C* 3.4
>Reporter: DOAN DuyHai
>
> Right now, when creating Java source code for UDF, since we cannot define 
> import, we need to use fully qualified class name, ex:
> {noformat}
> CREATE FUNCTION toSet(li list<text>)
> CALLED ON NULL INPUT
> RETURNS text
> LANGUAGE java
> AS $$
> java.util.Set<String> set = new java.util.HashSet<String>();
> for(String txt: li) {
> set.add(txt);
> }
> return set;
> $$;
> {noformat}
> Classes from the {{java.util}} package are so commonly used that it would make 
> developers' lives easier to automatically import {{java.util.*}} in the 
> {{JavaUDF}} base class, so that developers don't need to use the FQCN for common 
> classes.
>  The only drawback I can see is the risk of class name clash but since:
> 1. it is not allow to create new class
> 2. classes that can be used in UDF are restricted
>  I don't see serious clash name issues either
> [~snazy] WDYT ?
>  





[jira] [Created] (CASSANDRA-11392) Add IMPORT block or auto import java.util for UDF code block

2016-03-21 Thread DOAN DuyHai (JIRA)
DOAN DuyHai created CASSANDRA-11392:
---

 Summary: Add IMPORT block or auto import java.util for UDF code 
block
 Key: CASSANDRA-11392
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11392
 Project: Cassandra
  Issue Type: Improvement
  Components: CQL
 Environment: C* 3.4
Reporter: DOAN DuyHai


Right now, when creating Java source code for a UDF, since we cannot define 
imports, we need to use fully qualified class names, e.g.:


{noformat}
CREATE FUNCTION toSet(li list<text>)
CALLED ON NULL INPUT
RETURNS set<text>
LANGUAGE java
AS $$
java.util.Set<String> set = new java.util.HashSet<String>();
for(String txt: li) {
set.add(txt);
}
return set;
$$;
{noformat}

Classes from the {{java.util}} package are so commonly used that auto-importing 
{{java.util.*}} in the {{JavaUDF}} base class would make developers' lives 
easier, since they would no longer need FQCNs for common classes.

 The only drawback I can see is the risk of class name clashes, but since:

1. it is not allowed to create new classes
2. the classes that can be used in a UDF are restricted

 I don't see any serious name clash issues either

[~snazy] WDYT ?
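For illustration, here is what the UDF body above would look like as plain Java if {{java.util.*}} were auto-imported as proposed. {{ToSetSketch}} is a hypothetical wrapper class for demonstration, not Cassandra's generated UDF class:

```java
import java.util.*;

// Sketch of the UDF body as plain Java, assuming java.util.* were
// auto-imported by the JavaUDF base class as this ticket proposes.
// ToSetSketch is a hypothetical wrapper, not Cassandra-generated code.
public class ToSetSketch {
    static Set<String> toSet(List<String> li) {
        Set<String> set = new HashSet<>();  // no FQCN needed
        for (String txt : li) {
            set.add(txt);
        }
        return set;
    }

    public static void main(String[] args) {
        // Duplicates collapse when converting the list to a set.
        System.out.println(toSet(Arrays.asList("a", "b", "a")).size());  // prints 2
    }
}
```

With the auto-import in place, the UDF body would shrink to the two unqualified lines inside {{toSet}}.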
 





[jira] [Created] (CASSANDRA-11391) "class declared as inner class" error when using UDF

2016-03-21 Thread DOAN DuyHai (JIRA)
DOAN DuyHai created CASSANDRA-11391:
---

 Summary: "class declared as inner class" error when using UDF
 Key: CASSANDRA-11391
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11391
 Project: Cassandra
  Issue Type: Bug
  Components: CQL
 Environment: C* 3.4
Reporter: DOAN DuyHai
Priority: Critical


{noformat}
cqlsh:music> CREATE FUNCTION testMapEntry(my_map map<text, text>)
 ... CALLED ON NULL INPUT
 ... RETURNS text
 ... LANGUAGE java
 ... AS $$
 ... String buffer = "";
 ... for(java.util.Map.Entry<String, String> entry: my_map.entrySet()) {
 ... buffer = buffer + entry.getKey() + ": " + entry.getValue() + ", ";
 ... }
 ... return buffer;
 ... $$;
InvalidRequest: code=2200 [Invalid query] 
message="Could not compile function 'music.testmapentry' from Java source: 
org.apache.cassandra.exceptions.InvalidRequestException: 
Java UDF validation failed: [class declared as inner class]"
{noformat}

When I compile the equivalent source code and disassemble the resulting bytecode, below is the result:

{noformat}
  public java.lang.String test(java.util.Map);
Code:
   0: ldc   #2  // String
   2: astore_2
   3: aload_1
   4: invokeinterface #3,  1// InterfaceMethod 
java/util/Map.entrySet:()Ljava/util/Set;
   9: astore_3
  10: aload_3
  11: invokeinterface #4,  1// InterfaceMethod 
java/util/Set.iterator:()Ljava/util/Iterator;
  16: astore        4
  18: aload 4
  20: invokeinterface #5,  1// InterfaceMethod 
java/util/Iterator.hasNext:()Z
  25: ifeq  94
  28: aload 4
  30: invokeinterface #6,  1// InterfaceMethod 
java/util/Iterator.next:()Ljava/lang/Object;
  35: checkcast #7  // class java/util/Map$Entry
  38: astore        5
  40: new   #8  // class java/lang/StringBuilder
  43: dup
  44: invokespecial #9  // Method 
java/lang/StringBuilder."":()V
  47: aload_2
  48: invokevirtual #10 // Method 
java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
  51: aload 5
  53: invokeinterface #11,  1   // InterfaceMethod 
java/util/Map$Entry.getKey:()Ljava/lang/Object;
  58: checkcast #12 // class java/lang/String
  61: invokevirtual #10 // Method 
java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
  64: ldc   #13 // String :
  66: invokevirtual #10 // Method 
java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
  69: aload 5
  71: invokeinterface #14,  1   // InterfaceMethod 
java/util/Map$Entry.getValue:()Ljava/lang/Object;
  76: checkcast #12 // class java/lang/String
  79: invokevirtual #10 // Method 
java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
  82: ldc   #15 // String ,
  84: invokevirtual #10 // Method 
java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
  87: invokevirtual #16 // Method 
java/lang/StringBuilder.toString:()Ljava/lang/String;
  90: astore_2
  91: goto  18
  94: aload_2
  95: areturn
{noformat}

 There is nothing that could trigger inner class creation ...





[jira] [Updated] (CASSANDRA-11390) Too big MerkleTrees allocated during repair

2016-03-21 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-11390:

Reviewer: Marcus Olsson
  Status: Patch Available  (was: Open)

patch here: https://github.com/krummas/cassandra/commits/marcuse/merkletree

tests:
https://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-merkletree-dtest/
https://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-merkletree-testall/

Can you review [~molsson]?

> Too big MerkleTrees allocated during repair
> ---
>
> Key: CASSANDRA-11390
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11390
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 3.0.x, 3.x
>
>
> Since CASSANDRA-5220 we create one merkle tree per range, but each of those 
> trees is allocated to hold all the keys on the node, taking up too much memory





[jira] [Created] (CASSANDRA-11390) Too big MerkleTrees allocated during repair

2016-03-21 Thread Marcus Eriksson (JIRA)
Marcus Eriksson created CASSANDRA-11390:
---

 Summary: Too big MerkleTrees allocated during repair
 Key: CASSANDRA-11390
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11390
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
 Fix For: 3.0.x, 3.x


Since CASSANDRA-5220 we create one merkle tree per range, but each of those 
trees is allocated to hold all the keys on the node, taking up too much memory
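As a back-of-the-envelope illustration of why this wastes memory (this is illustrative arithmetic only, not the actual patch, and the key counts, range count, and depth cap below are made-up values): a Merkle tree with depth d has 2^d leaves, so a tree that covers only its own range needs a depth near log2(keys in that range), not log2(keys on the whole node).

```java
// Illustrative sizing arithmetic only, not Cassandra's actual code.
public class MerkleSizing {
    // Smallest depth whose leaf count (2^d) covers the estimated key
    // count, capped at a hypothetical maximum depth.
    static int neededDepth(long estimatedKeys, int maxDepth) {
        int d = 0;
        while (d < maxDepth && (1L << d) < estimatedKeys) {
            d++;
        }
        return d;
    }

    public static void main(String[] args) {
        long keysOnNode = 100_000_000L;  // assumed node-wide key count
        int ranges = 256;                // e.g. number of vnode ranges
        // Sizing every tree for the whole node (the behaviour this bug describes):
        System.out.println(neededDepth(keysOnNode, 30));            // 27
        // Sizing each tree for only its own slice of the data:
        System.out.println(neededDepth(keysOnNode / ranges, 30));   // 19
    }
}
```

With 256 ranges, each per-node-sized tree is about 2^8 times larger than necessary, which multiplied across all trees explains the memory blow-up.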







[jira] [Resolved] (CASSANDRA-10912) resumable_bootstrap_test dtest flaps

2016-03-21 Thread Jim Witschey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Witschey resolved CASSANDRA-10912.
--
Resolution: Fixed

> resumable_bootstrap_test dtest flaps
> 
>
> Key: CASSANDRA-10912
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10912
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Yuki Morishita
>  Labels: dtest
> Fix For: 3.0.x
>
>
> {{bootstrap_test.py:TestBootstrap.resumable_bootstrap_test}} test flaps when 
> a node fails to start listening for connections via CQL:
> {code}
> 21 Dec 2015 10:07:48 [node3] Missing: ['Starting listening for CQL clients']:
> {code}
> I've seen it on 2.2 HEAD:
> http://cassci.datastax.com/job/cassandra-2.2_dtest/449/testReport/bootstrap_test/TestBootstrap/resumable_bootstrap_test/history/
> and 3.0 HEAD:
> http://cassci.datastax.com/job/cassandra-3.0_dtest/444/testReport/junit/bootstrap_test/TestBootstrap/resumable_bootstrap_test/
> and trunk:
> http://cassci.datastax.com/job/trunk_dtest/838/testReport/junit/bootstrap_test/TestBootstrap/resumable_bootstrap_test/





[jira] [Commented] (CASSANDRA-10912) resumable_bootstrap_test dtest flaps

2016-03-21 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204355#comment-15204355
 ] 

Jim Witschey commented on CASSANDRA-10912:
--

Ok, cool; I'll resolve this. Thanks, all.

> resumable_bootstrap_test dtest flaps
> 
>
> Key: CASSANDRA-10912
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10912
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Yuki Morishita
>  Labels: dtest
> Fix For: 3.0.x
>
>
> {{bootstrap_test.py:TestBootstrap.resumable_bootstrap_test}} test flaps when 
> a node fails to start listening for connections via CQL:
> {code}
> 21 Dec 2015 10:07:48 [node3] Missing: ['Starting listening for CQL clients']:
> {code}
> I've seen it on 2.2 HEAD:
> http://cassci.datastax.com/job/cassandra-2.2_dtest/449/testReport/bootstrap_test/TestBootstrap/resumable_bootstrap_test/history/
> and 3.0 HEAD:
> http://cassci.datastax.com/job/cassandra-3.0_dtest/444/testReport/junit/bootstrap_test/TestBootstrap/resumable_bootstrap_test/
> and trunk:
> http://cassci.datastax.com/job/trunk_dtest/838/testReport/junit/bootstrap_test/TestBootstrap/resumable_bootstrap_test/





[jira] [Commented] (CASSANDRA-11310) Allow filtering on clustering columns for queries without secondary indexes

2016-03-21 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204332#comment-15204332
 ] 

Benjamin Lerer commented on CASSANDRA-11310:


I agree that things should be simpler on top of my branch, feel free to use it. 

> Allow filtering on clustering columns for queries without secondary indexes
> ---
>
> Key: CASSANDRA-11310
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11310
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Alex Petrov
>  Labels: doc-impacting
> Fix For: 3.x
>
>
> Since CASSANDRA-6377 queries without index filtering non-primary key columns 
> are fully supported.
> It makes sense to also support filtering on clustering-columns.
> {code}
> CREATE TABLE emp_table2 (
> empID int,
> firstname text,
> lastname text,
> b_mon text,
> b_day text,
> b_yr text,
> PRIMARY KEY (empID, b_yr, b_mon, b_day));
> SELECT b_mon,b_day,b_yr,firstname,lastname FROM emp_table2
> WHERE b_mon='oct' ALLOW FILTERING;
> {code}





[jira] [Commented] (CASSANDRA-11354) PrimaryKeyRestrictionSet should be refactored

2016-03-21 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204301#comment-15204301
 ] 

Benjamin Lerer commented on CASSANDRA-11354:


{quote}Is AbstractSingleRestriction still required as an abstract class? It's 
also possible to implement the false returning methods on the interface 
directly. {quote}
I removed the {{AbstractSingleRestriction}} class and moved the 
{{reverseBoundIfNeeded}} method into the {{Bound}} class.

{quote}There are two places where clustering column restrictions are 
"validated"{quote}
The main reason is that all the restrictions are needed for the second 
validation. When merging restrictions we have no guarantee of the order in 
which the restrictions are provided: it could be {{SELECT * FROM myTable 
WHERE clustering1 = 1 AND clustering2 = 2;}} or {{SELECT * FROM myTable WHERE 
clustering2 = 2 AND clustering1 = 1;}}

> PrimaryKeyRestrictionSet should be refactored
> -
>
> Key: CASSANDRA-11354
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11354
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
>
> While reviewing CASSANDRA-11310 I realized that the code of 
> {{PrimaryKeyRestrictionSet}} was really confusing.
> The main 2 issues are:
> * the fact that it is used for partition keys and clustering columns 
> restrictions whereas those types of column required different processing
> * the {{isEQ}}, {{isSlice}}, {{isIN}} and {{isContains}} methods should not 
> be there as the set of restrictions might not match any of those categories 
> when secondary indexes are used.





[jira] [Commented] (CASSANDRA-11053) COPY FROM on large datasets: fix progress report and debug performance

2016-03-21 Thread Adam Holmberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204297#comment-15204297
 ] 

Adam Holmberg commented on CASSANDRA-11053:
---

A few comments on review:
 
It is not clear to me why we're using deque+Event and not a 
[Queue|https://docs.python.org/2/library/queue.html].

We may want to 
[daemonize|https://docs.python.org/2/library/threading.html#threading.Thread.daemon]
 the feeder thread to avoid hanging on exit while the thread continues forever.

Are there any recoverable exceptions that would warrant exception handling in 
that thread body?

> COPY FROM on large datasets: fix progress report and debug performance
> --
>
> Key: CASSANDRA-11053
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11053
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
>  Labels: doc-impacting
> Fix For: 2.1.14, 2.2.6, 3.0.5, 3.5
>
> Attachments: copy_from_large_benchmark.txt, 
> copy_from_large_benchmark_2.txt, parent_profile.txt, parent_profile_2.txt, 
> worker_profiles.txt, worker_profiles_2.txt
>
>
> h5. Description
> Running COPY from on a large dataset (20G divided in 20M records) revealed 
> two issues:
> * The progress report is incorrect, it is very slow until almost the end of 
> the test at which point it catches up extremely quickly.
> * The performance in rows per second is similar to that of smaller tests run 
> on a smaller local cluster (approx. 35,000 rows per second). As a comparison, 
> cassandra-stress manages 50,000 rows per second under the same set-up, i.e. 
> roughly 1.5 times faster. 
> See attached file _copy_from_large_benchmark.txt_ for the benchmark details.
> h5. Doc-impacting changes to COPY FROM options
> * A new option was added: PREPAREDSTATEMENTS - it indicates if prepared 
> statements should be used; it defaults to true.
> * The default value of CHUNKSIZE changed from 1000 to 5000.
> * The default value of MINBATCHSIZE changed from 2 to 10.
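Based on the doc-impacting changes listed above, a cqlsh invocation spelling out the new defaults explicitly might look like the following sketch (the keyspace, table, and file names are placeholders, not from the ticket):

```sql
-- Hypothetical example; ks.my_table and data.csv are placeholders.
COPY ks.my_table FROM 'data.csv'
WITH PREPAREDSTATEMENTS = 'true'  -- new option, defaults to true
 AND CHUNKSIZE = 5000             -- new default (was 1000)
 AND MINBATCHSIZE = 10;           -- new default (was 2)
```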





[jira] [Comment Edited] (CASSANDRA-9861) When forcibly exiting due to OOM, we should produce a heap dump

2016-03-21 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204111#comment-15204111
 ] 

Benjamin Lerer edited comment on CASSANDRA-9861 at 3/21/16 2:02 PM:


I guess the simplest way would be to add the 
{{-XX:+HeapDumpOnOutOfMemoryError}}  command line argument to our startup 
scripts.
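Concretely, in a startup script such as cassandra-env.sh this would be a small addition along these lines (the dump path shown is only an example, not a project decision):

```shell
# Sketch: produce a heap dump on OOM; the dump path is only an example.
JVM_OPTS="$JVM_OPTS -XX:+HeapDumpOnOutOfMemoryError"
JVM_OPTS="$JVM_OPTS -XX:HeapDumpPath=/var/lib/cassandra/java_heapdump.hprof"
```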


was (Author: blerer):
I guess the simplest way would be to use the 
{{-XX:+HeapDumpOnOutOfMemoryError}}  command line argument to our startup 
scripts.

> When forcibly exiting due to OOM, we should produce a heap dump
> ---
>
> Key: CASSANDRA-9861
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9861
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benedict
>Priority: Minor
>  Labels: lhf
> Fix For: 2.2.x
>
>
> CASSANDRA-7507 introduced earlier termination on encountering an OOM, due to 
> lack of certainty about system state. However a side effect is that we never 
> produce heap dumps on OOM. We should ideally try to produce one forcibly 
> before exiting.





[jira] [Commented] (CASSANDRA-10404) Node to Node encryption transitional mode

2016-03-21 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204187#comment-15204187
 ] 

Cyril Scetbon commented on CASSANDRA-10404:
---

[~jasobrown] Ok, it would be great to know whether others have the bandwidth (I 
can't check it myself), so we can plan for it. 

> Node to Node encryption transitional mode
> -
>
> Key: CASSANDRA-10404
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10404
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Tom Lewis
>Assignee: Jason Brown
>
> Create a transitional mode for encryption that allows encrypted and 
> unencrypted traffic node-to-node during a change over to encryption from 
> unencrypted. This alleviates downtime during the switch.
>  This is similar to https://issues.apache.org/jira/browse/CASSANDRA-8803 
> which is intended for client-to-node





[jira] [Commented] (CASSANDRA-10404) Node to Node encryption transitional mode

2016-03-21 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204146#comment-15204146
 ] 

Jason Brown commented on CASSANDRA-10404:
-

[~cscetbon] I honestly don't know how much work is involved in implementing it 
in the current code base, but I suspect it's not as trivial as doing it with 
netty (where we already have a working example). WRT timing, the netty changes 
require a slight modification to the internode messaging protocol, and hence 
need to fall on a major version update. As for upgrading the current system, I 
don't have the bandwidth to get to it within the next few months; others could 
try to knock it out, however.

> Node to Node encryption transitional mode
> -
>
> Key: CASSANDRA-10404
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10404
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Tom Lewis
>Assignee: Jason Brown
>
> Create a transitional mode for encryption that allows encrypted and 
> unencrypted traffic node-to-node during a change over to encryption from 
> unencrypted. This alleviates downtime during the switch.
>  This is similar to https://issues.apache.org/jira/browse/CASSANDRA-8803 
> which is intended for client-to-node





[jira] [Commented] (CASSANDRA-11383) SASI index build leads to massive OOM

2016-03-21 Thread DOAN DuyHai (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204144#comment-15204144
 ] 

DOAN DuyHai commented on CASSANDRA-11383:
-

[~xedin]

 Ok last update from testing:

 - LCS 1Gb max_sstable_size
 - only PREFIX index modes

 The cluster is running fine with index build. I can even build multiple 
indices at the same time

 If you decide to remove {{SPARSE}} mode, how will SASI deal with truly *sparse* 
numerical values (like the index on {{created_at}} in the example)? Or does 
SASI auto-detect sparseness and adapt its data structure?

> SASI index build leads to massive OOM
> -
>
> Key: CASSANDRA-11383
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11383
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: C* 3.4
>Reporter: DOAN DuyHai
> Attachments: CASSANDRA-11383.patch, 
> SASI_Index_build_LCS_1G_Max_SSTable_Size_logs.tar.gz, 
> new_system_log_CMS_8GB_OOM.log, system.log_sasi_build_oom
>
>
> 13 bare metal machines
> - 6 cores CPU (12 HT)
> - 64Gb RAM
> - 4 SSD in RAID0
>  JVM settings:
> - G1 GC
> - Xms32G, Xmx32G
> Data set:
>  - ≈ 100Gb/per node
>  - 1.3 Tb cluster-wide
>  - ≈ 20Gb for all SASI indices
> C* settings:
> - concurrent_compactors: 1
> - compaction_throughput_mb_per_sec: 256
> - memtable_heap_space_in_mb: 2048
> - memtable_offheap_space_in_mb: 2048
> I created 9 SASI indices
>  - 8 indices with text field, NonTokenizingAnalyser,  PREFIX mode, 
> case-insensitive
>  - 1 index with numeric field, SPARSE mode
>  After a while, the nodes just gone OOM.
>  I attach log files. You can see a lot of GC happening while index segments 
> are flush to disk. At some point the node OOM ...
> /cc [~xedin]





[jira] [Created] (CASSANDRA-11389) Case sensitive in LIKE query although index created with false

2016-03-21 Thread Alon Levi (JIRA)
Alon Levi created CASSANDRA-11389:
-

 Summary: Case sensitive in LIKE query although index created with 
false
 Key: CASSANDRA-11389
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11389
 Project: Cassandra
  Issue Type: Bug
  Components: sasi
Reporter: Alon Levi
Priority: Minor
 Fix For: 3.4


I created an index on the user's first name as follows: 

CREATE CUSTOM INDEX ON users (first_name) USING 
'org.apache.cassandra.index.sasi.SASIIndex'
with options = {
'mode' : 'CONTAINS',
'case_sensitive' : 'false'
};

This is the data I have in my table:

 user_id                              | first_name | last_name
--------------------------------------+------------+-----------
 daa312ae-ecdf-4eb4-b6e9-206e33e5ca24 |     Shlomo |     Cohen
 ab38ce9d-2823-4e6a-994f-7783953baef1 |       Elad |  Karakuli
 5e8371a7-3ed9-479f-9e4b-e4a07c750b12 |       Alon |      Levi
 ae85cdc0-5eb7-4f08-8e42-2abd89e327ed |        Gil |     Elias

Although I specified the option 'case_sensitive' : 'false',
when I run this query:

select user_id, first_name from users where first_name LIKE '%shl%';

The query returns no results.
However, when I run this query:

select user_id, first_name from users where first_name LIKE '%Shl%';

The query returns the right results,
and the strangest thing is when I run this query:

select user_id, first_name from users where first_name LIKE 'shl%';

suddenly the query is no longer case sensitive and the results are fine.





[jira] [Commented] (CASSANDRA-10404) Node to Node encryption transitional mode

2016-03-21 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204120#comment-15204120
 ] 

Cyril Scetbon commented on CASSANDRA-10404:
---

[~jasobrown] :( How much work is needed to implement it? Why not before 4.0? 
Because of the tick-tock development cycle? I could understand that: since 
CASSANDRA-8457 changes the network code, it would be easier to do it afterwards.

> Node to Node encryption transitional mode
> -
>
> Key: CASSANDRA-10404
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10404
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Tom Lewis
>Assignee: Jason Brown
>
> Create a transitional mode for encryption that allows encrypted and 
> unencrypted traffic node-to-node during a change over to encryption from 
> unencrypted. This alleviates downtime during the switch.
>  This is similar to https://issues.apache.org/jira/browse/CASSANDRA-8803 
> which is intended for client-to-node





[jira] [Commented] (CASSANDRA-9861) When forcibly exiting due to OOM, we should produce a heap dump

2016-03-21 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204111#comment-15204111
 ] 

Benjamin Lerer commented on CASSANDRA-9861:
---

I guess the simplest way would be to use the 
{{-XX:+HeapDumpOnOutOfMemoryError}}  command line argument to our startup 
scripts.

> When forcibly exiting due to OOM, we should produce a heap dump
> ---
>
> Key: CASSANDRA-9861
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9861
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benedict
>Priority: Minor
>  Labels: lhf
> Fix For: 2.2.x
>
>
> CASSANDRA-7507 introduced earlier termination on encountering an OOM, due to 
> lack of certainty about system state. However a side effect is that we never 
> produce heap dumps on OOM. We should ideally try to produce one forcibly 
> before exiting.





[jira] [Commented] (CASSANDRA-10091) Integrated JMX authn & authz

2016-03-21 Thread Jan Karlsson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204083#comment-15204083
 ] 

Jan Karlsson commented on CASSANDRA-10091:
--

Great that you like the patch! I am really excited to get this in!

We have already created some dtests for this which can be found 
[here|https://github.com/beobal/cassandra-dtest/commits/10091].

I could take a look at the comments next week unless you want to take this 
[~beobal]?

> Integrated JMX authn & authz
> 
>
> Key: CASSANDRA-10091
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10091
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jan Karlsson
>Assignee: Sam Tunnicliffe
>Priority: Minor
> Fix For: 3.x
>
>
> It would be useful to authenticate with JMX through Cassandra's internal 
> authentication. This would reduce the overhead of keeping passwords in files 
> on the machine and would consolidate passwords to one location. It would also 
> allow the possibility to handle JMX permissions in Cassandra.
> It could be done by creating our own JMX server and setting custom classes 
> for the authenticator and authorizer. We could then add some parameters where 
> the user could specify what authenticator and authorizer to use in case they 
> want to make their own.
> This could also be done by creating a premain method which creates a jmx 
> server. This would give us the feature without changing the Cassandra code 
> itself. However I believe this would be a good feature to have in Cassandra.
> I am currently working on a solution which creates a JMX server and uses a 
> custom authenticator and authorizer. It is currently built as a premain, 
> however it would be great if we could put this in Cassandra instead.





[jira] [Commented] (CASSANDRA-6788) Race condition silently kills thrift server

2016-03-21 Thread Daniel Pinyol (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204079#comment-15204079
 ] 

Daniel Pinyol commented on CASSANDRA-6788:
--

With both Cassandra 1.2.19 and 2.2.5 (which should contain the patch) I 
experience a similar problem, on both OS X and Linux. I use Thrift with 
scale7-pelops 1.3-1.1.x. This is my pseudocode; I use dynamic columns.
{noformat}
for (column = 1..1000)
{
  for (value = 1..25)
  {
    write("CF1", "key1", column, writtenValue);
    readValue = read("CF1", "key1", column);
    // sometimes readValue != writtenValue here; once this happens,
    // sleeping and reading again does not help
  }
}
{noformat}
The only ways I found to avoid the problem are:
* inserting a sleep (of any duration) right after the put
* replacing Thrift with CQL
* this sounds crazy, but if each value contains the previous one as a prefix (1, 
12, 123, 1234...) it never fails 

> Race condition silently kills thrift server
> ---
>
> Key: CASSANDRA-6788
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6788
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Christian Rolf
>Assignee: Christian Rolf
> Fix For: 1.2.17, 2.0.7, 2.1 beta2
>
> Attachments: 6788-v2.txt, 6788-v3.txt, 6793-v3-rebased.txt, 
> race_patch.diff
>
>
> There's a race condition in CustomTThreadPoolServer that can cause the thrift 
> server to silently stop listening for connections. 
> It happens when the executor service throws a RejectedExecutionException, 
> which is not caught.
>  
> Silent in the sense that OpsCenter doesn't notice any problem since JMX is 
> still running fine.





[jira] [Created] (CASSANDRA-11388) FailureDetector.java:456 - Ignoring interval time of

2016-03-21 Thread Relish Chackochan (JIRA)
Relish Chackochan created CASSANDRA-11388:
-

 Summary: FailureDetector.java:456 - Ignoring interval time of
 Key: CASSANDRA-11388
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11388
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cassandra 2.2.4
Reporter: Relish Chackochan
Priority: Minor


We have a Cassandra cluster of 8 nodes on 2.2.4 using jdk1.8.0_65 (RHEL 6.5 
64-bit) and I am seeing the following messages in the Cassandra debug.log file. 
All the nodes are UP and running according to
"nodetool status". NTP is configured on all nodes and time is syncing well.


DEBUG [GossipStage:1] 2016-03-21 06:45:30,700 FailureDetector.java:456 - 
Ignoring interval time of 3069815316 for /192.168.1.153
DEBUG [GossipStage:1] 2016-03-21 06:45:30,700 FailureDetector.java:456 - 
Ignoring interval time of 2076119905 for /192.168.1.135
DEBUG [GossipStage:1] 2016-03-21 06:45:30,700 FailureDetector.java:456 - 
Ignoring interval time of 2683887772 for /192.168.1.151






[jira] [Comment Edited] (CASSANDRA-11299) AssertionError when quering by secondary index

2016-03-21 Thread Julien Anguenot (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204038#comment-15204038
 ] 

Julien Anguenot edited comment on CASSANDRA-11299 at 3/21/16 11:14 AM:
---

So these errors are definitely due to duplicated primary keys after an upgrade 
(in my case 2.2.5 to 3.0.4). To be clear, they appear right after the upgrade, 
before even running sstableupgrade.

Scrub, repairs, sstableupgrades did not report nor fix this issue as reported 
by Michal.

Note, I seem to only have this issue with one (1) particular table out of 
hundreds.

What could I provide to help the matter here?


was (Author: anguenot):
So these errors are definitely due to duplicated primary keys after an upgrade 
(in my case 2.2.5 to 3.0.4). When I mean upgrade, this is before even running 
any sstablesupgrades.

Scrub, repairs, sstableupgrades did not report nor fix this issue as reported 
by Michal.

What could I provide to help the matter here?

> AssertionError when quering by secondary index
> --
>
> Key: CASSANDRA-11299
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11299
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.3
>Reporter: Michał Matłoka
>
> Hi,
> Recently we have upgraded from Cassandra 2.2.4 to 3.3. I have issues with one 
> table. When I try to query using any secondary index I get e.g. in cqlsh
> {code}
> Traceback (most recent call last):
>   File "/usr/bin/cqlsh.py", line 1249, in perform_simple_statement
> result = future.result()
>   File 
> "/usr/share/cassandra/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/cluster.py",
>  line 3122, in result
> raise self._final_exception
> ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation 
> failed - received 0 responses and 1 failures" info={'failures': 1, 
> 'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
> {code}
> Node logs shows then:
> {code}
> [[AWARN  [SharedPool-Worker-2] 2016-03-03 00:47:01,679 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-2,5,main]: {}
> java.lang.AssertionError: null
> at 
> org.apache.cassandra.index.internal.composites.CompositesSearcher$1Transform.findEntry(CompositesSearcher.java:225)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.index.internal.composites.CompositesSearcher$1Transform.applyToRow(CompositesSearcher.java:215)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:116) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:133)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:89)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:79)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:294)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:134)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:127)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:123)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:292) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1789)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2457)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_66]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [apache-cassandra-3.3.0.jar:3.3.0]
> at 

[jira] [Comment Edited] (CASSANDRA-11310) Allow filtering on clustering columns for queries without secondary indexes

2016-03-21 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15202782#comment-15202782
 ] 

Alex Petrov edited comment on CASSANDRA-11310 at 3/21/16 11:10 AM:
---

I've tried to get it all sorted out and have handled most of it by now, 
although some cases are hard to handle without larger changes. For example, 
within {{StatementRestrictions}} we can't know whether the 
{{ClusteringColumnRestrictions}} would require filtering. One such case is 
multiple column slices when the clustering columns are given in the correct 
order: without seeing the actual restrictions (which are private to the 
{{PrimaryKeyRestrictionSet}}), we can't assert that there is more than one 
slice. There is also custom logic related to filtering contained within the 
{{RestrictionSet}}, such as {{hasMultipleContains}}.

I think it will be simpler to handle on top of your branch: 
https://github.com/blerer/cassandra/commits/11354-trunk, since there it's going 
to be possible to add logic related to the clustering columns within the 
clustering columns class. To give a bit of background, I've consolidated 
{{usesFiltering}} and exposed it via the (previously existing) 
{{StatementRestrictions::needFiltering}}, and made sure that both 
{{usesFiltering}} and {{usesSecondaryIndexing}} are correctly and consistently 
set. For the last bit, it'd be good to know whether clustering column 
restrictions require filtering, as described above.

Do you think it's a good idea?  


was (Author: ifesdjeen):
I've tried to get it all sorted out and have handled most of it by now, 
although some cases are hard to handle without larger changes. For example, 
within {{StatementRestrictions}} we can't know whether the 
{{ClusteringColumnRestrictions}} would require filtering. One such case is 
multiple column slices when the clustering columns are given in the correct 
order: without seeing the actual restrictions (which are private to the 
{{PrimaryKeyRestrictionSet}}), we can't assert that there is more than one 
slice. There is also custom logic related to filtering contained within the 
{{RestrictionSet}}, such as {{hasMultipleContains}}.

I think it will be simpler to handle on top of your branch: 
https://github.com/blerer/cassandra/commits/11354-trunk, since there it's going 
to be possible to add logic related to the clustering columns within the 
clustering columns class. Do you think it's a good idea?  

> Allow filtering on clustering columns for queries without secondary indexes
> ---
>
> Key: CASSANDRA-11310
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11310
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Alex Petrov
>  Labels: doc-impacting
> Fix For: 3.x
>
>
> Since CASSANDRA-6377 queries without index filtering non-primary key columns 
> are fully supported.
> It makes sense to also support filtering on clustering-columns.
> {code}
> CREATE TABLE emp_table2 (
> empID int,
> firstname text,
> lastname text,
> b_mon text,
> b_day text,
> b_yr text,
> PRIMARY KEY (empID, b_yr, b_mon, b_day));
> SELECT b_mon,b_day,b_yr,firstname,lastname FROM emp_table2
> WHERE b_mon='oct' ALLOW FILTERING;
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11299) AssertionError when quering by secondary index

2016-03-21 Thread Julien Anguenot (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204038#comment-15204038
 ] 

Julien Anguenot commented on CASSANDRA-11299:
-

So these errors are definitely due to primary keys duplicated after an upgrade 
(in my case, 2.2.5 to 3.0.4); by upgrade I mean before even running 
sstableupgrade.

Scrub, repair and sstableupgrade neither reported nor fixed this issue, as 
Michal also observed.

What could I provide to help the matter here?

> AssertionError when quering by secondary index
> --
>
> Key: CASSANDRA-11299
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11299
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.3
>Reporter: Michał Matłoka
>
> Hi,
> Recently we upgraded from Cassandra 2.2.4 to 3.3. I have issues with one 
> table. When I try to query using any secondary index, I get e.g. in cqlsh:
> {code}
> Traceback (most recent call last):
>   File "/usr/bin/cqlsh.py", line 1249, in perform_simple_statement
> result = future.result()
>   File 
> "/usr/share/cassandra/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/cluster.py",
>  line 3122, in result
> raise self._final_exception
> ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation 
> failed - received 0 responses and 1 failures" info={'failures': 1, 
> 'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
> {code}
> Node logs shows then:
> {code}
> WARN  [SharedPool-Worker-2] 2016-03-03 00:47:01,679 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-2,5,main]: {}
> java.lang.AssertionError: null
> at 
> org.apache.cassandra.index.internal.composites.CompositesSearcher$1Transform.findEntry(CompositesSearcher.java:225)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.index.internal.composites.CompositesSearcher$1Transform.applyToRow(CompositesSearcher.java:215)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:116) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:133)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:89)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:79)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:294)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:134)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:127)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:123)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:292) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1789)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2457)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_66]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.3.0.jar:3.3.0]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66]
> {code}
> SSTables are upgraded; I have tried repair and scrub. I have tried to rebuild 
> the indexes, and even to remove and re-add them. It occurs on every cluster node.
> Additionally, I have seen in this table a case where the PRIMARY KEY was 
> duplicated (there were two rows with the same primary key; by seeing what 
> 

[jira] [Created] (CASSANDRA-11387) If "using ttl" not present in an update statement, ttl shouldn't be updated to null

2016-03-21 Thread JIRA
Çelebi Murat created CASSANDRA-11387:


 Summary: If "using ttl" not present in an update statement, ttl 
shouldn't be updated to null
 Key: CASSANDRA-11387
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11387
 Project: Cassandra
  Issue Type: Improvement
  Components: Core, CQL
Reporter: Çelebi Murat
Priority: Minor


When I update a value without a "USING TTL" clause, the TTL becomes null 
instead of staying untouched, causing unexpected behaviour. Selecting the 
TTL of the values before the actual update operation hinders both 
performance and development speed, and makes the software prone to bugs.

Instead, it would be very helpful if the behaviour were changed as follows: if 
no "USING TTL" clause is present in an update statement, the TTL value should 
stay unchanged.
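A minimal cqlsh sketch of the behaviour being described (keyspace, table and 
values here are hypothetical, for illustration only):

{code}
CREATE TABLE ks.t (k int PRIMARY KEY, v text);

INSERT INTO ks.t (k, v) VALUES (1, 'a') USING TTL 3600;
SELECT TTL(v) FROM ks.t WHERE k = 1;   -- close to 3600: v will expire

UPDATE ks.t SET v = 'b' WHERE k = 1;   -- no USING TTL clause
SELECT TTL(v) FROM ks.t WHERE k = 1;   -- null: v no longer expires
{code}

The second {{TTL(v)}} is null because the update writes a new, non-expiring 
cell over the expiring one.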



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-11195) paging may return incomplete results on small page size

2016-03-21 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer reassigned CASSANDRA-11195:
--

Assignee: Benjamin Lerer

> paging may return incomplete results on small page size
> 
>
> Key: CASSANDRA-11195
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11195
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Assignee: Benjamin Lerer
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node2.log, node2_debug.log
>
>
> This was found through a flapping test, and running that test is still the 
> easiest way to repro the issue. On CI we're seeing a 40-50% failure rate, but 
> locally this test fails much less frequently.
> If I attach a python debugger and re-query the "bad" query, it continues to 
> return incomplete data indefinitely. If I go directly to cqlsh I can see all 
> rows just fine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11320) Improve backoff policy for cqlsh COPY FROM

2016-03-21 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203918#comment-15203918
 ] 

Stefania commented on CASSANDRA-11320:
--

I've started to work on a patch for 3.0:

|3.0|[patch|https://github.com/stef1927/cassandra/commits/11320-3.0]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11320-3.0-dtest/]|

> Improve backoff policy for cqlsh COPY FROM
> --
>
> Key: CASSANDRA-11320
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11320
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 3.x
>
>
> Currently we have an exponential back-off policy in COPY FROM that kicks in 
> when timeouts are received. However, there are two limitations:
> * it does not cover new requests, and therefore we may not back off 
> sufficiently to give an overloaded server time to recover
> * the pause is performed in the receiving thread, and therefore we may not 
> process server messages quickly enough
> There is a static throttling mechanism in rows per second from feeder to 
> worker processes (the INGESTRATE), but the feeder has no idea of the load on 
> each worker process. However, it's easy to keep track of how many chunks a 
> worker process has yet to read by introducing a bounded semaphore.
> The idea is to move the back-off pauses to the worker process's main thread 
> so as to cover all messages, new and retried, not just the retries that 
> timed out. The worker process will not read new chunks during the back-off 
> pauses, and the feeder process can then look at the number of pending chunks 
> before sending new chunks to a worker process.
> [~aholmber], [~aweisberg] what do you think?  
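The bounded-semaphore idea described above could be sketched roughly as 
follows. This is an illustrative Python sketch using threads rather than the 
actual cqlsh worker processes, and all names ({{Worker}}, {{MAX_PENDING}}, 
etc.) are made up; it only shows how a semaphore lets the feeder see a 
worker's backlog and block when the worker falls behind:

```python
import queue
import threading
import time

MAX_PENDING = 4  # illustrative bound on chunks a worker may have outstanding


class Worker:
    """Processes chunks from a feeder. The semaphore counts free slots:
    the feeder blocks in send() once MAX_PENDING chunks are unprocessed."""

    def __init__(self):
        self.pending = threading.Semaphore(MAX_PENDING)
        self.inbox = queue.Queue()
        self.done = []
        self.thread = threading.Thread(target=self._run)
        self.thread.start()

    def send(self, chunk):
        # Feeder side: blocks here when the worker is MAX_PENDING chunks behind.
        self.pending.acquire()
        self.inbox.put(chunk)

    def close(self):
        # Sentinel tells the worker thread to exit after draining the queue.
        self.inbox.put(None)
        self.thread.join()

    def _run(self):
        while True:
            chunk = self.inbox.get()
            if chunk is None:
                return
            time.sleep(0.001)       # stand-in for the actual insert work
            self.done.append(chunk)
            self.pending.release()  # frees one slot for the feeder


worker = Worker()
for chunk in range(20):  # feeder loop: never runs more than 4 chunks ahead
    worker.send(chunk)
worker.close()
print(len(worker.done))  # prints 20
```

The same counter could also drive the feeder's choice of which worker to send 
the next chunk to, since a worker with few free slots is presumably overloaded.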



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11386) Cassandra eclipse error on trunk

2016-03-21 Thread Vinay Polisetty (JIRA)
Vinay Polisetty created CASSANDRA-11386:
---

 Summary: Cassandra eclipse error on trunk
 Key: CASSANDRA-11386
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11386
 Project: Cassandra
  Issue Type: Bug
  Components: Testing
 Environment: mac / eclipse luna
Reporter: Vinay Polisetty
Priority: Minor
 Attachments: Screen Shot 2016-03-21 at 12.04.36 AM.png

The method computeCompressionRatio(Iterable) in the type 
TableMetrics is not applicable for the arguments (Iterable)

Eclipse shows the above error. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)