[jira] [Commented] (CASSANDRA-11164) Order and filter cipher suites correctly

2016-02-12 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15144334#comment-15144334
 ] 

Stefan Podkowinski commented on CASSANDRA-11164:


Do you have an opinion on not filtering the available cipher suites at all? Why 
fix the ordering when we can just get rid of the filtering in the first place?
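
For illustration only: the order-preserving filtering the ticket asks for can be sketched in Python (the real code is Java, in {{SSLFactory}}; the function name and cipher names here are hypothetical):

```python
def filter_cipher_suites(desired, supported):
    """Return the desired ciphers that are actually supported, preserving
    the order given in configuration (e.g. cassandra.yaml) rather than
    the order of the runtime's supported list."""
    supported_set = set(supported)
    return [cipher for cipher in desired if cipher in supported_set]

# The configured order wins; unsupported entries are dropped.
filter_cipher_suites(["TLS_B", "TLS_A", "TLS_C"], ["TLS_A", "TLS_B"])
# -> ["TLS_B", "TLS_A"]
```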

> Order and filter cipher suites correctly
> 
>
> Key: CASSANDRA-11164
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11164
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Tom Petracca
>Priority: Minor
> Fix For: 2.2.x
>
> Attachments: 11164-2.2.txt
>
>
> As pointed out in https://issues.apache.org/jira/browse/CASSANDRA-10508, 
> SSLFactory.filterCipherSuites() doesn't respect the ordering of desired 
> ciphers in cassandra.yaml.
> Also the fix that occurred for 
> https://issues.apache.org/jira/browse/CASSANDRA-3278 is incomplete and needs 
> to be applied to all locations where we create an SSLSocket so that JCE is 
> not required out of the box or with additional configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11053) COPY FROM on large datasets: fix progress report and debug performance

2016-02-12 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15144508#comment-15144508
 ] 

Stefania commented on CASSANDRA-11053:
--

Here are the latest results:

||MODULE CYTHONIZED||PREPARED STATEMENTS||NUM. WORKER PROCESSES||CHUNK SIZE||AVERAGE ROWS / SEC||TOTAL TIME||APPROX ROWS / SEC IN REAL-TIME (50% -> 95%)||
|NONE|YES|7|1,000|44,115|7' 44"|43,700 -> 44,000|
|NONE|NO|7|1,000|58,345|5' 51"|57,800 -> 58,200|
|DRIVER|YES|7|1,000|77,719|4' 23"|77,300 -> 77,600|
|DRIVER|NO \(*\)|7|1,000|94,508 \(*\)|3' 36"|94,000 -> 95,000|
|DRIVER|YES|15|1,000|78,429|4' 21"|77,900 -> 78,300|
|DRIVER|YES|7|10,000|78,746|4' 20"|78,000 -> 78,500|
|DRIVER|YES|7|5,000|79,337|4' 18"|78,900 -> 79,200|
|DRIVER|YES|8|5,000|81,636|4' 10"|80,900 -> 81,500|
|DRIVER|YES|9|5,000|*82,584*|4' 8"|82,000 -> 82,500|
|DRIVER|YES|10|5,000|82,486|4' 8"|81,800 -> 82,400|
|DRIVER|YES|9|2,500|82,013|4' 9"|81,500 -> 81,900|
|DRIVER + COPYUTIL|YES|9|5,000|*88,187*|3' 52"|87,900 -> 88,100|
|DRIVER + COPYUTIL|NO \(*\)|9|5,000|87,860 \(*\)|3' 53"|99,600 -> 93,800|

I've also saved the results in a 
[spreadsheet|https://docs.google.com/spreadsheets/d/1XTE2fSDJkwHzpdaD5HI0HlsFuPCW1Kc1NeqauF6WX2s].

The column on the right contains two approximate observations of the real-time 
rate, one at about half-way through and one just before finishing. Its purpose 
is simply to verify that the real-time rate is now accurate and no longer lags 
behind as it used to.

The test runs marked with \(*\) were affected by timeouts, indicating the 
cluster had reached capacity. This is to be expected: with non-prepared 
statements we shift the parsing burden to the Cassandra nodes, forcing them to 
compile each batch statement as well. I don't consider this a particularly good 
thing to do, as it is only applicable when the cluster is over-sized, and I 
therefore focused my search for optimal parameters on the case with prepared 
statements (the default). In the very last run, we can see how half-way through 
we had an average of 99,600 rows per second, but it then plummeted just before 
finishing due to a long pause (there is an exponential back-off policy that 
kicks in on timeouts).
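
A minimal sketch of such a back-off policy (parameter values are illustrative, not necessarily the ones cqlsh actually uses):

```python
def backoff_delay(consecutive_timeouts, base=0.1, cap=30.0):
    """Exponential back-off: the pause doubles with each consecutive
    timeout, capped so a long streak pauses but never stalls forever."""
    return min(cap, base * (2 ** consecutive_timeouts))

# A single timeout gives a short pause; a streak gives a pause long
# enough to dent the end-of-run average, as seen in the last row above.
[backoff_delay(n) for n in (0, 3, 12)]
# -> [0.1, 0.8, 30.0]
```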

The improvements over the [last set of 
results|https://issues.apache.org/jira/browse/CASSANDRA-11053?focusedCommentId=15133899=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15133899]
 are mostly due to tailored optimizations of Python code via the Python [line 
profiler|https://github.com/rkern/line_profiler]. I've also reduced the amount 
of data sent from worker processes to the parent by aggregating results. This 
helped the real-time reporting tremendously. I've also added support for libev 
if it is installed, as described in the driver [installation 
guide|https://datastax.github.io/python-driver/installation.html]. Finally, I 
fixed a problem with type formatting introduced by the cythonized driver.
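
The aggregation idea is simple enough to sketch (hypothetical names; the actual copyutil code differs): instead of one IPC message per row, each worker folds a chunk's outcomes into a single summary before sending it to the parent.

```python
def summarize_chunk(outcomes):
    """Fold per-row outcomes (True = imported, False = failed) into one
    (imported, failed) tuple, so the parent receives a single message
    per chunk instead of one message per row."""
    imported = sum(1 for ok in outcomes if ok)
    return imported, len(outcomes) - imported

summarize_chunk([True, True, False, True])
# -> (3, 1)
```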

With these improvements, together with those previously adopted, worker and 
parent processes are no longer as tightly coupled, so I experimented with the 
number of worker processes and the chunk size. The default number of worker 
processes is 7 (num-cores minus 1); however, observation suggests that 
num-cores + 1 gives better results. I monitored vmstats with {{dstat}} and the 
number of running tasks was reasonable (less than 2 * num-cores). As for the 
chunk size, the default value of 1,000 is probably too small; 5,000 seems to be 
a better value for this particular dataset and environment. I don't propose 
changing the current default values, though, as they are safer for smaller 
environments such as laptops.
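
For reference, the two worker-count policies compared above, sketched in Python (the real cqlsh code may compute and name this differently):

```python
import multiprocessing

def default_workers():
    """Current default: one worker per core, minus one for the parent."""
    return max(1, multiprocessing.cpu_count() - 1)

def suggested_workers():
    """What the benchmark suggests: num-cores + 1 performs slightly better."""
    return multiprocessing.cpu_count() + 1
```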

I've also spent time trying to improve csv parsing times by comparing 
alternatives based on [pandas|http://pandas.pydata.org/], 
[numpy|http://www.numpy.org/] and [numba|http://numba.pydata.org/], but none 
were worth pursuing further, at least not for this benchmark with its very 
simple type conversions (text and integers). For more complex data types, such 
as dates or collections, pure cython conversion functions could perhaps help 
significantly.

Whilst I still have a new set of profiler results to analyse, I feel that we 
are reaching a point where our efforts could be better spent elsewhere due to 
diminishing returns. As a comparison, cassandra-stress with approx. 1KB 
partitions inserted 5M rows at a rate of 93k rows per second. As this is well 
within 10% of our results, I suggest we consider focusing on alternative means 
of optimization for wider use cases, such as supporting binary formats for 
COPY TO / FROM or optimizing text conversion of complex data types.


> COPY FROM on large datasets: fix progress report and debug performance
> --
>
> Key: CASSANDRA-11053
> URL: 

[jira] [Commented] (CASSANDRA-11053) COPY FROM on large datasets: fix progress report and debug performance

2016-02-12 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15144509#comment-15144509
 ] 

Stefania commented on CASSANDRA-11053:
--

It is fine now, thanks; see details below.

> COPY FROM on large datasets: fix progress report and debug performance
> --
>
> Key: CASSANDRA-11053
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11053
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: copy_from_large_benchmark.txt, 
> copy_from_large_benchmark_2.txt, parent_profile.txt, parent_profile_2.txt, 
> worker_profiles.txt, worker_profiles_2.txt
>
>
> Running COPY FROM on a large dataset (20G divided into 20M records) revealed 
> two issues:
> * The progress report is incorrect: it is very slow until almost the end of 
> the test, at which point it catches up extremely quickly.
> * The performance in rows per second is similar to running smaller tests with 
> a smaller cluster locally (approx. 35,000 rows per second). As a comparison, 
> cassandra-stress manages 50,000 rows per second under the same set-up, and is 
> therefore roughly 1.5 times faster.
> See attached file _copy_from_large_benchmark.txt_ for the benchmark details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11126) select_distinct_with_deletions_test failing on non-vnode environments

2016-02-12 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15144466#comment-15144466
 ] 

Sylvain Lebresne commented on CASSANDRA-11126:
--

bq.  I'm afraid this isn't possible with ccm as it is

Well, {{ccm}} certainly supports running from a local directory, so it ought to 
be possible. But I think I see what you mean, and I agree we can fix {{ccm}} so 
it's much easier to make this work transparently for this kind of test. We 
definitely should do that.

bq. The way I recommend you test two particular branches is to push them to 
GitHub

I'll do it this time, but that's seriously painful: a typical way to debug such 
a problem involves multiple change/recompile cycles, and having to commit and 
push every time adds significant annoyance and slowness to the process (if for 
no other reason than that I assume {{ccm}} will do a clean before recompiling 
every time, which I can avoid most of the time locally).
Anyway, my main point is: when you work on this type of test framework, please 
keep in mind that having an easy way to test against a local checkout is 
pretty important for us devs.

> select_distinct_with_deletions_test failing on non-vnode environments
> -
>
> Key: CASSANDRA-11126
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11126
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Ryan McGuire
>Assignee: Sylvain Lebresne
>  Labels: dtest
> Fix For: 3.0.x
>
>
> Looks like this was fixed in CASSANDRA-10762, but not for non-vnode 
> environments:
> {code}
> $ DISABLE_VNODES=yes KEEP_TEST_DIR=yes CASSANDRA_VERSION=git:cassandra-3.0 
> PRINT_DEBUG=true nosetests -s -v 
> upgrade_tests/cql_tests.py:TestCQLNodes2RF1.select_distinct_with_deletions_test
> select_distinct_with_deletions_test 
> (upgrade_tests.cql_tests.TestCQLNodes2RF1) ... cluster ccm directory: 
> /tmp/dtest-UXb0un
> http://git-wip-us.apache.org/repos/asf/cassandra.git git:cassandra-3.0
> Custom init_config not found. Setting defaults.
> Done setting configuration options:
> {   'num_tokens': None,
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> getting default job version for 3.0.3
> UpgradePath(starting_version='binary:2.2.3', upgrade_version=None)
> starting from 2.2.3
> upgrading to {'install_dir': 
> '/home/ryan/.ccm/repository/gitCOLONcassandra-3.0'}
> Querying upgraded node
> FAIL
> ==
> FAIL: select_distinct_with_deletions_test 
> (upgrade_tests.cql_tests.TestCQLNodes2RF1)
> --
> Traceback (most recent call last):
>   File "/home/ryan/git/datastax/cassandra-dtest/upgrade_tests/cql_tests.py", 
> line 3360, in select_distinct_with_deletions_test
> self.assertEqual(9, len(rows))
> AssertionError: 9 != 8
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-UXb0un
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'num_tokens': None,
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> dtest: DEBUG: getting default job version for 3.0.3
> dtest: DEBUG: UpgradePath(starting_version='binary:2.2.3', 
> upgrade_version=None)
> dtest: DEBUG: starting from 2.2.3
> dtest: DEBUG: upgrading to {'install_dir': 
> '/home/ryan/.ccm/repository/gitCOLONcassandra-3.0'}
> dtest: DEBUG: Querying upgraded node
> - >> end captured logging << -
> --
> Ran 1 test in 56.022s
> FAILED (failures=1)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11056) Use max timestamp to decide DTCS-timewindow-membership

2016-02-12 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15144551#comment-15144551
 ] 

Sylvain Lebresne commented on CASSANDRA-11056:
--

Cancelling the patch for now because it sounds like there are still outstanding 
questions regarding the consequences of switching to this.

> Use max timestamp to decide DTCS-timewindow-membership
> --
>
> Key: CASSANDRA-11056
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11056
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Marcus Eriksson
>Assignee: Björn Hegerfors
>  Labels: dtcs
> Attachments: cassandra-2.2-CASSANDRA-11056.txt
>
>
> TWCS (CASSANDRA-9666) uses the max timestamp to decide time window 
> membership; we should do the same in DTCS so that users can configure DTCS to 
> work exactly like TWCS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11033) Prevent logging in sandboxed state

2016-02-12 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11033:
-
Reviewer: Tyler Hobbs

> Prevent logging in sandboxed state
> --
>
> Key: CASSANDRA-11033
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11033
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 3.0.x
>
>
> logback will re-read its configuration file regularly, so it is possible that 
> logback tries to reload the configuration while we log from a sandboxed UDF, 
> which will fail due to the restricted access privileges for UDFs. UDAs are 
> also affected, as they use UDFs.
> /cc [~doanduyhai]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8616) sstable tools may result in commit log segments be written

2016-02-12 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15144454#comment-15144454
 ] 

Sylvain Lebresne commented on CASSANDRA-8616:
-

We definitely want to clean up how offline tools use internal code, as it's a 
mess right now, but for now I agree a quick and dirty (and, importantly, 
simple) fix is good enough. It would be nice to write a regression dtest for 
this before committing, though.

[~thobbs] are you good to finish the review on this, since you're still marked 
as reviewer?

> sstable tools may result in commit log segments be written
> --
>
> Key: CASSANDRA-8616
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8616
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Tyler Hobbs
>Assignee: Yuki Morishita
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: 8161-2.0.txt
>
>
> There was a report of sstable2json causing commitlog segments to be written 
> out when run.  I haven't attempted to reproduce this yet, so that's all I 
> know for now.  Since sstable2json loads the conf and schema, I'm thinking 
> that it may inadvertently be triggering the commitlog code.
> sstablescrub, sstableverify, and other sstable tools have the same issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11157) test_bulk_round_trip_blogposts_with_max_connections got "Truncate timed out"

2016-02-12 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11157:
-
Reviewer: Jim Witschey

> test_bulk_round_trip_blogposts_with_max_connections got "Truncate timed out"
> 
>
> Key: CASSANDRA-11157
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11157
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Stefania
>Assignee: Stefania
>
> {{test_bulk_round_trip_blogposts_with_max_connections}} failed again but for 
> a different reason:
> http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-11148-trunk-dtest/1/testReport/junit/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_bulk_round_trip_blogposts_with_max_connections/
> Increasing cqlsh {{--request-timeout}} should fix this since it is just the 
> TRUNCATE operation that times out, unlike the problems of CASSANDRA-10938.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11084) cassandra-3.0 eclipse-warnings

2016-02-12 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11084:
-
Reviewer: Sylvain Lebresne

> cassandra-3.0 eclipse-warnings
> --
>
> Key: CASSANDRA-11084
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11084
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Shuler
>Assignee: Benjamin Lerer
>Priority: Minor
> Fix For: 3.0.x
>
> Attachments: 11084-3.0.txt
>
>
> REF = origin/cassandra-3.0 
> COMMIT = 414c1c5771ca05c23c8c1279dbdb90a673dda040
> {noformat}
> # 1/27/16 10:00:17 PM UTC
> # Eclipse Compiler for Java(TM) v20150120-1634, 3.10.2, Copyright IBM Corp 
> 2000, 2013. All rights reserved.
> --
> 1. ERROR in 
> /mnt/data/jenkins/workspace/cassandra-3.0_eclipse-warnings/src/java/org/apache/cassandra/hints/CompressedChecksummedDataInput.java
>  (at line 156)
>   return builder.build();
>   ^^^
> Potential resource leak: '' may not 
> be closed at this location
> --
> --
> 2. ERROR in 
> /mnt/data/jenkins/workspace/cassandra-3.0_eclipse-warnings/src/java/org/apache/cassandra/net/OutboundTcpConnectionPool.java
>  (at line 141)
>   return channel.socket();
>   
> Potential resource leak: 'channel' may not be closed at this location
> --
> 2 problems (2 errors)
> {noformat}
> Check the latest job on 
> http://cassci.datastax.com/job/cassandra-3.0_eclipse-warnings/ for the most 
> recent warnings



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11084) cassandra-3.0 eclipse-warnings

2016-02-12 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15144543#comment-15144543
 ] 

Sylvain Lebresne commented on CASSANDRA-11084:
--

+1, but I'd move the comment in {{newSocket}} onto the {{SuppressWarnings}} and 
add a similar comment on the other one.

> cassandra-3.0 eclipse-warnings
> --
>
> Key: CASSANDRA-11084
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11084
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Shuler
>Assignee: Benjamin Lerer
>Priority: Minor
> Fix For: 3.0.x
>
> Attachments: 11084-3.0.txt
>
>
> REF = origin/cassandra-3.0 
> COMMIT = 414c1c5771ca05c23c8c1279dbdb90a673dda040
> {noformat}
> # 1/27/16 10:00:17 PM UTC
> # Eclipse Compiler for Java(TM) v20150120-1634, 3.10.2, Copyright IBM Corp 
> 2000, 2013. All rights reserved.
> --
> 1. ERROR in 
> /mnt/data/jenkins/workspace/cassandra-3.0_eclipse-warnings/src/java/org/apache/cassandra/hints/CompressedChecksummedDataInput.java
>  (at line 156)
>   return builder.build();
>   ^^^
> Potential resource leak: '' may not 
> be closed at this location
> --
> --
> 2. ERROR in 
> /mnt/data/jenkins/workspace/cassandra-3.0_eclipse-warnings/src/java/org/apache/cassandra/net/OutboundTcpConnectionPool.java
>  (at line 141)
>   return channel.socket();
>   
> Potential resource leak: 'channel' may not be closed at this location
> --
> 2 problems (2 errors)
> {noformat}
> Check the latest job on 
> http://cassci.datastax.com/job/cassandra-3.0_eclipse-warnings/ for the most 
> recent warnings



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11064) Failed aggregate creation breaks server permanently

2016-02-12 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15144550#comment-15144550
 ] 

Sylvain Lebresne commented on CASSANDRA-11064:
--

ping [~snazy]: did the potential reason for the handling of empty collections 
come back to you? It would be nice to get this resolved and, as said above, I'm 
decently confident that empty BBs for collections should be invalid (since they 
don't {{validate()}} :)). 

> Failed aggregate creation breaks server permanently
> ---
>
> Key: CASSANDRA-11064
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11064
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Olivier Michallat
>Assignee: Robert Stupp
> Fix For: 3.0.x
>
>
> While testing edge cases around aggregates, I tried the following to see if 
> custom types were supported:
> {code}
> ccm create v321 -v3.2.1 -n3
> ccm updateconf enable_user_defined_functions:true
> ccm start
> ccm node1 cqlsh
> CREATE FUNCTION id(i 'DynamicCompositeType(s => UTF8Type, i => Int32Type)')
> RETURNS NULL ON NULL INPUT
> RETURNS 'DynamicCompositeType(s => UTF8Type, i => Int32Type)'
> LANGUAGE java
> AS 'return i;';
> // function created successfully
> CREATE AGGREGATE ag()
> SFUNC id
> STYPE 'DynamicCompositeType(s => UTF8Type, i => Int32Type)'
> INITCOND 's@foo:i@32';
> ServerError:  message="java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> org.apache.cassandra.exceptions.SyntaxException: Failed parsing CQL term: 
> [s@foo:i@32] reason: SyntaxException line 1:1 no viable alternative at 
> character '@'">{code}
> Despite the error, the aggregate appears in system tables:
> {code}
> select * from system_schema.aggregates;
>  keyspace_name | aggregate_name | ...
> ---++ ...
>   test | ag | ...
> {code}
> But you can't drop it, and trying to drop its function produces the server 
> error again:
> {code}
> DROP AGGREGATE ag;
> InvalidRequest: code=2200 [Invalid query] message="Cannot drop non existing 
> aggregate 'test.ag'"
> DROP FUNCTION id;
> ServerError:  message="java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> org.apache.cassandra.exceptions.SyntaxException: Failed parsing CQL term: 
> [s@foo:i@32] reason: SyntaxException line 1:1 no viable alternative at 
> character '@'">
> {code}
> What's worse, it's now impossible to restart the server:
> {code}
> ccm stop; ccm start
> org.apache.cassandra.exceptions.SyntaxException: Failed parsing CQL term: 
> [s@foo:i@32] reason: SyntaxException line 1:1 no viable alternative at 
> character '@'
>   at 
> org.apache.cassandra.cql3.CQLFragmentParser.parseAny(CQLFragmentParser.java:48)
>   at org.apache.cassandra.cql3.Terms.asBytes(Terms.java:51)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.createUDAFromRow(SchemaKeyspace.java:1225)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchUDAs(SchemaKeyspace.java:1204)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchFunctions(SchemaKeyspace.java:1129)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:897)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspacesWithout(SchemaKeyspace.java:872)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchNonSystemKeyspaces(SchemaKeyspace.java:860)
>   at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:125)
>   at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:115)
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:229)
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:551)
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:680)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11043) Secondary indexes doesn't properly validate custom expressions

2016-02-12 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11043:
-
Reviewer: Andrés de la Peña

> Secondary indexes doesn't properly validate custom expressions
> --
>
> Key: CASSANDRA-11043
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11043
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL, Local Write-Read Paths
>Reporter: Andrés de la Peña
>Assignee: Sam Tunnicliffe
>  Labels: 2i, index, validation
> Fix For: 3.0.x, 3.x
>
> Attachments: test-index.zip
>
>
> It seems that 
> [CASSANDRA-7575|https://issues.apache.org/jira/browse/CASSANDRA-7575] is 
> broken in Cassandra 3.x. As stated in the secondary indexes' API 
> documentation, custom index implementations should perform any validation of 
> query expressions at {{Index#searcherFor(ReadCommand)}}, throwing an 
> {{InvalidRequestException}} if the expressions are not valid. I assume these 
> validation errors should produce an {{InvalidRequest}} error on cqlsh, or 
> raise an {{InvalidQueryException}} on Java driver. However, when 
> {{Index#searcherFor(ReadCommand)}} throws its {{InvalidRequestException}}, I 
> get this cqlsh output:
> {noformat}
> Traceback (most recent call last):
>   File "bin/cqlsh.py", line 1246, in perform_simple_statement
> result = future.result()
>   File 
> "/Users/adelapena/stratio/platform/src/cassandra-3.2.1/bin/../lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/cluster.py",
>  line 3122, in result
> raise self._final_exception
> ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation 
> failed - received 0 responses and 1 failures" info={'failures': 1, 
> 'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
> {noformat}
> I attach a dummy index implementation to reproduce the error:
> {noformat}
> CREATE KEYSPACE test with replication = {'class' : 'SimpleStrategy', 
> 'replication_factor' : '1' }; 
> CREATE TABLE test.test (id int PRIMARY KEY, value varchar); 
> CREATE CUSTOM INDEX test_index ON test.test() USING 'com.stratio.TestIndex'; 
> SELECT * FROM test.test WHERE expr(test_index,'ok');
> SELECT * FROM test.test WHERE expr(test_index,'error');
> {noformat}
> This is especially problematic when using the Cassandra Java Driver, because 
> one of these server exceptions can cause subsequent queries to fail (even if 
> they are valid) with a no-host-available exception.
> Maybe the validation method added with 
> [CASSANDRA-7575|https://issues.apache.org/jira/browse/CASSANDRA-7575] should 
> be restored, unless there is a way to properly manage the exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10882) network_topology_test dtest still failing

2016-02-12 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15144561#comment-15144561
 ] 

Sylvain Lebresne commented on CASSANDRA-10882:
--

This is marked "patch available" but I don't see any link to a dtest PR. Am I 
being blind, or did you forget to add one?

> network_topology_test dtest still failing
> -
>
> Key: CASSANDRA-10882
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10882
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Philip Thompson
>  Labels: dtest
>
> It looks like CASSANDRA-8158 may not have been properly resolved:
> http://cassci.datastax.com/job/cassandra-2.1_novnode_dtest/176/testReport/replication_test/ReplicationTest/network_topology_test/history/
> http://cassci.datastax.com/job/cassandra-2.2_novnode_dtest/lastCompletedBuild/testReport/replication_test/ReplicationTest/network_topology_test/history/
> http://cassci.datastax.com/job/cassandra-3.0_novnode_dtest/lastCompletedBuild/testReport/replication_test/ReplicationTest/network_topology_test/history/
> [~philipthompson] Can you have a look?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/3] cassandra git commit: Avoid potential AssertionError in mixed version cluster

2016-02-12 Thread slebresne
Avoid potential AssertionError in mixed version cluster

patch by slebresne; reviewed by Stefania for CASSANDRA-11128

The patch attempts to make sure the version of a given node is set
correctly as soon as possible by using the version passed through
gossip. That version could previously be used before having been
properly set, thus defaulting to the current version (which might be
incorrect) and leading to the AssertionError.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f3b7599e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f3b7599e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f3b7599e

Branch: refs/heads/trunk
Commit: f3b7599e3b615f26cc81affa97569f6a7395
Parents: d4e6f08
Author: Sylvain Lebresne 
Authored: Tue Feb 9 15:08:34 2016 +0100
Committer: Sylvain Lebresne 
Committed: Fri Feb 12 12:04:09 2016 +0100

--
 CHANGES.txt |  1 +
 .../apache/cassandra/net/MessagingService.java  |  3 +++
 .../cassandra/net/OutboundTcpConnection.java| 11 +-
 .../cassandra/service/StorageService.java   | 21 
 4 files changed, 35 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f3b7599e/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 5156b0c..15012b1 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.4
+ * Avoid potential AssertionError in mixed version cluster (CASSANDRA-11128)
  * Properly handle hinted handoff after topology changes (CASSANDRA-5902)
 * AssertionError when listing sstable files on inconsistent disk state (CASSANDRA-11156)
  * Fix wrong rack counting and invalid conditions check for TokenAllocation

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f3b7599e/src/java/org/apache/cassandra/net/MessagingService.java
--
diff --git a/src/java/org/apache/cassandra/net/MessagingService.java b/src/java/org/apache/cassandra/net/MessagingService.java
index d416dca..835beed 100644
--- a/src/java/org/apache/cassandra/net/MessagingService.java
+++ b/src/java/org/apache/cassandra/net/MessagingService.java
@@ -877,6 +877,9 @@ public final class MessagingService implements MessagingServiceMBean
  */
 public int setVersion(InetAddress endpoint, int version)
 {
+// We can't talk to someone from the future
+version = Math.min(version, current_version);
+
 logger.trace("Setting version {} for {}", version, endpoint);
 
 if (version < VERSION_22)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f3b7599e/src/java/org/apache/cassandra/net/OutboundTcpConnection.java
--
diff --git a/src/java/org/apache/cassandra/net/OutboundTcpConnection.java b/src/java/org/apache/cassandra/net/OutboundTcpConnection.java
index adf90da..7b6e26e 100644
--- a/src/java/org/apache/cassandra/net/OutboundTcpConnection.java
+++ b/src/java/org/apache/cassandra/net/OutboundTcpConnection.java
@@ -134,13 +134,22 @@ public class OutboundTcpConnection extends Thread
 private volatile long completed;
 private final AtomicLong dropped = new AtomicLong();
 private volatile int currentMsgBufferCount = 0;
-private int targetVersion = MessagingService.current_version;
+private volatile int targetVersion;
 
 public OutboundTcpConnection(OutboundTcpConnectionPool pool)
 {
 super("MessagingService-Outgoing-" + pool.endPoint());
 this.poolReference = pool;
 cs = newCoalescingStrategy(pool.endPoint().getHostAddress());
+
+// We want to use the most precise version we know because while there is version detection on connect(),
+// the target version might be accessed by the pool (in getConnection()) before we actually connect (as we
+// connect when the first message is submitted). Note however that the only case where we'll connect
+// without knowing the true version of a node is if that node is a seed (otherwise, we can't know a node
+// unless it has been gossiped to us or it has connected to us and in both case this sets the version) and
+// in that case we won't rely on that targetVersion before we're actually connected and so the version
+// detection in connect() will do its job.
+targetVersion = MessagingService.instance().getVersion(pool.endPoint());
 }
 
 private static boolean isLocalDC(InetAddress targetHost)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f3b7599e/src/java/org/apache/cassandra/service/StorageService.java

[3/3] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2016-02-12 Thread slebresne
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a800ca89
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a800ca89
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a800ca89

Branch: refs/heads/trunk
Commit: a800ca898d6295420e0f43b12686466e838ca9ad
Parents: db49d3b f3b7599
Author: Sylvain Lebresne 
Authored: Fri Feb 12 12:04:42 2016 +0100
Committer: Sylvain Lebresne 
Committed: Fri Feb 12 12:04:42 2016 +0100

--
 CHANGES.txt |  1 +
 .../apache/cassandra/net/MessagingService.java  |  3 +++
 .../cassandra/net/OutboundTcpConnection.java| 11 +-
 .../cassandra/service/StorageService.java   | 21 
 4 files changed, 35 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a800ca89/CHANGES.txt
--
diff --cc CHANGES.txt
index 7c2794a,15012b1..9481544
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,26 -1,5 +1,27 @@@
 -3.0.4
 +3.4
 + * fix EQ semantics of analyzed SASI indexes (CASSANDRA-11130)
 + * Support long name output for nodetool commands (CASSANDRA-7950)
 + * Encrypted hints (CASSANDRA-11040)
 + * SASI index options validation (CASSANDRA-11136)
 + * Optimize disk seek using min/max column name meta data when the LIMIT 
clause is used
 +   (CASSANDRA-8180)
 + * Add LIKE support to CQL3 (CASSANDRA-11067)
 + * Generic Java UDF types (CASSANDRA-10819)
 + * cqlsh: Include sub-second precision in timestamps by default 
(CASSANDRA-10428)
 + * Set javac encoding to utf-8 (CASSANDRA-11077)
 + * Integrate SASI index into Cassandra (CASSANDRA-10661)
 + * Add --skip-flush option to nodetool snapshot
 + * Skip values for non-queried columns (CASSANDRA-10657)
 + * Add support for secondary indexes on static columns (CASSANDRA-8103)
 + * CommitLogUpgradeTestMaker creates broken commit logs (CASSANDRA-11051)
 + * Add metric for number of dropped mutations (CASSANDRA-10866)
 + * Simplify row cache invalidation code (CASSANDRA-10396)
 + * Support user-defined compaction through nodetool (CASSANDRA-10660)
 + * Stripe view locks by key and table ID to reduce contention 
(CASSANDRA-10981)
 + * Add nodetool gettimeout and settimeout commands (CASSANDRA-10953)
 + * Add 3.0 metadata to sstablemetadata output (CASSANDRA-10838)
 +Merged from 3.0:
+  * Avoid potential AssertionError in mixed version cluster (CASSANDRA-11128)
   * Properly handle hinted handoff after topology changes (CASSANDRA-5902)
   * AssertionError when listing sstable files on inconsistent disk state 
(CASSANDRA-11156)
   * Fix wrong rack counting and invalid conditions check for TokenAllocation

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a800ca89/src/java/org/apache/cassandra/net/MessagingService.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a800ca89/src/java/org/apache/cassandra/service/StorageService.java
--



[1/3] cassandra git commit: Avoid potential AssertionError in mixed version cluster

2016-02-12 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 d4e6f08d4 -> f3b7599e3
  refs/heads/trunk db49d3b89 -> a800ca898


Avoid potential AssertionError in mixed version cluster

patch by slebresne; reviewed by Stefania for CASSANDRA-11128

The patch attempts to make sure the version of a given node is set
correctly as soon as possible by using the version passed through
gossip, as that version could previously be used before having been
properly set, thus defaulting to the current version (which might be
incorrect) and leading to the AssertionError


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f3b7599e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f3b7599e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f3b7599e

Branch: refs/heads/cassandra-3.0
Commit: f3b7599e3b615f26cc81affa97569f6a7395
Parents: d4e6f08
Author: Sylvain Lebresne 
Authored: Tue Feb 9 15:08:34 2016 +0100
Committer: Sylvain Lebresne 
Committed: Fri Feb 12 12:04:09 2016 +0100

--
 CHANGES.txt |  1 +
 .../apache/cassandra/net/MessagingService.java  |  3 +++
 .../cassandra/net/OutboundTcpConnection.java| 11 +-
 .../cassandra/service/StorageService.java   | 21 
 4 files changed, 35 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f3b7599e/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 5156b0c..15012b1 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.4
+ * Avoid potential AssertionError in mixed version cluster (CASSANDRA-11128)
  * Properly handle hinted handoff after topology changes (CASSANDRA-5902)
  * AssertionError when listing sstable files on inconsistent disk state 
(CASSANDRA-11156)
  * Fix wrong rack counting and invalid conditions check for TokenAllocation

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f3b7599e/src/java/org/apache/cassandra/net/MessagingService.java
--
diff --git a/src/java/org/apache/cassandra/net/MessagingService.java b/src/java/org/apache/cassandra/net/MessagingService.java
index d416dca..835beed 100644
--- a/src/java/org/apache/cassandra/net/MessagingService.java
+++ b/src/java/org/apache/cassandra/net/MessagingService.java
@@ -877,6 +877,9 @@ public final class MessagingService implements MessagingServiceMBean
  */
 public int setVersion(InetAddress endpoint, int version)
 {
+// We can't talk to someone from the future
+version = Math.min(version, current_version);
+
 logger.trace("Setting version {} for {}", version, endpoint);
 
 if (version < VERSION_22)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f3b7599e/src/java/org/apache/cassandra/net/OutboundTcpConnection.java
--
diff --git a/src/java/org/apache/cassandra/net/OutboundTcpConnection.java b/src/java/org/apache/cassandra/net/OutboundTcpConnection.java
index adf90da..7b6e26e 100644
--- a/src/java/org/apache/cassandra/net/OutboundTcpConnection.java
+++ b/src/java/org/apache/cassandra/net/OutboundTcpConnection.java
@@ -134,13 +134,22 @@ public class OutboundTcpConnection extends Thread
 private volatile long completed;
 private final AtomicLong dropped = new AtomicLong();
 private volatile int currentMsgBufferCount = 0;
-private int targetVersion = MessagingService.current_version;
+private volatile int targetVersion;
 
 public OutboundTcpConnection(OutboundTcpConnectionPool pool)
 {
 super("MessagingService-Outgoing-" + pool.endPoint());
 this.poolReference = pool;
 cs = newCoalescingStrategy(pool.endPoint().getHostAddress());
+
+// We want to use the most precise version we know because while there is version detection on connect(),
+// the target version might be accessed by the pool (in getConnection()) before we actually connect (as we
+// connect when the first message is submitted). Note however that the only case where we'll connect
+// without knowing the true version of a node is if that node is a seed (otherwise, we can't know a node
+// unless it has been gossiped to us or it has connected to us and in both case this sets the version) and
+// in that case we won't rely on that targetVersion before we're actually connected and so the version
+// detection in connect() will do its job.
+targetVersion = MessagingService.instance().getVersion(pool.endPoint());
 }
 
 private static boolean isLocalDC(InetAddress targetHost)


[jira] [Updated] (CASSANDRA-11154) CassandraDaemon in Managed mode fails to be restartable

2016-02-12 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11154:
-
Reviewer: Aleksey Yeschenko

> CassandraDaemon in Managed mode fails to be restartable
> ---
>
> Key: CASSANDRA-11154
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11154
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Achim Nierbeck
> Fix For: 3.4
>
> Attachments: CASSANDRA-11154_patch.txt
>
>
> Restarting the CassandraDeamon in managed mode fails to restart due to 
> duplicate migration of already migrated keyspaces. 
> To reproduce this, just do something like in this test class: 
> https://github.com/ANierbeck/Karaf-Cassandra/blob/master/Karaf-Cassandra-Embedded/src/test/java/de/nierbeck/cassandra/embedded/TestEmbedded.java



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11051) Make LZ4 Compression Level Configurable

2016-02-12 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15144555#comment-15144555
 ] 

Sylvain Lebresne commented on CASSANDRA-11051:
--

I see no reason not to add this, though as that's a (nice but not terribly 
essential) improvement, we should probably stick to 3.x for it. So 
[~mkjellman], a 3.x version of the patch with a few unit tests would be really 
awesome.

> Make LZ4 Compression Level Configurable 
> 
>
> Key: CASSANDRA-11051
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11051
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Michael Kjellman
>Assignee: Michael Kjellman
> Attachments: lz4_2.2.patch
>
>
> We'd like to make the LZ4 Compressor implementation configurable on a per 
> column family basis. Testing has shown a ~4% reduction in file size with the 
> higher compression LZ4 implementation vs the standard compressor we currently 
> use instantiated by the default constructor. The attached patch adds the 
> following optional parameters 'lz4_compressor_type' and 
> 'lz4_high_compressor_level' to the LZ4Compressor. If none of the new optional 
> parameters are specified, the Compressor will use the same defaults Cassandra 
> has always had for LZ4.
> New LZ4Compressor Optional Parameters:
>   * lz4_compressor_type can currently be either 'high' (uses LZ4HCCompressor) 
> or 'fast' (uses LZ4Compressor)
>   * lz4_high_compressor_level can be set between 1 and 17. Not specifying a 
> compressor level while specifying lz4_compressor_type as 'high' will use a 
> default level of 9 (as picked by the LZ4 library as the "default").
> Currently, we use the default LZ4 compressor constructor. This change would 
> just expose the level (and implementation to use) to the user via the schema. 
> There are many potential cases where users may find that the tradeoff in 
> additional CPU and memory usage is worth the on-disk space savings.
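If the proposal lands, the new options would be set through the table's compression map in the schema. A hypothetical CQL sketch (the option names come from the ticket; the exact compression map keys, e.g. 'sstable_compression' vs 'class', differ between Cassandra versions):

```sql
-- Hypothetical usage of the proposed options (2.2-era compression map keys assumed):
-- 'lz4_compressor_type' picks LZ4HCCompressor ('high') or LZ4Compressor ('fast');
-- 'lz4_high_compressor_level' (1-17, default 9) only matters for 'high'.
ALTER TABLE ks.tab WITH compression = {
    'sstable_compression': 'LZ4Compressor',
    'lz4_compressor_type': 'high',
    'lz4_high_compressor_level': '13'
};
```

Omitting either option would fall back to the defaults Cassandra has always used for LZ4.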





[jira] [Commented] (CASSANDRA-7715) Add a credentials cache to the PasswordAuthenticator

2016-02-12 Thread Mike Adamson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15144514#comment-15144514
 ] 

Mike Adamson commented on CASSANDRA-7715:
-

+1 LGTM. I'm happy waiting for CASSANDRA-11022 to clear the cache on auth failures. 
As you say, it'll be far more relevant there.

> Add a credentials cache to the PasswordAuthenticator
> 
>
> Key: CASSANDRA-7715
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7715
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Mike Adamson
>Assignee: Sam Tunnicliffe
>Priority: Minor
> Fix For: 3.x
>
>
> If the PasswordAuthenticator cached credentials for a short time it would 
> reduce the overhead of user journeys when they need to do multiple 
> authentications in quick succession.
> This cache should work in the same way as the cache in CassandraAuthorizer in 
> that if its TTL is set to 0 the cache will be disabled.
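The TTL semantics described above (cache credentials for a short time, TTL of 0 disables the cache) can be sketched with a toy in-memory cache. This is illustrative only; the names are hypothetical and this is not Cassandra's actual auth cache:

```python
import time

class CredentialsCache:
    """Toy TTL cache: ttl_seconds == 0 disables caching entirely."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._entries = {}  # username -> (hashed_credentials, expiry)

    def get(self, username, load):
        if self.ttl == 0:           # cache disabled, always hit the loader
            return load(username)
        entry = self._entries.get(username)
        now = time.monotonic()
        if entry is None or entry[1] < now:
            value = load(username)  # e.g. fetch and hash from the auth table
            self._entries[username] = (value, now + self.ttl)
            return value
        return entry[0]
```

Repeated authentications within the TTL then skip the expensive credential lookup, which is the overhead the ticket aims to reduce.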





[jira] [Commented] (CASSANDRA-11083) cassandra-2.2 eclipse-warnings

2016-02-12 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15144545#comment-15144545
 ] 

Sylvain Lebresne commented on CASSANDRA-11083:
--

+1 with the same very minor nits as in CASSANDRA-11084.

> cassandra-2.2 eclipse-warnings
> --
>
> Key: CASSANDRA-11083
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11083
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Shuler
>Assignee: Benjamin Lerer
>Priority: Minor
> Fix For: 2.2.x
>
> Attachments: 11083-2.2.txt
>
>
> REF = origin/cassandra-2.2 
> COMMIT = fa2fa602d989ed911b60247e3dd8f2d580188782
> {noformat}
> # 1/27/16 6:19:23 PM UTC
> # Eclipse Compiler for Java(TM) v20150120-1634, 3.10.2, Copyright IBM Corp 
> 2000, 2013. All rights reserved.
> incorrect classpath: 
> /var/lib/jenkins/workspace/cassandra-2.2_eclipse-warnings/build/cobertura/classes
> --
> 1. ERROR in 
> /var/lib/jenkins/workspace/cassandra-2.2_eclipse-warnings/src/java/org/apache/cassandra/net/OutboundTcpConnectionPool.java
>  (at line 141)
>   return channel.socket();
>   
> Potential resource leak: 'channel' may not be closed at this location
> --
> 1 problem (1 error)
> {noformat}
> Check latest job on 
> http://cassci.datastax.com/job/cassandra-2.2_eclipse-warnings/ for the most 
> recent artifact





[jira] [Updated] (CASSANDRA-11051) Make LZ4 Compression Level Configurable

2016-02-12 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11051:
-
Reviewer: Marcus Eriksson

> Make LZ4 Compression Level Configurable 
> 
>
> Key: CASSANDRA-11051
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11051
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Michael Kjellman
>Assignee: Michael Kjellman
> Attachments: lz4_2.2.patch
>
>
> We'd like to make the LZ4 Compressor implementation configurable on a per 
> column family basis. Testing has shown a ~4% reduction in file size with the 
> higher compression LZ4 implementation vs the standard compressor we currently 
> use instantiated by the default constructor. The attached patch adds the 
> following optional parameters 'lz4_compressor_type' and 
> 'lz4_high_compressor_level' to the LZ4Compressor. If none of the new optional 
> parameters are specified, the Compressor will use the same defaults Cassandra 
> has always had for LZ4.
> New LZ4Compressor Optional Parameters:
>   * lz4_compressor_type can currently be either 'high' (uses LZ4HCCompressor) 
> or 'fast' (uses LZ4Compressor)
>   * lz4_high_compressor_level can be set between 1 and 17. Not specifying a 
> compressor level while specifying lz4_compressor_type as 'high' will use a 
> default level of 9 (as picked by the LZ4 library as the "default").
> Currently, we use the default LZ4 compressor constructor. This change would 
> just expose the level (and implementation to use) to the user via the schema. 
> There are many potential cases where users may find that the tradeoff in 
> additional CPU and memory usage is worth the on-disk space savings.





[jira] [Updated] (CASSANDRA-10625) Problem of year 10000: Dates too far in the future can be saved but not read back using cqlsh

2016-02-12 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-10625:
-
Reviewer: Paulo Motta

> Problem of year 10000: Dates too far in the future can be saved but not read 
> back using cqlsh
> -
>
> Key: CASSANDRA-10625
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10625
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Piotr Kołaczkowski
>Assignee: Adam Holmberg
>Priority: Minor
> Fix For: 3.0.x, 3.x
>
>
> {noformat}
> cqlsh> insert into test.timestamp_test (pkey, ts) VALUES (1, '9999-12-31 
> 23:59:59+0000');
> cqlsh> select * from test.timestamp_test ;
>  pkey | ts
> --+--
> 1 | 9999-12-31 23:59:59+0000
> (1 rows)
> cqlsh> insert into test.timestamp_test (pkey, ts) VALUES (1, '10000-01-01 
> 00:00:01+0000');
> cqlsh> select * from test.timestamp_test ;
> Traceback (most recent call last):
>   File "bin/../resources/cassandra/bin/cqlsh", line 1112, in 
> perform_simple_statement
> rows = self.session.execute(statement, trace=self.tracing_enabled)
>   File 
> "/home/pkolaczk/Projekty/DataStax/bdp/resources/cassandra/bin/../zipfiles/cassandra-driver-internal-only-2.7.2.zip/cassandra-driver-2.7.2/cassandra/cluster.py",
>  line 1602, in execute
> result = future.result()
>   File 
> "/home/pkolaczk/Projekty/DataStax/bdp/resources/cassandra/bin/../zipfiles/cassandra-driver-internal-only-2.7.2.zip/cassandra-driver-2.7.2/cassandra/cluster.py",
>  line 3347, in result
> raise self._final_exception
> OverflowError: date value out of range
> {noformat}
> The connection is broken afterwards:
> {noformat}
> cqlsh> insert into test.timestamp_test (pkey, ts) VALUES (1, '10000-01-01 
> 00:00:01+0000');
> NoHostAvailable: ('Unable to complete the operation against any hosts', 
> {: ConnectionShutdown('Connection to 127.0.0.1 is 
> defunct',)})
> {noformat}
> Expected behaviors (one of):
> - don't allow to insert dates larger than 9999-12-31 and document the 
> limitation
> - handle all dates up to Java Date(MAX_LONG) for writing and reading
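The OverflowError in the quoted traceback comes from Python's datetime range, which stops at year 9999 (datetime.MAXYEAR). A minimal stdlib reproduction of the driver-side failure, assuming 253402300799 as the epoch-seconds value of 9999-12-31T23:59:59Z:

```python
import datetime

EPOCH = datetime.datetime(1970, 1, 1)

def timestamp_to_datetime(seconds):
    # Roughly what the Python driver does when deserializing a CQL timestamp
    return EPOCH + datetime.timedelta(seconds=seconds)

print(timestamp_to_datetime(253402300799))   # 9999-12-31 23:59:59, still representable
try:
    timestamp_to_datetime(253402300800)      # first second of year 10000
except OverflowError as exc:
    print(exc)                               # date value out of range
```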





[jira] [Issue Comment Deleted] (CASSANDRA-11152) SOURCE command in CQLSH 3.2 requires that "use keyspace" is in the cql file that you are sourcing

2016-02-12 Thread Francesco Animali (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Animali updated CASSANDRA-11152:
--
Comment: was deleted

(was: hi Sequoyha,

not sure about the comment you put on the jira.  Let me know if I missed
anything. Francesco

On Wed, Feb 10, 2016 at 6:57 PM, sequoyha pelletier (JIRA) 

)

> SOURCE command in CQLSH 3.2 requires that "use keyspace" is in the cql file 
> that you are sourcing
> -
>
> Key: CASSANDRA-11152
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11152
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: CQLSH 3.2.1
>Reporter: Francesco Animali
>
> a difference in behaviour between SOURCE command in CQLSH 3.1 and 3.2. 
> In CQLSH 3.1 SOURCE will NOT require "use keyspace" in the cql file that you 
> execute: the "keyspace" directive in the cqlshrc file will work and the cql 
> file will be executed.
> In CQLSH 3.2.1, SOURCE command requires that "use keyspace" is in the cql 
> file that you are sourcing, otherwise it throws this error:
> "No keyspace has been specified. USE a keyspace, or explicitly specify 
> keyspace.tablename". 
> The "keyspace" directive in cqlshrc is overridden by source command.
> steps to reproduce:
> create a file called select.cql in your home directory:
> {noformat}
> echo "CONSISTENCY ONE;" > select.cql
> echo "select * from tab;" >> select.cql
> {noformat}
> in cqlsh:
> {noformat}
> create KEYSPACE kspace WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 1};
> create TABLE tab ( id int primary key);
> insert into tab (id) VALUES ( 1);
> {noformat}
> Add this to cqlshrc:
> {noformat}
> [authentication]
> keyspace = kspace
> {noformat}
> Then exit cqlsh and rerun cqlsh using the cqlshrc just modified.
> Note that you are in keyspace "kspace".
> execute:
> {noformat}
> source 'select.cql' 
> {noformat}
> this will have different behaviour in CQLSH 3.2 and 3.1





[jira] [Comment Edited] (CASSANDRA-10397) CQLSH not displaying correct timezone

2016-02-12 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15142812#comment-15142812
 ] 

Stefan Podkowinski edited comment on CASSANDRA-10397 at 2/12/16 2:11 PM:
-

I've now created a patch for this that would use pytz and tzlocal for automatic 
timezone conversion. The code will fallback to UTC in case the modules could 
not be found:

[CASSANDRA-10397-2.2|https://github.com/apache/cassandra/compare/cassandra-2.2...spodkowinski:CASSANDRA-10397-2.2]
 

I'm still wondering about the best way to handle absence of the libs. Users 
should at least get a warning in case pytz is missing and a TZ environment 
value set.

[~pauloricardomg], [~aploetz], any thoughts on that?
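For reference, on Python 3 the conversion can be done with the standard library alone; the actual patch targets the Python 2 cqlsh, which is why pytz and tzlocal are needed there. A minimal sketch of the intended behaviour:

```python
from datetime import datetime, timezone

# Cassandra returns timestamps in UTC; cqlsh should display them with the
# local offset applied (e.g. 13:00:32 UTC shown as 21:00:32 at +0800).
utc_value = datetime(2015, 9, 25, 13, 0, 32, tzinfo=timezone.utc)
local_value = utc_value.astimezone()  # no argument: use the system local zone

print(local_value.isoformat())  # same instant, rendered in local wall-clock time
```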


was (Author: spo...@gmail.com):
I've now created a patch for this that would use pytz and tzlocal for automatic 
timezone conversion. The code will fallback to UTC in case the modules could 
not be found:

[CASSANDRA-1-2.2|https://github.com/apache/cassandra/compare/cassandra-2.2...spodkowinski:CASSANDRA-1-2.2]
 

I'm still wondering about the best way to handle absence of the libs. Users 
should at least get a warning in case pytz is missing and a TZ environment 
value set.

[~pauloricardomg], [~aploetz], any thoughts on that?

> CQLSH not displaying correct timezone
> -
>
> Key: CASSANDRA-10397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10397
> Project: Cassandra
>  Issue Type: Bug
> Environment: Ubuntu 14.04 LTS
>Reporter: Suleman Rai
>Assignee: Stefan Podkowinski
>  Labels: cqlsh
>
> CQLSH is not adding the timezone offset to the timestamp after it has been 
> inserted into a table.
> create table test(id int PRIMARY KEY, time timestamp);
> INSERT INTO test(id,time) values (1,dateof(now()));
> select *from test;
> id | time
> +-
>   1 | 2015-09-25 13:00:32
> It is just displaying the default UTC timestamp without adding the timezone 
> offset. It should be 2015-09-25 21:00:32 in my case as my timezone offset is 
> +0800.





[jira] [Commented] (CASSANDRA-10736) TestTopology.simple_decommission_test failing due to assertion triggered by SizeEstimatesRecorder

2016-02-12 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15144655#comment-15144655
 ] 

Branimir Lambov commented on CASSANDRA-10736:
-

LGTM.

> TestTopology.simple_decommission_test failing due to assertion triggered by 
> SizeEstimatesRecorder
> -
>
> Key: CASSANDRA-10736
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10736
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Joel Knighton
>Assignee: Joel Knighton
>Priority: Minor
>
> Example 
> [here|http://cassci.datastax.com/job/cassandra-2.2_dtest/369/testReport/junit/topology_test/TestTopology/simple_decommission_test/].
> {{SizeEstimatesRecorder}} can race with decommission when it tries to get the 
> primary ranges for a node.
> This is because {{getPredecessor}} in {{TokenMetadata}} hits an assertion if 
> the token is no longer in {{TokenMetadata}}.
> This no longer occurs in 3.0 because this assertion has been removed and 
> replaced with different data.
> In both cases, the relationship between the set of tokens in 
> {{getPrimaryRangesFor}} (passed in as an argument) and the set of tokens used 
> in calls by {{getPredecessor}} (the system ones) should be investigated.





[jira] [Commented] (CASSANDRA-9779) Append-only optimization

2016-02-12 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15144665#comment-15144665
 ] 

Ariel Weisberg commented on CASSANDRA-9779:
---

For tables that are marked append only it would be nice to have some best 
effort warnings or feedback if updates do occur. Checking the memtable when 
writing might be cheap/free and during compaction we can warn and log a 
conflict if an update is encountered. We could do the same thing on read.

This would give people with a buggy application (or a bug in Cassandra) rapid 
feedback rather than silently giving them inconsistent results.

> Append-only optimization
> 
>
> Key: CASSANDRA-9779
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9779
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Jonathan Ellis
> Fix For: 3.x
>
>
> Many common workloads are append-only: that is, they insert new rows but do 
> not update existing ones.  However, Cassandra has no way to infer this and so 
> it must treat all tables as if they may experience updates in the future.
> If we added syntax to tell Cassandra about this ({{WITH INSERTS ONLY}} for 
> instance) then we could do a number of optimizations:
> - Compaction would only need to worry about defragmenting partitions, not 
> rows.  We could default to DTCS or similar.
> - CollationController could stop scanning sstables as soon as it finds a 
> matching row
> - Most importantly, materialized views wouldn't need to worry about deleting 
> prior values, which would eliminate the majority of the MV overhead





[jira] [Updated] (CASSANDRA-10397) CQLSH not displaying correct timezone

2016-02-12 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-10397:
-
Assignee: Stefan Podkowinski

> CQLSH not displaying correct timezone
> -
>
> Key: CASSANDRA-10397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10397
> Project: Cassandra
>  Issue Type: Bug
> Environment: Ubuntu 14.04 LTS
>Reporter: Suleman Rai
>Assignee: Stefan Podkowinski
>  Labels: cqlsh
>
> CQLSH is not adding the timezone offset to the timestamp after it has been 
> inserted into a table.
> create table test(id int PRIMARY KEY, time timestamp);
> INSERT INTO test(id,time) values (1,dateof(now()));
> select *from test;
> id | time
> +-
>   1 | 2015-09-25 13:00:32
> It is just displaying the default UTC timestamp without adding the timezone 
> offset. It should be 2015-09-25 21:00:32 in my case as my timezone offset is 
> +0800.





[jira] [Updated] (CASSANDRA-10397) CQLSH not displaying correct timezone

2016-02-12 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-10397:
-
Reviewer: Paulo Motta

> CQLSH not displaying correct timezone
> -
>
> Key: CASSANDRA-10397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10397
> Project: Cassandra
>  Issue Type: Bug
> Environment: Ubuntu 14.04 LTS
>Reporter: Suleman Rai
>Assignee: Stefan Podkowinski
>  Labels: cqlsh
>
> CQLSH is not adding the timezone offset to the timestamp after it has been 
> inserted into a table.
> create table test(id int PRIMARY KEY, time timestamp);
> INSERT INTO test(id,time) values (1,dateof(now()));
> select *from test;
> id | time
> +-
>   1 | 2015-09-25 13:00:32
> It is just displaying the default UTC timestamp without adding the timezone 
> offset. It should be 2015-09-25 21:00:32 in my case as my timezone offset is 
> +0800.





[jira] [Updated] (CASSANDRA-10120) When specifying both num_tokens and initial_token, error out if the numbers don't match

2016-02-12 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-10120:
-
Reviewer: Sylvain Lebresne

> When specifying both num_tokens and initial_token, error out if the numbers 
> don't match
> ---
>
> Key: CASSANDRA-10120
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10120
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jeremy Hanna
>Assignee: Roman Pogribnyi
>Priority: Minor
>  Labels: lhf
> Fix For: 3.x
>
> Attachments: 10120-3.0.txt
>
>
> Right now if both initial_token and num_tokens are specified, initial_token 
> is used.  As something to not trip people up, it would be nice to do a basic 
> error check.  If both are specified, we should make sure they match.  That 
> is, if they have one initial token and num_tokens of 256, it should error out 
> on startup and alert the user of the configuration.  It's better to fail fast 
> than bootstrap with only one token.





[jira] [Commented] (CASSANDRA-10120) When specifying both num_tokens and initial_token, error out if the numbers don't match

2016-02-12 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15144602#comment-15144602
 ] 

Sylvain Lebresne commented on CASSANDRA-10120:
--

Thanks for the patch, but we should probably generalize it a bit to validate 
that {{size(initial_token) == num_tokens}}. Mind updating your patch to do so?
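The generalized check, size(initial_token) == num_tokens, could look roughly like this (hypothetical names; the real validation would live in Cassandra's Java config-loading code):

```python
def validate_token_config(num_tokens, initial_token):
    """initial_token is the comma-separated value from cassandra.yaml, or None."""
    if not initial_token:
        return  # nothing to cross-check
    tokens = [t.strip() for t in initial_token.split(",") if t.strip()]
    if num_tokens is not None and len(tokens) != num_tokens:
        # Fail fast at startup instead of bootstrapping with the wrong token count
        raise ValueError(
            "initial_token lists %d tokens but num_tokens is %d"
            % (len(tokens), num_tokens))
```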

> When specifying both num_tokens and initial_token, error out if the numbers 
> don't match
> ---
>
> Key: CASSANDRA-10120
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10120
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jeremy Hanna
>Assignee: Roman Pogribnyi
>Priority: Minor
>  Labels: lhf
> Fix For: 3.x
>
> Attachments: 10120-3.0.txt
>
>
> Right now if both initial_token and num_tokens are specified, initial_token 
> is used.  As something to not trip people up, it would be nice to do a basic 
> error check.  If both are specified, we should make sure they match.  That 
> is, if they have one initial token and num_tokens of 256, it should error out 
> on startup and alert the user of the configuration.  It's better to fail fast 
> than bootstrap with only one token.





[jira] [Updated] (CASSANDRA-10625) Problem of year 10000: Dates too far in the future can be saved but not read back using cqlsh

2016-02-12 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-10625:
-
Assignee: Adam Holmberg

> Problem of year 10000: Dates too far in the future can be saved but not read 
> back using cqlsh
> -
>
> Key: CASSANDRA-10625
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10625
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Piotr Kołaczkowski
>Assignee: Adam Holmberg
>Priority: Minor
> Fix For: 3.0.x, 3.x
>
>
> {noformat}
> cqlsh> insert into test.timestamp_test (pkey, ts) VALUES (1, '9999-12-31 
> 23:59:59+0000');
> cqlsh> select * from test.timestamp_test ;
>  pkey | ts
> --+--
> 1 | 9999-12-31 23:59:59+0000
> (1 rows)
> cqlsh> insert into test.timestamp_test (pkey, ts) VALUES (1, '10000-01-01 
> 00:00:01+0000');
> cqlsh> select * from test.timestamp_test ;
> Traceback (most recent call last):
>   File "bin/../resources/cassandra/bin/cqlsh", line 1112, in 
> perform_simple_statement
> rows = self.session.execute(statement, trace=self.tracing_enabled)
>   File 
> "/home/pkolaczk/Projekty/DataStax/bdp/resources/cassandra/bin/../zipfiles/cassandra-driver-internal-only-2.7.2.zip/cassandra-driver-2.7.2/cassandra/cluster.py",
>  line 1602, in execute
> result = future.result()
>   File 
> "/home/pkolaczk/Projekty/DataStax/bdp/resources/cassandra/bin/../zipfiles/cassandra-driver-internal-only-2.7.2.zip/cassandra-driver-2.7.2/cassandra/cluster.py",
>  line 3347, in result
> raise self._final_exception
> OverflowError: date value out of range
> {noformat}
> The connection is broken afterwards:
> {noformat}
> cqlsh> insert into test.timestamp_test (pkey, ts) VALUES (1, '10000-01-01 
> 00:00:01+0000');
> NoHostAvailable: ('Unable to complete the operation against any hosts', 
> {: ConnectionShutdown('Connection to 127.0.0.1 is 
> defunct',)})
> {noformat}
> Expected behaviors (one of):
> - don't allow inserting dates later than 9999-12-31 and document the 
> limitation
> - handle all dates up to Java Date(MAX_LONG) for writing and reading
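For context on the second expected behavior: Cassandra timestamps are plain millis-since-epoch longs, so the server side has no year-10000 limit; the read-back failure is in the Python-based cqlsh client, whose datetime type stops at year 9999. A stand-alone sketch (not Cassandra code) of the Java-side headroom:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

// java.util.Date wraps a millis-since-epoch long, so dates well past
// year 9999 are representable on the server side.
public class YearTenThousand {
    public static void main(String[] args) throws Exception {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ssZ");
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));

        // One second past the Python datetime limit: parses fine in Java.
        Date pastTheLimit = fmt.parse("10000-01-01 00:00:01+0000");
        System.out.println(pastTheLimit.getTime()); // 253402300801000, a valid long

        // Date(Long.MAX_VALUE) lands roughly in year 292278994.
        System.out.println(fmt.format(new Date(Long.MAX_VALUE)));
    }
}
```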





[jira] [Updated] (CASSANDRA-10120) When specifying both num_tokens and initial_token, error out if the numbers don't match

2016-02-12 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-10120:
-
Assignee: Roman Pogribnyi

> When specifying both num_tokens and initial_token, error out if the numbers 
> don't match
> ---
>
> Key: CASSANDRA-10120
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10120
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jeremy Hanna
>Assignee: Roman Pogribnyi
>Priority: Minor
>  Labels: lhf
> Fix For: 3.x
>
> Attachments: 10120-3.0.txt
>
>
> Right now if both initial_token and num_tokens are specified, initial_token 
> is used.  As something to not trip people up, it would be nice to do a basic 
> error check.  If both are specified, we should make sure they match.  That 
> is, if they have one initial token and num_tokens of 256, it should error out 
> on startup and alert the user of the configuration.  It's better to fail fast 
> than bootstrap with only one token.
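The proposed fail-fast check is a simple cross-validation at startup. A minimal stand-alone sketch of the idea, assuming hypothetical names (TokenConfigCheck and its parameters are illustrative, not the actual Cassandra code):

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the proposed startup validation: if both initial_token and
// num_tokens are configured, the number of listed tokens must match
// num_tokens, otherwise the node should refuse to start.
public class TokenConfigCheck {
    static void validate(String initialToken, Integer numTokens) {
        if (initialToken == null || numTokens == null)
            return; // only one option set: nothing to cross-check
        List<String> tokens = Arrays.asList(initialToken.split(","));
        if (tokens.size() != numTokens)
            throw new IllegalStateException(
                "The number of initial tokens (" + tokens.size() +
                ") must match num_tokens (" + numTokens + ")");
    }

    public static void main(String[] args) {
        validate("-9223372036854775808", 1);       // consistent: passes
        try {
            validate("-9223372036854775808", 256); // mismatch: fail fast
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```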





[jira] [Updated] (CASSANDRA-10736) TestTopology.simple_decommission_test failing due to assertion triggered by SizeEstimatesRecorder

2016-02-12 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-10736:
-
Assignee: Joel Knighton

> TestTopology.simple_decommission_test failing due to assertion triggered by 
> SizeEstimatesRecorder
> -
>
> Key: CASSANDRA-10736
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10736
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Joel Knighton
>Assignee: Joel Knighton
>Priority: Minor
>
> Example 
> [here|http://cassci.datastax.com/job/cassandra-2.2_dtest/369/testReport/junit/topology_test/TestTopology/simple_decommission_test/].
> {{SizeEstimatesRecorder}} can race with decommission when it tries to get the 
> primary ranges for a node.
> This is because {{getPredecessor}} in {{TokenMetadata}} hits an assertion if 
> the token is no longer in {{TokenMetadata}}.
> This no longer occurs in 3.0 because this assertion has been removed and 
> replaced with different data.
> In both cases, the relationship between the set of tokens in 
> {{getPrimaryRangesFor}} (passed in as an argument) and the set of tokens used 
> in calls by {{getPredecessor}} (the system ones) should be investigated.
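The race described above is a check-then-act problem against live, concurrently mutated token metadata. A generic stand-alone sketch of the snapshot pattern that avoids it (not the actual Cassandra code, which uses TokenMetadata.cloneOnlyTokenMap):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Checking membership on a live map and then looking up details can race
// with a concurrent removal (e.g. a decommission). Taking one immutable
// snapshot and using it for both steps gives a consistent view.
public class SnapshotRead {
    static final Map<String, Long> liveTokens = new ConcurrentHashMap<>();

    static long primaryToken(String node) {
        // Snapshot once; the membership check and the lookup below see
        // the same state even if liveTokens changes underneath us.
        Map<String, Long> snapshot = Map.copyOf(liveTokens);
        if (!snapshot.containsKey(node))
            throw new IllegalStateException(node + " is not a member");
        return snapshot.get(node);
    }

    public static void main(String[] args) {
        liveTokens.put("127.0.0.1", 42L);
        System.out.println(primaryToken("127.0.0.1")); // 42
    }
}
```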





[jira] [Updated] (CASSANDRA-10736) TestTopology.simple_decommission_test failing due to assertion triggered by SizeEstimatesRecorder

2016-02-12 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-10736:
-
Reviewer: Branimir Lambov

> TestTopology.simple_decommission_test failing due to assertion triggered by 
> SizeEstimatesRecorder
> -
>
> Key: CASSANDRA-10736
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10736
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Joel Knighton
>Assignee: Joel Knighton
>Priority: Minor
>
> Example 
> [here|http://cassci.datastax.com/job/cassandra-2.2_dtest/369/testReport/junit/topology_test/TestTopology/simple_decommission_test/].
> {{SizeEstimatesRecorder}} can race with decommission when it tries to get the 
> primary ranges for a node.
> This is because {{getPredecessor}} in {{TokenMetadata}} hits an assertion if 
> the token is no longer in {{TokenMetadata}}.
> This no longer occurs in 3.0 because this assertion has been removed and 
> replaced with different data.
> In both cases, the relationship between the set of tokens in 
> {{getPrimaryRangesFor}} (passed in as an argument) and the set of tokens used 
> in calls by {{getPredecessor}} (the system ones) should be investigated.





[jira] [Commented] (CASSANDRA-10818) Evaluate exposure of DataType instances from JavaUDF class

2016-02-12 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15144587#comment-15144587
 ] 

Sylvain Lebresne commented on CASSANDRA-10818:
--

Not sure how I feel about silently injecting bindings; that sounds a bit 
hacky to me. Not that I have anything much cleaner to suggest.

But if we're gonna silently inject something, I'd have a preference for 
injecting a single "environment" object that would be used for this but could 
be reused later if we realize there is more such information that could be 
useful inside UDF bodies.  So for instance, we'd just expose some {{getEnv()}} 
method usable inside UDF which would return an {{Environment}} object looking 
something like:
{noformat}
interface Environment
{
UDTValue newArgUDTValue(String argName);
UDTValue newReturnUDTValue();

TupleValue newArgTupleValue(String argName);
TupleValue newReturnTupleValue();
}
{noformat}
and to which we could add more functions along the way.

I still don't love this solution, but it feels a bit less hacky than exposing a 
bunch of generated names and has some future-proofing advantage. Also feels a 
bit easier to document (because it's slightly less magic).
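To illustrate the shape of this proposal, a stand-alone Java sketch. Everything here is hypothetical: the getEnv() injection does not exist yet, and UDTValue/TupleValue are minimal stand-ins for the driver types; only the Environment interface is copied from the comment above:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-ins for com.datastax.driver.core types (illustrative only).
interface UDTValue { UDTValue setInt(String field, int v); }
interface TupleValue {}

// The Environment interface as proposed above.
interface Environment
{
    UDTValue newArgUDTValue(String argName);
    UDTValue newReturnUDTValue();
    TupleValue newArgTupleValue(String argName);
    TupleValue newReturnTupleValue();
}

public class UdfEnvSketch
{
    // What a UDF body could look like: build the return UDT through the
    // injected environment, with no DataType needed in scope.
    static UDTValue makePoint(Environment env, int x, int y)
    {
        return env.newReturnUDTValue().setInt("x", x).setInt("y", y);
    }

    // Minimal map-backed stub so the sketch runs stand-alone.
    static class MapUDT implements UDTValue
    {
        final Map<String, Integer> fields = new HashMap<>();
        public UDTValue setInt(String f, int v) { fields.put(f, v); return this; }
    }

    static final Environment STUB = new Environment()
    {
        public UDTValue newArgUDTValue(String argName) { return new MapUDT(); }
        public UDTValue newReturnUDTValue() { return new MapUDT(); }
        public TupleValue newArgTupleValue(String argName) { return null; }
        public TupleValue newReturnTupleValue() { return null; }
    };

    public static void main(String[] args)
    {
        MapUDT p = (MapUDT) makePoint(STUB, 1, 2);
        System.out.println(p.fields);
    }
}
```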


> Evaluate exposure of DataType instances from JavaUDF class
> --
>
> Key: CASSANDRA-10818
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10818
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 3.x
>
>
> Currently UDF implementations cannot create new UDT instances.
> There's no way to create a new UDT instance without having the 
> {{com.datastax.driver.core.DataType}} to be able to call 
> {{com.datastax.driver.core.UserType.newValue()}}.
> From a quick look into the related code in {{JavaUDF}}, {{DataType}} and 
> {{UserType}} classes it looks fine to expose information about return and 
> argument types via {{JavaUDF}}.
> Have to find some solution for script UDFs - but feels doable, too.





[jira] [Commented] (CASSANDRA-10202) simplify CommitLogSegmentManager

2016-02-12 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15144597#comment-15144597
 ] 

Sylvain Lebresne commented on CASSANDRA-10202:
--

[~blambov] Are you good with the modifications suggested by [~benedict] above? 
If so, would you mind rebasing and running some CI?

> simplify CommitLogSegmentManager
> 
>
> Key: CASSANDRA-10202
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10202
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Jonathan Ellis
>Assignee: Branimir Lambov
>Priority: Minor
>
> Now that we only keep one active segment around we can simplify this from the 
> old recycling design.





[jira] [Updated] (CASSANDRA-9714) sstableloader appears to use the cassandra.yaml outgoing stream throttle

2016-02-12 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-9714:

Reviewer: Marcus Eriksson

> sstableloader appears to use the cassandra.yaml outgoing stream throttle
> 
>
> Key: CASSANDRA-9714
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9714
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Jeremy Hanna
>Assignee: Yuki Morishita
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> When trying to use the sstableloader, we found (through the metrics in 
> opscenter) that the stream throughput was constant at about 24MB/s.  We 
> didn't run it with the --throttle option so according to the help output and 
> the BulkLoader code it should be unthrottled.  However when it was 
> unthrottled in the cassandra.yaml in the loader's classpath, it got up to the 
> low hundreds of MB/s.  It sounds like when starting up it takes the 
> cassandra.yaml attributes and overrides the default throttle setting of the 
> loader.





[jira] [Updated] (CASSANDRA-9779) Append-only optimization

2016-02-12 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-9779:
-
Flagged: Impediment

> Append-only optimization
> 
>
> Key: CASSANDRA-9779
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9779
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Jonathan Ellis
> Fix For: 3.x
>
>
> Many common workloads are append-only: that is, they insert new rows but do 
> not update existing ones.  However, Cassandra has no way to infer this and so 
> it must treat all tables as if they may experience updates in the future.
> If we added syntax to tell Cassandra about this ({{WITH INSERTS ONLY}} for 
> instance) then we could do a number of optimizations:
> - Compaction would only need to worry about defragmenting partitions, not 
> rows.  We could default to DTCS or similar.
> - CollationController could stop scanning sstables as soon as it finds a 
> matching row
> - Most importantly, materialized views wouldn't need to worry about deleting 
> prior values, which would eliminate the majority of the MV overhead





[jira] [Updated] (CASSANDRA-9779) Append-only optimization

2016-02-12 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-9779:
-
Flagged:   (was: Impediment)

> Append-only optimization
> 
>
> Key: CASSANDRA-9779
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9779
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Jonathan Ellis
> Fix For: 3.x
>
>
> Many common workloads are append-only: that is, they insert new rows but do 
> not update existing ones.  However, Cassandra has no way to infer this and so 
> it must treat all tables as if they may experience updates in the future.
> If we added syntax to tell Cassandra about this ({{WITH INSERTS ONLY}} for 
> instance) then we could do a number of optimizations:
> - Compaction would only need to worry about defragmenting partitions, not 
> rows.  We could default to DTCS or similar.
> - CollationController could stop scanning sstables as soon as it finds a 
> matching row
> - Most importantly, materialized views wouldn't need to worry about deleting 
> prior values, which would eliminate the majority of the MV overhead





[2/6] cassandra git commit: Use cloned TokenMetadata in size estimates to avoid race against membership check

2016-02-12 Thread aleksey
Use cloned TokenMetadata in size estimates to avoid race against membership 
check

patch by Joel Knighton; reviewed by Branimir Lambov for CASSANDRA-10736


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1b201e95
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1b201e95
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1b201e95

Branch: refs/heads/cassandra-3.0
Commit: 1b201e959a6f77aeedd2549ed523200021d8c6e6
Parents: d5c83f4
Author: Joel Knighton 
Authored: Tue Dec 29 14:59:57 2015 -0600
Committer: Aleksey Yeschenko 
Committed: Fri Feb 12 17:28:18 2016 +

--
 CHANGES.txt | 5 -
 src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java | 6 --
 2 files changed, 8 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b201e95/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index fa25980..49bc581 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,10 +1,13 @@
 2.2.6
+ * Use cloned TokenMetadata in size estimates to avoid race against membership 
check
+   (CASSANDRA-10736)
  * Always persist upsampled index summaries (CASSANDRA-10512)
  * (cqlsh) Fix inconsistent auto-complete (CASSANDRA-10733)
  * Make SELECT JSON and toJson() threadsafe (CASSANDRA-11048)
  * Fix SELECT on tuple relations for mixed ASC/DESC clustering order 
(CASSANDRA-7281)
  * (cqlsh) Support utf-8/cp65001 encoding on Windows (CASSANDRA-11030)
- * Fix paging on DISTINCT queries repeats result when first row in partition 
changes (CASSANDRA-10010)
+ * Fix paging on DISTINCT queries repeats result when first row in partition 
changes
+   (CASSANDRA-10010)
 Merged from 2.1:
  * Properly release sstable ref when doing offline scrub (CASSANDRA-10697)
  * Improve nodetool status performance for large cluster (CASSANDRA-7238)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b201e95/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
--
diff --git a/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java 
b/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
index c59db4b..2f14fb1 100644
--- a/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
+++ b/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
@@ -26,6 +26,7 @@ import org.slf4j.LoggerFactory;
 import org.apache.cassandra.dht.Range;
 import org.apache.cassandra.dht.Token;
 import org.apache.cassandra.io.sstable.format.SSTableReader;
+import org.apache.cassandra.locator.TokenMetadata;
 import org.apache.cassandra.service.MigrationListener;
 import org.apache.cassandra.service.MigrationManager;
 import org.apache.cassandra.service.StorageService;
@@ -56,7 +57,8 @@ public class SizeEstimatesRecorder extends MigrationListener 
implements Runnable
 
 public void run()
 {
-if 
(!StorageService.instance.getTokenMetadata().isMember(FBUtilities.getBroadcastAddress()))
+TokenMetadata metadata = 
StorageService.instance.getTokenMetadata().cloneOnlyTokenMap();
+if (!metadata.isMember(FBUtilities.getBroadcastAddress()))
 {
 logger.debug("Node is not part of the ring; not recording size 
estimates");
 return;
@@ -66,7 +68,7 @@ public class SizeEstimatesRecorder extends MigrationListener 
implements Runnable
 
 // find primary token ranges for the local node.
        Collection<Token> localTokens = StorageService.instance.getLocalTokens();
-        Collection<Range<Token>> localRanges = StorageService.instance.getTokenMetadata().getPrimaryRangesFor(localTokens);
+        Collection<Range<Token>> localRanges = metadata.getPrimaryRangesFor(localTokens);
 
 for (Keyspace keyspace : Keyspace.nonSystem())
 {



[3/6] cassandra git commit: Use cloned TokenMetadata in size estimates to avoid race against membership check

2016-02-12 Thread aleksey
Use cloned TokenMetadata in size estimates to avoid race against membership 
check

patch by Joel Knighton; reviewed by Branimir Lambov for CASSANDRA-10736


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1b201e95
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1b201e95
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1b201e95

Branch: refs/heads/trunk
Commit: 1b201e959a6f77aeedd2549ed523200021d8c6e6
Parents: d5c83f4
Author: Joel Knighton 
Authored: Tue Dec 29 14:59:57 2015 -0600
Committer: Aleksey Yeschenko 
Committed: Fri Feb 12 17:28:18 2016 +

--
 CHANGES.txt | 5 -
 src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java | 6 --
 2 files changed, 8 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b201e95/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index fa25980..49bc581 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,10 +1,13 @@
 2.2.6
+ * Use cloned TokenMetadata in size estimates to avoid race against membership 
check
+   (CASSANDRA-10736)
  * Always persist upsampled index summaries (CASSANDRA-10512)
  * (cqlsh) Fix inconsistent auto-complete (CASSANDRA-10733)
  * Make SELECT JSON and toJson() threadsafe (CASSANDRA-11048)
  * Fix SELECT on tuple relations for mixed ASC/DESC clustering order 
(CASSANDRA-7281)
  * (cqlsh) Support utf-8/cp65001 encoding on Windows (CASSANDRA-11030)
- * Fix paging on DISTINCT queries repeats result when first row in partition 
changes (CASSANDRA-10010)
+ * Fix paging on DISTINCT queries repeats result when first row in partition 
changes
+   (CASSANDRA-10010)
 Merged from 2.1:
  * Properly release sstable ref when doing offline scrub (CASSANDRA-10697)
  * Improve nodetool status performance for large cluster (CASSANDRA-7238)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b201e95/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
--
diff --git a/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java 
b/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
index c59db4b..2f14fb1 100644
--- a/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
+++ b/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
@@ -26,6 +26,7 @@ import org.slf4j.LoggerFactory;
 import org.apache.cassandra.dht.Range;
 import org.apache.cassandra.dht.Token;
 import org.apache.cassandra.io.sstable.format.SSTableReader;
+import org.apache.cassandra.locator.TokenMetadata;
 import org.apache.cassandra.service.MigrationListener;
 import org.apache.cassandra.service.MigrationManager;
 import org.apache.cassandra.service.StorageService;
@@ -56,7 +57,8 @@ public class SizeEstimatesRecorder extends MigrationListener 
implements Runnable
 
 public void run()
 {
-if 
(!StorageService.instance.getTokenMetadata().isMember(FBUtilities.getBroadcastAddress()))
+TokenMetadata metadata = 
StorageService.instance.getTokenMetadata().cloneOnlyTokenMap();
+if (!metadata.isMember(FBUtilities.getBroadcastAddress()))
 {
 logger.debug("Node is not part of the ring; not recording size 
estimates");
 return;
@@ -66,7 +68,7 @@ public class SizeEstimatesRecorder extends MigrationListener 
implements Runnable
 
 // find primary token ranges for the local node.
        Collection<Token> localTokens = StorageService.instance.getLocalTokens();
-        Collection<Range<Token>> localRanges = StorageService.instance.getTokenMetadata().getPrimaryRangesFor(localTokens);
+        Collection<Range<Token>> localRanges = metadata.getPrimaryRangesFor(localTokens);
 
 for (Keyspace keyspace : Keyspace.nonSystem())
 {



[1/6] cassandra git commit: Use cloned TokenMetadata in size estimates to avoid race against membership check

2016-02-12 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 d5c83f491 -> 1b201e959
  refs/heads/cassandra-3.0 f3b7599e3 -> efbcd15d6
  refs/heads/trunk a800ca898 -> 1944bf507


Use cloned TokenMetadata in size estimates to avoid race against membership 
check

patch by Joel Knighton; reviewed by Branimir Lambov for CASSANDRA-10736


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1b201e95
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1b201e95
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1b201e95

Branch: refs/heads/cassandra-2.2
Commit: 1b201e959a6f77aeedd2549ed523200021d8c6e6
Parents: d5c83f4
Author: Joel Knighton 
Authored: Tue Dec 29 14:59:57 2015 -0600
Committer: Aleksey Yeschenko 
Committed: Fri Feb 12 17:28:18 2016 +

--
 CHANGES.txt | 5 -
 src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java | 6 --
 2 files changed, 8 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b201e95/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index fa25980..49bc581 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,10 +1,13 @@
 2.2.6
+ * Use cloned TokenMetadata in size estimates to avoid race against membership 
check
+   (CASSANDRA-10736)
  * Always persist upsampled index summaries (CASSANDRA-10512)
  * (cqlsh) Fix inconsistent auto-complete (CASSANDRA-10733)
  * Make SELECT JSON and toJson() threadsafe (CASSANDRA-11048)
  * Fix SELECT on tuple relations for mixed ASC/DESC clustering order 
(CASSANDRA-7281)
  * (cqlsh) Support utf-8/cp65001 encoding on Windows (CASSANDRA-11030)
- * Fix paging on DISTINCT queries repeats result when first row in partition 
changes (CASSANDRA-10010)
+ * Fix paging on DISTINCT queries repeats result when first row in partition 
changes
+   (CASSANDRA-10010)
 Merged from 2.1:
  * Properly release sstable ref when doing offline scrub (CASSANDRA-10697)
  * Improve nodetool status performance for large cluster (CASSANDRA-7238)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b201e95/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
--
diff --git a/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java 
b/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
index c59db4b..2f14fb1 100644
--- a/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
+++ b/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
@@ -26,6 +26,7 @@ import org.slf4j.LoggerFactory;
 import org.apache.cassandra.dht.Range;
 import org.apache.cassandra.dht.Token;
 import org.apache.cassandra.io.sstable.format.SSTableReader;
+import org.apache.cassandra.locator.TokenMetadata;
 import org.apache.cassandra.service.MigrationListener;
 import org.apache.cassandra.service.MigrationManager;
 import org.apache.cassandra.service.StorageService;
@@ -56,7 +57,8 @@ public class SizeEstimatesRecorder extends MigrationListener 
implements Runnable
 
 public void run()
 {
-if 
(!StorageService.instance.getTokenMetadata().isMember(FBUtilities.getBroadcastAddress()))
+TokenMetadata metadata = 
StorageService.instance.getTokenMetadata().cloneOnlyTokenMap();
+if (!metadata.isMember(FBUtilities.getBroadcastAddress()))
 {
 logger.debug("Node is not part of the ring; not recording size 
estimates");
 return;
@@ -66,7 +68,7 @@ public class SizeEstimatesRecorder extends MigrationListener 
implements Runnable
 
 // find primary token ranges for the local node.
        Collection<Token> localTokens = StorageService.instance.getLocalTokens();
-        Collection<Range<Token>> localRanges = StorageService.instance.getTokenMetadata().getPrimaryRangesFor(localTokens);
+        Collection<Range<Token>> localRanges = metadata.getPrimaryRangesFor(localTokens);
 
 for (Keyspace keyspace : Keyspace.nonSystem())
 {



[jira] [Commented] (CASSANDRA-8343) Secondary index creation causes moves/bootstraps to fail

2016-02-12 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15144718#comment-15144718
 ] 

Paulo Motta commented on CASSANDRA-8343:


Surprisingly enough I didn't manage to reproduce this issue in 2.1 because the 
{{streaming_socket_timeout}} parameter was not being enforced due to the use of 
a {{ReadableByteChannel}} created via {{socket.getChannel()}}, which never 
times out on reads (see [this 
article|https://technfun.wordpress.com/2009/01/29/networking-in-java-non-blocking-nio-blocking-nio-and-io/]
 for background). The workaround is to create the {{ReadableByteChannel}} via 
{{Channels.newChannel(socket.getInputStream())}} instead, so the socket 
{{SO_TIMEOUT}} is respected.

Even after this fix, the socket {{SO_TIMEOUT}} was never being set on the 
receiving side, so I also set it while attaching the socket on the receiving side.

After the previous fixes, I managed to reproduce this issue on a [bootstrap 
dtest|https://github.com/pauloricardomg/cassandra-dtest/commit/301e332758b3873d2bb61259343375107caf437b]
 by introducing a sleep delay (via a system property) on the 
{{OnCompletionRunnable}} larger than {{streaming_socket_timeout}}.

This problem will probably happen more often on 3.0 because of MVs, since 
they're rebuilt by the receiving node at the end of the stream session.
I think we should keep finishing the stream session only after the secondary 
indexes/MVs are rebuilt, to avoid leaving the node in an inconsistent state in 
case the rebuild fails after the session is completed.

The proposed solution is to introduce a {{KeepAlive}} message and send it to 
the peer every {{streaming_socket_timeout/2}} after reaching the 
{{WAIT_COMPLETE}} state, to ensure the socket stays fresh and does not throw a 
{{SocketTimeoutException}} and fail the stream session.

I initially created a fix for 2.1 (even though it's near EOL, I think 
{{streaming_socket_timeout}} not working is critical enough to be fixed on 
2.1), and after review I will create patches for the other versions.

||2.1||dtest||
|[branch|https://github.com/apache/cassandra/compare/cassandra-2.1...pauloricardomg:2.1-8343]|[branch|https://github.com/riptano/cassandra-dtest/compare/master...pauloricardomg:8343]|
|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-2.1-8343-testall/lastCompletedBuild/testReport/]|
|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-2.1-8343-dtest/lastCompletedBuild/testReport/]|
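The channel pitfall described in the first paragraph is easy to reproduce stand-alone. The sketch below (not Cassandra code) shows that a channel created with Channels.newChannel over the socket's InputStream does honour SO_TIMEOUT, which is the workaround applied here; a channel from socket.getChannel() would block indefinitely instead:

```java
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.ReadableByteChannel;

// SO_TIMEOUT applies to reads that go through the socket's InputStream.
// Wrapping that stream with Channels.newChannel preserves the timeout.
public class SoTimeoutDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0);
             Socket client = new Socket("127.0.0.1", server.getLocalPort());
             Socket accepted = server.accept()) {
            client.setSoTimeout(200); // 200 ms read timeout
            ReadableByteChannel ch = Channels.newChannel(client.getInputStream());
            try {
                ch.read(ByteBuffer.allocate(1)); // nothing is ever written
            } catch (SocketTimeoutException e) {
                System.out.println("timed out as configured");
            }
        }
    }
}
```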

> Secondary index creation causes moves/bootstraps to fail
> 
>
> Key: CASSANDRA-8343
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8343
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Michael Frisch
>Assignee: Paulo Motta
>
> Node moves/bootstraps are failing if the stream timeout is set to a value in 
> which secondary index creation cannot complete.  This happens because at the 
> end of the very last stream the StreamInSession.closeIfFinished() function 
> calls maybeBuildSecondaryIndexes on every column family.  If the stream time 
> + all CF's index creation takes longer than your stream timeout then the 
> socket closes from the sender's side, the receiver of the stream tries to 
> write to said socket because it's not null, an IOException is thrown but not 
> caught in closeIfFinished(), the exception is caught somewhere and not 
> logged, AbstractStreamSession.close() is never called, and the CountDownLatch 
> is never decremented.  This causes the move/bootstrap to continue forever 
> until the node is restarted.
> This problem of stream time + secondary index creation time exists on 
> decommissioning/unbootstrap as well but since it's on the sending side the 
> timeout triggers the onFailure() callback which does decrement the 
> CountDownLatch leading to completion.
> A cursory glance at the 2.0 code leads me to believe this problem would exist 
> there as well.
> Temporary workaround: set a really high/infinite stream timeout.





[jira] [Updated] (CASSANDRA-11141) Alert if firewall is running and/or blocking C* ports

2016-02-12 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11141:
-
Issue Type: Improvement  (was: Bug)

> Alert if firewall is running and/or blocking C* ports
> -
>
> Key: CASSANDRA-11141
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11141
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
>
> We've struggled quite a few times with firewalls blocking C* port on Windows. 
> [~JoshuaMcKenzie] suggested on CASSANDRA-11073:
> bq. It'd be nice if there's a way for us to check whether or not firewalls 
> are running on Windows and fire an alert if found (or if C* ports are blocked 
> in built-in firewall, for instance).





[jira] [Commented] (CASSANDRA-10445) Cassandra-stress throws max frame size error when SSL certification is enabled

2016-02-12 Thread Cornel Foltea (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15144766#comment-15144766
 ] 

Cornel Foltea commented on CASSANDRA-10445:
---

https://docs.datastax.com/en/cassandra/2.1/cassandra/configuration/configCassandra_yaml_r.html?scroll=reference_ds_qfg_n1r_1k__thrift_framed_transport_size_in_mb
The default is 15728640 bytes = 15 MB.
Bump thrift_framed_transport_size_in_mb up in cassandra.yaml, although ~336 MB 
is a bit too much.
Also have a look at 
https://docs.datastax.com/en/cassandra/2.1/cassandra/tools/toolsCStress_t.html
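For reference, the relevant cassandra.yaml setting with its default shown. Raising it is only a workaround; a request of ~336 MB is unlikely to be legitimate, so the root cause (the SSL mismatch reported in this ticket) still needs fixing:

```yaml
# cassandra.yaml -- Thrift frame size cap, in megabytes.
# 15 MB (15728640 bytes) is the default.
thrift_framed_transport_size_in_mb: 15
```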

> Cassandra-stress throws max frame size error when SSL certification is enabled
> --
>
> Key: CASSANDRA-10445
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10445
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sam Goldberg
>  Labels: stress
> Fix For: 2.1.x
>
>
> Running cassandra-stress when SSL is enabled gives the following error and 
> does not finish executing:
> {quote}
> cassandra-stress write n=100
> Exception in thread "main" java.lang.RuntimeException: 
> org.apache.thrift.transport.TTransportException: Frame size (352518912) 
> larger than max length (15728640)!
> at 
> org.apache.cassandra.stress.settings.StressSettings.getRawThriftClient(StressSettings.java:144)
> at 
> org.apache.cassandra.stress.settings.StressSettings.getRawThriftClient(StressSettings.java:110)
> at 
> org.apache.cassandra.stress.settings.SettingsSchema.createKeySpacesThrift(SettingsSchema.java:111)
> at 
> org.apache.cassandra.stress.settings.SettingsSchema.createKeySpaces(SettingsSchema.java:59)
> at 
> org.apache.cassandra.stress.settings.StressSettings.maybeCreateKeyspaces(StressSettings.java:205)
> at org.apache.cassandra.stress.StressAction.run(StressAction.java:55)
> at org.apache.cassandra.stress.Stress.main(Stress.java:109)
> {quote}
> I was able to reproduce this issue consistently via the following steps:
> 1) Spin up 3 node cassandra cluster running 2.1.8
> 2) Perform cassandra-stress write n=100
> 3) Everything works!
> 4) Generate keystore and truststore for each node in the cluster and 
> distribute appropriately 
> 5) Modify cassandra.yaml on each node to enable SSL:
> client_encryption_options:
> enabled: true
> keystore: /
> # require_client_auth: false
> # Set trustore and truststore_password if require_client_auth is true
> truststore:  /
> truststore_password: 
> # More advanced defaults below:
> protocol: ssl
> 6) Restart each node.
> 7) Perform cassandra-stress write n=100
> 8) Get Frame Size error, cassandra-stress fails
> This may be related to CASSANDRA-9325.





[jira] [Updated] (CASSANDRA-8132) Save or stream hints to a safe place in node replacement

2016-02-12 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-8132:

Issue Type: New Feature  (was: Sub-task)
Parent: (was: CASSANDRA-9427)

> Save or stream hints to a safe place in node replacement
> 
>
> Key: CASSANDRA-8132
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8132
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Minh Do
>Assignee: Minh Do
> Fix For: 2.1.x
>
>
> Often, we need to replace a node with a new instance in the cloud environment 
> where all nodes are still alive. To be safe without losing data, we 
> usually make sure all hints are gone before we do this operation.
> Replacement means we just want to shutdown C* process on a node and bring up 
> another instance to take over that node's token.
> However, if a node to be replaced has a lot of stored hints, its 
> HintedHandofManager seems very slow to send the hints to other nodes.  In our 
> case, we tried to replace a node and had to wait for several days before its 
> stored hints are clear out.  As mentioned above, we need all hints on this 
> node to clear out before we can terminate it and replace it by a new 
> instance/machine.
> Since this is not a decommission, I am proposing that we have the same 
> hints-streaming mechanism as in the decommission code.  Furthermore, there 
> needs to be a cmd for NodeTool to trigger this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-9427) Improve Hinted Handoff

2016-02-12 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-9427.
-
   Resolution: Fixed
Fix Version/s: (was: 3.x)

> Improve Hinted Handoff
> --
>
> Key: CASSANDRA-9427
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9427
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Aleksey Yeschenko
>
> There are multiple issues with the way we currently handle hints. Having them 
> saved in a regular Cassandra table, and implementing the queue anti-pattern, 
> is just one of them.
> This ticket will aggregate the planned improvements for 3.X.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8343) Secondary index creation causes moves/bootstraps to fail

2016-02-12 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15144731#comment-15144731
 ] 

Paulo Motta commented on CASSANDRA-8343:


It's also important to note that currently if the secondary index rebuild takes 
longer than {{streaming_socket_timeout_in_ms}} the stream session will fail 
(and not hang as described in this ticket report) due to CASSANDRA-10774.

> Secondary index creation causes moves/bootstraps to fail
> 
>
> Key: CASSANDRA-8343
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8343
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Michael Frisch
>Assignee: Paulo Motta
>
> Node moves/bootstraps are failing if the stream timeout is set to a value in 
> which secondary index creation cannot complete.  This happens because at the 
> end of the very last stream the StreamInSession.closeIfFinished() function 
> calls maybeBuildSecondaryIndexes on every column family.  If the stream time 
> + all CF's index creation takes longer than your stream timeout then the 
> socket closes from the sender's side, the receiver of the stream tries to 
> write to said socket because it's not null, an IOException is thrown but not 
> caught in closeIfFinished(), the exception is caught somewhere and not 
> logged, AbstractStreamSession.close() is never called, and the CountDownLatch 
> is never decremented.  This causes the move/bootstrap to continue forever 
> until the node is restarted.
> This problem of stream time + secondary index creation time exists on 
> decommissioning/unbootstrap as well but since it's on the sending side the 
> timeout triggers the onFailure() callback which does decrement the 
> CountDownLatch leading to completion.
> A cursory glance at the 2.0 code leads me to believe this problem would exist 
> there as well.
> Temporary workaround: set a really high/infinite stream timeout.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11138) cassandra-stress tool - clustering key values not distributed

2016-02-12 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11138:
-
Labels: stress  (was: )

> cassandra-stress tool - clustering key values not distributed
> -
>
> Key: CASSANDRA-11138
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11138
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Cassandra 2.2.4, Centos 6.5, Java 8
>Reporter: Ralf Steppacher
>  Labels: stress
>
> I am trying to get the stress tool to generate random values for three 
> clustering keys. I am trying to simulate collecting events per user id (text, 
> partition key). Events have a session type (text), event type (text), and 
> creation time (timestamp) (clustering keys, in that order). For testing 
> purposes I ended up with the following column spec:
> {noformat}
> columnspec:
> - name: created_at
>   cluster: uniform(10..10)
> - name: event_type
>   size: uniform(5..10)
>   population: uniform(1..30)
>   cluster: uniform(1..30)
> - name: session_type
>   size: fixed(5)
>   population: uniform(1..4)
>   cluster: uniform(1..4)
> - name: user_id
>   size: fixed(15)
>   population: uniform(1..100)
> - name: message
>   size: uniform(10..100)
>   population: uniform(1..100B)
> {noformat}
> My expectation was that this would lead to anywhere between 10 and 1200 rows 
> to be created per partition key. But it seems that exactly 10 rows are being 
> created, with the {{created_at}} timestamp being the only variable that is 
> assigned variable values (per partition key). The {{session_type}} and 
> {{event_type}} variables are assigned fixed values. This is even the case if 
> I set the cluster distribution to uniform(30..30) and uniform(4..4) 
> respectively. With this setting I expected 1200 rows per partition key to be 
> created, as announced when running the stress tool, but it is still 10.
> {noformat}
> [rsteppac@centos bin]$ ./cassandra-stress user 
> profile=../batch_too_large.yaml ops\(insert=1\) -log level=verbose 
> file=~/centos_eventy_patient_session_event_timestamp_insert_only.log -node 
> 10.211.55.8
> …
> Created schema. Sleeping 1s for propagation.
> Generating batches with [1..1] partitions and [1..1] rows (of [1200..1200] 
> total rows in the partitions)
> Improvement over 4 threadCount: 19%
> ...
> {noformat}
> Sample of generated data:
> {noformat}
> cqlsh> select user_id, event_type, session_type, created_at from 
> stresscql.batch_too_large LIMIT 30 ;
> user_id | event_type   | session_type | created_at
> -+--+--+--
>   %\x7f\x03/.d29 08:14:11+
>   %\x7f\x03/.d29 04:04:56+
>   %\x7f\x03/.d29 00:39:23+
>   %\x7f\x03/.d29 19:56:30+
>   %\x7f\x03/.d29 20:46:26+
>   %\x7f\x03/.d29 03:27:17+
>   %\x7f\x03/.d29 23:30:34+
>   %\x7f\x03/.d29 02:41:28+
>   %\x7f\x03/.d29 07:23:48+
>   %\x7f\x03/.d29 23:23:04+
>  N!\x0eUA7^r7d\x06J 17:48:51+
>  N!\x0eUA7^r7d\x06J 06:21:13+
>  N!\x0eUA7^r7d\x06J 03:34:41+
>  N!\x0eUA7^r7d\x06J 05:26:21+
>  N!\x0eUA7^r7d\x06J 01:31:24+
>  N!\x0eUA7^r7d\x06J 14:22:43+
>  N!\x0eUA7^r7d\x06J 14:54:29+
>  N!\x0eUA7^r7d\x06J 13:31:54+
>  N!\x0eUA7^r7d\x06J 06:38:40+
>  N!\x0eUA7^r7d\x06J 21:16:47+
> oy\x1c0077H"i\x07\x13_%\x06 || \nz@Qj\x1cB |E}P^k | 2014-11-23 
> 17:05:45+
> oy\x1c0077H"i\x07\x13_%\x06 || \nz@Qj\x1cB |E}P^k | 2012-02-23 
> 23:20:54+
> oy\x1c0077H"i\x07\x13_%\x06 || \nz@Qj\x1cB | 
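For reference, the 1,200 rows-per-partition figure announced by the stress tool is just the product of the cluster distribution maxima in the column spec above (10 × 30 × 4). A standalone sketch of that arithmetic (not stress-tool code):

```java
// Maximum rows per partition is the product of the cluster() maxima of the
// clustering columns: created_at (10) x event_type (30) x session_type (4).
public class MaxRowsPerPartition {
    public static long maxRows(int... clusterMaxima) {
        long product = 1;
        for (int max : clusterMaxima)
            product *= max;
        return product;
    }
}
```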

[jira] [Updated] (CASSANDRA-9430) Add startup options to cqlshrc

2016-02-12 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-9430:

Labels: cqlsh lhf  (was: cqlsh)

> Add startup options to cqlshrc
> --
>
> Key: CASSANDRA-9430
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9430
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Jeremy Hanna
>Priority: Minor
>  Labels: cqlsh, lhf
>
> There are certain settings that would be nice to set defaults for in the 
> cqlshrc file.  For example, a user may want to set the paging to off by 
> default for their environment.  You can't simply do
> {code}
> echo "paging off;" | cqlsh
> {code}
> because this would disable paging and immediately exit cqlsh.
> So it would be nice to have a section of the cqlshrc to include default 
> settings on startup.
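A hypothetical {{cqlshrc}} section along the lines proposed might look like the following (the section and option names are illustrative only; this is not an existing cqlsh feature):

```ini
; Illustrative only: neither this section nor these options exist yet.
[startup]
; defaults applied on startup, before the interactive prompt appears
paging = off
consistency = LOCAL_QUORUM
```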



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10371) Decommissioned nodes can remain in gossip

2016-02-12 Thread pavel (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15144713#comment-15144713
 ] 

pavel commented on CASSANDRA-10371:
---

Exactly the same behavior was reproduced in 2.1.11.

> Decommissioned nodes can remain in gossip
> -
>
> Key: CASSANDRA-10371
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10371
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Brandon Williams
>Assignee: Stefania
>Priority: Minor
>
> This may apply to other dead states as well.  Dead states should be expired 
> after 3 days.  In the case of decom we attach a timestamp to let the other 
> nodes know when it should be expired.  It has been observed that sometimes a 
> subset of nodes in the cluster never expire the state, and through heap 
> analysis of these nodes it is revealed that the epstate.isAlive check returns 
> true when it should return false, which would allow the state to be evicted.  
> This may have been affected by CASSANDRA-8336.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-9469) Improve Load Shedding

2016-02-12 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-9469.
-
Resolution: Later

> Improve Load Shedding
> -
>
> Key: CASSANDRA-9469
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9469
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benedict
> Fix For: 3.x
>
>
> As discussed in CASSANDRA-9318, load shedding currently provides very few 
> guarantees and is not employed at every stage of the pipeline. It is 
> relatively simple to impose bounds on the number of items in a pipeline 
> stage, along with the age of the items (as we currently impose). We should 
> also ensure that the predicates are imposed on addition to a pipeline stage, 
> not removal (as in the mutation/read stage case), since this does not prevent 
> dangerous build up of outstanding tasks when outlier (or buggy) tasks arrive 
> together (and block processing).
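The "impose bounds on addition to a stage" idea can be sketched with a plain bounded executor (illustrative only, not Cassandra's actual stage implementation):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: a stage that sheds load on *addition* rather than
// removal. The bounded queue caps the number of outstanding tasks, and
// AbortPolicy rejects new work at submit time once the bound is reached,
// so outlier (or buggy) tasks cannot build up an unbounded backlog.
public class BoundedStage {
    public static ThreadPoolExecutor create(int threads, int maxPending) {
        return new ThreadPoolExecutor(
                threads, threads,
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(maxPending),      // bound imposed on addition
                new ThreadPoolExecutor.AbortPolicy());     // shed via RejectedExecutionException
    }
}
```

With one worker and a queue bound of one, the third submission is rejected immediately instead of piling up behind a blocked task.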



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9325) cassandra-stress requires keystore for SSL but provides no way to configure it

2016-02-12 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-9325:

Labels: lhf stress  (was: stress)

> cassandra-stress requires keystore for SSL but provides no way to configure it
> --
>
> Key: CASSANDRA-9325
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9325
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: J.B. Langston
>  Labels: lhf, stress
> Fix For: 2.1.x
>
>
> Even though it shouldn't be required unless client certificate authentication 
> is enabled, the stress tool is looking for a keystore in the default location 
> of conf/.keystore with the default password of cassandra. There is no command 
> line option to override these defaults so you have to provide a keystore that 
> satisfies the default. It looks for conf/.keystore in the working directory, 
> so you need to create this in the directory you are running cassandra-stress 
> from. It doesn't really matter what's in the keystore; it just needs to exist 
> in the expected location and have a password of cassandra.
> Since the keystore might be required if client certificate authentication is 
> enabled, we need to add -transport parameters for keystore and 
> keystore-password.  Ideally, these should be optional and stress shouldn't 
> require the keystore unless client certificate authentication is enabled on 
> the server.
> In case it wasn't apparent, this is for Cassandra 2.1 and later's stress 
> tool.  I actually had even more problems getting Cassandra 2.0's stress tool 
> working with SSL and gave up on it.  We probably don't need to fix 2.0; we 
> can just document that it doesn't support SSL and recommend using 2.1 instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-9256) Refactor MessagingService to support pluggable transports

2016-02-12 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-9256.
-
Resolution: Later

> Refactor MessagingService to support pluggable transports
> -
>
> Key: CASSANDRA-9256
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9256
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benedict
>
> CASSANDRA-7029 and CASSANDRA-9237 would both benefit greatly from a pluggable 
> MessagingService.
> Ideally, we would refactor the native transport to use the same abstractions, 
> so that we could have a single implementation of each viable transport 
> mechanism for both, and we can easily test out the impact of any transport on 
> the whole cluster, not just one half. This is especially important for 
> establishing if there is a benefit to approaches that permit us to isolate 
> networking to a single thread/core, as the characteristics would be quite 
> different if we still needed many networking threads for the other half of 
> the equation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11165) Static column restriction ignored

2016-02-12 Thread Artem Soloviov (JIRA)
Artem Soloviov created CASSANDRA-11165:
--

 Summary: Static column restriction ignored
 Key: CASSANDRA-11165
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11165
 Project: Cassandra
  Issue Type: Bug
  Components: Local Write-Read Paths
 Environment: [cqlsh 5.0.1 | Cassandra 3.3 | CQL spec 3.4.0 | Native 
protocol v4]
Reporter: Artem Soloviov
Priority: Minor


Applying a restriction on a static column value does not affect the results at all.
{code}
CREATE TABLE t (k text, s text static, i int, PRIMARY KEY(k,i));
INSERT INTO t (k, s, i) VALUES ( 'a','static value',1);
INSERT INTO t (k, s, i) VALUES ( 'b','other value',2);
SELECT * FROM t WHERE k = 'b' AND s = 'static value' ALLOW FILTERING ;
{code}
*Expected result:* 
empty set
*Actual result:*
{code}
 k | i | s
---+---+-
 b | 2 | other value

(1 rows)
{code}
{code}
SELECT * FROM t WHERE i = 2 AND s = 'static value' ALLOW FILTERING ;
{code}
*Expected result:* 
empty set
*Actual result:*
{code}
 k | i | s
---+---+-
 b | 2 | other value

(1 rows)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11124) Change default cqlsh encoding to utf-8

2016-02-12 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11124:
-
Issue Type: Improvement  (was: Bug)

> Change default cqlsh encoding to utf-8
> --
>
> Key: CASSANDRA-11124
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11124
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Trivial
>  Labels: cqlsh
>
> Strange things can happen when utf-8 is not the default cqlsh encoding (see 
> CASSANDRA-11030). This ticket proposes changing the default cqlsh encoding to 
> utf-8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9348) Nodetool move output should be more user friendly if bad token is supplied

2016-02-12 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-9348:

Labels: lhf  (was: )

> Nodetool move output should be more user friendly if bad token is supplied
> --
>
> Key: CASSANDRA-9348
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9348
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sequoyha pelletier
>Priority: Trivial
>  Labels: lhf
>
> If you put a token into nodetool move that is out of range for the 
> partitioner you get the following error:
> {noformat}
> [architect@md03-gcsarch-lapp33 11:01:06 ]$ nodetool -h 10.11.48.229 -u 
> cassandra -pw cassandra move \\-9223372036854775809 
> Exception in thread "main" java.io.IOException: For input string: 
> "-9223372036854775809" 
> at org.apache.cassandra.service.StorageService.move(StorageService.java:3104) 
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  
> at java.lang.reflect.Method.invoke(Method.java:606) 
> at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75) 
> at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  
> at java.lang.reflect.Method.invoke(Method.java:606) 
> at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279) 
> at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
>  
> at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
>  
> at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237) 
> at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138) 
> at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252) 
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
>  
> at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801) 
> at 
> com.sun.jmx.remote.security.MBeanServerAccessController.invoke(MBeanServerAccessController.java:468)
>  
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
>  
> at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
>  
> at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
>  
> at java.security.AccessController.doPrivileged(Native Method) 
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1427)
>  
> at 
> javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
>  
> at sun.reflect.GeneratedMethodAccessor52.invoke(Unknown Source) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  
> at java.lang.reflect.Method.invoke(Method.java:606) 
> at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322) 
> at sun.rmi.transport.Transport$1.run(Transport.java:177) 
> at sun.rmi.transport.Transport$1.run(Transport.java:174) 
> at java.security.AccessController.doPrivileged(Native Method) 
> at sun.rmi.transport.Transport.serviceCall(Transport.java:173) 
> at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556) 
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
>  
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  
> at java.lang.Thread.run(Thread.java:745) 
> {noformat}
> This ticket is just requesting that we catch the exception and output 
> something along the lines of "Token supplied is outside of the acceptable 
> range" for those that are still on the Cassandra learning curve.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10397) CQLSH not displaying correct timezone

2016-02-12 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15144825#comment-15144825
 ] 

Stefan Podkowinski commented on CASSANDRA-10397:


It's probably better to add a {{ui.timezone}} option to {{cqlshrc}} that can 
be used instead of a command line option, and to document it there along with 
other related settings such as {{time_format}}. 

I've now updated my branch so the timezone is taken from {{cqlshrc}}, the 
{{TZ}} environment variable, or auto-detected in case {{tzlocal}} is 
installed, and included warnings in case Python packages are missing. 

> CQLSH not displaying correct timezone
> -
>
> Key: CASSANDRA-10397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10397
> Project: Cassandra
>  Issue Type: Bug
> Environment: Ubuntu 14.04 LTS
>Reporter: Suleman Rai
>Assignee: Stefan Podkowinski
>  Labels: cqlsh
>
> CQLSH is not adding the timezone offset to the timestamp after it has been 
> inserted into a table.
> create table test(id int PRIMARY KEY, time timestamp);
> INSERT INTO test(id,time) values (1,dateof(now()));
> select *from test;
> id | time
> +-
>   1 | 2015-09-25 13:00:32
> It is just displaying the default UTC timestamp without adding the timezone 
> offset. It should be 2015-09-25 21:00:32 in my case as my timezone offset is 
> +0800.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10733) Inconsistencies in CQLSH auto-complete

2016-02-12 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15144983#comment-15144983
 ] 

Tyler Hobbs commented on CASSANDRA-10733:
-

Ah, I didn't realize we were running the cqlshlib tests in cassci yet, thanks 
for catching that.

> Inconsistencies in CQLSH auto-complete
> --
>
> Key: CASSANDRA-10733
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10733
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL, Tools
>Reporter: Michael Edge
>Assignee: Michael Edge
>Priority: Trivial
>  Labels: cqlsh, lhf
> Fix For: 2.2.6, 3.0.4, 3.4
>
> Attachments: 10733-fix-space-2.2.txt, 
> CASSANDRA-2.2-10733-CQLSH-Auto.patch, CASSANDRA-2.2-10733-tests.patch, 
> CASSANDRA-3.0-10733-CQLSH-Auto.patch
>
>
> Auto-complete in cqlsh does not work correctly on some commands. We see some 
> inconsistent behaviour when completing part of the statement and hitting the 
> tab key.
> {color:green}Works correctly{color}
> Auto-complete on {{'desc table '}}, {{'desc function '}} and {{'desc type '}} 
> works correctly. We see a list of all tables (or functions, types) in the 
> current keyspace plus a list of all available keyspaces followed by a full 
> stop (e.g. system.)
> {code}
> cqlsh:fxaggr> desc TABLE 
>  minutedata   system_distributed.
> ;rawtickdatabylp  system_traces.
>   rawtickdatabysymbol  tickdata
> daydata  system.  
> fxaggr.  system_auth. 
> {code}
> {color:red}Fix required{color}
> {{'desc aggregate '}} displays the aggregates in the current keyspace (in 
> this case, only 1, called 'average') but does not display a list of available 
> keyspaces. It only displays the current keyspace, with no following full stop.
> {code}
> cqlsh:fxaggr> desc aggregate 
>  ;  average  fxaggr
> {code}
> {color:green}Works correctly{color}
> Auto-complete on {{'desc table . '}} and {{'desc type 
> .'}} works correctly. We see a list of all tables (or types) in the 
> current keyspace
> {code}
> cqlsh:fxaggr> desc table fxaggr.
> daydata  rawtickdatabylp  tickdata
> minutedata   rawtickdatabysymbol  
> {code}
> {color:red}Fix required{color}
> Auto-complete on {{'desc function . '}} and {{'desc aggregate 
> .'}} works inconsistently. In a keyspace with 2 functions, both 
> beginning with the letters 'avg', if I type {{'desc function '}} 
> and hit tab, auto-complete will result in this: {{'desc function fxaggr.avg 
> '}} and will not display the matching functions. If I type {{'desc function 
> .'}} (note the trailing full stop) and hit tab, auto-complete will 
> work correctly:
> {code}
> cqlsh:fxaggr> desc function fxaggr.avg
> avgfinal  avgstate  
> {code}
> If I type {{'desc aggregate '}} and hit tab, auto-complete returns  
> {{'desc aggregate  '}}  (it adds a space) and does not show me the 
> list of available aggregates. If I type {{'desc aggregate .'}} 
> (note the trailing full stop) and hit tab, auto-complete will work correctly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11164) Order and filter cipher suites correctly

2016-02-12 Thread Tom Petracca (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15144982#comment-15144982
 ] 

Tom Petracca commented on CASSANDRA-11164:
--

To follow up: I don't know that the ordering actually matters. Some tickets 
and online discussions seem to suggest that it does, but nothing in the Java 
documentation for SSLSocket or SSLServerSocket implies that the suites will 
actually be attempted in the order given. Still, ordering them has no real 
cost, so why not?
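For illustration, an order-preserving filter can be written by iterating the desired list from cassandra.yaml rather than the supported list (a hypothetical helper, not the actual SSLFactory code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative sketch: filter by iterating the *desired* suites from
// cassandra.yaml, so the operator-configured order is preserved and only
// unsupported suites are dropped. Iterating the supported list instead
// would silently adopt the JVM's ordering.
public class CipherSuiteOrder {
    public static String[] filterPreservingOrder(String[] desired, String[] supported) {
        Set<String> supportedSet = new HashSet<>(Arrays.asList(supported));
        List<String> filtered = new ArrayList<>();
        for (String suite : desired)
            if (supportedSet.contains(suite))
                filtered.add(suite);               // desired order retained
        return filtered.toArray(new String[0]);
    }
}
```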

> Order and filter cipher suites correctly
> 
>
> Key: CASSANDRA-11164
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11164
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Tom Petracca
>Priority: Minor
> Fix For: 2.2.x
>
> Attachments: 11164-2.2.txt
>
>
> As pointed out in https://issues.apache.org/jira/browse/CASSANDRA-10508, 
> SSLFactory.filterCipherSuites() doesn't respect the ordering of desired 
> ciphers in cassandra.yaml.
> Also the fix that occurred for 
> https://issues.apache.org/jira/browse/CASSANDRA-3278 is incomplete and needs 
> to be applied to all locations where we create an SSLSocket so that JCE is 
> not required out of the box or with additional configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10736) TestTopology.simple_decommission_test failing due to assertion triggered by SizeEstimatesRecorder

2016-02-12 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15144918#comment-15144918
 ] 

Aleksey Yeschenko commented on CASSANDRA-10736:
---

Committed as 
[1b201e959a6f77aeedd2549ed523200021d8c6e6|https://github.com/apache/cassandra/commit/1b201e959a6f77aeedd2549ed523200021d8c6e6]
 to 2.2 and merged with 3.0 and trunk using the provided branches, thanks.

Minor nit: the commit message shouldn't include the ticket number. Just copy 
whatever you used for CHANGES.txt sans the issue #.

> TestTopology.simple_decommission_test failing due to assertion triggered by 
> SizeEstimatesRecorder
> -
>
> Key: CASSANDRA-10736
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10736
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Joel Knighton
>Assignee: Joel Knighton
>Priority: Minor
> Fix For: 2.2.6, 3.0.4, 3.4
>
>
> Example 
> [here|http://cassci.datastax.com/job/cassandra-2.2_dtest/369/testReport/junit/topology_test/TestTopology/simple_decommission_test/].
> {{SizeEstimatesRecorder}} can race with decommission when it tries to get the 
> primary ranges for a node.
> This is because {{getPredecessor}} in {{TokenMetadata}} hits an assertion if 
> the token is no longer in {{TokenMetadata}}.
> This no longer occurs in 3.0 because this assertion has been removed and 
> replaced with different behavior.
> In both cases, the relationship between the set of tokens in 
> {{getPrimaryRangesFor}} (passed in as an argument) and the set of tokens used 
> in calls by {{getPredecessor}} (the system ones) should be investigated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[5/6] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-02-12 Thread aleksey
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/efbcd15d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/efbcd15d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/efbcd15d

Branch: refs/heads/cassandra-3.0
Commit: efbcd15d642008866f82d89f881b703c011dce19
Parents: f3b7599 1b201e9
Author: Aleksey Yeschenko 
Authored: Fri Feb 12 17:29:15 2016 +
Committer: Aleksey Yeschenko 
Committed: Fri Feb 12 17:31:21 2016 +

--
 CHANGES.txt | 5 -
 src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java | 6 --
 src/java/org/apache/cassandra/locator/TokenMetadata.java| 7 +++
 3 files changed, 11 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/efbcd15d/CHANGES.txt
--
diff --cc CHANGES.txt
index 15012b1,49bc581..a7669bb
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -15,8 -5,9 +15,11 @@@ Merged from 2.2
   * (cqlsh) Fix inconsistent auto-complete (CASSANDRA-10733)
   * Make SELECT JSON and toJson() threadsafe (CASSANDRA-11048)
   * Fix SELECT on tuple relations for mixed ASC/DESC clustering order 
(CASSANDRA-7281)
++ * Use cloned TokenMetadata in size estimates to avoid race against 
membership check
++   (CASSANDRA-10736)
   * (cqlsh) Support utf-8/cp65001 encoding on Windows (CASSANDRA-11030)
-  * Fix paging on DISTINCT queries repeats result when first row in partition 
changes (CASSANDRA-10010)
+  * Fix paging on DISTINCT queries repeats result when first row in partition 
changes
+(CASSANDRA-10010)
  Merged from 2.1:
   * Properly release sstable ref when doing offline scrub (CASSANDRA-10697)
   * Improve nodetool status performance for large cluster (CASSANDRA-7238)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/efbcd15d/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/efbcd15d/src/java/org/apache/cassandra/locator/TokenMetadata.java
--
diff --cc src/java/org/apache/cassandra/locator/TokenMetadata.java
index f6e9cf7,de16fda..97c5f10
--- a/src/java/org/apache/cassandra/locator/TokenMetadata.java
+++ b/src/java/org/apache/cassandra/locator/TokenMetadata.java
@@@ -26,8 -26,8 +26,9 @@@ import java.util.concurrent.atomic.Atom
  import java.util.concurrent.locks.ReadWriteLock;
  import java.util.concurrent.locks.ReentrantReadWriteLock;
  
 +import com.google.common.annotations.VisibleForTesting;
  import com.google.common.collect.*;
+ import org.apache.commons.lang3.StringUtils;
  import org.slf4j.Logger;
  import org.slf4j.LoggerFactory;
  



[4/6] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-02-12 Thread aleksey
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/efbcd15d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/efbcd15d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/efbcd15d

Branch: refs/heads/trunk
Commit: efbcd15d642008866f82d89f881b703c011dce19
Parents: f3b7599 1b201e9
Author: Aleksey Yeschenko 
Authored: Fri Feb 12 17:29:15 2016 +
Committer: Aleksey Yeschenko 
Committed: Fri Feb 12 17:31:21 2016 +

--
 CHANGES.txt | 5 -
 src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java | 6 --
 src/java/org/apache/cassandra/locator/TokenMetadata.java| 7 +++
 3 files changed, 11 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/efbcd15d/CHANGES.txt
--
diff --cc CHANGES.txt
index 15012b1,49bc581..a7669bb
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -15,8 -5,9 +15,11 @@@ Merged from 2.2
   * (cqlsh) Fix inconsistent auto-complete (CASSANDRA-10733)
   * Make SELECT JSON and toJson() threadsafe (CASSANDRA-11048)
   * Fix SELECT on tuple relations for mixed ASC/DESC clustering order 
(CASSANDRA-7281)
++ * Use cloned TokenMetadata in size estimates to avoid race against 
membership check
++   (CASSANDRA-10736)
   * (cqlsh) Support utf-8/cp65001 encoding on Windows (CASSANDRA-11030)
-  * Fix paging on DISTINCT queries repeats result when first row in partition 
changes (CASSANDRA-10010)
+  * Fix paging on DISTINCT queries repeats result when first row in partition 
changes
+(CASSANDRA-10010)
  Merged from 2.1:
   * Properly release sstable ref when doing offline scrub (CASSANDRA-10697)
   * Improve nodetool status performance for large cluster (CASSANDRA-7238)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/efbcd15d/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/efbcd15d/src/java/org/apache/cassandra/locator/TokenMetadata.java
--
diff --cc src/java/org/apache/cassandra/locator/TokenMetadata.java
index f6e9cf7,de16fda..97c5f10
--- a/src/java/org/apache/cassandra/locator/TokenMetadata.java
+++ b/src/java/org/apache/cassandra/locator/TokenMetadata.java
@@@ -26,8 -26,8 +26,9 @@@ import java.util.concurrent.atomic.Atom
  import java.util.concurrent.locks.ReadWriteLock;
  import java.util.concurrent.locks.ReentrantReadWriteLock;
  
 +import com.google.common.annotations.VisibleForTesting;
  import com.google.common.collect.*;
+ import org.apache.commons.lang3.StringUtils;
  import org.slf4j.Logger;
  import org.slf4j.LoggerFactory;
  



[6/6] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2016-02-12 Thread aleksey
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1944bf50
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1944bf50
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1944bf50

Branch: refs/heads/trunk
Commit: 1944bf507d66b5c103c136319caeb4a9e3767a69
Parents: a800ca8 efbcd15
Author: Aleksey Yeschenko 
Authored: Fri Feb 12 17:31:35 2016 +
Committer: Aleksey Yeschenko 
Committed: Fri Feb 12 17:33:14 2016 +

--
 CHANGES.txt | 5 -
 src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java | 6 --
 src/java/org/apache/cassandra/locator/TokenMetadata.java| 7 +++
 3 files changed, 11 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1944bf50/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1944bf50/src/java/org/apache/cassandra/locator/TokenMetadata.java
--
diff --cc src/java/org/apache/cassandra/locator/TokenMetadata.java
index d47b681,97c5f10..caa7661
--- a/src/java/org/apache/cassandra/locator/TokenMetadata.java
+++ b/src/java/org/apache/cassandra/locator/TokenMetadata.java
@@@ -868,20 -869,18 +869,18 @@@ public class TokenMetadat
  
  public Token getPredecessor(Token token)
  {
 -List tokens = sortedTokens();
 +List<Token> tokens = sortedTokens();
  int index = Collections.binarySearch(tokens, token);
- //assert index >= 0 : token + " not found in " + StringUtils.join(tokenToEndpointMap.keySet(), ", ");
- if (index < 0) index = -index-1;
+ assert index >= 0 : token + " not found in " + StringUtils.join(tokenToEndpointMap.keySet(), ", ");
 -return (Token) (index == 0 ? tokens.get(tokens.size() - 1) : tokens.get(index - 1));
 +return index == 0 ? tokens.get(tokens.size() - 1) : tokens.get(index - 1);
  }
  
  public Token getSuccessor(Token token)
  {
 -List tokens = sortedTokens();
 +List<Token> tokens = sortedTokens();
  int index = Collections.binarySearch(tokens, token);
- //assert index >= 0 : token + " not found in " + StringUtils.join(tokenToEndpointMap.keySet(), ", ");
- if (index < 0) return (Token) tokens.get(-index-1);
+ assert index >= 0 : token + " not found in " + StringUtils.join(tokenToEndpointMap.keySet(), ", ");
 -return (Token) ((index == (tokens.size() - 1)) ? tokens.get(0) : tokens.get(index + 1));
 +return (index == (tokens.size() - 1)) ? tokens.get(0) : tokens.get(index + 1);
  }
  
  /** @return a copy of the bootstrapping tokens map */
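The merged logic above is a plain binary search over the sorted ring with wrap-around at both ends, asserting that the token is present. A self-contained sketch (plain longs stand in for Cassandra's Token type; the class name is hypothetical):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.List;

// Sketch of ring predecessor/successor lookup: binary search on the sorted
// token list, wrapping around at both ends of the ring.
public class TokenRing {
    private final List<Long> sortedTokens;

    public TokenRing(Collection<Long> tokens) {
        List<Long> sorted = new ArrayList<>(tokens);
        Collections.sort(sorted);
        this.sortedTokens = sorted;
    }

    // Token immediately before `token` on the ring; the first token wraps to the last.
    public long predecessor(long token) {
        int index = Collections.binarySearch(sortedTokens, token);
        assert index >= 0 : token + " not found in ring";
        return index == 0 ? sortedTokens.get(sortedTokens.size() - 1)
                          : sortedTokens.get(index - 1);
    }

    // Token immediately after `token` on the ring; the last token wraps to the first.
    public long successor(long token) {
        int index = Collections.binarySearch(sortedTokens, token);
        assert index >= 0 : token + " not found in ring";
        return index == sortedTokens.size() - 1 ? sortedTokens.get(0)
                                                : sortedTokens.get(index + 1);
    }
}
```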



[jira] [Commented] (CASSANDRA-11164) Order and filter cipher suites correctly

2016-02-12 Thread Tom Petracca (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15144958#comment-15144958
 ] 

Tom Petracca commented on CASSANDRA-11164:
--

You need the filtering to ensure that you don't attempt to use an unsupported 
cipher suite.  By default we attempt to use TLS_RSA_WITH_AES_256_CBC_SHA, 
which fails on systems that don't have the JCE Unlimited Strength Jurisdiction 
Policy files.  However, I don't want to remove the unsupported suites from the 
default because most people who have JCE will actually want to use the stronger 
ones (and I generally like the idea of having that functionality by default).
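Order-preserving filtering of the kind this ticket asks for can be sketched as follows (a minimal illustration, not the attached patch; names are hypothetical):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Keep only the cipher suites the JVM supports, in the order the operator
// listed them in cassandra.yaml.
public class CipherFilter {
    public static String[] filterCipherSuites(String[] supported, String[] desired) {
        Set<String> supportedSet = new HashSet<>(Arrays.asList(supported));
        List<String> accepted = new ArrayList<>();
        for (String suite : desired)
            if (supportedSet.contains(suite))  // drop suites the JVM can't use (e.g. missing JCE policy)
                accepted.add(suite);
        return accepted.toArray(new String[0]);
    }
}
```

Iterating over the desired list (rather than the supported one) is what preserves the configured preference order; the result can then be passed to `setEnabledCipherSuites()`, whose order the TLS stack honors.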

> Order and filter cipher suites correctly
> 
>
> Key: CASSANDRA-11164
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11164
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Tom Petracca
>Priority: Minor
> Fix For: 2.2.x
>
> Attachments: 11164-2.2.txt
>
>
> As pointed out in https://issues.apache.org/jira/browse/CASSANDRA-10508, 
> SSLFactory.filterCipherSuites() doesn't respect the ordering of desired 
> ciphers in cassandra.yaml.
> Also the fix that occurred for 
> https://issues.apache.org/jira/browse/CASSANDRA-3278 is incomplete and needs 
> to be applied to all locations where we create an SSLSocket so that JCE is 
> not required out of the box or with additional configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9220) Hostname verification for node-to-node encryption

2016-02-12 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15145056#comment-15145056
 ] 

Tyler Hobbs commented on CASSANDRA-9220:


[~spo...@gmail.com] should we block this on CASSANDRA-10508, then?

> Hostname verification for node-to-node encryption
> -
>
> Key: CASSANDRA-9220
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9220
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
> Fix For: 3.x
>
> Attachments: sslhostverification-2.0.patch
>
>
> This patch will introduce a new SSL server option: 
> {{require_endpoint_verification}}. 
> Setting it will enable hostname verification for inter-node SSL 
> communication. This is necessary to prevent man-in-the-middle attacks when 
> building a trust chain against a common CA. See 
> [here|https://tersesystems.com/2014/03/23/fixing-hostname-verification/] for 
> background details. 
> Clusters that solely rely on importing all node certificates into each trust 
> store (as described 
> [here|http://docs.datastax.com/en/cassandra/2.0/cassandra/security/secureSSLCertificates_t.html])
>  are not affected. 
> Clusters that use the same common CA to sign node certificates are 
> potentially affected. In case the CA signing process will allow other parties 
> to generate certs for different purposes, those certificates could in turn be 
> used for MITM attacks. The provided patch allows enabling hostname 
> verification to make sure not only that the cert is valid but also that 
> it has been created for the host we're about to connect to.
> Corresponding dtest: [Test for 
> CASSANDRA-9220|https://github.com/riptano/cassandra-dtest/pull/237]
> Github: 
> 2.0 -> 
> [diff|https://github.com/apache/cassandra/compare/cassandra-2.0...spodkowinski:feat/sslhostverification],
>  
> [patch|https://github.com/apache/cassandra/compare/cassandra-2.0...spodkowinski:feat/sslhostverification.patch],
> Trunk -> 
> [diff|https://github.com/apache/cassandra/compare/trunk...spodkowinski:feat/sslhostverification],
>  
> [patch|https://github.com/apache/cassandra/compare/trunk...spodkowinski:feat/sslhostverification.patch]
> Related patches from the client perspective: 
> [Java|https://datastax-oss.atlassian.net/browse/JAVA-716], 
> [Python|https://datastax-oss.atlassian.net/browse/PYTHON-296]
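JSSE already provides the hook such an option needs: setting an endpoint identification algorithm on the socket's SSLParameters makes the handshake verify the peer's hostname against its certificate. A hedged sketch (class and method names are illustrative, not the patch's):

```java
import javax.net.ssl.SSLParameters;
import javax.net.ssl.SSLSocket;

// Switch on hostname verification for a raw SSLSocket via the standard JSSE API.
public class HostnameVerificationSketch {
    public static void enableHostnameVerification(SSLSocket socket) {
        SSLParameters params = socket.getSSLParameters();
        // "HTTPS" enables RFC 2818-style hostname matching during the handshake.
        params.setEndpointIdentificationAlgorithm("HTTPS");
        socket.setSSLParameters(params);
    }
}
```

With this set, the handshake itself fails when the certificate was not issued for the host being contacted, which is the MITM scenario described above.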



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9220) Hostname verification for node-to-node encryption

2016-02-12 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15145056#comment-15145056
 ] 

Tyler Hobbs edited comment on CASSANDRA-9220 at 2/12/16 7:02 PM:
-

[~spo...@gmail.com] should we block this on CASSANDRA-10508, then?

I've also rebased a version of this on trunk: 
https://github.com/thobbs/cassandra/tree/CASSANDRA-9220-trunk-rebase


was (Author: thobbs):
[~spo...@gmail.com] should we block this on CASSANDRA-10508, then?

> Hostname verification for node-to-node encryption
> -
>
> Key: CASSANDRA-9220
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9220
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
> Fix For: 3.x
>
> Attachments: sslhostverification-2.0.patch
>
>
> This patch will introduce a new SSL server option: 
> {{require_endpoint_verification}}. 
> Setting it will enable hostname verification for inter-node SSL 
> communication. This is necessary to prevent man-in-the-middle attacks when 
> building a trust chain against a common CA. See 
> [here|https://tersesystems.com/2014/03/23/fixing-hostname-verification/] for 
> background details. 
> Clusters that solely rely on importing all node certificates into each trust 
> store (as described 
> [here|http://docs.datastax.com/en/cassandra/2.0/cassandra/security/secureSSLCertificates_t.html])
>  are not affected. 
> Clusters that use the same common CA to sign node certificates are 
> potentially affected. In case the CA signing process will allow other parties 
> to generate certs for different purposes, those certificates could in turn be 
> used for MITM attacks. The provided patch allows enabling hostname 
> verification to make sure not only that the cert is valid but also that 
> it has been created for the host we're about to connect to.
> Corresponding dtest: [Test for 
> CASSANDRA-9220|https://github.com/riptano/cassandra-dtest/pull/237]
> Github: 
> 2.0 -> 
> [diff|https://github.com/apache/cassandra/compare/cassandra-2.0...spodkowinski:feat/sslhostverification],
>  
> [patch|https://github.com/apache/cassandra/compare/cassandra-2.0...spodkowinski:feat/sslhostverification.patch],
> Trunk -> 
> [diff|https://github.com/apache/cassandra/compare/trunk...spodkowinski:feat/sslhostverification],
>  
> [patch|https://github.com/apache/cassandra/compare/trunk...spodkowinski:feat/sslhostverification.patch]
> Related patches from the client perspective: 
> [Java|https://datastax-oss.atlassian.net/browse/JAVA-716], 
> [Python|https://datastax-oss.atlassian.net/browse/PYTHON-296]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11166) Inconsistent behavior on Tombstones

2016-02-12 Thread Anubhav Kale (JIRA)
Anubhav Kale created CASSANDRA-11166:


 Summary: Inconsistent behavior on Tombstones
 Key: CASSANDRA-11166
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11166
 Project: Cassandra
  Issue Type: Bug
Reporter: Anubhav Kale
Priority: Minor


I noticed an inconsistent behavior on deletes. Not sure if it is intentional. 

The summary is:

If a table is created with a TTL or if rows are inserted into a table using a 
TTL, when it's time for the row to expire, a tombstone is generated (as 
expected) and cfstats, cqlsh tracing, and sstable2json all show it.

However, if one executes a delete from table query followed by a select *, 
neither cql tracing nor cfstats shows a tombstone being present, although 
sstable2json does show a tombstone.

Is this situation treated differently on purpose? In such a situation, does 
Cassandra not have to scan tombstones (which seems odd)?

Also as a data point, if one executes a delete  from table, cqlsh 
tracing, nodetool cfstats, and sstable2json all show a consistent result 
(tombstone being present).

As an end user, I'd assume that deleting a row either via TTL or explicitly 
should show me a tombstone. Is this expectation reasonable? If not, can this 
behavior be clearly documented?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9220) Hostname verification for node-to-node encryption

2016-02-12 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-9220:
---
Reviewer: Robert Stupp  (was: Tyler Hobbs)

> Hostname verification for node-to-node encryption
> -
>
> Key: CASSANDRA-9220
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9220
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
> Fix For: 3.x
>
> Attachments: sslhostverification-2.0.patch
>
>
> This patch will introduce a new SSL server option: 
> {{require_endpoint_verification}}. 
> Setting it will enable hostname verification for inter-node SSL 
> communication. This is necessary to prevent man-in-the-middle attacks when 
> building a trust chain against a common CA. See 
> [here|https://tersesystems.com/2014/03/23/fixing-hostname-verification/] for 
> background details. 
> Clusters that solely rely on importing all node certificates into each trust 
> store (as described 
> [here|http://docs.datastax.com/en/cassandra/2.0/cassandra/security/secureSSLCertificates_t.html])
>  are not affected. 
> Clusters that use the same common CA to sign node certificates are 
> potentially affected. In case the CA signing process will allow other parties 
> to generate certs for different purposes, those certificates could in turn be 
> used for MITM attacks. The provided patch allows enabling hostname 
> verification to make sure not only that the cert is valid but also that 
> it has been created for the host we're about to connect to.
> Corresponding dtest: [Test for 
> CASSANDRA-9220|https://github.com/riptano/cassandra-dtest/pull/237]
> Github: 
> 2.0 -> 
> [diff|https://github.com/apache/cassandra/compare/cassandra-2.0...spodkowinski:feat/sslhostverification],
>  
> [patch|https://github.com/apache/cassandra/compare/cassandra-2.0...spodkowinski:feat/sslhostverification.patch],
> Trunk -> 
> [diff|https://github.com/apache/cassandra/compare/trunk...spodkowinski:feat/sslhostverification],
>  
> [patch|https://github.com/apache/cassandra/compare/trunk...spodkowinski:feat/sslhostverification.patch]
> Related patches from the client perspective: 
> [Java|https://datastax-oss.atlassian.net/browse/JAVA-716], 
> [Python|https://datastax-oss.atlassian.net/browse/PYTHON-296]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10508) Remove hard-coded SSL cipher suites and protocols

2016-02-12 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-10508:

Assignee: Stefan Podkowinski
Reviewer: Robert Stupp

> Remove hard-coded SSL cipher suites and protocols
> -
>
> Key: CASSANDRA-10508
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10508
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>  Labels: lhf
> Fix For: 3.x
>
>
> Currently each SSL connections will be initialized using a hard-coded list of 
> protocols ("SSLv2Hello", "TLSv1", "TLSv1.1", "TLSv1.2") and cipher suites. We 
> now require Java 8 which comes with solid defaults for these kind of SSL 
> settings and I'm wondering if the current behavior shouldn't be re-evaluated. 
> In my impression the way cipher suites are currently whitelisted is 
> problematic, as this will prevent the JVM from using more recent and more 
> secure suites that haven't been added to the hard-coded list. JVM updates may 
> also cause issues in case the limited number of ciphers cannot be used, e.g. 
> see CASSANDRA-6613.
> Looking at the source I've also stumbled upon a bug in the 
> {{filterCipherSuites()}} method that would return the filtered list of 
> ciphers in undetermined order where the result is passed to 
> {{setEnabledCipherSuites()}}. However, the list of ciphers will reflect the 
> order of preference 
> ([source|https://bugs.openjdk.java.net/browse/JDK-8087311]) and therefore you 
> may end up with weaker algorithms on the top. Currently it's not that 
> critical, as we only whitelist a couple of ciphers anyway. But it adds to the 
> question if it still really makes sense to work with the cipher list at all 
> in the Cassandra code base.
> Another way to effect used ciphers is by changing the security properties. 
> This is a more versatile way to work with cipher lists instead of relying on 
> hard-coded values, see 
> [here|https://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/JSSERefGuide.html#DisabledAlgorithms]
>  for details.
> The same applies to the protocols. Introduced in CASSANDRA-8265 to prevent 
> SSLv3 attacks, this is not necessary anymore as SSLv3 is now blacklisted 
> anyway and will stop using safer protocol sets on new JVM releases or user 
> request. Again, we should stick with the JVM defaults. Using the 
> {{jdk.tls.client.protocols}} systems property will always allow to restrict 
> the set of protocols in case another emergency fix is needed. 
> You can find a patch with where I ripped out the mentioned options here:
> [Diff 
> trunk|https://github.com/apache/cassandra/compare/trunk...spodkowinski:fix/ssloptions]
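The ticket's core argument, that the JVM's defaults are already sound, can be seen directly from the JSSE API: the default SSLContext exposes a vetted protocol and cipher selection without any hard-coded whitelist. A hedged illustration (not Cassandra code; the class name is hypothetical):

```java
import java.security.NoSuchAlgorithmException;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;

// Ask the default SSLContext for the protocols it enables by default,
// instead of maintaining a hard-coded list in application code.
public class DefaultTls {
    public static String[] defaultProtocols() {
        try {
            SSLParameters defaults = SSLContext.getDefault().getDefaultSSLParameters();
            return defaults.getProtocols();
        } catch (NoSuchAlgorithmException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("default protocols: " + String.join(", ", defaultProtocols()));
    }
}
```

These defaults track JVM updates automatically, and can still be narrowed in an emergency via security properties such as `jdk.tls.client.protocols`, as the description notes.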



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11167) NPE when creating serializing ErrorMessage for Exception with null message

2016-02-12 Thread Tyler Hobbs (JIRA)
Tyler Hobbs created CASSANDRA-11167:
---

 Summary: NPE when creating serializing ErrorMessage for Exception 
with null message
 Key: CASSANDRA-11167
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11167
 Project: Cassandra
  Issue Type: Bug
  Components: Coordination
Reporter: Tyler Hobbs
Priority: Minor
 Fix For: 2.2.x, 3.0.x, 3.x


In {{ErrorMessage.encode()}} and {{encodedSize()}}, we do not handle the 
exception having a {{null}} message.  This can result in an error like the 
following:

{noformat}
ERROR [SharedPool-Worker-1] 2016-02-10 17:41:29,793  Message.java:611 - 
Unexpected exception during request; channel = [id: 0xc2c6499a, 
/127.0.0.1:53299 => /127.0.0.1:9042]
java.lang.NullPointerException: null
at 
org.apache.cassandra.db.TypeSizes.encodedUTF8Length(TypeSizes.java:46) 
~[cassandra-all-3.0.3.874.jar:3.0.3.874]
at org.apache.cassandra.transport.CBUtil.sizeOfString(CBUtil.java:132) 
~[cassandra-all-3.0.3.874.jar:3.0.3.874]
at 
org.apache.cassandra.transport.messages.ErrorMessage$1.encodedSize(ErrorMessage.java:215)
 ~[cassandra-all-3.0.3.874.jar:3.0.3.874]
at 
org.apache.cassandra.transport.messages.ErrorMessage$1.encodedSize(ErrorMessage.java:44)
 ~[cassandra-all-3.0.3.874.jar:3.0.3.874]
at 
org.apache.cassandra.transport.Message$ProtocolEncoder.encode(Message.java:328) 
~[cassandra-all-3.0.3.874.jar:3.0.3.874]
at 
org.apache.cassandra.transport.Message$ProtocolEncoder.encode(Message.java:314) 
~[cassandra-all-3.0.3.874.jar:3.0.3.874]
at 
io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:89)
 ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:629)
 ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:686)
 ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:622)
 ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
at 
org.apache.cassandra.transport.Message$Dispatcher$Flusher.run(Message.java:445) 
~[cassandra-all-3.0.3.874.jar:3.0.3.874]
at 
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:358)
 ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357) 
~[netty-all-4.0.34.Final.jar:4.0.34.Final]
at 
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
 ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
at 
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
 ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_71]
{noformat}
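The defensive fix the ticket implies is small: substitute an empty string when the exception carries no message, before the size calculation or encoding touches it. A hedged sketch (helper and class names are hypothetical, not the actual patch):

```java
// Treat a null exception message as an empty string before computing its
// encoded size or writing it to the wire, so the string-length helpers
// never see null.
public class ErrorMessageSketch {
    public static String nonNullMessage(Throwable t) {
        String msg = t.getMessage();      // may legitimately be null
        return msg == null ? "" : msg;    // safe for encodedUTF8Length / sizeOfString
    }

    public static void main(String[] args) {
        System.out.println("[" + nonNullMessage(new RuntimeException()) + "]");
        System.out.println("[" + nonNullMessage(new RuntimeException("boom")) + "]");
    }
}
```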



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11157) test_bulk_round_trip_blogposts_with_max_connections got "Truncate timed out"

2016-02-12 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15145178#comment-15145178
 ] 

Jim Witschey commented on CASSANDRA-11157:
--

PR merged at [14a590 on 
dtest|https://github.com/stef1927/cassandra-dtest/commit/14a5901943ef1f90964abcc4d4ec4b579d634b8a].

> test_bulk_round_trip_blogposts_with_max_connections got "Truncate timed out"
> 
>
> Key: CASSANDRA-11157
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11157
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Stefania
>Assignee: Stefania
>
> {{test_bulk_round_trip_blogposts_with_max_connections}} failed again but for 
> a different reason:
> http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-11148-trunk-dtest/1/testReport/junit/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_bulk_round_trip_blogposts_with_max_connections/
> Increasing cqlsh {{--request-timeout}} should fix this since it is just the 
> TRUNCATE operation that times out, unlike the problems of CASSANDRA-10938.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11169) [sasi] exception thrown when trying to index row with index on set

2016-02-12 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15145784#comment-15145784
 ] 

Pavel Yaskevich commented on CASSANDRA-11169:
-

[~rustyrazorblade] collections are not yet properly supported by SASI, since one 
couldn't even create an index on collections before (2.0, where we ported SASI 
from). For now I will add validation to verify that "create index" fails on 
such columns, and will later check what is required to make it happen. 

> [sasi] exception thrown when trying to index row with index on set
> 
>
> Key: CASSANDRA-11169
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11169
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jon Haddad
>
> I have a brand new cluster, built off 1944bf507d66b5c103c136319caeb4a9e3767a69
> I created a new table with a set, then a SASI index on the set.  I 
> tried to insert a row with a set, Cassandra throws an exception and becomes 
> unavailable.
> {code}
> cqlsh> create KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 1};
> cqlsh> use test;
> cqlsh:test> create table a (id int PRIMARY KEY , s set );
> cqlsh:test> create CUSTOM INDEX on a(s) USING 
> 'org.apache.cassandra.index.sasi.SASIIndex';
> cqlsh:test> insert into a (id, s) values (1, {'jon', 'haddad'});
> WriteTimeout: code=1100 [Coordinator node timed out waiting for replica 
> nodes' responses] message="Operation timed out - received only 0 responses." 
> info={'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
> {code}
> Cassandra stacktrace:
> {code}
> java.lang.AssertionError: null
>   at org.apache.cassandra.db.rows.BTreeRow.getCell(BTreeRow.java:212) 
> ~[main/:na]
>   at 
> org.apache.cassandra.index.sasi.conf.ColumnIndex.getValueOf(ColumnIndex.java:194)
>  ~[main/:na]
>   at 
> org.apache.cassandra.index.sasi.conf.ColumnIndex.index(ColumnIndex.java:95) 
> ~[main/:na]
>   at 
> org.apache.cassandra.index.sasi.SASIIndex$1.insertRow(SASIIndex.java:247) 
> ~[main/:na]
>   at 
> org.apache.cassandra.index.SecondaryIndexManager$WriteTimeTransaction.onInserted(SecondaryIndexManager.java:808)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition$RowUpdater.apply(AtomicBTreePartition.java:335)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition$RowUpdater.apply(AtomicBTreePartition.java:295)
>  ~[main/:na]
>   at org.apache.cassandra.utils.btree.BTree.buildInternal(BTree.java:136) 
> ~[main/:na]
>   at org.apache.cassandra.utils.btree.BTree.build(BTree.java:118) 
> ~[main/:na]
>   at org.apache.cassandra.utils.btree.BTree.update(BTree.java:177) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition.addAllWithSizeDelta(AtomicBTreePartition.java:156)
>  ~[main/:na]
>   at org.apache.cassandra.db.Memtable.put(Memtable.java:244) ~[main/:na]
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1216) 
> ~[main/:na]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:531) ~[main/:na]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:399) ~[main/:na]
>   at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:202) 
> ~[main/:na]
>   at org.apache.cassandra.db.Mutation.apply(Mutation.java:214) ~[main/:na]
>   at org.apache.cassandra.db.Mutation.apply(Mutation.java:228) ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$$Lambda$201/413275033.run(Unknown 
> Source) ~[na:na]
>   at 
> org.apache.cassandra.service.StorageProxy$8.runMayThrow(StorageProxy.java:1343)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$LocalMutationRunnable.run(StorageProxy.java:2520)
>  ~[main/:na]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_45]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[main/:na]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [main/:na]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [main/:na]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9544) Allow specification of TLS protocol to use for cqlsh

2016-02-12 Thread Cott Lang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15145252#comment-15145252
 ] 

Cott Lang commented on CASSANDRA-9544:
--

[~thobbs]  Despite the name, SSLv23 allows TLS 1.1 and TLS 1.2 to work, whereas 
TLSv1 does not. This makes it more complicated to properly secure Cassandra 
with TLS 1.2.  SSLv23 seems to be the 'normal' way of making a client SSL call. 
TLS 1.0+ should be enforced on the server side.

The error text also seems to be incorrect - it's TLSv1_1 or TLSv1_2 rather than 
TLSv1.1 or TLSv1.2.

Thanks.


> Allow specification of TLS protocol to use for cqlsh
> 
>
> Key: CASSANDRA-9544
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9544
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Jesse Szwedko
>Assignee: Jesse Szwedko
>  Labels: cqlsh, docs-impacting, tls
> Fix For: 2.1.9, 2.2.0
>
>
> Currently when using {{cqlsh}} with {{--ssl}} it tries to use TLS 1.0 to 
> connect. I have my server only serving TLS 1.2 which means that I cannot 
> connect.
> It would be nice if {{cqlsh}} allowed the TLS protocol it uses to connect to 
> be configured.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9935) Repair fails with RuntimeException

2016-02-12 Thread Jean-Francois Gosselin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15145336#comment-15145336
 ] 

Jean-Francois Gosselin commented on CASSANDRA-9935:
---

[~yukim] What's the next step to troubleshoot this issue? Any specific log we 
could enable at DEBUG?

> Repair fails with RuntimeException
> --
>
> Key: CASSANDRA-9935
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9935
> Project: Cassandra
>  Issue Type: Bug
> Environment: C* 2.1.8, Debian Wheezy
>Reporter: mlowicki
>Assignee: Yuki Morishita
> Fix For: 2.1.x
>
> Attachments: db1.sync.lati.osa.cassandra.log, 
> db5.sync.lati.osa.cassandra.log, system.log.10.210.3.117, 
> system.log.10.210.3.221, system.log.10.210.3.230
>
>
> We had problems with slow repair in 2.1.7 (CASSANDRA-9702) but after upgrade 
> to 2.1.8 it started to work faster but now it fails with:
> {code}
> ...
> [2015-07-29 20:44:03,956] Repair session 23a811b0-3632-11e5-a93e-4963524a8bde 
> for range (-5474076923322749342,-5468600594078911162] finished
> [2015-07-29 20:44:03,957] Repair session 336f8740-3632-11e5-a93e-4963524a8bde 
> for range (-8631877858109464676,-8624040066373718932] finished
> [2015-07-29 20:44:03,957] Repair session 4ccd8430-3632-11e5-a93e-4963524a8bde 
> for range (-5372806541854279315,-5369354119480076785] finished
> [2015-07-29 20:44:03,957] Repair session 59f129f0-3632-11e5-a93e-4963524a8bde 
> for range (8166489034383821955,8168408930184216281] finished
> [2015-07-29 20:44:03,957] Repair session 6ae7a9a0-3632-11e5-a93e-4963524a8bde 
> for range (6084602890817326921,6088328703025510057] finished
> [2015-07-29 20:44:03,957] Repair session 8938e4a0-3632-11e5-a93e-4963524a8bde 
> for range (-781874602493000830,-781745173070807746] finished
> [2015-07-29 20:44:03,957] Repair command #4 finished
> error: nodetool failed, check server logs
> -- StackTrace --
> java.lang.RuntimeException: nodetool failed, check server logs
> at 
> org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:290)
> at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:202)
> {code}
> After running:
> {code}
> nodetool repair --partitioner-range --parallel --in-local-dc sync
> {code}
> Last records in logs regarding repair are:
> {code}
> INFO  [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - 
> Repair session 09ff9e40-3632-11e5-a93e-4963524a8bde for range 
> (-7695808664784761779,-7693529816291585568] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - 
> Repair session 17d8d860-3632-11e5-a93e-4963524a8bde for range 
> (806371695398849,8065203836608925992] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - 
> Repair session 23a811b0-3632-11e5-a93e-4963524a8bde for range 
> (-5474076923322749342,-5468600594078911162] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - 
> Repair session 336f8740-3632-11e5-a93e-4963524a8bde for range 
> (-8631877858109464676,-8624040066373718932] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - 
> Repair session 4ccd8430-3632-11e5-a93e-4963524a8bde for range 
> (-5372806541854279315,-5369354119480076785] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - 
> Repair session 59f129f0-3632-11e5-a93e-4963524a8bde for range 
> (8166489034383821955,8168408930184216281] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - 
> Repair session 6ae7a9a0-3632-11e5-a93e-4963524a8bde for range 
> (6084602890817326921,6088328703025510057] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - 
> Repair session 8938e4a0-3632-11e5-a93e-4963524a8bde for range 
> (-781874602493000830,-781745173070807746] finished
> {code}
> but a bit further up I see (at least twice in the attached log):
> {code}
> ERROR [Thread-173887] 2015-07-29 20:44:03,853 StorageService.java:2959 - 
> Repair session 1b07ea50-3608-11e5-a93e-4963524a8bde for range 
> (5765414319217852786,5781018794516851576] failed with error 
> org.apache.cassandra.exceptions.RepairException: [repair 
> #1b07ea50-3608-11e5-a93e-4963524a8bde on sync/entity_by_id2, 
> (5765414319217852786,5781018794516851576]] Validation failed in /10.195.15.162
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
> org.apache.cassandra.exceptions.RepairException: [repair 
> #1b07ea50-3608-11e5-a93e-4963524a8bde on sync/entity_by_id2, 
> (5765414319217852786,5781018794516851576]] Validation failed in /10.195.15.162
> at java.util.concurrent.FutureTask.report(FutureTask.java:122) 
> [na:1.7.0_80]
> at java.util.concurrent.FutureTask.get(FutureTask.java:188) 
> [na:1.7.0_80]
> at 
> 

[jira] [Commented] (CASSANDRA-10733) Inconsistencies in CQLSH auto-complete

2016-02-12 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15145288#comment-15145288
 ] 

Michael Shuler commented on CASSANDRA-10733:


I fixed the cqlshlib jobs to set {enable_user_defined_functions=true} and 
re-ran them all.

> Inconsistencies in CQLSH auto-complete
> --
>
> Key: CASSANDRA-10733
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10733
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL, Tools
>Reporter: Michael Edge
>Assignee: Michael Edge
>Priority: Trivial
>  Labels: cqlsh, lhf
> Fix For: 2.2.6, 3.0.4, 3.4
>
> Attachments: 10733-fix-space-2.2.txt, 
> CASSANDRA-2.2-10733-CQLSH-Auto.patch, CASSANDRA-2.2-10733-tests.patch, 
> CASSANDRA-3.0-10733-CQLSH-Auto.patch
>
>
> Auto-complete in cqlsh does not work correctly on some commands. We see some 
> inconsistent behaviour when completing part of the statement and hitting the 
> tab key.
> {color:green}Works correctly{color}
> Auto-complete on {{'desc table '}}, {{'desc function '}} and {{'desc type '}} 
> works correctly. We see a list of all tables (or functions, types) in the 
> current keyspace plus a list of all available keyspaces followed by a full 
> stop (e.g. system.)
> {code}
> cqlsh:fxaggr> desc TABLE 
>  minutedata   system_distributed.
> ;rawtickdatabylp  system_traces.
>   rawtickdatabysymbol  tickdata
> daydata  system.  
> fxaggr.  system_auth. 
> {code}
> {color:red}Fix required{color}
> {{'desc aggregate '}} displays the aggregates in the current keyspace (in 
> this case, only 1, called 'average') but does not display a list of available 
> keyspaces. It only displays the current keyspace, with no following full stop.
> {code}
> cqlsh:fxaggr> desc aggregate 
>  ;  average  fxaggr
> {code}
> {color:green}Works correctly{color}
> Auto-complete on {{'desc table . '}} and {{'desc type 
> .'}} works correctly. We see a list of all tables (or types) in the 
> current keyspace
> {code}
> cqlsh:fxaggr> desc table fxaggr.
> daydata  rawtickdatabylp  tickdata
> minutedata   rawtickdatabysymbol  
> {code}
> {color:red}Fix required{color}
> Auto-complete on {{'desc function . '}} and {{'desc aggregate 
> .'}} works inconsistently. In a keyspace with 2 functions, both 
> beginning with the letters 'avg', if I type {{'desc function '}} 
> and hit tab, auto-complete will result in this: {{'desc function fxaggr.avg 
> '}} and will not display the matching functions. If I type {{'desc function 
> .'}} (note the trailing full stop) and hit tab, auto-complete will 
> work correctly:
> {code}
> cqlsh:fxaggr> desc function fxaggr.avg
> avgfinal  avgstate  
> {code}
> If I type {{'desc aggregate '}} and hit tab, auto-complete returns  
> {{'desc aggregate  '}}  (it adds a space) and does not show me the 
> list of available aggregates. If I type {{'desc aggregate .'}} 
> (note the trailing full stop) and hit tab, auto-complete will work correctly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11168) Hint Metrics are updated even if hinted_hand-offs=false

2016-02-12 Thread Anubhav Kale (JIRA)
Anubhav Kale created CASSANDRA-11168:


 Summary: Hint Metrics are updated even if hinted_hand-offs=false
 Key: CASSANDRA-11168
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11168
 Project: Cassandra
  Issue Type: Bug
Reporter: Anubhav Kale
Priority: Minor


In our PROD logs, we noticed a lot of hint metrics even though we have disabled 
hinted handoffs.

The reason is that StorageProxy.shouldHint has an inverted if condition. We 
should also wrap the if (hintWindowExpired) block in if 
(DatabaseDescriptor.hintedHandoffEnabled()).

The fix is easy, and I can provide a patch.
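A minimal sketch of the proposed guard (the method shape and the metric callback are assumptions for illustration, not the actual StorageProxy code): the hinted-handoff check wraps everything, so disabling handoff also silences the hint metrics.

```java
public class ShouldHintSketch {
    // Hypothetical sketch of the proposed fix: the hinted-handoff flag
    // guards both hinting and the metric update, so a cluster with
    // hinted handoff disabled records no hint metrics at all.
    static boolean shouldHint(boolean hintedHandoffEnabled,
                              boolean hintWindowExpired,
                              Runnable recordHintsDropped) {
        if (!hintedHandoffEnabled)
            return false;                 // no handoff: no hints, no metrics
        if (hintWindowExpired) {
            recordHintsDropped.run();     // metric updated only when enabled
            return false;
        }
        return true;
    }
}
```

With a guard like this, the hint-window branch is never even evaluated when handoff is off.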



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-11167) NPE when serializing ErrorMessage for Exception with null message

2016-02-12 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs reassigned CASSANDRA-11167:
---

Assignee: Tyler Hobbs

> NPE when serializing ErrorMessage for Exception with null message
> --
>
> Key: CASSANDRA-11167
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11167
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
>Priority: Minor
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> In {{ErrorMessage.encode()}} and {{encodedSize()}}, we do not handle the 
> exception having a {{null}} message.  This can result in an error like the 
> following:
> {noformat}
> ERROR [SharedPool-Worker-1] 2016-02-10 17:41:29,793  Message.java:611 - 
> Unexpected exception during request; channel = [id: 0xc2c6499a, 
> /127.0.0.1:53299 => /127.0.0.1:9042]
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.db.TypeSizes.encodedUTF8Length(TypeSizes.java:46) 
> ~[cassandra-all-3.0.3.874.jar:3.0.3.874]
> at 
> org.apache.cassandra.transport.CBUtil.sizeOfString(CBUtil.java:132) 
> ~[cassandra-all-3.0.3.874.jar:3.0.3.874]
> at 
> org.apache.cassandra.transport.messages.ErrorMessage$1.encodedSize(ErrorMessage.java:215)
>  ~[cassandra-all-3.0.3.874.jar:3.0.3.874]
> at 
> org.apache.cassandra.transport.messages.ErrorMessage$1.encodedSize(ErrorMessage.java:44)
>  ~[cassandra-all-3.0.3.874.jar:3.0.3.874]
> at 
> org.apache.cassandra.transport.Message$ProtocolEncoder.encode(Message.java:328)
>  ~[cassandra-all-3.0.3.874.jar:3.0.3.874]
> at 
> org.apache.cassandra.transport.Message$ProtocolEncoder.encode(Message.java:314)
>  ~[cassandra-all-3.0.3.874.jar:3.0.3.874]
> at 
> io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:89)
>  ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:629)
>  ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:686)
>  ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:622)
>  ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
> at 
> org.apache.cassandra.transport.Message$Dispatcher$Flusher.run(Message.java:445)
>  ~[cassandra-all-3.0.3.874.jar:3.0.3.874]
> at 
> io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:358)
>  ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357) 
> ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
> at 
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
>  ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
> at 
> io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
>  ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_71]
> {noformat}
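The root cause is a missing null check before the UTF-8 length computation; a minimal null-safe sketch (the helper name here is hypothetical, not the actual patch) would substitute an empty string when the exception carries no message:

```java
public class NullSafeMessage {
    // Hypothetical sketch in the spirit of the fix discussed above:
    // fall back to "" when the exception has no message, so that
    // encodedSize()/encode() never dereference a null string.
    static String safeMessage(Throwable t) {
        String msg = t.getMessage();
        return msg == null ? "" : msg;
    }
}
```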



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-10733) Inconsistencies in CQLSH auto-complete

2016-02-12 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15145288#comment-15145288
 ] 

Michael Shuler edited comment on CASSANDRA-10733 at 2/12/16 9:00 PM:
-

I fixed the cqlshlib jobs to set {{enable_user_defined_functions=true}} and 
re-ran them all.


was (Author: mshuler):
I fixed the cqlshlib jobs to set {enable_user_defined_functions=true} and 
re-ran them all.

> Inconsistencies in CQLSH auto-complete
> --
>
> Key: CASSANDRA-10733
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10733
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL, Tools
>Reporter: Michael Edge
>Assignee: Michael Edge
>Priority: Trivial
>  Labels: cqlsh, lhf
> Fix For: 2.2.6, 3.0.4, 3.4
>
> Attachments: 10733-fix-space-2.2.txt, 
> CASSANDRA-2.2-10733-CQLSH-Auto.patch, CASSANDRA-2.2-10733-tests.patch, 
> CASSANDRA-3.0-10733-CQLSH-Auto.patch
>
>
> Auto-complete in cqlsh does not work correctly on some commands. We see some 
> inconsistent behaviour when completing part of the statement and hitting the 
> tab key.
> {color:green}Works correctly{color}
> Auto-complete on {{'desc table '}}, {{'desc function '}} and {{'desc type '}} 
> works correctly. We see a list of all tables (or functions, types) in the 
> current keyspace plus a list of all available keyspaces followed by a full 
> stop (e.g. system.)
> {code}
> cqlsh:fxaggr> desc TABLE 
>  minutedata   system_distributed.
> ;rawtickdatabylp  system_traces.
>   rawtickdatabysymbol  tickdata
> daydata  system.  
> fxaggr.  system_auth. 
> {code}
> {color:red}Fix required{color}
> {{'desc aggregate '}} displays the aggregates in the current keyspace (in 
> this case, only 1, called 'average') but does not display a list of available 
> keyspaces. It only displays the current keyspace, with no following full stop.
> {code}
> cqlsh:fxaggr> desc aggregate 
>  ;  average  fxaggr
> {code}
> {color:green}Works correctly{color}
> Auto-complete on {{'desc table . '}} and {{'desc type 
> .'}} works correctly. We see a list of all tables (or types) in the 
> current keyspace
> {code}
> cqlsh:fxaggr> desc table fxaggr.
> daydata  rawtickdatabylp  tickdata
> minutedata   rawtickdatabysymbol  
> {code}
> {color:red}Fix required{color}
> Auto-complete on {{'desc function . '}} and {{'desc aggregate 
> .'}} works inconsistently. In a keyspace with 2 functions, both 
> beginning with the letters 'avg', if I type {{'desc function '}} 
> and hit tab, auto-complete will result in this: {{'desc function fxaggr.avg 
> '}} and will not display the matching functions. If I type {{'desc function 
> .'}} (note the trailing full stop) and hit tab, auto-complete will 
> work correctly:
> {code}
> cqlsh:fxaggr> desc function fxaggr.avg
> avgfinal  avgstate  
> {code}
> If I type {{'desc aggregate '}} and hit tab, auto-complete returns  
> {{'desc aggregate  '}}  (it adds a space) and does not show me the 
> list of available aggregates. If I type {{'desc aggregate .'}} 
> (note the trailing full stop) and hit tab, auto-complete will work correctly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8616) sstable tools may result in commit log segments being written

2016-02-12 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8616:
---
Reproduced In: 2.1.3, 2.0.10  (was: 2.0.10, 2.1.3)
 Reviewer: Tyler Hobbs

bq. Tyler Hobbs are you good finishing review on this since you're still marked 
reviewer?

Sure, I'll review.

> sstable tools may result in commit log segments being written
> --
>
> Key: CASSANDRA-8616
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8616
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Tyler Hobbs
>Assignee: Yuki Morishita
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: 8161-2.0.txt
>
>
> There was a report of sstable2json causing commitlog segments to be written 
> out when run.  I haven't attempted to reproduce this yet, so that's all I 
> know for now.  Since sstable2json loads the conf and schema, I'm thinking 
> that it may inadvertently be triggering the commitlog code.
> sstablescrub, sstableverify, and other sstable tools have the same issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11164) Order and filter cipher suites correctly

2016-02-12 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15145213#comment-15145213
 ] 

Stefan Podkowinski commented on CASSANDRA-11164:


bq. You need the filtering to ensure that you don't attempt to use an 
unsupported cipher suite. 

You should never have to pick a cipher. The TLS protocol will handle this 
during 
[handshake|https://en.wikipedia.org/wiki/Transport_Layer_Security#TLS_handshake]
 as part of the cipher suite negotiation. The client will offer a list of 
supported ciphers that the server can choose from. The only reason you would 
want to manually filter ciphers is to avoid [downgrade 
attacks|https://en.wikipedia.org/wiki/Downgrade_attack]. As SSL in Java 8 isn't 
known to be vulnerable to such attacks, there's no point in manually filtering 
ciphers or protocols. Therefore I'd suggest sticking with CASSANDRA-10508.
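For reference, dropping unsupported suites while preserving the configured order is only a few lines; the sketch below is illustrative only (names are assumptions), not the actual SSLFactory.filterCipherSuites() implementation:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CipherFilterSketch {
    // Minimal sketch: keep only the desired suites that are supported,
    // preserving the order in which they were configured
    // (e.g. the order listed in cassandra.yaml).
    static String[] filterPreservingOrder(String[] desired, String[] supported) {
        List<String> supportedList = Arrays.asList(supported);
        List<String> kept = new ArrayList<>();
        for (String suite : desired)
            if (supportedList.contains(suite))
                kept.add(suite);
        return kept.toArray(new String[0]);
    }
}
```

Iterating over the desired list (rather than the supported list) is what keeps the configured preference order intact.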

> Order and filter cipher suites correctly
> 
>
> Key: CASSANDRA-11164
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11164
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Tom Petracca
>Priority: Minor
> Fix For: 2.2.x
>
> Attachments: 11164-2.2.txt
>
>
> As pointed out in https://issues.apache.org/jira/browse/CASSANDRA-10508, 
> SSLFactory.filterCipherSuites() doesn't respect the ordering of desired 
> ciphers in cassandra.yaml.
> Also the fix that occurred for 
> https://issues.apache.org/jira/browse/CASSANDRA-3278 is incomplete and needs 
> to be applied to all locations where we create an SSLSocket so that JCE is 
> not required out of the box or with additional configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10793) fix ohc and java-driver pom dependencies in build.xml

2016-02-12 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15145412#comment-15145412
 ] 

Jeremiah Jordan commented on CASSANDRA-10793:
-

+1 LGTM

> fix ohc and java-driver pom dependencies in build.xml
> -
>
> Key: CASSANDRA-10793
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10793
> Project: Cassandra
>  Issue Type: Bug
>  Components: Packaging
>Reporter: Jeremiah Jordan
>Assignee: Robert Stupp
> Fix For: 3.0.x
>
>
> ohc-core/ohc-core-j8 should be included in the  section of the 
> build.xml. Otherwise, when getting cassandra-all from maven, the row cache 
> doesn't work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11168) Hint Metrics are updated even if hinted_hand-offs=false

2016-02-12 Thread Anubhav Kale (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anubhav Kale updated CASSANDRA-11168:
-
Description: 
In our PROD logs, we noticed a lot of hint metrics even though we have disabled 
hinted handoffs.

The reason is StorageProxy.ShouldHint has an inverted if condition. 
We should also wrap the if (hintWindowExpired) block in if 
(DatabaseDescriptor.hintedHandoffEnabled()).

The fix is easy, and I can provide a patch.

  was:
In our PROD logs, we noticed a lot of hint metrics even though we have disabled 
hinted handoffs.

The reason is StorageProxy.ShouldHint has an inverted if condition. We should 
also wrap the if (hintWindowExpired) block in if 
(DatabaseDescriptor.hintedHandoffEnabled()) as well.

The fix is easy, and I can provide a patch.


> Hint Metrics are updated even if hinted_hand-offs=false
> ---
>
> Key: CASSANDRA-11168
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11168
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Anubhav Kale
>Priority: Minor
>
> In our PROD logs, we noticed a lot of hint metrics even though we have 
> disabled hinted handoffs.
> The reason is StorageProxy.ShouldHint has an inverted if condition. 
> We should also wrap the if (hintWindowExpired) block in if 
> (DatabaseDescriptor.hintedHandoffEnabled()).
> The fix is easy, and I can provide a patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11135) test_timestamp_output in the cqlshlib tests is failing

2016-02-12 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15145584#comment-15145584
 ] 

Paulo Motta commented on CASSANDRA-11135:
-

[~philipthompson] Could you please also setup a cqlshlib cassci run for the 
above branch? Thanks a lot!

> test_timestamp_output in the cqlshlib tests is failing
> --
>
> Key: CASSANDRA-11135
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11135
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 3.x
>
>
> See here:
> http://cassci.datastax.com/view/trunk/job/trunk_cqlshlib/738/testReport/cqlshlib.test.test_cqlsh_output/TestCqlshOutput/test_timestamp_output/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8616) sstable tools may result in commit log segments being written

2016-02-12 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15145613#comment-15145613
 ] 

Tyler Hobbs commented on CASSANDRA-8616:


The fix looks good to me as a temporary solution.  However, I think you missed 
a few tools that also need the fix:
* {{SSTableLevelResetter}}
* {{SSTableExpiredBlockers}}
* {{BulkLoader}}

And in later versions, we also need to fix:
* {{SSTableRepairedAtSetter}}
* {{StandaloneVerifier}}
* {{StandaloneSSTableUtil}}

As Sylvain mentions, it would be good to add regression dtests for these.

> sstable tools may result in commit log segments being written
> --
>
> Key: CASSANDRA-8616
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8616
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Tyler Hobbs
>Assignee: Yuki Morishita
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: 8161-2.0.txt
>
>
> There was a report of sstable2json causing commitlog segments to be written 
> out when run.  I haven't attempted to reproduce this yet, so that's all I 
> know for now.  Since sstable2json loads the conf and schema, I'm thinking 
> that it may inadvertently be triggering the commitlog code.
> sstablescrub, sstableverify, and other sstable tools have the same issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11169) [sasi] exception thrown when trying to index row with index on set

2016-02-12 Thread Jon Haddad (JIRA)
Jon Haddad created CASSANDRA-11169:
--

 Summary: [sasi] exception thrown when trying to index row with 
index on set
 Key: CASSANDRA-11169
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11169
 Project: Cassandra
  Issue Type: Bug
Reporter: Jon Haddad


I have a brand new cluster, built off 1944bf507d66b5c103c136319caeb4a9e3767a69

I created a new table with a set, then a SASI index on the set. When I tried 
to insert a row with a set, Cassandra threw an exception and became 
unavailable.

{code}
cqlsh> create KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': 1};
cqlsh> use test;
cqlsh:test> create table a (id int PRIMARY KEY , s set<text> );
cqlsh:test>
cqlsh:test> create CUSTOM INDEX on a(s) USING '
 
cqlsh:test> create CUSTOM INDEX on a(s) USING '
 
cqlsh:test> create CUSTOM INDEX on a(s) USING 
'org.apache.cassandra.index.sasi.SASIIndex';
cqlsh:test> insert into a (id, s) values (1, {"jon", "haddad"});
SyntaxException:  message="line 1:39 no viable alternative at input ',' (... s) 
values (1, [{]"jon",...)">
cqlsh:test> insert into a (id, s) values (1, {'jon', 'haddad'});
WriteTimeout: code=1100 [Coordinator node timed out waiting for replica nodes' 
responses] message="Operation timed out - received only 0 responses." 
info={'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
{code}

Cassandra stacktrace:

{code}
java.lang.AssertionError: null
at org.apache.cassandra.db.rows.BTreeRow.getCell(BTreeRow.java:212) 
~[main/:na]
at 
org.apache.cassandra.index.sasi.conf.ColumnIndex.getValueOf(ColumnIndex.java:194)
 ~[main/:na]
at 
org.apache.cassandra.index.sasi.conf.ColumnIndex.index(ColumnIndex.java:95) 
~[main/:na]
at 
org.apache.cassandra.index.sasi.SASIIndex$1.insertRow(SASIIndex.java:247) 
~[main/:na]
at 
org.apache.cassandra.index.SecondaryIndexManager$WriteTimeTransaction.onInserted(SecondaryIndexManager.java:808)
 ~[main/:na]
at 
org.apache.cassandra.db.partitions.AtomicBTreePartition$RowUpdater.apply(AtomicBTreePartition.java:335)
 ~[main/:na]
at 
org.apache.cassandra.db.partitions.AtomicBTreePartition$RowUpdater.apply(AtomicBTreePartition.java:295)
 ~[main/:na]
at org.apache.cassandra.utils.btree.BTree.buildInternal(BTree.java:136) 
~[main/:na]
at org.apache.cassandra.utils.btree.BTree.build(BTree.java:118) 
~[main/:na]
at org.apache.cassandra.utils.btree.BTree.update(BTree.java:177) 
~[main/:na]
at 
org.apache.cassandra.db.partitions.AtomicBTreePartition.addAllWithSizeDelta(AtomicBTreePartition.java:156)
 ~[main/:na]
at org.apache.cassandra.db.Memtable.put(Memtable.java:244) ~[main/:na]
at 
org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1216) 
~[main/:na]
at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:531) ~[main/:na]
at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:399) ~[main/:na]
at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:202) 
~[main/:na]
at org.apache.cassandra.db.Mutation.apply(Mutation.java:214) ~[main/:na]
at org.apache.cassandra.db.Mutation.apply(Mutation.java:228) ~[main/:na]
at 
org.apache.cassandra.service.StorageProxy$$Lambda$201/413275033.run(Unknown 
Source) ~[na:na]
at 
org.apache.cassandra.service.StorageProxy$8.runMayThrow(StorageProxy.java:1343) 
~[main/:na]
at 
org.apache.cassandra.service.StorageProxy$LocalMutationRunnable.run(StorageProxy.java:2520)
 ~[main/:na]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_45]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 ~[main/:na]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
 [main/:na]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[main/:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
{code}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10767) Checking version of Cassandra command creates `cassandra.logdir_IS_UNDEFINED/`

2016-02-12 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15145601#comment-15145601
 ] 

Paulo Motta commented on CASSANDRA-10767:
-

Tested on Windows and it works there as well. Marking as ready to commit. 
Thanks!

> Checking version of Cassandra command creates `cassandra.logdir_IS_UNDEFINED/`
> --
>
> Key: CASSANDRA-10767
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10767
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: $ cassandra -v   
>  
> 2.1.2
> MacOSX 10.9.5
> $ brew info cassandra 
>  [14:15:41]
> cassandra: stable 2.2.3 (bottled)
> Eventually consistent, distributed key-value store
> https://cassandra.apache.org
> /usr/local/Cellar/cassandra/2.1.2 (3975 files, 92M) *
>   Built from source
> From: 
> https://github.com/Homebrew/homebrew/blob/master/Library/Formula/cassandra.rb
> ==> Caveats
> To have launchd start cassandra at login:
>   ln -sfv /usr/local/opt/cassandra/*.plist ~/Library/LaunchAgents
> Then to load cassandra now:
>   launchctl load ~/Library/LaunchAgents/homebrew.mxcl.cassandra.plist
>Reporter: Jens Rantil
>Assignee: Yuki Morishita
>Priority: Trivial
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> When I execute `cassandra -v` on the terminal the directory 
> `cassandra.logdir_IS_UNDEFINED` is created in my CWD:
> {noformat}
> $ tree cassandra.logdir_IS_UNDEFINED
> cassandra.logdir_IS_UNDEFINED
> └── system.log
> 0 directories, 1 file
> {noformat}
> Expected: That no log file nor directory is created when I'm simply checking 
> the version of Cassandra. Feels a bit ridiculous.
> Additionals: Just double checking, is this a bundling issue that should be 
> reported to Homebrew? Probably not, right?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11169) [sasi] exception thrown when trying to index row with index on set

2016-02-12 Thread Jon Haddad (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15145748#comment-15145748
 ] 

Jon Haddad commented on CASSANDRA-11169:


/cc [~xedin]

> [sasi] exception thrown when trying to index row with index on set
> 
>
> Key: CASSANDRA-11169
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11169
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jon Haddad
>
> I have a brand new cluster, built off 1944bf507d66b5c103c136319caeb4a9e3767a69
> I created a new table with a set, then a SASI index on the set. When I 
> tried to insert a row with a set, Cassandra threw an exception and became 
> unavailable.
> {code}
> cqlsh> create KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 1};
> cqlsh> use test;
> cqlsh:test> create table a (id int PRIMARY KEY , s set<text> );
> cqlsh:test>
> cqlsh:test> create CUSTOM INDEX on a(s) USING '
>  
> cqlsh:test> create CUSTOM INDEX on a(s) USING '
>  
> cqlsh:test> create CUSTOM INDEX on a(s) USING 
> 'org.apache.cassandra.index.sasi.SASIIndex';
> cqlsh:test> insert into a (id, s) values (1, {"jon", "haddad"});
> SyntaxException:  message="line 1:39 no viable alternative at input ',' (... s) values (1, 
> [{]"jon",...)">
> cqlsh:test> insert into a (id, s) values (1, {'jon', 'haddad'});
> WriteTimeout: code=1100 [Coordinator node timed out waiting for replica 
> nodes' responses] message="Operation timed out - received only 0 responses." 
> info={'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
> {code}
> Cassandra stacktrace:
> {code}
> java.lang.AssertionError: null
>   at org.apache.cassandra.db.rows.BTreeRow.getCell(BTreeRow.java:212) 
> ~[main/:na]
>   at 
> org.apache.cassandra.index.sasi.conf.ColumnIndex.getValueOf(ColumnIndex.java:194)
>  ~[main/:na]
>   at 
> org.apache.cassandra.index.sasi.conf.ColumnIndex.index(ColumnIndex.java:95) 
> ~[main/:na]
>   at 
> org.apache.cassandra.index.sasi.SASIIndex$1.insertRow(SASIIndex.java:247) 
> ~[main/:na]
>   at 
> org.apache.cassandra.index.SecondaryIndexManager$WriteTimeTransaction.onInserted(SecondaryIndexManager.java:808)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition$RowUpdater.apply(AtomicBTreePartition.java:335)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition$RowUpdater.apply(AtomicBTreePartition.java:295)
>  ~[main/:na]
>   at org.apache.cassandra.utils.btree.BTree.buildInternal(BTree.java:136) 
> ~[main/:na]
>   at org.apache.cassandra.utils.btree.BTree.build(BTree.java:118) 
> ~[main/:na]
>   at org.apache.cassandra.utils.btree.BTree.update(BTree.java:177) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition.addAllWithSizeDelta(AtomicBTreePartition.java:156)
>  ~[main/:na]
>   at org.apache.cassandra.db.Memtable.put(Memtable.java:244) ~[main/:na]
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1216) 
> ~[main/:na]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:531) ~[main/:na]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:399) ~[main/:na]
>   at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:202) 
> ~[main/:na]
>   at org.apache.cassandra.db.Mutation.apply(Mutation.java:214) ~[main/:na]
>   at org.apache.cassandra.db.Mutation.apply(Mutation.java:228) ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$$Lambda$201/413275033.run(Unknown 
> Source) ~[na:na]
>   at 
> org.apache.cassandra.service.StorageProxy$8.runMayThrow(StorageProxy.java:1343)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$LocalMutationRunnable.run(StorageProxy.java:2520)
>  ~[main/:na]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_45]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[main/:na]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [main/:na]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [main/:na]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10397) CQLSH not displaying correct timezone

2016-02-12 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15145577#comment-15145577
 ] 

Paulo Motta commented on CASSANDRA-10397:
-

Thanks! Patch looks good. I fixed a couple of pep8 violations, added a simple 
test with {{TZ}} when pytz is on the path, made some minor updates to the 
warning message wording, and added an extra {{\n}} so the warnings are noticed 
more easily on startup. Submitted cassci tests; next I'll check that it works 
correctly on Windows.

||2.2||3.0||trunk||
|[branch|https://github.com/apache/cassandra/compare/cassandra-2.2...pauloricardomg:2.2-10397]|[branch|https://github.com/apache/cassandra/compare/cassandra-3.0...pauloricardomg:3.0-10397]|[branch|https://github.com/apache/cassandra/compare/trunk...pauloricardomg:trunk-10397]|
|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-2.2-10397-testall/lastCompletedBuild/testReport/]|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-3.0-10397-testall/lastCompletedBuild/testReport/]|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-trunk-10397-testall/lastCompletedBuild/testReport/]|
|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-2.2-10397-dtest/lastCompletedBuild/testReport/]|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-3.0-10397-dtest/lastCompletedBuild/testReport/]|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-trunk-10397-dtest/lastCompletedBuild/testReport/]|

[~philipthompson] Could you setup a cqlshlib cassci run with the branches 
above? Do you know if the cassci machines have pytz installed? Thanks!

> CQLSH not displaying correct timezone
> -
>
> Key: CASSANDRA-10397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10397
> Project: Cassandra
>  Issue Type: Bug
> Environment: Ubuntu 14.04 LTS
>Reporter: Suleman Rai
>Assignee: Stefan Podkowinski
>  Labels: cqlsh
>
> CQLSH is not adding the timezone offset to the timestamp after it has been 
> inserted into a table.
> create table test(id int PRIMARY KEY, time timestamp);
> INSERT INTO test(id,time) values (1,dateof(now()));
> select *from test;
> id | time
> ----+---------------------
>   1 | 2015-09-25 13:00:32
> It is just displaying the default UTC timestamp without adding the timezone 
> offset. It should be 2015-09-25 21:00:32 in my case as my timezone offset is 
> +0800.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10397) Add local timezone support to cqlsh

2016-02-12 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-10397:

Priority: Minor  (was: Major)

> Add local timezone support to cqlsh
> ---
>
> Key: CASSANDRA-10397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10397
> Project: Cassandra
>  Issue Type: Improvement
> Environment: Ubuntu 14.04 LTS
>Reporter: Suleman Rai
>Assignee: Stefan Podkowinski
>Priority: Minor
>  Labels: cqlsh
>





[jira] [Updated] (CASSANDRA-10397) Add local timezone support to cqlsh

2016-02-12 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-10397:

Summary: Add local timezone support to cqlsh  (was: CQLSH not displaying 
correct timezone)

> Add local timezone support to cqlsh
> ---
>
> Key: CASSANDRA-10397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10397
> Project: Cassandra
>  Issue Type: Bug
> Environment: Ubuntu 14.04 LTS
>Reporter: Suleman Rai
>Assignee: Stefan Podkowinski
>  Labels: cqlsh
>





[jira] [Updated] (CASSANDRA-10397) Add local timezone support to cqlsh

2016-02-12 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-10397:

Issue Type: Improvement  (was: Bug)

> Add local timezone support to cqlsh
> ---
>
> Key: CASSANDRA-10397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10397
> Project: Cassandra
>  Issue Type: Improvement
> Environment: Ubuntu 14.04 LTS
>Reporter: Suleman Rai
>Assignee: Stefan Podkowinski
>  Labels: cqlsh
>





[jira] [Updated] (CASSANDRA-11169) [sasi] exception thrown when trying to index row with index on set

2016-02-12 Thread Jon Haddad (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jon Haddad updated CASSANDRA-11169:
---
Description: 
I have a brand new cluster, built off 1944bf507d66b5c103c136319caeb4a9e3767a69

I created a new table with a set, then a SASI index on the set. When I tried 
to insert a row with a set, Cassandra threw an exception and became 
unavailable.

{code}
cqlsh> create KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': 1};
cqlsh> use test;
cqlsh:test> create table a (id int PRIMARY KEY, s set<text>);
cqlsh:test> create CUSTOM INDEX on a(s) USING 
'org.apache.cassandra.index.sasi.SASIIndex';

cqlsh:test> insert into a (id, s) values (1, {'jon', 'haddad'});
WriteTimeout: code=1100 [Coordinator node timed out waiting for replica nodes' 
responses] message="Operation timed out - received only 0 responses." 
info={'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
{code}

Cassandra stacktrace:

{code}
java.lang.AssertionError: null
at org.apache.cassandra.db.rows.BTreeRow.getCell(BTreeRow.java:212) 
~[main/:na]
at 
org.apache.cassandra.index.sasi.conf.ColumnIndex.getValueOf(ColumnIndex.java:194)
 ~[main/:na]
at 
org.apache.cassandra.index.sasi.conf.ColumnIndex.index(ColumnIndex.java:95) 
~[main/:na]
at 
org.apache.cassandra.index.sasi.SASIIndex$1.insertRow(SASIIndex.java:247) 
~[main/:na]
at 
org.apache.cassandra.index.SecondaryIndexManager$WriteTimeTransaction.onInserted(SecondaryIndexManager.java:808)
 ~[main/:na]
at 
org.apache.cassandra.db.partitions.AtomicBTreePartition$RowUpdater.apply(AtomicBTreePartition.java:335)
 ~[main/:na]
at 
org.apache.cassandra.db.partitions.AtomicBTreePartition$RowUpdater.apply(AtomicBTreePartition.java:295)
 ~[main/:na]
at org.apache.cassandra.utils.btree.BTree.buildInternal(BTree.java:136) 
~[main/:na]
at org.apache.cassandra.utils.btree.BTree.build(BTree.java:118) 
~[main/:na]
at org.apache.cassandra.utils.btree.BTree.update(BTree.java:177) 
~[main/:na]
at 
org.apache.cassandra.db.partitions.AtomicBTreePartition.addAllWithSizeDelta(AtomicBTreePartition.java:156)
 ~[main/:na]
at org.apache.cassandra.db.Memtable.put(Memtable.java:244) ~[main/:na]
at 
org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1216) 
~[main/:na]
at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:531) ~[main/:na]
at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:399) ~[main/:na]
at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:202) 
~[main/:na]
at org.apache.cassandra.db.Mutation.apply(Mutation.java:214) ~[main/:na]
at org.apache.cassandra.db.Mutation.apply(Mutation.java:228) ~[main/:na]
at 
org.apache.cassandra.service.StorageProxy$$Lambda$201/413275033.run(Unknown 
Source) ~[na:na]
at 
org.apache.cassandra.service.StorageProxy$8.runMayThrow(StorageProxy.java:1343) 
~[main/:na]
at 
org.apache.cassandra.service.StorageProxy$LocalMutationRunnable.run(StorageProxy.java:2520)
 ~[main/:na]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_45]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 ~[main/:na]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
 [main/:na]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[main/:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
{code}


  was:
I have a brand new cluster, built off 1944bf507d66b5c103c136319caeb4a9e3767a69

I created a new table with a set, then a SASI index on the set.  I tried 
to insert a row with a set, Cassandra throws an exception and becomes 
unavailable.

{code}
cqlsh> create KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': 1};
cqlsh> use test;
cqlsh:test> create table a (id int PRIMARY KEY , s set );
cqlsh:test> create CUSTOM INDEX on a(s) USING 
'org.apache.cassandra.index.sasi.SASIIndex';

cqlsh:test> insert into a (id, s) values (1, {"jon", "haddad"});
SyntaxException: 
cqlsh:test> insert into a (id, s) values (1, {'jon', 'haddad'});
WriteTimeout: code=1100 [Coordinator node timed out waiting for replica nodes' 
responses] message="Operation timed out - received only 0 responses." 
info={'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
{code}

Cassandra stacktrace:

{code}
java.lang.AssertionError: null
at org.apache.cassandra.db.rows.BTreeRow.getCell(BTreeRow.java:212) 
~[main/:na]
at 
org.apache.cassandra.index.sasi.conf.ColumnIndex.getValueOf(ColumnIndex.java:194)
 ~[main/:na]
at 

[jira] [Commented] (CASSANDRA-11169) [sasi] exception thrown when trying to index row with index on set

2016-02-12 Thread Jon Haddad (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15145751#comment-15145751
 ] 

Jon Haddad commented on CASSANDRA-11169:


FWIW, the behavior I was expecting is to have a prefix index on each of the 
elements in the set.
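
The expected behavior above can be illustrated with a toy inverted index that indexes each element of a set column individually, so a prefix query matches any element (purely illustrative; {{SetPrefixIndex}} is a hypothetical sketch, not SASI's actual on-disk structure):

```python
from collections import defaultdict

class SetPrefixIndex:
    """Toy per-element index for a set column: every element is indexed
    on its own, so a prefix search hits rows whose set contains any
    matching element."""

    def __init__(self):
        self._postings = defaultdict(set)  # element -> row ids

    def insert(self, row_id, values):
        # Index each element of the set separately.
        for v in values:
            self._postings[v].add(row_id)

    def prefix_search(self, prefix):
        # Linear scan over terms; a real index would use a trie.
        hits = set()
        for term, rows in self._postings.items():
            if term.startswith(prefix):
                hits |= rows
        return hits

idx = SetPrefixIndex()
idx.insert(1, {'jon', 'haddad'})
print(idx.prefix_search('had'))  # {1}
```

SASI instead hits the assertion in {{BTreeRow.getCell}} because it fetches the set as a single regular cell rather than iterating its complex-column cells.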

> [sasi] exception thrown when trying to index row with index on set
> 
>
> Key: CASSANDRA-11169
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11169
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jon Haddad
>





[jira] [Updated] (CASSANDRA-11169) [sasi] exception thrown when trying to index row with index on set

2016-02-12 Thread Jon Haddad (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jon Haddad updated CASSANDRA-11169:
---
Description: 
I have a brand new cluster, built off 1944bf507d66b5c103c136319caeb4a9e3767a69

I created a new table with a set, then a SASI index on the set. When I tried 
to insert a row with a set, Cassandra threw an exception and became 
unavailable.

{code}
cqlsh> create KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': 1};
cqlsh> use test;
cqlsh:test> create table a (id int PRIMARY KEY, s set<text>);
cqlsh:test> create CUSTOM INDEX on a(s) USING 
'org.apache.cassandra.index.sasi.SASIIndex';

cqlsh:test> insert into a (id, s) values (1, {"jon", "haddad"});
SyntaxException: 
cqlsh:test> insert into a (id, s) values (1, {'jon', 'haddad'});
WriteTimeout: code=1100 [Coordinator node timed out waiting for replica nodes' 
responses] message="Operation timed out - received only 0 responses." 
info={'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
{code}

Cassandra stacktrace:

{code}
java.lang.AssertionError: null
at org.apache.cassandra.db.rows.BTreeRow.getCell(BTreeRow.java:212) 
~[main/:na]
at 
org.apache.cassandra.index.sasi.conf.ColumnIndex.getValueOf(ColumnIndex.java:194)
 ~[main/:na]
at 
org.apache.cassandra.index.sasi.conf.ColumnIndex.index(ColumnIndex.java:95) 
~[main/:na]
at 
org.apache.cassandra.index.sasi.SASIIndex$1.insertRow(SASIIndex.java:247) 
~[main/:na]
at 
org.apache.cassandra.index.SecondaryIndexManager$WriteTimeTransaction.onInserted(SecondaryIndexManager.java:808)
 ~[main/:na]
at 
org.apache.cassandra.db.partitions.AtomicBTreePartition$RowUpdater.apply(AtomicBTreePartition.java:335)
 ~[main/:na]
at 
org.apache.cassandra.db.partitions.AtomicBTreePartition$RowUpdater.apply(AtomicBTreePartition.java:295)
 ~[main/:na]
at org.apache.cassandra.utils.btree.BTree.buildInternal(BTree.java:136) 
~[main/:na]
at org.apache.cassandra.utils.btree.BTree.build(BTree.java:118) 
~[main/:na]
at org.apache.cassandra.utils.btree.BTree.update(BTree.java:177) 
~[main/:na]
at 
org.apache.cassandra.db.partitions.AtomicBTreePartition.addAllWithSizeDelta(AtomicBTreePartition.java:156)
 ~[main/:na]
at org.apache.cassandra.db.Memtable.put(Memtable.java:244) ~[main/:na]
at 
org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1216) 
~[main/:na]
at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:531) ~[main/:na]
at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:399) ~[main/:na]
at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:202) 
~[main/:na]
at org.apache.cassandra.db.Mutation.apply(Mutation.java:214) ~[main/:na]
at org.apache.cassandra.db.Mutation.apply(Mutation.java:228) ~[main/:na]
at 
org.apache.cassandra.service.StorageProxy$$Lambda$201/413275033.run(Unknown 
Source) ~[na:na]
at 
org.apache.cassandra.service.StorageProxy$8.runMayThrow(StorageProxy.java:1343) 
~[main/:na]
at 
org.apache.cassandra.service.StorageProxy$LocalMutationRunnable.run(StorageProxy.java:2520)
 ~[main/:na]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_45]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 ~[main/:na]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
 [main/:na]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[main/:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
{code}


  was:
I have a brand new cluster, built off 1944bf507d66b5c103c136319caeb4a9e3767a69

I created a new table with a set, then a SASI index on the set.  I tried 
to insert a row with a set, Cassandra throws an exception and becomes 
unavailable.

{code}
cqlsh> create KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': 1};
cqlsh> use test;
cqlsh:test> create table a (id int PRIMARY KEY , s set );
cqlsh:test>
cqlsh:test> create CUSTOM INDEX on a(s) USING '
 
cqlsh:test> create CUSTOM INDEX on a(s) USING '
 
cqlsh:test> create CUSTOM INDEX on a(s) USING 
'org.apache.cassandra.index.sasi.SASIIndex';
cqlsh:test> insert into a (id, s) values (1, {"jon", "haddad"});
SyntaxException: 
cqlsh:test> insert into a (id, s) values (1, {'jon', 'haddad'});
WriteTimeout: code=1100 [Coordinator node timed out waiting for replica nodes' 
responses] message="Operation timed out - received only 0 responses." 
info={'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
{code}

Cassandra stacktrace:

{code}
java.lang.AssertionError: