[jira] [Commented] (CASSANDRA-11850) cannot use cql since upgrading python to 2.7.11+

2016-07-07 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15367306#comment-15367306
 ] 

Stefania commented on CASSANDRA-11850:
--

The reason {{future.is_schema_agreed}} is always true is probably the issue 
described in this [pull 
request|https://github.com/datastax/python-driver/pull/615].
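
For context, the {{TypeError('ref() does not take keyword arguments',)}} in the report below is consistent with Python 2.7.11+ tightening {{weakref.ref}} so that its callback can only be passed positionally; earlier interpreters silently accepted the keyword form. A minimal stdlib sketch of the difference (the {{Session}} class is only an illustrative stand-in, not driver code):

```python
import weakref

class Session(object):
    """Illustrative stand-in for any weak-referenceable object (not driver code)."""

obj = Session()

# Passing the callback positionally is accepted on every interpreter:
ref = weakref.ref(obj, lambda r: None)
print(ref() is obj)  # True while obj is alive

# Python 2.7.11+ rejects keyword arguments to weakref.ref, so code doing the
# equivalent of the call below started failing with
# "TypeError: ref() does not take keyword arguments":
try:
    weakref.ref(obj, callback=lambda r: None)
except TypeError as exc:
    print(exc)
```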

> cannot use cql since upgrading python to 2.7.11+
> 
>
> Key: CASSANDRA-11850
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11850
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Development
>Reporter: Andrew Madison
>Assignee: Stefania
>  Labels: cqlsh
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> OS: Debian GNU/Linux stretch/sid 
> Kernel: 4.5.0-2-amd64 #1 SMP Debian 4.5.4-1 (2016-05-16) x86_64 GNU/Linux
> Python version: 2.7.11+ (default, May  9 2016, 15:54:33)
> [GCC 5.3.1 20160429]
> cqlsh --version: cqlsh 5.0.1
> cassandra -v: 3.5 (also occurs with 3.0.6)
> Issue:
> when running cqlsh, it returns the following error:
> cqlsh -u dbarpt_usr01
> Password: *
> Connection error: ('Unable to connect to any servers', {'odbasandbox1': 
> TypeError('ref() does not take keyword arguments',)})
> I cleared PYTHONPATH:
> python -c "import json; print dir(json); print json.__version__"
> ['JSONDecoder', 'JSONEncoder', '__all__', '__author__', '__builtins__', 
> '__doc__', '__file__', '__name__', '__package__', '__path__', '__version__', 
> '_default_decoder', '_default_encoder', 'decoder', 'dump', 'dumps', 
> 'encoder', 'load', 'loads', 'scanner']
> 2.0.9
> Java based clients can connect to Cassandra with no issue. Just CQLSH and 
> Python clients cannot.
> nodetool status also works.
> Thank you for your help.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11950) dtest failure in consistency_test.TestAvailability.test_network_topology_strategy_each_quorum

2016-07-07 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15367304#comment-15367304
 ] 

Stefania commented on CASSANDRA-11950:
--

I've modified the test code to avoid the {{UnknownColumnFamilyException}} 
warnings in the logs by leveraging 
{{session.cluster.control_connection.wait_for_schema_agreement}}; the changes 
are [here|https://github.com/stef1927/cassandra-dtest/commits/11950]. 

Before I create the pull request, I want to wait on another pull request I've 
created for the driver 
[here|https://github.com/datastax/python-driver/pull/615]: it seems replicas 
without an open connection are excluded, as are remote replicas unless we use a 
DC-aware load balancing policy that also contacts remote hosts. I want to 
confirm whether we really need to change the load balancing policy, or whether 
we can change the behavior in the driver instead, cc [~aholmber].
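
The dtest change boils down to blocking until schema agreement before issuing writes. The driver call named above ({{wait_for_schema_agreement}}) is real; the generic stdlib wait-until helper below, and its names, are only an illustrative sketch of that pattern:

```python
import time

def wait_until(condition, timeout=30.0, interval=0.5):
    """Poll `condition` until it returns truthy or `timeout` seconds elapse.

    In a dtest the condition would be something like
    session.cluster.control_connection.wait_for_schema_agreement;
    here it is just any zero-argument callable.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False
```

Polling with a deadline (rather than a fixed sleep) keeps the test fast when agreement is quick and bounded when it never arrives.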

> dtest failure in 
> consistency_test.TestAvailability.test_network_topology_strategy_each_quorum
> -
>
> Key: CASSANDRA-11950
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11950
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Stefania
>  Labels: dtest
> Fix For: 3.x
>
> Attachments: node1.log, node1_debug.log, node2.log, node2_debug.log, 
> node3.log, node3_debug.log, node4.log, node4_debug.log, node5.log, 
> node5_debug.log, node6.log, node6_debug.log, node7.log, node7_debug.log, 
> node8.log, node8_debug.log, node9.log, node9_debug.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_large_dtest/10/testReport/consistency_test/TestAvailability/test_network_topology_strategy_each_quorum
> Failed on CassCI build trunk_large_dtest #10
> Logs are attached.
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 358, in run
> self.tearDown()
>   File "/home/automaton/cassandra-dtest/dtest.py", line 719, in tearDown
> raise AssertionError('Unexpected error in log, see stdout')
> Standard Output
> Unexpected error in node3 log, error: 
> ERROR [SharedPool-Worker-1] 2016-06-03 14:25:27,460 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-2] 2016-06-03 14:25:27,460 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-3] 2016-06-03 14:25:27,462 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-2] 2016-06-03 14:25:27,464 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-3] 2016-06-03 14:25:27,464 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-1] 2016-06-03 14:25:27,465 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-4] 2016-06-03 14:25:27,465 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-5] 2016-06-03 14:25:27,465 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-7] 2016-06-03 14:25:27,465 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-6] 2016-06-03 14:25:27,465 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> {code}





[jira] [Commented] (CASSANDRA-12149) NullPointerException on SELECT with SASI index

2016-07-07 Thread DOAN DuyHai (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15367302#comment-15367302
 ] 

DOAN DuyHai commented on CASSANDRA-12149:
-

I can try to test and reproduce this.

> NullPointerException on SELECT with SASI index
> --
>
> Key: CASSANDRA-12149
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12149
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
>Reporter: Andrey Konstantinov
> Attachments: CASSANDRA-12149.txt
>
>
> If I execute the sequence of queries (see the attached file), Cassandra 
> aborts a connection reporting NPE on server side. SELECT query without token 
> range filter works, but does not work when token range filter is specified. 
> My intent was to issue multiple SELECT queries targeting the same single 
> partition, filtered by a column indexed by SASI, partitioning results by 
> different token ranges.
> Output from cqlsh on SELECT is the following:
> cqlsh> SELECT namespace, entity, timestamp, feature1, feature2 FROM 
> mykeyspace.myrecordtable WHERE namespace = 'ns2' AND entity = 'entity2' AND 
> feature1 > 11 AND feature1 < 31  AND token(namespace, entity) <= 
> 9223372036854775807;
> ServerError: <Error from server: code=0000 [Server error] message="java.lang.NullPointerException">
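
The reporter's intent above, partitioning results for one query by different token ranges, amounts to splitting the full Murmur3 token space into contiguous sub-ranges. A stdlib sketch (the bounds are the Murmur3Partitioner's -2^63..2^63-1; the helper name is illustrative):

```python
MIN_TOKEN = -2**63       # Murmur3Partitioner minimum token
MAX_TOKEN = 2**63 - 1    # Murmur3Partitioner maximum token

def token_subranges(n):
    """Split [MIN_TOKEN, MAX_TOKEN] into n contiguous (start, end] sub-ranges."""
    total = MAX_TOKEN - MIN_TOKEN
    step = total // n
    bounds = [MIN_TOKEN + i * step for i in range(n)] + [MAX_TOKEN]
    return [(bounds[i], bounds[i + 1]) for i in range(n)]

# Each (start, end] pair then becomes a predicate like:
#   AND token(namespace, entity) > {start} AND token(namespace, entity) <= {end}
```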





[jira] [Updated] (CASSANDRA-12149) NullPointerException on SELECT with SASI index

2016-07-07 Thread Andrey Konstantinov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Konstantinov updated CASSANDRA-12149:

Reproduced In: 3.7






[jira] [Commented] (CASSANDRA-12149) NullPointerException on SELECT with SASI index

2016-07-07 Thread Andrey Konstantinov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15367294#comment-15367294
 ] 

Andrey Konstantinov commented on CASSANDRA-12149:
-

It is 3.7.0.






[jira] [Commented] (CASSANDRA-11914) Provide option for cassandra-stress to dump all settings

2016-07-07 Thread Ben Slater (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15367261#comment-15367261
 ] 

Ben Slater commented on CASSANDRA-11914:


Hi - just thought I'd give this a bump. This isn't a terribly exciting task, so 
I don't want to waste time on it if it's not deemed useful.

> Provide option for cassandra-stress to dump all settings
> 
>
> Key: CASSANDRA-11914
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11914
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Ben Slater
>Priority: Minor
> Attachments: 11914-trunk.patch
>
>
> cassandra-stress has quite a lot of default settings, plus settings that are 
> derived as side effects of explicit options. For people learning the tool, and 
> for keeping a clear record of what was run, I think it would be useful to have 
> an option that makes the tool print all of its settings at the start of a run.
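
What the request amounts to is printing every effective setting (defaults merged with explicit options and anything derived from them) before the run starts. Sketched in Python for brevity; cassandra-stress itself is Java, and every name below is illustrative rather than taken from the tool:

```python
def effective_settings(defaults, overrides):
    """Merge default settings with explicit options; explicit options win."""
    merged = dict(defaults)
    merged.update(overrides)
    return merged

def dump_settings(settings):
    """Print settings sorted by key, one per line, at the start of a run."""
    for key in sorted(settings):
        print("%s=%s" % (key, settings[key]))

# Hypothetical usage:
# dump_settings(effective_settings({"rate.threads": 200}, {"n": 1000000}))
```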





[jira] [Commented] (CASSANDRA-12149) NullPointerException on SELECT with SASI index

2016-07-07 Thread DOAN DuyHai (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15367255#comment-15367255
 ] 

DOAN DuyHai commented on CASSANDRA-12149:
-

Which version of Cassandra are you using?






[jira] [Commented] (CASSANDRA-11917) nodetool disablethrift hangs under load

2016-07-07 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15367235#comment-15367235
 ] 

Edward Capriolo commented on CASSANDRA-11917:
-

Just checking in. I know Thrift is unsupported, but I am working with someone 
who has to shut down a fleet of 100 servers every day; roughly 5% do not shut 
down properly and end up with corrupt commit logs. It would be great to 
understand this better.

> nodetool disablethrift hangs under load
> ---
>
> Key: CASSANDRA-11917
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11917
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Edward Capriolo
> Attachments: disable_thrift.txt, stack_summary.txt, 
> unexpected_throw.txt
>
>
> Under production load some nodetool commands such as disablethrift and drain 
> never complete.





[jira] [Updated] (CASSANDRA-12149) NullPointerException on SELECT with SASI index

2016-07-07 Thread Andrey Konstantinov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Konstantinov updated CASSANDRA-12149:

Description: 
If I execute the sequence of queries (see the attached file), Cassandra aborts 
a connection reporting NPE on server side. SELECT query without token range 
filter works, but does not work when token range filter is specified. My intent 
was to issue multiple SELECT queries targeting the same single partition, 
filtered by a column indexed by SASI, partitioning results by different token 
ranges.

Output from cqlsh on SELECT is the following:

cqlsh> SELECT namespace, entity, timestamp, feature1, feature2 FROM 
mykeyspace.myrecordtable WHERE namespace = 'ns2' AND entity = 'entity2' AND 
feature1 > 11 AND feature1 < 31  AND token(namespace, entity) <= 
9223372036854775807;
ServerError: <Error from server: code=0000 [Server error] message="java.lang.NullPointerException">


  was:
If I execute the following sequence of queries, Cassandra aborts a connection 
reporting NPE on server side. SELECT query without token range filter works, 
but does not work when token range filter is specified. My intent was to issue 
multiple SELECT queries targeting the same single partition, filtered by a 
column indexed by SASI, partitioning results by different token ranges.

CREATE KEYSPACE IF NOT EXISTS mykeyspace WITH REPLICATION = {'class' : 
'SimpleStrategy', 'replication_factor' : 1};

CREATE TABLE mykeyspace.myrecordtable (
namespace text,
entity text,
timestamp bigint,
feature1 bigint,
feature2 bigint,
PRIMARY KEY ((namespace, entity), timestamp)
) WITH CLUSTERING ORDER BY (timestamp ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32', 'min_threshold': '4'}
AND compression = {'chunk_length_in_kb': '64', 'class': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND crc_check_chance = 1.0
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99PERCENTILE';
CREATE CUSTOM INDEX record_timestamp_90e05e6caa714f29 ON 
mykeyspace.myrecordtable (timestamp) USING 
'org.apache.cassandra.index.sasi.SASIIndex';
CREATE CUSTOM INDEX record_feature1_90e05e6caa714f29 ON 
mykeyspace.myrecordtable (feature1) USING 
'org.apache.cassandra.index.sasi.SASIIndex';

INSERT INTO mykeyspace.myrecordtable (feature1, feature2, namespace, entity, 
timestamp) VALUES (31, 32, 'ns2', 'entity3', 201606210131);
INSERT INTO mykeyspace.myrecordtable (feature1, feature2, namespace, entity, 
timestamp) VALUES (11, 12, 'ns1', 'entity1', 201606210129);
INSERT INTO mykeyspace.myrecordtable (feature1, feature2, namespace, entity, 
timestamp) VALUES (21, 22, 'ns2', 'entity2', 201606210130);

SELECT namespace, entity, timestamp, feature1, feature2 FROM 
mykeyspace.myrecordtable WHERE namespace = 'ns2' AND entity = 'entity2' AND 
feature1 > 11 AND feature1 < 31  AND token(namespace, entity) <= 
9223372036854775807;


Output from cqlsh is the following:

cqlsh> CREATE KEYSPACE IF NOT EXISTS mykeyspace WITH REPLICATION = {'class' : 
'SimpleStrategy', 'replication_factor' : 1};
cqlsh>
cqlsh> CREATE TABLE mykeyspace.myrecordtable (
   ... namespace text,
   ... entity text,
   ... timestamp bigint,
   ... feature1 bigint,
   ... feature2 bigint,
   ... PRIMARY KEY ((namespace, entity), timestamp)
   ... ) WITH CLUSTERING ORDER BY (timestamp ASC)
   ... AND bloom_filter_fp_chance = 0.01
   ... AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
   ... AND comment = ''
   ... AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32', 'min_threshold': '4'}
   ... AND compression = {'chunk_length_in_kb': '64', 'class': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
   ... AND crc_check_chance = 1.0
   ... AND dclocal_read_repair_chance = 0.1
   ... AND default_time_to_live = 0
   ... AND gc_grace_seconds = 864000
   ... AND max_index_interval = 2048
   ... AND memtable_flush_period_in_ms = 0
   ... AND min_index_interval = 128
   ... AND read_repair_chance = 0.0
   ... AND speculative_retry = '99PERCENTILE';
cqlsh> CREATE CUSTOM INDEX record_timestamp_90e05e6caa714f29 ON 
mykeyspace.myrecordtable (timestamp) USING 
'org.apache.cassandra.index.sasi.SASIIndex';
cqlsh> CREATE CUSTOM INDEX record_feature1_90e05e6caa714f29 ON 
mykeyspace.myrecordtable (feature1) USING 
'org.apache.cassandra.index.sasi.SASIIndex';
cqlsh>
cqlsh> INSERT INTO mykeyspace.myrecordtable (feature1, feature2, namespace, 
entity, timestamp) VALUES (31, 3

[jira] [Updated] (CASSANDRA-12149) NullPointerException on SELECT with SASI index

2016-07-07 Thread Andrey Konstantinov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Konstantinov updated CASSANDRA-12149:

Attachment: CASSANDRA-12149.txt

File with CQL queries causing the issue.


[jira] [Updated] (CASSANDRA-12149) NullPointerException on SELECT with SASI index

2016-07-07 Thread Andrey Konstantinov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Konstantinov updated CASSANDRA-12149:

Description: 
If I execute the following sequence of queries, Cassandra aborts a connection 
reporting NPE on server side. SELECT query without token range filter works, 
but does not work when token range filter is specified. My intent was to issue 
multiple SELECT queries targeting the same single partition, filtered by a 
column indexed by SASI, partitioning results by different token ranges.

CREATE KEYSPACE IF NOT EXISTS mykeyspace WITH REPLICATION = {'class' : 
'SimpleStrategy', 'replication_factor' : 1};

CREATE TABLE mykeyspace.myrecordtable (
namespace text,
entity text,
timestamp bigint,
feature1 bigint,
feature2 bigint,
PRIMARY KEY ((namespace, entity), timestamp)
) WITH CLUSTERING ORDER BY (timestamp ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32', 'min_threshold': '4'}
AND compression = {'chunk_length_in_kb': '64', 'class': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND crc_check_chance = 1.0
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99PERCENTILE';
CREATE CUSTOM INDEX record_timestamp_90e05e6caa714f29 ON 
mykeyspace.myrecordtable (timestamp) USING 
'org.apache.cassandra.index.sasi.SASIIndex';
CREATE CUSTOM INDEX record_feature1_90e05e6caa714f29 ON 
mykeyspace.myrecordtable (feature1) USING 
'org.apache.cassandra.index.sasi.SASIIndex';

INSERT INTO mykeyspace.myrecordtable (feature1, feature2, namespace, entity, 
timestamp) VALUES (31, 32, 'ns2', 'entity3', 201606210131);
INSERT INTO mykeyspace.myrecordtable (feature1, feature2, namespace, entity, 
timestamp) VALUES (11, 12, 'ns1', 'entity1', 201606210129);
INSERT INTO mykeyspace.myrecordtable (feature1, feature2, namespace, entity, 
timestamp) VALUES (21, 22, 'ns2', 'entity2', 201606210130);

SELECT namespace, entity, timestamp, feature1, feature2 FROM 
mykeyspace.myrecordtable WHERE namespace = 'ns2' AND entity = 'entity2' AND 
feature1 > 11 AND feature1 < 31  AND token(namespace, entity) <= 
9223372036854775807;


Output from cqlsh is the following:

cqlsh> CREATE KEYSPACE IF NOT EXISTS mykeyspace WITH REPLICATION = {'class' : 
'SimpleStrategy', 'replication_factor' : 1};
cqlsh>
cqlsh> CREATE TABLE mykeyspace.myrecordtable (
   ... namespace text,
   ... entity text,
   ... timestamp bigint,
   ... feature1 bigint,
   ... feature2 bigint,
   ... PRIMARY KEY ((namespace, entity), timestamp)
   ... ) WITH CLUSTERING ORDER BY (timestamp ASC)
   ... AND bloom_filter_fp_chance = 0.01
   ... AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
   ... AND comment = ''
   ... AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32', 'min_threshold': '4'}
   ... AND compression = {'chunk_length_in_kb': '64', 'class': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
   ... AND crc_check_chance = 1.0
   ... AND dclocal_read_repair_chance = 0.1
   ... AND default_time_to_live = 0
   ... AND gc_grace_seconds = 864000
   ... AND max_index_interval = 2048
   ... AND memtable_flush_period_in_ms = 0
   ... AND min_index_interval = 128
   ... AND read_repair_chance = 0.0
   ... AND speculative_retry = '99PERCENTILE';
cqlsh> CREATE CUSTOM INDEX record_timestamp_90e05e6caa714f29 ON 
mykeyspace.myrecordtable (timestamp) USING 
'org.apache.cassandra.index.sasi.SASIIndex';
cqlsh> CREATE CUSTOM INDEX record_feature1_90e05e6caa714f29 ON 
mykeyspace.myrecordtable (feature1) USING 
'org.apache.cassandra.index.sasi.SASIIndex';
cqlsh>
cqlsh> INSERT INTO mykeyspace.myrecordtable (feature1, feature2, namespace, 
entity, timestamp) VALUES (31, 32, 'ns2', 'entity3', 201606210131);
cqlsh> INSERT INTO mykeyspace.myrecordtable (feature1, feature2, namespace, 
entity, timestamp) VALUES (11, 12, 'ns1', 'entity1', 201606210129);
cqlsh> INSERT INTO mykeyspace.myrecordtable (feature1, feature2, namespace, 
entity, timestamp) VALUES (21, 22, 'ns2', 'entity2', 201606210130);
cqlsh>
cqlsh> SELECT namespace, entity, timestamp, feature1, feature2 FROM 
mykeyspace.myrecordtable WHERE namespace = 'ns2' AND entity = 'entity2' AND 
feature1 > 11 AND feature1 < 31  AND token(namespace, entity) <= 
9223372036854775807;
ServerError: <Error from server: code=0000 [Server error] message="java.lang.NullPointerException">
cqlsh>

  was:
If I execute the following sequence of queries, Cassandra aborts a connection 
reporting NPE on server side.

[jira] [Commented] (CASSANDRA-12149) NullPointerException on SELECT with SASI index

2016-07-07 Thread Andrey Konstantinov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15367187#comment-15367187
 ] 

Andrey Konstantinov commented on CASSANDRA-12149:
-

Please let me know if you need more information.


[jira] [Created] (CASSANDRA-12149) NullPointerException on SELECT with SASI index

2016-07-07 Thread Andrey Konstantinov (JIRA)
Andrey Konstantinov created CASSANDRA-12149:
---

 Summary: NullPointerException on SELECT with SASI index
 Key: CASSANDRA-12149
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12149
 Project: Cassandra
  Issue Type: Bug
  Components: sasi
Reporter: Andrey Konstantinov


If I execute the following sequence of queries, Cassandra aborts the 
connection, reporting an NPE on the server side.

CREATE KEYSPACE IF NOT EXISTS mykeyspace WITH REPLICATION = {'class' : 
'SimpleStrategy', 'replication_factor' : 1};

CREATE TABLE mykeyspace.myrecordtable (
namespace text,
entity text,
timestamp bigint,
feature1 bigint,
feature2 bigint,
PRIMARY KEY ((namespace, entity), timestamp)
) WITH CLUSTERING ORDER BY (timestamp ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32', 'min_threshold': '4'}
AND compression = {'chunk_length_in_kb': '64', 'class': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND crc_check_chance = 1.0
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99PERCENTILE';
CREATE CUSTOM INDEX record_timestamp_90e05e6caa714f29 ON 
mykeyspace.myrecordtable (timestamp) USING 
'org.apache.cassandra.index.sasi.SASIIndex';
CREATE CUSTOM INDEX record_feature1_90e05e6caa714f29 ON 
mykeyspace.myrecordtable (feature1) USING 
'org.apache.cassandra.index.sasi.SASIIndex';

INSERT INTO mykeyspace.myrecordtable (feature1, feature2, namespace, entity, 
timestamp) VALUES (31, 32, 'ns2', 'entity3', 201606210131);
INSERT INTO mykeyspace.myrecordtable (feature1, feature2, namespace, entity, 
timestamp) VALUES (11, 12, 'ns1', 'entity1', 201606210129);
INSERT INTO mykeyspace.myrecordtable (feature1, feature2, namespace, entity, 
timestamp) VALUES (21, 22, 'ns2', 'entity2', 201606210130);

SELECT namespace, entity, timestamp, feature1, feature2 FROM 
mykeyspace.myrecordtable WHERE namespace = 'ns2' AND entity = 'entity2' AND 
feature1 > 11 AND feature1 < 31  AND token(namespace, entity) <= 
9223372036854775807;
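A side note on the token bound in that query: the literal 9223372036854775807 is Long.MAX_VALUE, the top of the Murmur3 token range, so the token() predicate by itself excludes nothing; the NPE is triggered with the full ring in scope. A minimal check (class name is ours, purely for illustration):

```java
public class TokenBoundCheck {
    public static void main(String[] args) {
        // The WHERE clause bound equals Long.MAX_VALUE, the largest Murmur3
        // token, so "token(...) <= bound" covers the entire ring.
        long bound = 9223372036854775807L;
        assert bound == Long.MAX_VALUE;
        System.out.println(bound == Long.MAX_VALUE); // prints true
    }
}
```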


Output from cqlsh is the following:

cqlsh> CREATE KEYSPACE IF NOT EXISTS mykeyspace WITH REPLICATION = {'class' : 
'SimpleStrategy', 'replication_factor' : 1};
cqlsh>
cqlsh> CREATE TABLE mykeyspace.myrecordtable (
   ... namespace text,
   ... entity text,
   ... timestamp bigint,
   ... feature1 bigint,
   ... feature2 bigint,
   ... PRIMARY KEY ((namespace, entity), timestamp)
   ... ) WITH CLUSTERING ORDER BY (timestamp ASC)
   ... AND bloom_filter_fp_chance = 0.01
   ... AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
   ... AND comment = ''
   ... AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32', 'min_threshold': '4'}
   ... AND compression = {'chunk_length_in_kb': '64', 'class': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
   ... AND crc_check_chance = 1.0
   ... AND dclocal_read_repair_chance = 0.1
   ... AND default_time_to_live = 0
   ... AND gc_grace_seconds = 864000
   ... AND max_index_interval = 2048
   ... AND memtable_flush_period_in_ms = 0
   ... AND min_index_interval = 128
   ... AND read_repair_chance = 0.0
   ... AND speculative_retry = '99PERCENTILE';
cqlsh> CREATE CUSTOM INDEX record_timestamp_90e05e6caa714f29 ON 
mykeyspace.myrecordtable (timestamp) USING 
'org.apache.cassandra.index.sasi.SASIIndex';
cqlsh> CREATE CUSTOM INDEX record_feature1_90e05e6caa714f29 ON 
mykeyspace.myrecordtable (feature1) USING 
'org.apache.cassandra.index.sasi.SASIIndex';
cqlsh>
cqlsh> INSERT INTO mykeyspace.myrecordtable (feature1, feature2, namespace, 
entity, timestamp) VALUES (31, 32, 'ns2', 'entity3', 201606210131);
cqlsh> INSERT INTO mykeyspace.myrecordtable (feature1, feature2, namespace, 
entity, timestamp) VALUES (11, 12, 'ns1', 'entity1', 201606210129);
cqlsh> INSERT INTO mykeyspace.myrecordtable (feature1, feature2, namespace, 
entity, timestamp) VALUES (21, 22, 'ns2', 'entity2', 201606210130);
cqlsh>
cqlsh> SELECT namespace, entity, timestamp, feature1, feature2 FROM 
mykeyspace.myrecordtable WHERE namespace = 'ns2' AND entity = 'entity2' AND 
feature1 > 11 AND feature1 < 31  AND token(namespace, entity) <= 
9223372036854775807;
ServerError: 
cqlsh>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: fix format specifier markers for String.format

2016-07-07 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk 3bf8bdceb -> de86ccf3a


fix format specifier markers for String.format


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/de86ccf3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/de86ccf3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/de86ccf3

Branch: refs/heads/trunk
Commit: de86ccf3a3b21e406a3e337019c2197bf15d8053
Parents: 3bf8bdc
Author: Dave Brosius 
Authored: Thu Jul 7 23:51:00 2016 -0400
Committer: Dave Brosius 
Committed: Thu Jul 7 23:51:00 2016 -0400

--
 src/java/org/apache/cassandra/streaming/StreamReceiveTask.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/de86ccf3/src/java/org/apache/cassandra/streaming/StreamReceiveTask.java
--
diff --git a/src/java/org/apache/cassandra/streaming/StreamReceiveTask.java 
b/src/java/org/apache/cassandra/streaming/StreamReceiveTask.java
index 88238bc..8fe5a49 100644
--- a/src/java/org/apache/cassandra/streaming/StreamReceiveTask.java
+++ b/src/java/org/apache/cassandra/streaming/StreamReceiveTask.java
@@ -136,7 +136,7 @@ public class StreamReceiveTask extends StreamTask
 public synchronized LifecycleTransaction getTransaction()
 {
 if (done)
-throw new RuntimeException(String.format("Stream receive task {} 
of cf {} already finished.", session.planId(), cfId));
+throw new RuntimeException(String.format("Stream receive task %s 
of cf %s already finished.", session.planId(), cfId));
 return txn;
 }
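The bug fixed by this commit is easy to reproduce in isolation: {} is an SLF4J logging placeholder, not a java.util.Formatter conversion, so String.format passes the braces through verbatim and silently ignores the arguments. A quick sketch (the values are illustrative, not real session state):

```java
public class FormatSpecifierDemo {
    public static void main(String[] args) {
        String planId = "plan-1"; // illustrative values only
        String cfId = "cf-9";

        // '{}' is an SLF4J logging placeholder; java.util.Formatter does not
        // interpret it, so the braces survive and the extra args are ignored.
        String broken = String.format("Stream receive task {} of cf {} already finished.", planId, cfId);

        // '%s' is the Formatter conversion the fix switches to.
        String fixed = String.format("Stream receive task %s of cf %s already finished.", planId, cfId);

        System.out.println(broken); // braces printed verbatim
        System.out.println(fixed);  // planId and cfId substituted
    }
}
```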
 



[jira] [Commented] (CASSANDRA-11950) dtest failure in consistency_test.TestAvailability.test_network_topology_strategy_each_quorum

2016-07-07 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15367172#comment-15367172
 ] 

Stefania commented on CASSANDRA-11950:
--

At the moment this test is consistently failing for an entirely different 
reason, sample 
[here|http://cassci.datastax.com/job/trunk_large_dtest/14/testReport/junit/consistency_test/TestAvailability/test_network_topology_strategy_each_quorum/].
 Despite the timeout, this is not related to the recent Netty upgrade 
(CASSANDRA-12032); rather, it was introduced by CASSANDRA-11971. We cannot 
recycle the buffer in SP.sendMessagesToNonlocalDC, since its backing array will 
be used asynchronously by the message.
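The hazard behind that fix can be sketched in isolation: if a buffer is recycled while a message still holds its backing array for later, asynchronous use, the next writer corrupts the bytes under the reader. This is a toy illustration only, not the actual StorageProxy or messaging code; every name below is hypothetical.

```java
import java.nio.ByteBuffer;

public class RecycleHazardDemo {
    public static void main(String[] args) {
        // A buffer serialized by the "sender" (illustrative stand-in).
        ByteBuffer buf = ByteBuffer.allocate(8);
        buf.putLong(42L);

        // The message keeps a reference to the backing array and will read
        // it asynchronously, after the sending method has returned.
        byte[] retainedByMessage = buf.array();

        // Recycling the buffer too early lets the next caller write into
        // the very same backing array...
        buf.clear();
        buf.putLong(-1L);

        // ...so the delayed reader no longer sees the value that was sent.
        long seen = ByteBuffer.wrap(retainedByMessage).getLong();
        assert seen == -1L; // 42 has been overwritten
        System.out.println(seen);
    }
}
```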

Patch for 3.9 and trunk:

||3.9||trunk||
|[patch|https://github.com/stef1927/cassandra/commits/11950-3.9]|[patch|https://github.com/stef1927/cassandra/commits/11950]|
|[testall|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11950-3.9-testall/]|[testall|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11950-testall/]|
|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11950-3.9-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11950-dtest/]|

[~tjake] could you review? We may also want to put it in 3.8 if we are 
re-tagging.

--

As for the original issue, this looks like the race that was fixed by 
CASSANDRA-12083, cc [~beobal]. Relevant log messages:

{code}
DEBUG [MigrationStage:1] 2016-06-03 14:25:27,380 Schema.java:465 - Adding 
org.apache.cassandra.config.CFMetaData@5fa5b473[cfId=03b14ad0-2997-11e6-b8c7-01c3aea11be7,ksName=mytestks,cfName=users,flags=[],params=TableParams{comment=,
 read_repair_chance=0.0, dclocal_read_repair_chance=0.1, 
bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=864000, 
default_time_to_live=0, memtable_flush_period_in_ms=0, min_index_interval=128, 
max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 
'ALL', 'rows_per_partition' : 'NONE'}, 
compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy,
 options={max_threshold=32, min_threshold=4}}, 
compression=org.apache.cassandra.schema.CompressionParams@2c86b999, 
extensions={}},comparator=comparator(org.apache.cassandra.db.marshal.UTF8Type),partitionColumns=[[age
 firstname lastname] | 
[value]],partitionKeyColumns=[ColumnDefinition{name=userid, 
type=org.apache.cassandra.db.marshal.Int32Type, kind=PARTITION_KEY, 
position=0}],clusteringColumns=[ColumnDefinition{name=column1, 
type=org.apache.cassandra.db.marshal.UTF8Type, kind=CLUSTERING, 
position=0}],keyValidator=org.apache.cassandra.db.marshal.Int32Type,columnMetadata=[ColumnDefinition{name=column1,
 type=org.apache.cassandra.db.marshal.UTF8Type, kind=CLUSTERING, position=0}, 
ColumnDefinition{name=firstname, type=org.apache.cassandra.db.marshal.UTF8Type, 
kind=STATIC, position=-1}, ColumnDefinition{name=value, 
type=org.apache.cassandra.db.marshal.BytesType, kind=REGULAR, position=-1}, 
ColumnDefinition{name=userid, type=org.apache.cassandra.db.marshal.Int32Type, 
kind=PARTITION_KEY, position=0}, ColumnDefinition{name=lastname, 
type=org.apache.cassandra.db.marshal.UTF8Type, kind=STATIC, position=-1}, 
ColumnDefinition{name=age, type=org.apache.cassandra.db.marshal.Int32Type, 
kind=STATIC, position=-1}],droppedColumns={},triggers=[],indexes=[]] to cfIdMap
ERROR [SharedPool-Worker-1] 2016-06-03 14:25:27,460 Keyspace.java:504 - 
Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
(mytestks.users)
{code}

--

The test should pass consistently once the problem above is fixed, but there 
are still several warnings because the schema hasn't fully propagated yet. I 
wonder if we should add a pause to the test after creating the tables, or if 
there is something else we can do to remove the warnings.

> dtest failure in 
> consistency_test.TestAvailability.test_network_topology_strategy_each_quorum
> -
>
> Key: CASSANDRA-11950
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11950
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Stefania
>  Labels: dtest
> Fix For: 3.x
>
> Attachments: node1.log, node1_debug.log, node2.log, node2_debug.log, 
> node3.log, node3_debug.log, node4.log, node4_debug.log, node5.log, 
> node5_debug.log, node6.log, node6_debug.log, node7.log, node7_debug.log, 
> node8.log, node8_debug.log, node9.log, node9_debug.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_large_dtest/10/testReport/consistency_test/TestAvailability/test_network_topology_strategy_each_quorum
> Failed on CassCI build trunk_large_dtest #10
> Logs are attached.
> {code}
> Stacktrace
>   File "/

[jira] [Updated] (CASSANDRA-11950) dtest failure in consistency_test.TestAvailability.test_network_topology_strategy_each_quorum

2016-07-07 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-11950:
-
Reviewer: T Jake Luciani
  Status: Patch Available  (was: In Progress)

> dtest failure in 
> consistency_test.TestAvailability.test_network_topology_strategy_each_quorum
> -
>
> Key: CASSANDRA-11950
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11950
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Stefania
>  Labels: dtest
> Fix For: 3.x
>
> Attachments: node1.log, node1_debug.log, node2.log, node2_debug.log, 
> node3.log, node3_debug.log, node4.log, node4_debug.log, node5.log, 
> node5_debug.log, node6.log, node6_debug.log, node7.log, node7_debug.log, 
> node8.log, node8_debug.log, node9.log, node9_debug.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_large_dtest/10/testReport/consistency_test/TestAvailability/test_network_topology_strategy_each_quorum
> Failed on CassCI build trunk_large_dtest #10
> Logs are attached.
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 358, in run
> self.tearDown()
>   File "/home/automaton/cassandra-dtest/dtest.py", line 719, in tearDown
> raise AssertionError('Unexpected error in log, see stdout')
> Standard Output
> Unexpected error in node3 log, error: 
> ERROR [SharedPool-Worker-1] 2016-06-03 14:25:27,460 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-2] 2016-06-03 14:25:27,460 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-3] 2016-06-03 14:25:27,462 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-2] 2016-06-03 14:25:27,464 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-3] 2016-06-03 14:25:27,464 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-1] 2016-06-03 14:25:27,465 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-4] 2016-06-03 14:25:27,465 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-5] 2016-06-03 14:25:27,465 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-7] 2016-06-03 14:25:27,465 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-6] 2016-06-03 14:25:27,465 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> {code}





cassandra git commit: avoid map lookups in doubly nested loops

2016-07-07 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk d019136ee -> 3bf8bdceb


avoid map lookups in doubly nested loops


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3bf8bdce
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3bf8bdce
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3bf8bdce

Branch: refs/heads/trunk
Commit: 3bf8bdceb8ad15de5711da383f1ec3e39ecba2e7
Parents: d019136
Author: Dave Brosius 
Authored: Thu Jul 7 23:31:45 2016 -0400
Committer: Dave Brosius 
Committed: Thu Jul 7 23:31:45 2016 -0400

--
 src/java/org/apache/cassandra/hadoop/cql3/CqlInputFormat.java | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3bf8bdce/src/java/org/apache/cassandra/hadoop/cql3/CqlInputFormat.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/cql3/CqlInputFormat.java 
b/src/java/org/apache/cassandra/hadoop/cql3/CqlInputFormat.java
index daba701..5dba3e2 100644
--- a/src/java/org/apache/cassandra/hadoop/cql3/CqlInputFormat.java
+++ b/src/java/org/apache/cassandra/hadoop/cql3/CqlInputFormat.java
@@ -324,9 +324,9 @@ public class CqlInputFormat extends 
org.apache.hadoop.mapreduce.InputFormat subSplitEntry : 
subSplits.entrySet())
 {
-List ranges = subSplit.unwrap();
+List ranges = subSplitEntry.getKey().unwrap();
 for (TokenRange subrange : ranges)
 {
 ColumnFamilySplit split =
@@ -335,7 +335,7 @@ public class CqlInputFormat extends 
org.apache.hadoop.mapreduce.InputFormat
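Although the archived diff is mangled (the mail archive stripped the generic type parameters), the intent of the change is the standard entrySet idiom: iterate the map's entries once instead of looking each key back up inside the loop. A generic sketch, with types and names that are illustrative rather than the actual CqlInputFormat code:

```java
import java.util.HashMap;
import java.util.Map;

public class EntrySetDemo {
    public static void main(String[] args) {
        Map<String, Long> subSplits = new HashMap<>();
        subSplits.put("range-a", 2L);
        subSplits.put("range-b", 3L);

        long slow = 0;
        // Per-key lookup: an extra hash probe on every iteration.
        for (String key : subSplits.keySet())
            slow += subSplits.get(key);

        long fast = 0;
        // Iterating the entry set yields key and value together; no lookup.
        for (Map.Entry<String, Long> e : subSplits.entrySet())
            fast += e.getValue();

        assert slow == fast && fast == 5L;
        System.out.println(fast);
    }
}
```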

cassandra git commit: use platform agnostic new line chars (%n)

2016-07-07 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk 27d6d19a9 -> d019136ee


use platform agnostic new line chars (%n)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d019136e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d019136e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d019136e

Branch: refs/heads/trunk
Commit: d019136ee4830a4022354d473a6290d827793057
Parents: 27d6d19
Author: Dave Brosius 
Authored: Thu Jul 7 23:24:00 2016 -0400
Committer: Dave Brosius 
Committed: Thu Jul 7 23:24:00 2016 -0400

--
 .../dht/tokenallocator/ReplicationAwareTokenAllocator.java   | 2 +-
 src/java/org/apache/cassandra/tools/SSTableMetadataViewer.java   | 4 ++--
 tools/stress/src/org/apache/cassandra/stress/Stress.java | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d019136e/src/java/org/apache/cassandra/dht/tokenallocator/ReplicationAwareTokenAllocator.java
--
diff --git 
a/src/java/org/apache/cassandra/dht/tokenallocator/ReplicationAwareTokenAllocator.java
 
b/src/java/org/apache/cassandra/dht/tokenallocator/ReplicationAwareTokenAllocator.java
index 054a90e..a60be94 100644
--- 
a/src/java/org/apache/cassandra/dht/tokenallocator/ReplicationAwareTokenAllocator.java
+++ 
b/src/java/org/apache/cassandra/dht/tokenallocator/ReplicationAwareTokenAllocator.java
@@ -772,7 +772,7 @@ class ReplicationAwareTokenAllocator implements 
TokenAllocator
 BaseTokenInfo token = tokens;
 do
 {
-System.out.format("%s%s: rs %s rt %s size %.2e\n", lead, token, 
token.replicationStart, token.replicationThreshold, token.replicatedOwnership);
+System.out.format("%s%s: rs %s rt %s size %.2e%n", lead, token, 
token.replicationStart, token.replicationThreshold, token.replicatedOwnership);
 token = token.next;
 } while (token != null && token != tokens);
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d019136e/src/java/org/apache/cassandra/tools/SSTableMetadataViewer.java
--
diff --git a/src/java/org/apache/cassandra/tools/SSTableMetadataViewer.java 
b/src/java/org/apache/cassandra/tools/SSTableMetadataViewer.java
index 3c8ba64..b455ad7 100644
--- a/src/java/org/apache/cassandra/tools/SSTableMetadataViewer.java
+++ b/src/java/org/apache/cassandra/tools/SSTableMetadataViewer.java
@@ -112,8 +112,8 @@ public class SSTableMetadataViewer
 out.printf("Estimated droppable tombstones: %s%n", 
stats.getEstimatedDroppableTombstoneRatio((int) (System.currentTimeMillis() / 
1000)));
 out.printf("SSTable Level: %d%n", stats.sstableLevel);
 out.printf("Repaired at: %d%n", stats.repairedAt);
-out.printf("Minimum replay position: %s\n", 
stats.commitLogLowerBound);
-out.printf("Maximum replay position: %s\n", 
stats.commitLogUpperBound);
+out.printf("Minimum replay position: %s%n", 
stats.commitLogLowerBound);
+out.printf("Maximum replay position: %s%n", 
stats.commitLogUpperBound);
 out.printf("totalColumnsSet: %s%n", stats.totalColumnsSet);
 out.printf("totalRows: %s%n", stats.totalRows);
 out.println("Estimated tombstone drop times:");

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d019136e/tools/stress/src/org/apache/cassandra/stress/Stress.java
--
diff --git a/tools/stress/src/org/apache/cassandra/stress/Stress.java 
b/tools/stress/src/org/apache/cassandra/stress/Stress.java
index 874f515..daa7303 100644
--- a/tools/stress/src/org/apache/cassandra/stress/Stress.java
+++ b/tools/stress/src/org/apache/cassandra/stress/Stress.java
@@ -80,7 +80,7 @@ public final class Stress
 }
 catch (IllegalArgumentException e)
 {
-System.out.printf("%s\n", e.getMessage());
+System.out.printf("%s%n", e.getMessage());
 printHelpMessage();
 return 1;
 }
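Each hunk above makes the same substitution: inside a format string, \n is always a bare line feed, while %n is expanded by the formatter to the platform line separator (e.g. "\r\n" on Windows). A small demonstration:

```java
public class NewlineDemo {
    public static void main(String[] args) {
        // '%n' is translated to System.lineSeparator() by the formatter;
        // '\n' stays a literal LF on every platform.
        String portable = String.format("Repaired at: %d%n", 12345);
        String unixOnly = String.format("Repaired at: %d\n", 12345);

        assert portable.endsWith(System.lineSeparator());
        assert unixOnly.endsWith("\n");
        System.out.print(portable);
    }
}
```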



[jira] [Resolved] (CASSANDRA-12084) Host comes up during replacement when all replicas are down

2016-07-07 Thread sankalp kohli (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sankalp kohli resolved CASSANDRA-12084.
---
Resolution: Duplicate

>  Host comes up during replacement when all replicas are down
> 
>
> Key: CASSANDRA-12084
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12084
> Project: Cassandra
>  Issue Type: Bug
>Reporter: sankalp kohli
>Priority: Minor
>
> Description: Steps to reproduce:
> 1) Set up a 3-instance cluster.
> 2) Create an RF=3 keyspace.
> 3) Bring a non-seed instance down.
> 4) Start a 4th instance with the -Dcassandra.replace_address arg set.
> 5) While it is sleeping after “JOINING: calculation complete, ready to 
> bootstrap”, kill the other two instances.
> 6) The replacement node comes up with no data.
> This happens because the code in RangeStreamer.getRangeFetchMap does not 
> throw an exception if localhost is part of the sources.





[jira] [Commented] (CASSANDRA-11999) dtest failure in cqlsh_tests.cqlsh_tests.TestCqlsh.test_refresh_schema_on_timeout_error

2016-07-07 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15367010#comment-15367010
 ] 

Stefania commented on CASSANDRA-11999:
--

This test may change as part of the driver upgrade, see [this 
comment|https://issues.apache.org/jira/browse/CASSANDRA-11850?focusedCommentId=15361976&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15361976]
 in CASSANDRA-11850.

> dtest failure in 
> cqlsh_tests.cqlsh_tests.TestCqlsh.test_refresh_schema_on_timeout_error
> ---
>
> Key: CASSANDRA-11999
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11999
> Project: Cassandra
>  Issue Type: Test
>Reporter: Craig Kodman
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node2.log, node2_debug.log, 
> node3.log, node3_debug.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_dtest/745/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_refresh_schema_on_timeout_error
> Failed on CassCI build cassandra-3.0_dtest #745





[jira] [Comment Edited] (CASSANDRA-11895) dtest failure in cqlsh_tests.cqlsh_tests.TestCqlsh.test_unicode_invalid_request_error

2016-07-07 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15366747#comment-15366747
 ] 

Michael Shuler edited comment on CASSANDRA-11895 at 7/7/16 9:07 PM:


We've already been setting {{PYTHONIOENCODING=utf-8}} in dtest run environments 
since CASSANDRA-11799. I switched the cassandra-2.1_dtest to use a straight 
{{export}} when nosetests is run, instead of from a system-wide profile 
configuration, so we'll see if this test passes with that(?). This should make 
no difference at all, but we'll see.

https://cassci.datastax.com/job/cassandra-2.1_dtest/493/


was (Author: mshuler):
We've already been setting {{PYTHONIOENCODING=utf=8}} in dtest run environments 
since CASSANDRA-11799. I switched the cassandra-2.1_dtest to use a straight 
{{export}} when nosetests is run, instead of from a system-wide profile 
configuration, so we'll see if this test passes with that(?). This should make 
no difference at all, but we'll see.

https://cassci.datastax.com/job/cassandra-2.1_dtest/493/

> dtest failure in 
> cqlsh_tests.cqlsh_tests.TestCqlsh.test_unicode_invalid_request_error
> -
>
> Key: CASSANDRA-11895
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11895
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_dtest/470/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_unicode_invalid_request_error
> Failed on CassCI build cassandra-2.1_dtest #470
> This is after the fix for CASSANDRA-11799.





[jira] [Commented] (CASSANDRA-11999) dtest failure in cqlsh_tests.cqlsh_tests.TestCqlsh.test_refresh_schema_on_timeout_error

2016-07-07 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15366774#comment-15366774
 ] 

Jim Witschey commented on CASSANDRA-11999:
--

And, in case that build gets GC'd, here's the output:

{code}
Error Message

'Warning: schema version mismatch detected, which might be caused by DOWN 
nodes; if this is not the case, check the schema versions of your nodes in 
system.local and system.peers.' not found in ':3:OperationTimedOut: 
errors={}, last_host=127.0.0.1\n'
 >> begin captured logging << 
dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-SKJiqF
dtest: DEBUG: Custom init_config not found. Setting defaults.
dtest: DEBUG: Done setting configuration options:
{   'initial_token': None,
'num_tokens': '32',
'phi_convict_threshold': 5,
'range_request_timeout_in_ms': 1,
'read_request_timeout_in_ms': 1,
'request_timeout_in_ms': 1,
'truncate_request_timeout_in_ms': 1,
'write_request_timeout_in_ms': 1}
- >> end captured logging << -
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_tests.py", line 1424, 
in test_refresh_schema_on_timeout_error
stderr)
  File "/usr/lib/python2.7/unittest/case.py", line 803, in assertIn
self.fail(self._formatMessage(msg, standardMsg))
  File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
raise self.failureException(msg)
"'Warning: schema version mismatch detected, which might be caused by DOWN 
nodes; if this is not the case, check the schema versions of your nodes in 
system.local and system.peers.' not found in ':3:OperationTimedOut: 
errors={}, last_host=127.0.0.1\\n'\n >> begin captured 
logging << \ndtest: DEBUG: cluster ccm directory: 
/mnt/tmp/dtest-SKJiqF\ndtest: DEBUG: Custom init_config not found. Setting 
defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
5,\n'range_request_timeout_in_ms': 1,\n
'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n
'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
1}\n- >> end captured logging << -"
{code}

> dtest failure in 
> cqlsh_tests.cqlsh_tests.TestCqlsh.test_refresh_schema_on_timeout_error
> ---
>
> Key: CASSANDRA-11999
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11999
> Project: Cassandra
>  Issue Type: Test
>Reporter: Craig Kodman
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node2.log, node2_debug.log, 
> node3.log, node3_debug.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_dtest/745/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_refresh_schema_on_timeout_error
> Failed on CassCI build cassandra-3.0_dtest #745





[jira] [Comment Edited] (CASSANDRA-11895) dtest failure in cqlsh_tests.cqlsh_tests.TestCqlsh.test_unicode_invalid_request_error

2016-07-07 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15366747#comment-15366747
 ] 

Michael Shuler edited comment on CASSANDRA-11895 at 7/7/16 9:07 PM:


We've already been setting {{PYTHONIOENCODING=utf=8}} in dtest run environments 
since CASSANDRA-11799. I switched the cassandra-2.1_dtest to use a straight 
{{export}} when nosetests is run, instead of from a system-wide profile 
configuration, so we'll see if this test passes with that(?). This should make 
no difference at all, but we'll see.

https://cassci.datastax.com/job/cassandra-2.1_dtest/493/


was (Author: mshuler):
We're already been setting {{PYTHONIOENCODING=utf=8}} in dtest run environments 
since CASSANDRA-11799. I switched the cassandra-2.1_dtest to use a straight 
{{export}} when nosetests is run, instead of from a system-wide profile 
configuration, so we'll see if this test passes with that(?). This should make 
no difference at all, but we'll see.

https://cassci.datastax.com/job/cassandra-2.1_dtest/493/

> dtest failure in 
> cqlsh_tests.cqlsh_tests.TestCqlsh.test_unicode_invalid_request_error
> -
>
> Key: CASSANDRA-11895
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11895
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_dtest/470/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_unicode_invalid_request_error
> Failed on CassCI build cassandra-2.1_dtest #470
> This is after the fix for CASSANDRA-11799.





[jira] [Commented] (CASSANDRA-11752) histograms/metrics in 2.2 do not appear recency biased

2016-07-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15366750#comment-15366750
 ] 

Per Otterström commented on CASSANDRA-11752:


I understand your concern. I have not verified the performance impact myself. 
The locking scheme is very much inspired by the one used by the 
[ExponentiallyDecayingReservoir|https://github.com/dropwizard/metrics/blob/3.1-maintenance/metrics-core/src/main/java/com/codahale/metrics/ExponentiallyDecayingReservoir.java]
 in the Metrics library.

It should be possible to add some kind of buffering during rescale. Another 
option with less complexity could be to simply skip the collection of metrics 
during rescale and let threads continue. We would loose some accuracy in the 
percentiles and possibly an outlier in the min/max values. We can still add 
metrics to the non-decaying buckets during rescale, so getValues() will still 
be just as accurate as it is now. Any opinion on this?

I went for a 30-minute rescale interval based on the assumption that in an 
extreme case a metric could hit the same bucket a million times every second, 
so 60M times every minute. After 30 minutes the forward-decay factor will be 
2^29 = 536870912. The accumulated value will be 60M, 180M, 420M ... 64P, which 
can be represented in 56 bits, giving us some extra head room in a signed 
64-bit long. Based on these assumptions it could be possible to fit another few 
minutes, but 60 would be too much. We should perhaps mention these assumptions 
in the javadoc.
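As a sanity check, the arithmetic above can be verified with a short sketch 
(this assumes the decay weight doubles each minute, which matches the stated 
figures; the exact weighting in the patch may differ):

```python
HITS_PER_MINUTE = 60_000_000  # extreme case: one million hits/second on a bucket
MINUTES = 30                  # the chosen rescale interval

# If the forward-decay weight doubles every minute, a hit recorded in the
# last minute of the interval carries weight 2**(MINUTES - 1).
assert 2 ** (MINUTES - 1) == 536_870_912

# Accumulated weighted count over the interval:
# 60M * (2**0 + 2**1 + ... + 2**29) = 60M * (2**30 - 1), roughly 64P.
accumulated = HITS_PER_MINUTE * (2 ** MINUTES - 1)

# 56 bits -- comfortable headroom in a signed 64-bit long (63 usable bits).
print(accumulated.bit_length())  # 56
```

Extending the interval to 60 minutes would push the decay factor to 2^59 and 
overflow the long, which is why a few extra minutes fit but 60 do not.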

I don't have plots showing the effect of the rescale. I'm out of office for a 
few weeks but I'll try to verify this and performance impact as soon as I find 
the time.



> histograms/metrics in 2.2 do not appear recency biased
> --
>
> Key: CASSANDRA-11752
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11752
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Chris Burroughs
>Assignee: Per Otterström
>  Labels: metrics
> Fix For: 2.2.8
>
> Attachments: 11752-2.2.txt, boost-metrics.png, 
> c-jconsole-comparison.png, c-metrics.png, default-histogram.png
>
>
> In addition to upgrading to metrics3, CASSANDRA-5657 switched to using a 
> custom histogram implementation. After upgrading to Cassandra 2.2, 
> histogram/timer metrics are now suspiciously flat. To be useful for 
> graphing and alerting, metrics need to be biased towards recent events.
> I have attached images that I think illustrate this.
>  * The first two are a comparison between latency observed by a C* 2.2 (us) 
> cluster showing very flat lines and a client (using metrics 2.2.0, ms) 
> showing server performance problems. We can't rule out with total certainty 
> that something else is the cause (that's why we measure from both the 
> client & server) but they very rarely disagree.
>  * The 3rd image compares jconsole viewing of metrics on a 2.2 and 2.1 
> cluster over several minutes.  Not a single digit changed on the 2.2 cluster.





[jira] [Commented] (CASSANDRA-11895) dtest failure in cqlsh_tests.cqlsh_tests.TestCqlsh.test_unicode_invalid_request_error

2016-07-07 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15366747#comment-15366747
 ] 

Michael Shuler commented on CASSANDRA-11895:


We've already been setting {{PYTHONIOENCODING=utf-8}} in dtest run environments 
since CASSANDRA-11799. I switched the cassandra-2.1_dtest job to use a straight 
{{export}} when nosetests is run, instead of a system-wide profile 
configuration, so we'll see if this test passes with that. It should make no 
difference at all, but we'll see.
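For reference, forcing the encoding in the child-process environment (rather 
than via a system-wide profile) looks roughly like this; the nosetests 
invocation and test path are illustrative, not the actual CI job definition:

```python
import os

# Equivalent of `export PYTHONIOENCODING=utf-8` immediately before the
# test run: the variable is set only for the child process environment,
# not system-wide.
test_env = dict(os.environ, PYTHONIOENCODING="utf-8")
print(test_env["PYTHONIOENCODING"])  # utf-8

# Illustrative invocation (suite path is hypothetical):
# subprocess.call(["nosetests", "cqlsh_tests/"], env=test_env)
```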

https://cassci.datastax.com/job/cassandra-2.1_dtest/493/

> dtest failure in 
> cqlsh_tests.cqlsh_tests.TestCqlsh.test_unicode_invalid_request_error
> -
>
> Key: CASSANDRA-11895
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11895
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_dtest/470/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_unicode_invalid_request_error
> Failed on CassCI build cassandra-2.1_dtest #470
> This is after the fix for CASSANDRA-11799.





[jira] [Updated] (CASSANDRA-11999) dtest failure in cqlsh_tests.cqlsh_tests.TestCqlsh.test_refresh_schema_on_timeout_error

2016-07-07 Thread Jim Witschey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Witschey updated CASSANDRA-11999:
-
Attachment: node2_debug.log
node3_debug.log
node3.log
node1.log
node2.log
node1_debug.log

Logs from the failure.

> dtest failure in 
> cqlsh_tests.cqlsh_tests.TestCqlsh.test_refresh_schema_on_timeout_error
> ---
>
> Key: CASSANDRA-11999
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11999
> Project: Cassandra
>  Issue Type: Test
>Reporter: Craig Kodman
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node2.log, node2_debug.log, 
> node3.log, node3_debug.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_dtest/745/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_refresh_schema_on_timeout_error
> Failed on CassCI build cassandra-3.0_dtest #745





[jira] [Commented] (CASSANDRA-11687) dtest failure in rebuild_test.TestRebuild.simple_rebuild_test

2016-07-07 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15366717#comment-15366717
 ] 

Jim Witschey commented on CASSANDRA-11687:
--

Running this test 200x here:

http://cassci.datastax.com/view/Parameterized/job/parameterized_dtest_multiplexer/163/

with xlarge instances.

> dtest failure in rebuild_test.TestRebuild.simple_rebuild_test
> -
>
> Key: CASSANDRA-11687
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11687
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest
>
> single failure on most recent run (3.0 no-vnode)
> {noformat}
> concurrent rebuild should not be allowed, but one rebuild command should have 
> succeeded.
> {noformat}
> http://cassci.datastax.com/job/cassandra-3.0_novnode_dtest/217/testReport/rebuild_test/TestRebuild/simple_rebuild_test
> Failed on CassCI build cassandra-3.0_novnode_dtest #217





[jira] [Commented] (CASSANDRA-11902) dtest failure in hintedhandoff_test.TestHintedHandoffConfig.hintedhandoff_dc_reenabled_test

2016-07-07 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15366693#comment-15366693
 ] 

Jim Witschey commented on CASSANDRA-11902:
--

Running this test a bunch of times here: 
https://cassci.datastax.com/job/parameterized_dtest_multiplexer/162/

> dtest failure in 
> hintedhandoff_test.TestHintedHandoffConfig.hintedhandoff_dc_reenabled_test
> ---
>
> Key: CASSANDRA-11902
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11902
> Project: Cassandra
>  Issue Type: Test
>Reporter: Philip Thompson
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node2.log, node2_debug.log
>
>
> Failure occurred on trunk here:
> http://cassci.datastax.com/job/trunk_dtest/1239/testReport/hintedhandoff_test/TestHintedHandoffConfig/hintedhandoff_dc_reenabled_test/
> Logs are attached
> We re-enable HH on a DC, but we aren't seeing hints move in the logs, so this 
> does worry me a bit. I'm not sure quite how flaky it is. It's only failed 
> once.





[jira] [Commented] (CASSANDRA-11902) dtest failure in hintedhandoff_test.TestHintedHandoffConfig.hintedhandoff_dc_reenabled_test

2016-07-07 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15366682#comment-15366682
 ] 

Jim Witschey commented on CASSANDRA-11902:
--

No flakes since this was filed.

> dtest failure in 
> hintedhandoff_test.TestHintedHandoffConfig.hintedhandoff_dc_reenabled_test
> ---
>
> Key: CASSANDRA-11902
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11902
> Project: Cassandra
>  Issue Type: Test
>Reporter: Philip Thompson
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node2.log, node2_debug.log
>
>
> Failure occurred on trunk here:
> http://cassci.datastax.com/job/trunk_dtest/1239/testReport/hintedhandoff_test/TestHintedHandoffConfig/hintedhandoff_dc_reenabled_test/
> Logs are attached
> We re-enable HH on a DC, but we aren't seeing hints move in the logs, so this 
> does worry me a bit. I'm not sure quite how flaky it is. It's only failed 
> once.





[jira] [Commented] (CASSANDRA-11895) dtest failure in cqlsh_tests.cqlsh_tests.TestCqlsh.test_unicode_invalid_request_error

2016-07-07 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15366681#comment-15366681
 ] 

Jim Witschey commented on CASSANDRA-11895:
--

I can't reproduce this locally. Based on the problems in 
https://issues.apache.org/jira/browse/CASSANDRA-11799, I tried setting 
{{PYTHONIOENCODING=latin-1}}, and it did nothing. Anyone have thoughts on this?

> dtest failure in 
> cqlsh_tests.cqlsh_tests.TestCqlsh.test_unicode_invalid_request_error
> -
>
> Key: CASSANDRA-11895
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11895
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_dtest/470/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_unicode_invalid_request_error
> Failed on CassCI build cassandra-2.1_dtest #470
> This is after the fix for CASSANDRA-11799.





[jira] [Resolved] (CASSANDRA-12067) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_explicit_column_order_reading

2016-07-07 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton resolved CASSANDRA-12067.
---
Resolution: Fixed

We determined this was a result of [CASSANDRA-12032], which bumped the Netty 
version. We've reverted that commit.

Annotations have been removed on dtest with commit 
[440d09e0bd2ecb2e115f555e5dffa380d720ed55|https://github.com/riptano/cassandra-dtest/commit/440d09e0bd2ecb2e115f555e5dffa380d720ed55].

> dtest failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_explicit_column_order_reading
> --
>
> Key: CASSANDRA-12067
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12067
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Sean McCarthy
>Assignee: Joel Knighton
>  Labels: cqlsh, dtest
> Fix For: 3.x
>
> Attachments: node1.log, node1_debug.log, node1_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_novnode_dtest/405/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_explicit_column_order_reading
> Failed on CassCI build trunk_novnode_dtest #405
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", 
> line 1453, in test_explicit_column_order_reading
> self.assertCsvResultEqual(reference_file.name, results, 'testorder')
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", 
> line 318, in assertCsvResultEqual
> raise e
> "Element counts were not equal:\nFirst has 1, Second has 0:  ['1', 'ham', 
> '20']\nFirst has 1, Second has 0:  ['2', 'eggs', '40']\nFirst has 1, Second 
> has 0:  ['3', 'beans', '60']\nFirst has 1, Second has 0:  ['4', 'toast', '80']
> {code}





[jira] [Resolved] (CASSANDRA-12072) dtest failure in auth_test.TestAuthRoles.udf_permissions_in_selection_test

2016-07-07 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton resolved CASSANDRA-12072.
---
   Resolution: Fixed
Fix Version/s: 3.10
   3.9

We determined this was a result of [CASSANDRA-12032], which bumped the Netty 
version. We've reverted that commit.

Annotations have been removed on dtest with commit 
[440d09e0bd2ecb2e115f555e5dffa380d720ed55|https://github.com/riptano/cassandra-dtest/commit/440d09e0bd2ecb2e115f555e5dffa380d720ed55].

> dtest failure in auth_test.TestAuthRoles.udf_permissions_in_selection_test
> --
>
> Key: CASSANDRA-12072
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12072
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Joel Knighton
>  Labels: dtest
> Fix For: 3.9, 3.10
>
> Attachments: node1.log, node1_debug.log, node1_gc.log
>
>
> Multiple failures:
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/auth_test/TestAuthRoles/udf_permissions_in_selection_test
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/auth_test/TestAuthRoles/create_and_grant_roles_with_superuser_status_test/
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/auth_test/TestAuthRoles/drop_keyspace_cleans_up_function_level_permissions_test/
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_reading_max_parse_errors/
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_read_wrong_column_names/
> http://cassci.datastax.com/job/trunk_offheap_dtest/264/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_boolstyle_round_trip/
> http://cassci.datastax.com/job/trunk_offheap_dtest/264/testReport/compaction_test/TestCompaction_with_SizeTieredCompactionStrategy/disable_autocompaction_alter_test/
> http://cassci.datastax.com/job/trunk_offheap_dtest/264/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_describe/
> http://cassci.datastax.com/job/trunk_offheap_dtest/264/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_describe_mv/
> Logs are from 
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/auth_test/TestAuthRoles/udf_permissions_in_selection_test/





[jira] [Updated] (CASSANDRA-12067) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_explicit_column_order_reading

2016-07-07 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton updated CASSANDRA-12067:
--
Fix Version/s: (was: 3.x)

> dtest failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_explicit_column_order_reading
> --
>
> Key: CASSANDRA-12067
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12067
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Sean McCarthy
>Assignee: Joel Knighton
>  Labels: cqlsh, dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_novnode_dtest/405/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_explicit_column_order_reading
> Failed on CassCI build trunk_novnode_dtest #405
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", 
> line 1453, in test_explicit_column_order_reading
> self.assertCsvResultEqual(reference_file.name, results, 'testorder')
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", 
> line 318, in assertCsvResultEqual
> raise e
> "Element counts were not equal:\nFirst has 1, Second has 0:  ['1', 'ham', 
> '20']\nFirst has 1, Second has 0:  ['2', 'eggs', '40']\nFirst has 1, Second 
> has 0:  ['3', 'beans', '60']\nFirst has 1, Second has 0:  ['4', 'toast', '80']
> {code}





[jira] [Resolved] (CASSANDRA-12058) dtest failure in cqlsh_tests.cqlsh_tests.TestCqlsh.test_describe

2016-07-07 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton resolved CASSANDRA-12058.
---
Resolution: Fixed

We determined this was a result of [CASSANDRA-12032], which bumped the Netty 
version. We've reverted that commit.

Annotations have been removed on dtest with commit 
[440d09e0bd2ecb2e115f555e5dffa380d720ed55|https://github.com/riptano/cassandra-dtest/commit/440d09e0bd2ecb2e115f555e5dffa380d720ed55].

> dtest failure in cqlsh_tests.cqlsh_tests.TestCqlsh.test_describe
> 
>
> Key: CASSANDRA-12058
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12058
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Joel Knighton
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1283/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_describe
> Failed on CassCI build trunk_dtest #1283
> Logs are attached.
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_tests.py", line 
> 726, in test_describe
> self.execute(cql="DESCRIBE test.users2", expected_err="'users2' not found 
> in keyspace 'test'")
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_tests.py", line 
> 986, in execute
> self.check_response(err, expected_err)
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_tests.py", line 
> 999, in check_response
> self.assertEqual(expected_lines, lines)
>   File "/usr/lib/python2.7/unittest/case.py", line 513, in assertEqual
> assertion_func(first, second, msg=msg)
>   File "/usr/lib/python2.7/unittest/case.py", line 742, in assertListEqual
> self.assertSequenceEqual(list1, list2, msg, seq_type=list)
>   File "/usr/lib/python2.7/unittest/case.py", line 724, in assertSequenceEqual
> self.fail(msg)
>   File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
> raise self.failureException(msg)
> {code}
> {code}
> Error Message
> Lists differ: ["'users2' not found in keyspa... != ["error: ('Unable to 
> complete ...
> First differing element 0:
> 'users2' not found in keyspace 'test'
> error: ('Unable to complete the operation against any hosts', {})
> - ["'users2' not found in keyspace 'test'"]
> + ["error: ('Unable to complete the operation against any hosts', {})"]
> {code}





[jira] [Resolved] (CASSANDRA-12059) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_reading_with_ttl

2016-07-07 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton resolved CASSANDRA-12059.
---
Resolution: Fixed

We determined this was a result of [CASSANDRA-12032], which bumped the Netty 
version. We've reverted that commit.

Annotations have been removed on dtest with commit 
[440d09e0bd2ecb2e115f555e5dffa380d720ed55|https://github.com/riptano/cassandra-dtest/commit/440d09e0bd2ecb2e115f555e5dffa380d720ed55].

> dtest failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_reading_with_ttl
> -
>
> Key: CASSANDRA-12059
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12059
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Joel Knighton
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1283/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_reading_with_ttl
> Failed on CassCI build trunk_dtest #1283
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/tools.py", line 288, in wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", 
> line 850, in test_reading_with_ttl
> self.assertItemsEqual(data, result)
>   File "/usr/lib/python2.7/unittest/case.py", line 901, in assertItemsEqual
> self.fail(msg)
>   File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
> raise self.failureException(msg)
> "Element counts were not equal:\nFirst has 1, Second has 0:  [1, 20]\nFirst 
> has 1, Second has 0:  [2, 40]
> {code}
> Logs are attached.





[jira] [Resolved] (CASSANDRA-12061) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_round_trip_with_different_number_precision

2016-07-07 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton resolved CASSANDRA-12061.
---
Resolution: Fixed

We determined this was a result of [CASSANDRA-12032], which bumped the Netty 
version. We've reverted that commit.

Annotations have been removed on dtest with commit 
[440d09e0bd2ecb2e115f555e5dffa380d720ed55|https://github.com/riptano/cassandra-dtest/commit/440d09e0bd2ecb2e115f555e5dffa380d720ed55].

> dtest failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_round_trip_with_different_number_precision
> ---
>
> Key: CASSANDRA-12061
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12061
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Joel Knighton
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1284/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_round_trip_with_different_number_precision
> Failed on CassCI build trunk_dtest #1284
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/tools.py", line 288, in wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", 
> line 2046, in test_round_trip_with_different_number_precision
> do_test(None, None)
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", 
> line 2044, in do_test
> self.assertItemsEqual(sorted(list(csv_rows(tempfile1.name))), 
> sorted(list(csv_rows(tempfile2.name
>   File "/usr/lib/python2.7/unittest/case.py", line 901, in assertItemsEqual
> self.fail(msg)
>   File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
> raise self.failureException(msg)
> "Element counts were not equal:\nFirst has 1, Second has 0:  ['1', '1.1235', 
> '1.12345678912']
> {code}
> Logs are attached.





[jira] [Commented] (CASSANDRA-9293) Unit tests should fail if any LEAK DETECTED errors are printed

2016-07-07 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15366620#comment-15366620
 ] 

Joshua McKenzie commented on CASSANDRA-9293:


Still relevant. Unassigning from test eng and we'll prioritize it on our end.

> Unit tests should fail if any LEAK DETECTED errors are printed
> --
>
> Key: CASSANDRA-9293
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9293
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benedict
>Assignee: DS Test Eng
>  Labels: test
> Attachments: 9293.txt
>
>
> We shouldn't depend on dtests to inform us of these problems (which have 
> error log monitoring) - they should be caught by unit tests, which may also 
> cover different failure conditions (besides being faster).
> There are a couple of ways we could do this, but probably the easiest is to 
> add a static flag that is set to true if we ever see a leak (in Ref), and to 
> just assert that this is false at the end of every test.
> [~enigmacurry] is this something TE can help with?
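The static-flag idea described in the ticket can be sketched as follows 
(a Python stand-in for the Java {{Ref}} class; the names here are illustrative, 
not Cassandra's actual API):

```python
class Ref:
    # Static flag: set once if any resource leak is ever reported,
    # never cleared during the test run.
    leak_detected = False

    @classmethod
    def report_leak(cls):
        cls.leak_detected = True


def assert_no_leaks():
    # Called at the end of every unit test (e.g. from a teardown hook).
    assert not Ref.leak_detected, "LEAK DETECTED during test run"


assert_no_leaks()         # passes: no leak reported yet
Ref.report_leak()
print(Ref.leak_detected)  # True -> the next assert_no_leaks() would fail
```

This turns a log-monitoring concern into an ordinary assertion failure, so 
unit tests catch leaks directly instead of relying on dtest log scanning.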





[jira] [Updated] (CASSANDRA-9293) Unit tests should fail if any LEAK DETECTED errors are printed

2016-07-07 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-9293:
---
Assignee: (was: DS Test Eng)

> Unit tests should fail if any LEAK DETECTED errors are printed
> --
>
> Key: CASSANDRA-9293
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9293
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benedict
>  Labels: test
> Attachments: 9293.txt
>
>
> We shouldn't depend on dtests to inform us of these problems (which have 
> error log monitoring) - they should be caught by unit tests, which may also 
> cover different failure conditions (besides being faster).
> There are a couple of ways we could do this, but probably the easiest is to 
> add a static flag that is set to true if we ever see a leak (in Ref), and to 
> just assert that this is false at the end of every test.
> [~enigmacurry] is this something TE can help with?





[jira] [Commented] (CASSANDRA-9293) Unit tests should fail if any LEAK DETECTED errors are printed

2016-07-07 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15366605#comment-15366605
 ] 

Jim Witschey commented on CASSANDRA-9293:
-

Is this problem still relevant? [~JoshuaMcKenzie] Could you have a look please?

> Unit tests should fail if any LEAK DETECTED errors are printed
> --
>
> Key: CASSANDRA-9293
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9293
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benedict
>Assignee: DS Test Eng
>  Labels: test
> Attachments: 9293.txt
>
>
> We shouldn't depend on dtests to inform us of these problems (which have 
> error log monitoring) - they should be caught by unit tests, which may also 
> cover different failure conditions (besides being faster).
> There are a couple of ways we could do this, but probably the easiest is to 
> add a static flag that is set to true if we ever see a leak (in Ref), and to 
> just assert that this is false at the end of every test.
> [~enigmacurry] is this something TE can help with?





[jira] [Commented] (CASSANDRA-12137) dtest failure in sstable_generation_loading_test.TestSSTableGenerationAndLoading.sstableloader_compression_deflate_to_snappy_test

2016-07-07 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15366593#comment-15366593
 ] 

Jim Witschey commented on CASSANDRA-12137:
--

Filed a PR to address the problem with accessing {{sstableloader}}:

https://github.com/riptano/cassandra-dtest/pull/1080

Reassigning this to TE while we wait.

> dtest failure in 
> sstable_generation_loading_test.TestSSTableGenerationAndLoading.sstableloader_compression_deflate_to_snappy_test
> -
>
> Key: CASSANDRA-12137
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12137
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Jim Witschey
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_dtest/764/testReport/sstable_generation_loading_test/TestSSTableGenerationAndLoading/sstableloader_compression_deflate_to_snappy_test
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/sstable_generation_loading_test.py", 
> line 75, in sstableloader_compression_deflate_to_snappy_test
> self.load_sstable_with_configuration('Deflate', 'Snappy')
>   File "/home/automaton/cassandra-dtest/sstable_generation_loading_test.py", 
> line 178, in load_sstable_with_configuration
> "sstableloader exited with a non-zero status: {}".format(exit_status))
>   File "/usr/lib/python2.7/unittest/case.py", line 513, in assertEqual
> assertion_func(first, second, msg=msg)
>   File "/usr/lib/python2.7/unittest/case.py", line 506, in _baseAssertEqual
> raise self.failureException(msg)
> "sstableloader exited with a non-zero status: 1
> {code}
> Related failures:
> http://cassci.datastax.com/job/cassandra-3.0_dtest/764/testReport/sstable_generation_loading_test/TestSSTableGenerationAndLoading/sstableloader_compression_none_to_snappy_test/
> http://cassci.datastax.com/job/cassandra-3.0_dtest/764/testReport/sstable_generation_loading_test/TestSSTableGenerationAndLoading/sstableloader_with_mv_test/
> Failed on CassCI build cassandra-3.0_dtest #764





[jira] [Updated] (CASSANDRA-12137) dtest failure in sstable_generation_loading_test.TestSSTableGenerationAndLoading.sstableloader_compression_deflate_to_snappy_test

2016-07-07 Thread Jim Witschey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Witschey updated CASSANDRA-12137:
-
Assignee: DS Test Eng  (was: Jim Witschey)

> dtest failure in 
> sstable_generation_loading_test.TestSSTableGenerationAndLoading.sstableloader_compression_deflate_to_snappy_test
> -
>
> Key: CASSANDRA-12137
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12137
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_dtest/764/testReport/sstable_generation_loading_test/TestSSTableGenerationAndLoading/sstableloader_compression_deflate_to_snappy_test
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/sstable_generation_loading_test.py", 
> line 75, in sstableloader_compression_deflate_to_snappy_test
> self.load_sstable_with_configuration('Deflate', 'Snappy')
>   File "/home/automaton/cassandra-dtest/sstable_generation_loading_test.py", 
> line 178, in load_sstable_with_configuration
> "sstableloader exited with a non-zero status: {}".format(exit_status))
>   File "/usr/lib/python2.7/unittest/case.py", line 513, in assertEqual
> assertion_func(first, second, msg=msg)
>   File "/usr/lib/python2.7/unittest/case.py", line 506, in _baseAssertEqual
> raise self.failureException(msg)
> "sstableloader exited with a non-zero status: 1
> {code}
> Related failures:
> http://cassci.datastax.com/job/cassandra-3.0_dtest/764/testReport/sstable_generation_loading_test/TestSSTableGenerationAndLoading/sstableloader_compression_none_to_snappy_test/
> http://cassci.datastax.com/job/cassandra-3.0_dtest/764/testReport/sstable_generation_loading_test/TestSSTableGenerationAndLoading/sstableloader_with_mv_test/
> Failed on CassCI build cassandra-3.0_dtest #764





[jira] [Commented] (CASSANDRA-9318) Bound the number of in-flight requests at the coordinator

2016-07-07 Thread Sergio Bossa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15366583#comment-15366583
 ] 

Sergio Bossa commented on CASSANDRA-9318:
-

[~Stefania], [~slebresne],

I've pushed a few more commits to address your concerns.

First of all, I've got rid of the back-pressure timeout: the back-pressure 
window for the rate-based algorithm is now equal to the write timeout, and the 
overall implementation has been improved to better track in/out rates and avoid 
the need for a larger window. More specifically, the rates are now tracked 
together when either a response is received or the callback expires, which 
avoids edge cases causing an unbalanced in/out rate when a burst of outgoing 
messages is recorded on the edge of a window.
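A minimal sketch of the rate-based idea (illustrative Python, not the actual 
Java patch; class and method names are assumptions): outgoing requests and 
incoming responses or expirations are counted over a window equal to the write 
timeout, and their ratio indicates how far a replica is falling behind.

```python
class RateBasedBackPressureState:
    """Illustrative per-replica back-pressure state; names and the
    window default are assumptions, not the patch's actual API."""

    def __init__(self, window_seconds=2.0):  # ~ write request timeout
        self.window_seconds = window_seconds
        self.outgoing = 0
        self.incoming = 0

    def on_request_sent(self):
        self.outgoing += 1

    def on_response_or_expiry(self):
        # Counted at response arrival OR callback expiry, so a burst on
        # the window edge cannot leave the in/out rates unbalanced.
        self.incoming += 1

    def ratio(self):
        # < 1.0 means the replica is falling behind; a strategy can use
        # this to throttle, or in the extreme signal overload.
        return self.incoming / self.outgoing if self.outgoing else 1.0


state = RateBasedBackPressureState()
for _ in range(10):
    state.on_request_sent()
for _ in range(7):
    state.on_response_or_expiry()
print(state.ratio())  # 0.7
```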

Also, I've abstracted {{BackPressureState}} into an interface as requested.

Configuration-wise, we're now left with only the {{back_pressure_enabled}} 
boolean and the {{back_pressure_strategy}}, and I'd really like to keep the 
former, as it makes it much easier to turn back-pressure on and off dynamically.

Talking about the overloaded state and the usage of {{OverloadedException}}, I 
agree the latter might be misleading, and I agree some failure conditions could 
lead to requests being wrongly refused, but I'd also like to keep some form of 
"emergency" feedback towards the client: what about throwing OE only if _all_ 
replicas (or a given number depending on the CL?) are overloaded?

Regarding when and how to ship this, I'm fine with trunk and I agree it should 
be off by default for now.

Finally, one more wild idea to consider: given this patch greatly reduces the 
number of dropped mutations, and hence the number of in-flight hints, what do 
you think about disabling load shedding on the replica side when back-pressure 
is enabled? This way we'd trade "full consistency" for a hopefully smaller 
number of unnecessary hints sent over to "pressured" replicas when their 
callbacks expire on the coordinator side.
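To make the rate-based idea above concrete, here is a minimal, hypothetical sketch (class and method names are mine, not the actual CASSANDRA-9318 patch): per replica, count outgoing requests and incoming responses (or expired callbacks) over a window equal to the write timeout, and scale the allowed outgoing rate by the observed in/out ratio.

```java
// Hypothetical sketch of a rate-based back-pressure strategy (not the actual
// patch): the outgoing request rate toward a replica is scaled by the ratio
// of responses received to requests sent within the current window.
public final class RateBasedBackPressure {
    private long outgoing;  // requests sent in the current window
    private long incoming;  // responses received or callbacks expired in the window

    public synchronized void onRequestSent() { outgoing++; }

    // Counted both on response and on callback expiry, so a burst on the
    // edge of a window does not leave the in/out ratio permanently unbalanced.
    public synchronized void onResponseOrExpiry() { incoming++; }

    /** Ratio in [0, 1]: 1 means the replica is keeping up, lower means throttle. */
    public synchronized double ratio() {
        return outgoing == 0 ? 1.0 : Math.min(1.0, (double) incoming / outgoing);
    }

    /** Allowed outgoing rate given a baseline requests/sec. */
    public synchronized double allowedRate(double baselineRate) {
        return baselineRate * ratio();
    }

    /** Called when the window (equal to the write timeout) rolls over. */
    public synchronized void resetWindow() { outgoing = 0; incoming = 0; }
}
```

A coordinator would consult {{allowedRate}} before flushing outgoing mutations to each replica, so a replica answering only half its requests sees its incoming rate halved.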

> Bound the number of in-flight requests at the coordinator
> -
>
> Key: CASSANDRA-9318
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9318
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths, Streaming and Messaging
>Reporter: Ariel Weisberg
>Assignee: Sergio Bossa
> Attachments: 9318-3.0-nits-trailing-spaces.patch, backpressure.png, 
> limit.btm, no_backpressure.png
>
>
> It's possible to somewhat bound the amount of load accepted into the cluster 
> by bounding the number of in-flight requests and request bytes.
> An implementation might do something like track the number of outstanding 
> bytes and requests and if it reaches a high watermark disable read on client 
> connections until it goes back below some low watermark.
> Need to make sure that disabling read on the client connection won't 
> introduce other issues.





[jira] [Created] (CASSANDRA-12148) Improve determinism of CDC data availability

2016-07-07 Thread Joshua McKenzie (JIRA)
Joshua McKenzie created CASSANDRA-12148:
---

 Summary: Improve determinism of CDC data availability
 Key: CASSANDRA-12148
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12148
 Project: Cassandra
  Issue Type: Improvement
Reporter: Joshua McKenzie
Assignee: Joshua McKenzie


The latency with which CDC data becomes available has a known limitation due to 
our reliance on CommitLogSegments being discarded before the data appears in 
cdc_raw: if a slowly written table shares a CommitLogSegment with CDC data, the 
CommitLogSegment won't be flushed until we hit either memory pressure on 
memtables or CommitLog limit pressure. Ultimately, this makes the point at 
which data becomes available for CDC consumption non-deterministic unless a 
consumer parses live CommitLogSegments.

To work around this limitation and make semi-realtime CDC consumption more 
friendly to end-users, I propose we extend CDC as follows:
h6. High level:
* Consumers parse hard links of active CommitLogSegments in cdc_raw instead of 
waiting for flush/discard and file move
* C* stores an offset of the highest seen CDC mutation in a separate idx file 
per commit log segment in cdc_raw. Clients tail this index file, compare it 
against their local last-parsed offset on change, and parse the corresponding 
commit log segment starting from their last-parsed offset
* C* flags that index file with a final offset and a DONE marker when the file 
is flushed, so clients know when they can clean up

h6. Details:
* On creation of a CommitLogSegment, also hard-link the file in cdc_raw
* On first write of a CDC-enabled mutation to a segment, we:
** Flag it as {{CDCState.CONTAINS}}
** Set a long tracking the {{CommitLogPosition}} of the 1st CDC-enabled 
mutation in the log
** Set a long in the CommitLogSegment tracking the offset of the end of the 
last written CDC mutation in the segment if higher than the previously known 
highest CDC offset
* On subsequent writes to the segment, we update the offset of the highest 
known CDC data
* On CommitLogSegment fsync, we write a file in cdc_raw named 
<segment_name>_cdc.idx containing the min offset and the end offset fsynced to 
disk per file
* On segment discard, if CDCState == {{CDCState.PERMITTED}}, delete both the 
segment in commitlog and in cdc_raw
* On segment discard, if CDCState == {{CDCState.CONTAINS}}, delete the segment 
in commitlog and update the <segment_name>_cdc.idx file with the end offset and 
a DONE marker
* On segment replay, store the highest end offset of seen CDC-enabled mutations 
from a segment and write that to <segment_name>_cdc.idx on completion of 
segment replay. This should bridge the potential correctness gap of a node 
writing to a segment and then dying before it can write the 
<segment_name>_cdc.idx file.

This should allow clients to skip ahead to the 1st CDC mutation in a file, 
track an offset of how far they've parsed, compare it against the 
<segment_name>_cdc.idx file's end offset, and use that to determine when to 
parse new CDC data. Any existing clients written against the initial 
implementation of CDC need only add the <segment_name>_cdc.idx logic and a 
check for the DONE marker, so the burden on users to adopt this should be quite 
small for the benefit of having data available as soon as it's fsynced, instead 
of at a non-deterministic time when potentially unrelated tables are flushed.
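The client side of the proposal can be sketched roughly as follows. This is illustrative only: the idx file format here (an offset on the first line, an optional DONE marker on the second) is my assumption about the design above, not a specified format.

```java
// Hypothetical sketch of a CDC client parsing a *_cdc.idx file: extract the
// fsynced end offset and the DONE marker, then decide how many new bytes of
// the corresponding commit log segment can be read.
public final class CdcIdxState {
    public final long endOffset;  // highest offset fsynced to disk
    public final boolean done;    // segment flushed/discarded: safe to finish and clean up

    private CdcIdxState(long endOffset, boolean done) {
        this.endOffset = endOffset;
        this.done = done;
    }

    /** Parses idx content such as "16384" or "16384\nDONE". */
    public static CdcIdxState parse(String idxContent) {
        String[] lines = idxContent.trim().split("\\R");
        long offset = Long.parseLong(lines[0].trim());
        boolean done = lines.length > 1 && "DONE".equals(lines[1].trim());
        return new CdcIdxState(offset, done);
    }

    /** Bytes newly available since the client's last parsed offset. */
    public long newBytes(long lastParsedOffset) {
        return Math.max(0, endOffset - lastParsedOffset);
    }
}
```

A consumer would tail the idx file, call {{parse}} on change, read up to {{endOffset}} in the hard-linked segment, and delete its state for the segment once {{done}} is observed.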

Finally, we should look into extending the interface on CommitLogReader to be 
more friendly for realtime parsing, perhaps supporting taking a 
CommitLogDescriptor and RandomAccessReader and resuming readSection calls, 
assuming the reader is at the start of a SyncSegment. We would probably also 
need to rewind to the start of the segment before returning so subsequent calls 
would respect this contract. This would avoid having to deserialize the 
descriptor and all completed SyncSegments just to get to the desired segment 
for parsing.

One alternative we discussed offline - instead of just storing the highest seen 
CDC offset, we could instead store an offset per CDC mutation (potentially 
delta encoded) in the idx file to allow clients to seek and only parse the 
mutations with CDC enabled. My hunch is that the performance delta from doing 
so wouldn't justify the complexity given the SyncSegment deserialization and 
seeking restrictions in the compressed and encrypted cases as mentioned above.

The only complication I can think of with the above design is uncompressed 
mmapped CommitLogSegments on Windows being undeletable, but it'd be pretty 
simple to disallow configuration of CDC w/uncompressed CommitLog on that 
environment.

And as a final note: while the above might sound involved, it really shouldn't 
be a big change from where we are with v1 of CDC, either in C* complexity and 
code or from a client implementation perspective.





[jira] [Updated] (CASSANDRA-11752) histograms/metrics in 2.2 do not appear recency biased

2016-07-07 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-11752:
---
Status: Open  (was: Patch Available)

> histograms/metrics in 2.2 do not appear recency biased
> --
>
> Key: CASSANDRA-11752
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11752
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Chris Burroughs
>Assignee: Per Otterström
>  Labels: metrics
> Fix For: 2.2.8
>
> Attachments: 11752-2.2.txt, boost-metrics.png, 
> c-jconsole-comparison.png, c-metrics.png, default-histogram.png
>
>
> In addition to upgrading to metrics3, CASSANDRA-5657 switched to using a 
> custom histogram implementation.  After upgrading to Cassandra 2.2, 
> histogram/timer metrics are now suspiciously flat.  To be useful for 
> graphing and alerting, metrics need to be biased towards recent events.
> I have attached images that I think illustrate this.
>  * The first two are a comparison between latency observed by a C* 2.2 (us) 
> cluster showing very flat lines and a client (using metrics 2.2.0, ms) 
> showing server performance problems.  We can't rule out with total certainty 
> that something else is the cause (that's why we measure from both the 
> client & server) but they very rarely disagree.
>  * The 3rd image compares jconsole views of metrics on a 2.2 and a 2.1 
> cluster over several minutes.  Not a single digit changed on the 2.2 cluster.
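The flatness described above is what a summary without recency bias looks like. An illustrative (not Cassandra's actual histogram code) comparison: a cumulative mean over all samples barely moves after latencies jump, while an exponentially weighted moving average, one simple form of recency bias, tracks the shift almost immediately.

```java
// Illustrative only: why a non-decaying summary goes "flat" for alerting.
public final class DecayDemo {
    /** Mean over every sample ever seen: new events are diluted by history. */
    public static double cumulativeMean(double[] samples) {
        double sum = 0;
        for (double s : samples) sum += s;
        return sum / samples.length;
    }

    /** Exponentially weighted moving average: recent samples dominate. */
    public static double ewma(double[] samples, double alpha) {
        double m = samples[0];
        for (int i = 1; i < samples.length; i++)
            m = alpha * samples[i] + (1 - alpha) * m;
        return m;
    }

    public static void main(String[] args) {
        // 1900 samples at 1ms, then a sustained spike to 100ms
        double[] samples = new double[2000];
        for (int i = 0; i < 2000; i++) samples[i] = i < 1900 ? 1.0 : 100.0;
        System.out.printf("cumulative=%.2f ewma=%.2f%n",
                cumulativeMean(samples), ewma(samples, 0.1));
    }
}
```

After the spike, the cumulative mean sits near 6ms while the EWMA is near 100ms, which is why a non-biased histogram looks flat on a dashboard even when the server is struggling.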





[jira] [Updated] (CASSANDRA-12109) Configuring SSL for JMX connections forces requirement of local truststore

2016-07-07 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-12109:
---
Status: Awaiting Feedback  (was: Open)

> Configuring SSL for JMX connections forces requirement of local truststore
> --
>
> Key: CASSANDRA-12109
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12109
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration, Lifecycle, Observability
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
> Fix For: 3.x
>
>
> In CASSANDRA-10091 we changed the way the JMX server is constructed such that 
> this is always done programmatically, which gives us control over the 
> authentication and authorization mechanisms. Previously, when 
> {{LOCAL_JMX=no}}, Cassandra would allow the JMX setup to be done by the 
> built-in JVM agent, which delegates to 
> {{sun.management.jmxremote.ConnectorBootstrap}} to do the actual JMX & RMI 
> setup. 
> This change has introduced a regression when SSL is enabled for JMX 
> connections, namely that now it is not possible to start C* with only the 
> server-side elements of the SSL setup specified. That is, if enabling SSL 
> with {{com.sun.management.jmxremote.ssl=true}}, it should only be necessary 
> to specify a keystore (via {{javax.net.ssl.keyStore}}), and a truststore 
> should only be necessary if client authentication is also enabled 
> ({{com.sun.management.jmxremote.ssl.need.client.auth=true}}). 
> As it is, C* cannot currently startup without a truststore containing the 
> server's own certificate, which is clearly a bug.
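The expected behavior described in the report can be sketched as a small validation rule (hypothetical code, not Cassandra's implementation): a keystore is mandatory whenever JMX SSL is on, but a truststore should only be demanded when client certificate authentication is also enabled.

```java
// Hypothetical sketch of the intended JMX SSL startup validation: with
// com.sun.management.jmxremote.ssl=true a keystore is required, but a
// truststore only becomes required if client auth is enabled as well.
public final class JmxSslConfigCheck {
    public static String validate(boolean sslEnabled, String keyStore,
                                  boolean needClientAuth, String trustStore) {
        if (!sslEnabled) return "ok";
        if (keyStore == null) return "error: keystore required for SSL";
        if (needClientAuth && trustStore == null)
            return "error: truststore required for client auth";
        return "ok";  // server-side-only SSL must not require a truststore
    }
}
```

Under this rule, the reported configuration (SSL with keystore only, no client auth) starts up fine, which is the behavior the pre-CASSANDRA-10091 JVM agent provided.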





[jira] [Updated] (CASSANDRA-12109) Configuring SSL for JMX connections forces requirement of local truststore

2016-07-07 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-12109:
---
Status: Open  (was: Patch Available)

> Configuring SSL for JMX connections forces requirement of local truststore
> --
>
> Key: CASSANDRA-12109
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12109
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration, Lifecycle, Observability
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
> Fix For: 3.x
>
>
> In CASSANDRA-10091 we changed the way the JMX server is constructed such that 
> this is always done programmatically, which gives us control over the 
> authentication and authorization mechanisms. Previously, when 
> {{LOCAL_JMX=no}}, Cassandra would allow the JMX setup to be done by the 
> built-in JVM agent, which delegates to 
> {{sun.management.jmxremote.ConnectorBootstrap}} to do the actual JMX & RMI 
> setup. 
> This change has introduced a regression when SSL is enabled for JMX 
> connections, namely that now it is not possible to start C* with only the 
> server-side elements of the SSL setup specified. That is, if enabling SSL 
> with {{com.sun.management.jmxremote.ssl=true}}, it should only be necessary 
> to specify a keystore (via {{javax.net.ssl.keyStore}}), and a truststore 
> should only be necessary if client authentication is also enabled 
> ({{com.sun.management.jmxremote.ssl.need.client.auth=true}}). 
> As it is, C* cannot currently startup without a truststore containing the 
> server's own certificate, which is clearly a bug.





[jira] [Updated] (CASSANDRA-11752) histograms/metrics in 2.2 do not appear recency biased

2016-07-07 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-11752:
---
Status: Awaiting Feedback  (was: Open)

> histograms/metrics in 2.2 do not appear recency biased
> --
>
> Key: CASSANDRA-11752
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11752
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Chris Burroughs
>Assignee: Per Otterström
>  Labels: metrics
> Fix For: 2.2.8
>
> Attachments: 11752-2.2.txt, boost-metrics.png, 
> c-jconsole-comparison.png, c-metrics.png, default-histogram.png
>
>
> In addition to upgrading to metrics3, CASSANDRA-5657 switched to using a 
> custom histogram implementation.  After upgrading to Cassandra 2.2, 
> histogram/timer metrics are now suspiciously flat.  To be useful for 
> graphing and alerting, metrics need to be biased towards recent events.
> I have attached images that I think illustrate this.
>  * The first two are a comparison between latency observed by a C* 2.2 (us) 
> cluster showing very flat lines and a client (using metrics 2.2.0, ms) 
> showing server performance problems.  We can't rule out with total certainty 
> that something else is the cause (that's why we measure from both the 
> client & server) but they very rarely disagree.
>  * The 3rd image compares jconsole views of metrics on a 2.2 and a 2.1 
> cluster over several minutes.  Not a single digit changed on the 2.2 cluster.





[jira] [Updated] (CASSANDRA-12147) Static thrift tables with non UTF8Type comparators can have column names converted incorrectly

2016-07-07 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-12147:

Status: Ready to Commit  (was: Patch Available)

> Static thrift tables with non UTF8Type comparators can have column names 
> converted incorrectly
> --
>
> Key: CASSANDRA-12147
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12147
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
> Fix For: 3.8, 3.0.x
>
>
> {{CompactTables::columnDefinitionComparator()}} has been broken since 
> CASSANDRA-8099 for non-super columnfamilies, if the comparator is not 
> {{UTF8Type}}. This results in being unable to read some pre-existing 2.x data 
> post upgrade (it's not lost, but becomes inaccessible).





[jira] [Commented] (CASSANDRA-10993) Make read and write requests paths fully non-blocking, eliminate related stages

2016-07-07 Thread Norman Maurer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15366364#comment-15366364
 ] 

Norman Maurer commented on CASSANDRA-10993:
---

[~thobbs] basically we made it final to guard against users depending on 
methods that we may want to remove later on. Can you explain a bit what you're 
trying to do, or show some code, so I can better understand the use case? 

> Make read and write requests paths fully non-blocking, eliminate related 
> stages
> ---
>
> Key: CASSANDRA-10993
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10993
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Coordination, Local Write-Read Paths
>Reporter: Aleksey Yeschenko
>Assignee: Tyler Hobbs
> Fix For: 3.x
>
>
> Building on work done by [~tjake] (CASSANDRA-10528), [~slebresne] 
> (CASSANDRA-5239), and others, convert read and write request paths to be 
> fully non-blocking, to enable the eventual transition from SEDA to TPC 
> (CASSANDRA-10989)
> Eliminate {{MUTATION}}, {{COUNTER_MUTATION}}, {{VIEW_MUTATION}}, {{READ}}, 
> and {{READ_REPAIR}} stages, move read and write execution directly to Netty 
> context.
> For lack of decent async I/O options on Linux, we’ll still have to retain an 
> extra thread pool for serving read requests for data not residing in our page 
> cache (CASSANDRA-5863), however.
> Implementation-wise, we only have two options available to us: explicit FSMs 
> and chained futures. Fibers would be the third, and easiest option, but 
> aren’t feasible in Java without resorting to direct bytecode manipulation 
> (ourselves or using [quasar|https://github.com/puniverse/quasar]).
> I have seen 4 implementations based on chained futures/promises now - three 
> in Java and one in C++ - and I’m not convinced that it’s the optimal (or 
> sane) choice for representing our complex logic - think 2i quorum read 
> requests with timeouts at all levels, read repair (blocking and 
> non-blocking), and speculative retries in the mix, {{SERIAL}} reads and 
> writes.
> I’m currently leaning towards an implementation based on explicit FSMs, and 
> intend to provide a prototype - soonish - for comparison with 
> {{CompletableFuture}}-like variants.
> Either way the transition is a relatively boring straightforward refactoring.
> There are, however, some extension points on both write and read paths that 
> we do not control:
> - authorisation implementations will have to be non-blocking. We have control 
> over built-in ones, but for any custom implementation we will have to execute 
> them in a separate thread pool
> - 2i hooks on the write path will need to be non-blocking
> - any trigger implementations will not be allowed to block
> - UDFs and UDAs
> We are further limited by API compatibility restrictions in the 3.x line, 
> forbidding us to alter, or add any non-{{default}} interface methods to those 
> extension points, so these pose a problem.
> Depending on logistics, expecting to get this done in time for 3.4 or 3.6 
> feature release.
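To make the FSM option discussed in the description concrete, here is a deliberately tiny, hypothetical sketch (not the prototype mentioned above): a quorum read modeled as an explicit state machine, where each event transitions the state rather than chaining futures.

```java
// Illustrative, hypothetical sketch of the "explicit FSM" option for a read
// request: responses, read repair completion, and timeouts are events that
// drive explicit state transitions instead of links in a future chain.
public final class ReadRequestFsm {
    public enum State { AWAITING_REPLICAS, AWAITING_READ_REPAIR, DONE, TIMED_OUT }

    private State state = State.AWAITING_REPLICAS;
    private int responses;
    private final int blockFor;  // e.g. quorum count

    public ReadRequestFsm(int blockFor) { this.blockFor = blockFor; }

    public State onResponse(boolean digestMismatch) {
        if (state != State.AWAITING_REPLICAS) return state;  // late responses ignored
        responses++;
        if (responses >= blockFor)
            state = digestMismatch ? State.AWAITING_READ_REPAIR : State.DONE;
        return state;
    }

    public State onReadRepairComplete() {
        if (state == State.AWAITING_READ_REPAIR) state = State.DONE;
        return state;
    }

    public State onTimeout() {
        if (state != State.DONE) state = State.TIMED_OUT;
        return state;
    }
}
```

The appeal over chained futures is that every legal transition is visible in one place, which matters once timeouts, read repair, and speculative retries all interleave.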





[jira] [Updated] (CASSANDRA-11345) Assertion Errors "Memory was freed" during streaming

2016-07-07 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-11345:
---
Status: Open  (was: Patch Available)

> Assertion Errors "Memory was freed" during streaming
> 
>
> Key: CASSANDRA-11345
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11345
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Jean-Francois Gosselin
>Assignee: Paulo Motta
>
> We encountered the following AssertionError (twice on the same node) during a 
> repair :
> On node /172.16.63.41
> {noformat}
> INFO  [STREAM-IN-/10.174.216.160] 2016-03-09 02:38:13,900 
> StreamResultFuture.java:180 - [Stream #f6980580-e55f-11e5-8f08-ef9e099ce99e] 
> Session with /10.174.216.160 is complete  
>   
> WARN  [STREAM-IN-/10.174.216.160] 2016-03-09 02:38:13,900 
> StreamResultFuture.java:207 - [Stream #f6980580-e55f-11e5-8f08-ef9e099ce99e] 
> Stream failed   
> ERROR [STREAM-OUT-/10.174.216.160] 2016-03-09 02:38:13,906 
> StreamSession.java:505 - [Stream #f6980580-e55f-11e5-8f08-ef9e099ce99e] 
> Streaming error occurred
> java.lang.AssertionError: Memory was freed
>   
>
> at 
> org.apache.cassandra.io.util.SafeMemory.checkBounds(SafeMemory.java:97) 
> ~[apache-cassandra-2.1.13.jar:2.1.13] 
>   
> at org.apache.cassandra.io.util.Memory.getLong(Memory.java:249) 
> ~[apache-cassandra-2.1.13.jar:2.1.13] 
>  
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.getTotalSizeForSections(CompressionMetadata.java:247)
>  ~[apache-cassandra-2.1.13.jar:2.1.13]
> at 
> org.apache.cassandra.streaming.messages.FileMessageHeader.size(FileMessageHeader.java:112)
>  ~[apache-cassandra-2.1.13.jar:2.1.13]
> at 
> org.apache.cassandra.streaming.StreamSession.fileSent(StreamSession.java:546) 
> ~[apache-cassandra-2.1.13.jar:2.1.13] 
> 
> at 
> org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:50)
>  ~[apache-cassandra-2.1.13.jar:2.1.13]  
> at 
> org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:41)
>  ~[apache-cassandra-2.1.13.jar:2.1.13]  
> at 
> org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:45)
>  ~[apache-cassandra-2.1.13.jar:2.1.13]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:351)
>  ~[apache-cassandra-2.1.13.jar:2.1.13]   
> at 
> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:331)
>  ~[apache-cassandra-2.1.13.jar:2.1.13]   
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65]
>   
>
> {noformat} 
> On node /10.174.216.160
>  
> {noformat}   
> ERROR [STREAM-OUT-/172.16.63.41] 2016-03-09 02:38:14,140 
> StreamSession.java:505 - [Stream #f6980580-e55f-11e5-8f08-ef9e099ce99e] 
> Streaming error occurred  
> java.io.IOException: Connection reset by peer 
>   
>
> at sun.nio.ch.FileDispatcherImpl.write0(Native Method) ~[na:1.7.0_65] 
>   
>
> at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) 
> ~[na:1.7.0_65]
>   
> at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) 
> ~[na:1.7.0_65]
>   
> at sun.nio.ch.IOUtil.write(IOUtil.java:65) ~[na:1.7.0_65] 
>   
>
> at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:487) 
> ~[na:1.7.0_65]
>
> at 
> org.apache.c

[jira] [Updated] (CASSANDRA-11345) Assertion Errors "Memory was freed" during streaming

2016-07-07 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-11345:
---
Status: Awaiting Feedback  (was: Open)

> Assertion Errors "Memory was freed" during streaming
> 
>
> Key: CASSANDRA-11345
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11345
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Jean-Francois Gosselin
>Assignee: Paulo Motta
>
> We encountered the following AssertionError (twice on the same node) during a 
> repair :
> On node /172.16.63.41
> {noformat}
> INFO  [STREAM-IN-/10.174.216.160] 2016-03-09 02:38:13,900 
> StreamResultFuture.java:180 - [Stream #f6980580-e55f-11e5-8f08-ef9e099ce99e] 
> Session with /10.174.216.160 is complete  
>   
> WARN  [STREAM-IN-/10.174.216.160] 2016-03-09 02:38:13,900 
> StreamResultFuture.java:207 - [Stream #f6980580-e55f-11e5-8f08-ef9e099ce99e] 
> Stream failed   
> ERROR [STREAM-OUT-/10.174.216.160] 2016-03-09 02:38:13,906 
> StreamSession.java:505 - [Stream #f6980580-e55f-11e5-8f08-ef9e099ce99e] 
> Streaming error occurred
> java.lang.AssertionError: Memory was freed
>   
>
> at 
> org.apache.cassandra.io.util.SafeMemory.checkBounds(SafeMemory.java:97) 
> ~[apache-cassandra-2.1.13.jar:2.1.13] 
>   
> at org.apache.cassandra.io.util.Memory.getLong(Memory.java:249) 
> ~[apache-cassandra-2.1.13.jar:2.1.13] 
>  
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.getTotalSizeForSections(CompressionMetadata.java:247)
>  ~[apache-cassandra-2.1.13.jar:2.1.13]
> at 
> org.apache.cassandra.streaming.messages.FileMessageHeader.size(FileMessageHeader.java:112)
>  ~[apache-cassandra-2.1.13.jar:2.1.13]
> at 
> org.apache.cassandra.streaming.StreamSession.fileSent(StreamSession.java:546) 
> ~[apache-cassandra-2.1.13.jar:2.1.13] 
> 
> at 
> org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:50)
>  ~[apache-cassandra-2.1.13.jar:2.1.13]  
> at 
> org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:41)
>  ~[apache-cassandra-2.1.13.jar:2.1.13]  
> at 
> org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:45)
>  ~[apache-cassandra-2.1.13.jar:2.1.13]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:351)
>  ~[apache-cassandra-2.1.13.jar:2.1.13]   
> at 
> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:331)
>  ~[apache-cassandra-2.1.13.jar:2.1.13]   
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65]
>   
>
> {noformat} 
> On node /10.174.216.160
>  
> {noformat}   
> ERROR [STREAM-OUT-/172.16.63.41] 2016-03-09 02:38:14,140 
> StreamSession.java:505 - [Stream #f6980580-e55f-11e5-8f08-ef9e099ce99e] 
> Streaming error occurred  
> java.io.IOException: Connection reset by peer 
>   
>
> at sun.nio.ch.FileDispatcherImpl.write0(Native Method) ~[na:1.7.0_65] 
>   
>
> at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) 
> ~[na:1.7.0_65]
>   
> at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) 
> ~[na:1.7.0_65]
>   
> at sun.nio.ch.IOUtil.write(IOUtil.java:65) ~[na:1.7.0_65] 
>   
>
> at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:487) 
> ~[na:1.7.0_65]
>
> at 
> org.apache

[jira] [Commented] (CASSANDRA-11345) Assertion Errors "Memory was freed" during streaming

2016-07-07 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15366355#comment-15366355
 ] 

Yuki Morishita commented on CASSANDRA-11345:


Thanks for the update. One more thing I want to discuss: I think we can 
calculate {{size()}} in the constructor and cache it in a {{final long size}} 
field. {{size()}} can be called at any time from several threads, so it seems 
safer that way.
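The suggestion above amounts to the following pattern (names are hypothetical, not the actual patch): compute the size eagerly while the underlying resources are guaranteed to be live, and store it in a final field so later cross-thread calls never touch memory that may have been freed.

```java
// Hypothetical sketch of caching size() in a final field at construction:
// the value is computed once while the backing data is valid, and the final
// field is safely published to any thread that later calls size().
public final class OutgoingFileMessageSketch {
    private final long size;  // immutable after construction: safe cross-thread

    public OutgoingFileMessageSketch(long headerSize, long payloadSize) {
        this.size = headerSize + payloadSize;  // computed eagerly, then cached
    }

    public long size() { return size; }
}
```

Besides thread safety, this sidesteps the "Memory was freed" assertion entirely: no call path can reach the compression metadata after it has been released.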

> Assertion Errors "Memory was freed" during streaming
> 
>
> Key: CASSANDRA-11345
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11345
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Jean-Francois Gosselin
>Assignee: Paulo Motta
>
> We encountered the following AssertionError (twice on the same node) during a 
> repair :
> On node /172.16.63.41
> {noformat}
> INFO  [STREAM-IN-/10.174.216.160] 2016-03-09 02:38:13,900 
> StreamResultFuture.java:180 - [Stream #f6980580-e55f-11e5-8f08-ef9e099ce99e] 
> Session with /10.174.216.160 is complete  
>   
> WARN  [STREAM-IN-/10.174.216.160] 2016-03-09 02:38:13,900 
> StreamResultFuture.java:207 - [Stream #f6980580-e55f-11e5-8f08-ef9e099ce99e] 
> Stream failed   
> ERROR [STREAM-OUT-/10.174.216.160] 2016-03-09 02:38:13,906 
> StreamSession.java:505 - [Stream #f6980580-e55f-11e5-8f08-ef9e099ce99e] 
> Streaming error occurred
> java.lang.AssertionError: Memory was freed
>   
>
> at 
> org.apache.cassandra.io.util.SafeMemory.checkBounds(SafeMemory.java:97) 
> ~[apache-cassandra-2.1.13.jar:2.1.13] 
>   
> at org.apache.cassandra.io.util.Memory.getLong(Memory.java:249) 
> ~[apache-cassandra-2.1.13.jar:2.1.13] 
>  
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.getTotalSizeForSections(CompressionMetadata.java:247)
>  ~[apache-cassandra-2.1.13.jar:2.1.13]
> at 
> org.apache.cassandra.streaming.messages.FileMessageHeader.size(FileMessageHeader.java:112)
>  ~[apache-cassandra-2.1.13.jar:2.1.13]
> at 
> org.apache.cassandra.streaming.StreamSession.fileSent(StreamSession.java:546) 
> ~[apache-cassandra-2.1.13.jar:2.1.13] 
> 
> at 
> org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:50)
>  ~[apache-cassandra-2.1.13.jar:2.1.13]  
> at 
> org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:41)
>  ~[apache-cassandra-2.1.13.jar:2.1.13]  
> at 
> org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:45)
>  ~[apache-cassandra-2.1.13.jar:2.1.13]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:351)
>  ~[apache-cassandra-2.1.13.jar:2.1.13]   
> at 
> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:331)
>  ~[apache-cassandra-2.1.13.jar:2.1.13]   
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65]
>   
>
> {noformat} 
> On node /10.174.216.160
>  
> {noformat}   
> ERROR [STREAM-OUT-/172.16.63.41] 2016-03-09 02:38:14,140 
> StreamSession.java:505 - [Stream #f6980580-e55f-11e5-8f08-ef9e099ce99e] 
> Streaming error occurred  
> java.io.IOException: Connection reset by peer 
>   
>
> at sun.nio.ch.FileDispatcherImpl.write0(Native Method) ~[na:1.7.0_65] 
>   
>
> at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) 
> ~[na:1.7.0_65]
>   
> at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) 
> ~[na:1.7.0_65]
>   
> at sun.nio.ch.IOUtil.write(IOUtil.java:65) ~[na:1.7.0_65] 
>  

[jira] [Commented] (CASSANDRA-10993) Make read and write requests paths fully non-blocking, eliminate related stages

2016-07-07 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15366347#comment-15366347
 ] 

Tyler Hobbs commented on CASSANDRA-10993:
-

I hit a bit of a roadblock yesterday with combining the event loops.  The 
combination of {{NioEventLoop}} being declared {{final}} and 
{{AbstractNioChannel.isCompatible()}} having an [{{instanceof 
NioEventLoop}}|https://github.com/netty/netty/blob/4.1/transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java#L371-L374]
 check makes this impossible without modifying netty.

[~norman] what's the reasoning behind making {{NioEventLoop}} final?  We'd like 
to explore merging our event loop task handling into a custom Netty event loop, 
but it looks like this isn't going to be an option with the current codebase.  
Any recommendations?

In the meantime, I'm going to focus on the read path.

> Make read and write requests paths fully non-blocking, eliminate related 
> stages
> ---
>
> Key: CASSANDRA-10993
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10993
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Coordination, Local Write-Read Paths
>Reporter: Aleksey Yeschenko
>Assignee: Tyler Hobbs
> Fix For: 3.x
>
>
> Building on work done by [~tjake] (CASSANDRA-10528), [~slebresne] 
> (CASSANDRA-5239), and others, convert read and write request paths to be 
> fully non-blocking, to enable the eventual transition from SEDA to TPC 
> (CASSANDRA-10989)
> Eliminate {{MUTATION}}, {{COUNTER_MUTATION}}, {{VIEW_MUTATION}}, {{READ}}, 
> and {{READ_REPAIR}} stages, move read and write execution directly to Netty 
> context.
> For lack of decent async I/O options on Linux, we’ll still have to retain an 
> extra thread pool for serving read requests for data not residing in our page 
> cache (CASSANDRA-5863), however.
> Implementation-wise, we only have two options available to us: explicit FSMs 
> and chained futures. Fibers would be the third, and easiest option, but 
> aren’t feasible in Java without resorting to direct bytecode manipulation 
> (ourselves or using [quasar|https://github.com/puniverse/quasar]).
> I have seen 4 implementations based on chained futures/promises now - three 
> in Java and one in C++ - and I’m not convinced that it’s the optimal (or 
> sane) choice for representing our complex logic - think 2i quorum read 
> requests with timeouts at all levels, read repair (blocking and 
> non-blocking), and speculative retries in the mix, {{SERIAL}} reads and 
> writes.
> I’m currently leaning towards an implementation based on explicit FSMs, and 
> intend to provide a prototype - soonish - for comparison with 
> {{CompletableFuture}}-like variants.
> Either way the transition is a relatively boring straightforward refactoring.
> There are, however, some extension points on both write and read paths that 
> we do not control:
> - authorisation implementations will have to be non-blocking. We have control 
> over built-in ones, but for any custom implementation we will have to execute 
> them in a separate thread pool
> - 2i hooks on the write path will need to be non-blocking
> - any trigger implementations will not be allowed to block
> - UDFs and UDAs
> We are further limited by API compatibility restrictions in the 3.x line, 
> which forbid us from altering those extension points, or adding any 
> non-{{default}} interface methods to them, so these pose a problem.
> Depending on logistics, expecting to get this done in time for 3.4 or 3.6 
> feature release.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12051) JSON does not take functions

2016-07-07 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15366334#comment-15366334
 ] 

Tyler Hobbs commented on CASSANDRA-12051:
-

Ah, good point.  I forgot that we accept the string literal {{now}} for 
timestamp values.  This isn't technically a function call (in CQL terms), so 
other function names will not work.

> JSON does not take functions
> 
>
> Key: CASSANDRA-12051
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12051
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tianshi Wang
>
> toTimestamp(now()) does not work in JSON format.
> {code}
> cqlsh:ops> create table test (
>... id int,
>... ts timestamp,
>... primary key(id)
>... );
> cqlsh:ops> insert into test (id, ts) values (1, toTimestamp(now()));
> cqlsh:ops> select * from test;
>  id | ts
> +-
>   1 | 2016-06-21 18:46:28.753000+
> (1 rows)
> cqlsh:ops> insert into test JSON '{"id":2,"ts":toTimestamp(now())}';
> InvalidRequest: code=2200 [Invalid query] message="Could not decode JSON 
> string as a map: org.codehaus.jackson.JsonParseException: Unrecognized token 
> 'toTimestamp': was expecting
>  at [Source: java.io.StringReader@2da0329d; line: 1, column: 25]. (String 
> was: {"id":2,"ts":toTimestamp(now())})"
> cqlsh:ops> insert into test JSON '{"id":2,"ts":"toTimestamp(now())"}';
> InvalidRequest: code=2200 [Invalid query] message="Error decoding JSON value 
> for ts: Unable to coerce 'toTimestamp(now())' to a formatted date (long)"
> {code}
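Since the JSON payload is decoded as plain JSON, CQL functions inside it are never evaluated, which is exactly the error shown above. A common client-side workaround (an assumption, not part of the ticket) is to compute the timestamp in the client and embed it as a literal string:

```python
# Workaround sketch: build the INSERT JSON payload with a client-side
# timestamp literal instead of a CQL function call.
import json
from datetime import datetime, timezone

def json_insert_payload(row_id):
    # An ISO-8601-style timestamp string; Cassandra's JSON decoder accepts
    # formatted date strings where it rejects function calls.
    now = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S%z")
    return json.dumps({"id": row_id, "ts": now})

payload = json_insert_payload(2)
# The payload would then be used as:
#   INSERT INTO test JSON '<payload>';
```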





[jira] [Assigned] (CASSANDRA-11424) Add support to "unset" JSON fields in prepared statements

2016-07-07 Thread Oded Peer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oded Peer reassigned CASSANDRA-11424:
-

Assignee: Oded Peer

> Add support to "unset" JSON fields in prepared statements
> -
>
> Key: CASSANDRA-11424
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11424
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Ralf Steppacher
>Assignee: Oded Peer
>
> CASSANDRA-7304 introduced the ability to distinguish between {{NULL}} and 
> {{UNSET}} prepared statement parameters.
> When inserting JSON objects it is not possible to benefit from this, as a 
> prepared statement only has one parameter, which is bound to the JSON object 
> as a whole. There is no way to control {{NULL}} vs {{UNSET}} behavior for 
> columns omitted from the JSON object.
> Please extend the work done in CASSANDRA-7304 to include JSON support.
> {color:grey}
> (My personal requirement is to be able to insert JSON objects with optional 
> fields without incurring the overhead of creating a tombstone for every column 
> not covered by the JSON object upon initial(!) insert.)
> {color}
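The behavior at stake can be modeled in a few lines. This is a toy model of the proposed semantics (an assumption for illustration, not driver code): today, columns missing from the JSON document are written as NULL and thus tombstoned, while the proposed DEFAULT UNSET would leave them untouched.

```python
# Toy model of INSERT ... JSON semantics with and without DEFAULT UNSET.
def apply_insert_json(existing, columns, doc, default_unset=False):
    """Return the row state after an INSERT ... JSON of `doc`."""
    row = dict(existing)
    for col in columns:
        if col in doc:
            row[col] = doc[col]      # a value (or explicit null) is written
        elif not default_unset:
            row[col] = None          # DEFAULT NULL: omitted column tombstoned
        # with default_unset, an omitted column keeps its previous value
    return row
```

With {{default_unset=True}}, a document omitting the optional column leaves its previous value in place instead of overwriting it with a tombstone.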





[jira] [Commented] (CASSANDRA-11424) Add support to "unset" JSON fields in prepared statements

2016-07-07 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15366327#comment-15366327
 ] 

Benjamin Lerer commented on CASSANDRA-11424:


+1 for {{DEFAULT UNSET/NULL}}


> Add support to "unset" JSON fields in prepared statements
> -
>
> Key: CASSANDRA-11424
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11424
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Ralf Steppacher
>
> CASSANDRA-7304 introduced the ability to distinguish between {{NULL}} and 
> {{UNSET}} prepared statement parameters.
> When inserting JSON objects it is not possible to benefit from this, as a 
> prepared statement only has one parameter, which is bound to the JSON object 
> as a whole. There is no way to control {{NULL}} vs {{UNSET}} behavior for 
> columns omitted from the JSON object.
> Please extend the work done in CASSANDRA-7304 to include JSON support.
> {color:grey}
> (My personal requirement is to be able to insert JSON objects with optional 
> fields without incurring the overhead of creating a tombstone for every column 
> not covered by the JSON object upon initial(!) insert.)
> {color}





[jira] [Commented] (CASSANDRA-12146) Use dedicated executor for sending JMX notifications

2016-07-07 Thread Chris Lohfink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15366325#comment-15366325
 ] 

Chris Lohfink commented on CASSANDRA-12146:
---

If we have a new thread pool, shouldn't we use a JMX-enabled one so it's 
monitored and shows up in tpstats?

> Use dedicated executor for sending JMX notifications
> 
>
> Key: CASSANDRA-12146
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12146
> Project: Cassandra
>  Issue Type: Bug
>  Components: Observability
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
> Fix For: 2.2.8, 3.0.9, 3.9
>
> Attachments: 12146-2.2.patch
>
>
> I'm currently looking into an issue with our repair process where we notice 
> a significant delay at the end of the repair task before nodetool actually 
> terminates. At the same time, JMX NOTIF_LOST errors are reported in nodetool 
> during most repair runs.
> Currently {{StorageService.repairAsync(keyspace, options)}} is called through 
> JMX, which will start a new thread executing RepairRunnable using the 
> provided options. StorageService itself implements 
> NotificationBroadcasterSupport and will send JMX progress notifications 
> emitted from RepairRunnable (or during bootstrap). If you take a closer look 
> at {{RepairRunnable}}, {{JMXProgressSupport}} and 
> {{StorageService/NotificationBroadcasterSupport.sendNotification}} you'll 
> notice that this all happens within the calling thread, i.e. RepairRunnable. 
> Given the lost notifications and all kinds of potential networking-related 
> issues, I'm not really comfortable having the repair coordinator thread 
> running in the JMX stack. Fortunately NotificationBroadcasterSupport accepts 
> a custom executor as a constructor argument. See attached patch.
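The pattern described above (and visible in the committed Java patch further down, where StorageService passes {{Executors.newSingleThreadExecutor()}} to {{NotificationBroadcasterSupport}}) boils down to handing the broadcaster a dedicated delivery thread. An analogous sketch in Python, with hypothetical names, for illustration:

```python
# Analogous sketch: notification delivery runs on a dedicated single-threaded
# executor, so slow or blocked listeners cannot stall the emitting thread
# (which stands in for the repair coordinator / RepairRunnable here).
from concurrent.futures import ThreadPoolExecutor

class Broadcaster:
    def __init__(self):
        self._listeners = []
        # dedicated executor for notification delivery
        self._executor = ThreadPoolExecutor(max_workers=1)

    def add_listener(self, fn):
        self._listeners.append(fn)

    def send_notification(self, event):
        # returns immediately; listeners run on the executor's thread
        return [self._executor.submit(fn, event) for fn in self._listeners]

    def shutdown(self):
        self._executor.shutdown(wait=True)
```

The emitting thread only enqueues work and returns, which is the property the patch wants for the repair coordinator.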





[jira] [Comment Edited] (CASSANDRA-12146) Use dedicated executor for sending JMX notifications

2016-07-07 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15366317#comment-15366317
 ] 

Yuki Morishita edited comment on CASSANDRA-12146 at 7/7/16 4:03 PM:


Thanks for the patch. Nice idea.

+1 and committed as {{f28409bb9730c0318c3243f9d0febbb05ec0c2dc}}.


was (Author: yukim):
Thanks for the patch. Nice idea.

+1 and committed as {f28409bb9730c0318c3243f9d0febbb05ec0c2dc}.

> Use dedicated executor for sending JMX notifications
> 
>
> Key: CASSANDRA-12146
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12146
> Project: Cassandra
>  Issue Type: Bug
>  Components: Observability
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
> Fix For: 2.2.8, 3.0.9, 3.9
>
> Attachments: 12146-2.2.patch
>
>
> I'm currently looking into an issue with our repair process where we notice 
> a significant delay at the end of the repair task before nodetool actually 
> terminates. At the same time, JMX NOTIF_LOST errors are reported in nodetool 
> during most repair runs.
> Currently {{StorageService.repairAsync(keyspace, options)}} is called through 
> JMX, which will start a new thread executing RepairRunnable using the 
> provided options. StorageService itself implements 
> NotificationBroadcasterSupport and will send JMX progress notifications 
> emitted from RepairRunnable (or during bootstrap). If you take a closer look 
> at {{RepairRunnable}}, {{JMXProgressSupport}} and 
> {{StorageService/NotificationBroadcasterSupport.sendNotification}} you'll 
> notice that this all happens within the calling thread, i.e. RepairRunnable. 
> Given the lost notifications and all kinds of potential networking-related 
> issues, I'm not really comfortable having the repair coordinator thread 
> running in the JMX stack. Fortunately NotificationBroadcasterSupport accepts 
> a custom executor as a constructor argument. See attached patch.





[jira] [Updated] (CASSANDRA-12146) Use dedicated executor for sending JMX notifications

2016-07-07 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-12146:
---
   Resolution: Fixed
Fix Version/s: (was: 3.0.x)
   (was: 2.2.x)
   (was: 3.x)
   3.9
   3.0.9
   2.2.8
   Status: Resolved  (was: Patch Available)

Thanks for the patch. Nice idea.

+1 and committed as {{f28409bb9730c0318c3243f9d0febbb05ec0c2dc}}.

> Use dedicated executor for sending JMX notifications
> 
>
> Key: CASSANDRA-12146
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12146
> Project: Cassandra
>  Issue Type: Bug
>  Components: Observability
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
> Fix For: 2.2.8, 3.0.9, 3.9
>
> Attachments: 12146-2.2.patch
>
>
> I'm currently looking into an issue with our repair process where we notice 
> a significant delay at the end of the repair task before nodetool actually 
> terminates. At the same time, JMX NOTIF_LOST errors are reported in nodetool 
> during most repair runs.
> Currently {{StorageService.repairAsync(keyspace, options)}} is called through 
> JMX, which will start a new thread executing RepairRunnable using the 
> provided options. StorageService itself implements 
> NotificationBroadcasterSupport and will send JMX progress notifications 
> emitted from RepairRunnable (or during bootstrap). If you take a closer look 
> at {{RepairRunnable}}, {{JMXProgressSupport}} and 
> {{StorageService/NotificationBroadcasterSupport.sendNotification}} you'll 
> notice that this all happens within the calling thread, i.e. RepairRunnable. 
> Given the lost notifications and all kinds of potential networking-related 
> issues, I'm not really comfortable having the repair coordinator thread 
> running in the JMX stack. Fortunately NotificationBroadcasterSupport accepts 
> a custom executor as a constructor argument. See attached patch.





[06/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-07-07 Thread yukim
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a227cc61
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a227cc61
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a227cc61

Branch: refs/heads/cassandra-3.0
Commit: a227cc61c501ff81d5dfeba3f6f9c2f214d19c30
Parents: 76e68e9 f28409b
Author: Yuki Morishita 
Authored: Thu Jul 7 11:00:31 2016 -0500
Committer: Yuki Morishita 
Committed: Thu Jul 7 11:00:31 2016 -0500

--
 CHANGES.txt   | 2 ++
 src/java/org/apache/cassandra/service/StorageService.java | 3 +++
 2 files changed, 5 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a227cc61/CHANGES.txt
--
diff --cc CHANGES.txt
index 20ed6e0,9fef5a2..0e483f1
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,28 -1,16 +1,30 @@@
 +3.0.9
 + * Fix reverse queries ignoring range tombstones (CASSANDRA-11733)
+ 2.2.8
+  * Use dedicated thread for JMX notifications (CASSANDRA-12146)
 - * NPE when trying to remove purgable tombstones from result (CASSANDRA-12143)
   * Improve streaming synchronization and fault tolerance (CASSANDRA-11414)
 + * Avoid potential race when rebuilding CFMetaData (CASSANDRA-12098)
 + * Avoid missing sstables when getting the canonical sstables 
(CASSANDRA-11996)
 + * Always select the live sstables when getting sstables in bounds 
(CASSANDRA-11944)
 + * Fix column ordering of results with static columns for Thrift requests in
 +   a mixed 2.x/3.x cluster, also fix potential non-resolved duplication of
 +   those static columns in query results (CASSANDRA-12123)
 + * Avoid digest mismatch with empty but static rows (CASSANDRA-12090)
 + * Fix EOF exception when altering column type (CASSANDRA-11820)
 +Merged from 2.2:
   * MemoryUtil.getShort() should return an unsigned short also for 
architectures not supporting unaligned memory accesses (CASSANDRA-11973)
  Merged from 2.1:
 - * Don't write shadowed range tombstone (CASSANDRA-12030)
 - * Improve digest calculation in the presence of overlapping tombstones 
(CASSANDRA-11349)
   * Fix filtering on clustering columns when 2i is used (CASSANDRA-11907)
 - * Account for partition deletions in tombstone histogram (CASSANDRA-12112)
  
  
 -2.2.7
 +3.0.8
 + * Fix potential race in schema during new table creation (CASSANDRA-12083)
 + * cqlsh: fix error handling in rare COPY FROM failure scenario 
(CASSANDRA-12070)
 + * Disable autocompaction during drain (CASSANDRA-11878)
 + * Add a metrics timer to MemtablePool and use it to track time spent blocked 
on memory in MemtableAllocator (CASSANDRA-11327)
 + * Fix upgrading schema with super columns with non-text subcomparators 
(CASSANDRA-12023)
 + * Add TimeWindowCompactionStrategy (CASSANDRA-9666)
 +Merged from 2.2:
   * Allow nodetool info to run with readonly JMX access (CASSANDRA-11755)
   * Validate bloom_filter_fp_chance against lowest supported
 value when the table is created (CASSANDRA-11920)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a227cc61/src/java/org/apache/cassandra/service/StorageService.java
--



[08/10] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.9

2016-07-07 Thread yukim
Merge branch 'cassandra-3.0' into cassandra-3.9


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8475f891
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8475f891
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8475f891

Branch: refs/heads/cassandra-3.9
Commit: 8475f891c7576e3816ac450178344a5232b72738
Parents: a006f57 a227cc6
Author: Yuki Morishita 
Authored: Thu Jul 7 11:00:37 2016 -0500
Committer: Yuki Morishita 
Committed: Thu Jul 7 11:00:37 2016 -0500

--
 CHANGES.txt   | 2 ++
 src/java/org/apache/cassandra/service/StorageService.java | 3 +++
 2 files changed, 5 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8475f891/CHANGES.txt
--
diff --cc CHANGES.txt
index 1d11149,0e483f1..34e7587
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,8 -1,7 +1,10 @@@
 -3.0.9
 +3.9
 + * Fix SASI PREFIX search in CONTAINS mode with partial terms 
(CASSANDRA-12073)
 + * Increase size of flushExecutor thread pool (CASSANDRA-12071)
 +Merged from 3.0:
   * Fix reverse queries ignoring range tombstones (CASSANDRA-11733)
+ 2.2.8
+  * Use dedicated thread for JMX notifications (CASSANDRA-12146)
   * Improve streaming synchronization and fault tolerance (CASSANDRA-11414)
   * Avoid potential race when rebuilding CFMetaData (CASSANDRA-12098)
   * Avoid missing sstables when getting the canonical sstables 
(CASSANDRA-11996)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8475f891/src/java/org/apache/cassandra/service/StorageService.java
--



[07/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-07-07 Thread yukim
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a227cc61
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a227cc61
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a227cc61

Branch: refs/heads/trunk
Commit: a227cc61c501ff81d5dfeba3f6f9c2f214d19c30
Parents: 76e68e9 f28409b
Author: Yuki Morishita 
Authored: Thu Jul 7 11:00:31 2016 -0500
Committer: Yuki Morishita 
Committed: Thu Jul 7 11:00:31 2016 -0500

--
 CHANGES.txt   | 2 ++
 src/java/org/apache/cassandra/service/StorageService.java | 3 +++
 2 files changed, 5 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a227cc61/CHANGES.txt
--
diff --cc CHANGES.txt
index 20ed6e0,9fef5a2..0e483f1
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,28 -1,16 +1,30 @@@
 +3.0.9
 + * Fix reverse queries ignoring range tombstones (CASSANDRA-11733)
+ 2.2.8
+  * Use dedicated thread for JMX notifications (CASSANDRA-12146)
 - * NPE when trying to remove purgable tombstones from result (CASSANDRA-12143)
   * Improve streaming synchronization and fault tolerance (CASSANDRA-11414)
 + * Avoid potential race when rebuilding CFMetaData (CASSANDRA-12098)
 + * Avoid missing sstables when getting the canonical sstables 
(CASSANDRA-11996)
 + * Always select the live sstables when getting sstables in bounds 
(CASSANDRA-11944)
 + * Fix column ordering of results with static columns for Thrift requests in
 +   a mixed 2.x/3.x cluster, also fix potential non-resolved duplication of
 +   those static columns in query results (CASSANDRA-12123)
 + * Avoid digest mismatch with empty but static rows (CASSANDRA-12090)
 + * Fix EOF exception when altering column type (CASSANDRA-11820)
 +Merged from 2.2:
   * MemoryUtil.getShort() should return an unsigned short also for 
architectures not supporting unaligned memory accesses (CASSANDRA-11973)
  Merged from 2.1:
 - * Don't write shadowed range tombstone (CASSANDRA-12030)
 - * Improve digest calculation in the presence of overlapping tombstones 
(CASSANDRA-11349)
   * Fix filtering on clustering columns when 2i is used (CASSANDRA-11907)
 - * Account for partition deletions in tombstone histogram (CASSANDRA-12112)
  
  
 -2.2.7
 +3.0.8
 + * Fix potential race in schema during new table creation (CASSANDRA-12083)
 + * cqlsh: fix error handling in rare COPY FROM failure scenario 
(CASSANDRA-12070)
 + * Disable autocompaction during drain (CASSANDRA-11878)
 + * Add a metrics timer to MemtablePool and use it to track time spent blocked 
on memory in MemtableAllocator (CASSANDRA-11327)
 + * Fix upgrading schema with super columns with non-text subcomparators 
(CASSANDRA-12023)
 + * Add TimeWindowCompactionStrategy (CASSANDRA-9666)
 +Merged from 2.2:
   * Allow nodetool info to run with readonly JMX access (CASSANDRA-11755)
   * Validate bloom_filter_fp_chance against lowest supported
 value when the table is created (CASSANDRA-11920)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a227cc61/src/java/org/apache/cassandra/service/StorageService.java
--



[05/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-07-07 Thread yukim
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a227cc61
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a227cc61
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a227cc61

Branch: refs/heads/cassandra-3.9
Commit: a227cc61c501ff81d5dfeba3f6f9c2f214d19c30
Parents: 76e68e9 f28409b
Author: Yuki Morishita 
Authored: Thu Jul 7 11:00:31 2016 -0500
Committer: Yuki Morishita 
Committed: Thu Jul 7 11:00:31 2016 -0500

--
 CHANGES.txt   | 2 ++
 src/java/org/apache/cassandra/service/StorageService.java | 3 +++
 2 files changed, 5 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a227cc61/CHANGES.txt
--
diff --cc CHANGES.txt
index 20ed6e0,9fef5a2..0e483f1
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,28 -1,16 +1,30 @@@
 +3.0.9
 + * Fix reverse queries ignoring range tombstones (CASSANDRA-11733)
+ 2.2.8
+  * Use dedicated thread for JMX notifications (CASSANDRA-12146)
 - * NPE when trying to remove purgable tombstones from result (CASSANDRA-12143)
   * Improve streaming synchronization and fault tolerance (CASSANDRA-11414)
 + * Avoid potential race when rebuilding CFMetaData (CASSANDRA-12098)
 + * Avoid missing sstables when getting the canonical sstables 
(CASSANDRA-11996)
 + * Always select the live sstables when getting sstables in bounds 
(CASSANDRA-11944)
 + * Fix column ordering of results with static columns for Thrift requests in
 +   a mixed 2.x/3.x cluster, also fix potential non-resolved duplication of
 +   those static columns in query results (CASSANDRA-12123)
 + * Avoid digest mismatch with empty but static rows (CASSANDRA-12090)
 + * Fix EOF exception when altering column type (CASSANDRA-11820)
 +Merged from 2.2:
   * MemoryUtil.getShort() should return an unsigned short also for 
architectures not supporting unaligned memory accesses (CASSANDRA-11973)
  Merged from 2.1:
 - * Don't write shadowed range tombstone (CASSANDRA-12030)
 - * Improve digest calculation in the presence of overlapping tombstones 
(CASSANDRA-11349)
   * Fix filtering on clustering columns when 2i is used (CASSANDRA-11907)
 - * Account for partition deletions in tombstone histogram (CASSANDRA-12112)
  
  
 -2.2.7
 +3.0.8
 + * Fix potential race in schema during new table creation (CASSANDRA-12083)
 + * cqlsh: fix error handling in rare COPY FROM failure scenario 
(CASSANDRA-12070)
 + * Disable autocompaction during drain (CASSANDRA-11878)
 + * Add a metrics timer to MemtablePool and use it to track time spent blocked 
on memory in MemtableAllocator (CASSANDRA-11327)
 + * Fix upgrading schema with super columns with non-text subcomparators 
(CASSANDRA-12023)
 + * Add TimeWindowCompactionStrategy (CASSANDRA-9666)
 +Merged from 2.2:
   * Allow nodetool info to run with readonly JMX access (CASSANDRA-11755)
   * Validate bloom_filter_fp_chance against lowest supported
 value when the table is created (CASSANDRA-11920)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a227cc61/src/java/org/apache/cassandra/service/StorageService.java
--



[10/10] cassandra git commit: Merge branch 'cassandra-3.9' into trunk

2016-07-07 Thread yukim
Merge branch 'cassandra-3.9' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/27d6d19a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/27d6d19a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/27d6d19a

Branch: refs/heads/trunk
Commit: 27d6d19a95fa3ef75f838ef4855106f1c426e83d
Parents: 3016dc7 8475f89
Author: Yuki Morishita 
Authored: Thu Jul 7 11:00:43 2016 -0500
Committer: Yuki Morishita 
Committed: Thu Jul 7 11:00:43 2016 -0500

--
 CHANGES.txt   | 2 ++
 src/java/org/apache/cassandra/service/StorageService.java | 3 +++
 2 files changed, 5 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/27d6d19a/CHANGES.txt
--



[03/10] cassandra git commit: Use dedicated thread for sending JMX notifications

2016-07-07 Thread yukim
Use dedicated thread for sending JMX notifications

patch by Stefan Podkowinski; reviewed by yukim for CASSANDRA-12146


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f28409bb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f28409bb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f28409bb

Branch: refs/heads/cassandra-3.9
Commit: f28409bb9730c0318c3243f9d0febbb05ec0c2dc
Parents: ef18a17
Author: Stefan Podkowinski 
Authored: Wed Jul 6 16:58:47 2016 +0200
Committer: Yuki Morishita 
Committed: Thu Jul 7 10:59:44 2016 -0500

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/service/StorageService.java | 3 +++
 2 files changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f28409bb/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index e10af6f..9fef5a2 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.8
+ * Use dedicated thread for JMX notifications (CASSANDRA-12146)
  * NPE when trying to remove purgable tombstones from result (CASSANDRA-12143)
  * Improve streaming synchronization and fault tolerance (CASSANDRA-11414)
  * MemoryUtil.getShort() should return an unsigned short also for 
architectures not supporting unaligned memory accesses (CASSANDRA-11973)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f28409bb/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index a877074..fa04595 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -210,6 +210,9 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 
 public StorageService()
 {
+// use dedicated executor for sending JMX notifications
+super(Executors.newSingleThreadExecutor());
+
 MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
 try
 {



[02/10] cassandra git commit: Use dedicated thread for sending JMX notifications

2016-07-07 Thread yukim
Use dedicated thread for sending JMX notifications

patch by Stefan Podkowinski; reviewed by yukim for CASSANDRA-12146


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f28409bb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f28409bb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f28409bb

Branch: refs/heads/cassandra-3.0
Commit: f28409bb9730c0318c3243f9d0febbb05ec0c2dc
Parents: ef18a17
Author: Stefan Podkowinski 
Authored: Wed Jul 6 16:58:47 2016 +0200
Committer: Yuki Morishita 
Committed: Thu Jul 7 10:59:44 2016 -0500

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/service/StorageService.java | 3 +++
 2 files changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f28409bb/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index e10af6f..9fef5a2 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.8
+ * Use dedicated thread for JMX notifications (CASSANDRA-12146)
  * NPE when trying to remove purgable tombstones from result (CASSANDRA-12143)
  * Improve streaming synchronization and fault tolerance (CASSANDRA-11414)
  * MemoryUtil.getShort() should return an unsigned short also for 
architectures not supporting unaligned memory accesses (CASSANDRA-11973)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f28409bb/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index a877074..fa04595 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -210,6 +210,9 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 
 public StorageService()
 {
+// use dedicated executor for sending JMX notifications
+super(Executors.newSingleThreadExecutor());
+
 MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
 try
 {



[04/10] cassandra git commit: Use dedicated thread for sending JMX notifications

2016-07-07 Thread yukim
Use dedicated thread for sending JMX notifications

patch by Stefan Podkowinski; reviewed by yukim for CASSANDRA-12146


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f28409bb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f28409bb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f28409bb

Branch: refs/heads/trunk
Commit: f28409bb9730c0318c3243f9d0febbb05ec0c2dc
Parents: ef18a17
Author: Stefan Podkowinski 
Authored: Wed Jul 6 16:58:47 2016 +0200
Committer: Yuki Morishita 
Committed: Thu Jul 7 10:59:44 2016 -0500

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/service/StorageService.java | 3 +++
 2 files changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f28409bb/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index e10af6f..9fef5a2 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.8
+ * Use dedicated thread for JMX notifications (CASSANDRA-12146)
  * NPE when trying to remove purgable tombstones from result (CASSANDRA-12143)
  * Improve streaming synchronization and fault tolerance (CASSANDRA-11414)
  * MemoryUtil.getShort() should return an unsigned short also for 
architectures not supporting unaligned memory accesses (CASSANDRA-11973)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f28409bb/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index a877074..fa04595 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -210,6 +210,9 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 
 public StorageService()
 {
+// use dedicated executor for sending JMX notifications
+super(Executors.newSingleThreadExecutor());
+
 MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
 try
 {



[01/10] cassandra git commit: Use dedicated thread for sending JMX notifications

2016-07-07 Thread yukim
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 ef18a1768 -> f28409bb9
  refs/heads/cassandra-3.0 76e68e9b4 -> a227cc61c
  refs/heads/cassandra-3.9 a006f577b -> 8475f891c
  refs/heads/trunk 3016dc7c2 -> 27d6d19a9


Use dedicated thread for sending JMX notifications

patch by Stefan Podkowinski; reviewed by yukim for CASSANDRA-12146


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f28409bb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f28409bb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f28409bb

Branch: refs/heads/cassandra-2.2
Commit: f28409bb9730c0318c3243f9d0febbb05ec0c2dc
Parents: ef18a17
Author: Stefan Podkowinski 
Authored: Wed Jul 6 16:58:47 2016 +0200
Committer: Yuki Morishita 
Committed: Thu Jul 7 10:59:44 2016 -0500

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/service/StorageService.java | 3 +++
 2 files changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f28409bb/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index e10af6f..9fef5a2 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.8
+ * Use dedicated thread for JMX notifications (CASSANDRA-12146)
  * NPE when trying to remove purgable tombstones from result (CASSANDRA-12143)
  * Improve streaming synchronization and fault tolerance (CASSANDRA-11414)
  * MemoryUtil.getShort() should return an unsigned short also for 
architectures not supporting unaligned memory accesses (CASSANDRA-11973)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f28409bb/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index a877074..fa04595 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -210,6 +210,9 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 
 public StorageService()
 {
+// use dedicated executor for sending JMX notifications
+super(Executors.newSingleThreadExecutor());
+
 MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
 try
 {

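The patch above passes a dedicated single-thread executor to the {{NotificationBroadcasterSupport}} superclass so JMX notifications are dispatched off the caller's thread. A minimal standalone sketch of that mechanism (the class name {{JmxNotifyDemo}} and the demo notification are illustrative, not from the patch):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import javax.management.Notification;
import javax.management.NotificationBroadcasterSupport;

public class JmxNotifyDemo {
    public static void main(String[] args) throws Exception {
        // Same idea as the patch: hand NotificationBroadcasterSupport a
        // dedicated executor so listeners run on its thread, not the caller's.
        ExecutorService executor = Executors.newSingleThreadExecutor();
        NotificationBroadcasterSupport broadcaster =
                new NotificationBroadcasterSupport(executor);

        final Thread caller = Thread.currentThread();
        CountDownLatch delivered = new CountDownLatch(1);
        broadcaster.addNotificationListener((notification, handback) -> {
            // Runs on the executor's thread, so a slow listener cannot
            // block the thread that called sendNotification().
            System.out.println("dispatched off caller thread: "
                    + (Thread.currentThread() != caller));
            delivered.countDown();
        }, null, null);

        broadcaster.sendNotification(new Notification("demo.type", "source", 1L));
        delivered.await(5, TimeUnit.SECONDS);
        executor.shutdown();
    }
}
```

Running it prints a single line confirming the listener executed on the executor's thread rather than on main.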


[09/10] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.9

2016-07-07 Thread yukim
Merge branch 'cassandra-3.0' into cassandra-3.9


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8475f891
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8475f891
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8475f891

Branch: refs/heads/trunk
Commit: 8475f891c7576e3816ac450178344a5232b72738
Parents: a006f57 a227cc6
Author: Yuki Morishita 
Authored: Thu Jul 7 11:00:37 2016 -0500
Committer: Yuki Morishita 
Committed: Thu Jul 7 11:00:37 2016 -0500

--
 CHANGES.txt   | 2 ++
 src/java/org/apache/cassandra/service/StorageService.java | 3 +++
 2 files changed, 5 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8475f891/CHANGES.txt
--
diff --cc CHANGES.txt
index 1d11149,0e483f1..34e7587
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,8 -1,7 +1,10 @@@
 -3.0.9
 +3.9
 + * Fix SASI PREFIX search in CONTAINS mode with partial terms 
(CASSANDRA-12073)
 + * Increase size of flushExecutor thread pool (CASSANDRA-12071)
 +Merged from 3.0:
   * Fix reverse queries ignoring range tombstones (CASSANDRA-11733)
+ 2.2.8
+  * Use dedicated thread for JMX notifications (CASSANDRA-12146)
   * Improve streaming synchronization and fault tolerance (CASSANDRA-11414)
   * Avoid potential race when rebuilding CFMetaData (CASSANDRA-12098)
   * Avoid missing sstables when getting the canonical sstables 
(CASSANDRA-11996)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8475f891/src/java/org/apache/cassandra/service/StorageService.java
--



[jira] [Commented] (CASSANDRA-11424) Add support to "unset" JSON fields in prepared statements

2016-07-07 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15366315#comment-15366315
 ] 

Tyler Hobbs commented on CASSANDRA-11424:
-

I have not thought about this too much yet, but I do like the {{DEFAULT 
UNSET/NULL}} approach because it can be very explicit.  So, +1 on making that 
the solution.

> Add support to "unset" JSON fields in prepared statements
> -
>
> Key: CASSANDRA-11424
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11424
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Ralf Steppacher
>
> CASSANDRA-7304 introduced the ability to distinguish between {{NULL}} and 
> {{UNSET}} prepared statement parameters.
> When inserting JSON objects it is not possible to benefit from this, as a 
> prepared statement has only one parameter, which is bound to the JSON object 
> as a whole. There is no way to control {{NULL}} vs {{UNSET}} behavior for 
> columns omitted from the JSON object.
> Please extend CASSANDRA-7304 to include JSON support.
> {color:grey}
> (My personal requirement is to be able to insert JSON objects with optional 
> fields without incurring the overhead of creating a tombstone for every column 
> not covered by the JSON object upon initial(!) insert.)
> {color}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12133) Failed to load Java8 implementation ohc-core-j8

2016-07-07 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-12133:
-
Status: Patch Available  (was: Open)

> Failed to load Java8 implementation ohc-core-j8
> ---
>
> Key: CASSANDRA-12133
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12133
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Ubuntu 14.04, Java 1.8.0_91
>Reporter: Mike
>Assignee: Robert Stupp
>Priority: Trivial
> Fix For: 3.x
>
>
> After enabling row cache in cassandra.yaml by setting row_cache_size_in_mb, I 
> receive this warning in system.log during startup:
> {noformat}
> WARN  [main] 2016-07-05 13:36:14,671 Uns.java:169 - Failed to load Java8 
> implementation ohc-core-j8 : java.lang.NoSuchMethodException: 
> org.caffinitas.ohc.linked.UnsExt8.<init>(java.lang.Class)
> {noformat}
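The warning above comes from a reflective constructor lookup that fails. A generic sketch of how such a {{NoSuchMethodException}} arises (the nested class here is illustrative and stands in for OHC's internals, which are not shown in this report):

```java
public class ReflectDemo {
    // Stand-in for a class whose expected constructor is missing:
    // only a String constructor is declared.
    static class UnsExt {
        UnsExt(String s) { }
    }

    public static void main(String[] args) {
        try {
            // Look up a (Class) constructor that was never declared,
            // mirroring the failed lookup in the warning.
            UnsExt.class.getDeclaredConstructor(Class.class);
        } catch (NoSuchMethodException e) {
            System.out.println("NoSuchMethodException: " + e.getMessage());
        }
    }
}
```

The exception message names the missing constructor signature, just as in the log line above.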



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11424) Add support to "unset" JSON fields in prepared statements

2016-07-07 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15366055#comment-15366055
 ] 

Sylvain Lebresne commented on CASSANDRA-11424:
--

To be clear, I'm not really against the {{IGNORE_OMITTED}} approach, but I just 
want to explore all the options.

In fact, *if* our default had been to leave omitted columns unset, then I'd have 
insisted more on that column idea, since getting null for omitted values could 
then have been done with {{INSERT INTO t( *) JSON ...}}, which is kind of 
consistent. But as that's not the case and it's too late to change, a simple 
flag is probably the most pragmatic option.

That said, to bikeshed on syntax, we don't use underscores for keywords in CQL, 
and having it after the value reads a bit better imo, so:
{noformat}
INSERT INTO t JSON '{"k":"v"}' IGNORE OMITTED
{noformat}
In fact, to bikeshed even further, an alternative would be to call it 
{{DEFAULT UNSET}} (as in, by default, columns are unset), and to also support 
{{DEFAULT NULL}}, which would be the default, but which you could add if you 
like explicitness. I have a slight preference for that latter option, but that's 
arguably totally subjective.

Anyway, [~thobbs] might also have an opinion since he added the JSON support 
and so may have thought about this already.
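To make the two proposed semantics concrete: a toy sketch using plain maps (purely illustrative; {{DEFAULT UNSET}}/{{DEFAULT NULL}} is only a syntax proposal at this point, and this is not Cassandra's write path):

```java
import java.util.Map;
import java.util.TreeMap;

public class UnsetVsNullDemo {
    public static void main(String[] args) {
        // Existing row: a=0, b=2. Incoming JSON omits column "b": {"a": 1}
        Map<String, Integer> existing = new TreeMap<>();
        existing.put("a", 0);
        existing.put("b", 2);

        Map<String, Integer> json = new TreeMap<>();
        json.put("a", 1);

        // DEFAULT NULL (today's behavior): every column is written; an
        // omitted column is written as null, i.e. a tombstone.
        Map<String, Integer> defaultNull = new TreeMap<>(existing);
        for (String col : existing.keySet())
            defaultNull.put(col, json.get(col)); // get() is null for omitted "b"

        // DEFAULT UNSET (proposed): only columns present in the JSON are
        // written; omitted columns keep their previous value.
        Map<String, Integer> defaultUnset = new TreeMap<>(existing);
        defaultUnset.putAll(json);

        System.out.println("DEFAULT NULL  -> " + defaultNull);
        System.out.println("DEFAULT UNSET -> " + defaultUnset);
    }
}
```

The first map ends up with a null (tombstoned) value for the omitted column, while the second preserves the existing value, which is exactly the distinction the flag would expose.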

> Add support to "unset" JSON fields in prepared statements
> -
>
> Key: CASSANDRA-11424
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11424
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Ralf Steppacher
>
> CASSANDRA-7304 introduced the ability to distinguish between {{NULL}} and 
> {{UNSET}} prepared statement parameters.
> When inserting JSON objects it is not possible to benefit from this, as a 
> prepared statement has only one parameter, which is bound to the JSON object 
> as a whole. There is no way to control {{NULL}} vs {{UNSET}} behavior for 
> columns omitted from the JSON object.
> Please extend CASSANDRA-7304 to include JSON support.
> {color:grey}
> (My personal requirement is to be able to insert JSON objects with optional 
> fields without incurring the overhead of creating a tombstone for every column 
> not covered by the JSON object upon initial(!) insert.)
> {color}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11424) Add support to "unset" JSON fields in prepared statements

2016-07-07 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15366022#comment-15366022
 ] 

Benjamin Lerer commented on CASSANDRA-11424:


Personally, the {{IGNORE_OMITTED}} approach looks fine to me. It requires a new 
keyword, but the request is easily understandable.
That being said, I am probably not the best person for JSON-related questions.

> Add support to "unset" JSON fields in prepared statements
> -
>
> Key: CASSANDRA-11424
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11424
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Ralf Steppacher
>
> CASSANDRA-7304 introduced the ability to distinguish between {{NULL}} and 
> {{UNSET}} prepared statement parameters.
> When inserting JSON objects it is not possible to benefit from this, as a 
> prepared statement has only one parameter, which is bound to the JSON object 
> as a whole. There is no way to control {{NULL}} vs {{UNSET}} behavior for 
> columns omitted from the JSON object.
> Please extend CASSANDRA-7304 to include JSON support.
> {color:grey}
> (My personal requirement is to be able to insert JSON objects with optional 
> fields without incurring the overhead of creating a tombstone for every column 
> not covered by the JSON object upon initial(!) insert.)
> {color}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12127) Queries with empty ByteBuffer values in clustering column restrictions fail for non-composite compact tables

2016-07-07 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365999#comment-15365999
 ] 

Benjamin Lerer commented on CASSANDRA-12127:


Thanks for the patch. Unfortunately, it only covers the problem of single slice 
restrictions with an empty start bound, and the solution will not work if the 
table is sorted in descending order.

I should have a patch ready today. Writing the tests to cover (hopefully) all 
of the possible queries took much more time than I expected.
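For context on why these queries misbehave: an empty blob sorts before every non-empty value, which is why {{c > textAsBlob('')}} can match an entire partition while {{c < textAsBlob('')}} can never match anything. A minimal illustration using plain {{ByteBuffer}} comparison (this is ordinary lexicographic byte comparison, not Cassandra's actual comparator, which adds its own empty-value handling):

```java
import java.nio.ByteBuffer;

public class EmptyBoundDemo {
    public static void main(String[] args) {
        ByteBuffer empty = ByteBuffer.allocate(0);            // textAsBlob('')
        ByteBuffer one = ByteBuffer.wrap(new byte[] { '1' }); // textAsBlob('1')

        // Lexicographic byte comparison: the empty buffer precedes any
        // non-empty one, so nothing can ever sort below it.
        System.out.println("empty < '1': " + (empty.compareTo(one) < 0));
        System.out.println("'1' < empty: " + (one.compareTo(empty) < 0));
    }
}
```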
  

> Queries with empty ByteBuffer values in clustering column restrictions fail 
> for non-composite compact tables
> 
>
> Key: CASSANDRA-12127
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12127
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 2.1.x, 2.2.x
>
> Attachments: 12127.txt
>
>
> For the following table:
> {code}
> CREATE TABLE myTable (pk int,
>   c blob,
>   value int,
>   PRIMARY KEY (pk, c)) WITH COMPACT STORAGE;
> INSERT INTO myTable (pk, c, value) VALUES (1, textAsBlob('1'), 1);
> INSERT INTO myTable (pk, c, value) VALUES (1, textAsBlob('2'), 2);
> {code}
> The query: {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}}
> Will result in the following Exception:
> {code}
> java.lang.ClassCastException: 
> org.apache.cassandra.db.composites.Composites$EmptyComposite cannot be cast 
> to org.apache.cassandra.db.composites.CellName
>   at 
> org.apache.cassandra.db.composites.AbstractCellNameType.cellFromByteBuffer(AbstractCellNameType.java:188)
>   at 
> org.apache.cassandra.db.composites.AbstractSimpleCellNameType.makeCellName(AbstractSimpleCellNameType.java:125)
>   at 
> org.apache.cassandra.db.composites.AbstractCellNameType.makeCellName(AbstractCellNameType.java:254)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.makeExclusiveSliceBound(SelectStatement.java:1206)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.applySliceRestriction(SelectStatement.java:1214)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.processColumnFamily(SelectStatement.java:1292)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:1259)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:299)
>   [...]
> {code}
> The query: {{SELECT * FROM myTable WHERE pk = 1 AND c < textAsBlob('');}}
> Will return 2 rows instead of 0.
> The query: {{SELECT * FROM myTable WHERE pk = 1 AND c = textAsBlob('');}}
> {code}
> java.lang.AssertionError
>   at 
> org.apache.cassandra.db.composites.SimpleDenseCellNameType.create(SimpleDenseCellNameType.java:60)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.addSelectedColumns(SelectStatement.java:853)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.getRequestedColumns(SelectStatement.java:846)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.makeFilter(SelectStatement.java:583)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.getSliceCommands(SelectStatement.java:383)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.getPageableCommand(SelectStatement.java:253)
>   [...]
> {code}
> I checked 2.0: {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}} 
> works properly, but {{SELECT * FROM myTable WHERE pk = 1 AND c < 
> textAsBlob('');}} returns the same wrong results as in 2.1.
> The query {{SELECT * FROM myTable WHERE pk = 1 AND c = textAsBlob('');}} is 
> rejected with a clear error message: {{Invalid empty value for clustering 
> column of COMPACT TABLE}}.
> As it is not possible to insert an empty ByteBuffer value into the clustering 
> column of a non-composite compact table, those queries do not have much 
> meaning: {{SELECT * FROM myTable WHERE pk = 1 AND c < textAsBlob('');}} and 
> {{SELECT * FROM myTable WHERE pk = 1 AND c = textAsBlob('');}} will return 
> nothing, and {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}} 
> will return the entire partition (pk = 1).
> In my opinion, those queries should probably all be rejected; the fact that 
> {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}} was accepted 
> in {{2.0}} seems to have been due to a bug.
> I am of course open to discussion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11733) SSTableReversedIterator ignores range tombstones

2016-07-07 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11733:
-
   Resolution: Fixed
Fix Version/s: (was: 3.0.x)
   (was: 3.x)
   3.9
   3.0.9
   Status: Resolved  (was: Patch Available)

Committed, thanks.

> SSTableReversedIterator ignores range tombstones
> 
>
> Key: CASSANDRA-11733
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11733
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Dave Brosius
>Assignee: Sylvain Lebresne
> Fix For: 3.0.9, 3.9
>
> Attachments: remove_delete.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[6/9] cassandra git commit: Don't ignore deletion info in sstable on reverse queries

2016-07-07 Thread slebresne
Don't ignore deletion info in sstable on reverse queries

patch by Sylvain Lebresne; reviewed by Aleksey Yeschenko for CASSANDRA-11733


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/76e68e9b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/76e68e9b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/76e68e9b

Branch: refs/heads/cassandra-3.9
Commit: 76e68e9b49b1fbcb601633e6e2b8d8e1f71c7402
Parents: 30f5d44
Author: Sylvain Lebresne 
Authored: Thu Jun 30 15:13:24 2016 +0200
Committer: Sylvain Lebresne 
Committed: Thu Jul 7 12:54:52 2016 +0200

--
 CHANGES.txt |  1 +
 .../columniterator/SSTableReversedIterator.java |  2 +-
 .../cql3/validation/operations/DeleteTest.java  | 26 
 3 files changed, 28 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/76e68e9b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 8118de1..20ed6e0 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.9
+ * Fix reverse queries ignoring range tombstones (CASSANDRA-11733)
  * Improve streaming synchronization and fault tolerance (CASSANDRA-11414)
  * Avoid potential race when rebuilding CFMetaData (CASSANDRA-12098)
  * Avoid missing sstables when getting the canonical sstables (CASSANDRA-11996)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/76e68e9b/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
--
diff --git 
a/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java 
b/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
index 14cec36..3e49a3a 100644
--- 
a/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
+++ 
b/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
@@ -356,7 +356,7 @@ public class SSTableReversedIterator extends 
AbstractSSTableIterator
 {
 deletionInfo = deletionBuilder.build();
 built = new ImmutableBTreePartition(metadata, partitionKey, 
columns, Rows.EMPTY_STATIC_ROW, rowBuilder.build(),
-DeletionInfo.LIVE, 
EncodingStats.NO_STATS);
+deletionInfo, 
EncodingStats.NO_STATS);
 deletionBuilder = null;
 }
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/76e68e9b/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java 
b/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
index 76351ee..814e822 100644
--- a/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
+++ b/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
@@ -1057,4 +1057,30 @@ public class DeleteTest extends CQLTester
 if (forceFlush)
 flush();
 }
+
+@Test
+public void testDeleteAndReverseQueries() throws Throwable
+{
+// This test insert rows in one sstable and a range tombstone covering 
some of those rows in another, and it
+// validates we correctly get only the non-removed rows when doing 
reverse queries.
+
+createTable("CREATE TABLE %s (k text, i int, PRIMARY KEY (k, i))");
+
+for (int i = 0; i < 10; i++)
+execute("INSERT INTO %s(k, i) values (?, ?)", "a", i);
+
+flush();
+
+execute("DELETE FROM %s WHERE k = ? AND i >= ? AND i <= ?", "a", 2, 7);
+
+assertRows(execute("SELECT i FROM %s WHERE k = ? ORDER BY i DESC", 
"a"),
+row(9), row(8), row(1), row(0)
+);
+
+flush();
+
+assertRows(execute("SELECT i FROM %s WHERE k = ? ORDER BY i DESC", 
"a"),
+row(9), row(8), row(1), row(0)
+);
+}
 }



[4/9] cassandra git commit: Don't ignore deletion info in sstable on reverse queries

2016-07-07 Thread slebresne
Don't ignore deletion info in sstable on reverse queries

patch by Sylvain Lebresne; reviewed by Aleksey Yeschenko for CASSANDRA-11733


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/76e68e9b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/76e68e9b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/76e68e9b

Branch: refs/heads/cassandra-3.0
Commit: 76e68e9b49b1fbcb601633e6e2b8d8e1f71c7402
Parents: 30f5d44
Author: Sylvain Lebresne 
Authored: Thu Jun 30 15:13:24 2016 +0200
Committer: Sylvain Lebresne 
Committed: Thu Jul 7 12:54:52 2016 +0200

--
 CHANGES.txt |  1 +
 .../columniterator/SSTableReversedIterator.java |  2 +-
 .../cql3/validation/operations/DeleteTest.java  | 26 
 3 files changed, 28 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/76e68e9b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 8118de1..20ed6e0 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.9
+ * Fix reverse queries ignoring range tombstones (CASSANDRA-11733)
  * Improve streaming synchronization and fault tolerance (CASSANDRA-11414)
  * Avoid potential race when rebuilding CFMetaData (CASSANDRA-12098)
  * Avoid missing sstables when getting the canonical sstables (CASSANDRA-11996)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/76e68e9b/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
--
diff --git 
a/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java 
b/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
index 14cec36..3e49a3a 100644
--- 
a/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
+++ 
b/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
@@ -356,7 +356,7 @@ public class SSTableReversedIterator extends 
AbstractSSTableIterator
 {
 deletionInfo = deletionBuilder.build();
 built = new ImmutableBTreePartition(metadata, partitionKey, 
columns, Rows.EMPTY_STATIC_ROW, rowBuilder.build(),
-DeletionInfo.LIVE, 
EncodingStats.NO_STATS);
+deletionInfo, 
EncodingStats.NO_STATS);
 deletionBuilder = null;
 }
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/76e68e9b/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java 
b/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
index 76351ee..814e822 100644
--- a/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
+++ b/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
@@ -1057,4 +1057,30 @@ public class DeleteTest extends CQLTester
 if (forceFlush)
 flush();
 }
+
+@Test
+public void testDeleteAndReverseQueries() throws Throwable
+{
+// This test insert rows in one sstable and a range tombstone covering 
some of those rows in another, and it
+// validates we correctly get only the non-removed rows when doing 
reverse queries.
+
+createTable("CREATE TABLE %s (k text, i int, PRIMARY KEY (k, i))");
+
+for (int i = 0; i < 10; i++)
+execute("INSERT INTO %s(k, i) values (?, ?)", "a", i);
+
+flush();
+
+execute("DELETE FROM %s WHERE k = ? AND i >= ? AND i <= ?", "a", 2, 7);
+
+assertRows(execute("SELECT i FROM %s WHERE k = ? ORDER BY i DESC", 
"a"),
+row(9), row(8), row(1), row(0)
+);
+
+flush();
+
+assertRows(execute("SELECT i FROM %s WHERE k = ? ORDER BY i DESC", 
"a"),
+row(9), row(8), row(1), row(0)
+);
+}
 }



[9/9] cassandra git commit: Merge branch 'cassandra-3.9' into trunk

2016-07-07 Thread slebresne
Merge branch 'cassandra-3.9' into trunk

* cassandra-3.9:
  Don't ignore deletion info in sstable on reverse queries
  NPE when trying to remove purgable tombstones from result


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3016dc7c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3016dc7c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3016dc7c

Branch: refs/heads/trunk
Commit: 3016dc7c2f321c072dc11831be92a0331795ae89
Parents: 9fd6077 a006f57
Author: Sylvain Lebresne 
Authored: Thu Jul 7 13:02:53 2016 +0200
Committer: Sylvain Lebresne 
Committed: Thu Jul 7 13:02:53 2016 +0200

--
 CHANGES.txt |  1 +
 .../columniterator/SSTableReversedIterator.java |  2 +-
 .../cql3/validation/operations/DeleteTest.java  | 26 
 3 files changed, 28 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3016dc7c/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3016dc7c/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
--



[2/9] cassandra git commit: Merge commit 'ef18a17' into cassandra-3.0

2016-07-07 Thread slebresne
Merge commit 'ef18a17' into cassandra-3.0

* commit 'ef18a17':
  NPE when trying to remove purgable tombstones from result


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/30f5d44d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/30f5d44d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/30f5d44d

Branch: refs/heads/trunk
Commit: 30f5d44d8cc53726fc9a17b6df4928ccd23af977
Parents: 778f2a4 ef18a17
Author: Sylvain Lebresne 
Authored: Thu Jul 7 12:50:03 2016 +0200
Committer: Sylvain Lebresne 
Committed: Thu Jul 7 12:50:03 2016 +0200

--

--




[8/9] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.9

2016-07-07 Thread slebresne
Merge branch 'cassandra-3.0' into cassandra-3.9

* cassandra-3.0:
  Don't ignore deletion info in sstable on reverse queries


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a006f577
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a006f577
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a006f577

Branch: refs/heads/cassandra-3.9
Commit: a006f577bdba7c4b248ef9f4cbd02a6c35a03162
Parents: 376dae2 76e68e9
Author: Sylvain Lebresne 
Authored: Thu Jul 7 12:59:34 2016 +0200
Committer: Sylvain Lebresne 
Committed: Thu Jul 7 12:59:34 2016 +0200

--
 CHANGES.txt |  1 +
 .../columniterator/SSTableReversedIterator.java |  2 +-
 .../cql3/validation/operations/DeleteTest.java  | 26 
 3 files changed, 28 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a006f577/CHANGES.txt
--
diff --cc CHANGES.txt
index d459e34,20ed6e0..1d11149
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,7 -1,5 +1,8 @@@
 -3.0.9
 +3.9
 + * Fix SASI PREFIX search in CONTAINS mode with partial terms 
(CASSANDRA-12073)
 + * Increase size of flushExecutor thread pool (CASSANDRA-12071)
 +Merged from 3.0:
+  * Fix reverse queries ignoring range tombstones (CASSANDRA-11733)
   * Improve streaming synchronization and fault tolerance (CASSANDRA-11414)
   * Avoid potential race when rebuilding CFMetaData (CASSANDRA-12098)
   * Avoid missing sstables when getting the canonical sstables 
(CASSANDRA-11996)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a006f577/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a006f577/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
--
diff --cc 
test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
index 9ead942,814e822..9b92ebb
--- a/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
+++ b/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
@@@ -1105,4 -1051,36 +1105,30 @@@ public class DeleteTest extends CQLTest
  compact();
  assertRows(execute("SELECT * FROM %s"), row(0, null));
  }
+ 
 -private void flush(boolean forceFlush)
 -{
 -if (forceFlush)
 -flush();
 -}
 -
+ @Test
+ public void testDeleteAndReverseQueries() throws Throwable
+ {
+ // This test insert rows in one sstable and a range tombstone 
covering some of those rows in another, and it
+ // validates we correctly get only the non-removed rows when doing 
reverse queries.
+ 
+ createTable("CREATE TABLE %s (k text, i int, PRIMARY KEY (k, i))");
+ 
+ for (int i = 0; i < 10; i++)
+ execute("INSERT INTO %s(k, i) values (?, ?)", "a", i);
+ 
+ flush();
+ 
+ execute("DELETE FROM %s WHERE k = ? AND i >= ? AND i <= ?", "a", 2, 
7);
+ 
+ assertRows(execute("SELECT i FROM %s WHERE k = ? ORDER BY i DESC", 
"a"),
+ row(9), row(8), row(1), row(0)
+ );
+ 
+ flush();
+ 
+ assertRows(execute("SELECT i FROM %s WHERE k = ? ORDER BY i DESC", 
"a"),
+ row(9), row(8), row(1), row(0)
+ );
+ }
  }



[5/9] cassandra git commit: Don't ignore deletion info in sstable on reverse queries

2016-07-07 Thread slebresne
Don't ignore deletion info in sstable on reverse queries

patch by Sylvain Lebresne; reviewed by Aleksey Yeschenko for CASSANDRA-11733


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/76e68e9b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/76e68e9b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/76e68e9b

Branch: refs/heads/trunk
Commit: 76e68e9b49b1fbcb601633e6e2b8d8e1f71c7402
Parents: 30f5d44
Author: Sylvain Lebresne 
Authored: Thu Jun 30 15:13:24 2016 +0200
Committer: Sylvain Lebresne 
Committed: Thu Jul 7 12:54:52 2016 +0200

--
 CHANGES.txt |  1 +
 .../columniterator/SSTableReversedIterator.java |  2 +-
 .../cql3/validation/operations/DeleteTest.java  | 26 
 3 files changed, 28 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/76e68e9b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 8118de1..20ed6e0 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.9
+ * Fix reverse queries ignoring range tombstones (CASSANDRA-11733)
  * Improve streaming synchronization and fault tolerance (CASSANDRA-11414)
  * Avoid potential race when rebuilding CFMetaData (CASSANDRA-12098)
  * Avoid missing sstables when getting the canonical sstables (CASSANDRA-11996)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/76e68e9b/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
--
diff --git 
a/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java 
b/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
index 14cec36..3e49a3a 100644
--- 
a/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
+++ 
b/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
@@ -356,7 +356,7 @@ public class SSTableReversedIterator extends 
AbstractSSTableIterator
 {
 deletionInfo = deletionBuilder.build();
 built = new ImmutableBTreePartition(metadata, partitionKey, 
columns, Rows.EMPTY_STATIC_ROW, rowBuilder.build(),
-DeletionInfo.LIVE, 
EncodingStats.NO_STATS);
+deletionInfo, 
EncodingStats.NO_STATS);
 deletionBuilder = null;
 }
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/76e68e9b/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java 
b/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
index 76351ee..814e822 100644
--- a/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
+++ b/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
@@ -1057,4 +1057,30 @@ public class DeleteTest extends CQLTester
 if (forceFlush)
 flush();
 }
+
+@Test
+public void testDeleteAndReverseQueries() throws Throwable
+{
+// This test insert rows in one sstable and a range tombstone covering 
some of those rows in another, and it
+// validates we correctly get only the non-removed rows when doing 
reverse queries.
+
+createTable("CREATE TABLE %s (k text, i int, PRIMARY KEY (k, i))");
+
+for (int i = 0; i < 10; i++)
+execute("INSERT INTO %s(k, i) values (?, ?)", "a", i);
+
+flush();
+
+execute("DELETE FROM %s WHERE k = ? AND i >= ? AND i <= ?", "a", 2, 7);
+
+assertRows(execute("SELECT i FROM %s WHERE k = ? ORDER BY i DESC", 
"a"),
+row(9), row(8), row(1), row(0)
+);
+
+flush();
+
+assertRows(execute("SELECT i FROM %s WHERE k = ? ORDER BY i DESC", 
"a"),
+row(9), row(8), row(1), row(0)
+);
+}
 }



[1/9] cassandra git commit: NPE when trying to remove purgable tombstones from result

2016-07-07 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 30f5d44d8 -> 76e68e9b4
  refs/heads/cassandra-3.9 376dae268 -> a006f577b
  refs/heads/trunk 9fd607778 -> 3016dc7c2


NPE when trying to remove purgable tombstones from result

patch by mck; reviewed by Sylvain Lebresne for CASSANDRA-12143


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ef18a176
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ef18a176
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ef18a176

Branch: refs/heads/trunk
Commit: ef18a1768a6589eac212a7f320f9748ca6dc8371
Parents: 00e7ecf
Author: mck 
Authored: Thu Jul 7 11:17:40 2016 +0200
Committer: Sylvain Lebresne 
Committed: Thu Jul 7 12:49:12 2016 +0200

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |  3 +-
 .../cassandra/db/ColumnFamilyStoreTest.java | 50 
 3 files changed, 44 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ef18a176/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 7d62f97..e10af6f 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.8
+ * NPE when trying to remove purgable tombstones from result (CASSANDRA-12143)
  * Improve streaming synchronization and fault tolerance (CASSANDRA-11414)
  * MemoryUtil.getShort() should return an unsigned short also for 
architectures not supporting unaligned memory accesses (CASSANDRA-11973)
 Merged from 2.1:

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ef18a176/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index d86f941..ff63163 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -2347,7 +2347,8 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 }
 
 // remove purgable tombstones from result - see CASSANDRA-11427
-data.purgeTombstones(gcBefore(filter.timestamp));
+if (data != null)
+data.purgeTombstones(gcBefore(filter.timestamp));
 
 rows.add(new Row(rawRow.key, data));
 if (!ignoreTombstonedPartitions || 
!data.hasOnlyTombstones(filter.timestamp))

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ef18a176/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java 
b/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
index 5419ef5..2d67baf 100644
--- a/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
+++ b/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
@@ -66,6 +66,7 @@ import org.apache.cassandra.db.composites.CellNameType;
 import org.apache.cassandra.db.composites.CellNames;
 import org.apache.cassandra.db.composites.Composites;
 import org.apache.cassandra.db.filter.ColumnSlice;
+import org.apache.cassandra.db.filter.ExtendedFilter;
 import org.apache.cassandra.db.filter.IDiskAtomFilter;
 import org.apache.cassandra.db.filter.NamesQueryFilter;
 import org.apache.cassandra.db.filter.QueryFilter;
@@ -94,7 +95,6 @@ import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.cassandra.utils.FBUtilities;
 import org.apache.cassandra.utils.Pair;
 import org.apache.cassandra.utils.WrappedRunnable;
-import org.apache.thrift.TException;
 
 import static org.apache.cassandra.Util.cellname;
 import static org.apache.cassandra.Util.column;
@@ -246,6 +246,38 @@ public class ColumnFamilyStoreTest
 }
 
 @Test
+public void testFilterWithNullCF() throws Exception
+{
+Keyspace keyspace = Keyspace.open(KEYSPACE1);
+ColumnFamilyStore cfs = keyspace.getColumnFamilyStore(CF_STANDARD1);
+final Row row = new Row(Util.dk("key1"), null);
+
+ColumnFamilyStore.AbstractScanIterator iterator = new 
ColumnFamilyStore.AbstractScanIterator()
+{
+Iterator it = Collections.singletonList(row).iterator();
+
+protected Row computeNext()
+{
+return it.hasNext() ? it.next() : endOfData();
+}
+
+@Override
+public void close()
+{
+}
+};
+
+ExtendedFilter filter = ExtendedFilter.create(
+cfs,
+DataRange.allData(DatabaseDescriptor.getPartitioner()), null, 
1, true, System.currentTimeMillis());

[3/9] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.9

2016-07-07 Thread slebresne
Merge branch 'cassandra-3.0' into cassandra-3.9

* cassandra-3.0:
  NPE when trying to remove purgable tombstones from result


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/376dae26
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/376dae26
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/376dae26

Branch: refs/heads/trunk
Commit: 376dae26833591303cd3140001666f23aa216a11
Parents: 59ee46e 30f5d44
Author: Sylvain Lebresne 
Authored: Thu Jul 7 12:50:26 2016 +0200
Committer: Sylvain Lebresne 
Committed: Thu Jul 7 12:50:26 2016 +0200

--

--




[7/9] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.9

2016-07-07 Thread slebresne
Merge branch 'cassandra-3.0' into cassandra-3.9

* cassandra-3.0:
  Don't ignore deletion info in sstable on reverse queries


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a006f577
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a006f577
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a006f577

Branch: refs/heads/trunk
Commit: a006f577bdba7c4b248ef9f4cbd02a6c35a03162
Parents: 376dae2 76e68e9
Author: Sylvain Lebresne 
Authored: Thu Jul 7 12:59:34 2016 +0200
Committer: Sylvain Lebresne 
Committed: Thu Jul 7 12:59:34 2016 +0200

--
 CHANGES.txt |  1 +
 .../columniterator/SSTableReversedIterator.java |  2 +-
 .../cql3/validation/operations/DeleteTest.java  | 26 
 3 files changed, 28 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a006f577/CHANGES.txt
--
diff --cc CHANGES.txt
index d459e34,20ed6e0..1d11149
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,7 -1,5 +1,8 @@@
 -3.0.9
 +3.9
 + * Fix SASI PREFIX search in CONTAINS mode with partial terms 
(CASSANDRA-12073)
 + * Increase size of flushExecutor thread pool (CASSANDRA-12071)
 +Merged from 3.0:
+  * Fix reverse queries ignoring range tombstones (CASSANDRA-11733)
   * Improve streaming synchronization and fault tolerance (CASSANDRA-11414)
   * Avoid potential race when rebuilding CFMetaData (CASSANDRA-12098)
   * Avoid missing sstables when getting the canonical sstables 
(CASSANDRA-11996)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a006f577/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a006f577/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
--
diff --cc 
test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
index 9ead942,814e822..9b92ebb
--- a/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
+++ b/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
@@@ -1105,4 -1051,36 +1105,30 @@@ public class DeleteTest extends CQLTest
  compact();
  assertRows(execute("SELECT * FROM %s"), row(0, null));
  }
+ 
 -private void flush(boolean forceFlush)
 -{
 -if (forceFlush)
 -flush();
 -}
 -
+ @Test
+ public void testDeleteAndReverseQueries() throws Throwable
+ {
+ // This test inserts rows in one sstable and a range tombstone 
covering some of those rows in another, and it
+ // validates we correctly get only the non-removed rows when doing 
reverse queries.
+ 
+ createTable("CREATE TABLE %s (k text, i int, PRIMARY KEY (k, i))");
+ 
+ for (int i = 0; i < 10; i++)
+ execute("INSERT INTO %s(k, i) values (?, ?)", "a", i);
+ 
+ flush();
+ 
+ execute("DELETE FROM %s WHERE k = ? AND i >= ? AND i <= ?", "a", 2, 
7);
+ 
+ assertRows(execute("SELECT i FROM %s WHERE k = ? ORDER BY i DESC", 
"a"),
+ row(9), row(8), row(1), row(0)
+ );
+ 
+ flush();
+ 
+ assertRows(execute("SELECT i FROM %s WHERE k = ? ORDER BY i DESC", 
"a"),
+ row(9), row(8), row(1), row(0)
+ );
+ }
  }
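The expected result in testDeleteAndReverseQueries above can be checked standalone: rows 0..9 with the clustering range [2, 7] deleted should come back as 9, 8, 1, 0 in reverse order. The sketch below (illustrative names only, not Cassandra code) models the surviving-rows computation:

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ReverseRangeTombstoneSketch {
    // Simulate a range tombstone covering [start, end] over clustering
    // values 0..rows-1, then return the survivors in reverse order.
    static List<Integer> survivingReversed(int rows, int start, int end) {
        return IntStream.range(0, rows)
                .filter(i -> i < start || i > end)   // drop the deleted range
                .boxed()
                .sorted(Comparator.reverseOrder())   // reverse clustering order
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(survivingReversed(10, 2, 7)); // [9, 8, 1, 0]
    }
}
```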



[jira] [Updated] (CASSANDRA-12143) NPE when trying to remove purgable tombstones from result

2016-07-07 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-12143:
-
   Resolution: Fixed
Fix Version/s: (was: 2.2.x)
   2.2.8
   Status: Resolved  (was: Patch Available)

Committed, thanks.

> NPE when trying to remove purgable tombstones from result
> -
>
> Key: CASSANDRA-12143
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12143
> Project: Cassandra
>  Issue Type: Bug
>Reporter: mck
>Assignee: mck
>Priority: Critical
> Fix For: 2.2.8
>
> Attachments: 12143-2.2.txt
>
>
> A cluster running 2.2.6 started throwing NPEs.
> (500K exceptions were seen on one node.)
> {noformat}WARN  … AbstractLocalAwareExecutorService.java:169 - Uncaught 
> exception on thread Thread[SharedPool-Worker-5,5,main]: {}
> java.lang.NullPointerException: null{noformat}
> Bisecting this highlighted commit d3db33c008542c7044f3ed8c19f3a45679fcf52e as 
> the culprit, which was a fix for CASSANDRA-11427.
> This commit added a line to "remove purgable tombstones from result" but 
> failed to null-check the {{data}} variable first. This variable comes from 
> {{Row.cf}}, which is permitted to be null when the CFS has no data.
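The guard added by the patch can be illustrated in isolation. The sketch below uses simplified stand-in classes (all names here are illustrative, not Cassandra's actual types): a row's column family may legitimately be null, so the tombstone purge must be skipped rather than dereferencing it.

```java
import java.util.ArrayList;
import java.util.List;

public class PurgeGuardSketch {
    static class ColumnFamily {
        boolean purged = false;
        void purgeTombstones(long gcBefore) { purged = true; }
    }

    static class Row {
        final String key;
        final ColumnFamily cf;   // may legitimately be null (no data)
        Row(String key, ColumnFamily cf) { this.key = key; this.cf = cf; }
    }

    static List<Row> filter(List<Row> rawRows, long gcBefore) {
        List<Row> rows = new ArrayList<>();
        for (Row raw : rawRows) {
            ColumnFamily data = raw.cf;
            // The fix: guard against a null column family before purging,
            // instead of calling purgeTombstones unconditionally.
            if (data != null)
                data.purgeTombstones(gcBefore);
            rows.add(new Row(raw.key, data));
        }
        return rows;
    }

    public static void main(String[] args) {
        List<Row> input = new ArrayList<>();
        input.add(new Row("key1", null));            // previously threw the NPE
        input.add(new Row("key2", new ColumnFamily()));
        List<Row> out = filter(input, 0L);
        System.out.println(out.size() + " " + (out.get(0).cf == null)
                + " " + out.get(1).cf.purged);       // prints: 2 true true
    }
}
```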



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[3/6] cassandra git commit: NPE when trying to remove purgable tombstones from result

2016-07-07 Thread slebresne
NPE when trying to remove purgable tombstones from result

patch by mck; reviewed by Sylvain Lebresne for CASSANDRA-12143


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ef18a176
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ef18a176
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ef18a176

Branch: refs/heads/cassandra-3.9
Commit: ef18a1768a6589eac212a7f320f9748ca6dc8371
Parents: 00e7ecf
Author: mck 
Authored: Thu Jul 7 11:17:40 2016 +0200
Committer: Sylvain Lebresne 
Committed: Thu Jul 7 12:49:12 2016 +0200

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |  3 +-
 .../cassandra/db/ColumnFamilyStoreTest.java | 50 
 3 files changed, 44 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ef18a176/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 7d62f97..e10af6f 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.8
+ * NPE when trying to remove purgable tombstones from result (CASSANDRA-12143)
  * Improve streaming synchronization and fault tolerance (CASSANDRA-11414)
  * MemoryUtil.getShort() should return an unsigned short also for 
architectures not supporting unaligned memory accesses (CASSANDRA-11973)
 Merged from 2.1:

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ef18a176/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index d86f941..ff63163 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -2347,7 +2347,8 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 }
 
 // remove purgable tombstones from result - see CASSANDRA-11427
-data.purgeTombstones(gcBefore(filter.timestamp));
+if (data != null)
+data.purgeTombstones(gcBefore(filter.timestamp));
 
 rows.add(new Row(rawRow.key, data));
 if (!ignoreTombstonedPartitions || 
!data.hasOnlyTombstones(filter.timestamp))

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ef18a176/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java 
b/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
index 5419ef5..2d67baf 100644
--- a/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
+++ b/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
@@ -66,6 +66,7 @@ import org.apache.cassandra.db.composites.CellNameType;
 import org.apache.cassandra.db.composites.CellNames;
 import org.apache.cassandra.db.composites.Composites;
 import org.apache.cassandra.db.filter.ColumnSlice;
+import org.apache.cassandra.db.filter.ExtendedFilter;
 import org.apache.cassandra.db.filter.IDiskAtomFilter;
 import org.apache.cassandra.db.filter.NamesQueryFilter;
 import org.apache.cassandra.db.filter.QueryFilter;
@@ -94,7 +95,6 @@ import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.cassandra.utils.FBUtilities;
 import org.apache.cassandra.utils.Pair;
 import org.apache.cassandra.utils.WrappedRunnable;
-import org.apache.thrift.TException;
 
 import static org.apache.cassandra.Util.cellname;
 import static org.apache.cassandra.Util.column;
@@ -246,6 +246,38 @@ public class ColumnFamilyStoreTest
 }
 
 @Test
+public void testFilterWithNullCF() throws Exception
+{
+Keyspace keyspace = Keyspace.open(KEYSPACE1);
+ColumnFamilyStore cfs = keyspace.getColumnFamilyStore(CF_STANDARD1);
+final Row row = new Row(Util.dk("key1"), null);
+
+ColumnFamilyStore.AbstractScanIterator iterator = new 
ColumnFamilyStore.AbstractScanIterator()
+{
+Iterator it = Collections.singletonList(row).iterator();
+
+protected Row computeNext()
+{
+return it.hasNext() ? it.next() : endOfData();
+}
+
+@Override
+public void close()
+{
+}
+};
+
+ExtendedFilter filter = ExtendedFilter.create(
+cfs,
+DataRange.allData(DatabaseDescriptor.getPartitioner()), null, 
1, true, System.currentTimeMillis());
+
+List list = cfs.filter(iterator, filter);
+assert 1 == list.size();
+assert list.get(0).key == row.key;
+assert null == list.g

[6/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.9

2016-07-07 Thread slebresne
Merge branch 'cassandra-3.0' into cassandra-3.9

* cassandra-3.0:
  NPE when trying to remove purgable tombstones from result


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/376dae26
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/376dae26
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/376dae26

Branch: refs/heads/cassandra-3.9
Commit: 376dae26833591303cd3140001666f23aa216a11
Parents: 59ee46e 30f5d44
Author: Sylvain Lebresne 
Authored: Thu Jul 7 12:50:26 2016 +0200
Committer: Sylvain Lebresne 
Committed: Thu Jul 7 12:50:26 2016 +0200

--

--




[2/6] cassandra git commit: NPE when trying to remove purgable tombstones from result

2016-07-07 Thread slebresne
NPE when trying to remove purgable tombstones from result

patch by mck; reviewed by Sylvain Lebresne for CASSANDRA-12143


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ef18a176
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ef18a176
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ef18a176

Branch: refs/heads/cassandra-3.0
Commit: ef18a1768a6589eac212a7f320f9748ca6dc8371
Parents: 00e7ecf
Author: mck 
Authored: Thu Jul 7 11:17:40 2016 +0200
Committer: Sylvain Lebresne 
Committed: Thu Jul 7 12:49:12 2016 +0200

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |  3 +-
 .../cassandra/db/ColumnFamilyStoreTest.java | 50 
 3 files changed, 44 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ef18a176/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 7d62f97..e10af6f 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.8
+ * NPE when trying to remove purgable tombstones from result (CASSANDRA-12143)
  * Improve streaming synchronization and fault tolerance (CASSANDRA-11414)
  * MemoryUtil.getShort() should return an unsigned short also for 
architectures not supporting unaligned memory accesses (CASSANDRA-11973)
 Merged from 2.1:

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ef18a176/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index d86f941..ff63163 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -2347,7 +2347,8 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 }
 
 // remove purgable tombstones from result - see CASSANDRA-11427
-data.purgeTombstones(gcBefore(filter.timestamp));
+if (data != null)
+data.purgeTombstones(gcBefore(filter.timestamp));
 
 rows.add(new Row(rawRow.key, data));
 if (!ignoreTombstonedPartitions || 
!data.hasOnlyTombstones(filter.timestamp))

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ef18a176/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java 
b/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
index 5419ef5..2d67baf 100644
--- a/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
+++ b/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
@@ -66,6 +66,7 @@ import org.apache.cassandra.db.composites.CellNameType;
 import org.apache.cassandra.db.composites.CellNames;
 import org.apache.cassandra.db.composites.Composites;
 import org.apache.cassandra.db.filter.ColumnSlice;
+import org.apache.cassandra.db.filter.ExtendedFilter;
 import org.apache.cassandra.db.filter.IDiskAtomFilter;
 import org.apache.cassandra.db.filter.NamesQueryFilter;
 import org.apache.cassandra.db.filter.QueryFilter;
@@ -94,7 +95,6 @@ import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.cassandra.utils.FBUtilities;
 import org.apache.cassandra.utils.Pair;
 import org.apache.cassandra.utils.WrappedRunnable;
-import org.apache.thrift.TException;
 
 import static org.apache.cassandra.Util.cellname;
 import static org.apache.cassandra.Util.column;
@@ -246,6 +246,38 @@ public class ColumnFamilyStoreTest
 }
 
 @Test
+public void testFilterWithNullCF() throws Exception
+{
+Keyspace keyspace = Keyspace.open(KEYSPACE1);
+ColumnFamilyStore cfs = keyspace.getColumnFamilyStore(CF_STANDARD1);
+final Row row = new Row(Util.dk("key1"), null);
+
+ColumnFamilyStore.AbstractScanIterator iterator = new 
ColumnFamilyStore.AbstractScanIterator()
+{
+Iterator it = Collections.singletonList(row).iterator();
+
+protected Row computeNext()
+{
+return it.hasNext() ? it.next() : endOfData();
+}
+
+@Override
+public void close()
+{
+}
+};
+
+ExtendedFilter filter = ExtendedFilter.create(
+cfs,
+DataRange.allData(DatabaseDescriptor.getPartitioner()), null, 
1, true, System.currentTimeMillis());
+
+List list = cfs.filter(iterator, filter);
+assert 1 == list.size();
+assert list.get(0).key == row.key;
+assert null == list.g

[1/6] cassandra git commit: NPE when trying to remove purgable tombstones from result

2016-07-07 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 00e7ecf13 -> ef18a1768
  refs/heads/cassandra-3.0 778f2a46e -> 30f5d44d8
  refs/heads/cassandra-3.9 59ee46e55 -> 376dae268


NPE when trying to remove purgable tombstones from result

patch by mck; reviewed by Sylvain Lebresne for CASSANDRA-12143


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ef18a176
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ef18a176
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ef18a176

Branch: refs/heads/cassandra-2.2
Commit: ef18a1768a6589eac212a7f320f9748ca6dc8371
Parents: 00e7ecf
Author: mck 
Authored: Thu Jul 7 11:17:40 2016 +0200
Committer: Sylvain Lebresne 
Committed: Thu Jul 7 12:49:12 2016 +0200

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |  3 +-
 .../cassandra/db/ColumnFamilyStoreTest.java | 50 
 3 files changed, 44 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ef18a176/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 7d62f97..e10af6f 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.8
+ * NPE when trying to remove purgable tombstones from result (CASSANDRA-12143)
  * Improve streaming synchronization and fault tolerance (CASSANDRA-11414)
  * MemoryUtil.getShort() should return an unsigned short also for 
architectures not supporting unaligned memory accesses (CASSANDRA-11973)
 Merged from 2.1:

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ef18a176/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index d86f941..ff63163 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -2347,7 +2347,8 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 }
 
 // remove purgable tombstones from result - see CASSANDRA-11427
-data.purgeTombstones(gcBefore(filter.timestamp));
+if (data != null)
+data.purgeTombstones(gcBefore(filter.timestamp));
 
 rows.add(new Row(rawRow.key, data));
 if (!ignoreTombstonedPartitions || 
!data.hasOnlyTombstones(filter.timestamp))

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ef18a176/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java 
b/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
index 5419ef5..2d67baf 100644
--- a/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
+++ b/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
@@ -66,6 +66,7 @@ import org.apache.cassandra.db.composites.CellNameType;
 import org.apache.cassandra.db.composites.CellNames;
 import org.apache.cassandra.db.composites.Composites;
 import org.apache.cassandra.db.filter.ColumnSlice;
+import org.apache.cassandra.db.filter.ExtendedFilter;
 import org.apache.cassandra.db.filter.IDiskAtomFilter;
 import org.apache.cassandra.db.filter.NamesQueryFilter;
 import org.apache.cassandra.db.filter.QueryFilter;
@@ -94,7 +95,6 @@ import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.cassandra.utils.FBUtilities;
 import org.apache.cassandra.utils.Pair;
 import org.apache.cassandra.utils.WrappedRunnable;
-import org.apache.thrift.TException;
 
 import static org.apache.cassandra.Util.cellname;
 import static org.apache.cassandra.Util.column;
@@ -246,6 +246,38 @@ public class ColumnFamilyStoreTest
 }
 
 @Test
+public void testFilterWithNullCF() throws Exception
+{
+Keyspace keyspace = Keyspace.open(KEYSPACE1);
+ColumnFamilyStore cfs = keyspace.getColumnFamilyStore(CF_STANDARD1);
+final Row row = new Row(Util.dk("key1"), null);
+
+ColumnFamilyStore.AbstractScanIterator iterator = new 
ColumnFamilyStore.AbstractScanIterator()
+{
+Iterator it = Collections.singletonList(row).iterator();
+
+protected Row computeNext()
+{
+return it.hasNext() ? it.next() : endOfData();
+}
+
+@Override
+public void close()
+{
+}
+};
+
+ExtendedFilter filter = ExtendedFilter.create(
+cfs,
+DataRange.allData(DatabaseDescriptor.getPartitioner()), null, 
1, true, System.currentTimeMillis());

[5/6] cassandra git commit: Merge commit 'ef18a17' into cassandra-3.0

2016-07-07 Thread slebresne
Merge commit 'ef18a17' into cassandra-3.0

* commit 'ef18a17':
  NPE when trying to remove purgable tombstones from result


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/30f5d44d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/30f5d44d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/30f5d44d

Branch: refs/heads/cassandra-3.0
Commit: 30f5d44d8cc53726fc9a17b6df4928ccd23af977
Parents: 778f2a4 ef18a17
Author: Sylvain Lebresne 
Authored: Thu Jul 7 12:50:03 2016 +0200
Committer: Sylvain Lebresne 
Committed: Thu Jul 7 12:50:03 2016 +0200

--

--




[4/6] cassandra git commit: Merge commit 'ef18a17' into cassandra-3.0

2016-07-07 Thread slebresne
Merge commit 'ef18a17' into cassandra-3.0

* commit 'ef18a17':
  NPE when trying to remove purgable tombstones from result


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/30f5d44d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/30f5d44d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/30f5d44d

Branch: refs/heads/cassandra-3.9
Commit: 30f5d44d8cc53726fc9a17b6df4928ccd23af977
Parents: 778f2a4 ef18a17
Author: Sylvain Lebresne 
Authored: Thu Jul 7 12:50:03 2016 +0200
Committer: Sylvain Lebresne 
Committed: Thu Jul 7 12:50:03 2016 +0200

--

--




[jira] [Commented] (CASSANDRA-11733) SSTableReversedIterator ignores range tombstones

2016-07-07 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365924#comment-15365924
 ] 

Aleksey Yeschenko commented on CASSANDRA-11733:
---

+1

> SSTableReversedIterator ignores range tombstones
> 
>
> Key: CASSANDRA-11733
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11733
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Dave Brosius
>Assignee: Sylvain Lebresne
> Fix For: 3.0.x, 3.x
>
> Attachments: remove_delete.txt
>
>






[jira] [Updated] (CASSANDRA-11031) MultiTenant : support “ALLOW FILTERING" for Partition Key

2016-07-07 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11031:
-
Reviewer: Alex Petrov  (was: Sylvain Lebresne)

> MultiTenant : support “ALLOW FILTERING" for Partition Key
> -
>
> Key: CASSANDRA-11031
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11031
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: ZhaoYang
>Assignee: ZhaoYang
>Priority: Minor
> Fix For: 3.x
>
> Attachments: CASSANDRA-11031-3.7.patch
>
>
> Currently, ALLOW FILTERING only works for secondary index columns or 
> clustering columns. And it's slow, because Cassandra will read all data from 
> SSTables on disk into memory to filter.
> But we can support ALLOW FILTERING on the partition key: as far as I know, 
> partition keys are in memory, so we can easily filter them, and then read 
> only the required data from SSTables.
> This would be similar to a "SELECT * FROM table" that scans through the 
> entire cluster.
> CREATE TABLE multi_tenant_table (
>   tenant_id text,
>   pk2 text,
>   c1 text,
>   c2 text,
>   v1 text,
>   v2 text,
>   PRIMARY KEY ((tenant_id,pk2),c1,c2)
> ) ;
> Select * from multi_tenant_table where tenant_id = "datastax" allow filtering;
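The idea in the ticket is that partition keys are already available in memory, so a predicate on them can be applied before any SSTable data is read. A minimal sketch of that ordering (all names here are illustrative, not Cassandra's internals):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class PartitionKeyFilterSketch {
    static class PartitionKey {
        final String tenantId, pk2;
        PartitionKey(String tenantId, String pk2) {
            this.tenantId = tenantId;
            this.pk2 = pk2;
        }
    }

    // Filter the in-memory partition keys first; only the matching
    // partitions would then be read from SSTables on disk.
    static List<PartitionKey> filterByTenant(List<PartitionKey> keys, String tenant) {
        return keys.stream()
                   .filter(k -> k.tenantId.equals(tenant))
                   .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<PartitionKey> keys = Arrays.asList(
                new PartitionKey("datastax", "a"),
                new PartitionKey("other", "b"));
        System.out.println(filterByTenant(keys, "datastax").size()); // 1
    }
}
```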





[jira] [Commented] (CASSANDRA-8831) Create a system table to expose prepared statements

2016-07-07 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365918#comment-15365918
 ] 

Sylvain Lebresne commented on CASSANDRA-8831:
-

Haven't dug into why, but there are quite a few unit test failures that look 
abnormal (queries that should be invalid no longer are).

> Create a system table to expose prepared statements
> ---
>
> Key: CASSANDRA-8831
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8831
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Assignee: Robert Stupp
>  Labels: client-impacting, docs-impacting
> Fix For: 3.x
>
>
> Because drivers abstract from users the handling of up/down nodes, they have 
> to deal with the fact that when a node is restarted (or joins), it won't know 
> any prepared statement. Drivers could somewhat ignore that problem and wait 
> for a query to return an error (that the statement is unknown by the node) to 
> re-prepare the query on that node, but that is relatively inefficient because 
> every time a node comes back up, you'll get bad latency spikes due to some 
> queries first failing, then being re-prepared, and only then being executed. 
> So instead, drivers (at least the Java driver, but I believe others do as 
> well) pro-actively re-prepare statements when a node comes up. That solves 
> the latency problem, but currently every driver instance blindly re-prepares 
> all statements, meaning that in a large cluster with many clients there is a 
> lot of duplication of work (it would be enough for a single client to prepare 
> the statements) and a bigger than necessary load on the node that started.
> An idea to solve this is to have a (cheap) way for clients to check whether 
> some statements are prepared on the node. There are different options to 
> provide that, but what I'd suggest is to add a system table to expose the 
> (cached) prepared statements because:
> # it's reasonably straightforward to implement: we just add a line to the 
> table when a statement is prepared and remove it when it's evicted (we 
> already have eviction listeners, and truncating the table on startup is easy 
> enough). We can even switch it to a "virtual table" if/when CASSANDRA-7622 
> lands, but it's trivial to do with a normal table in the meantime.
> # it doesn't require a change to the protocol or anything like that. It 
> could even be done in 2.1 if we wish to.
> # exposing prepared statements feels like genuinely useful information to 
> have (outside of the problem exposed here, that is), if only for 
> debugging/educational purposes.
> The exposed table could look something like:
> {noformat}
> CREATE TABLE system.prepared_statements (
>keyspace_name text,
>table_name text,
>prepared_id blob,
>query_string text,
>PRIMARY KEY (keyspace_name, table_name, prepared_id)
> )
> {noformat}
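The bookkeeping the ticket proposes is small: add a row when a statement is prepared, remove it on eviction. A sketch of that hook pattern, with the table simulated as a map (all names hypothetical, not Cassandra's API):

```java
import java.util.HashMap;
import java.util.Map;

public class PreparedStatementsTableSketch {
    // Stand-in for system.prepared_statements, keyed by prepared_id.
    static final Map<String, String> systemTable = new HashMap<>();

    // Called when a statement enters the prepared-statement cache.
    static void onPrepared(String preparedId, String queryString) {
        systemTable.put(preparedId, queryString);
    }

    // Called by the cache's eviction listener.
    static void onEvicted(String preparedId) {
        systemTable.remove(preparedId);
    }

    public static void main(String[] args) {
        onPrepared("0x01", "SELECT * FROM ks.t WHERE k = ?");
        onPrepared("0x02", "INSERT INTO ks.t (k) VALUES (?)");
        onEvicted("0x01");
        // Only the still-cached statement remains visible to clients.
        System.out.println(systemTable.keySet()); // [0x02]
    }
}
```

A client restarting against this node could then scan the table and re-prepare only the statements that are missing, rather than blindly re-preparing everything.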





[jira] [Updated] (CASSANDRA-8831) Create a system table to expose prepared statements

2016-07-07 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-8831:

Status: Open  (was: Patch Available)

> Create a system table to expose prepared statements
> ---
>
> Key: CASSANDRA-8831
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8831
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Assignee: Robert Stupp
>  Labels: client-impacting, docs-impacting
> Fix For: 3.x
>
>
> Because drivers abstract from users the handling of up/down nodes, they have 
> to deal with the fact that when a node is restarted (or joins), it won't know 
> any prepared statement. Drivers could somewhat ignore that problem and wait 
> for a query to return an error (that the statement is unknown by the node) to 
> re-prepare the query on that node, but that is relatively inefficient because 
> every time a node comes back up, you'll get bad latency spikes due to some 
> queries first failing, then being re-prepared, and only then being executed. 
> So instead, drivers (at least the Java driver, but I believe others do as 
> well) pro-actively re-prepare statements when a node comes up. That solves 
> the latency problem, but currently every driver instance blindly re-prepares 
> all statements, meaning that in a large cluster with many clients there is a 
> lot of duplication of work (it would be enough for a single client to prepare 
> the statements) and a bigger than necessary load on the node that started.
> An idea to solve this is to have a (cheap) way for clients to check whether 
> some statements are prepared on the node. There are different options to 
> provide that, but what I'd suggest is to add a system table to expose the 
> (cached) prepared statements because:
> # it's reasonably straightforward to implement: we just add a line to the 
> table when a statement is prepared and remove it when it's evicted (we 
> already have eviction listeners, and truncating the table on startup is easy 
> enough). We can even switch it to a "virtual table" if/when CASSANDRA-7622 
> lands, but it's trivial to do with a normal table in the meantime.
> # it doesn't require a change to the protocol or anything like that. It 
> could even be done in 2.1 if we wish to.
> # exposing prepared statements feels like genuinely useful information to 
> have (outside of the problem exposed here, that is), if only for 
> debugging/educational purposes.
> The exposed table could look something like:
> {noformat}
> CREATE TABLE system.prepared_statements (
>keyspace_name text,
>table_name text,
>prepared_id blob,
>query_string text,
>PRIMARY KEY (keyspace_name, table_name, prepared_id)
> )
> {noformat}





[jira] [Updated] (CASSANDRA-11828) Commit log needs to track unflushed intervals rather than positions

2016-07-07 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11828:
-
Status: Open  (was: Patch Available)

> Commit log needs to track unflushed intervals rather than positions
> ---
>
> Key: CASSANDRA-11828
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11828
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Branimir Lambov
>Assignee: Branimir Lambov
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> In CASSANDRA-11448, in an effort to give more thorough handling of flush 
> errors, I introduced a possible correctness bug with the {{ignore}} disk 
> failure policy if a flush fails with an error:
> - we report the error but continue
> - we correctly do not update the commit log with the flush position
> - but we allow the post-flush executor to resume
> - a successful later flush can thus move the log's clear position beyond the 
> data from the failed flush
> - the log will then delete segment(s) that contain unflushed data.
> After CASSANDRA-9669 it is relatively easy to fix this problem by making the 
> commit log track sets of intervals of unflushed data (as described in 
> CASSANDRA-8496).
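The failure mode above can be illustrated with a minimal sketch. The classes and names below are hypothetical, invented for this example (plain integers stand in for commit-log offsets); the point is only the difference between tracking a single clear position and tracking a set of flushed intervals.

```python
class PositionTracker:
    """Buggy scheme: a single clear position; everything below it is
    assumed flushed, so its segments may be deleted."""

    def __init__(self):
        self.clear_position = 0

    def flush_succeeded(self, start, end):
        # A later successful flush moves the clear position forward,
        # silently covering any earlier *failed* flush below it.
        self.clear_position = max(self.clear_position, end)

    def is_flushed(self, pos):
        return pos < self.clear_position


class IntervalTracker:
    """Safer scheme: record each successfully flushed interval, so a
    failed flush leaves a hole that keeps its segments alive."""

    def __init__(self):
        self.flushed = []                 # list of (start, end) intervals

    def flush_succeeded(self, start, end):
        self.flushed.append((start, end))

    def is_flushed(self, pos):
        return any(s <= pos < e for s, e in self.flushed)


# Scenario from the description: the flush covering [0, 100) fails (the
# error is reported but the post-flush executor resumes), then a later
# flush of [100, 200) succeeds.
pos, ivl = PositionTracker(), IntervalTracker()
for tracker in (pos, ivl):
    tracker.flush_succeeded(100, 200)

# The position tracker now wrongly treats offset 50 as flushed, so the
# segment holding it could be deleted; the interval tracker does not.
```

Under the interval scheme, offset 50 falls in no flushed interval, so the segment containing the failed flush's data is retained rather than deleted.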



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11315) Upgrade from 2.2.6 to 3.0.5 Fails with AssertionError

2016-07-07 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11315:
-
Status: Ready to Commit  (was: Patch Available)

> Upgrade from 2.2.6 to 3.0.5 Fails with AssertionError
> -
>
> Key: CASSANDRA-11315
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11315
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
> Environment: Ubuntu 14.04, Oracle Java 8, Apache Cassandra 2.2.5 -> 
> 3.0.3, Apache Cassandra 2.2.6 -> 3.0.5
>Reporter: Dominik Keil
>Assignee: Aleksey Yeschenko
> Fix For: 3.0.x, 3.x
>
>
> Hi,
> when trying to upgrade our development cluster from C* 2.2.5 to 3.0.3 
> Cassandra fails during startup.
> Here's the relevant log snippet:
> {noformat}
> [...]
> INFO  [main] 2016-03-08 11:42:01,291 ColumnFamilyStore.java:381 - 
> Initializing system.schema_triggers
> INFO  [main] 2016-03-08 11:42:01,302 ColumnFamilyStore.java:381 - 
> Initializing system.schema_usertypes
> INFO  [main] 2016-03-08 11:42:01,313 ColumnFamilyStore.java:381 - 
> Initializing system.schema_functions
> INFO  [main] 2016-03-08 11:42:01,324 ColumnFamilyStore.java:381 - 
> Initializing system.schema_aggregates
> INFO  [main] 2016-03-08 11:42:01,576 SystemKeyspace.java:1284 - Detected 
> version upgrade from 2.2.5 to 3.0.3, snapshotting system keyspace
> WARN  [main] 2016-03-08 11:42:01,911 CompressionParams.java:382 - The 
> sstable_compression option has been deprecated. You should use class instead
> WARN  [main] 2016-03-08 11:42:01,959 CompressionParams.java:333 - The 
> chunk_length_kb option has been deprecated. You should use chunk_length_in_kb 
> instead
> ERROR [main] 2016-03-08 11:42:02,638 CassandraDaemon.java:692 - Exception 
> encountered during startup
> java.lang.AssertionError: null
> at 
> org.apache.cassandra.db.CompactTables.getCompactValueColumn(CompactTables.java:90)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.config.CFMetaData.rebuild(CFMetaData.java:315) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at org.apache.cassandra.config.CFMetaData.<init>(CFMetaData.java:291) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at org.apache.cassandra.config.CFMetaData.create(CFMetaData.java:367) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.decodeTableMetadata(LegacySchemaMigrator.java:337)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.readTableMetadata(LegacySchemaMigrator.java:273)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.readTable(LegacySchemaMigrator.java:244)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.lambda$readTables$227(LegacySchemaMigrator.java:237)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at java.util.ArrayList.forEach(ArrayList.java:1249) ~[na:1.8.0_74]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.readTables(LegacySchemaMigrator.java:237)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.readKeyspace(LegacySchemaMigrator.java:186)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.lambda$readSchema$224(LegacySchemaMigrator.java:177)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at java.util.ArrayList.forEach(ArrayList.java:1249) ~[na:1.8.0_74]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.readSchema(LegacySchemaMigrator.java:177)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.migrate(LegacySchemaMigrator.java:77)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:223) 
> [apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:551)
>  [apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:679) 
> [apache-cassandra-3.0.3.jar:3.0.3]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11828) Commit log needs to track unflushed intervals rather than positions

2016-07-07 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11828:
-
Status: Awaiting Feedback  (was: Open)

> Commit log needs to track unflushed intervals rather than positions
> ---
>
> Key: CASSANDRA-11828
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11828
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Branimir Lambov
>Assignee: Branimir Lambov
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> In CASSANDRA-11448, in an effort to give more thorough handling of flush 
> errors, I introduced a possible correctness bug with the {{ignore}} disk 
> failure policy if a flush fails with an error:
> - we report the error but continue
> - we correctly do not update the commit log with the flush position
> - but we allow the post-flush executor to resume
> - a successful later flush can thus move the log's clear position beyond the 
> data from the failed flush
> - the log will then delete segment(s) that contain unflushed data.
> After CASSANDRA-9669 it is relatively easy to fix this problem by making the 
> commit log track sets of intervals of unflushed data (as described in 
> CASSANDRA-8496).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11315) Upgrade from 2.2.6 to 3.0.5 Fails with AssertionError

2016-07-07 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365906#comment-15365906
 ] 

Sylvain Lebresne commented on CASSANDRA-11315:
--

+1, with the very minor nit that I'd rename 
{{s/filterOutRedundantRows/filterOutRedundantRowForSparse}} to make it clearer 
why it can throw away what it does without having to look at the usage.

> Upgrade from 2.2.6 to 3.0.5 Fails with AssertionError
> -
>
> Key: CASSANDRA-11315
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11315
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
> Environment: Ubuntu 14.04, Oracle Java 8, Apache Cassandra 2.2.5 -> 
> 3.0.3, Apache Cassandra 2.2.6 -> 3.0.5
>Reporter: Dominik Keil
>Assignee: Aleksey Yeschenko
> Fix For: 3.0.x, 3.x
>
>
> Hi,
> when trying to upgrade our development cluster from C* 2.2.5 to 3.0.3 
> Cassandra fails during startup.
> Here's the relevant log snippet:
> {noformat}
> [...]
> INFO  [main] 2016-03-08 11:42:01,291 ColumnFamilyStore.java:381 - 
> Initializing system.schema_triggers
> INFO  [main] 2016-03-08 11:42:01,302 ColumnFamilyStore.java:381 - 
> Initializing system.schema_usertypes
> INFO  [main] 2016-03-08 11:42:01,313 ColumnFamilyStore.java:381 - 
> Initializing system.schema_functions
> INFO  [main] 2016-03-08 11:42:01,324 ColumnFamilyStore.java:381 - 
> Initializing system.schema_aggregates
> INFO  [main] 2016-03-08 11:42:01,576 SystemKeyspace.java:1284 - Detected 
> version upgrade from 2.2.5 to 3.0.3, snapshotting system keyspace
> WARN  [main] 2016-03-08 11:42:01,911 CompressionParams.java:382 - The 
> sstable_compression option has been deprecated. You should use class instead
> WARN  [main] 2016-03-08 11:42:01,959 CompressionParams.java:333 - The 
> chunk_length_kb option has been deprecated. You should use chunk_length_in_kb 
> instead
> ERROR [main] 2016-03-08 11:42:02,638 CassandraDaemon.java:692 - Exception 
> encountered during startup
> java.lang.AssertionError: null
> at 
> org.apache.cassandra.db.CompactTables.getCompactValueColumn(CompactTables.java:90)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.config.CFMetaData.rebuild(CFMetaData.java:315) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at org.apache.cassandra.config.CFMetaData.<init>(CFMetaData.java:291) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at org.apache.cassandra.config.CFMetaData.create(CFMetaData.java:367) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.decodeTableMetadata(LegacySchemaMigrator.java:337)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.readTableMetadata(LegacySchemaMigrator.java:273)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.readTable(LegacySchemaMigrator.java:244)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.lambda$readTables$227(LegacySchemaMigrator.java:237)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at java.util.ArrayList.forEach(ArrayList.java:1249) ~[na:1.8.0_74]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.readTables(LegacySchemaMigrator.java:237)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.readKeyspace(LegacySchemaMigrator.java:186)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.lambda$readSchema$224(LegacySchemaMigrator.java:177)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at java.util.ArrayList.forEach(ArrayList.java:1249) ~[na:1.8.0_74]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.readSchema(LegacySchemaMigrator.java:177)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.migrate(LegacySchemaMigrator.java:77)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:223) 
> [apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:551)
>  [apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:679) 
> [apache-cassandra-3.0.3.jar:3.0.3]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12147) Static thrift tables with non UTF8Type comparators can have column names converted incorrectly

2016-07-07 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365884#comment-15365884
 ] 

Aleksey Yeschenko commented on CASSANDRA-12147:
---

We are working on a test for this and a bunch of related JIRAs.

> Static thrift tables with non UTF8Type comparators can have column names 
> converted incorrectly
> --
>
> Key: CASSANDRA-12147
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12147
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
> Fix For: 3.8, 3.0.x
>
>
> {{CompactTables::columnDefinitionComparator()}} has been broken since 
> CASSANDRA-8099 for non-super columnfamilies, if the comparator is not 
> {{UTF8Type}}. This results in being unable to read some pre-existing 2.x data 
> post upgrade (it's not lost, but becomes inaccessible).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

