[jira] [Updated] (CASSANDRA-12977) column expire to null can still be retrieved using not null value in where clause

2016-11-30 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-12977:
--
Reviewer:   (was: Aleksey Yeschenko)

> column expire to null can still be retrieved using not null value in where 
> clause
> -
>
> Key: CASSANDRA-12977
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12977
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: cql  5.0.1
> cassandra 2.1.5
>Reporter: ruilonghe1988
> Attachments: attatchment.txt, attatchment.txt
>
>
> 1. First, create the table:
> create table device_share(
> device_id text primary key,
> share_status text,
> share_expire boolean
> );
> CREATE INDEX expireIndex ON device_share (share_expire);
> create index statusIndex ON device_share (share_status);
> 2. Insert a new record:
> insert into device_share(device_id,share_status,share_expire) values 
> ('d1','ready',false);
> 3. Update the share_expire value to false with TTL 20:
> update device_share using ttl 20 set share_expire = false where device_id = 
> 'd1';
> 4. After 20 seconds, the record can still be retrieved with the condition 
> share_expire = false, but the console shows share_expire as null.
> cqlsh:test> select * from device_share where device_id ='d1' and 
> share_status='ready' and share_expire = false allow filtering;
>  device_id | share_expire | share_status
> -----------+--------------+--------------
>         d1 |         null |        ready
> Is this a bug?
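The behaviour reported above can be modelled in a few lines: a secondary index keeps its entry for the indexed value even after the base cell's TTL has elapsed, so a read that trusts the index without re-checking cell liveness still returns the row. This is an illustrative in-memory sketch, not Cassandra's read path; every class and method name below is hypothetical.

```java
import java.util.*;

public class StaleIndexSketch {
    static final class Cell {
        final boolean value;
        final long expiresAtMillis;   // Long.MAX_VALUE means "no TTL"

        Cell(boolean value, long expiresAtMillis) {
            this.value = value;
            this.expiresAtMillis = expiresAtMillis;
        }

        boolean isLive(long nowMillis) { return nowMillis < expiresAtMillis; }
    }

    // base table: device_id -> share_expire cell
    static final Map<String, Cell> base = new HashMap<>();
    // secondary index: indexed value -> device_ids; updated on write, never on expiry
    static final Map<Boolean, Set<String>> index = new HashMap<>();

    static void write(String key, boolean value, long expiresAtMillis) {
        base.put(key, new Cell(value, expiresAtMillis));
        index.computeIfAbsent(value, v -> new HashSet<>()).add(key);
    }

    // Buggy read: trusts the index entry without re-checking the base cell.
    static List<String> queryBuggy(boolean value) {
        return new ArrayList<>(index.getOrDefault(value, Set.of()));
    }

    // Correct read: drops matches whose base cell has already expired.
    static List<String> queryFiltered(boolean value, long nowMillis) {
        List<String> out = new ArrayList<>();
        for (String key : index.getOrDefault(value, Set.of()))
            if (base.get(key).isLive(nowMillis))
                out.add(key);
        return out;
    }

    public static void main(String[] args) {
        write("d1", false, 1_000L);                       // like "USING TTL 20"
        System.out.println(queryBuggy(false));            // [d1]: the behaviour reported above
        System.out.println(queryFiltered(false, 2_000L)); // []: what the reporter expected
    }
}
```

The point of the sketch is only that the stale match disappears once the read path filters on cell liveness, which is what the reporter expected the query to do.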



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12975) Exception (java.lang.RuntimeException) encountered during startup: org.codehaus.jackson.JsonParseException:

2016-11-30 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15710713#comment-15710713
 ] 

Aleksey Yeschenko commented on CASSANDRA-12975:
---

Not sure what you mean. Can you upgrade to 3.0 safely now?

> Exception (java.lang.RuntimeException) encountered during startup: 
> org.codehaus.jackson.JsonParseException:
> ---
>
> Key: CASSANDRA-12975
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12975
> Project: Cassandra
>  Issue Type: Bug
>Reporter: JianwenSun
>
> We upgraded our thrift tables from v1.2.4 --> v1.2.9 --> v2.0.9 --> v2.1.13 
> without any problems, but when upgrading to v3.9 something went wrong.
> Any help?
> [root@bj-dev-infra-001 cassandra]# apache-cassandra-3.9/bin/cassandra -R -f
> CompilerOracle: dontinline 
> org/apache/cassandra/db/Columns$Serializer.deserializeLargeSubset 
> (Lorg/apache/cassandra/io/util/DataInputPlus;Lorg/apache/cassandra/db/Columns;I)Lorg/apache/cassandra/db/Columns;
> CompilerOracle: dontinline 
> org/apache/cassandra/db/Columns$Serializer.serializeLargeSubset 
> (Ljava/util/Collection;ILorg/apache/cassandra/db/Columns;ILorg/apache/cassandra/io/util/DataOutputPlus;)V
> CompilerOracle: dontinline 
> org/apache/cassandra/db/Columns$Serializer.serializeLargeSubsetSize 
> (Ljava/util/Collection;ILorg/apache/cassandra/db/Columns;I)I
> CompilerOracle: dontinline 
> org/apache/cassandra/db/transform/BaseIterator.tryGetMoreContents ()Z
> CompilerOracle: dontinline 
> org/apache/cassandra/db/transform/StoppingTransformation.stop ()V
> CompilerOracle: dontinline 
> org/apache/cassandra/db/transform/StoppingTransformation.stopInPartition ()V
> CompilerOracle: dontinline 
> org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.doFlush (I)V
> CompilerOracle: dontinline 
> org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.writeExcessSlow ()V
> CompilerOracle: dontinline 
> org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.writeSlow (JI)V
> CompilerOracle: dontinline 
> org/apache/cassandra/io/util/RebufferingInputStream.readPrimitiveSlowly (I)J
> CompilerOracle: inline 
> org/apache/cassandra/db/rows/UnfilteredSerializer.serializeRowBody 
> (Lorg/apache/cassandra/db/rows/Row;ILorg/apache/cassandra/db/SerializationHeader;Lorg/apache/cassandra/io/util/DataOutputPlus;)V
> CompilerOracle: inline org/apache/cassandra/io/util/Memory.checkBounds (JJ)V
> CompilerOracle: inline org/apache/cassandra/io/util/SafeMemory.checkBounds 
> (JJ)V
> CompilerOracle: inline 
> org/apache/cassandra/utils/AsymmetricOrdering.selectBoundary 
> (Lorg/apache/cassandra/utils/AsymmetricOrdering/Op;II)I
> CompilerOracle: inline 
> org/apache/cassandra/utils/AsymmetricOrdering.strictnessOfLessThan 
> (Lorg/apache/cassandra/utils/AsymmetricOrdering/Op;)I
> CompilerOracle: inline org/apache/cassandra/utils/BloomFilter.indexes 
> (Lorg/apache/cassandra/utils/IFilter/FilterKey;)[J
> CompilerOracle: inline org/apache/cassandra/utils/BloomFilter.setIndexes 
> (JJIJ[J)V
> CompilerOracle: inline org/apache/cassandra/utils/ByteBufferUtil.compare 
> (Ljava/nio/ByteBuffer;[B)I
> CompilerOracle: inline org/apache/cassandra/utils/ByteBufferUtil.compare 
> ([BLjava/nio/ByteBuffer;)I
> CompilerOracle: inline 
> org/apache/cassandra/utils/ByteBufferUtil.compareUnsigned 
> (Ljava/nio/ByteBuffer;Ljava/nio/ByteBuffer;)I
> CompilerOracle: inline 
> org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo 
> (Ljava/lang/Object;JILjava/lang/Object;JI)I
> CompilerOracle: inline 
> org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo 
> (Ljava/lang/Object;JILjava/nio/ByteBuffer;)I
> CompilerOracle: inline 
> org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo 
> (Ljava/nio/ByteBuffer;Ljava/nio/ByteBuffer;)I
> CompilerOracle: inline org/apache/cassandra/utils/vint/VIntCoding.encodeVInt 
> (JI)[B
> INFO  06:05:28 Configuration location: 
> file:/usr/local/cassandra/apache-cassandra-3.9/conf/cassandra.yaml
> INFO  06:05:28 Node configuration:[allocate_tokens_for_keyspace=null; 
> authenticator=AllowAllAuthenticator; authorizer=AllowAllAuthorizer; 
> auto_bootstrap=true; auto_snapshot=true; batch_size_fail_threshold_in_kb=50; 
> batch_size_warn_threshold_in_kb=5; batchlog_replay_throttle_in_kb=1024; 
> broadcast_address=null; broadcast_rpc_address=null; 
> buffer_pool_use_heap_if_exhausted=true; cas_contention_timeout_in_ms=1000; 
> cdc_enabled=false; cdc_free_space_check_interval_ms=250; 
> cdc_raw_directory=/usr/local/cassandra/data/cdc_raw; 
> cdc_total_space_in_mb=null; client_encryption_options=; 
> cluster_name=TestCluster; column_index_cache_size_in_kb=2; 
> column_index_size_in_kb=64; commit_failure_policy=stop; 
> commitlog_compression=null; 
> 

[jira] [Updated] (CASSANDRA-12977) column expire to null can still be retrieved using not null value in where clause

2016-11-30 Thread ruilonghe1988 (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ruilonghe1988 updated CASSANDRA-12977:
--
Reviewer: Aleksey Yeschenko  (was: Carl Yeksigian)

> column expire to null can still be retrieved using not null value in where 
> clause
> -
>
> Key: CASSANDRA-12977
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12977
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: cql  5.0.1
> cassandra 2.1.5
>Reporter: ruilonghe1988
> Attachments: attatchment.txt, attatchment.txt





[jira] [Commented] (CASSANDRA-12975) Exception (java.lang.RuntimeException) encountered during startup: org.codehaus.jackson.JsonParseException:

2016-11-30 Thread JianwenSun (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15710578#comment-15710578
 ] 

JianwenSun commented on CASSANDRA-12975:


Here is the output:

{"keys":"ALL", "rows_per_partition":"NONE"}
 {"keys":"ALL", "rows_per_partition":"NONE"}
 {"keys":"ALL", "rows_per_partition":"NONE"}
 {"keys":"ALL", "rows_per_partition":"NONE"}
 {"keys":"ALL", "rows_per_partition":"NONE"}
 {"keys":"ALL", "rows_per_partition":"NONE"}
 {"keys":"ALL", "rows_per_partition":"NONE"}
 {"keys":"ALL", "rows_per_partition":"NONE"}
 {"keys":"ALL", "rows_per_partition":"NONE"}
 {"keys":"ALL", "rows_per_partition":"NONE"}
 {"keys":"ALL", "rows_per_partition":"NONE"}
 {"keys":"ALL", "rows_per_partition":"NONE"}
 {"keys":"ALL", "rows_per_partition":"NONE"}
 {"keys":"ALL", "rows_per_partition":"NONE"}
 {"keys":"ALL", "rows_per_partition":"NONE"}
 {"keys":"ALL", "rows_per_partition":"NONE"}
 {"keys":"ALL", "rows_per_partition":"NONE"}
 {"keys":"ALL", "rows_per_partition":"NONE"}
 {"keys":"ALL", "rows_per_partition":"NONE"}

I just started a clean server, without any changes, so that seems impossible.
Could it be affected by the configuration?


> Exception (java.lang.RuntimeException) encountered during startup: 
> org.codehaus.jackson.JsonParseException:
> ---
>
> Key: CASSANDRA-12975
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12975
> Project: Cassandra
>  Issue Type: Bug
>Reporter: JianwenSun
>

[jira] [Commented] (CASSANDRA-12975) Exception (java.lang.RuntimeException) encountered during startup: org.codehaus.jackson.JsonParseException:

2016-11-30 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15710409#comment-15710409
 ] 

Aleksey Yeschenko commented on CASSANDRA-12975:
---

Can you paste output of {{SELECT caching FROM system.schema_columnfamilies}} 
while still on 2.1? Thanks.

> Exception (java.lang.RuntimeException) encountered during startup: 
> org.codehaus.jackson.JsonParseException:
> ---
>
> Key: CASSANDRA-12975
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12975
> Project: Cassandra
>  Issue Type: Bug
>Reporter: JianwenSun
>
> 

[jira] [Commented] (CASSANDRA-12975) Exception (java.lang.RuntimeException) encountered during startup: org.codehaus.jackson.JsonParseException:

2016-11-30 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15710402#comment-15710402
 ] 

Aleksey Yeschenko commented on CASSANDRA-12975:
---

I think it's failing to migrate caching options from the pre-2.1 format. It seems 
like something went wrong while you were still on 2.1, as those options should be 
migrated automatically on startup by the {{SystemKeyspace.migrateCachingOptions()}} 
method, present in 2.1 since CASSANDRA-6745.
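For context, pre-2.1 schemas stored caching as a single keyword, while 2.1+ stores the JSON form pasted earlier in this thread; the migration mentioned above effectively performs a conversion like the following. The keyword-to-JSON mapping here is my reading of CASSANDRA-6745, not code copied from the Cassandra source.

```java
public class CachingMigrationSketch {
    // Convert a pre-2.1 caching keyword to the 2.1+ JSON form.
    static String migrate(String legacy) {
        switch (legacy.toUpperCase()) {
            case "ALL":       return json("ALL", "ALL");
            case "KEYS_ONLY": return json("ALL", "NONE");
            case "ROWS_ONLY": return json("NONE", "ALL");
            case "NONE":      return json("NONE", "NONE");
            default:          return legacy;  // already JSON; leave untouched
        }
    }

    static String json(String keys, String rowsPerPartition) {
        return String.format("{\"keys\":\"%s\", \"rows_per_partition\":\"%s\"}",
                             keys, rowsPerPartition);
    }

    public static void main(String[] args) {
        // Produces the value pasted earlier in this thread.
        System.out.println(migrate("KEYS_ONLY"));
        // {"keys":"ALL", "rows_per_partition":"NONE"}
    }
}
```

A row whose caching column still holds a bare keyword like KEYS_ONLY would make a JSON parser fail exactly as in the reported startup exception.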

> Exception (java.lang.RuntimeException) encountered during startup: 
> org.codehaus.jackson.JsonParseException:
> ---
>
> Key: CASSANDRA-12975
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12975
> Project: Cassandra
>  Issue Type: Bug
>Reporter: JianwenSun
>
> 

[jira] [Commented] (CASSANDRA-12203) AssertionError on compaction after upgrade (2.1.9 -> 3.7)

2016-11-30 Thread Simon Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15710374#comment-15710374
 ] 

Simon Zhou commented on CASSANDRA-12203:


Also saw exactly the same issue after upgrading from 2.2.5 to 3.0.10.

> AssertionError on compaction after upgrade (2.1.9 -> 3.7)
> -
>
> Key: CASSANDRA-12203
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12203
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Cassandra 3.7 (upgrade from 2.1.9)
> Java version "1.8.0_91"
> Ubuntu 14.04.4 LTS (GNU/Linux 3.13.0-83-generic x86_64)
>Reporter: Roman S. Borschel
> Fix For: 3.x
>
>
> After upgrading a Cassandra cluster from 2.1.9 to 3.7, one column family 
> (using SizeTieredCompaction) repeatedly and continuously failed compaction 
> (and thus also repair) across the cluster, with all nodes producing the 
> following errors in the logs:
> {noformat}
> 2016-07-14T09:29:47.96855 |srv=cassandra|ERROR: Exception in thread 
> Thread[CompactionExecutor:3,1,main]
> 2016-07-14T09:29:47.96858 |srv=cassandra|java.lang.AssertionError: null
> 2016-07-14T09:29:47.96859 |srv=cassandra|   at 
> org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer$TombstoneTracker.openNew(UnfilteredDeserializer.java:650)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96860 |srv=cassandra|   at 
> org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer$UnfilteredIterator.hasNext(UnfilteredDeserializer.java:423)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96860 |srv=cassandra|   at 
> org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer.hasNext(UnfilteredDeserializer.java:298)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96860 |srv=cassandra|   at 
> org.apache.cassandra.io.sstable.SSTableSimpleIterator$OldFormatIterator.readStaticRow(SSTableSimpleIterator.java:133)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96861 |srv=cassandra|   at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:57)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96861 |srv=cassandra|   at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator$1.initializeIterator(BigTableScanner.java:334)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96862 |srv=cassandra|   at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.maybeInit(LazilyInitializedUnfilteredRowIterator.java:48)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96862 |srv=cassandra|   at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.isReverseOrder(LazilyInitializedUnfilteredRowIterator.java:70)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96863 |srv=cassandra|   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$1.reduce(UnfilteredPartitionIterators.java:109)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96863 |srv=cassandra|   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$1.reduce(UnfilteredPartitionIterators.java:100)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96864 |srv=cassandra|   at 
> org.apache.cassandra.utils.MergeIterator$Candidate.consume(MergeIterator.java:408)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96864 |srv=cassandra|   at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:203)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96865 |srv=cassandra|   at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:156)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96865 |srv=cassandra|   at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96866 |srv=cassandra|   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$2.hasNext(UnfilteredPartitionIterators.java:150)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96866 |srv=cassandra|   at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:72)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96867 |srv=cassandra|   at 
> org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:226)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96867 |srv=cassandra|   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:182)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96867 |srv=cassandra|   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96868 |srv=cassandra|   at 
> 

[jira] [Commented] (CASSANDRA-12979) checkAvailableDiskSpace doesn't update expectedWriteSize when reducing thread scope

2016-11-30 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15710203#comment-15710203
 ] 

Sotirios Delimanolis commented on CASSANDRA-12979:
--

Slightly related: that {{RuntimeException}} seems to get ignored and eventually 
swallowed by the current executor.
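To illustrate the swallowing behaviour with a plain JDK executor (this is not Cassandra's executor): a task submitted via submit() that throws has its exception captured in the returned Future, and nothing is logged unless some caller eventually inspects that Future.

```java
import java.util.concurrent.*;

public class SwallowedExceptionSketch {
    // Submit a throwing task and surface its exception the only way
    // submit() allows: by inspecting the returned Future.
    static String runAndRecover() {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Future<?> task = pool.submit(() -> {
                throw new RuntimeException("Not enough space for compaction");
            });
            try {
                task.get();               // skip this call and the error vanishes silently
                return "no exception";
            } catch (ExecutionException e) {
                return e.getCause().getMessage();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return "interrupted";
            }
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(runAndRecover());  // Not enough space for compaction
    }
}
```

With execute() instead of submit(), the same exception would instead reach the thread's UncaughtExceptionHandler, which is why the two submission paths fail so differently.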

> checkAvailableDiskSpace doesn't update expectedWriteSize when reducing thread 
> scope
> ---
>
> Key: CASSANDRA-12979
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12979
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jon Haddad
>Assignee: Jon Haddad
>
> If a compaction occurs that looks like it'll take up more space than 
> remaining disk available, the compaction manager attempts to reduce the scope 
> of the compaction by calling {{reduceScopeForLimitedSpace()}} repeatedly.  
> Unfortunately, the while loop passes the {{estimatedWriteSize}} calculated 
> from the original call to {{hasAvailableDiskSpace}}, so the comparisons that 
> are done will always be against the size of the original compaction, rather 
> than the reduced scope one.
> Full method below:
> {code}
> protected void checkAvailableDiskSpace(long estimatedSSTables, long expectedWriteSize)
> {
>     if (!cfs.isCompactionDiskSpaceCheckEnabled() && compactionType == OperationType.COMPACTION)
>     {
>         logger.info("Compaction space check is disabled");
>         return;
>     }
>
>     while (!getDirectories().hasAvailableDiskSpace(estimatedSSTables, expectedWriteSize))
>     {
>         if (!reduceScopeForLimitedSpace())
>             throw new RuntimeException(String.format("Not enough space for compaction, estimated sstables = %d, expected write size = %d",
>                                                      estimatedSSTables, expectedWriteSize));
>     }
> }
> {code}
> I'm proposing to recalculate the {{estimatedSSTables}} and 
> {{expectedWriteSize}} after each iteration of {{reduceScopeForLimitedSpace}}. 
>  





cassandra-builds git commit: Fix dtest output location to archive

2016-11-30 Thread mshuler
Repository: cassandra-builds
Updated Branches:
  refs/heads/master 8202d6620 -> fc28d3e58


Fix dtest output location to archive


Project: http://git-wip-us.apache.org/repos/asf/cassandra-builds/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-builds/commit/fc28d3e5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-builds/tree/fc28d3e5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-builds/diff/fc28d3e5

Branch: refs/heads/master
Commit: fc28d3e586fa004787c4055050a6a7c1dd2369a9
Parents: 8202d66
Author: Michael Shuler 
Authored: Wed Nov 30 17:11:22 2016 -0600
Committer: Michael Shuler 
Committed: Wed Nov 30 17:11:22 2016 -0600

--
 jenkins-dsl/cassandra_job_dsl_seed.groovy | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-builds/blob/fc28d3e5/jenkins-dsl/cassandra_job_dsl_seed.groovy
--
diff --git a/jenkins-dsl/cassandra_job_dsl_seed.groovy 
b/jenkins-dsl/cassandra_job_dsl_seed.groovy
index ca6dc33..e4977a7 100644
--- a/jenkins-dsl/cassandra_job_dsl_seed.groovy
+++ b/jenkins-dsl/cassandra_job_dsl_seed.groovy
@@ -157,7 +157,7 @@ job('Cassandra-template-dtest') {
 shell("git clean -xdff ; git clone ${buildsRepo} ; git clone 
${dtestRepo}")
 }
 publishers {
-archiveArtifacts('cassandra-dtest/test_stdout.txt')
+archiveArtifacts('test_stdout.txt')
 junit {
 testResults('cassandra-dtest/nosetests.xml')
 testDataPublishers {



[jira] [Created] (CASSANDRA-12979) checkAvailableDiskSpace doesn't update expectedWriteSize when reducing thread scope

2016-11-30 Thread Jon Haddad (JIRA)
Jon Haddad created CASSANDRA-12979:
--

 Summary: checkAvailableDiskSpace doesn't update expectedWriteSize 
when reducing thread scope
 Key: CASSANDRA-12979
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12979
 Project: Cassandra
  Issue Type: Bug
Reporter: Jon Haddad
Assignee: Jon Haddad


If a compaction looks like it will take up more space than the disk has 
available, the compaction manager attempts to reduce the scope of the 
compaction by calling {{reduceScopeForLimitedSpace()}} repeatedly.

Unfortunately, the while loop keeps passing the {{expectedWriteSize}} that was 
calculated before the loop to {{hasAvailableDiskSpace}}, so the comparisons 
are always done against the size of the original compaction, rather than 
the reduced-scope one.

Full method below:

{code}
protected void checkAvailableDiskSpace(long estimatedSSTables, long expectedWriteSize)
{
    if (!cfs.isCompactionDiskSpaceCheckEnabled() && compactionType == OperationType.COMPACTION)
    {
        logger.info("Compaction space check is disabled");
        return;
    }

    while (!getDirectories().hasAvailableDiskSpace(estimatedSSTables, expectedWriteSize))
    {
        if (!reduceScopeForLimitedSpace())
            throw new RuntimeException(String.format("Not enough space for compaction, estimated sstables = %d, expected write size = %d",
                                                     estimatedSSTables, expectedWriteSize));
    }
}
{code}

I'm proposing to recalculate the {{estimatedSSTables}} and 
{{expectedWriteSize}} after each iteration of {{reduceScopeForLimitedSpace}}.  
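As a standalone illustration of the proposed fix (simplified stand-in classes, not Cassandra's actual {{CompactionTask}}/{{Directories}} API), the change amounts to re-deriving both estimates from the current set of compacting sstables on every pass through the loop:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Simplified stand-in for the compaction disk-space check: the estimate is
// recomputed from the *current* sstable set on every loop iteration, instead
// of reusing the value calculated once before the loop.
public class DiskSpaceCheckSketch {
    final List<Long> sstableSizes = new ArrayList<>(); // sizes of sstables still in scope

    DiskSpaceCheckSketch(List<Long> sizes) {
        sstableSizes.addAll(sizes);
    }

    long expectedWriteSize() { // re-derived on each call from the remaining scope
        return sstableSizes.stream().mapToLong(Long::longValue).sum();
    }

    boolean reduceScopeForLimitedSpace() { // drop the largest sstable from the compaction
        if (sstableSizes.size() <= 1)
            return false;
        sstableSizes.remove(sstableSizes.stream().max(Comparator.naturalOrder()).get());
        return true;
    }

    void checkAvailableDiskSpace(long availableBytes) {
        // The loop condition calls expectedWriteSize() afresh each time, so the
        // comparison tracks the reduced compaction, not the original one.
        while (expectedWriteSize() > availableBytes) {
            if (!reduceScopeForLimitedSpace())
                throw new RuntimeException("Not enough space for compaction, expected write size = "
                                           + expectedWriteSize());
        }
    }

    public static void main(String[] args) {
        DiskSpaceCheckSketch check = new DiskSpaceCheckSketch(List.of(100L, 50L, 30L));
        check.checkAvailableDiskSpace(90L); // drops the 100-byte sstable, then the rest fits
        System.out.println("remaining scope: " + check.sstableSizes);
    }
}
```

With the original code's pattern, the 180-byte total computed up front would be compared on every retry, and the scope reduction could never succeed even once the remaining sstables fit.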





cassandra-builds git commit: Add dtest jobs to DSL

2016-11-30 Thread mshuler
Repository: cassandra-builds
Updated Branches:
  refs/heads/master bdb23517c -> 8202d6620


Add dtest jobs to DSL


Project: http://git-wip-us.apache.org/repos/asf/cassandra-builds/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-builds/commit/8202d662
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-builds/tree/8202d662
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-builds/diff/8202d662

Branch: refs/heads/master
Commit: 8202d6620558e2bf54a6389c281c7a87bf3c15bb
Parents: bdb2351
Author: Michael Shuler 
Authored: Wed Nov 30 16:09:03 2016 -0600
Committer: Michael Shuler 
Committed: Wed Nov 30 16:09:03 2016 -0600

--
 jenkins-dsl/cassandra_job_dsl_seed.groovy | 41 --
 1 file changed, 19 insertions(+), 22 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-builds/blob/8202d662/jenkins-dsl/cassandra_job_dsl_seed.groovy
--
diff --git a/jenkins-dsl/cassandra_job_dsl_seed.groovy 
b/jenkins-dsl/cassandra_job_dsl_seed.groovy
index c099c3f..ca6dc33 100644
--- a/jenkins-dsl/cassandra_job_dsl_seed.groovy
+++ b/jenkins-dsl/cassandra_job_dsl_seed.groovy
@@ -9,6 +9,7 @@ def jdkLabel = 'jdk1.8.0_66-unlimited-security'
 def slaveLabel = 'cassandra'
 def mainRepo = 'https://git-wip-us.apache.org/repos/asf/cassandra.git'
 def buildsRepo = 'https://git.apache.org/cassandra-builds.git'
+def dtestRepo = 'https://github.com/riptano/cassandra-dtest.git'
 def buildDescStr = 'REF = ${GIT_BRANCH}  COMMIT = ${GIT_COMMIT}'
 // Cassandra active branches
 def cassandraBranches = ['cassandra-2.2', 'cassandra-3.0', 'cassandra-3.11', 
'cassandra-3.X', 'trunk']
@@ -153,11 +154,12 @@ job('Cassandra-template-dtest') {
 }
 steps {
 buildDescription('', buildDescStr)
-shell("git clean -xdff ; git clone ${buildsRepo}")
+shell("git clean -xdff ; git clone ${buildsRepo} ; git clone 
${dtestRepo}")
 }
 publishers {
+archiveArtifacts('cassandra-dtest/test_stdout.txt')
 junit {
-testResults('nosetests.xml')
+testResults('cassandra-dtest/nosetests.xml')
 testDataPublishers {
 stabilityTestDataPublisher()
 }
@@ -215,25 +217,20 @@ cassandraBranches.each {
 }
 }
 
-///**
-// * Main branch dtest variation jobs
-// */
-//dtestTargets.each {
-//def targetName = it
-//
-//job("${jobNamePrefix}-${targetName}") {
-////disabled(false)
-//using('Cassandra-template-dtest')
-//configure { node ->
-//node / scm / branches / 'hudson.plugins.git.BranchSpec' / 
name(branchName)
-//}
-//steps {
-//shell("./cassandra-builds/build-scripts/cassandra-dtest.sh 
${targetName}")
-//}
-//}
-//}
-
-
-
+/**
+ * Main branch dtest variation jobs
+ */
+// TODO: set up variations similar to unittest above, ie. novnodes - 
currently, this is a default dtest run for each branch
+job("${jobNamePrefix}-dtest") {
+disabled(false)
+using('Cassandra-template-dtest')
+configure { node ->
+node / scm / branches / 'hudson.plugins.git.BranchSpec' / 
name(branchName)
+}
+steps {
+shell("./cassandra-builds/build-scripts/cassandra-dtest.sh")
+}
+}
 
+// The End.
 }



[jira] [Commented] (CASSANDRA-8398) Expose time spent waiting in thread pool queue

2016-11-30 Thread Dikang Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709932#comment-15709932
 ] 

Dikang Gu commented on CASSANDRA-8398:
--

Here is the patch, [~tjake], do you mind taking a look? Thanks!

|[patch|https://github.com/DikangGu/cassandra/commit/447a58a75019998214eb16dfef90387da9c05583]|[unit test|https://cassci.datastax.com/view/Dev/view/DikangGu/job/DikangGu-CASSANDRA-8398-trunk-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/DikangGu/job/DikangGu-CASSANDRA-8398-trunk-dtest/]|

> Expose time spent waiting in thread pool queue 
> ---
>
> Key: CASSANDRA-8398
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8398
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: T Jake Luciani
>Assignee: Dikang Gu
>Priority: Minor
>  Labels: lhf
> Fix For: 3.12
>
>
> We are missing an important source of latency in our system, the time waiting 
> to be processed by thread pools.  We should add a metric for this so someone 
> can easily see how much time is spent just waiting to be processed.
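One common way to capture such a metric (a sketch with made-up names, not the patch linked in the comment) is to wrap each task at submission time with its enqueue timestamp and record the delta when a worker actually starts it:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch: measure time spent waiting in a thread pool queue by
// recording the enqueue time and taking the delta when the task begins running.
public class QueueWaitMetricSketch {
    static final AtomicLong totalQueueWaitNanos = new AtomicLong();

    static Runnable measured(Runnable task) {
        final long enqueuedAt = System.nanoTime(); // captured at submission time
        return () -> {
            // Delta between enqueue and start of execution = queue wait.
            totalQueueWaitNanos.addAndGet(System.nanoTime() - enqueuedAt);
            task.run();
        };
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        pool.submit(measured(() -> sleep(50))); // occupies the single worker
        pool.submit(measured(() -> {}));        // must wait ~50 ms in the queue
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("total queue wait ms: " + totalQueueWaitNanos.get() / 1_000_000);
    }

    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```

In practice the accumulated value would feed a latency histogram exposed via the existing thread pool metrics rather than a bare counter.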





[jira] [Updated] (CASSANDRA-8398) Expose time spent waiting in thread pool queue

2016-11-30 Thread Dikang Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dikang Gu updated CASSANDRA-8398:
-
 Reviewer: T Jake Luciani
Fix Version/s: (was: 2.1.x)
   3.12
   Status: Patch Available  (was: In Progress)

> Expose time spent waiting in thread pool queue 
> ---
>
> Key: CASSANDRA-8398
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8398
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: T Jake Luciani
>Assignee: Dikang Gu
>Priority: Minor
>  Labels: lhf
> Fix For: 3.12
>
>
> We are missing an important source of latency in our system, the time waiting 
> to be processed by thread pools.  We should add a metric for this so someone 
> can easily see how much time is spent just waiting to be processed.





cassandra-builds git commit: Add dtest build script

2016-11-30 Thread mshuler
Repository: cassandra-builds
Updated Branches:
  refs/heads/master ce91316c1 -> bdb23517c


Add dtest build script


Project: http://git-wip-us.apache.org/repos/asf/cassandra-builds/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-builds/commit/bdb23517
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-builds/tree/bdb23517
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-builds/diff/bdb23517

Branch: refs/heads/master
Commit: bdb23517c8028ea59deb3f5153f13aa80fa4be40
Parents: ce91316
Author: Michael Shuler 
Authored: Wed Nov 30 15:54:28 2016 -0600
Committer: Michael Shuler 
Committed: Wed Nov 30 15:54:28 2016 -0600

--
 build-scripts/cassandra-dtest.sh | 55 +++
 1 file changed, 55 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra-builds/blob/bdb23517/build-scripts/cassandra-dtest.sh
--
diff --git a/build-scripts/cassandra-dtest.sh b/build-scripts/cassandra-dtest.sh
new file mode 100755
index 000..e96b089
--- /dev/null
+++ b/build-scripts/cassandra-dtest.sh
@@ -0,0 +1,55 @@
+#!/bin/bash -x
+
+
+#
+# Prep
+#
+
+
+export PYTHONIOENCODING="utf-8"
+export PYTHONUNBUFFERED=true
+export CASS_DRIVER_NO_EXTENSIONS=true
+export CASS_DRIVER_NO_CYTHON=true
+export CCM_MAX_HEAP_SIZE="2048M"
+export CCM_HEAP_NEWSIZE="200M"
+export NUM_TOKENS="32"
+export CASSANDRA_DIR=${WORKSPACE}
+
+# Loop to prevent failure due to maven-ant-tasks not downloading a jar..
+for x in $(seq 1 3); do
+ant clean jar
+RETURN="$?"
+if [ "${RETURN}" -eq "0" ]; then
+break
+fi
+done
+
+# Set up venv with dtest dependencies
+set -e # enable immediate exit if venv setup fails
+virtualenv venv
+source venv/bin/activate
+pip install -r cassandra-dtest/requirements.txt
+pip freeze
+
+
+#
+# Main
+#
+
+
+cd cassandra-dtest/
+rm -r upgrade_tests/ # TEMP: remove upgrade_tests
+set +e # disable immediate exit from this point
+./run_dtests.py --vnodes true --nose-options="--verbosity=3 --with-xunit 
--nocapture --attr=!resource-intensive" | tee -a ${WORKSPACE}/test_stdout.txt
+
+
+#
+# Clean
+#
+
+
+# /virtualenv
+deactivate
+
+# Exit cleanly for usable "Unstable" status
+exit 0



[jira] [Comment Edited] (CASSANDRA-12728) Handling partially written hint files

2016-11-30 Thread Benjamin Roth (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709858#comment-15709858
 ] 

Benjamin Roth edited comment on CASSANDRA-12728 at 11/30/16 9:38 PM:
-

+1

Let the operator decide whether he prefers a crash or inconsistency. When not 
crashing, it should be logged as an error, so you can check the error logs and, 
instead of having to recover from a crash, start a repair if desired. The 
only recovery action one can take is to repair anyway; the only question is how 
to fail and how to get notified.
If the node crashes and the operator notices too late, the situation may become 
even worse once hints expire.
The crash doesn't necessarily happen on startup; it may occur much later if 
there are a lot of hints and only the very last file is broken.


was (Author: brstgt):
+1

Let the operator decide if he prefers a crash or inconsistency. When not 
crashing it should be logged as error, so you can check error logs and instead 
of having to recover from a crash, you could start a repair if desired. The 
only recovery action one can take is to repair anyway. The only question is how 
to fail and how to get notified.
If the node crashes and the operator recognizes too late, situation may become 
even worse when hints expire.

> Handling partially written hint files
> -
>
> Key: CASSANDRA-12728
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12728
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sharvanath Pathak
>Assignee: Aleksey Yeschenko
>  Labels: lhf
> Attachments: CASSANDRA-12728.patch
>
>
> {noformat}
> ERROR [HintsDispatcher:1] 2016-09-28 17:44:43,397 
> HintsDispatchExecutor.java:225 - Failed to dispatch hints file 
> d5d7257c-9f81-49b2-8633-6f9bda6e3dea-1474892654160-1.hints: file is corrupted 
> ({})
> org.apache.cassandra.io.FSReadError: java.io.EOFException
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:282)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:252)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsDispatcher.sendHints(HintsDispatcher.java:156)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsDispatcher.sendHintsAndAwait(HintsDispatcher.java:137)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:119) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:91) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.deliver(HintsDispatchExecutor.java:259)
>  [apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:242)
>  [apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:220)
>  [apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:199)
>  [apache-cassandra-3.0.6.jar:3.0.6]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_77]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [na:1.8.0_77]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_77]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_77]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77]
> Caused by: java.io.EOFException: null
> at 
> org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:68)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:60)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.ChecksummedDataInput.readFully(ChecksummedDataInput.java:126)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:402) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.readBuffer(HintsReader.java:310)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> 

[jira] [Commented] (CASSANDRA-12728) Handling partially written hint files

2016-11-30 Thread Benjamin Roth (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709858#comment-15709858
 ] 

Benjamin Roth commented on CASSANDRA-12728:
---

+1

Let the operator decide whether he prefers a crash or inconsistency. When not 
crashing, it should be logged as an error, so you can check the error logs and, 
instead of having to recover from a crash, start a repair if desired. The 
only recovery action one can take is to repair anyway; the only question is how 
to fail and how to get notified.
If the node crashes and the operator notices too late, the situation may become 
even worse once hints expire.

> Handling partially written hint files
> -
>
> Key: CASSANDRA-12728
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12728
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sharvanath Pathak
>Assignee: Aleksey Yeschenko
>  Labels: lhf
> Attachments: CASSANDRA-12728.patch
>
>
> {noformat}
> ERROR [HintsDispatcher:1] 2016-09-28 17:44:43,397 
> HintsDispatchExecutor.java:225 - Failed to dispatch hints file 
> d5d7257c-9f81-49b2-8633-6f9bda6e3dea-1474892654160-1.hints: file is corrupted 
> ({})
> org.apache.cassandra.io.FSReadError: java.io.EOFException
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:282)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:252)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsDispatcher.sendHints(HintsDispatcher.java:156)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsDispatcher.sendHintsAndAwait(HintsDispatcher.java:137)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:119) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:91) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.deliver(HintsDispatchExecutor.java:259)
>  [apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:242)
>  [apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:220)
>  [apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:199)
>  [apache-cassandra-3.0.6.jar:3.0.6]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_77]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [na:1.8.0_77]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_77]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_77]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77]
> Caused by: java.io.EOFException: null
> at 
> org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:68)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:60)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.ChecksummedDataInput.readFully(ChecksummedDataInput.java:126)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:402) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.readBuffer(HintsReader.java:310)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNextInternal(HintsReader.java:301)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:278)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> ... 15 common frames omitted
> {noformat}
> We've found out that the hint file was truncated because there was a hard 
> reboot around the time of the last write to the file. I think we basically need 
> to handle partially written hint files. Also, the CRC file does not exist in 
> this case (probably because it crashed while writing the hints file). Maybe 
> ignoring and cleaning up such partially written hint files can be a way to 
> fix 

[jira] [Commented] (CASSANDRA-12844) nodetool drain causing mutiple nodes crashing with hint file corruption in Cassandra 3.9

2016-11-30 Thread Benjamin Roth (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709831#comment-15709831
 ] 

Benjamin Roth commented on CASSANDRA-12844:
---

Can you reproduce that? In other words, are you sure the bare "nodetool 
drain" caused it, or is it possible that the process was killed in the 
middle? A drain itself should not write, truncate, or do anything else with 
hints. A drain just flushes all memtables and shuts down thrift + native 
transport. Hints are created locally when outgoing requests fail, so this is 
something entirely different.

I'm asking because I have done many drains (3.10) and never had such an issue.

> nodetool drain causing mutiple nodes crashing with hint file corruption in 
> Cassandra 3.9
> 
>
> Key: CASSANDRA-12844
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12844
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Harikrishnan
>Priority: Critical
>  Labels: hints
>
> The steps are as follows.
> We have a 4/4-node Cassandra cluster running version 3.9.
> On one node we made some changes to cassandra.yaml, issued a nodetool drain, 
> killed the Cassandra process, and restarted the node. After some time, nodetool 
> status reported that multiple nodes were down in that DC.
> We checked the system.log on all the nodes and found the hint corruption 
> occurring (CASSANDRA-12728). nodetool drain causing this corruption and 
> bringing multiple nodes down is a big concern.
> ERROR [HintsDispatcher:2] 2016-10-26 12:17:59,361 
> HintsDispatchExecutor.java:225 - Failed to dispatch hints file 
> 4d1362f0-053c-4042-80a7-bfc85a26c90f-1477509190999-1.hints: file is corrupted 
> ({})
> org.apache.cassandra.io.FSReadError: java.io.EOFException
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:284)
>  ~[apache-cassandra-3.9.jar:3.9]
> at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:254)
>  ~[apache-cassandra-3.9.jar:3.9]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.9.jar:3.9]
> at 
> org.apache.cassandra.hints.HintsDispatcher.sendHints(HintsDispatcher.java:156)
>  ~[apache-cassandra-3.9.jar:3.9]
> at 
> org.apache.cassandra.hints.HintsDispatcher.sendHintsAndAwait(HintsDispatcher.java:137)
>  ~[apache-cassandra-3.9.jar:3.9]
> at 
> org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:119) 
> ~[apache-cassandra-3.9.jar:3.9]
> at 
> org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:91) 
> ~[apache-cassandra-3.9.jar:3.9]
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.deliver(HintsDispatchExecutor.java:259)
>  [apache-cassandra-3.9.jar:3.9]
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:242)
>  [apache-cassandra-3.9.jar:3.9]
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:220)
>  [apache-cassandra-3.9.jar:3.9]
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:199)
>  [apache-cassandra-3.9.jar:3.9]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_102]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [na:1.8.0_102]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_102]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_102]





[jira] [Updated] (CASSANDRA-12976) NullPointerException in SharedPool-Worker

2016-11-30 Thread Mike (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike updated CASSANDRA-12976:
-
Environment: Ubuntu 14.04/16.04  (was: Cassandra 3.0.9)

> NullPointerException in SharedPool-Worker
> -
>
> Key: CASSANDRA-12976
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12976
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Ubuntu 14.04/16.04
>Reporter: Mike
>
> After an update from 3.0.8 to 3.0.9 we are getting the following exception on 
> every node for the query:
> {noformat}
> SELECT * FROM keyspace.table WHERE partition = 0 AND expiration_time < 
> 1480519469368;
> {noformat}
> on the following table:
> {noformat}
> CREATE TABLE keyspace.table (
> partition int,
> expiration_time timestamp,
> phone text,
> PRIMARY KEY (partition, expiration_time, phone)
> ) WITH CLUSTERING ORDER BY (expiration_time ASC, phone ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'ALL'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.0
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 360
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> {noformat}
> {noformat}
> WARN  [SharedPool-Worker-1] 2016-11-30 14:48:00,852 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-1,5,main]: {}
> java.lang.RuntimeException: java.lang.NullPointerException
> at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2470)
>  ~[apache-cassandra-3.0.9.jar:3.0.9]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_111]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[apache-cassandra-3.0.9.jar:3.0.9]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [apache-cassandra-3.0.9.jar:3.0.9]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.0.9.jar:3.0.9]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
> Caused by: java.lang.NullPointerException: null
> at 
> org.apache.cassandra.db.Slices$ArrayBackedSlices$ComponentOfSlice.isEQ(Slices.java:748)
>  ~[apache-cassandra-3.0.9.jar:3.0.9]
> at 
> org.apache.cassandra.db.Slices$ArrayBackedSlices.toCQLString(Slices.java:659) 
> ~[apache-cassandra-3.0.9.jar:3.0.9]
> at 
> org.apache.cassandra.db.filter.ClusteringIndexSliceFilter.toCQLString(ClusteringIndexSliceFilter.java:150)
>  ~[apache-cassandra-3.0.9.jar:3.0.9]
> at 
> org.apache.cassandra.db.SinglePartitionReadCommand.appendCQLWhereClause(SinglePartitionReadCommand.java:911)
>  ~[apache-cassandra-3.0.9.jar:3.0.9]
> at 
> org.apache.cassandra.db.ReadCommand.toCQLString(ReadCommand.java:560) 
> ~[apache-cassandra-3.0.9.jar:3.0.9]
> at 
> org.apache.cassandra.db.ReadCommand$1MetricRecording.onClose(ReadCommand.java:506)
>  ~[apache-cassandra-3.0.9.jar:3.0.9]
> at 
> org.apache.cassandra.db.transform.BasePartitions.runOnClose(BasePartitions.java:70)
>  ~[apache-cassandra-3.0.9.jar:3.0.9]
> at 
> org.apache.cassandra.db.transform.BaseIterator.close(BaseIterator.java:76) 
> ~[apache-cassandra-3.0.9.jar:3.0.9]
> at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1797)
>  ~[apache-cassandra-3.0.9.jar:3.0.9]
> at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2466)
>  ~[apache-cassandra-3.0.9.jar:3.0.9]
> ... 5 common frames omitted
> {noformat}





[jira] [Commented] (CASSANDRA-9143) Improving consistency of repairAt field across replicas

2016-11-30 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709562#comment-15709562
 ] 

Marcus Eriksson commented on CASSANDRA-9143:


bq. I'd say we should keep full repairs simple. Don't do anti-compaction on 
them, and don't make them consistent.
sounds good

> Improving consistency of repairAt field across replicas 
> 
>
> Key: CASSANDRA-9143
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9143
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Blake Eggleston
>
> We currently send an anticompaction request to all replicas. During this, a 
> node will split sstables and mark the appropriate ones repaired. 
> The problem is that this could fail on some replicas for many reasons, 
> leading to problems in the next repair. 
> This is what I am suggesting to improve it: 
> 1) Send the anticompaction request to all replicas. This can be done at the 
> session level. 
> 2) During anticompaction, sstables are split but not marked repaired. 
> 3) When we get a positive ack from all replicas, the coordinator will send 
> another message called markRepaired. 
> 4) On getting this message, replicas will mark the appropriate sstables as 
> repaired. 
> This will reduce the window of failure. We can also think of "hinting" the 
> markRepaired message if required. 
> Also, the sstables which are streaming can be marked as repaired as is 
> done now. 
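The four numbered steps in the description can be modeled as a two-phase exchange. The sketch below (hypothetical names, not Cassandra's actual repair classes) only marks sstables repaired once every replica has positively acked the anticompaction, which is what narrows the failure window:

```java
import java.util.List;

// Toy model of the proposed two-phase scheme: split sstables first, mark them
// repaired only after all replicas have acked the anticompaction.
public class MarkRepairedSketch {
    static class Replica {
        final boolean healthy;            // whether anticompaction will succeed here
        boolean split, repaired;
        Replica(boolean healthy) { this.healthy = healthy; }
        boolean anticompact() {           // step 2: split sstables, do NOT mark yet
            if (!healthy) return false;
            split = true;
            return true;
        }
        void markRepaired() {             // step 4: flip the repaired flag on split sstables
            if (split) repaired = true;
        }
    }

    static boolean runSession(List<Replica> replicas) {
        int acks = 0;
        for (Replica r : replicas)        // step 1: anticompaction request to every replica
            if (r.anticompact())
                acks++;
        if (acks < replicas.size())       // any failure: abort before marking anything repaired
            return false;
        for (Replica r : replicas)        // step 3: all positive acks -> send markRepaired
            r.markRepaired();
        return true;
    }

    public static void main(String[] args) {
        List<Replica> ok = List.of(new Replica(true), new Replica(true));
        System.out.println("all healthy -> " + runSession(ok));
        List<Replica> bad = List.of(new Replica(true), new Replica(false));
        System.out.println("one failed  -> " + runSession(bad));
    }
}
```

In the failure case no replica is marked repaired, so the replicas' repairedAt state stays consistent for the next repair, rather than being left half-updated as with the single-phase approach.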





[jira] [Commented] (CASSANDRA-12928) Assert error, 3.9 mutation stage

2016-11-30 Thread Romain GERARD (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709548#comment-15709548
 ] 

Romain GERARD commented on CASSANDRA-12928:
---

Hello,

I see this exception in my logs. Do you have any more information about how 
it happens?

Regards,
Romain

> Assert error, 3.9 mutation stage
> 
>
> Key: CASSANDRA-12928
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12928
> Project: Cassandra
>  Issue Type: Bug
> Environment: 3.9 
>Reporter: Jeff Jirsa
>
> {code}
> WARN  [MutationStage-341] 2016-11-17 18:39:18,781 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[MutationStage-341,5,main]: {}
> java.lang.AssertionError: null
>   at 
> io.netty.util.Recycler$WeakOrderQueue.(Recycler.java:225) 
> ~[netty-all-4.0.39.Final.jar:4.0.39.Final]
>   at 
> io.netty.util.Recycler$DefaultHandle.recycle(Recycler.java:180) 
> ~[netty-all-4.0.39.Final.jar:4.0.39.Final]
>   at io.netty.util.Recycler.recycle(Recycler.java:141) 
> ~[netty-all-4.0.39.Final.jar:4.0.39.Final]
>   at 
> org.apache.cassandra.utils.btree.BTree$Builder.recycle(BTree.java:836) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.utils.btree.BTree$Builder.build(BTree.java:1089) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.partitions.PartitionUpdate.build(PartitionUpdate.java:587)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.partitions.PartitionUpdate.maybeBuild(PartitionUpdate.java:577)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.partitions.PartitionUpdate.holder(PartitionUpdate.java:388)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.partitions.AbstractBTreePartition.unfilteredIterator(AbstractBTreePartition.java:177)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.partitions.AbstractBTreePartition.unfilteredIterator(AbstractBTreePartition.java:172)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.serializedSize(PartitionUpdate.java:868)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.Mutation$MutationSerializer.serializedSize(Mutation.java:456)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:257) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:493) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:396) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:215) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Mutation.apply(Mutation.java:227) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Mutation.apply(Mutation.java:241) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.service.StorageProxy$8.runMayThrow(StorageProxy.java:1347)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.service.StorageProxy$LocalMutationRunnable.run(StorageProxy.java:2539)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_91]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [apache-cassandra-3.9.jar:3.9]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10145) Change protocol to allow sending key space independent of query string

2016-11-30 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-10145:

Status: Open  (was: Patch Available)

> Change protocol to allow sending key space independent of query string
> --
>
> Key: CASSANDRA-10145
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10145
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Vishy Kasar
>Assignee: Sandeep Tamhankar
> Fix For: 3.x
>
> Attachments: 10145-trunk.txt
>
>
> Currently the keyspace is either embedded in the query string or set through 
> "use keyspace" on a connection by the client driver. 
> There are practical use cases where the client has the query and the 
> keyspace independently. For that scenario to work today, it has to create 
> one client session per keyspace or resort to string-replace hackery. 
> It would be nice if the protocol allowed sending the keyspace separately 
> from the query. 





[jira] [Updated] (CASSANDRA-12956) CL is not replayed on custom 2i exception

2016-11-30 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12956:

Assignee: Alex Petrov

> CL is not replayed on custom 2i exception
> -
>
> Key: CASSANDRA-12956
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12956
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alex Petrov
>Assignee: Alex Petrov
>Priority: Critical
>
> If during node shutdown / drain the custom (non-cf) 2i throws an 
> exception, the CommitLog is correctly preserved (segments won't get 
> discarded, because segment tracking is correct). 
> However, when it gets replayed on node startup, we make a decision 
> whether or not to replay the commit log. CL segments start getting replayed, 
> since there are non-discarded segments, and during this process we check 
> whether every [individual 
> mutation|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java#L215]
>  in the commit log has already been committed or not. Information about the 
> sstables is taken from the [live sstables on 
> disk|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java#L250-L256].
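The replay decision described above boils down to comparing each logged mutation's commit-log position against the highest position the live sstables claim to have persisted. A minimal, hedged sketch of that filter follows; `ReplayPosition` and `toReplay` are invented names for illustration, not Cassandra's actual `CommitLogReplayer` API.

```java
import java.util.List;
import java.util.stream.Collectors;

// Hedged sketch of the replay filter described above. The real logic lives
// in org.apache.cassandra.db.commitlog.CommitLogReplayer; these names are
// simplified stand-ins.
public class ReplayFilterSketch {
    // A (segmentId, offset) pair identifying where a mutation sits in the CL.
    record ReplayPosition(long segmentId, int offset) implements Comparable<ReplayPosition> {
        @Override
        public int compareTo(ReplayPosition o) {
            int c = Long.compare(segmentId, o.segmentId);
            return c != 0 ? c : Integer.compare(offset, o.offset);
        }
    }

    /**
     * Keep only mutations written after the highest position the live
     * sstables on disk claim to have persisted; everything at or below that
     * bound is treated as already committed and is skipped during replay.
     */
    static List<ReplayPosition> toReplay(List<ReplayPosition> logged, ReplayPosition persistedUpTo) {
        return logged.stream()
                     .filter(m -> m.compareTo(persistedUpTo) > 0)
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<ReplayPosition> logged = List.of(
                new ReplayPosition(1, 100),
                new ReplayPosition(1, 200),
                new ReplayPosition(2, 50));
        // the sstables claim everything up to segment 1, offset 150
        List<ReplayPosition> pending = toReplay(logged, new ReplayPosition(1, 150));
        System.out.println(pending.size());  // only the two later mutations remain
    }
}
```

The bug here is about how that persisted bound is derived from the sstables on disk when a failed 2i left data out of them; the sketch only shows the comparison itself.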





[jira] [Commented] (CASSANDRA-12956) CL is not replayed on custom 2i exception

2016-11-30 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709291#comment-15709291
 ] 

Joshua McKenzie commented on CASSANDRA-12956:
-

[~blambov] to review.

> CL is not replayed on custom 2i exception
> -
>
> Key: CASSANDRA-12956
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12956
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alex Petrov
>Assignee: Alex Petrov
>Priority: Critical
>
> If during node shutdown / drain the custom (non-cf) 2i throws an 
> exception, the CommitLog is correctly preserved (segments won't get 
> discarded, because segment tracking is correct). 
> However, when it gets replayed on node startup, we make a decision 
> whether or not to replay the commit log. CL segments start getting replayed, 
> since there are non-discarded segments, and during this process we check 
> whether every [individual 
> mutation|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java#L215]
>  in the commit log has already been committed or not. Information about the 
> sstables is taken from the [live sstables on 
> disk|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java#L250-L256].





[jira] [Updated] (CASSANDRA-12956) CL is not replayed on custom 2i exception

2016-11-30 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12956:

Reviewer: Branimir Lambov

> CL is not replayed on custom 2i exception
> -
>
> Key: CASSANDRA-12956
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12956
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alex Petrov
>Priority: Critical
>
> If during node shutdown / drain the custom (non-cf) 2i throws an 
> exception, the CommitLog is correctly preserved (segments won't get 
> discarded, because segment tracking is correct). 
> However, when it gets replayed on node startup, we make a decision 
> whether or not to replay the commit log. CL segments start getting replayed, 
> since there are non-discarded segments, and during this process we check 
> whether every [individual 
> mutation|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java#L215]
>  in the commit log has already been committed or not. Information about the 
> sstables is taken from the [live sstables on 
> disk|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java#L250-L256].





[jira] [Commented] (CASSANDRA-9143) Improving consistency of repairAt field across replicas

2016-11-30 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709261#comment-15709261
 ] 

Blake Eggleston commented on CASSANDRA-9143:


bq. Should we prioritize the pending-repair-cleanup compactions?

Makes sense.

bq. Is there any point in doing anticompaction after repair with -full repairs? 
Can we always do consistent repairs? We would need to anticompact already 
repaired sstables into pending, but that should not be a big problem?

Good point. I'd say we should keep full repairs simple: don't do 
anti-compaction on them, and don't make them consistent. Given the newness 
and relative complexity of consistent repair, it would be smart to have full 
repair as a workaround in case we find a problem with it. If we're not going 
to do anti-compaction though, we should preserve the repairedAt values of the 
sstables we're streaming around as part of a full repair. That will make it 
possible to fix corrupted or lost data in the repair buckets without 
adversely affecting the next incremental repair.

bq. In handleStatusRequest - if we don't have the local session, we should 
probably return that the session is failed?

That makes sense.

> Improving consistency of repairAt field across replicas 
> 
>
> Key: CASSANDRA-9143
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9143
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Blake Eggleston
>
> We currently send an anticompaction request to all replicas. During this, a 
> node will split sstables and mark the appropriate ones repaired. 
> The problem is that this could fail on some replicas for many reasons, 
> leading to problems in the next repair. 
> This is what I am suggesting to improve it: 
> 1) Send the anticompaction request to all replicas. This can be done at the 
> session level. 
> 2) During anticompaction, sstables are split but not marked repaired. 
> 3) When we get a positive ack from all replicas, the coordinator will send 
> another message called markRepaired. 
> 4) On getting this message, replicas will mark the appropriate sstables as 
> repaired. 
> This will reduce the window of failure. We can also think of "hinting" the 
> markRepaired message if required. 
> Also the sstables which are streamed can be marked as repaired as is done 
> now. 
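Steps 1-4 above amount to a two-phase protocol over the replica set: anticompact everywhere first, and flip the repaired flag only once every replica has acked. A hedged sketch of that coordinator flow; `Replica`, `coordinate`, and `StubReplica` are invented names for illustration, not Cassandra's repair classes.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Illustrative two-phase coordinator for the proposal above (invented names).
public class MarkRepairedSketch {
    interface Replica {
        CompletableFuture<Boolean> anticompact();  // step 2: split sstables, do NOT mark repaired
        void markRepaired();                       // step 4: mark the split sstables repaired
    }

    /** Steps 1 and 3: request anticompaction everywhere; mark repaired only on all-positive acks. */
    static boolean coordinate(List<Replica> replicas) {
        List<CompletableFuture<Boolean>> acks =
                replicas.stream().map(Replica::anticompact).toList();    // step 1: fan out
        boolean allOk = acks.stream().allMatch(CompletableFuture::join); // step 3: wait for acks
        if (allOk)
            replicas.forEach(Replica::markRepaired);                     // step 4: commit
        return allOk;
    }

    // In-memory replica used to demonstrate the flow.
    static class StubReplica implements Replica {
        final boolean ack;
        boolean marked = false;
        StubReplica(boolean ack) { this.ack = ack; }
        public CompletableFuture<Boolean> anticompact() { return CompletableFuture.completedFuture(ack); }
        public void markRepaired() { marked = true; }
    }

    public static void main(String[] args) {
        StubReplica good = new StubReplica(true), flaky = new StubReplica(false);
        // one failed ack means nobody marks repaired, shrinking the failure window
        System.out.println(coordinate(List.of(good, flaky)));  // false
        System.out.println(good.marked);                       // false
    }
}
```

The point of the second phase is exactly what the ticket argues: a replica that failed anticompaction never diverges from the rest, because no replica marks anything repaired until all of them have acked.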





[jira] [Updated] (CASSANDRA-12978) mx4j -> HTTP 500 -> ConcurrentModificationException

2016-11-30 Thread Rob Emery (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rob Emery updated CASSANDRA-12978:
--
Priority: Critical  (was: Major)

> mx4j -> HTTP 500 -> ConcurrentModificationException
> ---
>
> Key: CASSANDRA-12978
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12978
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Debian, Single cluster, 2 data centres, E5-2620 v3, 
> 16GB, RAID1 SSD Commit log, RAID10 15k HDD data
>Reporter: Rob Emery
>Priority: Critical
> Fix For: 2.1.6
>
>
> We run some checks from our monitoring software that rely on mx4j.
> The checks typically grab some XML via an HTTP request and parse it. For 
> example, the CF stats for 'MyKeySpace' and 'MyColumnFamily' are retrieved 
> using:
> http://cassandra001:8081/mbean?template=identity=org.apache.cassandra.db%3Atype%3DColumnFamilies%2Ckeyspace%3DMyKeySpace%2Ccolumnfamily%3DMyColumnFamily
> The checks run each minute. Periodically they result in a "HTTP 500 internal 
> server error". The HTML body returned is empty.
> Experimentally we ran Cassandra in the foreground on one node and reproduced 
> the problem. This elicited the following stack trace:
> javax.management.RuntimeMBeanException: 
> java.util.ConcurrentModificationException
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:839)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrowMaybeMBeanException(DefaultMBeanServerInterceptor.java:852)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:651)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
> at 
> mx4j.tools.adaptor.http.MBeanCommandProcessor.createMBeanElement(MBeanCommandProcessor.java:119)
> at 
> mx4j.tools.adaptor.http.MBeanCommandProcessor.executeRequest(MBeanCommandProcessor.java:56)
> at 
> mx4j.tools.adaptor.http.HttpAdaptor$HttpClient.run(HttpAdaptor.java:980)
> Caused by: java.util.ConcurrentModificationException
> at 
> java.util.TreeMap$NavigableSubMap$SubMapIterator.nextEntry(TreeMap.java:1594)
> at 
> java.util.TreeMap$NavigableSubMap$SubMapEntryIterator.next(TreeMap.java:1642)
> at 
> java.util.TreeMap$NavigableSubMap$SubMapEntryIterator.next(TreeMap.java:1636)
> at java.util.AbstractMap$2$1.next(AbstractMap.java:385)
> at 
> org.apache.cassandra.utils.StreamingHistogram.sum(StreamingHistogram.java:160)
> at 
> org.apache.cassandra.io.sstable.metadata.StatsMetadata.getDroppableTombstonesBefore(StatsMetadata.java:113)
> at 
> org.apache.cassandra.io.sstable.SSTableReader.getDroppableTombstonesBefore(SSTableReader.java:2004)
> at 
> org.apache.cassandra.db.DataTracker.getDroppableTombstoneRatio(DataTracker.java:507)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.getDroppableTombstoneRatio(ColumnFamilyStore.java:3089)
> at sun.reflect.GeneratedMethodAccessor64.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
> at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
> at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
> at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
> at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
> at 
> com.sun.jmx.mbeanserver.PerInterface.getAttribute(PerInterface.java:83)
> at 
> com.sun.jmx.mbeanserver.MBeanSupport.getAttribute(MBeanSupport.java:206)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
> ... 4 more
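The root cause in the trace is generic Java behavior: `TreeMap` iterators, including those over `subMap`/`headMap` views (which is what `StreamingHistogram.sum` walks), are fail-fast, so any structural modification of the backing map mid-iteration throws. A deterministic single-threaded reproduction of that failure mode follows; nothing here is Cassandra code, the map merely stands in for the histogram's bin map.

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.ConcurrentModificationException;

// Reproduces the fail-fast behavior behind the trace above: modifying a
// TreeMap while iterating one of its headMap views makes the next next()
// call throw ConcurrentModificationException. In Cassandra the writer is a
// compaction thread and the reader is the mx4j/JMX request; here one thread
// interleaves both roles so the failure is reliable.
public class FailFastDemo {
    static String demo() {
        TreeMap<Double, Long> bins = new TreeMap<>();
        for (int i = 0; i < 10; i++)
            bins.put((double) i, 1L);
        try {
            long sum = 0;
            // iterate a headMap view, the way sum(b) walks the bins below b
            for (Map.Entry<Double, Long> e : bins.headMap(8.0).entrySet()) {
                sum += e.getValue();
                bins.put(100.0, 1L);  // structural modification: brand-new key
            }
            return "sum=" + sum;
        } catch (ConcurrentModificationException cme) {
            return "ConcurrentModificationException";
        }
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Serializing readers against writers, or iterating a copy of the bins, avoids the exception; either way the JMX read path must not share a live `TreeMap` with compaction.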





[jira] [Created] (CASSANDRA-12978) mx4j -> HTTP 500 -> ConcurrentModificationException

2016-11-30 Thread Rob Emery (JIRA)
Rob Emery created CASSANDRA-12978:
-

 Summary: mx4j -> HTTP 500 -> ConcurrentModificationException
 Key: CASSANDRA-12978
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12978
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: Debian, Single cluster, 2 data centres, E5-2620 v3, 16GB, 
RAID1 SSD Commit log, RAID10 15k HDD data
Reporter: Rob Emery
 Fix For: 2.1.6


We run some checks from our monitoring software that rely on mx4j.

The checks typically grab some XML via an HTTP request and parse it. For 
example, the CF stats for 'MyKeySpace' and 'MyColumnFamily' are retrieved 
using:

http://cassandra001:8081/mbean?template=identity=org.apache.cassandra.db%3Atype%3DColumnFamilies%2Ckeyspace%3DMyKeySpace%2Ccolumnfamily%3DMyColumnFamily

The checks run each minute. Periodically they result in a "HTTP 500 internal 
server error". The HTML body returned is empty.

Experimentally we ran Cassandra in the foreground on one node and reproduced 
the problem. This elicited the following stack trace:

javax.management.RuntimeMBeanException: 
java.util.ConcurrentModificationException
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:839)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrowMaybeMBeanException(DefaultMBeanServerInterceptor.java:852)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:651)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
at 
mx4j.tools.adaptor.http.MBeanCommandProcessor.createMBeanElement(MBeanCommandProcessor.java:119)
at 
mx4j.tools.adaptor.http.MBeanCommandProcessor.executeRequest(MBeanCommandProcessor.java:56)
at 
mx4j.tools.adaptor.http.HttpAdaptor$HttpClient.run(HttpAdaptor.java:980)
Caused by: java.util.ConcurrentModificationException
at 
java.util.TreeMap$NavigableSubMap$SubMapIterator.nextEntry(TreeMap.java:1594)
at 
java.util.TreeMap$NavigableSubMap$SubMapEntryIterator.next(TreeMap.java:1642)
at 
java.util.TreeMap$NavigableSubMap$SubMapEntryIterator.next(TreeMap.java:1636)
at java.util.AbstractMap$2$1.next(AbstractMap.java:385)
at 
org.apache.cassandra.utils.StreamingHistogram.sum(StreamingHistogram.java:160)
at 
org.apache.cassandra.io.sstable.metadata.StatsMetadata.getDroppableTombstonesBefore(StatsMetadata.java:113)
at 
org.apache.cassandra.io.sstable.SSTableReader.getDroppableTombstonesBefore(SSTableReader.java:2004)
at 
org.apache.cassandra.db.DataTracker.getDroppableTombstoneRatio(DataTracker.java:507)
at 
org.apache.cassandra.db.ColumnFamilyStore.getDroppableTombstoneRatio(ColumnFamilyStore.java:3089)
at sun.reflect.GeneratedMethodAccessor64.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
at 
com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
at 
com.sun.jmx.mbeanserver.PerInterface.getAttribute(PerInterface.java:83)
at 
com.sun.jmx.mbeanserver.MBeanSupport.getAttribute(MBeanSupport.java:206)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
... 4 more






[jira] [Updated] (CASSANDRA-12956) CL is not replayed on custom 2i exception

2016-11-30 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-12956:

Status: Patch Available  (was: Open)

> CL is not replayed on custom 2i exception
> -
>
> Key: CASSANDRA-12956
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12956
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alex Petrov
>Priority: Critical
>
> If during node shutdown / drain the custom (non-cf) 2i throws an 
> exception, the CommitLog is correctly preserved (segments won't get 
> discarded, because segment tracking is correct). 
> However, when it gets replayed on node startup, we make a decision 
> whether or not to replay the commit log. CL segments start getting replayed, 
> since there are non-discarded segments, and during this process we check 
> whether every [individual 
> mutation|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java#L215]
>  in the commit log has already been committed or not. Information about the 
> sstables is taken from the [live sstables on 
> disk|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java#L250-L256].





[jira] [Updated] (CASSANDRA-12976) NullPointerException in SharedPool-Worker

2016-11-30 Thread Mike (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike updated CASSANDRA-12976:
-
Description: 
After an update from 3.0.8 to 3.0.9 we are getting the following exception on 
every node for the query:
{noformat}
SELECT * FROM keyspace.table WHERE partition = 0 AND expiration_time < 
1480519469368;
{noformat}
on the following table:
{noformat}
CREATE TABLE keyspace.table (
partition int,
expiration_time timestamp,
phone text,
PRIMARY KEY (partition, expiration_time, phone)
) WITH CLUSTERING ORDER BY (expiration_time ASC, phone ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = {'keys': 'ALL', 'rows_per_partition': 'ALL'}
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32', 'min_threshold': '4'}
AND compression = {'chunk_length_in_kb': '64', 'class': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND crc_check_chance = 1.0
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 360
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99PERCENTILE';
{noformat}

{noformat}
WARN  [SharedPool-Worker-1] 2016-11-30 14:48:00,852 
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-1,5,main]: {}
java.lang.RuntimeException: java.lang.NullPointerException
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2470)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_111]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
 [apache-cassandra-3.0.9.jar:3.0.9]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-3.0.9.jar:3.0.9]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
Caused by: java.lang.NullPointerException: null
at 
org.apache.cassandra.db.Slices$ArrayBackedSlices$ComponentOfSlice.isEQ(Slices.java:748)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.Slices$ArrayBackedSlices.toCQLString(Slices.java:659) 
~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.filter.ClusteringIndexSliceFilter.toCQLString(ClusteringIndexSliceFilter.java:150)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.SinglePartitionReadCommand.appendCQLWhereClause(SinglePartitionReadCommand.java:911)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.ReadCommand.toCQLString(ReadCommand.java:560) 
~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.ReadCommand$1MetricRecording.onClose(ReadCommand.java:506)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.transform.BasePartitions.runOnClose(BasePartitions.java:70)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.transform.BaseIterator.close(BaseIterator.java:76) 
~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1797)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2466)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
... 5 common frames omitted
{noformat}

  was:
After an update from 3.0.8 to 3.0.9 we are getting the following exception on 
every node for the query:
{noformat}
SELECT * FROM keyspace.table WHERE partition = 0 AND expiration_time < 
1480519469368;
{noformat}
on the following table:
{noformat}
CREATE TABLE keyspace.table (
partition int,
expiration_time timestamp,
phone text,
PRIMARY KEY (partition, expiration_time, phone)
) WITH CLUSTERING ORDER BY (expiration_time ASC, phone ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = {'keys': 'ALL', 'rows_per_partition': 'ALL'}
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32', 'min_threshold': '4'}
AND compression = {'chunk_length_in_kb': '64', 'class': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND crc_check_chance = 1.0
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 360
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND 

[jira] [Commented] (CASSANDRA-12966) Gossip thread slows down when using batch commit log

2016-11-30 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709058#comment-15709058
 ] 

Stefan Podkowinski commented on CASSANDRA-12966:


The gossip stage's single-threaded execution seems a bit problematic, as it 
also caused some pain in CASSANDRA-12281. CASSANDRA-8398 looks like a good 
thing to have here.

Some comments regarding your patch:

My thoughts on the concurrency aspects:
StorageService.handleStateNormal updates tokens in both TokenMetadata and 
SystemKeyspace. The previous blocking behavior ensured the two stayed in 
sync. Offloading the system table update to the mutation stage lets the 
table lag behind, but I would not expect any races between mutations, as the 
execution order hasn't changed, just the executor.
Uncoupling the mutations this way without waiting for the write result 
shouldn't be a problem, as the system table is only used during 
initialization, and there is no guarantee that the gossip state for a node 
is always recent anyway.

The synchronized keyword on removeEndpoints looks like a leftover from when 
the code would read and write back the modified token set; it should be safe 
to remove.

As for API modifications, there are now two updateToken versions, one 
blocking and one asynchronous. The async methods should perhaps be named 
differently: the Future return value is never checked in the code, so you 
can't tell which version is called by reading the caller side.
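On the naming point, one way to keep the two variants distinguishable at call sites is to reserve the old name for the blocking path and give the offloaded one an explicit Async suffix plus a Future return type. A rough sketch under invented names; this is not the actual patch, and `writePeersTable` merely stands in for the synchronized `SystemKeyspace.updatePeerInfo`.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative only: invented names sketching the blocking vs. offloaded
// update discussed above. The single-threaded executor stands in for the
// mutation stage, which preserves the original execution order of updates.
public class PeerUpdateSketch {
    final ExecutorService mutationStage = Executors.newSingleThreadExecutor();
    volatile String lastWrite;  // observable stand-in for the peers table

    /** Blocking: returns once the peers-table write has completed. */
    public void updateToken(String peer, String token) {
        writePeersTable(peer, token);
    }

    /** Async: the gossip thread returns immediately; the write runs on the mutation stage. */
    public CompletableFuture<Void> updateTokenAsync(String peer, String token) {
        return CompletableFuture.runAsync(() -> writePeersTable(peer, token), mutationStage);
    }

    private void writePeersTable(String peer, String token) {
        lastWrite = peer + "=" + token;  // stand-in for SystemKeyspace.updatePeerInfo
    }

    public static void main(String[] args) {
        PeerUpdateSketch s = new PeerUpdateSketch();
        s.updateTokenAsync("10.0.0.1", "token-1").join();  // gossip callers may fire-and-forget instead
        System.out.println(s.lastWrite);
        s.mutationStage.shutdown();
    }
}
```

With the distinct name and return type, a reviewer can tell at a glance which call sites block the gossip thread and which merely enqueue.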


> Gossip thread slows down when using batch commit log
> 
>
> Key: CASSANDRA-12966
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12966
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jason Brown
>Assignee: Jason Brown
>Priority: Minor
>
> When using batch commit log mode, the Gossip thread slows down when updating 
> peers after a node bounces. This is because we perform a bunch of updates to 
> the peers table via {{SystemKeyspace.updatePeerInfo}}, which is a 
> synchronized method. How quickly each of those individual updates completes 
> depends on how busy the system is with write traffic at the time. If the 
> system is largely quiescent, each update will be relatively quick (just 
> waiting for the fsync). If the system is getting a lot of writes, then 
> depending on commitlog_sync_batch_window_in_ms, each of the Gossip thread's 
> updates can get stuck in the backlog, which causes the Gossip thread to stop 
> processing. We have observed in large clusters that a rolling restart 
> triggers and exacerbates this behavior. 





[jira] [Updated] (CASSANDRA-12976) NullPointerException in SharedPool-Worker

2016-11-30 Thread Mike (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike updated CASSANDRA-12976:
-
Description: 
After an update from 3.0.8 to 3.0.9 we are getting the following exception on 
every node for the query:
{noformat}
SELECT * FROM keyspace.table WHERE partition = 0 AND expiration_time < 
1480519469368;
{noformat}
on the following table:
{noformat}
CREATE TABLE keyspace.table (
partition int,
expiration_time timestamp,
phone text,
PRIMARY KEY (partition, expiration_time, phone)
) WITH CLUSTERING ORDER BY (expiration_time ASC, phone ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = {'keys': 'ALL', 'rows_per_partition': 'ALL'}
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32', 'min_threshold': '4'}
AND compression = {'chunk_length_in_kb': '64', 'class': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND crc_check_chance = 1.0
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 360
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99PERCENTILE';
{noformat}

{noformat}
WARN  [SharedPool-Worker-1] 2016-11-30 14:48:00,852 
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-1,5,main]: {}
java.lang.RuntimeException: java.lang.NullPointerException
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2470)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_111]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
 [apache-cassandra-3.0.9.jar:3.0.9]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-3.0.9.jar:3.0.9]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
Caused by: java.lang.NullPointerException: null
at 
org.apache.cassandra.db.Slices$ArrayBackedSlices$ComponentOfSlice.isEQ(Slices.java:748)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.Slices$ArrayBackedSlices.toCQLString(Slices.java:659) 
~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.filter.ClusteringIndexSliceFilter.toCQLString(ClusteringIndexSliceFilter.java:150)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.SinglePartitionReadCommand.appendCQLWhereClause(SinglePartitionReadCommand.java:911)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.ReadCommand.toCQLString(ReadCommand.java:560) 
~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.ReadCommand$1MetricRecording.onClose(ReadCommand.java:506)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.transform.BasePartitions.runOnClose(BasePartitions.java:70)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.transform.BaseIterator.close(BaseIterator.java:76) 
~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1797)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
{noformat}

  was:
After an update from 3.0.8 to 3.0.9 we are getting the following exception on 
every node for the query:
{noformat}
SELECT * FROM keyspace.table WHERE partition = 0 AND expiration_time < 
1480519469368;

on the following table:

CREATE TABLE keyspace.table (
partition int,
expiration_time timestamp,
phone text,
PRIMARY KEY (partition, expiration_time, phone)
) WITH CLUSTERING ORDER BY (expiration_time ASC, phone ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = {'keys': 'ALL', 'rows_per_partition': 'ALL'}
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32', 'min_threshold': '4'}
AND compression = {'chunk_length_in_kb': '64', 'class': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND crc_check_chance = 1.0
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 360
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99PERCENTILE';
{noformat}

{noformat}
WARN  [SharedPool-Worker-1] 2016-11-30 14:48:00,852 
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 

[jira] [Updated] (CASSANDRA-12976) NullPointerException in SharedPool-Worker

2016-11-30 Thread Mike (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike updated CASSANDRA-12976:
-
Description: 
After an update from 3.0.8 to 3.0.9 we are getting the following exception on 
every node for the query:
{noformat}
SELECT * FROM keyspace.table WHERE partition = 0 AND expiration_time < 
1480519469368;

on the following table:

CREATE TABLE keyspace.table (
partition int,
expiration_time timestamp,
phone text,
PRIMARY KEY (partition, expiration_time, phone)
) WITH CLUSTERING ORDER BY (expiration_time ASC, phone ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = {'keys': 'ALL', 'rows_per_partition': 'ALL'}
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32', 'min_threshold': '4'}
AND compression = {'chunk_length_in_kb': '64', 'class': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND crc_check_chance = 1.0
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 360
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99PERCENTILE';
{noformat}

{noformat}
WARN  [SharedPool-Worker-1] 2016-11-30 14:48:00,852 
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-1,5,main]: {}
java.lang.RuntimeException: java.lang.NullPointerException
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2470)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_111]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
 [apache-cassandra-3.0.9.jar:3.0.9]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-3.0.9.jar:3.0.9]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
Caused by: java.lang.NullPointerException: null
at 
org.apache.cassandra.db.Slices$ArrayBackedSlices$ComponentOfSlice.isEQ(Slices.java:748)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.Slices$ArrayBackedSlices.toCQLString(Slices.java:659) 
~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.filter.ClusteringIndexSliceFilter.toCQLString(ClusteringIndexSliceFilter.java:150)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.SinglePartitionReadCommand.appendCQLWhereClause(SinglePartitionReadCommand.java:911)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.ReadCommand.toCQLString(ReadCommand.java:560) 
~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.ReadCommand$1MetricRecording.onClose(ReadCommand.java:506)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.transform.BasePartitions.runOnClose(BasePartitions.java:70)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.transform.BaseIterator.close(BaseIterator.java:76) 
~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1797)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
{noformat}
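The failure mode reported here can be sketched in miniature (a hypothetical Python model, not Cassandra's actual `Slices`/`ClusteringIndexSliceFilter` code; all names are illustrative): the query `expiration_time < 1480519469368` produces a slice with no start bound, and a CQL-string printer that assumes both bounds exist dereferences the missing one, which is the analogue of the NPE in `toCQLString`.

```python
# Hypothetical sketch of the failure mode, not Cassandra source.
class Slice:
    def __init__(self, start=None, end=None):
        self.start = start  # None models an unbounded side of the slice
        self.end = end

def to_cql_naive(slice_, column):
    # Assumes both bounds are present -- blows up on one-sided slices,
    # analogous to the NullPointerException in the stack trace above.
    return f"{column} > {slice_.start[0]} AND {column} < {slice_.end[0]}"

def to_cql_safe(slice_, column):
    # Emits only the bounds that actually exist.
    parts = []
    if slice_.start is not None:
        parts.append(f"{column} > {slice_.start[0]}")
    if slice_.end is not None:
        parts.append(f"{column} < {slice_.end[0]}")
    return " AND ".join(parts)

one_sided = Slice(start=None, end=(1480519469368,))
print(to_cql_safe(one_sided, "expiration_time"))  # expiration_time < 1480519469368
try:
    to_cql_naive(one_sided, "expiration_time")
except TypeError as e:  # Python's analogue of the Java NPE
    print("failed:", e)
```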

  was:
After an update from 3.0.8 to 3.0.9 we are getting the following exception once 
per minute on every node:

{noformat}
WARN  [SharedPool-Worker-1] 2016-11-30 14:48:00,852 
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-1,5,main]: {}
java.lang.RuntimeException: java.lang.NullPointerException
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2470)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_111]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
 [apache-cassandra-3.0.9.jar:3.0.9]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-3.0.9.jar:3.0.9]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
Caused by: java.lang.NullPointerException: null
at 
org.apache.cassandra.db.Slices$ArrayBackedSlices$ComponentOfSlice.isEQ(Slices.java:748)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 

[jira] [Updated] (CASSANDRA-11115) Thrift removal

2016-11-30 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11115:
-
 Reviewer: Aleksey Yeschenko
Fix Version/s: (was: 4.x)
   4.0
   Status: Patch Available  (was: Open)

Attaching patch on current trunk below:
| [5|https://github.com/pcmanus/cassandra/commits/5] | 
[utests|http://cassci.datastax.com/job/pcmanus-5-testall] | 
[dtests|http://cassci.datastax.com/job/pcmanus-5-dtest] |

There are a few different commits, but they are mostly iterations of things I 
removed, so it might be easier to look at the change as a whole.

By and large, this removes things, lots of them, plus a few minor related 
cleanups. There are two small points worth noting, however:
* {{StorageService.describeRing()}}, which we still expose through a JMX 
method, was using a thrift class whose {{toString()}} was called for the JMX 
method. To preserve backward compatibility, I re-created an equivalent (though 
simpler) class ({{TokenRange}}), which should maintain the same {{toString()}} 
output.
* {{ConfigHelper}} was fun because it was still using some thrift, and I know 
next to nothing about hadoop. From what I can tell, most of it was left over 
from when we had thrift-based hadoop clients, with the exception of the use for 
the {{INPUT_KEYRANGE_CONFIG}}, where we were using a thrift object and thrift 
complex serialization to basically encode 2 strings (token strings at that, so 
with only hexadecimal characters). I replaced that with something much simpler; 
the one thing I'm not 100% sure about is whether that breaks some backward 
compatibility. I don't think it does, but again I'm not knowledgeable enough 
about this to be 100% assertive.

Also, I want to look at the method visibility in {{SchemaKeyspace}} discussed 
on CASSANDRA-12716, and I have 2-3 points on my TODO list to check whether we 
can simplify, so I may add one commit in the next few days. As that will be 
pretty minor in any case, I'm attaching now so review is not blocked on it.


> Thrift removal
> --
>
> Key: CASSANDRA-11115
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11115
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 4.0
>
>
> Thrift removal [has been announced for 
> 4.0|http://mail-archives.apache.org/mod_mbox/cassandra-user/201601.mbox/%3ccaldd-zgagnldu3pqbd6wp0jb0x73qjdr9phpxmmo+gq+2e5...@mail.gmail.com%3E].
>  This ticket is meant to serve as a general task for that removal, but also 
> to track issue related to that, either things that we should do in 3.x to 
> make that removal as smooth as possible, or sub-tasks that it makes sense to 
> separate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12976) NullPointerException in SharedPool-Worker

2016-11-30 Thread Mike (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike updated CASSANDRA-12976:
-
Description: 
After an update from 3.0.8 to 3.0.9 we are getting the following exception once 
per minute on every node:

{noformat}
WARN  [SharedPool-Worker-1] 2016-11-30 14:48:00,852 
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-1,5,main]: {}
java.lang.RuntimeException: java.lang.NullPointerException
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2470)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_111]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
 [apache-cassandra-3.0.9.jar:3.0.9]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-3.0.9.jar:3.0.9]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
Caused by: java.lang.NullPointerException: null
at 
org.apache.cassandra.db.Slices$ArrayBackedSlices$ComponentOfSlice.isEQ(Slices.java:748)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.Slices$ArrayBackedSlices.toCQLString(Slices.java:659) 
~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.filter.ClusteringIndexSliceFilter.toCQLString(ClusteringIndexSliceFilter.java:150)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.SinglePartitionReadCommand.appendCQLWhereClause(SinglePartitionReadCommand.java:911)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.ReadCommand.toCQLString(ReadCommand.java:560) 
~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.ReadCommand$1MetricRecording.onClose(ReadCommand.java:506)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.transform.BasePartitions.runOnClose(BasePartitions.java:70)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.transform.BaseIterator.close(BaseIterator.java:76) 
~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1797)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
{noformat}

  was:
After an update from 3.0.8 to 3.0.9 we are getting the following exception once 
per minute on every node:

{noformat}
WARN  [SharedPool-Worker-1] 2016-11-30 14:48:00,852 
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-1,5,main]: {}
java.lang.RuntimeException: java.lang.NullPointerException
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2470)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_111]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
 [apache-cassandra-3.0.9.jar:3.0.9]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-3.0.9.jar:3.0.9]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
Caused by: java.lang.NullPointerException: null
at 
org.apache.cassandra.db.Slices$ArrayBackedSlices$ComponentOfSlice.isEQ(Slices.java:748)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.Slices$ArrayBackedSlices.toCQLString(Slices.java:659) 
~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.filter.ClusteringIndexSliceFilter.toCQLString(ClusteringIndexSliceFilter.java:150)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.SinglePartitionReadCommand.appendCQLWhereClause(SinglePartitionReadCommand.java:911)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.ReadCommand.toCQLString(ReadCommand.java:560) 
~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.ReadCommand$1MetricRecording.onClose(ReadCommand.java:506)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.transform.BasePartitions.runOnClose(BasePartitions.java:70)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.transform.BaseIterator.close(BaseIterator.java:76) 
~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1797)
 

[jira] [Commented] (CASSANDRA-12829) DELETE query with an empty IN clause can delete more than expected

2016-11-30 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15708928#comment-15708928
 ] 

Benjamin Lerer commented on CASSANDRA-12829:


{quote}
As regards the {{IN}} restriction, it's all quite simple to fix. However, I see 
a bit of inconsistency: {{IN}} with just one value is simplified to an {{EQ}} 
relation, while {{IN}} with more than 1 value or 0 values remains {{IN}}. And 
because {{IN}} restrictions are not supported with conditional deletions, we 
currently disallow 0 and > 1 values, while 1 value will work just like {{EQ}}.
{quote}

{{IN}} restrictions with only one value are nearly always treated like {{EQ}} 
at the CQL level. So, I think that it is fine.

> DELETE query with an empty IN clause can delete more than expected
> --
>
> Key: CASSANDRA-12829
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12829
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Arch Linux x64, kernel 4.7.6, Cassandra 3.9 downloaded 
> from the website
>Reporter: Jason T. Bradshaw
>Assignee: Alex Petrov
>
> When deleting from a table with a certain structure and using an *in* clause 
> with an empty list, the *in* clause with an empty list can be ignored, 
> resulting in deleting more than is expected.
> *Setup:*
> {code}
> cqlsh> create table test (a text, b text, id uuid, primary key ((a, b), id));
> cqlsh> insert into test (a, b, id) values ('a', 'b', 
> ----);
> cqlsh> insert into test (a, b, id) values ('b', 'c', 
> ----);
> cqlsh> insert into test (a, b, id) values ('a', 'c', 
> ----);
> cqlsh> select * from test;
>  a | b | id
> ---+---+--
>  a | c | ----
>  b | c | ----
>  a | b | ----
> (3 rows)
> {code}
> *Expected:*
> {code}
> cqlsh> delete from test where a = 'a' and b in ('a', 'b', 'c') and id in ();
> cqlsh> select * from test;
>  a | b | id
> ---+---+--
>  a | c | ----
>  b | c | ----
>  a | b | ----
> (3 rows)
> {code}
> *Actual:*
> {code}
> cqlsh> delete from test where a = 'a' and b in ('a', 'b', 'c') and id in ();
> cqlsh> select * from test;
>  a | b | id
> ---+---+--
>  b | c | ----
> (1 rows)
> {code}
> Instead of deleting nothing, as the final empty *in* clause would imply, it 
> instead deletes everything that matches the first two clauses, acting as if 
> the following query had been issued instead:
> {code}
> cqlsh> delete from test where a = 'a' and b in ('a', 'b', 'c');
> {code}
> This seems to be related to the presence of a tuple clustering key, as I 
> could not reproduce it without one.





[jira] [Commented] (CASSANDRA-12829) DELETE query with an empty IN clause can delete more than expected

2016-11-30 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15708951#comment-15708951
 ] 

Alex Petrov commented on CASSANDRA-12829:
-

Yup, they're optimised away in {{SingleColumnRelation#newINRestriction}}. Great 
then. Thanks!

> DELETE query with an empty IN clause can delete more than expected
> --
>
> Key: CASSANDRA-12829
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12829
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Arch Linux x64, kernel 4.7.6, Cassandra 3.9 downloaded 
> from the website
>Reporter: Jason T. Bradshaw
>Assignee: Alex Petrov
>
> When deleting from a table with a certain structure and using an *in* clause 
> with an empty list, the *in* clause with an empty list can be ignored, 
> resulting in deleting more than is expected.
> *Setup:*
> {code}
> cqlsh> create table test (a text, b text, id uuid, primary key ((a, b), id));
> cqlsh> insert into test (a, b, id) values ('a', 'b', 
> ----);
> cqlsh> insert into test (a, b, id) values ('b', 'c', 
> ----);
> cqlsh> insert into test (a, b, id) values ('a', 'c', 
> ----);
> cqlsh> select * from test;
>  a | b | id
> ---+---+--
>  a | c | ----
>  b | c | ----
>  a | b | ----
> (3 rows)
> {code}
> *Expected:*
> {code}
> cqlsh> delete from test where a = 'a' and b in ('a', 'b', 'c') and id in ();
> cqlsh> select * from test;
>  a | b | id
> ---+---+--
>  a | c | ----
>  b | c | ----
>  a | b | ----
> (3 rows)
> {code}
> *Actual:*
> {code}
> cqlsh> delete from test where a = 'a' and b in ('a', 'b', 'c') and id in ();
> cqlsh> select * from test;
>  a | b | id
> ---+---+--
>  b | c | ----
> (1 rows)
> {code}
> Instead of deleting nothing, as the final empty *in* clause would imply, it 
> instead deletes everything that matches the first two clauses, acting as if 
> the following query had been issued instead:
> {code}
> cqlsh> delete from test where a = 'a' and b in ('a', 'b', 'c');
> {code}
> This seems to be related to the presence of a tuple clustering key, as I 
> could not reproduce it without one.





[jira] [Commented] (CASSANDRA-12829) DELETE query with an empty IN clause can delete more than expected

2016-11-30 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15708858#comment-15708858
 ] 

Alex Petrov commented on CASSANDRA-12829:
-

Thanks for noticing this. It turns out that my code did the same thing, it was 
just much harder to parse; your suggestion is very good.

As regards the {{IN}} restriction, it's all quite simple to fix. However, I see 
a bit of inconsistency: {{IN}} with just one value is simplified to an {{EQ}} 
relation, while {{IN}} with more than 1 value or 0 values remains {{IN}}. And 
because {{IN}} restrictions are not supported with conditional deletions, we 
currently disallow {{0}} and {{> 1}} values, while {{1}} value will work just 
like {{EQ}}.

Do you think such behaviour would be acceptable? Are empty {{IN}} restrictions 
actually useful, or will they just cause edge cases and unclear behaviour?
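The normalisation described above can be sketched as follows (a hypothetical, heavily simplified Python model; the function names are illustrative, though the real rewrite happens in `SingleColumnRelation#newINRestriction` as noted below in this thread): an `IN` with exactly one value is rewritten as an `EQ` relation, so a later check that rejects `IN` for conditional deletions still accepts the single-value form.

```python
def normalize_restriction(column, op, values):
    # IN with exactly one value collapses to EQ; 0 or >1 values stay IN.
    if op == "IN" and len(values) == 1:
        return (column, "EQ", values[0])
    return (column, op, tuple(values))

def check_conditional_delete(restriction):
    # Conditional deletions reject IN restrictions outright.
    column, op, _ = restriction
    if op == "IN":
        raise ValueError(f"IN restrictions on {column} are not supported "
                         "with conditional deletions")

check_conditional_delete(normalize_restriction("id", "IN", ["x"]))  # ok: became EQ
try:
    check_conditional_delete(normalize_restriction("id", "IN", []))  # stays IN
except ValueError as e:
    print(e)
```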

> DELETE query with an empty IN clause can delete more than expected
> --
>
> Key: CASSANDRA-12829
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12829
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Arch Linux x64, kernel 4.7.6, Cassandra 3.9 downloaded 
> from the website
>Reporter: Jason T. Bradshaw
>Assignee: Alex Petrov
>
> When deleting from a table with a certain structure and using an *in* clause 
> with an empty list, the *in* clause with an empty list can be ignored, 
> resulting in deleting more than is expected.
> *Setup:*
> {code}
> cqlsh> create table test (a text, b text, id uuid, primary key ((a, b), id));
> cqlsh> insert into test (a, b, id) values ('a', 'b', 
> ----);
> cqlsh> insert into test (a, b, id) values ('b', 'c', 
> ----);
> cqlsh> insert into test (a, b, id) values ('a', 'c', 
> ----);
> cqlsh> select * from test;
>  a | b | id
> ---+---+--
>  a | c | ----
>  b | c | ----
>  a | b | ----
> (3 rows)
> {code}
> *Expected:*
> {code}
> cqlsh> delete from test where a = 'a' and b in ('a', 'b', 'c') and id in ();
> cqlsh> select * from test;
>  a | b | id
> ---+---+--
>  a | c | ----
>  b | c | ----
>  a | b | ----
> (3 rows)
> {code}
> *Actual:*
> {code}
> cqlsh> delete from test where a = 'a' and b in ('a', 'b', 'c') and id in ();
> cqlsh> select * from test;
>  a | b | id
> ---+---+--
>  b | c | ----
> (1 rows)
> {code}
> Instead of deleting nothing, as the final empty *in* clause would imply, it 
> instead deletes everything that matches the first two clauses, acting as if 
> the following query had been issued instead:
> {code}
> cqlsh> delete from test where a = 'a' and b in ('a', 'b', 'c');
> {code}
> This seems to be related to the presence of a tuple clustering key, as I 
> could not reproduce it without one.





[jira] [Updated] (CASSANDRA-12977) column expire to null can still be retrieved using not null value in where clause

2016-11-30 Thread ruilonghe1988 (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ruilonghe1988 updated CASSANDRA-12977:
--
   Attachment: attatchment.txt
Reproduced In: 2.1.x
 Reviewer: Carl Yeksigian

> column expire to null can still be retrieved using not null value in where 
> clause
> -
>
> Key: CASSANDRA-12977
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12977
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: cql  5.0.1
> cassandra 2.1.5
>Reporter: ruilonghe1988
> Attachments: attatchment.txt, attatchment.txt
>
>
> 1. first create table:
> create table device_share(
> device_id text primary key,
> share_status text,
> share_expire boolean
> );
> CREATE INDEX expireIndex ON device_share (share_expire);
> create index statusIndex ON device_share (share_status);
> 2.insert a new record:
> insert into device_share(device_id,share_status,share_expire) values 
> ('d1','ready',false);
> 3. update the share_expire value to false with ttl 20
> update device_share using ttl 20 set share_expire = false where device_id = 
> 'd1';
> 4. After 20 seconds, the record can still be retrieved with the condition 
> share_expire = false, but the console shows share_expire as null.
> cqlsh:test> select * from device_share where device_id ='d1' and 
> share_status='ready' and share_expire = false allow filtering;
>  device_id | share_expire | share_status
> ---+--+--
> d1 | null |ready
> is this a bug?





[jira] [Created] (CASSANDRA-12977) column expire to null can still be retrieved using not null value in where clause

2016-11-30 Thread ruilonghe1988 (JIRA)
ruilonghe1988 created CASSANDRA-12977:
-

 Summary: column expire to null can still be retrieved using not 
null value in where clause
 Key: CASSANDRA-12977
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12977
 Project: Cassandra
  Issue Type: Bug
  Components: CQL
 Environment: cql  5.0.1
cassandra 2.1.5
Reporter: ruilonghe1988
 Attachments: attatchment.txt

1. first create table:
create table device_share(
device_id text primary key,
share_status text,
share_expire boolean
);
CREATE INDEX expireIndex ON device_share (share_expire);
create index statusIndex ON device_share (share_status);

2.insert a new record:
insert into device_share(device_id,share_status,share_expire) values 
('d1','ready',false);


3. update the share_expire value to false with ttl 20
update device_share using ttl 20 set share_expire = false where device_id = 
'd1';

4. After 20 seconds, the record can still be retrieved with the condition 
share_expire = false, but the console shows share_expire as null.

cqlsh:test> select * from device_share where device_id ='d1' and 
share_status='ready' and share_expire = false allow filtering;

 device_id | share_expire | share_status
---+--+--
d1 | null |ready

is this a bug?
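The reported behaviour is consistent with a stale secondary-index entry, which can be sketched with a toy model (illustrative Python only, not Cassandra internals; the in-memory table and tiny TTL are assumptions for the demo): the base cell expires via TTL and reads back as null, but the index still holds the pre-expiry entry, so an indexed lookup on share_expire = false still returns the row.

```python
import time

class Table:
    def __init__(self):
        self.rows = {}    # device_id -> {column: (value, expires_at)}
        self.index = {}   # share_expire value -> set of device_ids

    def upsert(self, device_id, column, value, ttl=None):
        expires = time.time() + ttl if ttl is not None else None
        self.rows.setdefault(device_id, {})[column] = (value, expires)
        if column == "share_expire":
            # Index entry is never purged when the cell later expires.
            self.index.setdefault(value, set()).add(device_id)

    def read(self, device_id, column):
        value, expires = self.rows[device_id][column]
        if expires is not None and time.time() >= expires:
            return None   # an expired cell reads as null
        return value

t = Table()
t.upsert("d1", "share_expire", False)             # initial insert
t.upsert("d1", "share_expire", False, ttl=0.01)   # "update ... using ttl"
time.sleep(0.05)                                  # let the cell expire
print(t.read("d1", "share_expire"))               # None: expired to null
print("d1" in t.index.get(False, set()))          # True: stale index entry
```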






[jira] [Updated] (CASSANDRA-12976) NullPointerException in SharedPool-Worker

2016-11-30 Thread Mike (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike updated CASSANDRA-12976:
-
Description: 
After an update from 3.0.8 to 3.0.9 we are getting the following exception once 
per minute on every node:

{noformat}
WARN  [SharedPool-Worker-1] 2016-11-30 14:48:00,852 
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-1,5,main]: {}
java.lang.RuntimeException: java.lang.NullPointerException
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2470)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_111]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
 [apache-cassandra-3.0.9.jar:3.0.9]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-3.0.9.jar:3.0.9]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
Caused by: java.lang.NullPointerException: null
at 
org.apache.cassandra.db.Slices$ArrayBackedSlices$ComponentOfSlice.isEQ(Slices.java:748)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.Slices$ArrayBackedSlices.toCQLString(Slices.java:659) 
~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.filter.ClusteringIndexSliceFilter.toCQLString(ClusteringIndexSliceFilter.java:150)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.SinglePartitionReadCommand.appendCQLWhereClause(SinglePartitionReadCommand.java:911)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.ReadCommand.toCQLString(ReadCommand.java:560) 
~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.ReadCommand$1MetricRecording.onClose(ReadCommand.java:506)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.transform.BasePartitions.runOnClose(BasePartitions.java:70)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.transform.BaseIterator.close(BaseIterator.java:76) 
~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1797)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
{noformat}

Full repair fails also with the following error:
{noformat}
[2016-11-30 15:07:02,565] Some repair failed
[2016-11-30 15:07:02,566] Repair command #7 finished in 1 second
error: Repair job has failed with the error message: [2016-11-30 15:07:02,565] 
Some repair failed
-- StackTrace --
java.lang.RuntimeException: Repair job has failed with the error message: 
[2016-11-30 15:07:02,565] Some repair failed
at 
org.apache.cassandra.tools.RepairRunner.progress(RepairRunner.java:115)
at 
org.apache.cassandra.utils.progress.jmx.JMXNotificationProgressListener.handleNotification(JMXNotificationProgressListener.java:77)
at 
com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.dispatchNotification(ClientNotifForwarder.java:583)
at 
com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.doRun(ClientNotifForwarder.java:533)
at 
com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.run(ClientNotifForwarder.java:452)
at 
com.sun.jmx.remote.internal.ClientNotifForwarder$LinearExecutor$1.run(ClientNotifForwarder.java:108)
{noformat}

  was:
After an update from 3.0.8 to 3.0.9 we are getting the following exception once 
per second on every node:

{noformat}
WARN  [SharedPool-Worker-1] 2016-11-30 14:48:00,852 
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-1,5,main]: {}
java.lang.RuntimeException: java.lang.NullPointerException
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2470)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_111]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
 [apache-cassandra-3.0.9.jar:3.0.9]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-3.0.9.jar:3.0.9]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
Caused by: java.lang.NullPointerException: null
at 

[jira] [Created] (CASSANDRA-12976) NullPointerException in SharedPool-Worker

2016-11-30 Thread Mike (JIRA)
Mike created CASSANDRA-12976:


 Summary: NullPointerException in SharedPool-Worker
 Key: CASSANDRA-12976
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12976
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cassandra 3.0.9
Reporter: Mike


After an update from 3.0.8 to 3.0.9 we are getting the following exception once 
per second on every node:

{noformat}
WARN  [SharedPool-Worker-1] 2016-11-30 14:48:00,852 
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-1,5,main]: {}
java.lang.RuntimeException: java.lang.NullPointerException
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2470)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_111]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
 [apache-cassandra-3.0.9.jar:3.0.9]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-3.0.9.jar:3.0.9]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
Caused by: java.lang.NullPointerException: null
at 
org.apache.cassandra.db.Slices$ArrayBackedSlices$ComponentOfSlice.isEQ(Slices.java:748)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.Slices$ArrayBackedSlices.toCQLString(Slices.java:659) 
~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.filter.ClusteringIndexSliceFilter.toCQLString(ClusteringIndexSliceFilter.java:150)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.SinglePartitionReadCommand.appendCQLWhereClause(SinglePartitionReadCommand.java:911)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.ReadCommand.toCQLString(ReadCommand.java:560) 
~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.ReadCommand$1MetricRecording.onClose(ReadCommand.java:506)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.transform.BasePartitions.runOnClose(BasePartitions.java:70)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.db.transform.BaseIterator.close(BaseIterator.java:76) 
~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1797)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
{noformat}

Full repair fails also with the following error:
{noformat}
[2016-11-30 15:07:02,565] Some repair failed
[2016-11-30 15:07:02,566] Repair command #7 finished in 1 second
error: Repair job has failed with the error message: [2016-11-30 15:07:02,565] 
Some repair failed
-- StackTrace --
java.lang.RuntimeException: Repair job has failed with the error message: 
[2016-11-30 15:07:02,565] Some repair failed
at 
org.apache.cassandra.tools.RepairRunner.progress(RepairRunner.java:115)
at 
org.apache.cassandra.utils.progress.jmx.JMXNotificationProgressListener.handleNotification(JMXNotificationProgressListener.java:77)
at 
com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.dispatchNotification(ClientNotifForwarder.java:583)
at 
com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.doRun(ClientNotifForwarder.java:533)
at 
com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.run(ClientNotifForwarder.java:452)
at 
com.sun.jmx.remote.internal.ClientNotifForwarder$LinearExecutor$1.run(ClientNotifForwarder.java:108)
{noformat}





[jira] [Updated] (CASSANDRA-12962) SASI: Index are rebuilt on restart

2016-11-30 Thread Corentin Chary (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Corentin Chary updated CASSANDRA-12962:
---
 Priority: Minor  (was: Major)
Fix Version/s: 3.x
  Description: 
Apparently, when Cassandra starts, any index that does not index a value in 
*every* live SSTable gets rebuilt. The offending code can be found in the 
constructor of SASIIndex.

You can easily reproduce it:
{code}
CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': '1'}  AND durable_writes = true;

CREATE TABLE test.test (
a text PRIMARY KEY,
b text,
c text
) WITH bloom_filter_fp_chance = 0.01
AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32', 'min_threshold': '4'}
AND compression = {'chunk_length_in_kb': '64', 'class': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND crc_check_chance = 1.0
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99PERCENTILE';

CREATE CUSTOM INDEX test_b_idx ON test.test (b) USING 
'org.apache.cassandra.index.sasi.SASIIndex';
CREATE CUSTOM INDEX test_c_idx ON test.test (c) USING 
'org.apache.cassandra.index.sasi.SASIIndex';

INSERT INTO test.test (a, b) VALUES ('a', 'b');
{code}
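The rebuild decision described above can be sketched like this (a hypothetical Python reduction of the logic, not the actual SASIIndex constructor; SSTable names follow the log below): an SSTable with no per-column index component triggers a rebuild, even when the SSTable legitimately contains no values for that column.

```python
def sstables_needing_rebuild(live_sstables, indexed_sstables):
    # Any live SSTable without an index component forces a rebuild,
    # even if it simply holds no data for the indexed column.
    return [s for s in live_sstables if s not in indexed_sstables]

live = ["mc-1-big"]
# test_b_idx has an index component for mc-1-big (column b was written)...
print(sstables_needing_rebuild(live, {"mc-1-big"}))  # []
# ...but test_c_idx does not (column c was never written), so on every
# restart that index gets scheduled for a rebuild.
print(sstables_needing_rebuild(live, set()))         # ['mc-1-big']
```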

Log (I added additional traces):

{code}
INFO  [main] 2016-11-28 15:32:21,191 ColumnFamilyStore.java:406 - Initializing 
test.test
DEBUG [SSTableBatchOpen:1] 2016-11-28 15:32:21,192 SSTableReader.java:505 - 
Opening 
/mnt/ssd/tmp/data/data/test/test-229e6380b57711e68407158fde22e121/mc-1-big 
(0.034KiB)
DEBUG [main] 2016-11-28 15:32:21,194 SASIIndex.java:118 - index: 
org.apache.cassandra.schema.IndexMetadata@2f661b1a[id=6b00489b-7010-396e-9348-9f32f5167f88,name=test_b_idx,kind=CUSTOM,options={class_name=org.a\
pache.cassandra.index.sasi.SASIIndex, target=b}], base CFS(Keyspace='test', 
ColumnFamily='test'), tracker org.apache.cassandra.db.lifecycle.Tracker@15900b83
INFO  [main] 2016-11-28 15:32:21,194 DataTracker.java:152 - 
SSTableIndex.open(column: b, minTerm: value, maxTerm: value, minKey: key, 
maxKey: key, sstable: BigTableReader(path='/mnt/ssd/tmp/data/data/test/test\
-229e6380b57711e68407158fde22e121/mc-1-big-Data.db'))
DEBUG [main] 2016-11-28 15:32:21,195 SASIIndex.java:129 - Rebuilding SASI 
Indexes: {}
DEBUG [main] 2016-11-28 15:32:21,195 ColumnFamilyStore.java:895 - Enqueuing 
flush of IndexInfo: 0.386KiB (0%) on-heap, 0.000KiB (0%) off-heap
DEBUG [PerDiskMemtableFlushWriter_0:1] 2016-11-28 15:32:21,204 
Memtable.java:465 - Writing Memtable-IndexInfo@748981977(0.054KiB serialized 
bytes, 1 ops, 0%/0% of on/off-heap limit), flushed range = (min(-9223\
372036854775808), max(9223372036854775807)]
DEBUG [PerDiskMemtableFlushWriter_0:1] 2016-11-28 15:32:21,204 
Memtable.java:494 - Completed flushing 
/mnt/ssd/tmp/data/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/mc-4256-big-Data.db
 (0.035KiB) for\
 commitlog position CommitLogPosition(segmentId=1480343535479, position=15652)
DEBUG [MemtableFlushWriter:1] 2016-11-28 15:32:21,224 
ColumnFamilyStore.java:1200 - Flushed to 
[BigTableReader(path='/mnt/ssd/tmp/data/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/mc-4256-big-Data.db\
')] (1 sstables, 4.838KiB), biggest 4.838KiB, smallest 4.838KiB
DEBUG [main] 2016-11-28 15:32:21,224 SASIIndex.java:118 - index: 
org.apache.cassandra.schema.IndexMetadata@12f3d291[id=45fcb286-b87a-3d18-a04b-b899a9880c91,name=test_c_idx,kind=CUSTOM,options={class_name=org.a\
pache.cassandra.index.sasi.SASIIndex, target=c}], base CFS(Keyspace='test', 
ColumnFamily='test'), tracker org.apache.cassandra.db.lifecycle.Tracker@15900b83
DEBUG [main] 2016-11-28 15:32:21,224 SASIIndex.java:121 - to rebuild: index: 
BigTableReader(path='/mnt/ssd/tmp/data/data/test/test-229e6380b57711e68407158fde22e121/mc-1-big-Data.db'),
 sstable: org.apache.cassa\
ndra.index.sasi.conf.ColumnIndex@6cbb6b0e
DEBUG [main] 2016-11-28 15:32:21,224 SASIIndex.java:129 - Rebuilding SASI 
Indexes: 
{BigTableReader(path='/mnt/ssd/tmp/data/data/test/test-229e6380b57711e68407158fde22e121/mc-1-big-Data.db')={c=org.apache.cassa\
ndra.index.sasi.conf.ColumnIndex@6cbb6b0e}}
DEBUG [main] 2016-11-28 15:32:21,225 ColumnFamilyStore.java:895 - Enqueuing 
flush of IndexInfo: 0.386KiB (0%) on-heap, 0.000KiB (0%) off-heap
DEBUG [PerDiskMemtableFlushWriter_0:2] 2016-11-28 15:32:21,235 
Memtable.java:465 - Writing Memtable-IndexInfo@951411443(0.054KiB serialized 
bytes, 1 ops, 0%/0% of on/off-heap limit), flushed range = (min(-9223\
372036854775808), max(9223372036854775807)]
DEBUG [PerDiskMemtableFlushWriter_0:2] 2016-11-28 

[jira] [Updated] (CASSANDRA-12900) Resurrect or remove HeapPool (unslabbed_heap_buffers)

2016-11-30 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-12900:

   Resolution: Fixed
Fix Version/s: (was: 3.0.x)
   (was: 3.x)
   3.10
   3.0.11
   Status: Resolved  (was: Patch Available)

committed, thanks!

> Resurrect or remove HeapPool (unslabbed_heap_buffers)
> -
>
> Key: CASSANDRA-12900
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12900
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 3.0.11, 3.10
>
>
> Seems this code has been commented out since CASSANDRA-8099 - we should 
> either remove the option or fix the code





[02/10] cassandra git commit: Reenable HeapPool

2016-11-30 Thread marcuse
Reenable HeapPool

Patch by marcuse; reviewed by Branimir Lambov for CASSANDRA-12900


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8cb9693a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8cb9693a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8cb9693a

Branch: refs/heads/cassandra-3.11
Commit: 8cb9693a6b334498ca7edd42e4a934c11b581f2c
Parents: d70b336
Author: Marcus Eriksson 
Authored: Tue Nov 15 15:03:51 2016 +0100
Committer: Marcus Eriksson 
Committed: Wed Nov 30 14:24:43 2016 +0100

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/db/Memtable.java  |  4 -
 .../db/partitions/AtomicBTreePartition.java | 19 -
 .../apache/cassandra/utils/memory/HeapPool.java | 77 
 .../utils/memory/MemtableAllocator.java | 36 -
 .../cassandra/utils/memory/NativeAllocator.java |  6 --
 .../cassandra/utils/memory/SlabAllocator.java   |  5 --
 7 files changed, 15 insertions(+), 133 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8cb9693a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 32bd821..58a29e7 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -2,6 +2,7 @@
  * LocalToken ensures token values are cloned on heap (CASSANDRA-12651)
  * AnticompactionRequestSerializer serializedSize is incorrect 
(CASSANDRA-12934)
  * Prevent reloading of logback.xml from UDF sandbox (CASSANDRA-12535)
+ * Reenable HeapPool (CASSANDRA-12900)
 Merged from 2.2:
  * cqlsh: fix DESC TYPES errors (CASSANDRA-12914)
  * Fix leak on skipped SSTables in sstableupgrade (CASSANDRA-12899)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8cb9693a/src/java/org/apache/cassandra/db/Memtable.java
--
diff --git a/src/java/org/apache/cassandra/db/Memtable.java 
b/src/java/org/apache/cassandra/db/Memtable.java
index 3c77092..1a7d6cb 100644
--- a/src/java/org/apache/cassandra/db/Memtable.java
+++ b/src/java/org/apache/cassandra/db/Memtable.java
@@ -249,10 +249,6 @@ public class Memtable implements Comparable
 allocator.onHeap().allocate(overhead, opGroup);
 initialSize = 8;
 }
-else
-{
-allocator.reclaimer().reclaimImmediately(cloneKey);
-}
 }
 
 long[] pair = previous.addAllWithSizeDelta(update, opGroup, indexer);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8cb9693a/src/java/org/apache/cassandra/db/partitions/AtomicBTreePartition.java
--
diff --git 
a/src/java/org/apache/cassandra/db/partitions/AtomicBTreePartition.java 
b/src/java/org/apache/cassandra/db/partitions/AtomicBTreePartition.java
index 2be882e..7f2de82 100644
--- a/src/java/org/apache/cassandra/db/partitions/AtomicBTreePartition.java
+++ b/src/java/org/apache/cassandra/db/partitions/AtomicBTreePartition.java
@@ -244,7 +244,6 @@ public class AtomicBTreePartition extends 
AbstractBTreePartition
 long dataSize;
 long heapSize;
 long colUpdateTimeDelta = Long.MAX_VALUE;
-final MemtableAllocator.DataReclaimer reclaimer;
 List inserted; // TODO: replace with walk of aborted BTree
 
 private RowUpdater(AtomicBTreePartition updating, MemtableAllocator 
allocator, OpOrder.Group writeOp, UpdateTransaction indexer)
@@ -254,7 +253,6 @@ public class AtomicBTreePartition extends 
AbstractBTreePartition
 this.writeOp = writeOp;
 this.indexer = indexer;
 this.nowInSec = FBUtilities.nowInSeconds();
-this.reclaimer = allocator.reclaimer();
 }
 
 private Row.Builder builder(Clustering clustering)
@@ -296,7 +294,6 @@ public class AtomicBTreePartition extends 
AbstractBTreePartition
 if (inserted == null)
 inserted = new ArrayList<>();
 inserted.add(reconciled);
-discard(existing);
 
 return reconciled;
 }
@@ -306,22 +303,7 @@ public class AtomicBTreePartition extends 
AbstractBTreePartition
 this.dataSize = 0;
 this.heapSize = 0;
 if (inserted != null)
-{
-for (Row row : inserted)
-abort(row);
 inserted.clear();
-}
-reclaimer.cancel();
-}
-
-protected void abort(Row abort)
-{
-reclaimer.reclaimImmediately(abort);
-}
-
-protected void discard(Row discard)
-{
-reclaimer.reclaim(discard);
 }
 
 public boolean 

[09/10] cassandra git commit: Merge branch 'cassandra-3.11' into cassandra-3.X

2016-11-30 Thread marcuse
Merge branch 'cassandra-3.11' into cassandra-3.X


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dfcd06dd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dfcd06dd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dfcd06dd

Branch: refs/heads/trunk
Commit: dfcd06dda9383ca3a49a4ed1770b10673c63071c
Parents: 2ff97fe ea69be6
Author: Marcus Eriksson 
Authored: Wed Nov 30 14:31:16 2016 +0100
Committer: Marcus Eriksson 
Committed: Wed Nov 30 14:31:16 2016 +0100

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/db/Memtable.java  |  4 -
 .../db/partitions/AtomicBTreePartition.java | 20 -
 .../apache/cassandra/utils/memory/HeapPool.java | 84 ++--
 .../utils/memory/MemtableAllocator.java | 36 -
 .../cassandra/utils/memory/NativeAllocator.java |  6 --
 .../cassandra/utils/memory/SlabAllocator.java   |  5 --
 .../org/apache/cassandra/tools/ToolsTester.java |  2 +-
 8 files changed, 26 insertions(+), 132 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dfcd06dd/CHANGES.txt
--



[10/10] cassandra git commit: Merge branch 'cassandra-3.X' into trunk

2016-11-30 Thread marcuse
Merge branch 'cassandra-3.X' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ee7f3c3b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ee7f3c3b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ee7f3c3b

Branch: refs/heads/trunk
Commit: ee7f3c3b530870c7011fc57f06bcade76c2ef9c7
Parents: cba3c01 dfcd06d
Author: Marcus Eriksson 
Authored: Wed Nov 30 14:32:08 2016 +0100
Committer: Marcus Eriksson 
Committed: Wed Nov 30 14:32:08 2016 +0100

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/db/Memtable.java  |  4 -
 .../db/partitions/AtomicBTreePartition.java | 20 -
 .../apache/cassandra/utils/memory/HeapPool.java | 84 ++--
 .../utils/memory/MemtableAllocator.java | 36 -
 .../cassandra/utils/memory/NativeAllocator.java |  6 --
 .../cassandra/utils/memory/SlabAllocator.java   |  5 --
 .../org/apache/cassandra/tools/ToolsTester.java |  2 +-
 8 files changed, 26 insertions(+), 132 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ee7f3c3b/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ee7f3c3b/src/java/org/apache/cassandra/db/Memtable.java
--



[04/10] cassandra git commit: Reenable HeapPool

2016-11-30 Thread marcuse
Reenable HeapPool

Patch by marcuse; reviewed by Branimir Lambov for CASSANDRA-12900


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8cb9693a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8cb9693a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8cb9693a

Branch: refs/heads/trunk
Commit: 8cb9693a6b334498ca7edd42e4a934c11b581f2c
Parents: d70b336
Author: Marcus Eriksson 
Authored: Tue Nov 15 15:03:51 2016 +0100
Committer: Marcus Eriksson 
Committed: Wed Nov 30 14:24:43 2016 +0100

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/db/Memtable.java  |  4 -
 .../db/partitions/AtomicBTreePartition.java | 19 -
 .../apache/cassandra/utils/memory/HeapPool.java | 77 
 .../utils/memory/MemtableAllocator.java | 36 -
 .../cassandra/utils/memory/NativeAllocator.java |  6 --
 .../cassandra/utils/memory/SlabAllocator.java   |  5 --
 7 files changed, 15 insertions(+), 133 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8cb9693a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 32bd821..58a29e7 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -2,6 +2,7 @@
  * LocalToken ensures token values are cloned on heap (CASSANDRA-12651)
  * AnticompactionRequestSerializer serializedSize is incorrect 
(CASSANDRA-12934)
  * Prevent reloading of logback.xml from UDF sandbox (CASSANDRA-12535)
+ * Reenable HeapPool (CASSANDRA-12900)
 Merged from 2.2:
  * cqlsh: fix DESC TYPES errors (CASSANDRA-12914)
  * Fix leak on skipped SSTables in sstableupgrade (CASSANDRA-12899)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8cb9693a/src/java/org/apache/cassandra/db/Memtable.java
--
diff --git a/src/java/org/apache/cassandra/db/Memtable.java 
b/src/java/org/apache/cassandra/db/Memtable.java
index 3c77092..1a7d6cb 100644
--- a/src/java/org/apache/cassandra/db/Memtable.java
+++ b/src/java/org/apache/cassandra/db/Memtable.java
@@ -249,10 +249,6 @@ public class Memtable implements Comparable
 allocator.onHeap().allocate(overhead, opGroup);
 initialSize = 8;
 }
-else
-{
-allocator.reclaimer().reclaimImmediately(cloneKey);
-}
 }
 
 long[] pair = previous.addAllWithSizeDelta(update, opGroup, indexer);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8cb9693a/src/java/org/apache/cassandra/db/partitions/AtomicBTreePartition.java
--
diff --git 
a/src/java/org/apache/cassandra/db/partitions/AtomicBTreePartition.java 
b/src/java/org/apache/cassandra/db/partitions/AtomicBTreePartition.java
index 2be882e..7f2de82 100644
--- a/src/java/org/apache/cassandra/db/partitions/AtomicBTreePartition.java
+++ b/src/java/org/apache/cassandra/db/partitions/AtomicBTreePartition.java
@@ -244,7 +244,6 @@ public class AtomicBTreePartition extends 
AbstractBTreePartition
 long dataSize;
 long heapSize;
 long colUpdateTimeDelta = Long.MAX_VALUE;
-final MemtableAllocator.DataReclaimer reclaimer;
 List inserted; // TODO: replace with walk of aborted BTree
 
 private RowUpdater(AtomicBTreePartition updating, MemtableAllocator 
allocator, OpOrder.Group writeOp, UpdateTransaction indexer)
@@ -254,7 +253,6 @@ public class AtomicBTreePartition extends 
AbstractBTreePartition
 this.writeOp = writeOp;
 this.indexer = indexer;
 this.nowInSec = FBUtilities.nowInSeconds();
-this.reclaimer = allocator.reclaimer();
 }
 
 private Row.Builder builder(Clustering clustering)
@@ -296,7 +294,6 @@ public class AtomicBTreePartition extends 
AbstractBTreePartition
 if (inserted == null)
 inserted = new ArrayList<>();
 inserted.add(reconciled);
-discard(existing);
 
 return reconciled;
 }
@@ -306,22 +303,7 @@ public class AtomicBTreePartition extends 
AbstractBTreePartition
 this.dataSize = 0;
 this.heapSize = 0;
 if (inserted != null)
-{
-for (Row row : inserted)
-abort(row);
 inserted.clear();
-}
-reclaimer.cancel();
-}
-
-protected void abort(Row abort)
-{
-reclaimer.reclaimImmediately(abort);
-}
-
-protected void discard(Row discard)
-{
-reclaimer.reclaim(discard);
 }
 
 public boolean abortEarly()

[07/10] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2016-11-30 Thread marcuse
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ea69be62
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ea69be62
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ea69be62

Branch: refs/heads/trunk
Commit: ea69be62c84e51bbfa465204a8d4373a0d553553
Parents: f00e431 8cb9693
Author: Marcus Eriksson 
Authored: Wed Nov 30 14:30:45 2016 +0100
Committer: Marcus Eriksson 
Committed: Wed Nov 30 14:30:45 2016 +0100

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/db/Memtable.java  |  4 -
 .../db/partitions/AtomicBTreePartition.java | 20 -
 .../apache/cassandra/utils/memory/HeapPool.java | 84 ++--
 .../utils/memory/MemtableAllocator.java | 36 -
 .../cassandra/utils/memory/NativeAllocator.java |  6 --
 .../cassandra/utils/memory/SlabAllocator.java   |  5 --
 .../org/apache/cassandra/tools/ToolsTester.java |  2 +-
 8 files changed, 26 insertions(+), 132 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ea69be62/CHANGES.txt
--
diff --cc CHANGES.txt
index b238018,58a29e7..ea91a4e
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -111,12 -2,23 +111,13 @@@ Merged from 3.0
   * LocalToken ensures token values are cloned on heap (CASSANDRA-12651)
   * AnticompactionRequestSerializer serializedSize is incorrect 
(CASSANDRA-12934)
   * Prevent reloading of logback.xml from UDF sandbox (CASSANDRA-12535)
+  * Reenable HeapPool (CASSANDRA-12900)
 -Merged from 2.2:
 - * cqlsh: fix DESC TYPES errors (CASSANDRA-12914)
 - * Fix leak on skipped SSTables in sstableupgrade (CASSANDRA-12899)
 - * Avoid blocking gossip during pending range calculation (CASSANDRA-12281)
 -
 -
 -3.0.10
 - * Disallow offheap_buffers memtable allocation (CASSANDRA-11039)
 - * Fix CommitLogSegmentManagerTest (CASSANDRA-12283)
   * Pass root cause to CorruptBlockException when uncompression failed 
(CASSANDRA-12889)
 - * Fix partition count log during compaction (CASSANDRA-12184)
   * Batch with multiple conditional updates for the same partition causes 
AssertionError (CASSANDRA-12867)
   * Make AbstractReplicationStrategy extendable from outside its package 
(CASSANDRA-12788)
 - * Fix CommitLogTest.testDeleteIfNotDirty (CASSANDRA-12854)
   * Don't tell users to turn off consistent rangemovements during rebuild. 
(CASSANDRA-12296)
 - * Avoid deadlock due to materialized view lock contention (CASSANDRA-12689)
 + * Fix CommitLogTest.testDeleteIfNotDirty (CASSANDRA-12854)
 + * Avoid deadlock due to MV lock contention (CASSANDRA-12689)
   * Fix for KeyCacheCqlTest flakiness (CASSANDRA-12801)
   * Include SSTable filename in compacting large row message (CASSANDRA-12384)
   * Fix potential socket leak (CASSANDRA-12329, CASSANDRA-12330)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ea69be62/src/java/org/apache/cassandra/db/Memtable.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ea69be62/src/java/org/apache/cassandra/db/partitions/AtomicBTreePartition.java
--
diff --cc src/java/org/apache/cassandra/db/partitions/AtomicBTreePartition.java
index c7113d4,7f2de82..c9c6006
--- a/src/java/org/apache/cassandra/db/partitions/AtomicBTreePartition.java
+++ b/src/java/org/apache/cassandra/db/partitions/AtomicBTreePartition.java
@@@ -366,24 -303,9 +363,8 @@@ public class AtomicBTreePartition exten
  this.dataSize = 0;
  this.heapSize = 0;
  if (inserted != null)
- {
- for (Row row : inserted)
- abort(row);
  inserted.clear();
- }
- reclaimer.cancel();
- }
- 
- protected void abort(Row abort)
- {
- reclaimer.reclaimImmediately(abort);
  }
- 
- protected void discard(Row discard)
- {
- reclaimer.reclaim(discard);
- }
--
  public boolean abortEarly()
  {
  return updating.ref != ref;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ea69be62/src/java/org/apache/cassandra/utils/memory/HeapPool.java
--
diff --cc src/java/org/apache/cassandra/utils/memory/HeapPool.java
index 46f4111,57242c4..abcc241
--- a/src/java/org/apache/cassandra/utils/memory/HeapPool.java
+++ b/src/java/org/apache/cassandra/utils/memory/HeapPool.java
@@@ -25,68 -29,27 +29,28 @@@ public class HeapPool extends MemtableP
  super(maxOnHeapMemory, 0, cleanupThreshold, 

[01/10] cassandra git commit: Reenable HeapPool

2016-11-30 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 d70b336de -> 8cb9693a6
  refs/heads/cassandra-3.11 f00e43167 -> ea69be62c
  refs/heads/cassandra-3.X 2ff97fec3 -> dfcd06dda
  refs/heads/trunk cba3c0141 -> ee7f3c3b5


Reenable HeapPool

Patch by marcuse; reviewed by Branimir Lambov for CASSANDRA-12900


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8cb9693a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8cb9693a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8cb9693a

Branch: refs/heads/cassandra-3.0
Commit: 8cb9693a6b334498ca7edd42e4a934c11b581f2c
Parents: d70b336
Author: Marcus Eriksson 
Authored: Tue Nov 15 15:03:51 2016 +0100
Committer: Marcus Eriksson 
Committed: Wed Nov 30 14:24:43 2016 +0100

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/db/Memtable.java  |  4 -
 .../db/partitions/AtomicBTreePartition.java | 19 -
 .../apache/cassandra/utils/memory/HeapPool.java | 77 
 .../utils/memory/MemtableAllocator.java | 36 -
 .../cassandra/utils/memory/NativeAllocator.java |  6 --
 .../cassandra/utils/memory/SlabAllocator.java   |  5 --
 7 files changed, 15 insertions(+), 133 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8cb9693a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 32bd821..58a29e7 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -2,6 +2,7 @@
  * LocalToken ensures token values are cloned on heap (CASSANDRA-12651)
  * AnticompactionRequestSerializer serializedSize is incorrect 
(CASSANDRA-12934)
  * Prevent reloading of logback.xml from UDF sandbox (CASSANDRA-12535)
+ * Reenable HeapPool (CASSANDRA-12900)
 Merged from 2.2:
  * cqlsh: fix DESC TYPES errors (CASSANDRA-12914)
  * Fix leak on skipped SSTables in sstableupgrade (CASSANDRA-12899)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8cb9693a/src/java/org/apache/cassandra/db/Memtable.java
--
diff --git a/src/java/org/apache/cassandra/db/Memtable.java 
b/src/java/org/apache/cassandra/db/Memtable.java
index 3c77092..1a7d6cb 100644
--- a/src/java/org/apache/cassandra/db/Memtable.java
+++ b/src/java/org/apache/cassandra/db/Memtable.java
@@ -249,10 +249,6 @@ public class Memtable implements Comparable
 allocator.onHeap().allocate(overhead, opGroup);
 initialSize = 8;
 }
-else
-{
-allocator.reclaimer().reclaimImmediately(cloneKey);
-}
 }
 
 long[] pair = previous.addAllWithSizeDelta(update, opGroup, indexer);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8cb9693a/src/java/org/apache/cassandra/db/partitions/AtomicBTreePartition.java
--
diff --git 
a/src/java/org/apache/cassandra/db/partitions/AtomicBTreePartition.java 
b/src/java/org/apache/cassandra/db/partitions/AtomicBTreePartition.java
index 2be882e..7f2de82 100644
--- a/src/java/org/apache/cassandra/db/partitions/AtomicBTreePartition.java
+++ b/src/java/org/apache/cassandra/db/partitions/AtomicBTreePartition.java
@@ -244,7 +244,6 @@ public class AtomicBTreePartition extends 
AbstractBTreePartition
 long dataSize;
 long heapSize;
 long colUpdateTimeDelta = Long.MAX_VALUE;
-final MemtableAllocator.DataReclaimer reclaimer;
 List inserted; // TODO: replace with walk of aborted BTree
 
 private RowUpdater(AtomicBTreePartition updating, MemtableAllocator 
allocator, OpOrder.Group writeOp, UpdateTransaction indexer)
@@ -254,7 +253,6 @@ public class AtomicBTreePartition extends 
AbstractBTreePartition
 this.writeOp = writeOp;
 this.indexer = indexer;
 this.nowInSec = FBUtilities.nowInSeconds();
-this.reclaimer = allocator.reclaimer();
 }
 
 private Row.Builder builder(Clustering clustering)
@@ -296,7 +294,6 @@ public class AtomicBTreePartition extends 
AbstractBTreePartition
 if (inserted == null)
 inserted = new ArrayList<>();
 inserted.add(reconciled);
-discard(existing);
 
 return reconciled;
 }
@@ -306,22 +303,7 @@ public class AtomicBTreePartition extends 
AbstractBTreePartition
 this.dataSize = 0;
 this.heapSize = 0;
 if (inserted != null)
-{
-for (Row row : inserted)
-abort(row);
 inserted.clear();
-}
-reclaimer.cancel();
-}
-
-protected 

[05/10] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2016-11-30 Thread marcuse
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ea69be62
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ea69be62
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ea69be62

Branch: refs/heads/cassandra-3.X
Commit: ea69be62c84e51bbfa465204a8d4373a0d553553
Parents: f00e431 8cb9693
Author: Marcus Eriksson 
Authored: Wed Nov 30 14:30:45 2016 +0100
Committer: Marcus Eriksson 
Committed: Wed Nov 30 14:30:45 2016 +0100

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/db/Memtable.java  |  4 -
 .../db/partitions/AtomicBTreePartition.java | 20 -
 .../apache/cassandra/utils/memory/HeapPool.java | 84 ++--
 .../utils/memory/MemtableAllocator.java | 36 -
 .../cassandra/utils/memory/NativeAllocator.java |  6 --
 .../cassandra/utils/memory/SlabAllocator.java   |  5 --
 .../org/apache/cassandra/tools/ToolsTester.java |  2 +-
 8 files changed, 26 insertions(+), 132 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ea69be62/CHANGES.txt
--
diff --cc CHANGES.txt
index b238018,58a29e7..ea91a4e
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -111,12 -2,23 +111,13 @@@ Merged from 3.0
   * LocalToken ensures token values are cloned on heap (CASSANDRA-12651)
   * AnticompactionRequestSerializer serializedSize is incorrect 
(CASSANDRA-12934)
   * Prevent reloading of logback.xml from UDF sandbox (CASSANDRA-12535)
+  * Reenable HeapPool (CASSANDRA-12900)
 -Merged from 2.2:
 - * cqlsh: fix DESC TYPES errors (CASSANDRA-12914)
 - * Fix leak on skipped SSTables in sstableupgrade (CASSANDRA-12899)
 - * Avoid blocking gossip during pending range calculation (CASSANDRA-12281)
 -
 -
 -3.0.10
 - * Disallow offheap_buffers memtable allocation (CASSANDRA-11039)
 - * Fix CommitLogSegmentManagerTest (CASSANDRA-12283)
   * Pass root cause to CorruptBlockException when uncompression failed 
(CASSANDRA-12889)
 - * Fix partition count log during compaction (CASSANDRA-12184)
   * Batch with multiple conditional updates for the same partition causes 
AssertionError (CASSANDRA-12867)
   * Make AbstractReplicationStrategy extendable from outside its package 
(CASSANDRA-12788)
 - * Fix CommitLogTest.testDeleteIfNotDirty (CASSANDRA-12854)
   * Don't tell users to turn off consistent rangemovements during rebuild. 
(CASSANDRA-12296)
 - * Avoid deadlock due to materialized view lock contention (CASSANDRA-12689)
 + * Fix CommitLogTest.testDeleteIfNotDirty (CASSANDRA-12854)
 + * Avoid deadlock due to MV lock contention (CASSANDRA-12689)
   * Fix for KeyCacheCqlTest flakiness (CASSANDRA-12801)
   * Include SSTable filename in compacting large row message (CASSANDRA-12384)
   * Fix potential socket leak (CASSANDRA-12329, CASSANDRA-12330)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ea69be62/src/java/org/apache/cassandra/db/Memtable.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ea69be62/src/java/org/apache/cassandra/db/partitions/AtomicBTreePartition.java
--
diff --cc src/java/org/apache/cassandra/db/partitions/AtomicBTreePartition.java
index c7113d4,7f2de82..c9c6006
--- a/src/java/org/apache/cassandra/db/partitions/AtomicBTreePartition.java
+++ b/src/java/org/apache/cassandra/db/partitions/AtomicBTreePartition.java
@@@ -366,24 -303,9 +363,8 @@@ public class AtomicBTreePartition exten
  this.dataSize = 0;
  this.heapSize = 0;
  if (inserted != null)
- {
- for (Row row : inserted)
- abort(row);
  inserted.clear();
- }
- reclaimer.cancel();
- }
- 
- protected void abort(Row abort)
- {
- reclaimer.reclaimImmediately(abort);
  }
- 
- protected void discard(Row discard)
- {
- reclaimer.reclaim(discard);
- }
--
  public boolean abortEarly()
  {
  return updating.ref != ref;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ea69be62/src/java/org/apache/cassandra/utils/memory/HeapPool.java
--
diff --cc src/java/org/apache/cassandra/utils/memory/HeapPool.java
index 46f4111,57242c4..abcc241
--- a/src/java/org/apache/cassandra/utils/memory/HeapPool.java
+++ b/src/java/org/apache/cassandra/utils/memory/HeapPool.java
@@@ -25,68 -29,27 +29,28 @@@ public class HeapPool extends MemtableP
  super(maxOnHeapMemory, 0, 

[08/10] cassandra git commit: Merge branch 'cassandra-3.11' into cassandra-3.X

2016-11-30 Thread marcuse
Merge branch 'cassandra-3.11' into cassandra-3.X


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dfcd06dd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dfcd06dd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dfcd06dd

Branch: refs/heads/cassandra-3.X
Commit: dfcd06dda9383ca3a49a4ed1770b10673c63071c
Parents: 2ff97fe ea69be6
Author: Marcus Eriksson 
Authored: Wed Nov 30 14:31:16 2016 +0100
Committer: Marcus Eriksson 
Committed: Wed Nov 30 14:31:16 2016 +0100

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/db/Memtable.java  |  4 -
 .../db/partitions/AtomicBTreePartition.java | 20 -
 .../apache/cassandra/utils/memory/HeapPool.java | 84 ++--
 .../utils/memory/MemtableAllocator.java | 36 -
 .../cassandra/utils/memory/NativeAllocator.java |  6 --
 .../cassandra/utils/memory/SlabAllocator.java   |  5 --
 .../org/apache/cassandra/tools/ToolsTester.java |  2 +-
 8 files changed, 26 insertions(+), 132 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dfcd06dd/CHANGES.txt
--



[06/10] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2016-11-30 Thread marcuse
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ea69be62
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ea69be62
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ea69be62

Branch: refs/heads/cassandra-3.11
Commit: ea69be62c84e51bbfa465204a8d4373a0d553553
Parents: f00e431 8cb9693
Author: Marcus Eriksson 
Authored: Wed Nov 30 14:30:45 2016 +0100
Committer: Marcus Eriksson 
Committed: Wed Nov 30 14:30:45 2016 +0100

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/db/Memtable.java  |  4 -
 .../db/partitions/AtomicBTreePartition.java | 20 -
 .../apache/cassandra/utils/memory/HeapPool.java | 84 ++--
 .../utils/memory/MemtableAllocator.java | 36 -
 .../cassandra/utils/memory/NativeAllocator.java |  6 --
 .../cassandra/utils/memory/SlabAllocator.java   |  5 --
 .../org/apache/cassandra/tools/ToolsTester.java |  2 +-
 8 files changed, 26 insertions(+), 132 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ea69be62/CHANGES.txt
--
diff --cc CHANGES.txt
index b238018,58a29e7..ea91a4e
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -111,12 -2,23 +111,13 @@@ Merged from 3.0
   * LocalToken ensures token values are cloned on heap (CASSANDRA-12651)
   * AnticompactionRequestSerializer serializedSize is incorrect 
(CASSANDRA-12934)
   * Prevent reloading of logback.xml from UDF sandbox (CASSANDRA-12535)
+  * Reenable HeapPool (CASSANDRA-12900)
 -Merged from 2.2:
 - * cqlsh: fix DESC TYPES errors (CASSANDRA-12914)
 - * Fix leak on skipped SSTables in sstableupgrade (CASSANDRA-12899)
 - * Avoid blocking gossip during pending range calculation (CASSANDRA-12281)
 -
 -
 -3.0.10
 - * Disallow offheap_buffers memtable allocation (CASSANDRA-11039)
 - * Fix CommitLogSegmentManagerTest (CASSANDRA-12283)
   * Pass root cause to CorruptBlockException when uncompression failed 
(CASSANDRA-12889)
 - * Fix partition count log during compaction (CASSANDRA-12184)
   * Batch with multiple conditional updates for the same partition causes 
AssertionError (CASSANDRA-12867)
   * Make AbstractReplicationStrategy extendable from outside its package 
(CASSANDRA-12788)
 - * Fix CommitLogTest.testDeleteIfNotDirty (CASSANDRA-12854)
   * Don't tell users to turn off consistent rangemovements during rebuild. 
(CASSANDRA-12296)
 - * Avoid deadlock due to materialized view lock contention (CASSANDRA-12689)
 + * Fix CommitLogTest.testDeleteIfNotDirty (CASSANDRA-12854)
 + * Avoid deadlock due to MV lock contention (CASSANDRA-12689)
   * Fix for KeyCacheCqlTest flakiness (CASSANDRA-12801)
   * Include SSTable filename in compacting large row message (CASSANDRA-12384)
   * Fix potential socket leak (CASSANDRA-12329, CASSANDRA-12330)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ea69be62/src/java/org/apache/cassandra/db/Memtable.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ea69be62/src/java/org/apache/cassandra/db/partitions/AtomicBTreePartition.java
--
diff --cc src/java/org/apache/cassandra/db/partitions/AtomicBTreePartition.java
index c7113d4,7f2de82..c9c6006
--- a/src/java/org/apache/cassandra/db/partitions/AtomicBTreePartition.java
+++ b/src/java/org/apache/cassandra/db/partitions/AtomicBTreePartition.java
@@@ -366,24 -303,9 +363,8 @@@ public class AtomicBTreePartition extends AbstractBTreePartition
  this.dataSize = 0;
  this.heapSize = 0;
  if (inserted != null)
- {
- for (Row row : inserted)
- abort(row);
  inserted.clear();
- }
- reclaimer.cancel();
- }
- 
- protected void abort(Row abort)
- {
- reclaimer.reclaimImmediately(abort);
  }
- 
- protected void discard(Row discard)
- {
- reclaimer.reclaim(discard);
- }
--
  public boolean abortEarly()
  {
  return updating.ref != ref;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ea69be62/src/java/org/apache/cassandra/utils/memory/HeapPool.java
--
diff --cc src/java/org/apache/cassandra/utils/memory/HeapPool.java
index 46f4111,57242c4..abcc241
--- a/src/java/org/apache/cassandra/utils/memory/HeapPool.java
+++ b/src/java/org/apache/cassandra/utils/memory/HeapPool.java
@@@ -25,68 -29,27 +29,28 @@@ public class HeapPool extends MemtablePool
  super(maxOnHeapMemory, 0, 

[03/10] cassandra git commit: Reenable HeapPool

2016-11-30 Thread marcuse
Reenable HeapPool

Patch by marcuse; reviewed by Branimir Lambov for CASSANDRA-12900


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8cb9693a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8cb9693a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8cb9693a

Branch: refs/heads/cassandra-3.X
Commit: 8cb9693a6b334498ca7edd42e4a934c11b581f2c
Parents: d70b336
Author: Marcus Eriksson 
Authored: Tue Nov 15 15:03:51 2016 +0100
Committer: Marcus Eriksson 
Committed: Wed Nov 30 14:24:43 2016 +0100

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/db/Memtable.java  |  4 -
 .../db/partitions/AtomicBTreePartition.java | 19 -
 .../apache/cassandra/utils/memory/HeapPool.java | 77 
 .../utils/memory/MemtableAllocator.java | 36 -
 .../cassandra/utils/memory/NativeAllocator.java |  6 --
 .../cassandra/utils/memory/SlabAllocator.java   |  5 --
 7 files changed, 15 insertions(+), 133 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8cb9693a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 32bd821..58a29e7 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -2,6 +2,7 @@
  * LocalToken ensures token values are cloned on heap (CASSANDRA-12651)
  * AnticompactionRequestSerializer serializedSize is incorrect 
(CASSANDRA-12934)
  * Prevent reloading of logback.xml from UDF sandbox (CASSANDRA-12535)
+ * Reenable HeapPool (CASSANDRA-12900)
 Merged from 2.2:
  * cqlsh: fix DESC TYPES errors (CASSANDRA-12914)
  * Fix leak on skipped SSTables in sstableupgrade (CASSANDRA-12899)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8cb9693a/src/java/org/apache/cassandra/db/Memtable.java
--
diff --git a/src/java/org/apache/cassandra/db/Memtable.java 
b/src/java/org/apache/cassandra/db/Memtable.java
index 3c77092..1a7d6cb 100644
--- a/src/java/org/apache/cassandra/db/Memtable.java
+++ b/src/java/org/apache/cassandra/db/Memtable.java
@@ -249,10 +249,6 @@ public class Memtable implements Comparable<Memtable>
 allocator.onHeap().allocate(overhead, opGroup);
 initialSize = 8;
 }
-else
-{
-allocator.reclaimer().reclaimImmediately(cloneKey);
-}
 }
 
 long[] pair = previous.addAllWithSizeDelta(update, opGroup, indexer);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8cb9693a/src/java/org/apache/cassandra/db/partitions/AtomicBTreePartition.java
--
diff --git 
a/src/java/org/apache/cassandra/db/partitions/AtomicBTreePartition.java 
b/src/java/org/apache/cassandra/db/partitions/AtomicBTreePartition.java
index 2be882e..7f2de82 100644
--- a/src/java/org/apache/cassandra/db/partitions/AtomicBTreePartition.java
+++ b/src/java/org/apache/cassandra/db/partitions/AtomicBTreePartition.java
@@ -244,7 +244,6 @@ public class AtomicBTreePartition extends 
AbstractBTreePartition
 long dataSize;
 long heapSize;
 long colUpdateTimeDelta = Long.MAX_VALUE;
-final MemtableAllocator.DataReclaimer reclaimer;
  List<Row> inserted; // TODO: replace with walk of aborted BTree
 
 private RowUpdater(AtomicBTreePartition updating, MemtableAllocator 
allocator, OpOrder.Group writeOp, UpdateTransaction indexer)
@@ -254,7 +253,6 @@ public class AtomicBTreePartition extends 
AbstractBTreePartition
 this.writeOp = writeOp;
 this.indexer = indexer;
 this.nowInSec = FBUtilities.nowInSeconds();
-this.reclaimer = allocator.reclaimer();
 }
 
 private Row.Builder builder(Clustering clustering)
@@ -296,7 +294,6 @@ public class AtomicBTreePartition extends 
AbstractBTreePartition
 if (inserted == null)
 inserted = new ArrayList<>();
 inserted.add(reconciled);
-discard(existing);
 
 return reconciled;
 }
@@ -306,22 +303,7 @@ public class AtomicBTreePartition extends 
AbstractBTreePartition
 this.dataSize = 0;
 this.heapSize = 0;
 if (inserted != null)
-{
-for (Row row : inserted)
-abort(row);
 inserted.clear();
-}
-reclaimer.cancel();
-}
-
-protected void abort(Row abort)
-{
-reclaimer.reclaimImmediately(abort);
-}
-
-protected void discard(Row discard)
-{
-reclaimer.reclaim(discard);
 }
 
 public boolean 

[jira] [Updated] (CASSANDRA-12666) dtest failure in paging_test.TestPagingData.test_paging_with_filtering_on_partition_key

2016-11-30 Thread Branimir Lambov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Branimir Lambov updated CASSANDRA-12666:

   Resolution: Fixed
Fix Version/s: (was: 3.10)
   3.11
   3.x
   Status: Resolved  (was: Ready to Commit)

Committed as f00e43167ab11f58af20439a300bdf82664abdb0 in 3.11 and 3.X, and 
cba3c0141acbe5f327622c87253c0e8316918867 in trunk.

> dtest failure in 
> paging_test.TestPagingData.test_paging_with_filtering_on_partition_key
> ---
>
> Key: CASSANDRA-12666
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12666
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Alex Petrov
>Priority: Critical
>  Labels: dtest
> Fix For: 3.x, 3.11
>
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_novnode_dtest/480/testReport/paging_test/TestPagingData/test_paging_with_filtering_on_partition_key
> {code}
> Standard Output
> Unexpected error in node3 log, error: 
> ERROR [Native-Transport-Requests-3] 2016-09-17 00:50:11,543 Message.java:622 
> - Unexpected exception during request; channel = [id: 0x467a4afe, 
> L:/127.0.0.3:9042 - R:/127.0.0.1:59115]
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.dht.IncludingExcludingBounds.split(IncludingExcludingBounds.java:45)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.getRestrictedRanges(StorageProxy.java:2368)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$RangeIterator.<init>(StorageProxy.java:1951)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.getRangeSlice(StorageProxy.java:2235)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.PartitionRangeReadCommand.execute(PartitionRangeReadCommand.java:184)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:66)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.pager.PartitionRangeQueryPager.fetchPage(PartitionRangeQueryPager.java:36)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement$Pager$NormalPager.fetchPage(SelectStatement.java:328)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:375)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:250)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:78)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:216)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:247) 
> ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:232) 
> ~[main/:na]
>   at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:115)
>  ~[main/:na]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:516)
>  [main/:na]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:409)
>  [main/:na]
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:366)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:35)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:357)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_45]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
>  [main/:na]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [main/:na]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
> {code}
> Related failures:
> http://cassci.datastax.com/job/trunk_novnode_dtest/480/testReport/paging_test/TestPagingData/test_paging_with_filtering_on_partition_key_on_clustering_columns/
> 

[3/6] cassandra git commit: Use correct bounds for all-data range when filtering

2016-11-30 Thread blambov
Use correct bounds for all-data range when filtering

Patch by Alex Petrov; reviewed by Branimir Lambov for CASSANDRA-12666.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f00e4316
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f00e4316
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f00e4316

Branch: refs/heads/trunk
Commit: f00e43167ab11f58af20439a300bdf82664abdb0
Parents: 8de24ca
Author: Alex Petrov 
Authored: Sun Sep 18 11:09:47 2016 +0200
Committer: Branimir Lambov 
Committed: Wed Nov 30 14:39:33 2016 +0200

--
 CHANGES.txt   |  1 +
 .../cql3/restrictions/StatementRestrictions.java  | 10 +-
 2 files changed, 6 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f00e4316/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 72d6a1f..b238018 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.10
+ * Use correct bounds for all-data range when filtering (CASSANDRA-12666)
  * Remove timing window in test case (CASSANDRA-12875)
  * Resolve unit testing without JCE security libraries installed 
(CASSANDRA-12945)
  * Fix inconsistencies in cassandra-stress load balancing policy 
(CASSANDRA-12919)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f00e4316/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java 
b/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java
index 53ac68c..2d04633 100644
--- a/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java
+++ b/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java
@@ -624,11 +624,6 @@ public final class StatementRestrictions
  */
 private ByteBuffer getPartitionKeyBound(Bound b, QueryOptions options)
 {
-// Deal with unrestricted partition key components (special-casing is 
required to deal with 2i queries on the
-// first component of a composite partition key) queries that filter 
on the partition key.
-if (partitionKeyRestrictions.needFiltering(cfm))
-return ByteBufferUtil.EMPTY_BYTE_BUFFER;
-
 // We deal with IN queries for keys in other places, so we know 
buildBound will return only one result
 return partitionKeyRestrictions.bounds(b, options).get(0);
 }
@@ -654,6 +649,11 @@ public final class StatementRestrictions
 private AbstractBounds<PartitionPosition> getPartitionKeyBounds(IPartitioner p,
                                                                 QueryOptions options)
 {
+// Deal with unrestricted partition key components (special-casing is 
required to deal with 2i queries on the
+// first component of a composite partition key) queries that filter 
on the partition key.
+if (partitionKeyRestrictions.needFiltering(cfm))
+return new Range<>(p.getMinimumToken().minKeyBound(), 
p.getMinimumToken().maxKeyBound());
+
 ByteBuffer startKeyBytes = getPartitionKeyBound(Bound.START, options);
 ByteBuffer finishKeyBytes = getPartitionKeyBound(Bound.END, options);
 



[6/6] cassandra git commit: Merge branch 'cassandra-3.X' into trunk

2016-11-30 Thread blambov
Merge branch 'cassandra-3.X' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cba3c014
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cba3c014
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cba3c014

Branch: refs/heads/trunk
Commit: cba3c0141acbe5f327622c87253c0e8316918867
Parents: bcb6762 2ff97fe
Author: Branimir Lambov 
Authored: Wed Nov 30 14:48:24 2016 +0200
Committer: Branimir Lambov 
Committed: Wed Nov 30 14:49:11 2016 +0200

--
 CHANGES.txt   |  1 +
 .../cql3/restrictions/StatementRestrictions.java  | 10 +-
 2 files changed, 6 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cba3c014/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/cba3c014/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java
--



[4/6] cassandra git commit: Merge branch 'cassandra-3.11' into cassandra-3.X

2016-11-30 Thread blambov
Merge branch 'cassandra-3.11' into cassandra-3.X


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2ff97fec
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2ff97fec
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2ff97fec

Branch: refs/heads/trunk
Commit: 2ff97fec30023b8eb45d9ade82fc6a659486f1c6
Parents: 0475922 f00e431
Author: Branimir Lambov 
Authored: Wed Nov 30 14:46:39 2016 +0200
Committer: Branimir Lambov 
Committed: Wed Nov 30 14:47:22 2016 +0200

--
 CHANGES.txt   |  1 +
 .../cql3/restrictions/StatementRestrictions.java  | 10 +-
 2 files changed, 6 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2ff97fec/CHANGES.txt
--
diff --cc CHANGES.txt
index 47b7c2a,b238018..64ed71c
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,9 -1,5 +1,10 @@@
 +3.12
 + * cqlsh auto completion: refactor definition of compaction strategy options 
(CASSANDRA-12946)
 + * Add support for arithmetic operators (CASSANDRA-11935)
 +
 +
  3.10
+  * Use correct bounds for all-data range when filtering (CASSANDRA-12666)
   * Remove timing window in test case (CASSANDRA-12875)
   * Resolve unit testing without JCE security libraries installed 
(CASSANDRA-12945)
   * Fix inconsistencies in cassandra-stress load balancing policy 
(CASSANDRA-12919)



[1/6] cassandra git commit: Use correct bounds for all-data range when filtering

2016-11-30 Thread blambov
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.11 8de24ca68 -> f00e43167
  refs/heads/cassandra-3.X 047592238 -> 2ff97fec3
  refs/heads/trunk bcb676223 -> cba3c0141


Use correct bounds for all-data range when filtering

Patch by Alex Petrov; reviewed by Branimir Lambov for CASSANDRA-12666.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f00e4316
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f00e4316
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f00e4316

Branch: refs/heads/cassandra-3.11
Commit: f00e43167ab11f58af20439a300bdf82664abdb0
Parents: 8de24ca
Author: Alex Petrov 
Authored: Sun Sep 18 11:09:47 2016 +0200
Committer: Branimir Lambov 
Committed: Wed Nov 30 14:39:33 2016 +0200

--
 CHANGES.txt   |  1 +
 .../cql3/restrictions/StatementRestrictions.java  | 10 +-
 2 files changed, 6 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f00e4316/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 72d6a1f..b238018 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.10
+ * Use correct bounds for all-data range when filtering (CASSANDRA-12666)
  * Remove timing window in test case (CASSANDRA-12875)
  * Resolve unit testing without JCE security libraries installed 
(CASSANDRA-12945)
  * Fix inconsistencies in cassandra-stress load balancing policy 
(CASSANDRA-12919)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f00e4316/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java 
b/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java
index 53ac68c..2d04633 100644
--- a/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java
+++ b/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java
@@ -624,11 +624,6 @@ public final class StatementRestrictions
  */
 private ByteBuffer getPartitionKeyBound(Bound b, QueryOptions options)
 {
-// Deal with unrestricted partition key components (special-casing is 
required to deal with 2i queries on the
-// first component of a composite partition key) queries that filter 
on the partition key.
-if (partitionKeyRestrictions.needFiltering(cfm))
-return ByteBufferUtil.EMPTY_BYTE_BUFFER;
-
 // We deal with IN queries for keys in other places, so we know 
buildBound will return only one result
 return partitionKeyRestrictions.bounds(b, options).get(0);
 }
@@ -654,6 +649,11 @@ public final class StatementRestrictions
 private AbstractBounds<PartitionPosition> getPartitionKeyBounds(IPartitioner p,
                                                                 QueryOptions options)
 {
+// Deal with unrestricted partition key components (special-casing is 
required to deal with 2i queries on the
+// first component of a composite partition key) queries that filter 
on the partition key.
+if (partitionKeyRestrictions.needFiltering(cfm))
+return new Range<>(p.getMinimumToken().minKeyBound(), 
p.getMinimumToken().maxKeyBound());
+
 ByteBuffer startKeyBytes = getPartitionKeyBound(Bound.START, options);
 ByteBuffer finishKeyBytes = getPartitionKeyBound(Bound.END, options);
 



[5/6] cassandra git commit: Merge branch 'cassandra-3.11' into cassandra-3.X

2016-11-30 Thread blambov
Merge branch 'cassandra-3.11' into cassandra-3.X


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2ff97fec
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2ff97fec
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2ff97fec

Branch: refs/heads/cassandra-3.X
Commit: 2ff97fec30023b8eb45d9ade82fc6a659486f1c6
Parents: 0475922 f00e431
Author: Branimir Lambov 
Authored: Wed Nov 30 14:46:39 2016 +0200
Committer: Branimir Lambov 
Committed: Wed Nov 30 14:47:22 2016 +0200

--
 CHANGES.txt   |  1 +
 .../cql3/restrictions/StatementRestrictions.java  | 10 +-
 2 files changed, 6 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2ff97fec/CHANGES.txt
--
diff --cc CHANGES.txt
index 47b7c2a,b238018..64ed71c
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,9 -1,5 +1,10 @@@
 +3.12
 + * cqlsh auto completion: refactor definition of compaction strategy options 
(CASSANDRA-12946)
 + * Add support for arithmetic operators (CASSANDRA-11935)
 +
 +
  3.10
+  * Use correct bounds for all-data range when filtering (CASSANDRA-12666)
   * Remove timing window in test case (CASSANDRA-12875)
   * Resolve unit testing without JCE security libraries installed 
(CASSANDRA-12945)
   * Fix inconsistencies in cassandra-stress load balancing policy 
(CASSANDRA-12919)



[2/6] cassandra git commit: Use correct bounds for all-data range when filtering

2016-11-30 Thread blambov
Use correct bounds for all-data range when filtering

Patch by Alex Petrov; reviewed by Branimir Lambov for CASSANDRA-12666.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f00e4316
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f00e4316
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f00e4316

Branch: refs/heads/cassandra-3.X
Commit: f00e43167ab11f58af20439a300bdf82664abdb0
Parents: 8de24ca
Author: Alex Petrov 
Authored: Sun Sep 18 11:09:47 2016 +0200
Committer: Branimir Lambov 
Committed: Wed Nov 30 14:39:33 2016 +0200

--
 CHANGES.txt   |  1 +
 .../cql3/restrictions/StatementRestrictions.java  | 10 +-
 2 files changed, 6 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f00e4316/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 72d6a1f..b238018 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.10
+ * Use correct bounds for all-data range when filtering (CASSANDRA-12666)
  * Remove timing window in test case (CASSANDRA-12875)
  * Resolve unit testing without JCE security libraries installed 
(CASSANDRA-12945)
  * Fix inconsistencies in cassandra-stress load balancing policy 
(CASSANDRA-12919)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f00e4316/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java 
b/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java
index 53ac68c..2d04633 100644
--- a/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java
+++ b/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java
@@ -624,11 +624,6 @@ public final class StatementRestrictions
  */
 private ByteBuffer getPartitionKeyBound(Bound b, QueryOptions options)
 {
-// Deal with unrestricted partition key components (special-casing is 
required to deal with 2i queries on the
-// first component of a composite partition key) queries that filter 
on the partition key.
-if (partitionKeyRestrictions.needFiltering(cfm))
-return ByteBufferUtil.EMPTY_BYTE_BUFFER;
-
 // We deal with IN queries for keys in other places, so we know 
buildBound will return only one result
 return partitionKeyRestrictions.bounds(b, options).get(0);
 }
@@ -654,6 +649,11 @@ public final class StatementRestrictions
 private AbstractBounds<PartitionPosition> getPartitionKeyBounds(IPartitioner p,
                                                                 QueryOptions options)
 {
+// Deal with unrestricted partition key components (special-casing is 
required to deal with 2i queries on the
+// first component of a composite partition key) queries that filter 
on the partition key.
+if (partitionKeyRestrictions.needFiltering(cfm))
+return new Range<>(p.getMinimumToken().minKeyBound(), 
p.getMinimumToken().maxKeyBound());
+
 ByteBuffer startKeyBytes = getPartitionKeyBound(Bound.START, options);
 ByteBuffer finishKeyBytes = getPartitionKeyBound(Bound.END, options);
 



[jira] [Commented] (CASSANDRA-12969) Index: index can significantly slow down boot

2016-11-30 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15708470#comment-15708470
 ] 

Sam Tunnicliffe commented on CASSANDRA-12969:
-

Thanks, that is a sensible optimisation. I've pushed 3.X & trunk branches with 
the patch for CI. Assuming the test results look OK (which I'm sure they will), 
I'll commit once those are done.

> Index: index can significantly slow down boot
> -
>
> Key: CASSANDRA-12969
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12969
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Corentin Chary
> Fix For: 3.x
>
> Attachments: 0004-index-do-not-re-insert-values-in-IndexInfo.patch
>
>
> During startup, each existing index is opened and marked as built by adding 
> an entry in "IndexInfo" and forcing a flush. Because of that we end up 
> flushing one sstable per index. On systems with HDDs this can take minutes for 
> nothing.
> The following patch makes it possible to avoid creating useless new sstables if 
> the index was already marked as built, and will greatly reduce the startup time 
> (and improve availability during restarts).
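A rough sketch of the optimisation described in the patch, with assumed names (markBuilt, forceFlush) rather than Cassandra's actual IndexInfo API: write the "built" marker, and hence force a flush, only when the index is not already recorded as built.

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch; method and class names are assumptions, not
// Cassandra's API. Skipping the marker write for already-built indexes
// avoids one sstable flush per index at startup.
public class IndexInfoSketch {
    private final Set<String> builtIndexes = new HashSet<>();

    // Returns true if the marker was written (and a flush forced),
    // false if the index was already marked built and nothing happened.
    public boolean markBuilt(String indexName) {
        if (builtIndexes.contains(indexName))
            return false;           // already built: skip the write and the flush
        builtIndexes.add(indexName);
        forceFlush();               // persist the marker
        return true;
    }

    private void forceFlush() { /* stand-in for writing IndexInfo to disk */ }

    public static void main(String[] args) {
        IndexInfoSketch info = new IndexInfoSketch();
        System.out.println(info.markBuilt("idx1")); // true: first time, flushes
        System.out.println(info.markBuilt("idx1")); // false: no extra flush
    }
}
```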



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12666) dtest failure in paging_test.TestPagingData.test_paging_with_filtering_on_partition_key

2016-11-30 Thread Branimir Lambov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Branimir Lambov updated CASSANDRA-12666:

Status: Ready to Commit  (was: Patch Available)

> dtest failure in 
> paging_test.TestPagingData.test_paging_with_filtering_on_partition_key
> ---
>
> Key: CASSANDRA-12666
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12666
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Alex Petrov
>Priority: Critical
>  Labels: dtest
> Fix For: 3.10
>
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_novnode_dtest/480/testReport/paging_test/TestPagingData/test_paging_with_filtering_on_partition_key
> {code}
> Standard Output
> Unexpected error in node3 log, error: 
> ERROR [Native-Transport-Requests-3] 2016-09-17 00:50:11,543 Message.java:622 
> - Unexpected exception during request; channel = [id: 0x467a4afe, 
> L:/127.0.0.3:9042 - R:/127.0.0.1:59115]
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.dht.IncludingExcludingBounds.split(IncludingExcludingBounds.java:45)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.getRestrictedRanges(StorageProxy.java:2368)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$RangeIterator.<init>(StorageProxy.java:1951)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.getRangeSlice(StorageProxy.java:2235)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.PartitionRangeReadCommand.execute(PartitionRangeReadCommand.java:184)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:66)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.pager.PartitionRangeQueryPager.fetchPage(PartitionRangeQueryPager.java:36)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement$Pager$NormalPager.fetchPage(SelectStatement.java:328)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:375)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:250)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:78)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:216)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:247) 
> ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:232) 
> ~[main/:na]
>   at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:115)
>  ~[main/:na]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:516)
>  [main/:na]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:409)
>  [main/:na]
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:366)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:35)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:357)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_45]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
>  [main/:na]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [main/:na]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
> {code}
> Related failures:
> http://cassci.datastax.com/job/trunk_novnode_dtest/480/testReport/paging_test/TestPagingData/test_paging_with_filtering_on_partition_key_on_clustering_columns/
> http://cassci.datastax.com/job/trunk_novnode_dtest/480/testReport/paging_test/TestPagingData/test_paging_with_filtering_on_partition_key_on_clustering_columns_with_contains/
> http://cassci.datastax.com/job/trunk_novnode_dtest/480/testReport/paging_test/TestPagingData/test_paging_with_filtering_on_partition_key_on_counter_columns/





[jira] [Comment Edited] (CASSANDRA-12956) CL is not replayed on custom 2i exception

2016-11-30 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15708453#comment-15708453
 ] 

Alex Petrov edited comment on CASSANDRA-12956 at 11/30/16 12:29 PM:


The patch for {{3.0}} is quite different and much bigger. The main problem is that 
there's no transactionality at the same level as in {{3.X}}: {{3.0}} memtables 
are flushed and renamed to non-tmp names, and readers are returned. We need finer 
granularity, since we may later have to abort all the flushed sstables 
if the 2i flush failed. I've changed it a bit in the {{3.x}} fashion, although since we 
flush to just one sstable, I thought that extracting {{txn}} to the top level would 
not gain us anything.

Both patches introduce a second latch. I'm usually not the biggest fan of two 
threads that have to wait for one another, but here the ordering is an issue. 
The problem is that the post-flush executor is single-threaded (for ordering), and 
the flush executor is multi-threaded, so we can't return a future backed by that 
multi-threaded executor, as it would break the order. On the other hand, if we moved 
the 2i flush to the flush executor, we'd have to wait sequentially for the 2i, then all 
memtables. The current approach keeps these actions parallel. 

We only need to synchronise the non-cf 2i flush with the memtable holding data for 
the current cf. All the cf-index memtables will be in sync with the data memtable 
anyway, since they're combined in the transaction. 

|[3.X|https://github.com/ifesdjeen/cassandra/tree/12956-3.X]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12956-3.X-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12956-3.X-dtest/]|
|[3.0|https://github.com/ifesdjeen/cassandra/tree/12956-3.0]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12956-3.0-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12956-3.0-dtest/]|
|[trunk|https://github.com/ifesdjeen/cassandra/tree/12956-trunk]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12956-trunk-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12956-trunk-dtest/]|
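The two-latch coordination described above can be sketched with java.util.concurrent primitives; the class and method names below are illustrative, not Cassandra's. The memtable flush and the non-cf 2i flush run in parallel on the multi-threaded flush executor, while the (conceptually single-threaded) post-flush step waits for both, so post-flush ordering is preserved without serialising the two flushes.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative sketch of the two-latch pattern; names are assumptions.
public class TwoLatchFlushSketch {
    // Runs the two flushes in parallel and returns once both completed.
    public static boolean flushBoth() {
        CountDownLatch memtableFlushed = new CountDownLatch(1);
        CountDownLatch indexFlushed = new CountDownLatch(1);
        ExecutorService flushExecutor = Executors.newFixedThreadPool(2);
        flushExecutor.execute(() -> {
            // ... write the memtable contents to an sstable ...
            memtableFlushed.countDown();
        });
        flushExecutor.execute(() -> {
            // ... flush the custom (non-cf) secondary index ...
            indexFlushed.countDown();
        });
        try {
            // Post-flush step: proceed only once both flushes are done.
            memtableFlushed.await();
            indexFlushed.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        } finally {
            flushExecutor.shutdown();
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println("post-flush may proceed: " + flushBoth());
    }
}
```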



was (Author: ifesdjeen):
Patch for {{3.0}} is quite different and is much bigger. Main problem is that 
there's no transactionality on the same level as in {{3.X}}. {{3.0}} memtables 
are flushed and renamed to non-tmp names, readers are returned. We need a bit 
better granularity, since after we may have to abort all the flushed sstables 
if 2i failed.

Both patches introduce the second latch. I'm usually not the biggest fan of two 
threads that have to wait for one another, but here the ordering is an issue. 
Problem is that post-flush executor is single-threaded (for ordering), and 
flush executor is multi-threaded, so we can't return future backed with that 
multi-threaded executor as it will break order. On the other hand, if we move 
2i flush to flush executor, we'll have to sequentially wait for 2i, then all 
memtables. Current approach allows to keep these actions parallel. 

We only need to synchronise the non-cf 2i flush with memtable holding data for 
current cf. All the cf-index memtables will be in sync with data one anyways 
since they're combined in the transaction. 

|[3.X|https://github.com/ifesdjeen/cassandra/tree/12956-3.X]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12956-3.X-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12956-3.X-dtest/]|
|[3.0|https://github.com/ifesdjeen/cassandra/tree/12956-3.0]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12956-3.0-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12956-3.0-dtest/]|
|[trunk|https://github.com/ifesdjeen/cassandra/tree/12956-trunk]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12956-trunk-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12956-trunk-dtest/]|


> CL is not replayed on custom 2i exception
> -
>
> Key: CASSANDRA-12956
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12956
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alex Petrov
>Priority: Critical
>
> If during the node shutdown / drain the custom (non-cf) 2i throws an 
> exception, CommitLog will get correctly preserved (segments won't get 
> discarded because segment tracking is correct). 
> However, when it gets replayed on node startup,  we're making a decision 
> whether or not to replay the commit log. CL segment starts getting replayed, 
> since there are non-discarded segments and during this process we're checking 
> whether every [individual 
> 

[jira] [Commented] (CASSANDRA-12956) CL is not replayed on custom 2i exception

2016-11-30 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15708453#comment-15708453
 ] 

Alex Petrov commented on CASSANDRA-12956:
-

Patch for {{3.0}} is quite different and is much bigger. Main problem is that 
there's no transactionality on the same level as in {{3.X}}. {{3.0}} memtables 
are flushed and renamed to non-tmp names, readers are returned. We need a bit 
better granularity, since after we may have to abort all the flushed sstables 
if 2i failed.

Both patches introduce the second latch. I'm usually not the biggest fan of two 
threads that have to wait for one another, but here the ordering is an issue. 
Problem is that post-flush executor is single-threaded (for ordering), and 
flush executor is multi-threaded, so we can't return future backed with that 
multi-threaded executor as it will break order. On the other hand, if we move 
2i flush to flush executor, we'll have to sequentially wait for 2i, then all 
memtables. Current approach allows to keep these actions parallel. 

We only need to synchronise the non-cf 2i flush with memtable holding data for 
current cf. All the cf-index memtables will be in sync with data one anyways 
since they're combined in the transaction. 

|[3.X|https://github.com/ifesdjeen/cassandra/tree/12956-3.X]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12956-3.X-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12956-3.X-dtest/]|
|[3.0|https://github.com/ifesdjeen/cassandra/tree/12956-3.0]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12956-3.0-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12956-3.0-dtest/]|
|[trunk|https://github.com/ifesdjeen/cassandra/tree/12956-trunk]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12956-trunk-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12956-trunk-dtest/]|


> CL is not replayed on custom 2i exception
> -
>
> Key: CASSANDRA-12956
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12956
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alex Petrov
>Priority: Critical
>
> If during the node shutdown / drain the custom (non-cf) 2i throws an 
> exception, CommitLog will get correctly preserved (segments won't get 
> discarded because segment tracking is correct). 
> However, when it gets replayed on node startup, we're making a decision 
> whether or not to replay the commit log. CL segments start getting replayed, 
> since there are non-discarded segments, and during this process we're checking 
> whether every [individual 
> mutation|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java#L215]
>  in the commit log is already committed or not. Information about the sstables is 
> taken from [live sstables on 
> disk|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java#L250-L256].
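The replay decision described above boils down to comparing each mutation's 
commit-log position against what the live sstables already cover; a heavily 
simplified sketch (all names and data shapes here are invented for 
illustration, not the real {{CommitLogReplayer}} API):

```java
import java.util.*;

// Hypothetical sketch of replay filtering: a mutation is skipped only if
// the live sstables on disk already cover its commit-log position.
public class ReplaySketch
{
    // highest commit-log position already persisted per table (from live sstables)
    static final Map<String, Long> persistedUpTo = new HashMap<>();

    static boolean shouldReplay(String table, long mutationPosition)
    {
        return mutationPosition > persistedUpTo.getOrDefault(table, -1L);
    }

    public static void main(String[] args)
    {
        persistedUpTo.put("some_table", 100L);
        System.out.println(shouldReplay("some_table", 50L));  // covered by sstables: skip
        System.out.println(shouldReplay("some_table", 150L)); // not yet flushed: replay
    }
}
```

If the 2i failure prevented sstables from being flushed, positions past the 
persisted point remain eligible for replay, which is why preserving the 
segments matters.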



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-12975) Exception (java.lang.RuntimeException) encountered during startup: org.codehaus.jackson.JsonParseException:

2016-11-30 Thread JianwenSun (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15708291#comment-15708291
 ] 

JianwenSun edited comment on CASSANDRA-12975 at 11/30/16 11:21 AM:
---

I did another upgrade test: 2.0.9 -> 2.1.13 -> 3.0.5.

I just started a clean v2.0.9 server without any custom tables and stopped it; 
after that I started a v2.1.13 server with the same configuration. That seems 
fine. But when I stop the v2.1.13 server and start a v3.0.5 one, the error 
shows up again. I did nothing other than start the old version server and then 
start the new one.

Help, please.



Exception (java.lang.RuntimeException) encountered during startup: 
org.codehaus.jackson.JsonParseException: Unexpected character ('K' (code 75)): 
expected a valid value (number, String, array, object, 'true', 'false' or 
'null')
 at [Source: java.io.StringReader@23a4eab1; line: 1, column: 2]
java.lang.RuntimeException: org.codehaus.jackson.JsonParseException: Unexpected 
character ('K' (code 75)): expected a valid value (number, String, array, 
object, 'true', 'false' or 'null')
 at [Source: java.io.StringReader@23a4eab1; line: 1, column: 2]
at 
org.apache.cassandra.utils.FBUtilities.fromJsonMap(FBUtilities.java:561)
at 
org.apache.cassandra.schema.LegacySchemaMigrator.decodeTableParams(LegacySchemaMigrator.java:442)
at 
org.apache.cassandra.schema.LegacySchemaMigrator.decodeTableMetadata(LegacySchemaMigrator.java:365)
at 
org.apache.cassandra.schema.LegacySchemaMigrator.readTableMetadata(LegacySchemaMigrator.java:273)
at 
org.apache.cassandra.schema.LegacySchemaMigrator.readTable(LegacySchemaMigrator.java:244)
at 
org.apache.cassandra.schema.LegacySchemaMigrator.lambda$readTables$234(LegacySchemaMigrator.java:237)
at 
org.apache.cassandra.schema.LegacySchemaMigrator$$Lambda$71/1361596573.accept(Unknown
 Source)
at java.util.ArrayList.forEach(ArrayList.java:1249)
at 
org.apache.cassandra.schema.LegacySchemaMigrator.readTables(LegacySchemaMigrator.java:237)
at 
org.apache.cassandra.schema.LegacySchemaMigrator.readKeyspace(LegacySchemaMigrator.java:186)
at 
org.apache.cassandra.schema.LegacySchemaMigrator.lambda$readSchema$231(LegacySchemaMigrator.java:177)
at 
org.apache.cassandra.schema.LegacySchemaMigrator$$Lambda$68/709923110.accept(Unknown
 Source)
at java.util.ArrayList.forEach(ArrayList.java:1249)
at 
org.apache.cassandra.schema.LegacySchemaMigrator.readSchema(LegacySchemaMigrator.java:177)
at 
org.apache.cassandra.schema.LegacySchemaMigrator.migrate(LegacySchemaMigrator.java:77)
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:223)
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:551)
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:679)
Caused by: org.codehaus.jackson.JsonParseException: Unexpected character ('K' 
(code 75)): expected a valid value (number, String, array, object, 'true', 
'false' or 'null')
 at [Source: java.io.StringReader@23a4eab1; line: 1, column: 2]
at org.codehaus.jackson.JsonParser._constructError(JsonParser.java:1432)
at 
org.codehaus.jackson.impl.JsonParserMinimalBase._reportError(JsonParserMinimalBase.java:385)
at 
org.codehaus.jackson.impl.JsonParserMinimalBase._reportUnexpectedChar(JsonParserMinimalBase.java:306)
at 
org.codehaus.jackson.impl.ReaderBasedParser._handleUnexpectedValue(ReaderBasedParser.java:1192)
at 
org.codehaus.jackson.impl.ReaderBasedParser.nextToken(ReaderBasedParser.java:479)
at 
org.codehaus.jackson.map.ObjectMapper._initForReading(ObjectMapper.java:2761)
at 
org.codehaus.jackson.map.ObjectMapper._readMapAndClose(ObjectMapper.java:2709)
at 
org.codehaus.jackson.map.ObjectMapper.readValue(ObjectMapper.java:1854)
at 
org.apache.cassandra.utils.FBUtilities.fromJsonMap(FBUtilities.java:557)
... 17 more
ERROR 11:10:57 Exception encountered during startup
java.lang.RuntimeException: org.codehaus.jackson.JsonParseException: Unexpected 
character ('K' (code 75)): expected a valid value (number, String, array, 
object, 'true', 'false' or 'null')
 at [Source: java.io.StringReader@23a4eab1; line: 1, column: 2]
at 
org.apache.cassandra.utils.FBUtilities.fromJsonMap(FBUtilities.java:561) 
~[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.schema.LegacySchemaMigrator.decodeTableParams(LegacySchemaMigrator.java:442)
 ~[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.schema.LegacySchemaMigrator.decodeTableMetadata(LegacySchemaMigrator.java:365)
 ~[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.schema.LegacySchemaMigrator.readTableMetadata(LegacySchemaMigrator.java:273)
 ~[apache-cassandra-3.0.5.jar:3.0.5]
at 

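For what it's worth, the parse failure above is consistent with the 3.0 
migrator expecting a JSON map where the legacy schema stored a plain string. A 
value such as {{KEYS_ONLY}} (a hypothetical example; the actual stored value 
isn't shown in the trace) starts with 'K' (code 75), which is not a valid first 
character of any JSON value:

```java
// Minimal illustration: per the JSON grammar, a value must start with
// '{', '[', '"', a digit/minus, or the first letter of true/false/null.
// A bare legacy string like "KEYS_ONLY" starts with 'K' (code 75) and is
// rejected exactly as in the JsonParseException above.
public class LegacyCachingSketch
{
    static boolean startsValidJsonValue(char c)
    {
        return c == '{' || c == '[' || c == '"' || c == '-' || Character.isDigit(c)
            || c == 't' || c == 'f' || c == 'n'; // true / false / null
    }

    public static void main(String[] args)
    {
        String legacy = "KEYS_ONLY";            // hypothetical pre-2.1 style value
        String migrated = "{\"keys\":\"ALL\"}"; // JSON map the 3.0 migrator expects
        System.out.println(startsValidJsonValue(legacy.charAt(0)));   // false
        System.out.println(startsValidJsonValue(migrated.charAt(0))); // true
    }
}
```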
[jira] [Commented] (CASSANDRA-12975) Exception (java.lang.RuntimeException) encountered during startup: org.codehaus.jackson.JsonParseException:

2016-11-30 Thread JianwenSun (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15708291#comment-15708291
 ] 

JianwenSun commented on CASSANDRA-12975:


I did another upgrade test: 2.0.9 -> 2.1.13 -> 3.0.5.

I just started a clean v2.0.9 server without any custom tables and stopped it; 
after that I started a v2.1.13 server with the same configuration. That seems 
fine. But when I stop the v2.1.13 server and start a v3.0.5 one, the error 
shows up again. I did nothing other than start the old version server and then 
start the new one.

Help, please.



Exception (java.lang.RuntimeException) encountered during startup: 
org.codehaus.jackson.JsonParseException: Unexpected character ('K' (code 75)): 
expected a valid value (number, String, array, object, 'true', 'false' or 
'null')
 at [Source: java.io.StringReader@23a4eab1; line: 1, column: 2]
java.lang.RuntimeException: org.codehaus.jackson.JsonParseException: Unexpected 
character ('K' (code 75)): expected a valid value (number, String, array, 
object, 'true', 'false' or 'null')
 at [Source: java.io.StringReader@23a4eab1; line: 1, column: 2]
at 
org.apache.cassandra.utils.FBUtilities.fromJsonMap(FBUtilities.java:561)
at 
org.apache.cassandra.schema.LegacySchemaMigrator.decodeTableParams(LegacySchemaMigrator.java:442)
at 
org.apache.cassandra.schema.LegacySchemaMigrator.decodeTableMetadata(LegacySchemaMigrator.java:365)
at 
org.apache.cassandra.schema.LegacySchemaMigrator.readTableMetadata(LegacySchemaMigrator.java:273)
at 
org.apache.cassandra.schema.LegacySchemaMigrator.readTable(LegacySchemaMigrator.java:244)
at 
org.apache.cassandra.schema.LegacySchemaMigrator.lambda$readTables$234(LegacySchemaMigrator.java:237)
at 
org.apache.cassandra.schema.LegacySchemaMigrator$$Lambda$71/1361596573.accept(Unknown
 Source)
at java.util.ArrayList.forEach(ArrayList.java:1249)
at 
org.apache.cassandra.schema.LegacySchemaMigrator.readTables(LegacySchemaMigrator.java:237)
at 
org.apache.cassandra.schema.LegacySchemaMigrator.readKeyspace(LegacySchemaMigrator.java:186)
at 
org.apache.cassandra.schema.LegacySchemaMigrator.lambda$readSchema$231(LegacySchemaMigrator.java:177)
at 
org.apache.cassandra.schema.LegacySchemaMigrator$$Lambda$68/709923110.accept(Unknown
 Source)
at java.util.ArrayList.forEach(ArrayList.java:1249)
at 
org.apache.cassandra.schema.LegacySchemaMigrator.readSchema(LegacySchemaMigrator.java:177)
at 
org.apache.cassandra.schema.LegacySchemaMigrator.migrate(LegacySchemaMigrator.java:77)
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:223)
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:551)
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:679)
Caused by: org.codehaus.jackson.JsonParseException: Unexpected character ('K' 
(code 75)): expected a valid value (number, String, array, object, 'true', 
'false' or 'null')
 at [Source: java.io.StringReader@23a4eab1; line: 1, column: 2]
at org.codehaus.jackson.JsonParser._constructError(JsonParser.java:1432)
at 
org.codehaus.jackson.impl.JsonParserMinimalBase._reportError(JsonParserMinimalBase.java:385)
at 
org.codehaus.jackson.impl.JsonParserMinimalBase._reportUnexpectedChar(JsonParserMinimalBase.java:306)
at 
org.codehaus.jackson.impl.ReaderBasedParser._handleUnexpectedValue(ReaderBasedParser.java:1192)
at 
org.codehaus.jackson.impl.ReaderBasedParser.nextToken(ReaderBasedParser.java:479)
at 
org.codehaus.jackson.map.ObjectMapper._initForReading(ObjectMapper.java:2761)
at 
org.codehaus.jackson.map.ObjectMapper._readMapAndClose(ObjectMapper.java:2709)
at 
org.codehaus.jackson.map.ObjectMapper.readValue(ObjectMapper.java:1854)
at 
org.apache.cassandra.utils.FBUtilities.fromJsonMap(FBUtilities.java:557)
... 17 more
ERROR 11:10:57 Exception encountered during startup
java.lang.RuntimeException: org.codehaus.jackson.JsonParseException: Unexpected 
character ('K' (code 75)): expected a valid value (number, String, array, 
object, 'true', 'false' or 'null')
 at [Source: java.io.StringReader@23a4eab1; line: 1, column: 2]
at 
org.apache.cassandra.utils.FBUtilities.fromJsonMap(FBUtilities.java:561) 
~[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.schema.LegacySchemaMigrator.decodeTableParams(LegacySchemaMigrator.java:442)
 ~[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.schema.LegacySchemaMigrator.decodeTableMetadata(LegacySchemaMigrator.java:365)
 ~[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.schema.LegacySchemaMigrator.readTableMetadata(LegacySchemaMigrator.java:273)
 ~[apache-cassandra-3.0.5.jar:3.0.5]
at 

cassandra git commit: Ninja commit trivial followup to #12716

2016-11-30 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/trunk 4a2464192 -> bcb676223


Ninja commit trivial followup to #12716


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bcb67622
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bcb67622
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bcb67622

Branch: refs/heads/trunk
Commit: bcb676223d24ae6acdff7c8df70f9926b46dfe0b
Parents: 4a24641
Author: Sylvain Lebresne 
Authored: Wed Nov 30 11:10:23 2016 +0100
Committer: Sylvain Lebresne 
Committed: Wed Nov 30 11:10:23 2016 +0100

--
 .../org/apache/cassandra/io/sstable/format/big/BigFormat.java | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bcb67622/src/java/org/apache/cassandra/io/sstable/format/big/BigFormat.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/format/big/BigFormat.java 
b/src/java/org/apache/cassandra/io/sstable/format/big/BigFormat.java
index 980eed0..ac7801c 100644
--- a/src/java/org/apache/cassandra/io/sstable/format/big/BigFormat.java
+++ b/src/java/org/apache/cassandra/io/sstable/format/big/BigFormat.java
@@ -132,8 +132,7 @@ public class BigFormat implements SSTableFormat
 isLatestVersion = version.compareTo(current_version) == 0;
 correspondingMessagingVersion = MessagingService.VERSION_30;
 
-hasCommitLogLowerBound = (version.compareTo("lb") >= 0 && 
version.compareTo("ma") < 0)
- || version.compareTo("mb") >= 0;
+hasCommitLogLowerBound = version.compareTo("mb") >= 0;
 hasCommitLogIntervals = version.compareTo("mc") >= 0;
 }
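The gating in the diff above relies on sstable format versions being short 
strings ordered lexicographically, so a single {{compareTo}} against the first 
version that introduced a feature enables it for that version and all later 
ones. A self-contained sketch of the same checks:

```java
// Sketch: sstable format versions are short strings compared
// lexicographically; the threshold values "mb" and "mc" mirror the
// diff above (commit-log lower bound from "mb", intervals from "mc").
public class VersionGate
{
    static boolean hasCommitLogLowerBound(String version)
    {
        return version.compareTo("mb") >= 0;
    }

    static boolean hasCommitLogIntervals(String version)
    {
        return version.compareTo("mc") >= 0;
    }

    public static void main(String[] args)
    {
        System.out.println(hasCommitLogLowerBound("ma")); // false
        System.out.println(hasCommitLogLowerBound("mb")); // true
        System.out.println(hasCommitLogIntervals("mb"));  // false
        System.out.println(hasCommitLogIntervals("mc"));  // true
    }
}
```

The removed `("lb" <= v < "ma")` branch was the pre-3.0 range that #12716 made 
obsolete.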
 



[jira] [Commented] (CASSANDRA-11107) Add native_transport_address and native_transport_broadcast_address yaml options

2016-11-30 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15708131#comment-15708131
 ] 

Sylvain Lebresne commented on CASSANDRA-11107:
--

bq. Let me know what you think, [~slebresne].

Hum, I guess it's probably late enough in the 3.0/3.X cycle not to bother with 
anything too complex. I guess the main problem is 4.0, where using {{rpc_*}} 
will look really weird and arbitrary. But I'm fine making this a 4.0-only patch 
by introducing the new names there as basically a renaming of the old ones, 
where:
* for the yaml, since it's a major upgrade, maybe it's fine to just start 
refusing the old names and add the new ones directly.
* for the {{peers}}/{{local}} tables, we can add {{native_transport_address}} 
but keep {{rpc_address}} (which will have the exact same value) temporarily as 
deprecated so drivers have time to update.
* for gossip, I believe we can just rename the {{ApplicationState}} enum but 
call it a day otherwise.


> Add native_transport_address and native_transport_broadcast_address yaml 
> options
> 
>
> Key: CASSANDRA-11107
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11107
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: n0rad
>Assignee: Joel Knighton
>Priority: Minor
>
> I'm starting cassandra on a container with this /etc/hosts
> {quote}
> 127.0.0.1rkt-235c219a-f0dc-4958-9e03-5afe2581bbe1 localhost
> ::1  rkt-235c219a-f0dc-4958-9e03-5afe2581bbe1 localhost
> {quote}
> I have the default configuration except :
> {quote}
>  - seeds: "10.1.1.1"
> listen_address : 10.1.1.1
> {quote}
> cassandra will start listening on *127.0.0.1:9042*
> if I set *rpc_address: 10.1.1.1*, even if *start_rpc: false*, cassandra will 
> listen on 10.1.1.1
> Since rpc is not started, I assumed that *rpc_address* and 
> *broadcast_rpc_address* would be ignored
> It took me a while to figure that out. There may be something to do around this



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12716) Remove pre-3.0 compatibility code for 4.0

2016-11-30 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-12716:
-
Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

bq. There are some unused methods/fields/constants here and there (starting 
with {{LegacyLayout}}), and instances where we can now reduce method visibility 
(e.g. many in {{SchemaKeyspace}} that no longer need to be visible as 
{{LegacySchemaMigrator(Test)}} are gone for good).

I have an incoming patch for CASSANDRA-5 that removes (almost all of) 
{{LegacyLayout}}, and I'll look at the {{SchemaKeyspace}} method visibility 
there while I'm at it. Committed in the meantime, thanks.


> Remove pre-3.0 compatibility code for 4.0
> -
>
> Key: CASSANDRA-12716
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12716
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 4.0
>
>
> CASSANDRA-8099 and subsequent changes to internal formats mean that we have 
> quite a bit of backward-compatibility code all over the place. Due to that, 
> but also as a natural evolution, I believe we always had a tacit agreement 
> that 3.0/3.X would be a mandatory step on the upgrade to 4.X, so that we can 
> remove pre-3.0 compatibility (that is, we won't support going from any 2.x 
> release directly to 4.0, you'll have to upgrade to at least some 3.0 release 
> first).
> I think it's time to create the 4.0 branch and remove that pre-3.0 backward 
> compatibility code, which should clean up the code quite a bit, and that's the goal 
> of this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[03/11] cassandra git commit: Remove pre-3.0 compatibility code for 4.0

2016-11-30 Thread slebresne
http://git-wip-us.apache.org/repos/asf/cassandra/blob/4a246419/test/data/migration-sstables/2.1/test/foo-0094ac203e7411e59149ef9f87394ca6/test-foo-tmplink-ka-4-Index.db
--
diff --git 
a/test/data/migration-sstables/2.1/test/foo-0094ac203e7411e59149ef9f87394ca6/test-foo-tmplink-ka-4-Index.db
 
b/test/data/migration-sstables/2.1/test/foo-0094ac203e7411e59149ef9f87394ca6/test-foo-tmplink-ka-4-Index.db
deleted file mode 100644
index 5d71315..000
Binary files 
a/test/data/migration-sstables/2.1/test/foo-0094ac203e7411e59149ef9f87394ca6/test-foo-tmplink-ka-4-Index.db
 and /dev/null differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4a246419/test/data/migration-sstables/2.2/keyspace1/test-dfcc85801bc811e5aa694b06169f4ffa/la-1-big-CompressionInfo.db
--
diff --git 
a/test/data/migration-sstables/2.2/keyspace1/test-dfcc85801bc811e5aa694b06169f4ffa/la-1-big-CompressionInfo.db
 
b/test/data/migration-sstables/2.2/keyspace1/test-dfcc85801bc811e5aa694b06169f4ffa/la-1-big-CompressionInfo.db
deleted file mode 100644
index f7a81f0..000
Binary files 
a/test/data/migration-sstables/2.2/keyspace1/test-dfcc85801bc811e5aa694b06169f4ffa/la-1-big-CompressionInfo.db
 and /dev/null differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4a246419/test/data/migration-sstables/2.2/keyspace1/test-dfcc85801bc811e5aa694b06169f4ffa/la-1-big-Data.db
--
diff --git 
a/test/data/migration-sstables/2.2/keyspace1/test-dfcc85801bc811e5aa694b06169f4ffa/la-1-big-Data.db
 
b/test/data/migration-sstables/2.2/keyspace1/test-dfcc85801bc811e5aa694b06169f4ffa/la-1-big-Data.db
deleted file mode 100644
index 2d5e60a..000
Binary files 
a/test/data/migration-sstables/2.2/keyspace1/test-dfcc85801bc811e5aa694b06169f4ffa/la-1-big-Data.db
 and /dev/null differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4a246419/test/data/migration-sstables/2.2/keyspace1/test-dfcc85801bc811e5aa694b06169f4ffa/la-1-big-Digest.adler32
--
diff --git 
a/test/data/migration-sstables/2.2/keyspace1/test-dfcc85801bc811e5aa694b06169f4ffa/la-1-big-Digest.adler32
 
b/test/data/migration-sstables/2.2/keyspace1/test-dfcc85801bc811e5aa694b06169f4ffa/la-1-big-Digest.adler32
deleted file mode 100644
index deffbd1..000
--- 
a/test/data/migration-sstables/2.2/keyspace1/test-dfcc85801bc811e5aa694b06169f4ffa/la-1-big-Digest.adler32
+++ /dev/null
@@ -1 +0,0 @@
-2055934203
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4a246419/test/data/migration-sstables/2.2/keyspace1/test-dfcc85801bc811e5aa694b06169f4ffa/la-1-big-Filter.db
--
diff --git 
a/test/data/migration-sstables/2.2/keyspace1/test-dfcc85801bc811e5aa694b06169f4ffa/la-1-big-Filter.db
 
b/test/data/migration-sstables/2.2/keyspace1/test-dfcc85801bc811e5aa694b06169f4ffa/la-1-big-Filter.db
deleted file mode 100644
index a749417..000
Binary files 
a/test/data/migration-sstables/2.2/keyspace1/test-dfcc85801bc811e5aa694b06169f4ffa/la-1-big-Filter.db
 and /dev/null differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4a246419/test/data/migration-sstables/2.2/keyspace1/test-dfcc85801bc811e5aa694b06169f4ffa/la-1-big-Index.db
--
diff --git 
a/test/data/migration-sstables/2.2/keyspace1/test-dfcc85801bc811e5aa694b06169f4ffa/la-1-big-Index.db
 
b/test/data/migration-sstables/2.2/keyspace1/test-dfcc85801bc811e5aa694b06169f4ffa/la-1-big-Index.db
deleted file mode 100644
index d3923ab..000
Binary files 
a/test/data/migration-sstables/2.2/keyspace1/test-dfcc85801bc811e5aa694b06169f4ffa/la-1-big-Index.db
 and /dev/null differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4a246419/test/data/migration-sstables/2.2/keyspace1/test-dfcc85801bc811e5aa694b06169f4ffa/la-1-big-Statistics.db
--
diff --git 
a/test/data/migration-sstables/2.2/keyspace1/test-dfcc85801bc811e5aa694b06169f4ffa/la-1-big-Statistics.db
 
b/test/data/migration-sstables/2.2/keyspace1/test-dfcc85801bc811e5aa694b06169f4ffa/la-1-big-Statistics.db
deleted file mode 100644
index 664bfa5..000
Binary files 
a/test/data/migration-sstables/2.2/keyspace1/test-dfcc85801bc811e5aa694b06169f4ffa/la-1-big-Statistics.db
 and /dev/null differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4a246419/test/data/migration-sstables/2.2/keyspace1/test-dfcc85801bc811e5aa694b06169f4ffa/la-1-big-Summary.db
--
diff --git 
a/test/data/migration-sstables/2.2/keyspace1/test-dfcc85801bc811e5aa694b06169f4ffa/la-1-big-Summary.db
 

[08/11] cassandra git commit: Remove pre-3.0 compatibility code for 4.0

2016-11-30 Thread slebresne
http://git-wip-us.apache.org/repos/asf/cassandra/blob/4a246419/src/java/org/apache/cassandra/db/filter/RowFilter.java
--
diff --git a/src/java/org/apache/cassandra/db/filter/RowFilter.java 
b/src/java/org/apache/cassandra/db/filter/RowFilter.java
index 4c0608f..5baf783 100644
--- a/src/java/org/apache/cassandra/db/filter/RowFilter.java
+++ b/src/java/org/apache/cassandra/db/filter/RowFilter.java
@@ -509,16 +509,12 @@ public abstract class RowFilter implements 
Iterable
 {
 public void serialize(Expression expression, DataOutputPlus out, 
int version) throws IOException
 {
-if (version >= MessagingService.VERSION_30)
-out.writeByte(expression.kind().ordinal());
+out.writeByte(expression.kind().ordinal());
 
 // Custom expressions include neither a column or operator, 
but all
-// other expressions do. Also, custom expressions are 3.0+ 
only, so
-// the column & operator will always be the first things 
written for
-// any pre-3.0 version
+// other expressions do.
 if (expression.kind() == Kind.CUSTOM)
 {
-assert version >= MessagingService.VERSION_30;
 
IndexMetadata.serializer.serialize(((CustomExpression)expression).targetIndex, 
out, version);
 ByteBufferUtil.writeWithShortLength(expression.value, out);
 return;
@@ -526,7 +522,6 @@ public abstract class RowFilter implements 
Iterable
 
 if (expression.kind() == Kind.USER)
 {
-assert version >= MessagingService.VERSION_30;
 UserExpression.serialize((UserExpression)expression, out, 
version);
 return;
 }
@@ -541,15 +536,8 @@ public abstract class RowFilter implements 
Iterable
 break;
 case MAP_EQUALITY:
 MapEqualityExpression mexpr = 
(MapEqualityExpression)expression;
-if (version < MessagingService.VERSION_30)
-{
-
ByteBufferUtil.writeWithShortLength(mexpr.getIndexValue(), out);
-}
-else
-{
-ByteBufferUtil.writeWithShortLength(mexpr.key, 
out);
-ByteBufferUtil.writeWithShortLength(mexpr.value, 
out);
-}
+ByteBufferUtil.writeWithShortLength(mexpr.key, out);
+ByteBufferUtil.writeWithShortLength(mexpr.value, out);
 break;
 case THRIFT_DYN_EXPR:
 
ByteBufferUtil.writeWithShortLength(((ThriftExpression)expression).value, out);
@@ -559,62 +547,33 @@ public abstract class RowFilter implements 
Iterable
 
 public Expression deserialize(DataInputPlus in, int version, 
CFMetaData metadata) throws IOException
 {
-Kind kind = null;
-ByteBuffer name;
-Operator operator;
-ColumnDefinition column;
+Kind kind = Kind.values()[in.readByte()];
 
-if (version >= MessagingService.VERSION_30)
+// custom expressions (3.0+ only) do not contain a column or 
operator, only a value
+if (kind == Kind.CUSTOM)
 {
-kind = Kind.values()[in.readByte()];
-// custom expressions (3.0+ only) do not contain a column 
or operator, only a value
-if (kind == Kind.CUSTOM)
-{
-return new CustomExpression(metadata,
-
IndexMetadata.serializer.deserialize(in, version, metadata),
-
ByteBufferUtil.readWithShortLength(in));
-}
-
-if (kind == Kind.USER)
-{
-return UserExpression.deserialize(in, version, 
metadata);
-}
+return new CustomExpression(metadata,
+IndexMetadata.serializer.deserialize(in, version, 
metadata),
+ByteBufferUtil.readWithShortLength(in));
 }
 
-name = ByteBufferUtil.readWithShortLength(in);
-operator = Operator.readFrom(in);
-column = metadata.getColumnDefinition(name);
+if (kind == Kind.USER)
+return UserExpression.deserialize(in, version, metadata);
+
+ByteBuffer name = ByteBufferUtil.readWithShortLength(in);
+Operator operator = Operator.readFrom(in);
+

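The {{writeWithShortLength}}/{{readWithShortLength}} framing used throughout 
the serializer above can be sketched as follows (assuming an unsigned 16-bit 
big-endian length prefix followed by the bytes; the real {{ByteBufferUtil}} 
implementation may differ in details):

```java
import java.io.*;
import java.nio.ByteBuffer;

// Sketch of "short length" framing: a 2-byte length prefix, then the payload.
public class ShortLengthSketch
{
    static byte[] writeWithShortLength(ByteBuffer value) throws IOException
    {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(baos);
        out.writeShort(value.remaining()); // 2-byte big-endian length prefix
        out.write(value.array(), value.position(), value.remaining()); // assumes array-backed buffer
        return baos.toByteArray();
    }

    static ByteBuffer readWithShortLength(DataInputStream in) throws IOException
    {
        byte[] b = new byte[in.readUnsignedShort()];
        in.readFully(b);
        return ByteBuffer.wrap(b);
    }

    public static void main(String[] args) throws IOException
    {
        byte[] framed = writeWithShortLength(ByteBuffer.wrap("age".getBytes()));
        ByteBuffer back = readWithShortLength(new DataInputStream(new ByteArrayInputStream(framed)));
        System.out.println(framed.length);            // 2 (prefix) + 3 (payload)
        System.out.println(new String(back.array())); // age
    }
}
```

This is why, post-#12716, the serializer can unconditionally write 
{{mexpr.key}} and {{mexpr.value}} back to back: each is self-delimiting.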
[10/11] cassandra git commit: Remove pre-3.0 compatibility code for 4.0

2016-11-30 Thread slebresne
http://git-wip-us.apache.org/repos/asf/cassandra/blob/4a246419/src/java/org/apache/cassandra/db/RangeSliceVerbHandler.java
--
diff --git a/src/java/org/apache/cassandra/db/RangeSliceVerbHandler.java 
b/src/java/org/apache/cassandra/db/RangeSliceVerbHandler.java
deleted file mode 100644
index 55826f5..000
--- a/src/java/org/apache/cassandra/db/RangeSliceVerbHandler.java
+++ /dev/null
@@ -1,29 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.cassandra.db;
-
-import org.apache.cassandra.io.IVersionedSerializer;
-
-public class RangeSliceVerbHandler extends ReadCommandVerbHandler
-{
-@Override
-protected IVersionedSerializer serializer()
-{
-return ReadResponse.rangeSliceSerializer;
-}
-}

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4a246419/src/java/org/apache/cassandra/db/ReadCommand.java
--
diff --git a/src/java/org/apache/cassandra/db/ReadCommand.java 
b/src/java/org/apache/cassandra/db/ReadCommand.java
index d8051fe..0bda184 100644
--- a/src/java/org/apache/cassandra/db/ReadCommand.java
+++ b/src/java/org/apache/cassandra/db/ReadCommand.java
@@ -37,7 +37,6 @@ import org.apache.cassandra.db.transform.Transformation;
 import org.apache.cassandra.dht.AbstractBounds;
 import org.apache.cassandra.index.Index;
 import org.apache.cassandra.index.IndexNotAvailableException;
-import org.apache.cassandra.io.ForwardingVersionedSerializer;
 import org.apache.cassandra.io.IVersionedSerializer;
 import org.apache.cassandra.io.util.DataInputPlus;
 import org.apache.cassandra.io.util.DataOutputPlus;
@@ -64,43 +63,6 @@ public abstract class ReadCommand extends MonitorableImpl 
implements ReadQuery
 protected static final Logger logger = 
LoggerFactory.getLogger(ReadCommand.class);
 public static final IVersionedSerializer serializer = new 
Serializer();
 
-// For READ verb: will either dispatch on 'serializer' for 3.0 or 
'legacyReadCommandSerializer' for earlier version.
-// Can be removed (and replaced by 'serializer') once we drop pre-3.0 
backward compatibility.
-public static final IVersionedSerializer readSerializer = new 
ForwardingVersionedSerializer()
-{
-protected IVersionedSerializer delegate(int version)
-{
-return version < MessagingService.VERSION_30
-? legacyReadCommandSerializer : serializer;
-}
-};
-
-// For RANGE_SLICE verb: will either dispatch on 'serializer' for 3.0 or 
'legacyRangeSliceCommandSerializer' for earlier version.
-// Can be removed (and replaced by 'serializer') once we drop pre-3.0 
backward compatibility.
-public static final IVersionedSerializer rangeSliceSerializer 
= new ForwardingVersionedSerializer()
-{
-protected IVersionedSerializer delegate(int version)
-{
-return version < MessagingService.VERSION_30
-? legacyRangeSliceCommandSerializer : serializer;
-}
-};
-
-// For PAGED_RANGE verb: will either dispatch on 'serializer' for 3.0 or 
'legacyPagedRangeCommandSerializer' for earlier version.
-// Can be removed (and replaced by 'serializer') once we drop pre-3.0 
backward compatibility.
-public static final IVersionedSerializer pagedRangeSerializer 
= new ForwardingVersionedSerializer()
-{
-protected IVersionedSerializer delegate(int version)
-{
-return version < MessagingService.VERSION_30
-? legacyPagedRangeCommandSerializer : serializer;
-}
-};
-
-public static final IVersionedSerializer 
legacyRangeSliceCommandSerializer = new LegacyRangeSliceCommandSerializer();
-public static final IVersionedSerializer 
legacyPagedRangeCommandSerializer = new LegacyPagedRangeCommandSerializer();
-public static final IVersionedSerializer 
legacyReadCommandSerializer = new LegacyReadCommandSerializer();
-
 private final Kind kind;
 private final CFMetaData metadata;
 private final int nowInSec;
@@ -580,7 +542,7 @@ public abstract class ReadCommand extends MonitorableImpl 

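The serializers removed above all follow one pattern: a forwarding serializer that inspects the peer's messaging version and delegates to either a legacy or a current implementation. The following is a minimal standalone sketch of that dispatch idea only; the class names, version constant, and string payloads are illustrative, not Cassandra's actual API.

```java
// Sketch of the version-dispatch pattern removed above: pick a delegate
// serializer based on the peer's messaging version. All names here are
// hypothetical stand-ins for the real ForwardingVersionedSerializer.
interface Serializer {
    String serialize(String payload);
}

public class VersionedDispatch {
    static final int VERSION_30 = 10;  // stand-in for MessagingService.VERSION_30

    static final Serializer LEGACY = p -> "legacy:" + p;
    static final Serializer CURRENT = p -> "current:" + p;

    // Mirrors ForwardingVersionedSerializer.delegate(int version)
    static Serializer delegate(int version) {
        return version < VERSION_30 ? LEGACY : CURRENT;
    }

    public static String serialize(int version, String payload) {
        return delegate(version).serialize(payload);
    }

    public static void main(String[] args) {
        System.out.println(serialize(9, "cmd"));   // pre-3.0 peer -> legacy
        System.out.println(serialize(10, "cmd"));  // 3.0+ peer -> current
    }
}
```

Once pre-3.0 compatibility is dropped, the dispatch collapses to the single current serializer, which is exactly what this commit does.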
[07/11] cassandra git commit: Remove pre-3.0 compatibility code for 4.0

2016-11-30 Thread slebresne
http://git-wip-us.apache.org/repos/asf/cassandra/blob/4a246419/src/java/org/apache/cassandra/io/sstable/SSTableSimpleIterator.java
--
diff --git 
a/src/java/org/apache/cassandra/io/sstable/SSTableSimpleIterator.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableSimpleIterator.java
index ce42126..ad0f3c9 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableSimpleIterator.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableSimpleIterator.java
@@ -54,18 +54,12 @@ public abstract class SSTableSimpleIterator extends 
AbstractIterator
 
 public static SSTableSimpleIterator create(CFMetaData metadata, 
DataInputPlus in, SerializationHeader header, SerializationHelper helper, 
DeletionTime partitionDeletion)
 {
-if (helper.version < MessagingService.VERSION_30)
-return new OldFormatIterator(metadata, in, helper, 
partitionDeletion);
-else
-return new CurrentFormatIterator(metadata, in, header, helper);
+return new CurrentFormatIterator(metadata, in, header, helper);
 }
 
 public static SSTableSimpleIterator createTombstoneOnly(CFMetaData 
metadata, DataInputPlus in, SerializationHeader header, SerializationHelper 
helper, DeletionTime partitionDeletion)
 {
-if (helper.version < MessagingService.VERSION_30)
-return new OldFormatTombstoneIterator(metadata, in, helper, 
partitionDeletion);
-else
-return new CurrentFormatTombstoneIterator(metadata, in, header, 
helper);
+return new CurrentFormatTombstoneIterator(metadata, in, header, 
helper);
 }
 
 public abstract Row readStaticRow() throws IOException;
@@ -136,106 +130,4 @@ public abstract class SSTableSimpleIterator extends 
AbstractIterator
 }
 }
 }
-
-private static class OldFormatIterator extends SSTableSimpleIterator
-{
-private final UnfilteredDeserializer deserializer;
-
-private OldFormatIterator(CFMetaData metadata, DataInputPlus in, 
SerializationHelper helper, DeletionTime partitionDeletion)
-{
-super(metadata, in, helper);
-// We use an UnfilteredDeserializer because, even though we don't 
need all its fanciness, it happens to handle all
-// the details we need for reading the old format.
-this.deserializer = UnfilteredDeserializer.create(metadata, in, 
null, helper, partitionDeletion, false);
-}
-
-public Row readStaticRow() throws IOException
-{
-if (metadata.isCompactTable())
-{
-// For static compact tables, in the old format, static 
columns are intermingled with the other columns, so we
-// need to extract them, which implies two passes (one to extract 
the statics, then one for the other values).
-if (metadata.isStaticCompactTable())
-{
-assert in instanceof RewindableDataInput;
-RewindableDataInput file = (RewindableDataInput)in;
-DataPosition mark = file.mark();
-Row staticRow = 
LegacyLayout.extractStaticColumns(metadata, file, 
metadata.partitionColumns().statics);
-file.reset(mark);
-
-// We've extracted the static columns, so we must ignore 
them on the 2nd pass
-
((UnfilteredDeserializer.OldFormatDeserializer)deserializer).setSkipStatic();
-return staticRow;
-}
-else
-{
-return Rows.EMPTY_STATIC_ROW;
-}
-}
-
-return deserializer.hasNext() && deserializer.nextIsStatic()
- ? (Row)deserializer.readNext()
- : Rows.EMPTY_STATIC_ROW;
-
-}
-
-protected Unfiltered computeNext()
-{
-while (true)
-{
-try
-{
-if (!deserializer.hasNext())
-return endOfData();
-
-Unfiltered unfiltered = deserializer.readNext();
-if (metadata.isStaticCompactTable() && unfiltered.kind() 
== Unfiltered.Kind.ROW)
-{
-Row row = (Row) unfiltered;
-ColumnDefinition def = 
metadata.getColumnDefinition(LegacyLayout.encodeClustering(metadata, 
row.clustering()));
-if (def != null && def.isStatic())
-continue;
-}
-return unfiltered;
-}
-catch (IOException e)
-{
-throw new IOError(e);
-}
-}
-}
-
-}
-
-private static class OldFormatTombstoneIterator extends OldFormatIterator
-{
-private OldFormatTombstoneIterator(CFMetaData metadata, 

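The two-pass read in the removed readStaticRow relies on a mark/reset-capable input: mark the position, scan once to extract static columns, rewind, then read the remaining values. A minimal sketch of that mark/reset idea using standard `java.io` streams (the class and payload below are illustrative, not Cassandra's RewindableDataInput):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

public class TwoPassRead {
    // Read the same bytes twice via mark/reset, mirroring how readStaticRow
    // marks the input, extracts static columns, then rewinds for a second
    // pass over the regular columns.
    public static String[] readTwice(byte[] data) {
        try {
            ByteArrayInputStream in = new ByteArrayInputStream(data);
            in.mark(0);                                    // like RewindableDataInput.mark()
            String pass1 = new String(in.readAllBytes());  // pass 1: extract statics
            in.reset();                                    // like file.reset(mark)
            String pass2 = new String(in.readAllBytes());  // pass 2: remaining values
            return new String[] { pass1, pass2 };
        } catch (IOException e) {
            throw new UncheckedIOException(e);  // cannot happen for in-memory data
        }
    }

    public static void main(String[] args) {
        String[] r = readTwice("row-bytes".getBytes());
        System.out.println(r[0] + " / " + r[1]);  // both passes see the same bytes
    }
}
```

`ByteArrayInputStream` supports mark/reset natively (the read limit is ignored), which keeps the sketch free of the buffer-size subtleties a `BufferedInputStream` would add.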
[02/11] cassandra git commit: Remove pre-3.0 compatibility code for 4.0

2016-11-30 Thread slebresne
http://git-wip-us.apache.org/repos/asf/cassandra/blob/4a246419/test/unit/org/apache/cassandra/db/rows/DigestBackwardCompatibilityTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/db/rows/DigestBackwardCompatibilityTest.java 
b/test/unit/org/apache/cassandra/db/rows/DigestBackwardCompatibilityTest.java
deleted file mode 100644
index a72d397..000
--- 
a/test/unit/org/apache/cassandra/db/rows/DigestBackwardCompatibilityTest.java
+++ /dev/null
@@ -1,179 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.cassandra.db.rows;
-
-import java.nio.ByteBuffer;
-import java.security.MessageDigest;
-
-import org.junit.Test;
-
-import org.apache.cassandra.Util;
-import org.apache.cassandra.config.CFMetaData;
-import org.apache.cassandra.config.ColumnDefinition;
-import org.apache.cassandra.cql3.CQLTester;
-import org.apache.cassandra.db.*;
-import org.apache.cassandra.db.partitions.*;
-import org.apache.cassandra.db.context.CounterContext;
-import org.apache.cassandra.net.MessagingService;
-import org.apache.cassandra.utils.ByteBufferUtil;
-import org.apache.cassandra.utils.CounterId;
-import org.apache.cassandra.utils.FBUtilities;
-
-import static org.junit.Assert.assertEquals;
-
-/**
- * Test that digests for pre-3.0 versions are properly computed (they match the 
values computed on pre-3.0 nodes).
- *
- * The concrete 'hard-coded' digests this file tests against were 
generated on a 2.2 node using basically
- * the same test file, with two modifications:
- *   1. readAndDigest is modified to work on 2.2 (the actual modification is 
in the method as a comment)
- *   2. the assertions are replaced by a simple println() of the generated digest.
- *
- * Note that we only compare against 2.2, since digests should be fixed across 
versions before 3.0 (otherwise it would be
- * a bug in a previous version).
- */
-public class DigestBackwardCompatibilityTest extends CQLTester
-{
-private ByteBuffer readAndDigest(String partitionKey)
-{
-/*
- * In 2.2, this must be replaced by:
- *   ColumnFamily partition = 
getCurrentColumnFamilyStore().getColumnFamily(QueryFilter.getIdentityFilter(Util.dk(partitionKey),
 currentTable(), System.currentTimeMillis()));
- *   return ColumnFamily.digest(partition);
- */
-
-ReadCommand cmd = Util.cmd(getCurrentColumnFamilyStore(), 
partitionKey).build();
-ImmutableBTreePartition partition = 
Util.getOnlyPartitionUnfiltered(cmd);
-MessageDigest digest = FBUtilities.threadLocalMD5Digest();
-UnfilteredRowIterators.digest(cmd, partition.unfilteredIterator(), 
digest, MessagingService.VERSION_22);
-return ByteBuffer.wrap(digest.digest());
-}
-
-private void assertDigest(String expected, ByteBuffer actual)
-{
-String toTest = ByteBufferUtil.bytesToHex(actual);
-assertEquals(String.format("[digest from 2.2] %s != %s [digest from 
3.0]", expected, toTest), expected, toTest);
-}
-
-@Test
-public void testCQLTable() throws Throwable
-{
-createTable("CREATE TABLE %s (k text, t int, v1 text, v2 int, PRIMARY 
KEY (k, t))");
-
-String key = "someKey";
-
-for (int i = 0; i < 10; i++)
-execute("INSERT INTO %s(k, t, v1, v2) VALUES (?, ?, ?, ?) USING 
TIMESTAMP ? AND TTL ?", key, i, "v" + i, i, 1L, 200);
-
-// ColumnFamily(table_0 
[0::false:0@1!200,0:v1:false:2@1!200,0:v2:false:4@1!200,1::false:0@1!200,1:v1:false:2@1!200,1:v2:false:4@1!200,2::false:0@1!200,2:v1:false:2@1!200,2:v2:false:4@1!200,3::false:0@1!200,3:v1:false:2@1!200,3:v2:false:4@1!200,4::false:0@1!200,4:v1:false:2@1!200,4:v2:false:4@1!200,5::false:0@1!200,5:v1:false:2@1!200,5:v2:false:4@1!200,6::false:0@1!200,6:v1:false:2@1!200,6:v2:false:4@1!200,7::false:0@1!200,7:v1:false:2@1!200,7:v2:false:4@1!200,8::false:0@1!200,8:v1:false:2@1!200,8:v2:false:4@1!200,9::false:0@1!200,9:v1:false:2@1!200,9:v2:false:4@1!200,])
-assertDigest("aa608035cf6574a97061b5c166b64939", readAndDigest(key));
-
-// This is a cell deletion
-execute("DELETE v1 FROM %s USING TIMESTAMP ? WHERE k = ? AND t = ?", 
2L, key, 

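The deleted DigestBackwardCompatibilityTest above compares MD5 digests, rendered as hex strings, against values hard-coded from a 2.2 node. Stripped of the Cassandra plumbing, the comparison reduces to the following sketch (the expected value here is the well-known MD5 of empty input, used only as a fixed expectation, not one of the test's digests):

```java
import java.math.BigInteger;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class DigestCompare {
    // Compute an MD5 digest and render it as a 32-char lowercase hex string,
    // the same shape the deleted test asserted against.
    public static String md5Hex(byte[] data) {
        try {
            byte[] d = MessageDigest.getInstance("MD5").digest(data);
            // Left-pad to 32 hex chars so leading zero bytes are preserved.
            return String.format("%032x", new BigInteger(1, d));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);  // MD5 is required on every JDK
        }
    }

    public static void main(String[] args) {
        // Well-known MD5 of the empty input.
        String expected = "d41d8cd98f00b204e9800998ecf8427e";
        String actual = md5Hex(new byte[0]);
        System.out.println(expected.equals(actual));
    }
}
```

With 2.2 digests no longer needing to match, the fixture-based test can be removed, as this commit does.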
[01/11] cassandra git commit: Remove pre-3.0 compatibility code for 4.0

2016-11-30 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/trunk 3fabc3350 -> 4a2464192


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4a246419/test/unit/org/apache/cassandra/schema/LegacySchemaMigratorTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/schema/LegacySchemaMigratorTest.java 
b/test/unit/org/apache/cassandra/schema/LegacySchemaMigratorTest.java
deleted file mode 100644
index 239a90d..000
--- a/test/unit/org/apache/cassandra/schema/LegacySchemaMigratorTest.java
+++ /dev/null
@@ -1,845 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.cassandra.schema;
-
-import java.io.IOException;
-import java.nio.ByteBuffer;
-import java.util.*;
-import java.util.stream.Collectors;
-
-import com.google.common.collect.ImmutableList;
-import org.junit.Test;
-
-import org.apache.cassandra.SchemaLoader;
-import org.apache.cassandra.config.CFMetaData;
-import org.apache.cassandra.config.ColumnDefinition;
-import org.apache.cassandra.config.Schema;
-import org.apache.cassandra.config.SchemaConstants;
-import org.apache.cassandra.cql3.CQLTester;
-import org.apache.cassandra.cql3.ColumnIdentifier;
-import org.apache.cassandra.cql3.FieldIdentifier;
-import org.apache.cassandra.cql3.functions.*;
-import org.apache.cassandra.db.*;
-import org.apache.cassandra.db.rows.Row;
-import org.apache.cassandra.db.marshal.*;
-import org.apache.cassandra.index.TargetParser;
-import org.apache.cassandra.thrift.ThriftConversion;
-import org.apache.cassandra.utils.*;
-
-import static java.lang.String.format;
-import static junit.framework.Assert.assertEquals;
-import static junit.framework.Assert.assertFalse;
-import static junit.framework.Assert.assertTrue;
-import static org.apache.cassandra.cql3.QueryProcessor.executeOnceInternal;
-import static org.apache.cassandra.utils.ByteBufferUtil.bytes;
-import static org.apache.cassandra.utils.FBUtilities.json;
-
-@SuppressWarnings("deprecation")
-public class LegacySchemaMigratorTest
-{
-private static final long TIMESTAMP = 143590899400L;
-
-private static final String KEYSPACE_PREFIX = "LegacySchemaMigratorTest";
-
-/*
- * 1. Write a variety of different keyspaces/tables/types/functions in the 
legacy manner, using legacy schema tables
- * 2. Run the migrator
- * 3. Read all the keyspaces from the new schema tables
- * 4. Make sure that we've read *exactly* the same set of 
keyspaces/tables/types/functions
- * 5. Validate that the legacy schema tables are now empty
- */
-@Test
-public void testMigrate() throws IOException
-{
-CQLTester.cleanupAndLeaveDirs();
-
-Keyspaces expected = keyspacesToMigrate();
-
-// write the keyspaces into the legacy tables
-expected.forEach(LegacySchemaMigratorTest::legacySerializeKeyspace);
-
-// run the migration
-LegacySchemaMigrator.migrate();
-
-// read back all the metadata from the new schema tables
-Keyspaces actual = SchemaKeyspace.fetchNonSystemKeyspaces();
-
-// need to load back CFMetaData of those tables (CFS instances will 
still be loaded)
-loadLegacySchemaTables();
-
-// verify that nothing's left in the old schema tables
-for (CFMetaData table : LegacySchemaMigrator.LegacySchemaTables)
-{
-String query = format("SELECT * FROM %s.%s", 
SchemaConstants.SYSTEM_KEYSPACE_NAME, table.cfName);
-//noinspection ConstantConditions
-assertTrue(executeOnceInternal(query).isEmpty());
-}
-
-// make sure that we've read *exactly* the same set of 
keyspaces/tables/types/functions
-assertEquals(expected.diff(actual).toString(), expected, actual);
-
-// check that the build status of all indexes has been updated to use 
the new
-// format of index name: the index_name column of system.IndexInfo 
used to
-// contain table_name.index_name. Now it should contain just the 
index_name.
-expected.forEach(LegacySchemaMigratorTest::verifyIndexBuildStatus);
-}
-
-private static FieldIdentifier field(String field)
-{
-return 

[11/11] cassandra git commit: Remove pre-3.0 compatibility code for 4.0

2016-11-30 Thread slebresne
Remove pre-3.0 compatibility code for 4.0

patch by Sylvain Lebresne; reviewed by Aleksey Yeschenko for CASSANDRA-12716


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4a246419
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4a246419
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4a246419

Branch: refs/heads/trunk
Commit: 4a2464192e9e69457f5a5ecf26c094f9298bf069
Parents: 3fabc33
Author: Sylvain Lebresne 
Authored: Tue Sep 27 15:26:15 2016 +0200
Committer: Sylvain Lebresne 
Committed: Wed Nov 30 10:23:18 2016 +0100

--
 CHANGES.txt |1 +
 NEWS.txt|4 +
 .../cassandra/auth/CassandraRoleManager.java|   10 -
 .../batchlog/LegacyBatchlogMigrator.java|  199 
 .../org/apache/cassandra/config/CFMetaData.java |8 -
 .../restrictions/StatementRestrictions.java |3 -
 .../apache/cassandra/db/ColumnFamilyStore.java  |   27 +-
 .../org/apache/cassandra/db/Directories.java|   40 +-
 .../org/apache/cassandra/db/LegacyLayout.java   |  488 +---
 src/java/org/apache/cassandra/db/Memtable.java  |   12 +-
 src/java/org/apache/cassandra/db/Mutation.java  |   51 +-
 .../cassandra/db/MutationVerbHandler.java   |   19 +-
 .../cassandra/db/PartitionRangeReadCommand.java |6 +-
 .../cassandra/db/RangeSliceVerbHandler.java |   29 -
 .../org/apache/cassandra/db/ReadCommand.java| 1061 +
 .../org/apache/cassandra/db/ReadResponse.java   |  264 +
 .../org/apache/cassandra/db/RowIndexEntry.java  |  189 +--
 .../org/apache/cassandra/db/Serializers.java|  183 ---
 .../db/SinglePartitionReadCommand.java  |4 +-
 .../org/apache/cassandra/db/SystemKeyspace.java |  229 +---
 .../cassandra/db/UnfilteredDeserializer.java|  658 ++-
 .../columniterator/AbstractSSTableIterator.java |   44 +-
 .../db/columniterator/SSTableIterator.java  |6 +-
 .../columniterator/SSTableReversedIterator.java |   18 +-
 .../db/commitlog/CommitLogArchiver.java |2 +-
 .../db/commitlog/CommitLogDescriptor.java   |   47 +-
 .../cassandra/db/commitlog/CommitLogReader.java |   44 +-
 .../db/compaction/CompactionManager.java|4 +-
 .../cassandra/db/compaction/Upgrader.java   |2 +-
 .../cassandra/db/compaction/Verifier.java   |3 +-
 .../writers/DefaultCompactionWriter.java|2 +-
 .../writers/MajorLeveledCompactionWriter.java   |2 +-
 .../writers/MaxSSTableSizeWriter.java   |2 +-
 .../SplittingSizeTieredCompactionWriter.java|2 +-
 .../apache/cassandra/db/filter/RowFilter.java   |  103 +-
 .../db/partitions/PartitionUpdate.java  |   60 +-
 .../UnfilteredPartitionIterators.java   |9 +-
 .../UnfilteredRowIteratorWithLowerBound.java|5 +-
 .../db/rows/UnfilteredRowIterators.java |   10 +-
 .../apache/cassandra/dht/AbstractBounds.java|5 +
 src/java/org/apache/cassandra/gms/Gossiper.java |6 -
 .../cassandra/hints/LegacyHintsMigrator.java|  244 
 .../io/ForwardingVersionedSerializer.java   |   57 -
 .../io/compress/CompressionMetadata.java|   11 +-
 .../io/sstable/AbstractSSTableSimpleWriter.java |9 +-
 .../apache/cassandra/io/sstable/Component.java  |   94 +-
 .../apache/cassandra/io/sstable/Descriptor.java |  264 ++---
 .../apache/cassandra/io/sstable/IndexInfo.java  |   78 +-
 .../cassandra/io/sstable/IndexSummary.java  |   29 +-
 .../io/sstable/IndexSummaryRedistribution.java  |   16 +-
 .../apache/cassandra/io/sstable/SSTable.java|   48 +-
 .../cassandra/io/sstable/SSTableLoader.java |2 +-
 .../io/sstable/SSTableSimpleIterator.java   |  112 +-
 .../cassandra/io/sstable/SSTableTxnWriter.java  |   10 +-
 .../sstable/format/RangeAwareSSTableWriter.java |4 +-
 .../io/sstable/format/SSTableFormat.java|8 -
 .../io/sstable/format/SSTableReader.java|   94 +-
 .../io/sstable/format/SSTableWriter.java|   16 +-
 .../cassandra/io/sstable/format/Version.java|   22 -
 .../io/sstable/format/big/BigFormat.java|  125 +-
 .../io/sstable/format/big/BigTableWriter.java   |6 +-
 .../io/sstable/metadata/CompactionMetadata.java |   13 -
 .../metadata/LegacyMetadataSerializer.java  |  163 ---
 .../io/sstable/metadata/StatsMetadata.java  |   44 +-
 .../io/util/CompressedChunkReader.java  |5 +-
 .../io/util/DataIntegrityMetadata.java  |6 +-
 .../cassandra/net/IncomingTcpConnection.java|   28 +-
 .../org/apache/cassandra/net/MessageOut.java|2 +-
 .../apache/cassandra/net/MessagingService.java  |   73 +-
 .../cassandra/net/OutboundTcpConnection.java|   35 +-
 .../apache/cassandra/repair/RepairJobDesc.java  |   27 +-
 

[06/11] cassandra git commit: Remove pre-3.0 compatibility code for 4.0

2016-11-30 Thread slebresne
http://git-wip-us.apache.org/repos/asf/cassandra/blob/4a246419/src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java
--
diff --git a/src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java 
b/src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java
deleted file mode 100644
index d0fc151..000
--- a/src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java
+++ /dev/null
@@ -1,1099 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.cassandra.schema;
-
-import java.nio.ByteBuffer;
-import java.util.*;
-import java.util.stream.Collectors;
-
-import com.google.common.collect.HashMultimap;
-import com.google.common.collect.ImmutableList;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import org.apache.cassandra.config.*;
-import org.apache.cassandra.cql3.ColumnIdentifier;
-import org.apache.cassandra.cql3.FieldIdentifier;
-import org.apache.cassandra.cql3.QueryProcessor;
-import org.apache.cassandra.cql3.UntypedResultSet;
-import org.apache.cassandra.cql3.functions.FunctionName;
-import org.apache.cassandra.cql3.functions.UDAggregate;
-import org.apache.cassandra.cql3.functions.UDFunction;
-import org.apache.cassandra.db.*;
-import org.apache.cassandra.db.compaction.AbstractCompactionStrategy;
-import org.apache.cassandra.db.marshal.*;
-import org.apache.cassandra.db.rows.RowIterator;
-import org.apache.cassandra.db.rows.UnfilteredRowIterators;
-import org.apache.cassandra.exceptions.InvalidRequestException;
-import org.apache.cassandra.utils.FBUtilities;
-
-import static java.lang.String.format;
-import static org.apache.cassandra.utils.ByteBufferUtil.bytes;
-import static org.apache.cassandra.utils.FBUtilities.fromJsonMap;
-
-/**
- * This majestic class performs migration from legacy (pre-3.0) 
system.schema_* schema tables to the new and glorious
- * system_schema keyspace.
- *
- * The goal is to not lose any information in the migration - including the 
timestamps.
- */
-@SuppressWarnings("deprecation")
-public final class LegacySchemaMigrator
-{
-private LegacySchemaMigrator()
-{
-}
-
-private static final Logger logger = 
LoggerFactory.getLogger(LegacySchemaMigrator.class);
-
-static final List LegacySchemaTables =
-ImmutableList.of(SystemKeyspace.LegacyKeyspaces,
- SystemKeyspace.LegacyColumnfamilies,
- SystemKeyspace.LegacyColumns,
- SystemKeyspace.LegacyTriggers,
- SystemKeyspace.LegacyUsertypes,
- SystemKeyspace.LegacyFunctions,
- SystemKeyspace.LegacyAggregates);
-
-public static void migrate()
-{
-// read metadata from the legacy schema tables
-Collection keyspaces = readSchema();
-
-// if already upgraded, or starting a new 3.0 node, abort early
-if (keyspaces.isEmpty())
-{
-unloadLegacySchemaTables();
-return;
-}
-
-// write metadata to the new schema tables
-logger.info("Moving {} keyspaces from legacy schema tables to the new 
schema keyspace ({})",
-keyspaces.size(),
-SchemaConstants.SCHEMA_KEYSPACE_NAME);
-
keyspaces.forEach(LegacySchemaMigrator::storeKeyspaceInNewSchemaTables);
-
keyspaces.forEach(LegacySchemaMigrator::migrateBuiltIndexesForKeyspace);
-
-// flush the new tables before truncating the old ones
-SchemaKeyspace.flush();
-
-// truncate the original tables (will be snapshotted now, and will 
have been snapshotted by pre-flight checks)
-logger.info("Truncating legacy schema tables");
-truncateLegacySchemaTables();
-
-// remove legacy schema tables from Schema, so that their presence 
doesn't give the users any wrong ideas
-unloadLegacySchemaTables();
-
-logger.info("Completed migration of legacy schema tables");
-}
-
-private static void migrateBuiltIndexesForKeyspace(Keyspace keyspace)
-{
-
keyspace.tables.forEach(LegacySchemaMigrator::migrateBuiltIndexesForTable);
-}
-
-private 

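The removed migrate() above follows a strict order: read the legacy tables, abort early if empty, write to the new tables, flush the new tables, truncate the legacy ones, then unload them. A minimal in-memory sketch of that ordering, with plain maps standing in for schema tables (all names illustrative, not Cassandra's API):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class SchemaMigration {
    // Records the order of side effects, mirroring the log lines in migrate().
    static final List<String> steps = new ArrayList<>();

    public static Map<String, String> migrate(Map<String, String> legacy) {
        Map<String, String> fresh = new LinkedHashMap<>();
        if (legacy.isEmpty()) {      // already upgraded or a new node: abort early
            steps.add("unload");
            return fresh;
        }
        fresh.putAll(legacy);        // write metadata to the new schema tables
        steps.add("write");
        steps.add("flush");          // flush the new tables BEFORE truncating...
        legacy.clear();              // ...then truncate the original tables
        steps.add("truncate");
        steps.add("unload");         // finally remove the legacy tables from view
        return fresh;
    }

    public static void main(String[] args) {
        Map<String, String> legacy = new LinkedHashMap<>();
        legacy.put("ks1", "definition");
        Map<String, String> fresh = migrate(legacy);
        System.out.println(fresh.containsKey("ks1") && legacy.isEmpty());
        System.out.println(steps);
    }
}
```

The flush-before-truncate ordering is the load-bearing detail: it ensures the new tables are durable before the only other copy of the metadata is destroyed.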
[04/11] cassandra git commit: Remove pre-3.0 compatibility code for 4.0

2016-11-30 Thread slebresne
http://git-wip-us.apache.org/repos/asf/cassandra/blob/4a246419/test/data/legacy-sstables/ka/legacy_tables/legacy_ka_clust_compact/legacy_tables-legacy_ka_clust_compact-ka-1-TOC.txt
--
diff --git 
a/test/data/legacy-sstables/ka/legacy_tables/legacy_ka_clust_compact/legacy_tables-legacy_ka_clust_compact-ka-1-TOC.txt
 
b/test/data/legacy-sstables/ka/legacy_tables/legacy_ka_clust_compact/legacy_tables-legacy_ka_clust_compact-ka-1-TOC.txt
deleted file mode 100644
index 7f7fe79..000
--- 
a/test/data/legacy-sstables/ka/legacy_tables/legacy_ka_clust_compact/legacy_tables-legacy_ka_clust_compact-ka-1-TOC.txt
+++ /dev/null
@@ -1,8 +0,0 @@
-Filter.db
-TOC.txt
-Statistics.db
-Summary.db
-Index.db
-Data.db
-Digest.sha1
-CompressionInfo.db

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4a246419/test/data/legacy-sstables/ka/legacy_tables/legacy_ka_clust_counter/legacy_tables-legacy_ka_clust_counter-ka-1-CompressionInfo.db
--
diff --git 
a/test/data/legacy-sstables/ka/legacy_tables/legacy_ka_clust_counter/legacy_tables-legacy_ka_clust_counter-ka-1-CompressionInfo.db
 
b/test/data/legacy-sstables/ka/legacy_tables/legacy_ka_clust_counter/legacy_tables-legacy_ka_clust_counter-ka-1-CompressionInfo.db
deleted file mode 100644
index 3c7291c..000
Binary files 
a/test/data/legacy-sstables/ka/legacy_tables/legacy_ka_clust_counter/legacy_tables-legacy_ka_clust_counter-ka-1-CompressionInfo.db
 and /dev/null differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4a246419/test/data/legacy-sstables/ka/legacy_tables/legacy_ka_clust_counter/legacy_tables-legacy_ka_clust_counter-ka-1-Data.db
--
diff --git 
a/test/data/legacy-sstables/ka/legacy_tables/legacy_ka_clust_counter/legacy_tables-legacy_ka_clust_counter-ka-1-Data.db
 
b/test/data/legacy-sstables/ka/legacy_tables/legacy_ka_clust_counter/legacy_tables-legacy_ka_clust_counter-ka-1-Data.db
deleted file mode 100644
index 3566e5a..000
Binary files 
a/test/data/legacy-sstables/ka/legacy_tables/legacy_ka_clust_counter/legacy_tables-legacy_ka_clust_counter-ka-1-Data.db
 and /dev/null differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4a246419/test/data/legacy-sstables/ka/legacy_tables/legacy_ka_clust_counter/legacy_tables-legacy_ka_clust_counter-ka-1-Digest.sha1
--
diff --git 
a/test/data/legacy-sstables/ka/legacy_tables/legacy_ka_clust_counter/legacy_tables-legacy_ka_clust_counter-ka-1-Digest.sha1
 
b/test/data/legacy-sstables/ka/legacy_tables/legacy_ka_clust_counter/legacy_tables-legacy_ka_clust_counter-ka-1-Digest.sha1
deleted file mode 100644
index a679541..000
--- 
a/test/data/legacy-sstables/ka/legacy_tables/legacy_ka_clust_counter/legacy_tables-legacy_ka_clust_counter-ka-1-Digest.sha1
+++ /dev/null
@@ -1 +0,0 @@
-2539906592
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4a246419/test/data/legacy-sstables/ka/legacy_tables/legacy_ka_clust_counter/legacy_tables-legacy_ka_clust_counter-ka-1-Filter.db
--
diff --git 
a/test/data/legacy-sstables/ka/legacy_tables/legacy_ka_clust_counter/legacy_tables-legacy_ka_clust_counter-ka-1-Filter.db
 
b/test/data/legacy-sstables/ka/legacy_tables/legacy_ka_clust_counter/legacy_tables-legacy_ka_clust_counter-ka-1-Filter.db
deleted file mode 100644
index c3cb27c..000
Binary files 
a/test/data/legacy-sstables/ka/legacy_tables/legacy_ka_clust_counter/legacy_tables-legacy_ka_clust_counter-ka-1-Filter.db
 and /dev/null differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4a246419/test/data/legacy-sstables/ka/legacy_tables/legacy_ka_clust_counter/legacy_tables-legacy_ka_clust_counter-ka-1-Index.db
--
diff --git 
a/test/data/legacy-sstables/ka/legacy_tables/legacy_ka_clust_counter/legacy_tables-legacy_ka_clust_counter-ka-1-Index.db
 
b/test/data/legacy-sstables/ka/legacy_tables/legacy_ka_clust_counter/legacy_tables-legacy_ka_clust_counter-ka-1-Index.db
deleted file mode 100644
index 51ddf91..000
Binary files 
a/test/data/legacy-sstables/ka/legacy_tables/legacy_ka_clust_counter/legacy_tables-legacy_ka_clust_counter-ka-1-Index.db
 and /dev/null differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4a246419/test/data/legacy-sstables/ka/legacy_tables/legacy_ka_clust_counter/legacy_tables-legacy_ka_clust_counter-ka-1-Statistics.db
--
diff --git 
a/test/data/legacy-sstables/ka/legacy_tables/legacy_ka_clust_counter/legacy_tables-legacy_ka_clust_counter-ka-1-Statistics.db
 
b/test/data/legacy-sstables/ka/legacy_tables/legacy_ka_clust_counter/legacy_tables-legacy_ka_clust_counter-ka-1-Statistics.db

[09/11] cassandra git commit: Remove pre-3.0 compatibility code for 4.0

2016-11-30 Thread slebresne
http://git-wip-us.apache.org/repos/asf/cassandra/blob/4a246419/src/java/org/apache/cassandra/db/Serializers.java
--
diff --git a/src/java/org/apache/cassandra/db/Serializers.java 
b/src/java/org/apache/cassandra/db/Serializers.java
deleted file mode 100644
index d6aac64..000
--- a/src/java/org/apache/cassandra/db/Serializers.java
+++ /dev/null
@@ -1,183 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.cassandra.db;
-
-import java.io.*;
-import java.nio.ByteBuffer;
-import java.util.List;
-import java.util.Map;
-import java.util.concurrent.ConcurrentHashMap;
-
-import org.apache.cassandra.config.CFMetaData;
-import org.apache.cassandra.db.marshal.AbstractType;
-import org.apache.cassandra.db.marshal.CompositeType;
-import org.apache.cassandra.io.ISerializer;
-import org.apache.cassandra.io.sstable.IndexInfo;
-import org.apache.cassandra.io.sstable.format.big.BigFormat;
-import org.apache.cassandra.io.util.DataInputPlus;
-import org.apache.cassandra.io.util.DataOutputPlus;
-import org.apache.cassandra.io.sstable.format.Version;
-import org.apache.cassandra.utils.ByteBufferUtil;
-
-/**
- * Holds references on serializers that depend on the table definition.
- */
-public class Serializers
-{
-    private final CFMetaData metadata;
-
-    private Map<Version, IndexInfo.Serializer> otherVersionClusteringSerializers;
-
-    private final IndexInfo.Serializer latestVersionIndexSerializer;
-
-    public Serializers(CFMetaData metadata)
-    {
-        this.metadata = metadata;
-        this.latestVersionIndexSerializer = new IndexInfo.Serializer(BigFormat.latestVersion,
-                                                                     indexEntryClusteringPrefixSerializer(BigFormat.latestVersion, SerializationHeader.makeWithoutStats(metadata)));
-    }
-
-    IndexInfo.Serializer indexInfoSerializer(Version version, SerializationHeader header)
-    {
-        // null header indicates streaming from pre-3.0 sstables
-        if (version.equals(BigFormat.latestVersion) && header != null)
-            return latestVersionIndexSerializer;
-
-        if (otherVersionClusteringSerializers == null)
-            otherVersionClusteringSerializers = new ConcurrentHashMap<>();
-        IndexInfo.Serializer serializer = otherVersionClusteringSerializers.get(version);
-        if (serializer == null)
-        {
-            serializer = new IndexInfo.Serializer(version,
-                                                  indexEntryClusteringPrefixSerializer(version, header));
-            otherVersionClusteringSerializers.put(version, serializer);
-        }
-        return serializer;
-    }
-
-    // TODO: Once we drop support for old (pre-3.0) sstables, we can drop this method and inline the calls to
-    // ClusteringPrefix.serializer directly. At which point this whole class probably becomes
-    // unnecessary (since IndexInfo.Serializer won't depend on the metadata either).
-    private ISerializer<ClusteringPrefix> indexEntryClusteringPrefixSerializer(Version version, SerializationHeader header)
-    {
-        if (!version.storeRows() || header == null) // null header indicates streaming from pre-3.0 sstables
-        {
-            return oldFormatSerializer(version);
-        }
-
-        return new NewFormatSerializer(version, header.clusteringTypes());
-    }
-
-    private ISerializer<ClusteringPrefix> oldFormatSerializer(Version version)
-    {
-        return new ISerializer<ClusteringPrefix>()
-        {
-            List<AbstractType<?>> clusteringTypes = SerializationHeader.makeWithoutStats(metadata).clusteringTypes();
-
-            public void serialize(ClusteringPrefix clustering, DataOutputPlus out) throws IOException
-            {
-                // we deserialize in the old format and serialize in the new format
-                ClusteringPrefix.serializer.serialize(clustering, out,
-                                                      version.correspondingMessagingVersion(),
-                                                      clusteringTypes);
-            }
-
-            @Override
-            public void skip(DataInputPlus in) throws IOException
-            {
-  
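The deleted `indexInfoSerializer` above lazily caches one serializer per sstable version with a get-then-null-check-then-put sequence on a `ConcurrentHashMap`, which can construct the same serializer twice under concurrent calls (harmlessly, since the map tolerates the duplicate). The same caching pattern can be written more tightly with `computeIfAbsent`, which runs the factory at most once per key. A minimal sketch; `Version` and `Serializer` below are illustrative stand-ins, not Cassandra's real classes:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-ins for Cassandra's Version / IndexInfo.Serializer types.
record Version(String name) {}
record Serializer(Version version) {}

class SerializerCache {
    private final Map<Version, Serializer> cache = new ConcurrentHashMap<>();

    // computeIfAbsent invokes the factory at most once per key, unlike the
    // get-then-put sequence in the deleted code, which can race and build twice.
    Serializer forVersion(Version v) {
        return cache.computeIfAbsent(v, Serializer::new);
    }
}

public class Demo {
    public static void main(String[] args) {
        SerializerCache cache = new SerializerCache();
        Version ma = new Version("ma");
        // Repeated lookups return the same cached instance.
        System.out.println(cache.forVersion(ma) == cache.forVersion(ma)); // prints "true"
    }
}
```

Note that `computeIfAbsent` holds a bin-level lock while the factory runs, so this trade-off only pays off when serializer construction is cheap, as it is here.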

[05/11] cassandra git commit: Remove pre-3.0 compatibility code for 4.0

2016-11-30 Thread slebresne
http://git-wip-us.apache.org/repos/asf/cassandra/blob/4a246419/src/java/org/apache/cassandra/tools/SSTableMetadataViewer.java
--
diff --git a/src/java/org/apache/cassandra/tools/SSTableMetadataViewer.java 
b/src/java/org/apache/cassandra/tools/SSTableMetadataViewer.java
index b405fad..019e053 100644
--- a/src/java/org/apache/cassandra/tools/SSTableMetadataViewer.java
+++ b/src/java/org/apache/cassandra/tools/SSTableMetadataViewer.java
@@ -215,7 +215,7 @@ public class SSTableMetadataViewer
 
         try (DataInputStream iStream = new DataInputStream(new FileInputStream(summariesFile)))
         {
-            Pair<DecoratedKey, DecoratedKey> firstLast = new IndexSummary.IndexSummarySerializer().deserializeFirstLastKey(iStream, partitioner, descriptor.version.hasSamplingLevel());
+            Pair<DecoratedKey, DecoratedKey> firstLast = new IndexSummary.IndexSummarySerializer().deserializeFirstLastKey(iStream, partitioner);
             out.printf("First token: %s (key=%s)%n", firstLast.left.getToken(), keyType.getString(firstLast.left.getKey()));
             out.printf("Last token: %s (key=%s)%n", firstLast.right.getToken(), keyType.getString(firstLast.right.getKey()));
         }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4a246419/src/java/org/apache/cassandra/tools/SSTableRepairedAtSetter.java
--
diff --git a/src/java/org/apache/cassandra/tools/SSTableRepairedAtSetter.java 
b/src/java/org/apache/cassandra/tools/SSTableRepairedAtSetter.java
index 413ec4d..b97960a 100644
--- a/src/java/org/apache/cassandra/tools/SSTableRepairedAtSetter.java
+++ b/src/java/org/apache/cassandra/tools/SSTableRepairedAtSetter.java
@@ -82,21 +82,20 @@ public class SSTableRepairedAtSetter
         for (String fname: fileNames)
         {
             Descriptor descriptor = Descriptor.fromFilename(fname);
-            if (descriptor.version.hasRepairedAt())
+            if (!descriptor.version.isCompatible())
             {
-                if (setIsRepaired)
-                {
-                    FileTime f = Files.getLastModifiedTime(new File(descriptor.filenameFor(Component.DATA)).toPath());
-                    descriptor.getMetadataSerializer().mutateRepairedAt(descriptor, f.toMillis());
-                }
-                else
-                {
-                    descriptor.getMetadataSerializer().mutateRepairedAt(descriptor, ActiveRepairService.UNREPAIRED_SSTABLE);
-                }
+                System.err.println("SSTable " + fname + " is in an old and unsupported format");
+                continue;
+            }
+
+            if (setIsRepaired)
+            {
+                FileTime f = Files.getLastModifiedTime(new File(descriptor.filenameFor(Component.DATA)).toPath());
+                descriptor.getMetadataSerializer().mutateRepairedAt(descriptor, f.toMillis());
             }
             else
             {
-                System.err.println("SSTable " + fname + " does not have repaired property, run upgradesstables");
+                descriptor.getMetadataSerializer().mutateRepairedAt(descriptor, ActiveRepairService.UNREPAIRED_SSTABLE);
             }
         }
     }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4a246419/src/java/org/apache/cassandra/tools/StandaloneSplitter.java
--
diff --git a/src/java/org/apache/cassandra/tools/StandaloneSplitter.java 
b/src/java/org/apache/cassandra/tools/StandaloneSplitter.java
index 1e57ff4..9db 100644
--- a/src/java/org/apache/cassandra/tools/StandaloneSplitter.java
+++ b/src/java/org/apache/cassandra/tools/StandaloneSplitter.java
@@ -70,12 +70,11 @@ public class StandaloneSplitter
                 continue;
             }
 
-            Pair<Descriptor, Component> pair = SSTable.tryComponentFromFilename(file.getParentFile(), file.getName());
-            if (pair == null) {
+            Descriptor desc = SSTable.tryDescriptorFromFilename(file);
+            if (desc == null) {
                 System.out.println("Skipping non sstable file " + file);
                 continue;
             }
-            Descriptor desc = pair.left;
 
             if (ksName == null)
                 ksName = desc.ksname;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4a246419/src/java/org/apache/cassandra/utils/BloomFilter.java
--
diff --git a/src/java/org/apache/cassandra/utils/BloomFilter.java 
b/src/java/org/apache/cassandra/utils/BloomFilter.java
index 4ff07b7..bc52c09 100644
--- a/src/java/org/apache/cassandra/utils/BloomFilter.java
+++ b/src/java/org/apache/cassandra/utils/BloomFilter.java
@@ -37,18 +37,12 @@ public class BloomFilter extends WrappedSharedCloseable implements IFilter
 
 public 

[jira] [Commented] (CASSANDRA-12666) dtest failure in paging_test.TestPagingData.test_paging_with_filtering_on_partition_key

2016-11-30 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15708080#comment-15708080
 ] 

Alex Petrov commented on CASSANDRA-12666:
-

You're right, that check was redundant. +1 

> dtest failure in 
> paging_test.TestPagingData.test_paging_with_filtering_on_partition_key
> ---
>
> Key: CASSANDRA-12666
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12666
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Alex Petrov
>Priority: Critical
>  Labels: dtest
> Fix For: 3.10
>
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_novnode_dtest/480/testReport/paging_test/TestPagingData/test_paging_with_filtering_on_partition_key
> {code}
> Standard Output
> Unexpected error in node3 log, error: 
> ERROR [Native-Transport-Requests-3] 2016-09-17 00:50:11,543 Message.java:622 
> - Unexpected exception during request; channel = [id: 0x467a4afe, 
> L:/127.0.0.3:9042 - R:/127.0.0.1:59115]
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.dht.IncludingExcludingBounds.split(IncludingExcludingBounds.java:45)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.getRestrictedRanges(StorageProxy.java:2368)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$RangeIterator.<init>(StorageProxy.java:1951)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.getRangeSlice(StorageProxy.java:2235)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.PartitionRangeReadCommand.execute(PartitionRangeReadCommand.java:184)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:66)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.pager.PartitionRangeQueryPager.fetchPage(PartitionRangeQueryPager.java:36)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement$Pager$NormalPager.fetchPage(SelectStatement.java:328)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:375)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:250)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:78)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:216)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:247) 
> ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:232) 
> ~[main/:na]
>   at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:115)
>  ~[main/:na]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:516)
>  [main/:na]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:409)
>  [main/:na]
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:366)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:35)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:357)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_45]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
>  [main/:na]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [main/:na]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
> {code}
> Related failures:
> http://cassci.datastax.com/job/trunk_novnode_dtest/480/testReport/paging_test/TestPagingData/test_paging_with_filtering_on_partition_key_on_clustering_columns/
> http://cassci.datastax.com/job/trunk_novnode_dtest/480/testReport/paging_test/TestPagingData/test_paging_with_filtering_on_partition_key_on_clustering_columns_with_contains/
> http://cassci.datastax.com/job/trunk_novnode_dtest/480/testReport/paging_test/TestPagingData/test_paging_with_filtering_on_partition_key_on_counter_columns/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Updated] (CASSANDRA-12969) Index: index can significantly slow down boot

2016-11-30 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-12969:
---
Component/s: (was: Core)
 CQL

> Index: index can significantly slow down boot
> -
>
> Key: CASSANDRA-12969
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12969
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Corentin Chary
> Fix For: 3.x
>
> Attachments: 0004-index-do-not-re-insert-values-in-IndexInfo.patch
>
>
> During startup, each existing index is opened and marked as built by adding 
> an entry in "IndexInfo" and forcing a flush. Because of that, we end up 
> flushing one sstable per index; on systems with HDDs this can take minutes 
> for nothing.
> The following patch avoids creating useless new sstables when the index was 
> already marked as built, which greatly reduces startup time (and improves 
> availability during restarts).
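The fix described above amounts to making the "mark built" step idempotent: pay for the IndexInfo write (and the flush it forces) only when the index was not already recorded as built. A minimal sketch of that guard, using hypothetical names rather than Cassandra's actual SystemKeyspace API:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of the patch's idea: skip the marker write, and the
// flush it forces, when the index is already recorded as built. These names
// are hypothetical, not Cassandra's real startup code.
class IndexBuildTracker {
    private final Set<String> built = ConcurrentHashMap.newKeySet();
    int flushes = 0; // counts how many times we paid for a flush

    void markBuilt(String indexName) {
        // add() returns false when the entry already existed, so an index
        // built in a previous run costs no flush on restart.
        if (built.add(indexName)) {
            flush();
        }
    }

    private void flush() { flushes++; } // stand-in for the forced sstable flush
}

public class IndexInfoSketch {
    public static void main(String[] args) {
        IndexBuildTracker tracker = new IndexBuildTracker();
        tracker.markBuilt("device_share.expireIndex");
        tracker.markBuilt("device_share.expireIndex"); // already built: no second flush
        System.out.println(tracker.flushes); // prints "1"
    }
}
```

With such a guard, a restart of a node with many indexes triggers zero extra flushes instead of one per index, which is where the HDD startup time goes.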





[jira] [Updated] (CASSANDRA-12969) Index: index can significantly slow down boot

2016-11-30 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-12969:
---
Issue Type: Improvement  (was: Bug)

> Index: index can significantly slow down boot
> -
>
> Key: CASSANDRA-12969
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12969
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Corentin Chary
> Fix For: 3.x
>
> Attachments: 0004-index-do-not-re-insert-values-in-IndexInfo.patch
>
>
> During startup, each existing index is opened and marked as built by adding 
> an entry in "IndexInfo" and forcing a flush. Because of that, we end up 
> flushing one sstable per index; on systems with HDDs this can take minutes 
> for nothing.
> The following patch avoids creating useless new sstables when the index was 
> already marked as built, which greatly reduces startup time (and improves 
> availability during restarts).





[jira] [Updated] (CASSANDRA-12969) Index: index can significantly slow down boot

2016-11-30 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-12969:
---
Reviewer: Sam Tunnicliffe
  Status: Patch Available  (was: Open)

> Index: index can significantly slow down boot
> -
>
> Key: CASSANDRA-12969
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12969
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Corentin Chary
> Fix For: 3.x
>
> Attachments: 0004-index-do-not-re-insert-values-in-IndexInfo.patch
>
>
> During startup, each existing index is opened and marked as built by adding 
> an entry in "IndexInfo" and forcing a flush. Because of that, we end up 
> flushing one sstable per index; on systems with HDDs this can take minutes 
> for nothing.
> The following patch avoids creating useless new sstables when the index was 
> already marked as built, which greatly reduces startup time (and improves 
> availability during restarts).





[jira] [Assigned] (CASSANDRA-8398) Expose time spent waiting in thread pool queue

2016-11-30 Thread Dikang Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dikang Gu reassigned CASSANDRA-8398:


Assignee: Dikang Gu

> Expose time spent waiting in thread pool queue 
> ---
>
> Key: CASSANDRA-8398
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8398
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: T Jake Luciani
>Assignee: Dikang Gu
>Priority: Minor
>  Labels: lhf
> Fix For: 2.1.x
>
>
> We are missing an important source of latency in our system, the time waiting 
> to be processed by thread pools.  We should add a metric for this so someone 
> can easily see how much time is spent just waiting to be processed.
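One common way to expose this metric is to timestamp each task when it is submitted and measure the gap until it actually starts running; that gap is exactly the time spent waiting in the pool's queue. A minimal sketch of the idea (illustrative names, not Cassandra's actual metrics plumbing):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the requested metric: record each task's enqueue time and
// accumulate how long it sat in the queue before execution began.
class QueueWaitExecutor {
    private final ExecutorService delegate;
    final AtomicLong totalWaitNanos = new AtomicLong();

    QueueWaitExecutor(ExecutorService delegate) { this.delegate = delegate; }

    void execute(Runnable task) {
        final long enqueuedAt = System.nanoTime();
        delegate.execute(() -> {
            // Time between submission and start of execution = queue wait.
            totalWaitNanos.addAndGet(System.nanoTime() - enqueuedAt);
            task.run();
        });
    }
}

public class QueueWaitDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        QueueWaitExecutor metered = new QueueWaitExecutor(pool);
        for (int i = 0; i < 4; i++)
            metered.execute(() -> { });
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("total queue wait (ns): " + metered.totalWaitNanos.get());
    }
}
```

In practice a histogram (e.g. per-pool latency reservoir) is more useful than a running total, since tail queue-wait latency is what hurts requests; the wrapper approach above works for either.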


