[jira] [Comment Edited] (CASSANDRA-8343) Secondary index creation causes moves/bootstraps to fail

2016-02-19 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154892#comment-15154892
 ] 

Paulo Motta edited comment on CASSANDRA-8343 at 2/19/16 10:31 PM:
--

Thanks for the input [~slebresne]!

I created a simpler version of the patch that increases the incoming socket 
timeout on the sending side to {{3 * streaming_socket_timeout}} when the 
session reaches the {{WAIT_COMPLETE}} state. I also close the outgoing message 
handler on the sender side after the "complete" message is sent, and similarly 
close the incoming message handler on the receiving side after the "complete" 
message is received, since they are no longer necessary.

This should give the receiver more time (3 hours with the current default 
{{streaming_socket_timeout}} of 1 hour) to process the received data, rebuild 
indexes, etc. If even this larger timeout is reached, a message is logged 
asking the user to increase the value of {{streaming_socket_timeout}} (as 
suggested by Sylvain) and the stream session is failed.
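
For illustration, the sending-side change is roughly equivalent to the sketch 
below (the helper name is made up; {{DatabaseDescriptor.getStreamingSocketTimeout()}} 
and {{Socket#setSoTimeout}} are the existing APIs involved, and the actual patch 
may differ):

{code}
import java.net.Socket;
import java.net.SocketException;

import org.apache.cassandra.config.DatabaseDescriptor;

public final class WaitCompleteTimeoutSketch
{
    // Rough sketch only, not the actual patch: once the sender's session
    // reaches WAIT_COMPLETE, widen the read timeout on its incoming socket so
    // the receiver gets up to 3 * streaming_socket_timeout to rebuild indexes
    // before replying with "complete".
    public static void extendTimeoutForWaitComplete(Socket socket) throws SocketException
    {
        int timeout = DatabaseDescriptor.getStreamingSocketTimeout(); // 1 hour by default
        socket.setSoTimeout(3 * timeout);                             // 3 hours by default
    }
}
{code}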

I think this should be a good enough approach for the time being and we can 
revisit adding a new {{KeepAlive}} message if necessary when bumping the 
streaming message version later. We could also revisit that in the context of 
CASSANDRA-8621 (streaming retry on socket timeout).

Below are the new patch and tests:
||2.2||dtest||
|[branch|https://github.com/apache/cassandra/compare/cassandra-2.2...pauloricardomg:2.2-8343]|[branch|https://github.com/riptano/cassandra-dtest/compare/master...pauloricardomg:8343]|
|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-2.2-8343-testall/lastCompletedBuild/testReport/]|
|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-2.2-8343-dtest/lastCompletedBuild/testReport/]|

WDYT [~yukim]?


was (Author: pauloricardomg):
Thanks for the input [~slebresne]!

I created a simpler version of the patch that does not involve adding a new 
message to the streaming protocol but can tolerate an index rebuild that takes 
longer than {{streaming_socket_timeout}}.

The basic idea is to tolerate up to 3 socket timeouts on the sender side while 
it is in the {{WAIT_COMPLETE}} state, giving the receiver a total of 
{{3 * streaming_socket_timeout}} to process the data (3 hours with the current 
default), which IMO should be sufficient for most scenarios. If there are more 
than 3 consecutive socket timeouts, an informative message is logged asking the 
user to increase the value of {{streaming_socket_timeout}} (as suggested by 
Sylvain) and the stream session is failed. Below is an example of this new 
approach:

{noformat}
debug.log:DEBUG [STREAM-OUT-/127.0.0.2] 2016-02-19 18:00:09,947 
ConnectionHandler.java:351 - [Stream #c2b5c6c0-d74b-11e5-a490-dfb88388eec7] 
Sending Complete
debug.log:DEBUG [STREAM-IN-/127.0.0.2] 2016-02-19 18:00:10,948 
StreamSession.java:221 - [Stream #c2b5c6c0-d74b-11e5-a490-dfb88388eec7] 
Received socket timeout but will ignore it because stream session is on 
WAIT_COMPLETE state, so other peer might be still processing received data 
before replying to this node. Ignored 1 out of 3 times.
debug.log:DEBUG [STREAM-IN-/127.0.0.2] 2016-02-19 18:00:11,948 
StreamSession.java:221 - [Stream #c2b5c6c0-d74b-11e5-a490-dfb88388eec7] 
Received socket timeout but will ignore it because stream session is on 
WAIT_COMPLETE state, so other peer might be still processing received data 
before replying to this node. Ignored 2 out of 3 times.
debug.log:DEBUG [STREAM-IN-/127.0.0.2] 2016-02-19 18:00:12,096 
ConnectionHandler.java:273 - [Stream #c2b5c6c0-d74b-11e5-a490-dfb88388eec7] 
Received Complete
debug.log:DEBUG [STREAM-IN-/127.0.0.2] 2016-02-19 18:00:12,096 
ConnectionHandler.java:112 - [Stream #c2b5c6c0-d74b-11e5-a490-dfb88388eec7] 
Closing stream connection handler on /127.0.0.2
{noformat}

IMO this should be a good enough approach for the time being and we can revisit 
adding a new {{KeepAlive}} message if necessary when bumping the streaming 
message version later. We could also revisit that in the context of 
CASSANDRA-8621 (streaming retry on socket timeout).

I also close the outgoing message handler on the sender side after the 
"complete" message is sent, and similarly close the incoming message handler 
on the receiving side after the "complete" message is received, since they are 
no longer necessary.

Below are the new patch and tests:
||2.2||dtest||
|[branch|https://github.com/apache/cassandra/compare/cassandra-2.2...pauloricardomg:2.2-8343]|[branch|https://github.com/riptano/cassandra-dtest/compare/master...pauloricardomg:8343]|
|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-2.2-8343-testall/lastCompletedBuild/testReport/]|

[jira] [Commented] (CASSANDRA-11053) COPY FROM on large datasets: fix progress report and debug performance

2016-02-19 Thread Adam Holmberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154987#comment-15154987
 ] 

Adam Holmberg commented on CASSANDRA-11053:
---

Curious: with the introduction of Cython, are you going to give the option to 
build, target a specific platform, or build for many platforms and include the 
extensions pre-built?

> COPY FROM on large datasets: fix progress report and debug performance
> --
>
> Key: CASSANDRA-11053
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11053
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: copy_from_large_benchmark.txt, 
> copy_from_large_benchmark_2.txt, parent_profile.txt, parent_profile_2.txt, 
> worker_profiles.txt, worker_profiles_2.txt
>
>
> Running COPY from on a large dataset (20G divided in 20M records) revealed 
> two issues:
> * The progress report is incorrect, it is very slow until almost the end of 
> the test at which point it catches up extremely quickly.
> * The performance in rows per second is similar to running smaller tests with 
> a smaller cluster locally (approx 35,000 rows per second). As a comparison, 
> cassandra-stress manages 50,000 rows per second under the same set-up, 
> therefore resulting 1.5 times faster. 
> See attached file _copy_from_large_benchmark.txt_ for the benchmark details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11053) COPY FROM on large datasets: fix progress report and debug performance

2016-02-19 Thread Adam Holmberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154980#comment-15154980
 ] 

Adam Holmberg commented on CASSANDRA-11053:
---

I have visited message coalescing in the driver a couple of times, and I 
couldn't make it matter. It was in a more controlled environment than AWS, but 
I was trying to emulate network delay using local intervention.

> COPY FROM on large datasets: fix progress report and debug performance
> --
>
> Key: CASSANDRA-11053
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11053
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: copy_from_large_benchmark.txt, 
> copy_from_large_benchmark_2.txt, parent_profile.txt, parent_profile_2.txt, 
> worker_profiles.txt, worker_profiles_2.txt
>
>
> Running COPY from on a large dataset (20G divided in 20M records) revealed 
> two issues:
> * The progress report is incorrect, it is very slow until almost the end of 
> the test at which point it catches up extremely quickly.
> * The performance in rows per second is similar to running smaller tests with 
> a smaller cluster locally (approx 35,000 rows per second). As a comparison, 
> cassandra-stress manages 50,000 rows per second under the same set-up, 
> therefore resulting 1.5 times faster. 
> See attached file _copy_from_large_benchmark.txt_ for the benchmark details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11198) Materialized view inconsistency

2016-02-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154931#comment-15154931
 ] 

Gábor Auth edited comment on CASSANDRA-11198 at 2/19/16 9:54 PM:
-

Oh, the traced SELECT is enough? Attached.


was (Author: gabor.auth):
Oh, the traced SELECT is enough?

> Materialized view inconsistency
> ---
>
> Key: CASSANDRA-11198
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11198
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Gábor Auth
>Assignee: Carl Yeksigian
> Attachments: CASSANDRA-11198.trace
>
>
> Here is a materialized view:
> {code}
> > DESCRIBE MATERIALIZED VIEW unit_by_transport ;
> CREATE MATERIALIZED VIEW unit_by_transport AS
> SELECT *
> FROM unit
> WHERE transportid IS NOT NULL AND type IS NOT NULL
> PRIMARY KEY (transportid, id)
> WITH CLUSTERING ORDER BY (id ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> {code}
> I cannot reproduce this but sometimes and somehow happened the same issue 
> (https://issues.apache.org/jira/browse/CASSANDRA-10910):
> {code}
> > SELECT transportid, id, type FROM unit_by_transport WHERE 
> > transportid=24f90d20-d61f-11e5-9d3c-8fc3ad6906e2 and 
> > id=99c05a70-d686-11e5-a169-97287061d5d1;
>  transportid  | id   
> | type
> --+--+--
>  24f90d20-d61f-11e5-9d3c-8fc3ad6906e2 | 99c05a70-d686-11e5-a169-97287061d5d1 
> | null
> (1 rows)
> > SELECT transportid, id, type FROM unit WHERE 
> > id=99c05a70-d686-11e5-a169-97287061d5d1;
>  transportid | id | type
> -++--
> (0 rows)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11198) Materialized view inconsistency

2016-02-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gábor Auth updated CASSANDRA-11198:
---
Attachment: CASSANDRA-11198.trace

> Materialized view inconsistency
> ---
>
> Key: CASSANDRA-11198
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11198
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Gábor Auth
>Assignee: Carl Yeksigian
> Attachments: CASSANDRA-11198.trace
>
>
> Here is a materialized view:
> {code}
> > DESCRIBE MATERIALIZED VIEW unit_by_transport ;
> CREATE MATERIALIZED VIEW unit_by_transport AS
> SELECT *
> FROM unit
> WHERE transportid IS NOT NULL AND type IS NOT NULL
> PRIMARY KEY (transportid, id)
> WITH CLUSTERING ORDER BY (id ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> {code}
> I cannot reproduce this but sometimes and somehow happened the same issue 
> (https://issues.apache.org/jira/browse/CASSANDRA-10910):
> {code}
> > SELECT transportid, id, type FROM unit_by_transport WHERE 
> > transportid=24f90d20-d61f-11e5-9d3c-8fc3ad6906e2 and 
> > id=99c05a70-d686-11e5-a169-97287061d5d1;
>  transportid  | id   
> | type
> --+--+--
>  24f90d20-d61f-11e5-9d3c-8fc3ad6906e2 | 99c05a70-d686-11e5-a169-97287061d5d1 
> | null
> (1 rows)
> > SELECT transportid, id, type FROM unit WHERE 
> > id=99c05a70-d686-11e5-a169-97287061d5d1;
>  transportid | id | type
> -++--
> (0 rows)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11198) Materialized view inconsistency

2016-02-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154931#comment-15154931
 ] 

Gábor Auth commented on CASSANDRA-11198:


Oh, the traced SELECT is enough?

> Materialized view inconsistency
> ---
>
> Key: CASSANDRA-11198
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11198
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Gábor Auth
>Assignee: Carl Yeksigian
>
> Here is a materialized view:
> {code}
> > DESCRIBE MATERIALIZED VIEW unit_by_transport ;
> CREATE MATERIALIZED VIEW unit_by_transport AS
> SELECT *
> FROM unit
> WHERE transportid IS NOT NULL AND type IS NOT NULL
> PRIMARY KEY (transportid, id)
> WITH CLUSTERING ORDER BY (id ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> {code}
> I cannot reproduce this but sometimes and somehow happened the same issue 
> (https://issues.apache.org/jira/browse/CASSANDRA-10910):
> {code}
> > SELECT transportid, id, type FROM unit_by_transport WHERE 
> > transportid=24f90d20-d61f-11e5-9d3c-8fc3ad6906e2 and 
> > id=99c05a70-d686-11e5-a169-97287061d5d1;
>  transportid  | id   
> | type
> --+--+--
>  24f90d20-d61f-11e5-9d3c-8fc3ad6906e2 | 99c05a70-d686-11e5-a169-97287061d5d1 
> | null
> (1 rows)
> > SELECT transportid, id, type FROM unit WHERE 
> > id=99c05a70-d686-11e5-a169-97287061d5d1;
>  transportid | id | type
> -++--
> (0 rows)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11199) rolling_upgrade_with_internode_ssl_test flaps, timing out waiting for node to start

2016-02-19 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154925#comment-15154925
 ] 

Russ Hatch commented on CASSANDRA-11199:


I *think* this might be another manifestation of the yaml config problem where 
UDFs are being enabled on clusters that aren't on a UDF-supporting version (or 
maybe in mixed-version clusters where some nodes aren't on a UDF-supporting 
version). Not 100% sure on that, however.

> rolling_upgrade_with_internode_ssl_test flaps, timing out waiting for node to 
> start
> ---
>
> Key: CASSANDRA-11199
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11199
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Assignee: DS Test Eng
>
> Here's an example of this failure:
> http://cassci.datastax.com/job/upgrade_tests-all/9/testReport/junit/upgrade_tests.upgrade_through_versions_test/ProtoV4Upgrade_AllVersions_RandomPartitioner_EndsAt_Trunk_HEAD/rolling_upgrade_with_internode_ssl_test/
> And here are the two particular test I've seen flap:
> http://cassci.datastax.com/job/upgrade_tests-all/9/testReport/upgrade_tests.upgrade_through_versions_test/ProtoV4Upgrade_AllVersions_RandomPartitioner_EndsAt_Trunk_HEAD/rolling_upgrade_with_internode_ssl_test/history/
> http://cassci.datastax.com/job/upgrade_tests-all/9/testReport/upgrade_tests.upgrade_through_versions_test/ProtoV4Upgrade_AllVersions_RandomPartitioner_EndsAt_Trunk_HEAD/rolling_upgrade_with_internode_ssl_test/history/
> I haven't reproduced this locally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11197) upgrade bootstrap tests flap when migration tasks fail

2016-02-19 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154914#comment-15154914
 ] 

Russ Hatch commented on CASSANDRA-11197:


I think this should probably get escalated for a closer look from a dev. With 
active log scanning now in place, it appears this is cropping up right after 
trying to bootstrap a new node during an upgrade. Seems it could be a 
legitimate problem.

> upgrade bootstrap tests flap when migration tasks fail
> --
>
> Key: CASSANDRA-11197
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11197
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Assignee: DS Test Eng
>  Labels: dtest
>
> I've seen these tests flap:
> {code}
> upgrade_tests/upgrade_through_versions_test.py:ProtoV4Upgrade_3_1_UpTo_3_2_HEAD.bootstrap_test
> upgrade_tests/upgrade_through_versions_test.py:ProtoV4Upgrade_3_3_UpTo_Trunk_HEAD.bootstrap_test
> upgrade_tests/upgrade_through_versions_test.py:ProtoV3Upgrade_3_0_UpTo_3_1_HEAD.bootstrap_multidc_test
> upgrade_tests/upgrade_through_versions_test.py:ProtoV3Upgrade_3_2_UpTo_3_3_HEAD.bootstrap_multidc_test
> upgrade_tests/upgrade_through_versions_test.py:ProtoV3Upgrade_3_3_UpTo_Trunk_HEAD.bootstrap_multidc_test
> upgrade_tests/upgrade_through_versions_test.py:ProtoV4Upgrade_3_3_UpTo_Trunk_HEAD.bootstrap_multidc_test
> {code}
> There may be more upgrade paths that flap, I'm not sure. All the failures 
> I've seen look like this:
> {code}
> Unexpected error in node5 node log: ['ERROR [main] 2016-02-18 20:05:13,012 
> MigrationManager.java:164 - Migration task failed to complete\nERROR [main] 
> 2016-02-18 20:05:14,012 MigrationManager.java:164 - Migration task failed to 
> complete']
> {code}
> [~rhatch] Do these look familiar at all?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8110) Make streaming backwards compatible

2016-02-19 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-8110:
---
Labels: gsoc2016 mentor  (was: )

> Make streaming backwards compatible
> ---
>
> Key: CASSANDRA-8110
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8110
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Marcus Eriksson
>  Labels: gsoc2016, mentor
> Fix For: 3.x
>
>
> To be able to seamlessly upgrade clusters we need to make it possible to 
> stream files between nodes with different StreamMessage.CURRENT_VERSION



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8928) Add downgradesstables

2016-02-19 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-8928:
---
Labels: gsoc2016 mentor  (was: )

> Add downgradesstables
> -
>
> Key: CASSANDRA-8928
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8928
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Tools
>Reporter: Jeremy Hanna
>Priority: Minor
>  Labels: gsoc2016, mentor
>
> As mentioned in other places such as CASSANDRA-8047 and in the wild, 
> sometimes you need to go back.  A downgrade sstables utility would be nice 
> for a lot of reasons and I don't know that supporting going back to the 
> previous major version format would be too much code since we already support 
> reading the previous version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-5772) Support streaming of SSTables created in older version's format

2016-02-19 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-5772:
---
Component/s: Streaming and Messaging

> Support streaming of SSTables created in older version's format
> ---
>
> Key: CASSANDRA-5772
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5772
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Affects Versions: 2.0 beta 1
>Reporter: Yuki Morishita
>Assignee: Yuki Morishita
>Priority: Minor
>  Labels: streaming
> Fix For: 2.0 beta 2
>
> Attachments: 0001-support-streaming-sstable-of-older-versions.patch
>
>
> New streaming protocol is capable of sending and receiving older SSTables.
> Implement the ability to stream older versions so that we can avoid error 
> like the one described in CASSANDRA-5104.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8343) Secondary index creation causes moves/bootstraps to fail

2016-02-19 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154892#comment-15154892
 ] 

Paulo Motta commented on CASSANDRA-8343:


Thanks for the input [~slebresne]!

I created a simpler version of the patch that does not involve adding a new 
message to the streaming protocol but can tolerate an index rebuild that takes 
longer than {{streaming_socket_timeout}}.

The basic idea is to tolerate up to 3 socket timeouts on the sender side while 
it is in the {{WAIT_COMPLETE}} state, giving the receiver a total of 
{{3 * streaming_socket_timeout}} to process the data (3 hours with the current 
default), which IMO should be sufficient for most scenarios. If there are more 
than 3 consecutive socket timeouts, an informative message is logged asking the 
user to increase the value of {{streaming_socket_timeout}} (as suggested by 
Sylvain) and the stream session is failed. Below is an example of this new 
approach:

{noformat}
debug.log:DEBUG [STREAM-OUT-/127.0.0.2] 2016-02-19 18:00:09,947 
ConnectionHandler.java:351 - [Stream #c2b5c6c0-d74b-11e5-a490-dfb88388eec7] 
Sending Complete
debug.log:DEBUG [STREAM-IN-/127.0.0.2] 2016-02-19 18:00:10,948 
StreamSession.java:221 - [Stream #c2b5c6c0-d74b-11e5-a490-dfb88388eec7] 
Received socket timeout but will ignore it because stream session is on 
WAIT_COMPLETE state, so other peer might be still processing received data 
before replying to this node. Ignored 1 out of 3 times.
debug.log:DEBUG [STREAM-IN-/127.0.0.2] 2016-02-19 18:00:11,948 
StreamSession.java:221 - [Stream #c2b5c6c0-d74b-11e5-a490-dfb88388eec7] 
Received socket timeout but will ignore it because stream session is on 
WAIT_COMPLETE state, so other peer might be still processing received data 
before replying to this node. Ignored 2 out of 3 times.
debug.log:DEBUG [STREAM-IN-/127.0.0.2] 2016-02-19 18:00:12,096 
ConnectionHandler.java:273 - [Stream #c2b5c6c0-d74b-11e5-a490-dfb88388eec7] 
Received Complete
debug.log:DEBUG [STREAM-IN-/127.0.0.2] 2016-02-19 18:00:12,096 
ConnectionHandler.java:112 - [Stream #c2b5c6c0-d74b-11e5-a490-dfb88388eec7] 
Closing stream connection handler on /127.0.0.2
{noformat}

IMO this should be a good enough approach for the time being and we can revisit 
adding a new {{KeepAlive}} message if necessary when bumping the streaming 
message version later. We could also revisit that in the context of 
CASSANDRA-8621 (streaming retry on socket timeout).

I also close the outgoing message handler on the sender side after the 
"complete" message is sent, and similarly close the incoming message handler 
on the receiving side after the "complete" message is received, since they are 
no longer necessary.

Below are the new patch and tests:
||2.2||dtest||
|[branch|https://github.com/apache/cassandra/compare/cassandra-2.2...pauloricardomg:2.2-8343]|[branch|https://github.com/riptano/cassandra-dtest/compare/master...pauloricardomg:8343]|
|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-2.2-8343-testall/lastCompletedBuild/testReport/]|
|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-2.2-8343-dtest/lastCompletedBuild/testReport/]|

WDYT of this approach [~yukim]?

> Secondary index creation causes moves/bootstraps to fail
> 
>
> Key: CASSANDRA-8343
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8343
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Michael Frisch
>Assignee: Paulo Motta
>
> Node moves/bootstraps are failing if the stream timeout is set to a value in 
> which secondary index creation cannot complete.  This happens because at the 
> end of the very last stream the StreamInSession.closeIfFinished() function 
> calls maybeBuildSecondaryIndexes on every column family.  If the stream time 
> + all CF's index creation takes longer than your stream timeout then the 
> socket closes from the sender's side, the receiver of the stream tries to 
> write to said socket because it's not null, an IOException is thrown but not 
> caught in closeIfFinished(), the exception is caught somewhere and not 
> logged, AbstractStreamSession.close() is never called, and the CountDownLatch 
> is never decremented.  This causes the move/bootstrap to continue forever 
> until the node is restarted.
> This problem of stream time + secondary index creation time exists on 
> decommissioning/unbootstrap as well but since it's on the sending side the 
> timeout triggers the onFailure() callback which does decrement the 
> CountDownLatch leading to completion.
> A cursory glance at the 2.0 code leads me to believe this problem would exist 
> there as well.
> Temporary workaround: set a really high/infinite stream timeout.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11199) rolling_upgrade_with_internode_ssl_test flaps, timing out waiting for node to start

2016-02-19 Thread Jim Witschey (JIRA)
Jim Witschey created CASSANDRA-11199:


 Summary: rolling_upgrade_with_internode_ssl_test flaps, timing out 
waiting for node to start
 Key: CASSANDRA-11199
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11199
 Project: Cassandra
  Issue Type: Bug
Reporter: Jim Witschey
Assignee: DS Test Eng


Here's an example of this failure:

http://cassci.datastax.com/job/upgrade_tests-all/9/testReport/junit/upgrade_tests.upgrade_through_versions_test/ProtoV4Upgrade_AllVersions_RandomPartitioner_EndsAt_Trunk_HEAD/rolling_upgrade_with_internode_ssl_test/

And here are the two particular tests I've seen flap:

http://cassci.datastax.com/job/upgrade_tests-all/9/testReport/upgrade_tests.upgrade_through_versions_test/ProtoV4Upgrade_AllVersions_RandomPartitioner_EndsAt_Trunk_HEAD/rolling_upgrade_with_internode_ssl_test/history/
http://cassci.datastax.com/job/upgrade_tests-all/9/testReport/upgrade_tests.upgrade_through_versions_test/ProtoV4Upgrade_AllVersions_RandomPartitioner_EndsAt_Trunk_HEAD/rolling_upgrade_with_internode_ssl_test/history/

I haven't reproduced this locally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11198) Materialized view inconsistency

2016-02-19 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154884#comment-15154884
 ] 

Carl Yeksigian commented on CASSANDRA-11198:


{quote}
(from CASSANDRA-10910)
I cannot reproduce it from console.

I'm using the cluster through the Datastax Java Driver, the "TRACING ON;" is 
the rs.getExecutionInfo() or something else?
{quote}

In cqlsh (which it looks like you used to generate that output), you can just 
run that line to start tracing and then execute the {{SELECT}} query again.

If not, you can use [the driver with {{.enableTracing()}} and 
{{.getExecutionInfo()}}|http://docs.datastax.com/en/developer/java-driver/2.1/java-driver/tracing_t.html].
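
For reference, a minimal sketch of the driver option with tracing enabled 
(the contact point and keyspace below are placeholders; the bound values are 
the IDs from the description):

{code}
import java.util.UUID;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ExecutionInfo;
import com.datastax.driver.core.QueryTrace;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

public class TraceSelectExample
{
    public static void main(String[] args)
    {
        // "127.0.0.1" and "my_keyspace" are placeholders for the real cluster.
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("my_keyspace"))
        {
            // Enable tracing on the statement that shows the inconsistency.
            Statement stmt = new SimpleStatement(
                    "SELECT transportid, id, type FROM unit_by_transport WHERE transportid = ? AND id = ?",
                    UUID.fromString("24f90d20-d61f-11e5-9d3c-8fc3ad6906e2"),
                    UUID.fromString("99c05a70-d686-11e5-a169-97287061d5d1"))
                .enableTracing();

            ResultSet rs = session.execute(stmt);
            ExecutionInfo info = rs.getExecutionInfo();
            QueryTrace trace = info.getQueryTrace(); // trace events are written asynchronously server-side

            for (QueryTrace.Event event : trace.getEvents())
                System.out.println(event.getDescription() + " on " + event.getSource());
        }
    }
}
{code}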

> Materialized view inconsistency
> ---
>
> Key: CASSANDRA-11198
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11198
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Gábor Auth
>Assignee: Carl Yeksigian
>
> Here is a materialized view:
> {code}
> > DESCRIBE MATERIALIZED VIEW unit_by_transport ;
> CREATE MATERIALIZED VIEW unit_by_transport AS
> SELECT *
> FROM unit
> WHERE transportid IS NOT NULL AND type IS NOT NULL
> PRIMARY KEY (transportid, id)
> WITH CLUSTERING ORDER BY (id ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> {code}
> I cannot reproduce this but sometimes and somehow happened the same issue 
> (https://issues.apache.org/jira/browse/CASSANDRA-10910):
> {code}
> > SELECT transportid, id, type FROM unit_by_transport WHERE 
> > transportid=24f90d20-d61f-11e5-9d3c-8fc3ad6906e2 and 
> > id=99c05a70-d686-11e5-a169-97287061d5d1;
>  transportid  | id   
> | type
> --+--+--
>  24f90d20-d61f-11e5-9d3c-8fc3ad6906e2 | 99c05a70-d686-11e5-a169-97287061d5d1 
> | null
> (1 rows)
> > SELECT transportid, id, type FROM unit WHERE 
> > id=99c05a70-d686-11e5-a169-97287061d5d1;
>  transportid | id | type
> -++--
> (0 rows)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-11198) Materialized view inconsistency

2016-02-19 Thread Carl Yeksigian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Yeksigian reassigned CASSANDRA-11198:
--

Assignee: Carl Yeksigian

> Materialized view inconsistency
> ---
>
> Key: CASSANDRA-11198
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11198
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Gábor Auth
>Assignee: Carl Yeksigian
>
> Here is a materialized view:
> {code}
> > DESCRIBE MATERIALIZED VIEW unit_by_transport ;
> CREATE MATERIALIZED VIEW unit_by_transport AS
> SELECT *
> FROM unit
> WHERE transportid IS NOT NULL AND type IS NOT NULL
> PRIMARY KEY (transportid, id)
> WITH CLUSTERING ORDER BY (id ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> {code}
> I cannot reproduce this but sometimes and somehow happened the same issue 
> (https://issues.apache.org/jira/browse/CASSANDRA-10910):
> {code}
> > SELECT transportid, id, type FROM unit_by_transport WHERE 
> > transportid=24f90d20-d61f-11e5-9d3c-8fc3ad6906e2 and 
> > id=99c05a70-d686-11e5-a169-97287061d5d1;
>  transportid  | id   
> | type
> --+--+--
>  24f90d20-d61f-11e5-9d3c-8fc3ad6906e2 | 99c05a70-d686-11e5-a169-97287061d5d1 
> | null
> (1 rows)
> > SELECT transportid, id, type FROM unit WHERE 
> > id=99c05a70-d686-11e5-a169-97287061d5d1;
>  transportid | id | type
> -++--
> (0 rows)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10910) Materialized view remained rows

2016-02-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154872#comment-15154872
 ] 

Gábor Auth commented on CASSANDRA-10910:


Ok!  :)

https://issues.apache.org/jira/browse/CASSANDRA-11198

> Materialized view remained rows
> ---
>
> Key: CASSANDRA-10910
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10910
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.0.0
>Reporter: Gábor Auth
>Assignee: Carl Yeksigian
> Fix For: 3.0.3, 3.3
>
>
> I've created a table and a materialized view.
> {code}
> > CREATE TABLE test (id text PRIMARY KEY, key text, value int);
> > CREATE MATERIALIZED VIEW test_view AS SELECT * FROM test WHERE key IS NOT 
> > NULL PRIMARY KEY(key, id);
> {code}
> I've put a value into the table:
> {code}
> > update test set key='key', value=1 where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
>  id | key | 1
> (1 rows)
>  key | id | value
> -++---
>  key | id | 1
> (1 rows)
> {code}
> I've updated the value without specified the key of the materialized view:
> {code}
> > update test set value=2 where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
>  id | key | 2
> (1 rows)
>  key | id | value
> -++---
>  key | id | 2
> (1 rows)
> {code}
> It works as I think...
> ...but I've updated the key of the materialized view:
> {code}
> > update test set key='newKey' where id='id';
> > select * from test; select * from test_view ;
>  id | key| value
> ++---
>  id | newKey | 2
> (1 rows)
>  key| id | value
> ++---
> key | id | 2
>  newKey | id | 2
> (2 rows)
> {code}
> ...I've updated the value of the row:
> {code}
> > update test set key='newKey', value=3 where id='id';
> > select * from test; select * from test_view ;
>  id | key| value
> ++---
>  id | newKey | 3
> (1 rows)
>  key| id | value
> ++---
> key | id | 2
>  newKey | id | 3
> (2 rows)
> {code}
> ...I've deleted the row by the id key:
> {code}
> > delete from test where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
> (0 rows)
>  key | id | value
> -++---
>  key | id | 2
> (1 rows)
> {code}
> Is it a bug?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11198) Materialized view inconsistency

2016-02-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gábor Auth updated CASSANDRA-11198:
---
Reproduced In:   (was: 3.3)

> Materialized view inconsistency
> ---
>
> Key: CASSANDRA-11198
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11198
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Gábor Auth
>
> Here is a materialized view:
> {code}
> > DESCRIBE MATERIALIZED VIEW unit_by_transport ;
> CREATE MATERIALIZED VIEW unit_by_transport AS
> SELECT *
> FROM unit
> WHERE transportid IS NOT NULL AND type IS NOT NULL
> PRIMARY KEY (transportid, id)
> WITH CLUSTERING ORDER BY (id ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> {code}
> I cannot reproduce this but sometimes and somehow happened the same issue 
> (https://issues.apache.org/jira/browse/CASSANDRA-10910):
> {code}
> > SELECT transportid, id, type FROM unit_by_transport WHERE 
> > transportid=24f90d20-d61f-11e5-9d3c-8fc3ad6906e2 and 
> > id=99c05a70-d686-11e5-a169-97287061d5d1;
>  transportid  | id   
> | type
> --+--+--
>  24f90d20-d61f-11e5-9d3c-8fc3ad6906e2 | 99c05a70-d686-11e5-a169-97287061d5d1 
> | null
> (1 rows)
> > SELECT transportid, id, type FROM unit WHERE 
> > id=99c05a70-d686-11e5-a169-97287061d5d1;
>  transportid | id | type
> -++--
> (0 rows)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11198) Materialized view inconsistency

2016-02-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gábor Auth updated CASSANDRA-11198:
---
Reproduced In: 3.3

> Materialized view inconsistency
> ---
>
> Key: CASSANDRA-11198
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11198
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Gábor Auth
>
> Here is a materialized view:
> {code}
> > DESCRIBE MATERIALIZED VIEW unit_by_transport ;
> CREATE MATERIALIZED VIEW unit_by_transport AS
> SELECT *
> FROM unit
> WHERE transportid IS NOT NULL AND type IS NOT NULL
> PRIMARY KEY (transportid, id)
> WITH CLUSTERING ORDER BY (id ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> {code}
> I cannot reproduce this but sometimes and somehow happened the same issue 
> (https://issues.apache.org/jira/browse/CASSANDRA-10910):
> {code}
> > SELECT transportid, id, type FROM unit_by_transport WHERE 
> > transportid=24f90d20-d61f-11e5-9d3c-8fc3ad6906e2 and 
> > id=99c05a70-d686-11e5-a169-97287061d5d1;
>  transportid  | id   
> | type
> --+--+--
>  24f90d20-d61f-11e5-9d3c-8fc3ad6906e2 | 99c05a70-d686-11e5-a169-97287061d5d1 
> | null
> (1 rows)
> > SELECT transportid, id, type FROM unit WHERE 
> > id=99c05a70-d686-11e5-a169-97287061d5d1;
>  transportid | id | type
> -++--
> (0 rows)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11198) Materialized view inconsistency

2016-02-19 Thread JIRA
Gábor Auth created CASSANDRA-11198:
--

 Summary: Materialized view inconsistency
 Key: CASSANDRA-11198
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11198
 Project: Cassandra
  Issue Type: Bug
Reporter: Gábor Auth


Here is a materialized view:
{code}
> DESCRIBE MATERIALIZED VIEW unit_by_transport ;

CREATE MATERIALIZED VIEW unit_by_transport AS
SELECT *
FROM unit
WHERE transportid IS NOT NULL AND type IS NOT NULL
PRIMARY KEY (transportid, id)
WITH CLUSTERING ORDER BY (id ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32', 'min_threshold': '4'}
AND compression = {'chunk_length_in_kb': '64', 'class': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND crc_check_chance = 1.0
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99PERCENTILE';
{code}

I cannot reproduce this, but sometimes and somehow the same issue happened 
(https://issues.apache.org/jira/browse/CASSANDRA-10910):
{code}
> SELECT transportid, id, type FROM unit_by_transport WHERE 
> transportid=24f90d20-d61f-11e5-9d3c-8fc3ad6906e2 and 
> id=99c05a70-d686-11e5-a169-97287061d5d1;

 transportid  | id   | 
type
--+--+--
 24f90d20-d61f-11e5-9d3c-8fc3ad6906e2 | 99c05a70-d686-11e5-a169-97287061d5d1 | 
null

(1 rows)

> SELECT transportid, id, type FROM unit WHERE 
> id=99c05a70-d686-11e5-a169-97287061d5d1;

 transportid | id | type
-++--

(0 rows)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10910) Materialized view remained rows

2016-02-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154865#comment-15154865
 ] 

Gábor Auth commented on CASSANDRA-10910:


I cannot reproduce it from the console.

I'm using the cluster through the DataStax Java Driver; is "TRACING ON;" the 
equivalent of rs.getExecutionInfo(), or something else?

> Materialized view remained rows
> ---
>
> Key: CASSANDRA-10910
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10910
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.0.0
>Reporter: Gábor Auth
>Assignee: Carl Yeksigian
> Fix For: 3.0.3, 3.3
>
>
> I've created a table and a materialized view.
> {code}
> > CREATE TABLE test (id text PRIMARY KEY, key text, value int);
> > CREATE MATERIALIZED VIEW test_view AS SELECT * FROM test WHERE key IS NOT 
> > NULL PRIMARY KEY(key, id);
> {code}
> I've put a value into the table:
> {code}
> > update test set key='key', value=1 where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
>  id | key | 1
> (1 rows)
>  key | id | value
> -++---
>  key | id | 1
> (1 rows)
> {code}
> I've updated the value without specified the key of the materialized view:
> {code}
> > update test set value=2 where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
>  id | key | 2
> (1 rows)
>  key | id | value
> -++---
>  key | id | 2
> (1 rows)
> {code}
> It works as I think...
> ...but I've updated the key of the materialized view:
> {code}
> > update test set key='newKey' where id='id';
> > select * from test; select * from test_view ;
>  id | key| value
> ++---
>  id | newKey | 2
> (1 rows)
>  key| id | value
> ++---
> key | id | 2
>  newKey | id | 2
> (2 rows)
> {code}
> ...I've updated the value of the row:
> {code}
> > update test set key='newKey', value=3 where id='id';
> > select * from test; select * from test_view ;
>  id | key| value
> ++---
>  id | newKey | 3
> (1 rows)
>  key| id | value
> ++---
> key | id | 2
>  newKey | id | 3
> (2 rows)
> {code}
> ...I've deleted the row by the id key:
> {code}
> > delete from test where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
> (0 rows)
>  key | id | value
> -++---
>  key | id | 2
> (1 rows)
> {code}
> Is it a bug?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-10910) Materialized view remained rows

2016-02-19 Thread Carl Yeksigian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Yeksigian resolved CASSANDRA-10910.

Resolution: Fixed

Actually, since there has been a fix released for the more basic issue (this 
one isn't as easily reproducible), let's create a new ticket to track it.

> Materialized view remained rows
> ---
>
> Key: CASSANDRA-10910
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10910
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.0.0
>Reporter: Gábor Auth
>Assignee: Carl Yeksigian
> Fix For: 3.3, 3.0.3
>
>
> I've created a table and a materialized view.
> {code}
> > CREATE TABLE test (id text PRIMARY KEY, key text, value int);
> > CREATE MATERIALIZED VIEW test_view AS SELECT * FROM test WHERE key IS NOT 
> > NULL PRIMARY KEY(key, id);
> {code}
> I've put a value into the table:
> {code}
> > update test set key='key', value=1 where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
>  id | key | 1
> (1 rows)
>  key | id | value
> -++---
>  key | id | 1
> (1 rows)
> {code}
> I've updated the value without specified the key of the materialized view:
> {code}
> > update test set value=2 where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
>  id | key | 2
> (1 rows)
>  key | id | value
> -++---
>  key | id | 2
> (1 rows)
> {code}
> It works as I think...
> ...but I've updated the key of the materialized view:
> {code}
> > update test set key='newKey' where id='id';
> > select * from test; select * from test_view ;
>  id | key| value
> ++---
>  id | newKey | 2
> (1 rows)
>  key| id | value
> ++---
> key | id | 2
>  newKey | id | 2
> (2 rows)
> {code}
> ...I've updated the value of the row:
> {code}
> > update test set key='newKey', value=3 where id='id';
> > select * from test; select * from test_view ;
>  id | key| value
> ++---
>  id | newKey | 3
> (1 rows)
>  key| id | value
> ++---
> key | id | 2
>  newKey | id | 3
> (2 rows)
> {code}
> ...I've deleted the row by the id key:
> {code}
> > delete from test where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
> (0 rows)
>  key | id | value
> -++---
>  key | id | 2
> (1 rows)
> {code}
> Is it a bug?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-10910) Materialized view remained rows

2016-02-19 Thread Carl Yeksigian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Yeksigian reopened CASSANDRA-10910:


> Materialized view remained rows
> ---
>
> Key: CASSANDRA-10910
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10910
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.0.0
>Reporter: Gábor Auth
>Assignee: Carl Yeksigian
> Fix For: 3.0.3, 3.3
>
>
> I've created a table and a materialized view.
> {code}
> > CREATE TABLE test (id text PRIMARY KEY, key text, value int);
> > CREATE MATERIALIZED VIEW test_view AS SELECT * FROM test WHERE key IS NOT 
> > NULL PRIMARY KEY(key, id);
> {code}
> I've put a value into the table:
> {code}
> > update test set key='key', value=1 where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
>  id | key | 1
> (1 rows)
>  key | id | value
> -++---
>  key | id | 1
> (1 rows)
> {code}
> I've updated the value without specified the key of the materialized view:
> {code}
> > update test set value=2 where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
>  id | key | 2
> (1 rows)
>  key | id | value
> -++---
>  key | id | 2
> (1 rows)
> {code}
> It works as I think...
> ...but I've updated the key of the materialized view:
> {code}
> > update test set key='newKey' where id='id';
> > select * from test; select * from test_view ;
>  id | key| value
> ++---
>  id | newKey | 2
> (1 rows)
>  key| id | value
> ++---
> key | id | 2
>  newKey | id | 2
> (2 rows)
> {code}
> ...I've updated the value of the row:
> {code}
> > update test set key='newKey', value=3 where id='id';
> > select * from test; select * from test_view ;
>  id | key| value
> ++---
>  id | newKey | 3
> (1 rows)
>  key| id | value
> ++---
> key | id | 2
>  newKey | id | 3
> (2 rows)
> {code}
> ...I've deleted the row by the id key:
> {code}
> > delete from test where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
> (0 rows)
>  key | id | value
> -++---
>  key | id | 2
> (1 rows)
> {code}
> Is it a bug?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10910) Materialized view remained rows

2016-02-19 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154856#comment-15154856
 ] 

Carl Yeksigian commented on CASSANDRA-10910:


Can you try tracing the query and post the result?
{noformat}
TRACING ON;
{noformat}

> Materialized view remained rows
> ---
>
> Key: CASSANDRA-10910
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10910
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.0.0
>Reporter: Gábor Auth
>Assignee: Carl Yeksigian
> Fix For: 3.0.3, 3.3
>
>
> I've created a table and a materialized view.
> {code}
> > CREATE TABLE test (id text PRIMARY KEY, key text, value int);
> > CREATE MATERIALIZED VIEW test_view AS SELECT * FROM test WHERE key IS NOT 
> > NULL PRIMARY KEY(key, id);
> {code}
> I've put a value into the table:
> {code}
> > update test set key='key', value=1 where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
>  id | key | 1
> (1 rows)
>  key | id | value
> -++---
>  key | id | 1
> (1 rows)
> {code}
> I've updated the value without specified the key of the materialized view:
> {code}
> > update test set value=2 where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
>  id | key | 2
> (1 rows)
>  key | id | value
> -++---
>  key | id | 2
> (1 rows)
> {code}
> It works as I think...
> ...but I've updated the key of the materialized view:
> {code}
> > update test set key='newKey' where id='id';
> > select * from test; select * from test_view ;
>  id | key| value
> ++---
>  id | newKey | 2
> (1 rows)
>  key| id | value
> ++---
> key | id | 2
>  newKey | id | 2
> (2 rows)
> {code}
> ...I've updated the value of the row:
> {code}
> > update test set key='newKey', value=3 where id='id';
> > select * from test; select * from test_view ;
>  id | key| value
> ++---
>  id | newKey | 3
> (1 rows)
>  key| id | value
> ++---
> key | id | 2
>  newKey | id | 3
> (2 rows)
> {code}
> ...I've deleted the row by the id key:
> {code}
> > delete from test where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
> (0 rows)
>  key | id | value
> -++---
>  key | id | 2
> (1 rows)
> {code}
> Is it a bug?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10371) Decommissioned nodes can remain in gossip

2016-02-19 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton updated CASSANDRA-10371:
--
Reviewer: Jason Brown

> Decommissioned nodes can remain in gossip
> -
>
> Key: CASSANDRA-10371
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10371
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Brandon Williams
>Assignee: Joel Knighton
>Priority: Minor
>
> This may apply to other dead states as well.  Dead states should be expired 
> after 3 days.  In the case of decom we attach a timestamp to let the other 
> nodes know when it should be expired.  It has been observed that sometimes a 
> subset of nodes in the cluster never expire the state, and through heap 
> analysis of these nodes it is revealed that the epstate.isAlive check returns 
> true when it should return false, which would allow the state to be evicted.  
> This may have been affected by CASSANDRA-8336.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11166) Inconsistent behavior on Tombstones

2016-02-19 Thread Anubhav Kale (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154851#comment-15154851
 ] 

Anubhav Kale commented on CASSANDRA-11166:
--

Any thoughts on this ?

> Inconsistent behavior on Tombstones
> ---
>
> Key: CASSANDRA-11166
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11166
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Anubhav Kale
>Priority: Minor
>
> I noticed an inconsistent behavior on deletes. Not sure if it is intentional. 
> The summary is:
> If a table is created with TTL or if rows are inserted in a table using TTL, 
> when its time to expire the row, tombstone is generated (as expected) and 
> cfstats, cqlsh tracing and sstable2json show it.
> However, if one executes a delete from table query followed by a select *, 
> neither cql tracing nor cfstats shows a tombstone being present. However, 
> sstable2json shows a tombstone.
> Is this situation treated differently on purpose ? In such a situation, does 
> Cassandra not have to scan tombstones (seems odd) ?
> Also as a data point, if one executes a delete  from table, 
> cqlsh tracing, nodetool cfstats, and sstable2json all show a consistent 
> result (tombstone being present).
> As a end user, I'd assume that deleting a row either via TTL or explicitly 
> should show me a tombstone. Is this expectation reasonable ? If not, can this 
> behavior be clearly documented ?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11197) upgrade bootstrap tests flap when migration tasks fail

2016-02-19 Thread Jim Witschey (JIRA)
Jim Witschey created CASSANDRA-11197:


 Summary: upgrade bootstrap tests flap when migration tasks fail
 Key: CASSANDRA-11197
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11197
 Project: Cassandra
  Issue Type: Bug
Reporter: Jim Witschey
Assignee: DS Test Eng


I've seen these tests flap:

{code}
upgrade_tests/upgrade_through_versions_test.py:ProtoV4Upgrade_3_1_UpTo_3_2_HEAD.bootstrap_test
upgrade_tests/upgrade_through_versions_test.py:ProtoV4Upgrade_3_3_UpTo_Trunk_HEAD.bootstrap_test

upgrade_tests/upgrade_through_versions_test.py:ProtoV3Upgrade_3_0_UpTo_3_1_HEAD.bootstrap_multidc_test
upgrade_tests/upgrade_through_versions_test.py:ProtoV3Upgrade_3_2_UpTo_3_3_HEAD.bootstrap_multidc_test
upgrade_tests/upgrade_through_versions_test.py:ProtoV3Upgrade_3_3_UpTo_Trunk_HEAD.bootstrap_multidc_test
upgrade_tests/upgrade_through_versions_test.py:ProtoV4Upgrade_3_3_UpTo_Trunk_HEAD.bootstrap_multidc_test
{code}

There may be more upgrade paths that flap; I'm not sure. All the failures I've 
seen look like this:

{code}
Unexpected error in node5 node log: ['ERROR [main] 2016-02-18 20:05:13,012 
MigrationManager.java:164 - Migration task failed to complete\nERROR [main] 
2016-02-18 20:05:14,012 MigrationManager.java:164 - Migration task failed to 
complete']
{code}

[~rhatch] Do these look familiar at all?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11168) Hint Metrics are updated even if hinted_hand-offs=false

2016-02-19 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-11168:

Reviewer: Joel Knighton

> Hint Metrics are updated even if hinted_hand-offs=false
> ---
>
> Key: CASSANDRA-11168
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11168
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Anubhav Kale
>Assignee: Anubhav Kale
>Priority: Minor
> Attachments: 0001-Hinted-Handoff-Fix.patch
>
>
> In our PROD logs, we noticed a lot of hint metrics even though we have 
> disabled hinted handoffs.
> The reason is StorageProxy.ShouldHint has an inverted if condition. 
> We should also wrap the if (hintWindowExpired) block in if 
> (DatabaseDescriptor.hintedHandoffEnabled()).
> The fix is easy, and I can provide a patch.
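
As a rough illustration of the guard described above, a simplified sketch (not 
the attached patch; the real {{StorageProxy.shouldHint}} also updates the hint 
metrics and handles further cases):

{code}
import java.net.InetAddress;

import org.apache.cassandra.config.DatabaseDescriptor;
import org.apache.cassandra.gms.Gossiper;

public final class ShouldHintSketch
{
    // Simplified sketch of the intended behaviour: nothing hint-related,
    // including the hint-window check that bumps hint metrics, should run
    // when hinted handoff is disabled.
    public static boolean shouldHint(InetAddress ep)
    {
        if (!DatabaseDescriptor.hintedHandoffEnabled())
            return false;

        boolean hintWindowExpired =
            Gossiper.instance.getEndpointDowntime(ep) > DatabaseDescriptor.getMaxHintWindow();
        return !hintWindowExpired;
    }
}
{code}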



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11168) Hint Metrics are updated even if hinted_hand-offs=false

2016-02-19 Thread Anubhav Kale (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anubhav Kale updated CASSANDRA-11168:
-
Attachment: 0001-Hinted-Handoff-Fix.patch

> Hint Metrics are updated even if hinted_hand-offs=false
> ---
>
> Key: CASSANDRA-11168
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11168
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Anubhav Kale
>Assignee: Anubhav Kale
>Priority: Minor
> Attachments: 0001-Hinted-Handoff-Fix.patch
>
>
> In our PROD logs, we noticed a lot of hint metrics even though we have 
> disabled hinted handoffs.
> The reason is StorageProxy.ShouldHint has an inverted if condition. 
> We should also wrap the if (hintWindowExpired) block in if 
> (DatabaseDescriptor.hintedHandoffEnabled()).
> The fix is easy, and I can provide a patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11196) tuple_notation_test upgrade tests flap

2016-02-19 Thread Jim Witschey (JIRA)
Jim Witschey created CASSANDRA-11196:


 Summary: tuple_notation_test upgrade tests flap
 Key: CASSANDRA-11196
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11196
 Project: Cassandra
  Issue Type: Bug
Reporter: Jim Witschey
Assignee: DS Test Eng


{{tuple_notation_test}} in the {{upgrade_tests.cql_tests}} module flaps on a 
number of different upgrade paths. Here are some of the tests that flap:

{code}
upgrade_tests/cql_tests.py:TestCQLNodes2RF1_2_1_UpTo_2_2_HEAD.tuple_notation_test
upgrade_tests/cql_tests.py:TestCQLNodes2RF1_2_1_UpTo_2_2_HEAD.tuple_notation_test
upgrade_tests/cql_tests.py:TestCQLNodes3RF3_2_1_UpTo_2_2_HEAD.tuple_notation_test
upgrade_tests/cql_tests.py:TestCQLNodes3RF3_2_1_UpTo_2_2_HEAD.tuple_notation_test
upgrade_tests/cql_tests.py:TestCQLNodes3RF3_2_2_HEAD_UpTo_Trunk.tuple_notation_test
upgrade_tests/cql_tests.py:TestCQLNodes3RF3_2_2_HEAD_UpTo_Trunk.tuple_notation_test
{code}

Here's an example failure:

http://cassci.datastax.com/job/upgrade_tests-all/9/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_2_1_UpTo_2_2_HEAD/tuple_notation_test/

All the failures I've seen fail with this error:

{code}

{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7276) Include keyspace and table names in logs where possible

2016-02-19 Thread Anubhav Kale (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154819#comment-15154819
 ] 

Anubhav Kale commented on CASSANDRA-7276:
-

Submitted. I tested this locally by forcing exceptions through code changes. 

> Include keyspace and table names in logs where possible
> ---
>
> Key: CASSANDRA-7276
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7276
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tyler Hobbs
>Priority: Minor
>  Labels: bootcamp, lhf
> Fix For: 2.1.x
>
> Attachments: 0001-Better-Logging-for-KS-and-CF.patch, 
> 0001-Logging-for-Keyspace-and-Tables.patch, 2.1-CASSANDRA-7276-v1.txt, 
> cassandra-2.1-7276-compaction.txt, cassandra-2.1-7276.txt, 
> cassandra-2.1.9-7276-v2.txt, cassandra-2.1.9-7276.txt
>
>
> Most error messages and stacktraces give you no clue as to what keyspace or 
> table was causing the problem.  For example:
> {noformat}
> ERROR [MutationStage:61648] 2014-05-20 12:05:45,145 CassandraDaemon.java 
> (line 198) Exception in thread Thread[MutationStage:61648,5,main]
> java.lang.IllegalArgumentException
> at java.nio.Buffer.limit(Unknown Source)
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.getBytes(AbstractCompositeType.java:63)
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.getWithShortLength(AbstractCompositeType.java:72)
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:98)
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:35)
> at 
> edu.stanford.ppl.concurrent.SnapTreeMap$1.compareTo(SnapTreeMap.java:538)
> at 
> edu.stanford.ppl.concurrent.SnapTreeMap.attemptUpdate(SnapTreeMap.java:1108)
> at 
> edu.stanford.ppl.concurrent.SnapTreeMap.updateUnderRoot(SnapTreeMap.java:1059)
> at edu.stanford.ppl.concurrent.SnapTreeMap.update(SnapTreeMap.java:1023)
> at 
> edu.stanford.ppl.concurrent.SnapTreeMap.putIfAbsent(SnapTreeMap.java:985)
> at 
> org.apache.cassandra.db.AtomicSortedColumns$Holder.addColumn(AtomicSortedColumns.java:328)
> at 
> org.apache.cassandra.db.AtomicSortedColumns.addAllWithSizeDelta(AtomicSortedColumns.java:200)
> at org.apache.cassandra.db.Memtable.resolve(Memtable.java:226)
> at org.apache.cassandra.db.Memtable.put(Memtable.java:173)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:893)
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:368)
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:333)
> at org.apache.cassandra.db.RowMutation.apply(RowMutation.java:206)
> at 
> org.apache.cassandra.db.RowMutationVerbHandler.doVerb(RowMutationVerbHandler.java:56)
> at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:60)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> at java.lang.Thread.run(Unknown Source)
> {noformat}
> We should try to include info on the keyspace and column family in the error 
> messages or logs whenever possible.  This includes reads, writes, 
> compactions, flushes, repairs, and probably more.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7276) Include keyspace and table names in logs where possible

2016-02-19 Thread Anubhav Kale (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anubhav Kale updated CASSANDRA-7276:

Attachment: 0001-Better-Logging-for-KS-and-CF.patch

> Include keyspace and table names in logs where possible
> ---
>
> Key: CASSANDRA-7276
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7276
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tyler Hobbs
>Priority: Minor
>  Labels: bootcamp, lhf
> Fix For: 2.1.x
>
> Attachments: 0001-Better-Logging-for-KS-and-CF.patch, 
> 0001-Logging-for-Keyspace-and-Tables.patch, 2.1-CASSANDRA-7276-v1.txt, 
> cassandra-2.1-7276-compaction.txt, cassandra-2.1-7276.txt, 
> cassandra-2.1.9-7276-v2.txt, cassandra-2.1.9-7276.txt
>
>
> Most error messages and stacktraces give you no clue as to what keyspace or 
> table was causing the problem.  For example:
> {noformat}
> ERROR [MutationStage:61648] 2014-05-20 12:05:45,145 CassandraDaemon.java 
> (line 198) Exception in thread Thread[MutationStage:61648,5,main]
> java.lang.IllegalArgumentException
> at java.nio.Buffer.limit(Unknown Source)
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.getBytes(AbstractCompositeType.java:63)
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.getWithShortLength(AbstractCompositeType.java:72)
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:98)
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:35)
> at 
> edu.stanford.ppl.concurrent.SnapTreeMap$1.compareTo(SnapTreeMap.java:538)
> at 
> edu.stanford.ppl.concurrent.SnapTreeMap.attemptUpdate(SnapTreeMap.java:1108)
> at 
> edu.stanford.ppl.concurrent.SnapTreeMap.updateUnderRoot(SnapTreeMap.java:1059)
> at edu.stanford.ppl.concurrent.SnapTreeMap.update(SnapTreeMap.java:1023)
> at 
> edu.stanford.ppl.concurrent.SnapTreeMap.putIfAbsent(SnapTreeMap.java:985)
> at 
> org.apache.cassandra.db.AtomicSortedColumns$Holder.addColumn(AtomicSortedColumns.java:328)
> at 
> org.apache.cassandra.db.AtomicSortedColumns.addAllWithSizeDelta(AtomicSortedColumns.java:200)
> at org.apache.cassandra.db.Memtable.resolve(Memtable.java:226)
> at org.apache.cassandra.db.Memtable.put(Memtable.java:173)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:893)
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:368)
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:333)
> at org.apache.cassandra.db.RowMutation.apply(RowMutation.java:206)
> at 
> org.apache.cassandra.db.RowMutationVerbHandler.doVerb(RowMutationVerbHandler.java:56)
> at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:60)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> at java.lang.Thread.run(Unknown Source)
> {noformat}
> We should try to include info on the keyspace and column family in the error 
> messages or logs whenever possible.  This includes reads, writes, 
> compactions, flushes, repairs, and probably more.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-10910) Materialized view remained rows

2016-02-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154787#comment-15154787
 ] 

Gábor Auth edited comment on CASSANDRA-10910 at 2/19/16 8:28 PM:
-

"Consistency level set to ALL."

It was the highest CL... :(

How can I repair the materialized view? I've started a full repair now:
{code}
[2016-02-19 20:03:45,096] Starting repair command #211, repairing keyspace 
test20160215 with repair options (parallelism: parallel, primary range: false, 
incremental: false, job threads: 1, ColumnFamilies: [], dataCenters: [], hosts: 
[], # of ranges: 216)
{code}

Finished:
{code}
> /home/cassandra/datastax-ddc-3.3.0/bin/nodetool repair -full test20160215
[...]
[2016-02-19 20:26:32,057] Repair command #1 finished in 4 minutes 46 seconds
{code}

Same issue with ALL consistency level... :(


was (Author: gabor.auth):
"Consistency level set to ALL."

It was the highest CL... :(

How can I repair the materialized view? I've started a full repair now:
{code}
[2016-02-19 20:03:45,096] Starting repair command #211, repairing keyspace 
test20160215 with repair options (parallelism: parallel, primary range: false, 
incremental: false, job threads: 1, ColumnFamilies: [], dataCenters: [], hosts: 
[], # of ranges: 216)
{code}


> Materialized view remained rows
> ---
>
> Key: CASSANDRA-10910
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10910
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.0.0
>Reporter: Gábor Auth
>Assignee: Carl Yeksigian
> Fix For: 3.0.3, 3.3
>
>
> I've created a table and a materialized view.
> {code}
> > CREATE TABLE test (id text PRIMARY KEY, key text, value int);
> > CREATE MATERIALIZED VIEW test_view AS SELECT * FROM test WHERE key IS NOT 
> > NULL PRIMARY KEY(key, id);
> {code}
> I've put a value into the table:
> {code}
> > update test set key='key', value=1 where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
>  id | key | 1
> (1 rows)
>  key | id | value
> -++---
>  key | id | 1
> (1 rows)
> {code}
> I've updated the value without specifying the key of the materialized view:
> {code}
> > update test set value=2 where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
>  id | key | 2
> (1 rows)
>  key | id | value
> -++---
>  key | id | 2
> (1 rows)
> {code}
> It works as I think...
> ...but I've updated the key of the materialized view:
> {code}
> > update test set key='newKey' where id='id';
> > select * from test; select * from test_view ;
>  id | key| value
> ++---
>  id | newKey | 2
> (1 rows)
>  key| id | value
> ++---
> key | id | 2
>  newKey | id | 2
> (2 rows)
> {code}
> ...I've updated the value of the row:
> {code}
> > update test set key='newKey', value=3 where id='id';
> > select * from test; select * from test_view ;
>  id | key| value
> ++---
>  id | newKey | 3
> (1 rows)
>  key| id | value
> ++---
> key | id | 2
>  newKey | id | 3
> (2 rows)
> {code}
> ...I've deleted the row by the id key:
> {code}
> > delete from test where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
> (0 rows)
>  key | id | value
> -++---
>  key | id | 2
> (1 rows)
> {code}
> Is it a bug?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11195) static_columns_paging_test upgrade dtest flapping

2016-02-19 Thread Jim Witschey (JIRA)
Jim Witschey created CASSANDRA-11195:


 Summary: static_columns_paging_test upgrade dtest flapping
 Key: CASSANDRA-11195
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11195
 Project: Cassandra
  Issue Type: Bug
Reporter: Jim Witschey
Assignee: DS Test Eng


On some upgrade paths, {{static_columns_paging_test}} is flapping:

http://cassci.datastax.com/job/upgrade_tests-all/9/testReport/upgrade_tests.paging_test/TestPagingDataNodes2RF1_2_2_HEAD_UpTo_Trunk/static_columns_paging_test/history/
http://cassci.datastax.com/job/upgrade_tests-all/8/testReport/upgrade_tests.paging_test/TestPagingDataNodes2RF1_2_2_UpTo_Trunk/static_columns_paging_test/history/
http://cassci.datastax.com/job/upgrade_tests-all/8/testReport/upgrade_tests.paging_test/TestPagingDataNodes2RF1_2_2_UpTo_3_3_HEAD/static_columns_paging_test/history

The failures indicate there is missing data. I have not reproduced the failure 
locally. I've only seen the failures on 2-node clusters with RF=1, not on the 
3-node runs with RF=3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9935) Repair fails with RuntimeException

2016-02-19 Thread Jean-Francois Gosselin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154789#comment-15154789
 ] 

Jean-Francois Gosselin commented on CASSANDRA-9935:
---

We are doing range repair with https://github.com/spotify/cassandra-reaper. We 
don't use incremental repair. We also see the issue with: nodetool repair -pr

> Repair fails with RuntimeException
> --
>
> Key: CASSANDRA-9935
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9935
> Project: Cassandra
>  Issue Type: Bug
> Environment: C* 2.1.8, Debian Wheezy
>Reporter: mlowicki
>Assignee: Yuki Morishita
> Fix For: 2.1.x
>
> Attachments: db1.sync.lati.osa.cassandra.log, 
> db5.sync.lati.osa.cassandra.log, system.log.10.210.3.117, 
> system.log.10.210.3.221, system.log.10.210.3.230
>
>
> We had problems with slow repair in 2.1.7 (CASSANDRA-9702) but after upgrade 
> to 2.1.8 it started to work faster but now it fails with:
> {code}
> ...
> [2015-07-29 20:44:03,956] Repair session 23a811b0-3632-11e5-a93e-4963524a8bde 
> for range (-5474076923322749342,-5468600594078911162] finished
> [2015-07-29 20:44:03,957] Repair session 336f8740-3632-11e5-a93e-4963524a8bde 
> for range (-8631877858109464676,-8624040066373718932] finished
> [2015-07-29 20:44:03,957] Repair session 4ccd8430-3632-11e5-a93e-4963524a8bde 
> for range (-5372806541854279315,-5369354119480076785] finished
> [2015-07-29 20:44:03,957] Repair session 59f129f0-3632-11e5-a93e-4963524a8bde 
> for range (8166489034383821955,8168408930184216281] finished
> [2015-07-29 20:44:03,957] Repair session 6ae7a9a0-3632-11e5-a93e-4963524a8bde 
> for range (6084602890817326921,6088328703025510057] finished
> [2015-07-29 20:44:03,957] Repair session 8938e4a0-3632-11e5-a93e-4963524a8bde 
> for range (-781874602493000830,-781745173070807746] finished
> [2015-07-29 20:44:03,957] Repair command #4 finished
> error: nodetool failed, check server logs
> -- StackTrace --
> java.lang.RuntimeException: nodetool failed, check server logs
> at 
> org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:290)
> at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:202)
> {code}
> After running:
> {code}
> nodetool repair --partitioner-range --parallel --in-local-dc sync
> {code}
> Last records in logs regarding repair are:
> {code}
> INFO  [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - 
> Repair session 09ff9e40-3632-11e5-a93e-4963524a8bde for range 
> (-7695808664784761779,-7693529816291585568] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - 
> Repair session 17d8d860-3632-11e5-a93e-4963524a8bde for range 
> (806371695398849,8065203836608925992] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - 
> Repair session 23a811b0-3632-11e5-a93e-4963524a8bde for range 
> (-5474076923322749342,-5468600594078911162] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - 
> Repair session 336f8740-3632-11e5-a93e-4963524a8bde for range 
> (-8631877858109464676,-8624040066373718932] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - 
> Repair session 4ccd8430-3632-11e5-a93e-4963524a8bde for range 
> (-5372806541854279315,-5369354119480076785] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - 
> Repair session 59f129f0-3632-11e5-a93e-4963524a8bde for range 
> (8166489034383821955,8168408930184216281] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - 
> Repair session 6ae7a9a0-3632-11e5-a93e-4963524a8bde for range 
> (6084602890817326921,6088328703025510057] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - 
> Repair session 8938e4a0-3632-11e5-a93e-4963524a8bde for range 
> (-781874602493000830,-781745173070807746] finished
> {code}
> but a bit above I see (at least two times in attached log):
> {code}
> ERROR [Thread-173887] 2015-07-29 20:44:03,853 StorageService.java:2959 - 
> Repair session 1b07ea50-3608-11e5-a93e-4963524a8bde for range 
> (5765414319217852786,5781018794516851576] failed with error 
> org.apache.cassandra.exceptions.RepairException: [repair 
> #1b07ea50-3608-11e5-a93e-4963524a8bde on sync/entity_by_id2, 
> (5765414319217852786,5781018794516851576]] Validation failed in /10.195.15.162
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
> org.apache.cassandra.exceptions.RepairException: [repair 
> #1b07ea50-3608-11e5-a93e-4963524a8bde on sync/entity_by_id2, 
> (5765414319217852786,5781018794516851576]] Validation failed in /10.195.15.162
> at java.util.concurrent.FutureTask.report(FutureTask.java:122) 
> [na:1.7.0_80]
> at 

[jira] [Commented] (CASSANDRA-10910) Materialized view remained rows

2016-02-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154787#comment-15154787
 ] 

Gábor Auth commented on CASSANDRA-10910:


"Consistency level set to ALL."

It was the highest CL... :(

How can I repair the materialized view? I've started a full repair now:
{code}
[2016-02-19 20:03:45,096] Starting repair command #211, repairing keyspace 
test20160215 with repair options (parallelism: parallel, primary range: false, 
incremental: false, job threads: 1, ColumnFamilies: [], dataCenters: [], hosts: 
[], # of ranges: 216)
{code}


> Materialized view remained rows
> ---
>
> Key: CASSANDRA-10910
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10910
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.0.0
>Reporter: Gábor Auth
>Assignee: Carl Yeksigian
> Fix For: 3.0.3, 3.3
>
>
> I've created a table and a materialized view.
> {code}
> > CREATE TABLE test (id text PRIMARY KEY, key text, value int);
> > CREATE MATERIALIZED VIEW test_view AS SELECT * FROM test WHERE key IS NOT 
> > NULL PRIMARY KEY(key, id);
> {code}
> I've put a value into the table:
> {code}
> > update test set key='key', value=1 where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
>  id | key | 1
> (1 rows)
>  key | id | value
> -++---
>  key | id | 1
> (1 rows)
> {code}
> I've updated the value without specifying the key of the materialized view:
> {code}
> > update test set value=2 where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
>  id | key | 2
> (1 rows)
>  key | id | value
> -++---
>  key | id | 2
> (1 rows)
> {code}
> It works as I think...
> ...but I've updated the key of the materialized view:
> {code}
> > update test set key='newKey' where id='id';
> > select * from test; select * from test_view ;
>  id | key| value
> ++---
>  id | newKey | 2
> (1 rows)
>  key| id | value
> ++---
> key | id | 2
>  newKey | id | 2
> (2 rows)
> {code}
> ...I've updated the value of the row:
> {code}
> > update test set key='newKey', value=3 where id='id';
> > select * from test; select * from test_view ;
>  id | key| value
> ++---
>  id | newKey | 3
> (1 rows)
>  key| id | value
> ++---
> key | id | 2
>  newKey | id | 3
> (2 rows)
> {code}
> ...I've deleted the row by the id key:
> {code}
> > delete from test where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
> (0 rows)
>  key | id | value
> -++---
>  key | id | 2
> (1 rows)
> {code}
> Is it a bug?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/3] cassandra git commit: cqlsh: Display milliseconds when datetime overflows

2016-02-19 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 a28389087 -> 70c8a53de
  refs/heads/trunk 07df8a22d -> 29066afea


cqlsh: Display milliseconds when datetime overflows

patch by Adam Holmberg; reviewed by Paulo Motta for CASSANDRA-10625


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/70c8a53d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/70c8a53d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/70c8a53d

Branch: refs/heads/cassandra-3.0
Commit: 70c8a53de4881a3ccbcf5df7a68f44a57b103f12
Parents: a283890
Author: Adam Holmberg 
Authored: Tue Nov 24 14:15:05 2015 -0600
Committer: Joshua McKenzie 
Committed: Fri Feb 19 14:55:15 2016 -0500

--
 CHANGES.txt  |  1 +
 bin/cqlsh.py | 23 ---
 2 files changed, 21 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/70c8a53d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index ae9d545..f0aa996 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -39,6 +39,7 @@ Merged from 2.1:
 * Avoid major compaction mixing repaired and unrepaired sstables in DTCS (CASSANDRA-3)
  * Make it clear what DTCS timestamp_resolution is used for (CASSANDRA-11041)
  * (cqlsh) Support timezone conversion using pytz (CASSANDRA-10397)
+ * (cqlsh) Display milliseconds when datetime overflows (CASSANDRA-10625)
 
 
 3.0.3

http://git-wip-us.apache.org/repos/asf/cassandra/blob/70c8a53d/bin/cqlsh.py
--
diff --git a/bin/cqlsh.py b/bin/cqlsh.py
index 6cc98c3..a4dd253 100644
--- a/bin/cqlsh.py
+++ b/bin/cqlsh.py
@@ -148,10 +148,13 @@ except ImportError, e:
 
 from cassandra.auth import PlainTextAuthProvider
 from cassandra.cluster import Cluster
+from cassandra.marshal import int64_unpack
 from cassandra.metadata import (ColumnMetadata, KeyspaceMetadata,
 TableMetadata, protect_name, protect_names)
 from cassandra.policies import WhiteListRoundRobinPolicy
 from cassandra.query import SimpleStatement, ordered_dict_factory, 
TraceUnavailable
+from cassandra.type_codes import DateType
+from cassandra.util import datetime_from_timestamp
 
 # cqlsh should run correctly when run out of a Cassandra source tree,
 # out of an unpacked Cassandra tarball, and after a proper package install.
@@ -600,10 +603,24 @@ def insert_driver_hooks():
 
 
 def extend_cql_deserialization():
-    """
-    The python driver returns BLOBs as string, but we expect them as bytearrays
-    """
+    # The python driver returns BLOBs as string, but we expect them as bytearrays
     cassandra.cqltypes.BytesType.deserialize = staticmethod(lambda byts, protocol_version: bytearray(byts))
+
+    class DateOverFlowWarning(RuntimeWarning):
+        pass
+
+    # Native datetime types blow up outside of datetime.[MIN|MAX]_YEAR. We will fall back to an int timestamp
+    def deserialize_date_fallback_int(byts, protocol_version):
+        timestamp_ms = int64_unpack(byts)
+        try:
+            return datetime_from_timestamp(timestamp_ms / 1000.0)
+        except OverflowError:
+            warnings.warn(DateOverFlowWarning("Some timestamps are larger than Python datetime can represent. Timestamps are displayed in milliseconds from epoch."))
+        return timestamp_ms
+
+    cassandra.cqltypes.DateType.deserialize = staticmethod(deserialize_date_fallback_int)
+
+    # Return cassandra.cqltypes.EMPTY instead of None for empty values
     cassandra.cqltypes.CassandraType.support_empty_values = True
 
 



[2/3] cassandra git commit: cqlsh: Display milliseconds when datetime overflows

2016-02-19 Thread jmckenzie
cqlsh: Display milliseconds when datetime overflows

patch by Adam Holmberg; reviewed by Paulo Motta for CASSANDRA-10625


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/70c8a53d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/70c8a53d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/70c8a53d

Branch: refs/heads/trunk
Commit: 70c8a53de4881a3ccbcf5df7a68f44a57b103f12
Parents: a283890
Author: Adam Holmberg 
Authored: Tue Nov 24 14:15:05 2015 -0600
Committer: Joshua McKenzie 
Committed: Fri Feb 19 14:55:15 2016 -0500

--
 CHANGES.txt  |  1 +
 bin/cqlsh.py | 23 ---
 2 files changed, 21 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/70c8a53d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index ae9d545..f0aa996 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -39,6 +39,7 @@ Merged from 2.1:
 * Avoid major compaction mixing repaired and unrepaired sstables in DTCS (CASSANDRA-3)
  * Make it clear what DTCS timestamp_resolution is used for (CASSANDRA-11041)
  * (cqlsh) Support timezone conversion using pytz (CASSANDRA-10397)
+ * (cqlsh) Display milliseconds when datetime overflows (CASSANDRA-10625)
 
 
 3.0.3

http://git-wip-us.apache.org/repos/asf/cassandra/blob/70c8a53d/bin/cqlsh.py
--
diff --git a/bin/cqlsh.py b/bin/cqlsh.py
index 6cc98c3..a4dd253 100644
--- a/bin/cqlsh.py
+++ b/bin/cqlsh.py
@@ -148,10 +148,13 @@ except ImportError, e:
 
 from cassandra.auth import PlainTextAuthProvider
 from cassandra.cluster import Cluster
+from cassandra.marshal import int64_unpack
 from cassandra.metadata import (ColumnMetadata, KeyspaceMetadata,
 TableMetadata, protect_name, protect_names)
 from cassandra.policies import WhiteListRoundRobinPolicy
 from cassandra.query import SimpleStatement, ordered_dict_factory, 
TraceUnavailable
+from cassandra.type_codes import DateType
+from cassandra.util import datetime_from_timestamp
 
 # cqlsh should run correctly when run out of a Cassandra source tree,
 # out of an unpacked Cassandra tarball, and after a proper package install.
@@ -600,10 +603,24 @@ def insert_driver_hooks():
 
 
 def extend_cql_deserialization():
-    """
-    The python driver returns BLOBs as string, but we expect them as bytearrays
-    """
+    # The python driver returns BLOBs as string, but we expect them as bytearrays
     cassandra.cqltypes.BytesType.deserialize = staticmethod(lambda byts, protocol_version: bytearray(byts))
+
+    class DateOverFlowWarning(RuntimeWarning):
+        pass
+
+    # Native datetime types blow up outside of datetime.[MIN|MAX]_YEAR. We will fall back to an int timestamp
+    def deserialize_date_fallback_int(byts, protocol_version):
+        timestamp_ms = int64_unpack(byts)
+        try:
+            return datetime_from_timestamp(timestamp_ms / 1000.0)
+        except OverflowError:
+            warnings.warn(DateOverFlowWarning("Some timestamps are larger than Python datetime can represent. Timestamps are displayed in milliseconds from epoch."))
+        return timestamp_ms
+
+    cassandra.cqltypes.DateType.deserialize = staticmethod(deserialize_date_fallback_int)
+
+    # Return cassandra.cqltypes.EMPTY instead of None for empty values
     cassandra.cqltypes.CassandraType.support_empty_values = True
 
 



[3/3] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2016-02-19 Thread jmckenzie
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/29066afe
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/29066afe
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/29066afe

Branch: refs/heads/trunk
Commit: 29066afea36c4a6cb85522155402be2373ac52d2
Parents: 07df8a2 70c8a53
Author: Joshua McKenzie 
Authored: Fri Feb 19 14:56:55 2016 -0500
Committer: Joshua McKenzie 
Committed: Fri Feb 19 14:56:55 2016 -0500

--
 CHANGES.txt  |  1 +
 bin/cqlsh.py | 23 ---
 2 files changed, 21 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/29066afe/CHANGES.txt
--
diff --cc CHANGES.txt
index c12cc50,f0aa996..c17f85f
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -66,13 -38,11 +66,14 @@@ Merged from 2.1
   * Gossiper#isEnabled is not thread safe (CASSANDRA-6)
  * Avoid major compaction mixing repaired and unrepaired sstables in DTCS (CASSANDRA-3)
   * Make it clear what DTCS timestamp_resolution is used for (CASSANDRA-11041)
 - * (cqlsh) Support timezone conversion using pytz (CASSANDRA-10397)
+  * (cqlsh) Display milliseconds when datetime overflows (CASSANDRA-10625)
  
  
 -3.0.3
 +3.3
 + * Avoid infinite loop if owned range is smaller than number of
 +   data dirs (CASSANDRA-11034)
 + * Avoid bootstrap hanging when existing nodes have no data to stream (CASSANDRA-11010)
 +Merged from 3.0:
   * Remove double initialization of newly added tables (CASSANDRA-11027)
   * Filter keys searcher results by target range (CASSANDRA-11104)
   * Fix deserialization of legacy read commands (CASSANDRA-11087)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/29066afe/bin/cqlsh.py
--



[jira] [Commented] (CASSANDRA-10910) Materialized view remained rows

2016-02-19 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154760#comment-15154760
 ] 

Carl Yeksigian commented on CASSANDRA-10910:


[~gabor.auth]: what consistency level is this at? Can you do the same query at 
a higher CL, or repair the view and see whether the inconsistency persists?

> Materialized view remained rows
> ---
>
> Key: CASSANDRA-10910
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10910
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.0.0
>Reporter: Gábor Auth
>Assignee: Carl Yeksigian
> Fix For: 3.0.3, 3.3
>
>
> I've created a table and a materialized view.
> {code}
> > CREATE TABLE test (id text PRIMARY KEY, key text, value int);
> > CREATE MATERIALIZED VIEW test_view AS SELECT * FROM test WHERE key IS NOT 
> > NULL PRIMARY KEY(key, id);
> {code}
> I've put a value into the table:
> {code}
> > update test set key='key', value=1 where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
>  id | key | 1
> (1 rows)
>  key | id | value
> -++---
>  key | id | 1
> (1 rows)
> {code}
> I've updated the value without specifying the key of the materialized view:
> {code}
> > update test set value=2 where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
>  id | key | 2
> (1 rows)
>  key | id | value
> -++---
>  key | id | 2
> (1 rows)
> {code}
> It works as I think...
> ...but I've updated the key of the materialized view:
> {code}
> > update test set key='newKey' where id='id';
> > select * from test; select * from test_view ;
>  id | key| value
> ++---
>  id | newKey | 2
> (1 rows)
>  key| id | value
> ++---
> key | id | 2
>  newKey | id | 2
> (2 rows)
> {code}
> ...I've updated the value of the row:
> {code}
> > update test set key='newKey', value=3 where id='id';
> > select * from test; select * from test_view ;
>  id | key| value
> ++---
>  id | newKey | 3
> (1 rows)
>  key| id | value
> ++---
> key | id | 2
>  newKey | id | 3
> (2 rows)
> {code}
> ...I've deleted the row by the id key:
> {code}
> > delete from test where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
> (0 rows)
>  key | id | value
> -++---
>  key | id | 2
> (1 rows)
> {code}
> Is it a bug?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10458) cqlshrc: add option to always use ssl

2016-02-19 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154748#comment-15154748
 ] 

Paulo Motta commented on CASSANDRA-10458:
-

Moving back to patch available until CASSANDRA-11124 is committed since this is 
dependent on that (should be done soon).

> cqlshrc: add option to always use ssl
> -
>
> Key: CASSANDRA-10458
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10458
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Matt Wringe
>Assignee: Stefan Podkowinski
>  Labels: lhf
>
> I am currently running on a system in which my Cassandra cluster is only 
> accessible over TLS.
> The cqlshrc file is used to specify the host, the certificates and other 
> configurations, but one option it is missing is the ability to always connect over SSL.
> I would like to be able to call 'cqlsh' instead of always having to specify 
> 'cqlsh --ssl'
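
To make the request concrete, the behaviour would presumably boil down to a single flag in 
cqlshrc. The {{ssl}} key under {{[connection]}} below is an assumption about the eventual 
option name (it depends on the patch); the other keys are existing cqlshrc settings:

{code}
; ~/.cassandra/cqlshrc -- sketch only; 'ssl' under [connection] is a hypothetical option name
[connection]
hostname = cassandra.example.com
port = 9042
ssl = true

[ssl]
certfile = ~/keys/cassandra.cert
validate = true
{code}

With something like that in place, plain {{cqlsh}} would behave the same as {{cqlsh --ssl}} does today.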



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: ninja 10397 - cqlsh: Fix sub second precision support

2016-02-19 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/trunk acb2ab072 -> 07df8a22d


ninja 10397 - cqlsh: Fix sub second precision support


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/07df8a22
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/07df8a22
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/07df8a22

Branch: refs/heads/trunk
Commit: 07df8a22d1a9c048cdbe3f9484c09f1bf118629e
Parents: acb2ab0
Author: Stefania Alborghetti 
Authored: Thu Feb 18 23:03:28 2016 -0300
Committer: Joshua McKenzie 
Committed: Fri Feb 19 14:44:52 2016 -0500

--
 pylib/cqlshlib/formatting.py | 10 +++---
 1 file changed, 7 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/07df8a22/pylib/cqlshlib/formatting.py
--
diff --git a/pylib/cqlshlib/formatting.py b/pylib/cqlshlib/formatting.py
index 6b88bc7..f88d936 100644
--- a/pylib/cqlshlib/formatting.py
+++ b/pylib/cqlshlib/formatting.py
@@ -238,14 +238,18 @@ formatter_for('int')(format_integer_type)
 
 @formatter_for('datetime')
 def format_value_timestamp(val, colormap, date_time_format, quote=False, **_):
-    bval = strftime(date_time_format.timestamp_format, calendar.timegm(val.utctimetuple()), timezone=date_time_format.timezone)
+    bval = strftime(date_time_format.timestamp_format,
+                    calendar.timegm(val.utctimetuple()),
+                    microseconds=val.microsecond,
+                    timezone=date_time_format.timezone)
     if quote:
         bval = "'%s'" % bval
     return colorme(bval, colormap, 'timestamp')


-def strftime(time_format, seconds, timezone=None):
-    ret_dt = datetime_from_timestamp(seconds).replace(tzinfo=UTC())
+def strftime(time_format, seconds, microseconds=0, timezone=None):
+    ret_dt = datetime_from_timestamp(seconds) + datetime.timedelta(microseconds=microseconds)
+    ret_dt = ret_dt.replace(tzinfo=UTC())
     if timezone:
         ret_dt = ret_dt.astimezone(timezone)
     return ret_dt.strftime(time_format)



[jira] [Commented] (CASSANDRA-11167) NPE when creating serializing ErrorMessage for Exception with null message

2016-02-19 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154733#comment-15154733
 ] 

Carl Yeksigian commented on CASSANDRA-11167:


+1

When we locate a source of null messages, we should fix it, but it would be 
better not to throw an NPE while serializing that error. Also, a look at some of 
our exception hierarchies didn't reveal anything obviously broken.
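
As a simplified, standalone illustration of the guard under review (not the actual 
{{ErrorMessage}}/{{CBUtil}} change; the length calculation below is a stand-in):

{code}
// Sketch: substitute a non-null message before any length/encode call, so a
// Throwable with getMessage() == null cannot NPE the serializer.
final class NullSafeErrorMessage
{
    static String messageOf(Throwable t)
    {
        String msg = t.getMessage();
        return msg == null ? t.getClass().getSimpleName() : msg;
    }

    static int encodedSize(Throwable t)
    {
        // stand-in for CBUtil.sizeOfString(): 2-byte length prefix + UTF-8 bytes
        byte[] utf8 = messageOf(t).getBytes(java.nio.charset.StandardCharsets.UTF_8);
        return 2 + utf8.length;
    }

    public static void main(String[] args)
    {
        System.out.println(encodedSize(new NullPointerException()));    // no message, still serializable
        System.out.println(encodedSize(new RuntimeException("boom")));  // regular message
    }
}
{code}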

> NPE when creating serializing ErrorMessage for Exception with null message
> --
>
> Key: CASSANDRA-11167
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11167
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
>Priority: Minor
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> In {{ErrorMessage.encode()}} and {{encodedSize()}}, we do not handle the 
> exception having a {{null}} message.  This can result in an error like the 
> following:
> {noformat}
> ERROR [SharedPool-Worker-1] 2016-02-10 17:41:29,793  Message.java:611 - 
> Unexpected exception during request; channel = [id: 0xc2c6499a, 
> /127.0.0.1:53299 => /127.0.0.1:9042]
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.db.TypeSizes.encodedUTF8Length(TypeSizes.java:46) 
> ~[cassandra-all-3.0.3.874.jar:3.0.3.874]
> at 
> org.apache.cassandra.transport.CBUtil.sizeOfString(CBUtil.java:132) 
> ~[cassandra-all-3.0.3.874.jar:3.0.3.874]
> at 
> org.apache.cassandra.transport.messages.ErrorMessage$1.encodedSize(ErrorMessage.java:215)
>  ~[cassandra-all-3.0.3.874.jar:3.0.3.874]
> at 
> org.apache.cassandra.transport.messages.ErrorMessage$1.encodedSize(ErrorMessage.java:44)
>  ~[cassandra-all-3.0.3.874.jar:3.0.3.874]
> at 
> org.apache.cassandra.transport.Message$ProtocolEncoder.encode(Message.java:328)
>  ~[cassandra-all-3.0.3.874.jar:3.0.3.874]
> at 
> org.apache.cassandra.transport.Message$ProtocolEncoder.encode(Message.java:314)
>  ~[cassandra-all-3.0.3.874.jar:3.0.3.874]
> at 
> io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:89)
>  ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:629)
>  ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:686)
>  ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:622)
>  ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
> at 
> org.apache.cassandra.transport.Message$Dispatcher$Flusher.run(Message.java:445)
>  ~[cassandra-all-3.0.3.874.jar:3.0.3.874]
> at 
> io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:358)
>  ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357) 
> ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
> at 
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
>  ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
> at 
> io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
>  ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_71]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11194) materialized views - support explode() on collections

2016-02-19 Thread Jon Haddad (JIRA)
Jon Haddad created CASSANDRA-11194:
--

 Summary: materialized views - support explode() on collections
 Key: CASSANDRA-11194
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11194
 Project: Cassandra
  Issue Type: New Feature
Reporter: Jon Haddad


I'm working on a database design to model a product catalog.  Products can 
belong to categories.  Categories can themselves belong to multiple parent 
categories (think about Amazon's complex taxonomies).

My category table would look like this, giving me individual categories & their 
parents:

{code}
CREATE TABLE category (
category_id uuid primary key,
name text,
parents set<uuid>
);
{code}

To get a list of all the children of a particular category, I need a table that 
looks like the following:

{code}
CREATE TABLE categories_by_parent (
parent_id uuid,
category_id uuid,
name text,
primary key (parent_id, category_id)
);
{code}

The important thing to note here is that a single category can have multiple 
parents.

I'd like to propose support for collections in materialized views via an 
explode() function that would create 1 row per item in the collection.  For 
instance, I'll insert the following 3 rows (2 parents, 1 child) into the 
category table:

{code}
insert into category (category_id, name, parents) values 
(009fe0e1-5b09-4efc-a92d-c03720324a4f, 'Parent', null);

insert into category (category_id, name, parents) values 
(1f2914de-0adf-4afc-b7ad-ddd8dc876ab1, 'Parent2', null);

insert into category (category_id, name, parents) values 
(1f93bc07-9874-42a5-a7d1-b741dc9c509c, 'Child', 
{009fe0e1-5b09-4efc-a92d-c03720324a4f, 1f2914de-0adf-4afc-b7ad-ddd8dc876ab1 });

cqlsh:test> select * from category;

 category_id  | name| parents
--+-+--
 009fe0e1-5b09-4efc-a92d-c03720324a4f |  Parent |   
  null
 1f2914de-0adf-4afc-b7ad-ddd8dc876ab1 | Parent2 |   
  null
 1f93bc07-9874-42a5-a7d1-b741dc9c509c |   Child | 
{009fe0e1-5b09-4efc-a92d-c03720324a4f, 1f2914de-0adf-4afc-b7ad-ddd8dc876ab1}

(3 rows)

{code}

Given the following CQL to select the child category, utilizing an explode 
function, I would expect to get back 2 rows, 1 for each parent:

{code}
select category_id, name, explode(parents) as parent_id from category where 
category_id = 1f93bc07-9874-42a5-a7d1-b741dc9c509c;

category_id  | name  | parent_id
--+---+--
1f93bc07-9874-42a5-a7d1-b741dc9c509c | Child | 
009fe0e1-5b09-4efc-a92d-c03720324a4f
1f93bc07-9874-42a5-a7d1-b741dc9c509c | Child | 
1f2914de-0adf-4afc-b7ad-ddd8dc876ab1

(2 rows)
{code}

This functionality would ideally apply to materialized views, since the ability 
to control partitioning here would allow us to efficiently query our MV for all 
categories belonging to a parent in a complex taxonomy.

{code}
CREATE MATERIALIZED VIEW categories_by_parent as
SELECT explode(parents) as parent_id,
category_id, name FROM category WHERE parents IS NOT NULL
{code}

The explode() function is available in Spark Dataframes and my proposed 
function has the same behavior: 
http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.explode




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9935) Repair fails with RuntimeException

2016-02-19 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154700#comment-15154700
 ] 

Yuki Morishita commented on CASSANDRA-9935:
---

[~jfgosselin] Just want to check, how are you running repair?
What repair options are you using?
Have you run incremental repair?

> Repair fails with RuntimeException
> --
>
> Key: CASSANDRA-9935
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9935
> Project: Cassandra
>  Issue Type: Bug
> Environment: C* 2.1.8, Debian Wheezy
>Reporter: mlowicki
>Assignee: Yuki Morishita
> Fix For: 2.1.x
>
> Attachments: db1.sync.lati.osa.cassandra.log, 
> db5.sync.lati.osa.cassandra.log, system.log.10.210.3.117, 
> system.log.10.210.3.221, system.log.10.210.3.230
>
>
> We had problems with slow repair in 2.1.7 (CASSANDRA-9702) but after upgrade 
> to 2.1.8 it started to work faster but now it fails with:
> {code}
> ...
> [2015-07-29 20:44:03,956] Repair session 23a811b0-3632-11e5-a93e-4963524a8bde 
> for range (-5474076923322749342,-5468600594078911162] finished
> [2015-07-29 20:44:03,957] Repair session 336f8740-3632-11e5-a93e-4963524a8bde 
> for range (-8631877858109464676,-8624040066373718932] finished
> [2015-07-29 20:44:03,957] Repair session 4ccd8430-3632-11e5-a93e-4963524a8bde 
> for range (-5372806541854279315,-5369354119480076785] finished
> [2015-07-29 20:44:03,957] Repair session 59f129f0-3632-11e5-a93e-4963524a8bde 
> for range (8166489034383821955,8168408930184216281] finished
> [2015-07-29 20:44:03,957] Repair session 6ae7a9a0-3632-11e5-a93e-4963524a8bde 
> for range (6084602890817326921,6088328703025510057] finished
> [2015-07-29 20:44:03,957] Repair session 8938e4a0-3632-11e5-a93e-4963524a8bde 
> for range (-781874602493000830,-781745173070807746] finished
> [2015-07-29 20:44:03,957] Repair command #4 finished
> error: nodetool failed, check server logs
> -- StackTrace --
> java.lang.RuntimeException: nodetool failed, check server logs
> at 
> org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:290)
> at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:202)
> {code}
> After running:
> {code}
> nodetool repair --partitioner-range --parallel --in-local-dc sync
> {code}
> Last records in logs regarding repair are:
> {code}
> INFO  [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - 
> Repair session 09ff9e40-3632-11e5-a93e-4963524a8bde for range 
> (-7695808664784761779,-7693529816291585568] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - 
> Repair session 17d8d860-3632-11e5-a93e-4963524a8bde for range 
> (806371695398849,8065203836608925992] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - 
> Repair session 23a811b0-3632-11e5-a93e-4963524a8bde for range 
> (-5474076923322749342,-5468600594078911162] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - 
> Repair session 336f8740-3632-11e5-a93e-4963524a8bde for range 
> (-8631877858109464676,-8624040066373718932] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - 
> Repair session 4ccd8430-3632-11e5-a93e-4963524a8bde for range 
> (-5372806541854279315,-5369354119480076785] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - 
> Repair session 59f129f0-3632-11e5-a93e-4963524a8bde for range 
> (8166489034383821955,8168408930184216281] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - 
> Repair session 6ae7a9a0-3632-11e5-a93e-4963524a8bde for range 
> (6084602890817326921,6088328703025510057] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - 
> Repair session 8938e4a0-3632-11e5-a93e-4963524a8bde for range 
> (-781874602493000830,-781745173070807746] finished
> {code}
> but a bit above I see (at least two times in attached log):
> {code}
> ERROR [Thread-173887] 2015-07-29 20:44:03,853 StorageService.java:2959 - 
> Repair session 1b07ea50-3608-11e5-a93e-4963524a8bde for range 
> (5765414319217852786,5781018794516851576] failed with error 
> org.apache.cassandra.exceptions.RepairException: [repair 
> #1b07ea50-3608-11e5-a93e-4963524a8bde on sync/entity_by_id2, 
> (5765414319217852786,5781018794516851576]] Validation failed in /10.195.15.162
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
> org.apache.cassandra.exceptions.RepairException: [repair 
> #1b07ea50-3608-11e5-a93e-4963524a8bde on sync/entity_by_id2, 
> (5765414319217852786,5781018794516851576]] Validation failed in /10.195.15.162
> at java.util.concurrent.FutureTask.report(FutureTask.java:122) 
> [na:1.7.0_80]
> at java.util.concurrent.FutureTask.get(FutureTask.java:188) 
> [na:1.7.0_80]
>

[jira] [Created] (CASSANDRA-11193) Missing binary dependencies for running Cassandra in embedded mode

2016-02-19 Thread DOAN DuyHai (JIRA)
DOAN DuyHai created CASSANDRA-11193:
---

 Summary: Missing binary dependencies for running Cassandra in 
embedded mode
 Key: CASSANDRA-11193
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11193
 Project: Cassandra
  Issue Type: Bug
  Components: CQL
 Environment: Cassandra 3.3
Reporter: DOAN DuyHai


When running Cassandra in embedded mode (pulling the *cassandra-all-3.3.jar* 
from Maven) and activating *UDF*, I get the following exception when trying 
to create a UDF:

{noformat}
18:13:57.922 [main] DEBUG ACHILLES_DDL_SCRIPT - SCRIPT : CREATE 
FUNCTION convertToLong(input text) RETURNS NULL ON NULL INPUT RETURNS bigint 
LANGUAGE java AS $$return Long.parseLong(input);$$;
18:13:57.970 [SharedPool-Worker-1] ERROR o.apache.cassandra.transport.Message - 
Unexpected exception during request; channel = [id: 0x03f52731, 
/192.168.1.16:55224 => /192.168.1.16:9240]
java.lang.NoClassDefFoundError: org/objectweb/asm/ClassVisitor
at 
org.apache.cassandra.cql3.functions.JavaBasedUDFunction.(JavaBasedUDFunction.java:79)
 ~[cassandra-all-3.3.jar:3.3]
at 
org.apache.cassandra.cql3.functions.UDFunction.create(UDFunction.java:223) 
~[cassandra-all-3.3.jar:3.3]
at 
org.apache.cassandra.cql3.statements.CreateFunctionStatement.announceMigration(CreateFunctionStatement.java:162)
 ~[cassandra-all-3.3.jar:3.3]
at 
org.apache.cassandra.cql3.statements.SchemaAlteringStatement.execute(SchemaAlteringStatement.java:93)
 ~[cassandra-all-3.3.jar:3.3]
at 
org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:206)
 ~[cassandra-all-3.3.jar:3.3]
at 
org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:237) 
~[cassandra-all-3.3.jar:3.3]
at 
org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:222) 
~[cassandra-all-3.3.jar:3.3]
at 
org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:115)
 ~[cassandra-all-3.3.jar:3.3]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
 [cassandra-all-3.3.jar:3.3]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
 [cassandra-all-3.3.jar:3.3]
at 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[na:1.8.0_60-ea]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 [cassandra-all-3.3.jar:3.3]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[cassandra-all-3.3.jar:3.3]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60-ea]
Caused by: java.lang.ClassNotFoundException: org.objectweb.asm.ClassVisitor
at java.net.URLClassLoader.findClass(URLClassLoader.java:381) 
~[na:1.8.0_60-ea]
at java.lang.ClassLoader.loadClass(ClassLoader.java:424) 
~[na:1.8.0_60-ea]
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) 
~[na:1.8.0_60-ea]
at java.lang.ClassLoader.loadClass(ClassLoader.java:357) 
~[na:1.8.0_60-ea]
... 18 common frames omitted
{noformat}

 The stack trace is quite explicit: some classes from the objectweb/asm package are 
missing. Looking into the {{$CASSANDRA_HOME/lib}} folder:

{noformat}
 19:44:07 :/opt/apps/apache-cassandra-3.2/lib]
% ll
total 48768
-rw-r--r--@  1 archinnovinfo  wheel   234K Jan  7 22:42 ST4-4.0.8.jar
-rw-r--r--@  1 archinnovinfo  wheel85K Jan  7 22:42 airline-0.6.jar
-rw-r--r--@  1 archinnovinfo  wheel   164K Jan  7 22:42 antlr-runtime-3.5.2.jar
-rw-r--r--@  1 archinnovinfo  wheel   5.1M Jan  7 22:42 apache-cassandra-3.2.jar
-rw-r--r--@  1 archinnovinfo  wheel   189K Jan  7 22:42 
apache-cassandra-clientutil-3.2.jar
-rw-r--r--@  1 archinnovinfo  wheel   1.8M Jan  7 22:42 
apache-cassandra-thrift-3.2.jar
-rw-r--r--@  1 archinnovinfo  wheel52K Jan  7 22:42 asm-5.0.4.jar
-rw-r--r--@  1 archinnovinfo  wheel   2.2M Jan  7 22:42 
cassandra-driver-core-3.0.0-beta1-bb1bce4-SNAPSHOT-shaded.jar
-rw-r--r--@  1 archinnovinfo  wheel   224K Jan  7 22:42 
cassandra-driver-internal-only-3.0.0-6af642d.zip
{noformat}

 I can see there is an *asm-5.0.4.jar*. After adding the following dependency in 
Maven, the issue is solved:
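
(The dependency block itself is truncated in this digest. Judging from the 
{{asm-5.0.4.jar}} in the lib listing and the missing {{org.objectweb.asm.ClassVisitor}} 
class, it is presumably the ASM artifact, along these lines; the coordinates are an 
assumption, not quoted from the original message:)

{code}
<!-- assumption: ASM coordinates inferred from lib/asm-5.0.4.jar, not from the truncated email -->
<dependency>
  <groupId>org.ow2.asm</groupId>
  <artifactId>asm</artifactId>
  <version>5.0.4</version>
</dependency>
{code}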


[jira] [Commented] (CASSANDRA-10990) Support streaming of older version sstables in 3.0

2016-02-19 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154677#comment-15154677
 ] 

Yuki Morishita commented on CASSANDRA-10990:


One more suggestion: isn't it simpler to just have a CachedInputStream that first 
fills an on-heap buffer and writes out to a file as the buffer gets full?

bq. Do you easily remember if there is a way to retrieve the average partition 
size for a given table? I remember seeing something along those lines but I'm 
not sure where it is.

I think we can just set it to a more sane value (a few hundred kilobytes?); the 
use case here is for static columns only.
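
For what it's worth, a rough, self-contained sketch of that buffer-then-spill idea 
(class and member names here are illustrative, not the proposed Cassandra implementation):

{code}
import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.SequenceInputStream;

// Writes go to a fixed on-heap buffer first; once it is full, the remainder
// spills to a temporary file. Reading replays the heap bytes, then the file.
final class CachedOutputBuffer extends OutputStream
{
    private final byte[] buffer;
    private int position;
    private File overflowFile;          // created lazily on first spill
    private OutputStream overflow;

    CachedOutputBuffer(int maxOnHeapBytes)
    {
        this.buffer = new byte[maxOnHeapBytes];
    }

    @Override
    public void write(int b) throws IOException
    {
        if (position < buffer.length)
        {
            buffer[position++] = (byte) b;
            return;
        }
        if (overflow == null)
        {
            overflowFile = File.createTempFile("cached-stream", ".spill");
            overflow = new FileOutputStream(overflowFile);
        }
        overflow.write(b);
    }

    // Replays everything written so far: on-heap bytes first, then the spill file.
    InputStream openInput() throws IOException
    {
        if (overflow != null)
            overflow.flush();
        InputStream onHeap = new ByteArrayInputStream(buffer, 0, position);
        return overflowFile == null
             ? onHeap
             : new SequenceInputStream(onHeap, new FileInputStream(overflowFile));
    }

    public static void main(String[] args) throws IOException
    {
        CachedOutputBuffer buf = new CachedOutputBuffer(4);
        buf.write("hello".getBytes());   // 4 bytes stay on heap, 1 byte spills to disk
        InputStream in = buf.openInput();
        int b;
        while ((b = in.read()) != -1)
            System.out.print((char) b);  // prints "hello"
        System.out.println();
    }
}
{code}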


> Support streaming of older version sstables in 3.0
> --
>
> Key: CASSANDRA-10990
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10990
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Jeremy Hanna
>Assignee: Paulo Motta
>
> In 2.0 we introduced support for streaming older versioned sstables 
> (CASSANDRA-5772).  In 3.0, because of the rewrite of the storage layer, this 
> became no longer supported.  So currently, while 3.0 can read sstables in the 
> 2.1/2.2 format, it cannot stream the older versioned sstables.  We should do 
> some work to make this still possible to be consistent with what 
> CASSANDRA-5772 provided.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11035) Use cardinality estimation to pick better compaction candidates for STCS (SizeTieredCompactionStrategy)

2016-02-19 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154654#comment-15154654
 ] 

Jonathan Ellis commented on CASSANDRA-11035:


It seems like we should have options that allow tuning for scanning vs. 
single-row workloads.  DTCS addresses some of the former but not all.

> Use cardinality estimation to pick better compaction candidates for STCS 
> (SizeTieredCompactionStrategy)
> ---
>
> Key: CASSANDRA-11035
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11035
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Wei Deng
>Assignee: Marcus Eriksson
>
> This was initially mentioned in this blog post 
> http://www.datastax.com/dev/blog/improving-compaction-in-cassandra-with-cardinality-estimation
>  but I couldn't find any existing JIRA for it. As stated by [~jbellis], 
> "Potentially even more useful would be using cardinality estimation to pick 
> better compaction candidates. Instead of blindly merging sstables of a 
> similar size a la SizeTieredCompactionStrategy." The L0 STCS in LCS should 
> benefit as well.
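
As a back-of-the-envelope illustration of the idea (not Cassandra code; the names and the 
selection policy are assumptions), per-sstable key cardinality estimates plus a merged 
estimate are enough to score how much two sstables actually overlap:

{code}
// Sketch: score a pair of sstables by estimated key overlap, using only their
// individual and merged cardinality estimates (e.g. from HyperLogLog metadata).
// Inclusion-exclusion: intersection ~= |A| + |B| - |A union B|.
final class OverlapScore
{
    static double overlapFraction(long keysA, long keysB, long keysUnion)
    {
        long intersection = Math.max(0, keysA + keysB - keysUnion);
        return (double) intersection / Math.min(keysA, keysB);
    }

    public static void main(String[] args)
    {
        // Two ~1M-key sstables whose union is ~1.1M keys: ~90% overlap, a good candidate pair.
        System.out.println(overlapFraction(1_000_000, 1_000_000, 1_100_000));
        // Disjoint sstables (union ~= sum): ~0% overlap, little point merging them.
        System.out.println(overlapFraction(1_000_000, 1_000_000, 2_000_000));
    }
}
{code}

A score like this could feed into bucket selection so that STCS prefers merging sstables 
that actually share keys, instead of merging purely by size.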



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11035) Use cardinality estimation to pick better compaction candidates for STCS (SizeTieredCompactionStrategy)

2016-02-19 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154614#comment-15154614
 ] 

Marcus Eriksson commented on CASSANDRA-11035:
-

this checks row overlap (ie, both partition key and clustering needs to be the 
same) so if the user reads full partitions they might hit too many sstables if 
we do no compaction.

We could do a partition overlap based compaction if there are no row overlap 
though?

> Use cardinality estimation to pick better compaction candidates for STCS 
> (SizeTieredCompactionStrategy)
> ---
>
> Key: CASSANDRA-11035
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11035
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Wei Deng
>Assignee: Marcus Eriksson
>
> This was initially mentioned in this blog post 
> http://www.datastax.com/dev/blog/improving-compaction-in-cassandra-with-cardinality-estimation
>  but I couldn't find any existing JIRA for it. As stated by [~jbellis], 
> "Potentially even more useful would be using cardinality estimation to pick 
> better compaction candidates. Instead of blindly merging sstables of a 
> similar size a la SizeTieredCompactionStrategy." The L0 STCS in LCS should 
> benefit as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11035) Use cardinality estimation to pick better compaction candidates for STCS (SizeTieredCompactionStrategy)

2016-02-19 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154614#comment-15154614
 ] 

Marcus Eriksson edited comment on CASSANDRA-11035 at 2/19/16 6:36 PM:
--

this checks row overlap (ie, both partition key and clustering needs to be the 
same) so if the user reads full partitions they might hit too many sstables if 
we do no compaction.

We could do a partition overlap based compaction if there is no row overlap 
though?


was (Author: krummas):
this checks row overlap (ie, both partition key and clustering needs to be the 
same) so if the user reads full partitions they might hit too many sstables if 
we do no compaction.

We could do a partition overlap based compaction if there are no row overlap 
though?

> Use cardinality estimation to pick better compaction candidates for STCS 
> (SizeTieredCompactionStrategy)
> ---
>
> Key: CASSANDRA-11035
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11035
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Wei Deng
>Assignee: Marcus Eriksson
>
> This was initially mentioned in this blog post 
> http://www.datastax.com/dev/blog/improving-compaction-in-cassandra-with-cardinality-estimation
>  but I couldn't find any existing JIRA for it. As stated by [~jbellis], 
> "Potentially even more useful would be using cardinality estimation to pick 
> better compaction candidates. Instead of blindly merging sstables of a 
> similar size a la SizeTieredCompactionStrategy." The L0 STCS in LCS should 
> benefit as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11192) remove DatabaseDescriptor dependency from o.a.c.io.util package

2016-02-19 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154599#comment-15154599
 ] 

Yuki Morishita commented on CASSANDRA-11192:


This is more about passing config values such as {{trickle_fsync}} in from the 
outside, rather than accessing {{DatabaseDescriptor}} (or whatever classes 
CASSANDRA-9054 breaks it up into) from inside the class.
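
As a rough illustration of that direction (the class and option names below are 
invented for the example, not the real io.util types), the writer would be handed 
just the values it needs:

{noformat}
// Illustrative sketch: inject the handful of settings the io.util layer needs
// instead of letting it call DatabaseDescriptor statically.
public class SequentialWriterSketch
{
    /** The options this writer actually cares about. */
    public static final class WriterOptions
    {
        final boolean trickleFsync;
        final int trickleFsyncIntervalInKb;

        public WriterOptions(boolean trickleFsync, int trickleFsyncIntervalInKb)
        {
            this.trickleFsync = trickleFsync;
            this.trickleFsyncIntervalInKb = trickleFsyncIntervalInKb;
        }
    }

    private final WriterOptions options;

    // the caller (daemon, offline tool, test) decides where the values come from
    public SequentialWriterSketch(WriterOptions options)
    {
        this.options = options;
    }

    boolean shouldTrickleFsync(long bytesSinceLastSync)
    {
        return options.trickleFsync
               && bytesSinceLastSync >= options.trickleFsyncIntervalInKb * 1024L;
    }

    public static void main(String[] args)
    {
        SequentialWriterSketch w = new SequentialWriterSketch(new WriterOptions(true, 10240));
        System.out.println(w.shouldTrickleFsync(20L * 1024 * 1024)); // true
    }
}
{noformat}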

> remove DatabaseDescriptor dependency from o.a.c.io.util package
> ---
>
> Key: CASSANDRA-11192
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11192
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Yuki Morishita
>
> DatabaseDescriptor is the source of all configuration in Cassandra, but because 
> it is statically initialized from Config/cassandra.yaml, it is hard to configure 
> programmatically. Also, unless {{Config.setClientMode(true)}} is set, 
> DatabaseDescriptor creates/initializes tons of unnecessary things just for 
> reading SSTables.
> Since o.a.c.io.util is the core of file access, its classes should be as 
> independent as possible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11035) Use cardinality estimation to pick better compaction candidates for STCS (SizeTieredCompactionStrategy)

2016-02-19 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154597#comment-15154597
 ] 

Jonathan Ellis commented on CASSANDRA-11035:


We're pretty good at dealing with 100s of 1000s of sstables now, thanks to 
defaulting LCS to tiny sstables initially.  Is unbounded growth of 
non-overlapping data really a problem?

> Use cardinality estimation to pick better compaction candidates for STCS 
> (SizeTieredCompactionStrategy)
> ---
>
> Key: CASSANDRA-11035
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11035
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Wei Deng
>Assignee: Marcus Eriksson
>
> This was initially mentioned in this blog post 
> http://www.datastax.com/dev/blog/improving-compaction-in-cassandra-with-cardinality-estimation
>  but I couldn't find any existing JIRA for it. As stated by [~jbellis], 
> "Potentially even more useful would be using cardinality estimation to pick 
> better compaction candidates. Instead of blindly merging sstables of a 
> similar size a la SizeTieredCompactionStrategy." The L0 STCS in LCS should 
> benefit as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11192) remove DatabaseDescriptor dependency from o.a.c.io.util package

2016-02-19 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154592#comment-15154592
 ] 

Joshua McKenzie commented on CASSANDRA-11192:
-

Is this a duplicate of CASSANDRA-9054?

> remove DatabaseDescriptor dependency from o.a.c.io.util package
> ---
>
> Key: CASSANDRA-11192
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11192
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Yuki Morishita
>
> DatabaseDescriptor is the source of all configuration in Cassandra, but because 
> it is statically initialized from Config/cassandra.yaml, it is hard to configure 
> programmatically. Also, unless {{Config.setClientMode(true)}} is set, 
> DatabaseDescriptor creates/initializes tons of unnecessary things just for 
> reading SSTables.
> Since o.a.c.io.util is the core of file access, its classes should be as 
> independent as possible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8616) sstable tools may result in commit log segments be written

2016-02-19 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154585#comment-15154585
 ] 

Tyler Hobbs commented on CASSANDRA-8616:


bq. Probably we should consider rewriting tools so that they never use 
ColumnFamilyStore.

That is sounding like a better option to me, too.  Without doing that, I worry 
that we will miss edge cases that touch the commitlog in the future.

> sstable tools may result in commit log segments be written
> --
>
> Key: CASSANDRA-8616
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8616
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Tyler Hobbs
>Assignee: Yuki Morishita
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: 8161-2.0.txt
>
>
> There was a report of sstable2json causing commitlog segments to be written 
> out when run.  I haven't attempted to reproduce this yet, so that's all I 
> know for now.  Since sstable2json loads the conf and schema, I'm thinking 
> that it may inadvertently be triggering the commitlog code.
> sstablescrub, sstableverify, and other sstable tools have the same issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11035) Use cardinality estimation to pick better compaction candidates for STCS (SizeTieredCompactionStrategy)

2016-02-19 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154579#comment-15154579
 ] 

Tyler Hobbs commented on CASSANDRA-11035:
-

In combination with a minimum threshold, we could also potentially increase the 
sstable size variation within a bucket.  In other words, instead of requiring 
sstables in a bucket to be within 50% of the average size, allow them to be 
within 75% of the average size.  If we know that sstables have a lot of 
overlap, it makes sense to be a little more flexible about the sizes.

However, if we do implement a minimum threshold, we should probably add a cap 
on the max number of sstables (for STCS, at least).  Otherwise, you might see 
unbounded sstable growth for something like a Users table.
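
To make that concrete, a simplified sketch of the bucketing rule (loosely modeled 
on STCS bucketing, not the real implementation): widening the tolerance from 0.50 
to 0.75 and capping the bucket size are just parameter changes here.

{noformat}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Simplified size-tiered bucketing with a tunable size tolerance and a cap
// on how many sstables may end up in one bucket/compaction.
public class BucketingSketch
{
    /**
     * @param sizes        on-disk sizes of the candidate sstables, in bytes
     * @param tolerance    how far from the bucket's running average a size may
     *                     be and still join it (0.50 = within 50%, 0.75 = within 75%)
     * @param maxThreshold hard cap on sstables per bucket
     */
    static List<List<Long>> buckets(List<Long> sizes, double tolerance, int maxThreshold)
    {
        List<Long> sorted = new ArrayList<>(sizes);
        Collections.sort(sorted);

        List<List<Long>> result = new ArrayList<>();
        List<Long> current = new ArrayList<>();
        double avg = 0;

        for (long size : sorted)
        {
            boolean fits = !current.isEmpty()
                           && current.size() < maxThreshold
                           && size >= avg * (1 - tolerance)
                           && size <= avg * (1 + tolerance);
            if (!fits)
            {
                current = new ArrayList<>();   // start a new bucket
                result.add(current);
            }
            current.add(size);
            avg = current.stream().mapToLong(Long::longValue).average().orElse(size);
        }
        return result;
    }

    public static void main(String[] args)
    {
        List<Long> sizes = List.of(100L, 160L, 170L, 400L, 420L);
        System.out.println(buckets(sizes, 0.50, 32)); // [[100], [160, 170], [400, 420]]
        System.out.println(buckets(sizes, 0.75, 32)); // [[100, 160, 170], [400, 420]]
    }
}
{noformat}

With cardinality-based overlap in the mix, the tolerance (and the candidate count) 
could then be made dependent on how much the sstables actually overlap.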

> Use cardinality estimation to pick better compaction candidates for STCS 
> (SizeTieredCompactionStrategy)
> ---
>
> Key: CASSANDRA-11035
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11035
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Wei Deng
>Assignee: Marcus Eriksson
>
> This was initially mentioned in this blog post 
> http://www.datastax.com/dev/blog/improving-compaction-in-cassandra-with-cardinality-estimation
>  but I couldn't find any existing JIRA for it. As stated by [~jbellis], 
> "Potentially even more useful would be using cardinality estimation to pick 
> better compaction candidates. Instead of blindly merging sstables of a 
> similar size a la SizeTieredCompactionStrategy." The L0 STCS in LCS should 
> benefit as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7276) Include keyspace and table names in logs where possible

2016-02-19 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154577#comment-15154577
 ] 

Paulo Motta commented on CASSANDRA-7276:


Looking good. Some comments below:

* On {{RepairMessageVerbHandler}} you can extract cf/table info from 
{{RepairJobDesc}}
* Change the log prefix from "Keyspace:{}, Table:{}" to \[ks.table\], so it'll be 
easier to spot visually and to extract from logs. If there are multiple tables you 
can maybe have \[ks.table1,ks.table2,ks.table3\].
* On the uncaught exception handler on {{CassandraDaemon}}, you can avoid 
printing an exception multiple times by only modifying the first logging 
statement with {{logger.error("{}Exception in thread {}", 
Utils.maybeGetKsAndTableInfo(e), t, e);}}
** The {{Utils.maybeGetKsAndTableInfo(e)}} helper would do the check for 
{{ContextualizedException}} and return an empty string otherwise (see the sketch 
after this list).
* Rename {{IKeyspaceAwareVerbHandler}} to {{IContextualizedVerbHandler}} (since 
it's no longer keyspace only)
* Do not access {{ContextualizedException}} fields directly, but via getters 
and setters (I know this is not respected everywhere, but it's a general Java 
guideline).
* Add apache notice to {{ContextualizedException}}
* Please rebase your patch to latest trunk from 
https://github.com/apache/cassandra.git
* If possible, it would be nice if you could test this with ccm + stress and 
check that the log formatting is correct and that no unexpected problems arise.
** As well as testing that an uncaught {{ContextualizedException}} is being 
correctly logged.
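
A rough sketch of the helper I have in mind ({{Utils.maybeGetKsAndTableInfo}} and 
{{ContextualizedException}} are the names proposed in this review, not existing 
classes; the exact shape is up to you):

{noformat}
import java.util.List;

// Sketch of the prefix-building helper suggested above; ContextualizedException
// is the exception type proposed in this review, not an existing class.
public final class Utils
{
    public static class ContextualizedException extends RuntimeException
    {
        private final List<String> ksAndTables; // e.g. ["ks.table1", "ks.table2"]

        public ContextualizedException(String message, Throwable cause, List<String> ksAndTables)
        {
            super(message, cause);
            this.ksAndTables = ksAndTables;
        }

        public List<String> getKsAndTables()
        {
            return ksAndTables;
        }
    }

    /** Returns "[ks.table1,ks.table2] " when context is available, "" otherwise. */
    public static String maybeGetKsAndTableInfo(Throwable t)
    {
        if (t instanceof ContextualizedException)
            return "[" + String.join(",", ((ContextualizedException) t).getKsAndTables()) + "] ";
        return "";
    }

    public static void main(String[] args)
    {
        Throwable e = new ContextualizedException("boom", null, List.of("ks.table1", "ks.table2"));
        // usage: logger.error("{}Exception in thread {}", maybeGetKsAndTableInfo(e), thread, e);
        System.out.println(maybeGetKsAndTableInfo(e) + "Exception in thread MutationStage:61648");
    }
}
{noformat}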

Thanks!

> Include keyspace and table names in logs where possible
> ---
>
> Key: CASSANDRA-7276
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7276
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tyler Hobbs
>Priority: Minor
>  Labels: bootcamp, lhf
> Fix For: 2.1.x
>
> Attachments: 0001-Logging-for-Keyspace-and-Tables.patch, 
> 2.1-CASSANDRA-7276-v1.txt, cassandra-2.1-7276-compaction.txt, 
> cassandra-2.1-7276.txt, cassandra-2.1.9-7276-v2.txt, cassandra-2.1.9-7276.txt
>
>
> Most error messages and stacktraces give you no clue as to what keyspace or 
> table was causing the problem.  For example:
> {noformat}
> ERROR [MutationStage:61648] 2014-05-20 12:05:45,145 CassandraDaemon.java 
> (line 198) Exception in thread Thread[MutationStage:61648,5,main]
> java.lang.IllegalArgumentException
> at java.nio.Buffer.limit(Unknown Source)
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.getBytes(AbstractCompositeType.java:63)
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.getWithShortLength(AbstractCompositeType.java:72)
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:98)
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:35)
> at 
> edu.stanford.ppl.concurrent.SnapTreeMap$1.compareTo(SnapTreeMap.java:538)
> at 
> edu.stanford.ppl.concurrent.SnapTreeMap.attemptUpdate(SnapTreeMap.java:1108)
> at 
> edu.stanford.ppl.concurrent.SnapTreeMap.updateUnderRoot(SnapTreeMap.java:1059)
> at edu.stanford.ppl.concurrent.SnapTreeMap.update(SnapTreeMap.java:1023)
> at 
> edu.stanford.ppl.concurrent.SnapTreeMap.putIfAbsent(SnapTreeMap.java:985)
> at 
> org.apache.cassandra.db.AtomicSortedColumns$Holder.addColumn(AtomicSortedColumns.java:328)
> at 
> org.apache.cassandra.db.AtomicSortedColumns.addAllWithSizeDelta(AtomicSortedColumns.java:200)
> at org.apache.cassandra.db.Memtable.resolve(Memtable.java:226)
> at org.apache.cassandra.db.Memtable.put(Memtable.java:173)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:893)
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:368)
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:333)
> at org.apache.cassandra.db.RowMutation.apply(RowMutation.java:206)
> at 
> org.apache.cassandra.db.RowMutationVerbHandler.doVerb(RowMutationVerbHandler.java:56)
> at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:60)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> at java.lang.Thread.run(Unknown Source)
> {noformat}
> We should try to include info on the keyspace and column family in the error 
> messages or logs whenever possible.  This includes reads, writes, 
> compactions, flushes, repairs, and probably more.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10956) Enable authentication of native protocol users via client certificates

2016-02-19 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154560#comment-15154560
 ] 

Sam Tunnicliffe commented on CASSANDRA-10956:
-

Just a note to let you know I've not forgotten about this ticket, and to 
apologise for not having done my reviewer's job on it yet. The delay is partly because I 
think this could be particularly useful for CASSANDRA-10091, so I've just been 
ironing some kinks out of that before I start looking at how the two patches 
play together.

> Enable authentication of native protocol users via client certificates
> --
>
> Key: CASSANDRA-10956
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10956
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Samuel Klock
>Assignee: Samuel Klock
> Attachments: 10956.patch
>
>
> Currently, the native protocol only supports user authentication via SASL.  
> While this is adequate for many use cases, it may be superfluous in scenarios 
> where clients are required to present an SSL certificate to connect to the 
> server.  If the certificate presented by a client is sufficient by itself to 
> specify a user, then an additional (series of) authentication step(s) via 
> SASL merely add overhead.  Worse, for uses wherein it's desirable to obtain 
> the identity from the client's certificate, it's necessary to implement a 
> custom SASL mechanism to do so, which increases the effort required to 
> maintain both client and server and which also duplicates functionality 
> already provided via SSL/TLS.
> Cassandra should provide a means of using certificates for user 
> authentication in the native protocol without any effort above configuring 
> SSL on the client and server.  Here's a possible strategy:
> * Add a new authenticator interface that returns {{AuthenticatedUser}} 
> objects based on the certificate chain presented by the client.
> * If this interface is in use, the user is authenticated immediately after 
> the server receives the {{STARTUP}} message.  It then responds with a 
> {{READY}} message.
> * Otherwise, the existing flow of control is used (i.e., if the authenticator 
> requires authentication, then an {{AUTHENTICATE}} message is sent to the 
> client).
> One advantage of this strategy is that it is backwards-compatible with 
> existing schemes; current users of SASL/{{IAuthenticator}} are not impacted.  
> Moreover, it can function as a drop-in replacement for SASL schemes without 
> requiring code changes (or even config changes) on the client side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11192) remove DatabaseDescriptor dependency from o.a.c.io.util package

2016-02-19 Thread Yuki Morishita (JIRA)
Yuki Morishita created CASSANDRA-11192:
--

 Summary: remove DatabaseDescriptor dependency from o.a.c.io.util 
package
 Key: CASSANDRA-11192
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11192
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Yuki Morishita


DatabaseDescriptor is the source of all configuration in Cassandra, but because 
it is statically initialized from Config/cassandra.yaml, it is hard to configure 
programmatically. Also, unless {{Config.setClientMode(true)}} is set, 
DatabaseDescriptor creates/initializes tons of unnecessary things just for 
reading SSTables.

Since o.a.c.io.util is the core of file access, its classes should be as 
independent as possible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11191) Support more flexible offline access to SSTable

2016-02-19 Thread Yuki Morishita (JIRA)
Yuki Morishita created CASSANDRA-11191:
--

 Summary: Support more flexible offline access to SSTable
 Key: CASSANDRA-11191
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11191
 Project: Cassandra
  Issue Type: Improvement
Reporter: Yuki Morishita
Assignee: Yuki Morishita


Right now, using SSTableReader/SSTableWriter alone needs certain setup, such as 
{{Config.setClientMode(true)}} or loading the schema externally. Even then, the 
various options for reading/writing SSTables offline are fixed and not 
programmatically modifiable.

We should decouple most of {{org.apache.cassandra.io.sstable}} from the other parts 
of Cassandra so that we can easily reuse it in offline tools.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8616) sstable tools may result in commit log segments be written

2016-02-19 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154523#comment-15154523
 ] 

Yuki Morishita commented on CASSANDRA-8616:
---

Bad news. My approach didn't work.

There is one point that still tries to access the commit log: offline tools like 
sstablescrub try to delete data from system.sstable_activity after scrubbing an 
SSTable. This causes access to the commit log, and since we are trying to disable 
it, the tool hangs 
[here|https://github.com/apache/cassandra/blob/cassandra-2.1.13/src/java/org/apache/cassandra/db/commitlog/CommitLogSegmentManager.java#L271].
Previously I stated that many offline tools cannot run with 
{{Config.setClientMode(true)}}, so reading from SSTableReader tracks sstable 
activity even when we are offline.

Probably we should consider rewriting the tools so that they never use 
ColumnFamilyStore.

> sstable tools may result in commit log segments be written
> --
>
> Key: CASSANDRA-8616
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8616
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Tyler Hobbs
>Assignee: Yuki Morishita
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: 8161-2.0.txt
>
>
> There was a report of sstable2json causing commitlog segments to be written 
> out when run.  I haven't attempted to reproduce this yet, so that's all I 
> know for now.  Since sstable2json loads the conf and schema, I'm thinking 
> that it may inadvertently be triggering the commitlog code.
> sstablescrub, sstableverify, and other sstable tools have the same issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11035) Use cardinality estimation to pick better compaction candidates for STCS (SizeTieredCompactionStrategy)

2016-02-19 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154503#comment-15154503
 ] 

Jonathan Ellis commented on CASSANDRA-11035:


Should we have a threshold for minimum overlap required before merging two 
sstables?  So if there are no good candidates we do nothing instead of blindly 
doing the first four.

> Use cardinality estimation to pick better compaction candidates for STCS 
> (SizeTieredCompactionStrategy)
> ---
>
> Key: CASSANDRA-11035
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11035
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Wei Deng
>Assignee: Marcus Eriksson
>
> This was initially mentioned in this blog post 
> http://www.datastax.com/dev/blog/improving-compaction-in-cassandra-with-cardinality-estimation
>  but I couldn't find any existing JIRA for it. As stated by [~jbellis], 
> "Potentially even more useful would be using cardinality estimation to pick 
> better compaction candidates. Instead of blindly merging sstables of a 
> similar size a la SizeTieredCompactionStrategy." The L0 STCS in LCS should 
> benefit as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11093) processs restarts are failing becase native port and jmx ports are in use

2016-02-19 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-11093:

Assignee: (was: Sam Tunnicliffe)

> processs restarts are failing becase native port and jmx ports are in use
> -
>
> Key: CASSANDRA-11093
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11093
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
> Environment: PROD
>Reporter: varun
>Priority: Minor
>  Labels: lhf
>
> A process restart should automatically take care of this. But it is not and 
> it is a problem.
> The ports are considered in use even if the process has quit/died/killed 
> but the socket is in a TIME_WAIT state in the TCP FSM 
> (http://tcpipguide.com/free/t_TCPOperationalOverviewandtheTCPFiniteStateMachineF-2.htm).
> tcp 0 0 127.0.0.1:7199 0.0.0.0:* LISTEN 30099/java
> tcp 0 0 192.168.1.2:9160 0.0.0.0:* LISTEN 30099/java
> tcp 0 0 10.130.128.131:58263 10.130.128.131:9042 TIME_WAIT -
> tcp 0 0 10.130.128.131:58262 10.130.128.131:9042 TIME_WAIT -
> tcp 0 0 :::10.130.128.131:9042 :::* LISTEN 30099/java
> tcp 0 0 :::10.130.128.131:9042 :::10.130.128.131:57191 ESTABLISHED 
> 30099/java
> tcp 0 0 :::10.130.128.131:9042 :::10.130.128.131:57190 ESTABLISHED 
> 30099/java
> tcp 0 0 :::10.130.128.131:9042 :::10.176.70.226:37105 ESTABLISHED 
> 30099/java
> tcp 0 0 :::127.0.0.1:42562 :::127.0.0.1:7199 TIME_WAIT -
> tcp 0 0 :::10.130.128.131:57190 :::10.130.128.131:9042 ESTABLISHED 
> 30138/java
> tcp 0 0 :::10.130.128.131:57198 :::10.130.128.131:9042 ESTABLISHED 
> 30138/java
> tcp 0 0 :::10.130.128.131:9042 :::10.176.70.226:37106 ESTABLISHED 
> 30099/java
> tcp 0 0 :::10.130.128.131:57197 :::10.130.128.131:9042 ESTABLISHED 
> 30138/java
> tcp 0 0 :::10.130.128.131:57191 :::10.130.128.131:9042 ESTABLISHED 
> 30138/java
> tcp 0 0 :::10.130.128.131:9042 :::10.130.128.131:57198 ESTABLISHED 
> 30099/java
> tcp 0 0 :::10.130.128.131:9042 :::10.130.128.131:57197 ESTABLISHED 
> 30099/java
> tcp 0 0 :::127.0.0.1:42567 :::127.0.0.1:7199 TIME_WAIT -
> I had to write a restart handler that does a netstat call and looks to make 
> sure all the TIME_WAIT states exhaust before starting the node back up. This 
> happened on 26 of the 56 nodes when a rolling restart was performed. The issue was 
> mostly around JMX port 7199. There was another rolling restart done on those 26 
> nodes to remediate the JMX port issue; in that restart, one node had the issue 
> where port 9042 was considered used after the restart, and the process died after 
> a bit of time.
> What needs to be done for the native port 9042 and JMX port 7199 is to create the 
> underlying TCP socket with SO_REUSEADDR. This eases the restriction and allows the 
> port to be bound by the process even if there are 
> sockets open to that port in the TCP FSM, as long as there is no other 
> process listening on that port. There is a Java method available to set this 
> option in java.net.Socket 
> https://docs.oracle.com/javase/7/docs/api/java/net/Socket.html#setReuseAddress%28boolean%29.
> native port 9042: 
> https://github.com/apache/cassandra/blob/4a0d1caa262af3b6f2b6d329e45766b4df845a88/tools/stress/src/org/apache/cassandra/stress/settings/SettingsPort.java#L38
> JMX port 7199: 
> https://github.com/apache/cassandra/blob/4a0d1caa262af3b6f2b6d329e45766b4df845a88/tools/stress/src/org/apache/cassandra/stress/settings/SettingsPort.java#L40
> Looking at the code itself, this option is already being set on the thrift port (9160 
> by default) and on the internode communication ports, unencrypted (7000 by default) 
> and SSL encrypted (7001 by default).
> https://github.com/apache/cassandra/search?utf8=%E2%9C%93=setReuseAddress
> This needs to be set on the native and JMX ports as well.
> References:
> https://unix.stackexchange.com/questions/258379/when-is-a-port-considered-being-used/258380?noredirect=1
> https://stackoverflow.com/questions/23531558/allow-restarting-java-application-with-jmx-monitoring-enabled-immediately
> https://docs.oracle.com/javase/8/docs/technotes/guides/rmi/socketfactory/
> https://github.com/apache/cassandra/search?utf8=%E2%9C%93=setReuseAddress
> https://docs.oracle.com/javase/7/docs/api/java/net/Socket.html#setReuseAddress%28boolean%293
> https://github.com/apache/cassandra/blob/4a0d1caa262af3b6f2b6d329e45766b4df845a88/tools/stress/src/org/apache/cassandra/stress/settings/SettingsPort.java#L38
> https://github.com/apache/cassandra/blob/4a0d1caa262af3b6f2b6d329e45766b4df845a88/tools/stress/src/org/apache/cassandra/stress/settings/SettingsPort.java#L40
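
(For reference, enabling the flag on a listening socket is a one-liner in Java; the 
snippet below is a generic, illustrative example, not a Cassandra patch.)

{noformat}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

// Generic example: bind a listener with SO_REUSEADDR so a restarted process can
// rebind the port even while old connections to it are still in TIME_WAIT.
public class ReuseAddressExample
{
    public static void main(String[] args) throws IOException
    {
        ServerSocket server = new ServerSocket();
        server.setReuseAddress(true);                        // must be set before bind()
        server.bind(new InetSocketAddress("127.0.0.1", 9042));
        System.out.println("bound: " + server.getLocalSocketAddress());
        server.close();
    }
}
{noformat}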



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11093) processs restarts are failing becase native port and jmx ports are in use

2016-02-19 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-11093:

Reviewer: Sam Tunnicliffe

> processs restarts are failing becase native port and jmx ports are in use
> -
>
> Key: CASSANDRA-11093
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11093
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
> Environment: PROD
>Reporter: varun
>Priority: Minor
>  Labels: lhf
>
> A process restart should automatically take care of this. But it is not and 
> it is a problem.
> The ports are considered in use even if the process has quit/died/killed 
> but the socket is in a TIME_WAIT state in the TCP FSM 
> (http://tcpipguide.com/free/t_TCPOperationalOverviewandtheTCPFiniteStateMachineF-2.htm).
> tcp 0 0 127.0.0.1:7199 0.0.0.0:* LISTEN 30099/java
> tcp 0 0 192.168.1.2:9160 0.0.0.0:* LISTEN 30099/java
> tcp 0 0 10.130.128.131:58263 10.130.128.131:9042 TIME_WAIT -
> tcp 0 0 10.130.128.131:58262 10.130.128.131:9042 TIME_WAIT -
> tcp 0 0 :::10.130.128.131:9042 :::* LISTEN 30099/java
> tcp 0 0 :::10.130.128.131:9042 :::10.130.128.131:57191 ESTABLISHED 
> 30099/java
> tcp 0 0 :::10.130.128.131:9042 :::10.130.128.131:57190 ESTABLISHED 
> 30099/java
> tcp 0 0 :::10.130.128.131:9042 :::10.176.70.226:37105 ESTABLISHED 
> 30099/java
> tcp 0 0 :::127.0.0.1:42562 :::127.0.0.1:7199 TIME_WAIT -
> tcp 0 0 :::10.130.128.131:57190 :::10.130.128.131:9042 ESTABLISHED 
> 30138/java
> tcp 0 0 :::10.130.128.131:57198 :::10.130.128.131:9042 ESTABLISHED 
> 30138/java
> tcp 0 0 :::10.130.128.131:9042 :::10.176.70.226:37106 ESTABLISHED 
> 30099/java
> tcp 0 0 :::10.130.128.131:57197 :::10.130.128.131:9042 ESTABLISHED 
> 30138/java
> tcp 0 0 :::10.130.128.131:57191 :::10.130.128.131:9042 ESTABLISHED 
> 30138/java
> tcp 0 0 :::10.130.128.131:9042 :::10.130.128.131:57198 ESTABLISHED 
> 30099/java
> tcp 0 0 :::10.130.128.131:9042 :::10.130.128.131:57197 ESTABLISHED 
> 30099/java
> tcp 0 0 :::127.0.0.1:42567 :::127.0.0.1:7199 TIME_WAIT -
> I had to write a restart handler that does a netstat call and looks to make 
> sure all the TIME_WAIT states exhaust before starting the node back up. This 
> happened on 26 of the 56 nodes when a rolling restart was performed. The issue was 
> mostly around JMX port 7199. There was another rolling restart done on those 26 
> nodes to remediate the JMX port issue; in that restart, one node had the issue 
> where port 9042 was considered used after the restart, and the process died after 
> a bit of time.
> What needs to be done for the native port 9042 and JMX port 7199 is to create the 
> underlying TCP socket with SO_REUSEADDR. This eases the restriction and allows the 
> port to be bound by the process even if there are 
> sockets open to that port in the TCP FSM, as long as there is no other 
> process listening on that port. There is a Java method available to set this 
> option in java.net.Socket 
> https://docs.oracle.com/javase/7/docs/api/java/net/Socket.html#setReuseAddress%28boolean%29.
> native port 9042: 
> https://github.com/apache/cassandra/blob/4a0d1caa262af3b6f2b6d329e45766b4df845a88/tools/stress/src/org/apache/cassandra/stress/settings/SettingsPort.java#L38
> JMX port 7199: 
> https://github.com/apache/cassandra/blob/4a0d1caa262af3b6f2b6d329e45766b4df845a88/tools/stress/src/org/apache/cassandra/stress/settings/SettingsPort.java#L40
> Looking at the code itself, this option is already being set on the thrift port (9160 
> by default) and on the internode communication ports, unencrypted (7000 by default) 
> and SSL encrypted (7001 by default).
> https://github.com/apache/cassandra/search?utf8=%E2%9C%93=setReuseAddress
> This needs to be set on the native and JMX ports as well.
> References:
> https://unix.stackexchange.com/questions/258379/when-is-a-port-considered-being-used/258380?noredirect=1
> https://stackoverflow.com/questions/23531558/allow-restarting-java-application-with-jmx-monitoring-enabled-immediately
> https://docs.oracle.com/javase/8/docs/technotes/guides/rmi/socketfactory/
> https://github.com/apache/cassandra/search?utf8=%E2%9C%93=setReuseAddress
> https://docs.oracle.com/javase/7/docs/api/java/net/Socket.html#setReuseAddress%28boolean%293
> https://github.com/apache/cassandra/blob/4a0d1caa262af3b6f2b6d329e45766b4df845a88/tools/stress/src/org/apache/cassandra/stress/settings/SettingsPort.java#L38
> https://github.com/apache/cassandra/blob/4a0d1caa262af3b6f2b6d329e45766b4df845a88/tools/stress/src/org/apache/cassandra/stress/settings/SettingsPort.java#L40



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11093) processs restarts are failing becase native port and jmx ports are in use

2016-02-19 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154491#comment-15154491
 ] 

Sam Tunnicliffe commented on CASSANDRA-11093:
-

[~varun] yes, I should have been set as reviewer, not assignee (fixed now).

> processs restarts are failing becase native port and jmx ports are in use
> -
>
> Key: CASSANDRA-11093
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11093
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
> Environment: PROD
>Reporter: varun
>Priority: Minor
>  Labels: lhf
>
> A process restart should automatically take care of this. But it is not and 
> it is a problem.
> The ports are considered in use even if the process has quit/died/killed 
> but the socket is in a TIME_WAIT state in the TCP FSM 
> (http://tcpipguide.com/free/t_TCPOperationalOverviewandtheTCPFiniteStateMachineF-2.htm).
> tcp 0 0 127.0.0.1:7199 0.0.0.0:* LISTEN 30099/java
> tcp 0 0 192.168.1.2:9160 0.0.0.0:* LISTEN 30099/java
> tcp 0 0 10.130.128.131:58263 10.130.128.131:9042 TIME_WAIT -
> tcp 0 0 10.130.128.131:58262 10.130.128.131:9042 TIME_WAIT -
> tcp 0 0 :::10.130.128.131:9042 :::* LISTEN 30099/java
> tcp 0 0 :::10.130.128.131:9042 :::10.130.128.131:57191 ESTABLISHED 
> 30099/java
> tcp 0 0 :::10.130.128.131:9042 :::10.130.128.131:57190 ESTABLISHED 
> 30099/java
> tcp 0 0 :::10.130.128.131:9042 :::10.176.70.226:37105 ESTABLISHED 
> 30099/java
> tcp 0 0 :::127.0.0.1:42562 :::127.0.0.1:7199 TIME_WAIT -
> tcp 0 0 :::10.130.128.131:57190 :::10.130.128.131:9042 ESTABLISHED 
> 30138/java
> tcp 0 0 :::10.130.128.131:57198 :::10.130.128.131:9042 ESTABLISHED 
> 30138/java
> tcp 0 0 :::10.130.128.131:9042 :::10.176.70.226:37106 ESTABLISHED 
> 30099/java
> tcp 0 0 :::10.130.128.131:57197 :::10.130.128.131:9042 ESTABLISHED 
> 30138/java
> tcp 0 0 :::10.130.128.131:57191 :::10.130.128.131:9042 ESTABLISHED 
> 30138/java
> tcp 0 0 :::10.130.128.131:9042 :::10.130.128.131:57198 ESTABLISHED 
> 30099/java
> tcp 0 0 :::10.130.128.131:9042 :::10.130.128.131:57197 ESTABLISHED 
> 30099/java
> tcp 0 0 :::127.0.0.1:42567 :::127.0.0.1:7199 TIME_WAIT -
> I had to write a restart handler that does a netstat call and looks to make 
> sure all the TIME_WAIT states exhaust before starting the node back up. This 
> happened on 26 of the 56 nodes when a rolling restart was performed. The issue was 
> mostly around JMX port 7199. There was another rolling restart done on those 26 
> nodes to remediate the JMX port issue; in that restart, one node had the issue 
> where port 9042 was considered used after the restart, and the process died after 
> a bit of time.
> What needs to be done for the native port 9042 and JMX port 7199 is to create the 
> underlying TCP socket with SO_REUSEADDR. This eases the restriction and allows the 
> port to be bound by the process even if there are 
> sockets open to that port in the TCP FSM, as long as there is no other 
> process listening on that port. There is a Java method available to set this 
> option in java.net.Socket 
> https://docs.oracle.com/javase/7/docs/api/java/net/Socket.html#setReuseAddress%28boolean%29.
> native port 9042: 
> https://github.com/apache/cassandra/blob/4a0d1caa262af3b6f2b6d329e45766b4df845a88/tools/stress/src/org/apache/cassandra/stress/settings/SettingsPort.java#L38
> JMX port 7199: 
> https://github.com/apache/cassandra/blob/4a0d1caa262af3b6f2b6d329e45766b4df845a88/tools/stress/src/org/apache/cassandra/stress/settings/SettingsPort.java#L40
> Looking at the code itself, this option is already being set on the thrift port (9160 
> by default) and on the internode communication ports, unencrypted (7000 by default) 
> and SSL encrypted (7001 by default).
> https://github.com/apache/cassandra/search?utf8=%E2%9C%93=setReuseAddress
> This needs to be set on the native and JMX ports as well.
> References:
> https://unix.stackexchange.com/questions/258379/when-is-a-port-considered-being-used/258380?noredirect=1
> https://stackoverflow.com/questions/23531558/allow-restarting-java-application-with-jmx-monitoring-enabled-immediately
> https://docs.oracle.com/javase/8/docs/technotes/guides/rmi/socketfactory/
> https://github.com/apache/cassandra/search?utf8=%E2%9C%93=setReuseAddress
> https://docs.oracle.com/javase/7/docs/api/java/net/Socket.html#setReuseAddress%28boolean%293
> https://github.com/apache/cassandra/blob/4a0d1caa262af3b6f2b6d329e45766b4df845a88/tools/stress/src/org/apache/cassandra/stress/settings/SettingsPort.java#L38
> https://github.com/apache/cassandra/blob/4a0d1caa262af3b6f2b6d329e45766b4df845a88/tools/stress/src/org/apache/cassandra/stress/settings/SettingsPort.java#L40



--
This message was sent 

[jira] [Commented] (CASSANDRA-10956) Enable authentication of native protocol users via client certificates

2016-02-19 Thread Samuel Klock (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154488#comment-15154488
 ] 

Samuel Klock commented on CASSANDRA-10956:
--

Thank you for the feedback.  I've left some replies to your comments on GitHub, 
and we'll plan to incorporate your feedback in a new version of the patch in 
the next few days.

Regarding anonymous authentication: would it be reasonable to make this 
behavior configurable? The intent is to enable operators to provide some level 
of access (perhaps read-only) to users who are not capable of authenticating. I 
do agree that hardcoding this behavior in 
{{CommonNameCertificateAuthenticator}} probably isn't correct.

(It's also worth noting that the native protocol doesn't appear to support 
authentication at all for existing {{IAuthenticators}} that don't require 
authentication, so maybe {{ICertificateAuthenticator}} shouldn't support it 
either.)
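
For context, the general shape of the interface (illustrative only; the exact 
signatures are in the attached patch and may differ):

{noformat}
import java.security.cert.Certificate;
import java.security.cert.X509Certificate;

// Illustrative-only sketch of the proposed certificate authenticator; the real
// signatures live in the attached patch.
public class CertificateAuthSketch
{
    /** Stand-in for Cassandra's AuthenticatedUser. */
    static final class AuthenticatedUser
    {
        final String name;
        AuthenticatedUser(String name) { this.name = name; }
    }

    interface ICertificateAuthenticator
    {
        /** Map the client's validated certificate chain to a user, or throw if none. */
        AuthenticatedUser authenticate(Certificate[] chain) throws SecurityException;
    }

    /** Example strategy: take the CN of the leaf certificate as the role name. */
    static final class CommonNameAuthenticator implements ICertificateAuthenticator
    {
        public AuthenticatedUser authenticate(Certificate[] chain)
        {
            if (chain == null || chain.length == 0 || !(chain[0] instanceof X509Certificate))
                throw new SecurityException("no usable client certificate");
            String dn = ((X509Certificate) chain[0]).getSubjectX500Principal().getName();
            for (String part : dn.split(","))       // naive CN extraction, fine for a sketch
                if (part.trim().startsWith("CN="))
                    return new AuthenticatedUser(part.trim().substring(3));
            throw new SecurityException("certificate has no CN");
        }
    }
}
{noformat}

Whether an implementation may fall back to an anonymous user when no certificate 
(or no CN) is present would then be the configurable part.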

> Enable authentication of native protocol users via client certificates
> --
>
> Key: CASSANDRA-10956
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10956
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Samuel Klock
>Assignee: Samuel Klock
> Attachments: 10956.patch
>
>
> Currently, the native protocol only supports user authentication via SASL.  
> While this is adequate for many use cases, it may be superfluous in scenarios 
> where clients are required to present an SSL certificate to connect to the 
> server.  If the certificate presented by a client is sufficient by itself to 
> specify a user, then an additional (series of) authentication step(s) via 
> SASL merely add overhead.  Worse, for uses wherein it's desirable to obtain 
> the identity from the client's certificate, it's necessary to implement a 
> custom SASL mechanism to do so, which increases the effort required to 
> maintain both client and server and which also duplicates functionality 
> already provided via SSL/TLS.
> Cassandra should provide a means of using certificates for user 
> authentication in the native protocol without any effort above configuring 
> SSL on the client and server.  Here's a possible strategy:
> * Add a new authenticator interface that returns {{AuthenticatedUser}} 
> objects based on the certificate chain presented by the client.
> * If this interface is in use, the user is authenticated immediately after 
> the server receives the {{STARTUP}} message.  It then responds with a 
> {{READY}} message.
> * Otherwise, the existing flow of control is used (i.e., if the authenticator 
> requires authentication, then an {{AUTHENTICATE}} message is sent to the 
> client).
> One advantage of this strategy is that it is backwards-compatible with 
> existing schemes; current users of SASL/{{IAuthenticator}} are not impacted.  
> Moreover, it can function as a drop-in replacement for SASL schemes without 
> requiring code changes (or even config changes) on the client side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11035) Use cardinality estimation to pick better compaction candidates for STCS (SizeTieredCompactionStrategy)

2016-02-19 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154467#comment-15154467
 ] 

Marcus Eriksson commented on CASSANDRA-11035:
-

I've been running a few more benchmarks locally:

* as long as compaction keeps up, it is identical to the current STCS (at least 
in my short benchmarks) - we basically always compact the 4 similarly sized 
sstables that we find
* once compaction can't keep up (there are a bunch of sstables to pick from) we 
see big improvements

I think we should do this for STCS-in-L0 as well, since the most common (I 
think) way to get behind in L0 is streaming/repair, where many sstables from 
other nodes are dropped into L0. This should result in a bunch of subsets of 
sstables that are highly overlapping while not overlapping with the other 
subsets at all. For example, say we have 4 levels in LCS and stream range (10, 
20]; then we would end up with 4 sstables covering that range, and compacting 
those 4 sstables together first should produce good results.
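
Purely as an illustration of that L0 case (not the patch): grouping sstables whose 
token spans intersect would pick out exactly those streamed subsets.

{noformat}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative only: group sstables whose (first, last) token spans intersect,
// so highly-overlapping streamed sstables get compacted together first.
public class RangeGroupingSketch
{
    record Span(long first, long last) {}   // stand-in for an sstable's token span

    static List<List<Span>> groupByOverlap(List<Span> sstables)
    {
        List<Span> sorted = new ArrayList<>(sstables);
        sorted.sort(Comparator.comparingLong(Span::first));

        List<List<Span>> groups = new ArrayList<>();
        List<Span> current = new ArrayList<>();
        long currentEnd = Long.MIN_VALUE;

        for (Span s : sorted)
        {
            if (current.isEmpty() || s.first() <= currentEnd)
            {
                current.add(s);                        // overlaps the current group
                currentEnd = Math.max(currentEnd, s.last());
            }
            else
            {
                groups.add(current);                   // disjoint: start a new group
                current = new ArrayList<>(List.of(s));
                currentEnd = s.last();
            }
        }
        if (!current.isEmpty())
            groups.add(current);
        return groups;
    }

    public static void main(String[] args)
    {
        // four streamed copies of range (10, 20] plus two unrelated sstables
        List<Span> l0 = List.of(new Span(10, 20), new Span(10, 20), new Span(11, 20),
                                new Span(10, 19), new Span(40, 50), new Span(60, 70));
        System.out.println(groupByOverlap(l0)); // [[four overlapping spans], [(40,50)], [(60,70)]]
    }
}
{noformat}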

> Use cardinality estimation to pick better compaction candidates for STCS 
> (SizeTieredCompactionStrategy)
> ---
>
> Key: CASSANDRA-11035
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11035
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Wei Deng
>Assignee: Marcus Eriksson
>
> This was initially mentioned in this blog post 
> http://www.datastax.com/dev/blog/improving-compaction-in-cassandra-with-cardinality-estimation
>  but I couldn't find any existing JIRA for it. As stated by [~jbellis], 
> "Potentially even more useful would be using cardinality estimation to pick 
> better compaction candidates. Instead of blindly merging sstables of a 
> similar size a la SizeTieredCompactionStrategy." The L0 STCS in LCS should 
> benefit as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10625) Problem of year 10000: Dates too far in the future can be saved but not read back using cqlsh

2016-02-19 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154453#comment-15154453
 ] 

Paulo Motta commented on CASSANDRA-10625:
-

Tests look good, marking as ready to commit.

> Problem of year 10000: Dates too far in the future can be saved but not read 
> back using cqlsh
> -
>
> Key: CASSANDRA-10625
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10625
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Piotr Kołaczkowski
>Assignee: Adam Holmberg
>Priority: Minor
> Fix For: 3.0.x, 3.x
>
>
> {noformat}
> cqlsh> insert into test.timestamp_test (pkey, ts) VALUES (1, '9999-12-31 
> 23:59:59+0000');
> cqlsh> select * from test.timestamp_test ;
>  pkey | ts
> ------+--------------------------
>     1 | 9999-12-31 23:59:59+0000
> (1 rows)
> cqlsh> insert into test.timestamp_test (pkey, ts) VALUES (1, '10000-01-01 
> 00:00:01+0000');
> cqlsh> select * from test.timestamp_test ;
> Traceback (most recent call last):
>   File "bin/../resources/cassandra/bin/cqlsh", line 1112, in 
> perform_simple_statement
> rows = self.session.execute(statement, trace=self.tracing_enabled)
>   File 
> "/home/pkolaczk/Projekty/DataStax/bdp/resources/cassandra/bin/../zipfiles/cassandra-driver-internal-only-2.7.2.zip/cassandra-driver-2.7.2/cassandra/cluster.py",
>  line 1602, in execute
> result = future.result()
>   File 
> "/home/pkolaczk/Projekty/DataStax/bdp/resources/cassandra/bin/../zipfiles/cassandra-driver-internal-only-2.7.2.zip/cassandra-driver-2.7.2/cassandra/cluster.py",
>  line 3347, in result
> raise self._final_exception
> OverflowError: date value out of range
> {noformat}
> The connection is broken afterwards:
> {noformat}
> cqlsh> insert into test.timestamp_test (pkey, ts) VALUES (1, '10000-01-01 
> 00:00:01+0000');
> NoHostAvailable: ('Unable to complete the operation against any hosts', 
> {: ConnectionShutdown('Connection to 127.0.0.1 is 
> defunct',)})
> {noformat}
> Expected behaviors (one of):
> - don't allow inserting dates larger than 9999-12-31 and document the 
> limitation
> - handle all dates up to Java Date(MAX_LONG) for writing and reading



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11093) processs restarts are failing becase native port and jmx ports are in use

2016-02-19 Thread varun (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154450#comment-15154450
 ] 

varun commented on CASSANDRA-11093:
---

Is it the patch that [~Ge] posted in the previous comment?

> processs restarts are failing becase native port and jmx ports are in use
> -
>
> Key: CASSANDRA-11093
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11093
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
> Environment: PROD
>Reporter: varun
>Assignee: Sam Tunnicliffe
>Priority: Minor
>  Labels: lhf
>
> A process restart should automatically take care of this. But it is not and 
> it is a problem.
> The ports are considered in use even if the process has quit/died/killed 
> but the socket is in a TIME_WAIT state in the TCP FSM 
> (http://tcpipguide.com/free/t_TCPOperationalOverviewandtheTCPFiniteStateMachineF-2.htm).
> tcp 0 0 127.0.0.1:7199 0.0.0.0:* LISTEN 30099/java
> tcp 0 0 192.168.1.2:9160 0.0.0.0:* LISTEN 30099/java
> tcp 0 0 10.130.128.131:58263 10.130.128.131:9042 TIME_WAIT -
> tcp 0 0 10.130.128.131:58262 10.130.128.131:9042 TIME_WAIT -
> tcp 0 0 :::10.130.128.131:9042 :::* LISTEN 30099/java
> tcp 0 0 :::10.130.128.131:9042 :::10.130.128.131:57191 ESTABLISHED 
> 30099/java
> tcp 0 0 :::10.130.128.131:9042 :::10.130.128.131:57190 ESTABLISHED 
> 30099/java
> tcp 0 0 :::10.130.128.131:9042 :::10.176.70.226:37105 ESTABLISHED 
> 30099/java
> tcp 0 0 :::127.0.0.1:42562 :::127.0.0.1:7199 TIME_WAIT -
> tcp 0 0 :::10.130.128.131:57190 :::10.130.128.131:9042 ESTABLISHED 
> 30138/java
> tcp 0 0 :::10.130.128.131:57198 :::10.130.128.131:9042 ESTABLISHED 
> 30138/java
> tcp 0 0 :::10.130.128.131:9042 :::10.176.70.226:37106 ESTABLISHED 
> 30099/java
> tcp 0 0 :::10.130.128.131:57197 :::10.130.128.131:9042 ESTABLISHED 
> 30138/java
> tcp 0 0 :::10.130.128.131:57191 :::10.130.128.131:9042 ESTABLISHED 
> 30138/java
> tcp 0 0 :::10.130.128.131:9042 :::10.130.128.131:57198 ESTABLISHED 
> 30099/java
> tcp 0 0 :::10.130.128.131:9042 :::10.130.128.131:57197 ESTABLISHED 
> 30099/java
> tcp 0 0 :::127.0.0.1:42567 :::127.0.0.1:7199 TIME_WAIT -
> I had to write a restart handler that does a netstat call and looks to make 
> sure all the TIME_WAIT states exhaust before starting the node back up. This 
> happened on 26 of the 56 nodes when a rolling restart was performed. The issue was 
> mostly around JMX port 7199. There was another rolling restart done on those 26 
> nodes to remediate the JMX port issue; in that restart, one node had the issue 
> where port 9042 was considered used after the restart, and the process died after 
> a bit of time.
> What needs to be done for the native port 9042 and JMX port 7199 is to create the 
> underlying TCP socket with SO_REUSEADDR. This eases the restriction and allows the 
> port to be bound by the process even if there are 
> sockets open to that port in the TCP FSM, as long as there is no other 
> process listening on that port. There is a Java method available to set this 
> option in java.net.Socket 
> https://docs.oracle.com/javase/7/docs/api/java/net/Socket.html#setReuseAddress%28boolean%29.
> native port 9042: 
> https://github.com/apache/cassandra/blob/4a0d1caa262af3b6f2b6d329e45766b4df845a88/tools/stress/src/org/apache/cassandra/stress/settings/SettingsPort.java#L38
> JMX port 7199: 
> https://github.com/apache/cassandra/blob/4a0d1caa262af3b6f2b6d329e45766b4df845a88/tools/stress/src/org/apache/cassandra/stress/settings/SettingsPort.java#L40
> Looking at the code itself, this option is already being set on the thrift port (9160 
> by default) and on the internode communication ports, unencrypted (7000 by default) 
> and SSL encrypted (7001 by default).
> https://github.com/apache/cassandra/search?utf8=%E2%9C%93=setReuseAddress
> This needs to be set on the native and JMX ports as well.
> References:
> https://unix.stackexchange.com/questions/258379/when-is-a-port-considered-being-used/258380?noredirect=1
> https://stackoverflow.com/questions/23531558/allow-restarting-java-application-with-jmx-monitoring-enabled-immediately
> https://docs.oracle.com/javase/8/docs/technotes/guides/rmi/socketfactory/
> https://github.com/apache/cassandra/search?utf8=%E2%9C%93=setReuseAddress
> https://docs.oracle.com/javase/7/docs/api/java/net/Socket.html#setReuseAddress%28boolean%293
> https://github.com/apache/cassandra/blob/4a0d1caa262af3b6f2b6d329e45766b4df845a88/tools/stress/src/org/apache/cassandra/stress/settings/SettingsPort.java#L38
> https://github.com/apache/cassandra/blob/4a0d1caa262af3b6f2b6d329e45766b4df845a88/tools/stress/src/org/apache/cassandra/stress/settings/SettingsPort.java#L40



--
This message was 

[jira] [Commented] (CASSANDRA-11093) processs restarts are failing becase native port and jmx ports are in use

2016-02-19 Thread varun (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154447#comment-15154447
 ] 

varun commented on CASSANDRA-11093:
---

Hi [~beobal],

It says patch is available. Which patch version is this referring to? I cannot 
find the details in the ticket.

Thanks

> processs restarts are failing becase native port and jmx ports are in use
> -
>
> Key: CASSANDRA-11093
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11093
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
> Environment: PROD
>Reporter: varun
>Assignee: Sam Tunnicliffe
>Priority: Minor
>  Labels: lhf
>
> A process restart should automatically take care of this. But it is not and 
> it is a problem.
> The ports are considered in use even if the process has quit/died/killed 
> but the socket is in a TIME_WAIT state in the TCP FSM 
> (http://tcpipguide.com/free/t_TCPOperationalOverviewandtheTCPFiniteStateMachineF-2.htm).
> tcp 0 0 127.0.0.1:7199 0.0.0.0:* LISTEN 30099/java
> tcp 0 0 192.168.1.2:9160 0.0.0.0:* LISTEN 30099/java
> tcp 0 0 10.130.128.131:58263 10.130.128.131:9042 TIME_WAIT -
> tcp 0 0 10.130.128.131:58262 10.130.128.131:9042 TIME_WAIT -
> tcp 0 0 :::10.130.128.131:9042 :::* LISTEN 30099/java
> tcp 0 0 :::10.130.128.131:9042 :::10.130.128.131:57191 ESTABLISHED 
> 30099/java
> tcp 0 0 :::10.130.128.131:9042 :::10.130.128.131:57190 ESTABLISHED 
> 30099/java
> tcp 0 0 :::10.130.128.131:9042 :::10.176.70.226:37105 ESTABLISHED 
> 30099/java
> tcp 0 0 :::127.0.0.1:42562 :::127.0.0.1:7199 TIME_WAIT -
> tcp 0 0 :::10.130.128.131:57190 :::10.130.128.131:9042 ESTABLISHED 
> 30138/java
> tcp 0 0 :::10.130.128.131:57198 :::10.130.128.131:9042 ESTABLISHED 
> 30138/java
> tcp 0 0 :::10.130.128.131:9042 :::10.176.70.226:37106 ESTABLISHED 
> 30099/java
> tcp 0 0 :::10.130.128.131:57197 :::10.130.128.131:9042 ESTABLISHED 
> 30138/java
> tcp 0 0 :::10.130.128.131:57191 :::10.130.128.131:9042 ESTABLISHED 
> 30138/java
> tcp 0 0 :::10.130.128.131:9042 :::10.130.128.131:57198 ESTABLISHED 
> 30099/java
> tcp 0 0 :::10.130.128.131:9042 :::10.130.128.131:57197 ESTABLISHED 
> 30099/java
> tcp 0 0 :::127.0.0.1:42567 :::127.0.0.1:7199 TIME_WAIT -
> I had to write a restart handler that does a netstat call and looks to make 
> sure all the TIME_WAIT states exhaust before starting the node back up. This 
> happened on 26 of the 56 nodes when a rolling restart was performed. The issue was 
> mostly around JMX port 7199. There was another rolling restart done on those 26 
> nodes to remediate the JMX port issue; in that restart, one node had the issue 
> where port 9042 was considered used after the restart, and the process died after 
> a bit of time.
> What needs to be done for the native port 9042 and JMX port 7199 is to create the 
> underlying TCP socket with SO_REUSEADDR. This eases the restriction and allows the 
> port to be bound by the process even if there are 
> sockets open to that port in the TCP FSM, as long as there is no other 
> process listening on that port. There is a Java method available to set this 
> option in java.net.Socket 
> https://docs.oracle.com/javase/7/docs/api/java/net/Socket.html#setReuseAddress%28boolean%29.
> native port 9042: 
> https://github.com/apache/cassandra/blob/4a0d1caa262af3b6f2b6d329e45766b4df845a88/tools/stress/src/org/apache/cassandra/stress/settings/SettingsPort.java#L38
> JMX port 7199: 
> https://github.com/apache/cassandra/blob/4a0d1caa262af3b6f2b6d329e45766b4df845a88/tools/stress/src/org/apache/cassandra/stress/settings/SettingsPort.java#L40
> Looking at the code itself, this option is already being set on the thrift port (9160 
> by default) and on the internode communication ports, unencrypted (7000 by default) 
> and SSL encrypted (7001 by default).
> https://github.com/apache/cassandra/search?utf8=%E2%9C%93=setReuseAddress
> This needs to be set on the native and JMX ports as well.
> References:
> https://unix.stackexchange.com/questions/258379/when-is-a-port-considered-being-used/258380?noredirect=1
> https://stackoverflow.com/questions/23531558/allow-restarting-java-application-with-jmx-monitoring-enabled-immediately
> https://docs.oracle.com/javase/8/docs/technotes/guides/rmi/socketfactory/
> https://github.com/apache/cassandra/search?utf8=%E2%9C%93=setReuseAddress
> https://docs.oracle.com/javase/7/docs/api/java/net/Socket.html#setReuseAddress%28boolean%293
> https://github.com/apache/cassandra/blob/4a0d1caa262af3b6f2b6d329e45766b4df845a88/tools/stress/src/org/apache/cassandra/stress/settings/SettingsPort.java#L38
> 

[jira] [Commented] (CASSANDRA-10458) cqlshrc: add option to always use ssl

2016-02-19 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154434#comment-15154434
 ] 

Stefan Podkowinski commented on CASSANDRA-10458:


Merge LGTM for this ticket.

> cqlshrc: add option to always use ssl
> -
>
> Key: CASSANDRA-10458
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10458
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Matt Wringe
>Assignee: Stefan Podkowinski
>  Labels: lhf
>
> I am currently running on a system in which my cassandra cluster is only 
> accessible over tls.
> The cqlshrc file is used to specify the host, the certificates and other 
> configurations, but one option its missing is to always connect over ssl.
> I would like to be able to call 'cqlsh' instead of always having to specify 
> 'cqlsh --ssl'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8958) Add client to cqlsh SHOW_SESSION

2016-02-19 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-8958:

Labels: lhf  (was: )

> Add client to cqlsh SHOW_SESSION
> 
>
> Key: CASSANDRA-8958
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8958
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Stefania
>Priority: Minor
>  Labels: lhf
> Fix For: 3.x
>
>
> Once the python driver supports it, 
> https://datastax-oss.atlassian.net/browse/PYTHON-235, add the client to cqlsh 
> {{SHOW_SESSION}} as done in this commit:
> https://github.com/apache/cassandra/commit/249f79d3718fa05347d60e09f9d3fa15059bd3d3
> Also, update the bundled python driver.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8969) Add indication in cassandra.yaml that rpc timeouts going too high will cause memory build up

2016-02-19 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154387#comment-15154387
 ] 

Aleksey Yeschenko commented on CASSANDRA-8969:
--

Can you cook up a patch for it? Should be trivial enough. Thanks.

> Add indication in cassandra.yaml that rpc timeouts going too high will cause 
> memory build up
> 
>
> Key: CASSANDRA-8969
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8969
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Jeremy Hanna
>Assignee: Jeremy Hanna
>Priority: Minor
>  Labels: lhf
>
> It would be helpful to communicate that setting the rpc timeouts too high may 
> cause memory problems on the server as it can become overloaded and has to 
> retain the in flight requests in memory.  I'll get this done but just adding 
> the ticket as a placeholder for memory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8969) Add indication in cassandra.yaml that rpc timeouts going too high will cause memory build up

2016-02-19 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-8969:
-
Priority: Minor  (was: Major)

> Add indication in cassandra.yaml that rpc timeouts going too high will cause 
> memory build up
> 
>
> Key: CASSANDRA-8969
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8969
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Jeremy Hanna
>Assignee: Jeremy Hanna
>Priority: Minor
>  Labels: lhf
>
> It would be helpful to communicate that setting the rpc timeouts too high may 
> cause memory problems on the server as it can become overloaded and has to 
> retain the in flight requests in memory.  I'll get this done but just adding 
> the ticket as a placeholder for memory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8969) Add indication in cassandra.yaml that rpc timeouts going too high will cause memory build up

2016-02-19 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-8969:

Labels: lhf  (was: )

> Add indication in cassandra.yaml that rpc timeouts going too high will cause 
> memory build up
> 
>
> Key: CASSANDRA-8969
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8969
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Jeremy Hanna
>Assignee: Jeremy Hanna
>  Labels: lhf
>
> It would be helpful to communicate that setting the rpc timeouts too high may 
> cause memory problems on the server as it can become overloaded and has to 
> retain the in flight requests in memory.  I'll get this done but just adding 
> the ticket as a placeholder for memory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9692) Print sensible units for all log messages

2016-02-19 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154385#comment-15154385
 ] 

Benedict commented on CASSANDRA-9692:
-

Hi [~Giampaolo],

Items 2-4: feel free to address them in this ticket - and not just those; in 
your link 4, the transfer _rate_ doesn't need to be MB/s; in some cases KiB/s is 
probably better, and we probably mean MiB/s rather than MB/s anyway.  Preferably 
the units of all messages should be assessed and made better/dynamic.

Regarding 1, there is an advantage to capping the lower bound at KiB: it is 
consistent for parsing, and hopefully makes it easier to spot where the value 
they are looking for is, and immediately see its scale.  Also, it is not at all 
difficult to interpret the meaning of 0.099KiB, although I suppose the rounding 
error might confuse people if they want to find a file with exactly that size.  
Personally, I prefer the consistency of sticking with KiB, but that's obvious 
since I wrote the method in the first place :)
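
For illustration only - this is not the existing formatting method discussed 
above - here is a minimal, self-contained sketch of the "dynamic" alternative, 
where the unit is chosen so the printed value stays at or above 1; the class 
and method names are assumptions:

{code}
public final class BinaryUnitFormatter
{
    private static final String[] UNITS = { "B", "KiB", "MiB", "GiB", "TiB" };

    // Picks the largest binary unit that keeps the value >= 1, so small
    // values print as bytes instead of fractions of a KiB (e.g. 0.099KiB).
    public static String prettyPrint(long bytes)
    {
        double value = bytes;
        int unit = 0;
        while (value >= 1024 && unit < UNITS.length - 1)
        {
            value /= 1024;
            unit++;
        }
        return String.format("%.3f %s", value, UNITS[unit]);
    }

    public static void main(String[] args)
    {
        System.out.println(prettyPrint(101L));             // 101.000 B
        System.out.println(prettyPrint(5 * 1024 * 1024L)); // 5.000 MiB
    }
}
{code}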


> Print sensible units for all log messages
> -
>
> Key: CASSANDRA-9692
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9692
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benedict
>Priority: Minor
>  Labels: lhf
> Fix For: 3.x
>
>
> Like CASSANDRA-9691, this has bugged me too long. it also adversely impacts 
> log analysis. I've introduced some improvements to the bits I touched for 
> CASSANDRA-9681, but we should do this across the codebase. It's a small 
> investment for a lot of long term clarity in the logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-6815) Decided if we want to bring back thrift HSHA in 2.0.7

2016-02-19 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-6815.
---
Resolution: Won't Fix
  Assignee: (was: Pavel Yaskevich)

> Decided if we want to bring back thrift HSHA in 2.0.7
> -
>
> Key: CASSANDRA-6815
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6815
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>
> This is the follow-up to CASSANDRA-6285, to decide what we want to do 
> regarding thrift servers moving forward. My reading of CASSANDRA-6285 
> suggests that the possible options include:
> # bring back the old HSHA implementation from 1.2 as "hsha" and make the 
> disruptor implementation be "disruptor_hsha".
> # use the new TThreadedSelectorServer from thrift as "hsha", making the 
> disruptor implementation "disruptor_hsha" as above
> # just wait for Pavel to fix the disruptor implementation for off-heap 
> buffers to switch back to that, keeping on-heap buffers until then.
> # keep on-heap buffers for the disruptor implementation and do nothing 
> in particular.
> I could be missing some options and we can probably do some mix of those. I 
> don't have a particular opinion to offer on the matter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11110) Parser improvements for SASI

2016-02-19 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154377#comment-15154377
 ] 

Jonathan Ellis commented on CASSANDRA-11110:


Bumping now that 11067 is in.

> Parser improvements for SASI
> 
>
> Key: CASSANDRA-11110
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11110
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Jonathan Ellis
>Assignee: Pavel Yaskevich
>
> Shouldn't require ALLOW FILTERING for SASI inequalities.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11164) Order and filter cipher suites correctly

2016-02-19 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-11164:

Assignee: (was: Stefania)
Reviewer: Stefania

> Order and filter cipher suites correctly
> 
>
> Key: CASSANDRA-11164
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11164
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Tom Petracca
>Priority: Minor
> Fix For: 2.2.x
>
> Attachments: 11164-2.2.txt, 11164-on-10508-2.2.patch
>
>
> As pointed out in https://issues.apache.org/jira/browse/CASSANDRA-10508, 
> SSLFactory.filterCipherSuites() doesn't respect the ordering of desired 
> ciphers in cassandra.yaml.
> Also the fix that occurred for 
> https://issues.apache.org/jira/browse/CASSANDRA-3278 is incomplete and needs 
> to be applied to all locations where we create an SSLSocket so that JCE is 
> not required out of the box or with additional configuration.
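
As a hedged illustration of the ordering point above (not the actual 
{{SSLFactory.filterCipherSuites()}} code), a filter can walk the desired list 
from cassandra.yaml and keep only the supported suites, so the configured 
preference order is preserved; the class and method names are assumptions:

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public final class OrderPreservingCipherFilter
{
    // Illustrative sketch: returns the desired suites, in their configured
    // order, restricted to those the SSL engine/socket actually supports.
    public static String[] filter(String[] desired, String[] supported)
    {
        Set<String> supportedSet = new HashSet<>(Arrays.asList(supported));
        List<String> kept = new ArrayList<>();
        for (String suite : desired)
            if (supportedSet.contains(suite))
                kept.add(suite);
        return kept.toArray(new String[kept.size()]);
    }
}
{code}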



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11164) Order and filter cipher suites correctly

2016-02-19 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11164:
-
Assignee: Stefania

> Order and filter cipher suites correctly
> 
>
> Key: CASSANDRA-11164
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11164
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Tom Petracca
>Assignee: Stefania
>Priority: Minor
> Fix For: 2.2.x
>
> Attachments: 11164-2.2.txt, 11164-on-10508-2.2.patch
>
>
> As pointed out in https://issues.apache.org/jira/browse/CASSANDRA-10508, 
> SSLFactory.filterCipherSuites() doesn't respect the ordering of desired 
> ciphers in cassandra.yaml.
> Also the fix that occurred for 
> https://issues.apache.org/jira/browse/CASSANDRA-3278 is incomplete and needs 
> to be applied to all locations where we create an SSLSocket so that JCE is 
> not required out of the box or with additional configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11158) AssertionError: null in Slice$Bound.create

2016-02-19 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11158:
-
Reviewer: Sylvain Lebresne

> AssertionError: null in Slice$Bound.create
> --
>
> Key: CASSANDRA-11158
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11158
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction, Local Write-Read Paths
>Reporter: Samu Kallio
>Assignee: Branimir Lambov
>Priority: Critical
> Fix For: 3.0.x
>
>
> We've been running Cassandra 3.0.2 for around a week now. Yesterday, we had a 
> network event that briefly isolated one node from others in a 3 node cluster. 
> Since then, we've been seeing a constant stream of "Finished hinted handoff" 
> messages, as well as:
> {noformat}
> WARN  16:34:39 Uncaught exception on thread 
> Thread[SharedPool-Worker-1,5,main]: {}
> java.lang.AssertionError: null
> at org.apache.cassandra.db.Slice$Bound.create(Slice.java:365) 
> ~[apache-cassandra-3.0.2.jar:3.0.2]
> at 
> org.apache.cassandra.db.Slice$Bound$Serializer.deserializeValues(Slice.java:553)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
> at 
> org.apache.cassandra.db.ClusteringPrefix$Serializer.deserialize(ClusteringPrefix.java:274)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
> at org.apache.cassandra.db.Serializers$2.deserialize(Serializers.java:115) 
> ~[apache-cassandra-3.0.2.jar:3.0.2]
> at org.apache.cassandra.db.Serializers$2.deserialize(Serializers.java:107) 
> ~[apache-cassandra-3.0.2.jar:3.0.2]
> at 
> org.apache.cassandra.io.sstable.IndexHelper$IndexInfo$Serializer.deserialize(IndexHelper.java:149)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
> at 
> org.apache.cassandra.db.RowIndexEntry$Serializer.deserialize(RowIndexEntry.java:218)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableReader.getPosition(BigTableReader.java:216)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.getPosition(SSTableReader.java:1568)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
> at 
> org.apache.cassandra.db.columniterator.SSTableIterator.(SSTableIterator.java:36)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableReader.iterator(BigTableReader.java:62)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
> at 
> org.apache.cassandra.db.SinglePartitionReadCommand.queryMemtableAndSSTablesInTimestampOrder(SinglePartitionReadCommand.java:715)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
> at 
> org.apache.cassandra.db.SinglePartitionReadCommand.queryMemtableAndDiskInternal(SinglePartitionReadCommand.java:482)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
> at 
> org.apache.cassandra.db.SinglePartitionReadCommand.queryMemtableAndDisk(SinglePartitionReadCommand.java:459)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
> at 
> org.apache.cassandra.db.SinglePartitionReadCommand.queryStorage(SinglePartitionReadCommand.java:325)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
> at org.apache.cassandra.db.ReadCommand.executeLocally(ReadCommand.java:350) 
> ~[apache-cassandra-3.0.2.jar:3.0.2]
> at 
> org.apache.cassandra.db.ReadCommandVerbHandler.doVerb(ReadCommandVerbHandler.java:45)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
> at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67) 
> ~[apache-cassandra-3.0.2.jar:3.0.2]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_72]
> at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.0.2.jar:3.0.2]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_72]
> {noformat}
> and also
> {noformat}
> ERROR 06:10:11 Exception in thread Thread[CompactionExecutor:1,1,main]
> java.lang.AssertionError: null
> at org.apache.cassandra.db.Slice$Bound.create(Slice.java:365) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.Slice$Bound$Serializer.deserializeValues(Slice.java:553)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.ClusteringPrefix$Serializer.deserialize(ClusteringPrefix.java:274)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at org.apache.cassandra.db.Serializers$2.deserialize(Serializers.java:115) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at org.apache.cassandra.db.Serializers$2.deserialize(Serializers.java:107) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.io.sstable.IndexHelper$IndexInfo$Serializer.deserialize(IndexHelper.java:149)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.RowIndexEntry$Serializer.deserialize(RowIndexEntry.java:218)
>  

[jira] [Updated] (CASSANDRA-11093) process restarts are failing because native port and jmx ports are in use

2016-02-19 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11093:
-
Assignee: Sam Tunnicliffe

> process restarts are failing because native port and jmx ports are in use
> -
>
> Key: CASSANDRA-11093
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11093
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
> Environment: PROD
>Reporter: varun
>Assignee: Sam Tunnicliffe
>Priority: Minor
>  Labels: lhf
>
> A process restart should automatically take care of this, but it does not, and 
> that is a problem.
> The ports are considered in use even if the process has quit/died/been killed 
> but a socket is still in the TIME_WAIT state in the TCP FSM 
> (http://tcpipguide.com/free/t_TCPOperationalOverviewandtheTCPFiniteStateMachineF-2.htm).
> tcp 0 0 127.0.0.1:7199 0.0.0.0:* LISTEN 30099/java
> tcp 0 0 192.168.1.2:9160 0.0.0.0:* LISTEN 30099/java
> tcp 0 0 10.130.128.131:58263 10.130.128.131:9042 TIME_WAIT -
> tcp 0 0 10.130.128.131:58262 10.130.128.131:9042 TIME_WAIT -
> tcp 0 0 :::10.130.128.131:9042 :::* LISTEN 30099/java
> tcp 0 0 :::10.130.128.131:9042 :::10.130.128.131:57191 ESTABLISHED 
> 30099/java
> tcp 0 0 :::10.130.128.131:9042 :::10.130.128.131:57190 ESTABLISHED 
> 30099/java
> tcp 0 0 :::10.130.128.131:9042 :::10.176.70.226:37105 ESTABLISHED 
> 30099/java
> tcp 0 0 :::127.0.0.1:42562 :::127.0.0.1:7199 TIME_WAIT -
> tcp 0 0 :::10.130.128.131:57190 :::10.130.128.131:9042 ESTABLISHED 
> 30138/java
> tcp 0 0 :::10.130.128.131:57198 :::10.130.128.131:9042 ESTABLISHED 
> 30138/java
> tcp 0 0 :::10.130.128.131:9042 :::10.176.70.226:37106 ESTABLISHED 
> 30099/java
> tcp 0 0 :::10.130.128.131:57197 :::10.130.128.131:9042 ESTABLISHED 
> 30138/java
> tcp 0 0 :::10.130.128.131:57191 :::10.130.128.131:9042 ESTABLISHED 
> 30138/java
> tcp 0 0 :::10.130.128.131:9042 :::10.130.128.131:57198 ESTABLISHED 
> 30099/java
> tcp 0 0 :::10.130.128.131:9042 :::10.130.128.131:57197 ESTABLISHED 
> 30099/java
> tcp 0 0 :::127.0.0.1:42567 :::127.0.0.1:7199 TIME_WAIT -
> I had to write a restart handler that does a netstat call and waits for all 
> the TIME_WAIT states to drain before starting the node back up. This 
> happened on 26 of the 56 nodes when a rolling restart was performed. The issue 
> was mostly around JMX port 7199. Another rolling restart was done on the 
> 26 nodes to remediate the JMX port issue; in that restart, one node hit the 
> issue where port 9042 was considered in use after the restart and the process 
> died after a bit of time.
> What needs to be done for the native port 9042 and JMX port 7199 is to 
> create the underlying TCP socket with SO_REUSEADDR. This eases the 
> restriction and allows the port to be bound by a process even if there are 
> sockets open to that port in the TCP FSM, as long as there is no other 
> process listening on that port. There is a Java method available to set this 
> option in java.net.Socket: 
> https://docs.oracle.com/javase/7/docs/api/java/net/Socket.html#setReuseAddress%28boolean%29.
> native port 9042: 
> https://github.com/apache/cassandra/blob/4a0d1caa262af3b6f2b6d329e45766b4df845a88/tools/stress/src/org/apache/cassandra/stress/settings/SettingsPort.java#L38
> JMX port 7199: 
> https://github.com/apache/cassandra/blob/4a0d1caa262af3b6f2b6d329e45766b4df845a88/tools/stress/src/org/apache/cassandra/stress/settings/SettingsPort.java#L40
> Looking at the code itself, this option is already being set on the thrift port 
> (9160 by default) and the internode communication ports, unencrypted (7000 by 
> default) and SSL encrypted (7001 by default).
> https://github.com/apache/cassandra/search?utf8=%E2%9C%93=setReuseAddress
> This needs to be set on the native and JMX ports as well.
> References:
> https://unix.stackexchange.com/questions/258379/when-is-a-port-considered-being-used/258380?noredirect=1
> https://stackoverflow.com/questions/23531558/allow-restarting-java-application-with-jmx-monitoring-enabled-immediately
> https://docs.oracle.com/javase/8/docs/technotes/guides/rmi/socketfactory/
> https://github.com/apache/cassandra/search?utf8=%E2%9C%93=setReuseAddress
> https://docs.oracle.com/javase/7/docs/api/java/net/Socket.html#setReuseAddress%28boolean%293
> https://github.com/apache/cassandra/blob/4a0d1caa262af3b6f2b6d329e45766b4df845a88/tools/stress/src/org/apache/cassandra/stress/settings/SettingsPort.java#L38
> https://github.com/apache/cassandra/blob/4a0d1caa262af3b6f2b6d329e45766b4df845a88/tools/stress/src/org/apache/cassandra/stress/settings/SettingsPort.java#L40
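
A minimal, hedged sketch of the suggestion above (not the actual Cassandra 
networking code): create an unbound server socket, enable SO_REUSEADDR before 
binding, and then bind, so the port can be reclaimed while old connections to 
it linger in TIME_WAIT; the class name and addresses are illustrative:

{code}
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public final class ReuseAddressExample
{
    public static void main(String[] args) throws Exception
    {
        ServerSocket server = new ServerSocket(); // created unbound
        // Must be enabled before bind(); allows re-binding the port even if
        // old sockets to it are still in TIME_WAIT, as long as no other
        // process is listening on it.
        server.setReuseAddress(true);
        server.bind(new InetSocketAddress("127.0.0.1", 9042));
        System.out.println("Bound " + server.getLocalSocketAddress());
        server.close();
    }
}
{code}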



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11187) DESC table on a table with UDT's should also print its Types

2016-02-19 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-11187:
--
Issue Type: Improvement  (was: Bug)

> DESC table on a table with UDT's should also print its Types
> -
>
> Key: CASSANDRA-11187
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11187
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sebastian Estevez
>Priority: Minor
>
> Lots of folks use desc table to capture table definitions. When you describe 
> a table with UDTs today it doesn't also spit out its CREATE TYPE statements, 
> which makes it tricky and inconvenient to share table definitions with UDTs.
> Current functionality:
> {code}
> > desc TABLE payments.payments ;
> CREATE TABLE payments.payments (
> branch text,
> timebucket text,
> create_ts timestamp,
> eventid text,
> applicable_manufacturer_or_applicable_gpo_making_payment_country text,
> applicable_manufacturer_or_applicable_gpo_making_payment_id text,
> applicable_manufacturer_or_applicable_gpo_making_payment_name text,
> applicable_manufacturer_or_applicable_gpo_making_payment_state text,
> charity_indicator text,
> city_of_travel text,
> contextual_information text,
> country_of_travel text,
> covered_recipient_type text,
> date_of_payment timestamp,
> delay_in_publication_indicator text,
> dispute_status_for_publication text,
> form_of_payment_or_transfer_of_value text,
> name_of_associated_covered_device_or_medical_supply1 text,
> name_of_associated_covered_device_or_medical_supply2 text,
> name_of_associated_covered_device_or_medical_supply3 text,
> name_of_associated_covered_device_or_medical_supply4 text,
> name_of_associated_covered_device_or_medical_supply5 text,
> name_of_associated_covered_drug_or_biological1 text,
> name_of_associated_covered_drug_or_biological2 text,
> name_of_associated_covered_drug_or_biological3 text,
> name_of_associated_covered_drug_or_biological4 text,
> name_of_associated_covered_drug_or_biological5 text,
> name_of_third_party_entity_receiving_payment_or_transfer_of_value text,
> nature_of_payment_or_transfer_of_value text,
> ndc_of_associated_covered_drug_or_biological1 text,
> ndc_of_associated_covered_drug_or_biological2 text,
> ndc_of_associated_covered_drug_or_biological3 text,
> ndc_of_associated_covered_drug_or_biological4 text,
> ndc_of_associated_covered_drug_or_biological5 text,
> number_of_payments_included_in_total_amount double,
> payment_publication_date timestamp,
> physicians set,
> product_indicator text,
> program_year text,
> record_id text,
> solr_query text,
> state_of_travel text,
> submitting_applicable_manufacturer_or_applicable_gpo_name text,
> teaching_hospital_id text,
> teaching_hospital_name text,
> third_party_equals_covered_recipient_indicator text,
> third_party_payment_recipient_indicator text,
> total_amount_of_payment_usdollars double,
> PRIMARY KEY ((branch, timebucket), create_ts, eventid)
> )WITH CLUSTERING ORDER BY (create_ts ASC, eventid ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
> AND compression = {'sstable_compression': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99.0PERCENTILE';
> {code}
> Desired functionality:
> {code}
> CREATE TYPE physician(
> physician_first_name text,
> physician_last_name text,
> physician_license_state_code1 text,
> physician_license_state_code2 text,
> physician_license_state_code3 text,
> physician_license_state_code4 text,
> physician_license_state_code5 text,
> physician_middle_name text,
> physician_name_suffix text,
> physician_ownership_indicator text,
> physician_primary_type text,
> physician_profile_id text,
> physician_specialty text
> );
> CREATE TYPE recipient(
> recipient_city text,
> recipient_country text,
> recipient_postal_code text,
> recipient_primary_business_street_address_line1 text,
> recipient_primary_business_street_address_line2 text,
> recipient_province text,
> recipient_state text,
> recipient_zip_code text
> );
> CREATE TABLE payments (
> branch text,
> 

[2/2] cassandra git commit: Merge branch cassandra-3.0 into trunk

2016-02-19 Thread blerer
Merge branch cassandra-3.0 into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/acb2ab07
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/acb2ab07
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/acb2ab07

Branch: refs/heads/trunk
Commit: acb2ab0723752e6e21877413866e9e7872194ffd
Parents: b3eeadf a283890
Author: Benjamin Lerer 
Authored: Fri Feb 19 16:17:16 2016 +0100
Committer: Benjamin Lerer 
Committed: Fri Feb 19 16:17:27 2016 +0100

--

--




[jira] [Updated] (CASSANDRA-11064) Failed aggregate creation breaks server permanently

2016-02-19 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11064:
-
Reproduced In: 3.2.1, 3.2  (was: 3.2, 3.2.1)
 Reviewer: Sylvain Lebresne

> Failed aggregate creation breaks server permanently
> ---
>
> Key: CASSANDRA-11064
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11064
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Olivier Michallat
>Assignee: Robert Stupp
> Fix For: 3.0.x
>
>
> While testing edge cases around aggregates, I tried the following to see if 
> custom types were supported:
> {code}
> ccm create v321 -v3.2.1 -n3
> ccm updateconf enable_user_defined_functions:true
> ccm start
> ccm node1 cqlsh
> CREATE FUNCTION id(i 'DynamicCompositeType(s => UTF8Type, i => Int32Type)')
> RETURNS NULL ON NULL INPUT
> RETURNS 'DynamicCompositeType(s => UTF8Type, i => Int32Type)'
> LANGUAGE java
> AS 'return i;';
> // function created successfully
> CREATE AGGREGATE ag()
> SFUNC id
> STYPE 'DynamicCompositeType(s => UTF8Type, i => Int32Type)'
> INITCOND 's@foo:i@32';
> ServerError:  message="java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> org.apache.cassandra.exceptions.SyntaxException: Failed parsing CQL term: 
> [s@foo:i@32] reason: SyntaxException line 1:1 no viable alternative at 
> character '@'">{code}
> Despite the error, the aggregate appears in system tables:
> {code}
> select * from system_schema.aggregates;
>  keyspace_name | aggregate_name | ...
> ---++ ...
>   test | ag | ...
> {code}
> But you can't drop it, and trying to drop its function produces the server 
> error again:
> {code}
> DROP AGGREGATE ag;
> InvalidRequest: code=2200 [Invalid query] message="Cannot drop non existing 
> aggregate 'test.ag'"
> DROP FUNCTION id;
> ServerError:  message="java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> org.apache.cassandra.exceptions.SyntaxException: Failed parsing CQL term: 
> [s@foo:i@32] reason: SyntaxException line 1:1 no viable alternative at 
> character '@'">
> {code}
> What's worse, it's now impossible to restart the server:
> {code}
> ccm stop; ccm start
> org.apache.cassandra.exceptions.SyntaxException: Failed parsing CQL term: 
> [s@foo:i@32] reason: SyntaxException line 1:1 no viable alternative at 
> character '@'
>   at 
> org.apache.cassandra.cql3.CQLFragmentParser.parseAny(CQLFragmentParser.java:48)
>   at org.apache.cassandra.cql3.Terms.asBytes(Terms.java:51)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.createUDAFromRow(SchemaKeyspace.java:1225)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchUDAs(SchemaKeyspace.java:1204)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchFunctions(SchemaKeyspace.java:1129)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:897)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspacesWithout(SchemaKeyspace.java:872)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchNonSystemKeyspaces(SchemaKeyspace.java:860)
>   at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:125)
>   at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:115)
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:229)
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:551)
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:680)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10840) Replacing an aggregate with a new version doesn't reset INITCOND

2016-02-19 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154336#comment-15154336
 ] 

Benjamin Lerer commented on CASSANDRA-10840:


It looks good to me. Thanks :-)

> Replacing an aggregate with a new version doesn't reset INITCOND
> 
>
> Key: CASSANDRA-10840
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10840
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Observed in Cassandra 2.2.4, though it might be an issue 
> in 3.0 as well
>Reporter: Sandeep Tamhankar
>Assignee: Robert Stupp
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> {code}
> use simplex;
>   CREATE FUNCTION state_group_and_sum(state map<int, int>, star_rating 
> int)
>   CALLED ON NULL INPUT
>   RETURNS map<int, int>
>   LANGUAGE java
>   AS 'if (state.get(star_rating) == null) 
> state.put(star_rating, 1); else state.put(star_rating, ((Integer) 
> state.get(star_rating)) + 1); return state;';
>   CREATE FUNCTION percent_stars(state map<int, int>)
>   RETURNS NULL ON NULL INPUT
>   RETURNS map<int, int>
>   LANGUAGE java AS $$
> Integer sum = 0; 
> for(Object k : state.keySet()) { 
> sum = sum + (Integer) state.get((Integer) k);
> }
> java.util.Map<Integer, Integer> results = new java.util.HashMap<Integer, Integer>();
> for(Object k : state.keySet()) {
> results.put((Integer) k, ((Integer) state.get((Integer) k))*100 / sum);
> }
> return results;
> $$;
> {code}
> {code}
> CREATE OR REPLACE AGGREGATE group_and_sum(int)
> SFUNC state_group_and_sum
> STYPE map<int, int>
> FINALFUNC percent_stars
> INITCOND {}
> {code}
> 1. View the aggregates
> {{select * from system.schema_aggregates;}}
> 2. Now update
> {code}
> CREATE OR REPLACE AGGREGATE group_and_sum(int)
> SFUNC state_group_and_sum
> STYPE map<int, int>
> FINALFUNC percent_stars
> INITCOND NULL
> {code}
> 3. View the aggregates
> {{select * from system.schema_aggregates;}}
> Expected result:
> * The update should have made initcond null
> Actual result:
> * The update did not touch INITCOND.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10818) Evaluate exposure of DataType instances from JavaUDF class

2016-02-19 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-10818:

Reviewer: Tyler Hobbs

> Evaluate exposure of DataType instances from JavaUDF class
> --
>
> Key: CASSANDRA-10818
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10818
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 3.x
>
>
> Currently UDF implementations cannot create new UDT instances.
> There's no way to create a new UDT instance without having the 
> {{com.datastax.driver.core.DataType}} to be able to call 
> {{com.datastax.driver.core.UserType.newValue()}}.
> From a quick look into the related code in {{JavaUDF}}, {{DataType}} and 
> {{UserType}} classes it looks fine to expose information about return and 
> argument types via {{JavaUDF}}.
> Have to find some solution for script UDFs - but feels doable, too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11190) Fail fast repairs

2016-02-19 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154332#comment-15154332
 ] 

Yuki Morishita commented on CASSANDRA-11190:


+1 for having flag for previous behavior.
This depends on (will be built on top of) CASSANDRA-3486, right?

> Fail fast repairs
> -
>
> Key: CASSANDRA-11190
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11190
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
>
> Currently, if one node fails any phase of the repair (validation, streaming), 
> the repair session is aborted, but the other nodes are not notified and keep 
> doing either validation or syncing with other nodes.
> With CASSANDRA-10070 automatically scheduling repairs and potentially 
> scheduling retries, it would be nice to make sure all nodes abort failed 
> repairs in order to be able to start other repairs safely on the same nodes.
> From CASSANDRA-10070:
> bq. As far as I understood, if there are nodes A, B, C running repair, A is 
> the coordinator. If validation or streaming fails on node B, the coordinator 
> (A) is notified and fails the repair session, but node C will remain doing 
> validation and/or streaming, what could cause problems (or increased load) if 
> we start another repair session on the same range.
> bq. We will probably need to extend the repair protocol to perform this 
> cleanup/abort step on failure. We already have a legacy cleanup message that 
> doesn't seem to be used in the current protocol that we could maybe reuse to 
> cleanup repair state after a failure. This repair abortion will probably have 
> intersection with CASSANDRA-3486. In any case, this is a separate (but 
> related) issue and we should address it in an independent ticket, and make 
> this ticket dependent on that.
> On CASSANDRA-5426 [~slebresne] suggested doing this to avoid unexpected 
> conditions/hangs:
> bq. I wonder if maybe we should have more of a fail-fast policy when there is 
> errors. For instance, if one node fail it's validation phase, maybe it might 
> be worth failing right away and let the user re-trigger a repair once he has 
> fixed whatever was the source of the error, rather than still 
> differencing/syncing the other nodes.
> bq. Going a bit further, I think we should add 2 messages to interrupt the 
> validation and sync phase. If only because that could be useful to users if 
> they need to stop a repair for some reason, but also, if we get an error 
> during validation from one node, we could use that to interrupt the other 
> nodes and thus fail fast while minimizing the amount of work done uselessly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9954) Improve Java-UDF timeout detection

2016-02-19 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-9954:
---
Reviewer: Tyler Hobbs

> Improve Java-UDF timeout detection
> --
>
> Key: CASSANDRA-9954
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9954
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>Assignee: Robert Stupp
> Fix For: 3.x
>
>
> CASSANDRA-9402 introduced a sandbox using a thread-pool to enforce security 
> constraints and to detect "amok UDFs" - i.e. UDFs that essentially never 
> return (e.g. {{while (true)}}).
> Currently the safest way to react to such an "amok UDF" is to _fail-fast_ - 
> to stop the C* daemon, since stopping a thread (in Java) is not a real solution.
> CASSANDRA-9890 introduced further protection by inspecting the byte-code. The 
> same mechanism can also be used to manipulate the Java-UDF byte-code.
> By manipulating the byte-code I mean adding regular "is-amok-UDF" checks to 
> the compiled code.
> EDIT: These "is-amok-UDF" checks would also work for _UNFENCED_ Java-UDFs.
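
Purely as an illustration of the "is-amok-UDF" idea - the real change would 
rewrite byte-code, and the names and timeout below are assumptions, not 
Cassandra APIs - here is a source-level sketch of a check injected at a loop 
back-edge:

{code}
public final class AmokCheckSketch
{
    // Hypothetical guard that the byte-code rewrite would insert; throwing
    // lets the caller fail the UDF instead of stopping the whole daemon.
    private static void checkDeadline(long deadlineNanos)
    {
        if (System.nanoTime() > deadlineNanos)
            throw new RuntimeException("UDF exceeded its execution time budget");
    }

    // Stand-in for a compiled UDF body containing an endless loop.
    public static int udfBody(int i, long deadlineNanos)
    {
        while (true)
        {
            checkDeadline(deadlineNanos); // injected check at the loop back-edge
            if (i > 0)
                return i;
        }
    }

    public static void main(String[] args)
    {
        long deadline = System.nanoTime() + 100_000_000L; // 100 ms budget
        try { udfBody(0, deadline); }
        catch (RuntimeException e) { System.out.println(e.getMessage()); }
    }
}
{code}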



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9325) cassandra-stress requires keystore for SSL but provides no way to configure it

2016-02-19 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-9325:
---
Reviewer: T Jake Luciani

> cassandra-stress requires keystore for SSL but provides no way to configure it
> --
>
> Key: CASSANDRA-9325
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9325
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: J.B. Langston
>Assignee: Stefan Podkowinski
>  Labels: lhf, stress
> Fix For: 2.2.x
>
>
> Even though it shouldn't be required unless client certificate authentication 
> is enabled, the stress tool is looking for a keystore in the default location 
> of conf/.keystore with the default password of cassandra. There is no command 
> line option to override these defaults so you have to provide a keystore that 
> satisfies the default. It looks for conf/.keystore in the working directory, 
> so you need to create this in the directory you are running cassandra-stress 
> from. It doesn't really matter what's in the keystore; it just needs to exist 
> in the expected location and have a password of cassandra.
> Since the keystore might be required if client certificate authentication is 
> enabled, we need to add -transport parameters for keystore and 
> keystore-password.  Ideally, these should be optional and stress shouldn't 
> require the keystore unless client certificate authentication is enabled on 
> the server.
> In case it wasn't apparent, this is for Cassandra 2.1 and later's stress 
> tool.  I actually had even more problems getting Cassandra 2.0's stress tool 
> working with SSL and gave up on it.  We probably don't need to fix 2.0; we 
> can just document that it doesn't support SSL and recommend using 2.1 instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11190) Fail fast repairs

2016-02-19 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-11190:
--
Issue Type: Improvement  (was: Bug)

> Fail fast repairs
> -
>
> Key: CASSANDRA-11190
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11190
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
>
> Currently, if one node fails any phase of the repair (validation, streaming), 
> the repair session is aborted, but the other nodes are not notified and keep 
> doing either validation or syncing with other nodes.
> With CASSANDRA-10070 automatically scheduling repairs and potentially 
> scheduling retries, it would be nice to make sure all nodes abort failed 
> repairs in order to be able to start other repairs safely on the same nodes.
> From CASSANDRA-10070:
> bq. As far as I understood, if there are nodes A, B, C running repair, A is 
> the coordinator. If validation or streaming fails on node B, the coordinator 
> (A) is notified and fails the repair session, but node C will remain doing 
> validation and/or streaming, what could cause problems (or increased load) if 
> we start another repair session on the same range.
> bq. We will probably need to extend the repair protocol to perform this 
> cleanup/abort step on failure. We already have a legacy cleanup message that 
> doesn't seem to be used in the current protocol that we could maybe reuse to 
> cleanup repair state after a failure. This repair abortion will probably have 
> intersection with CASSANDRA-3486. In any case, this is a separate (but 
> related) issue and we should address it in an independent ticket, and make 
> this ticket dependent on that.
> On CASSANDRA-5426 [~slebresne] suggested doing this to avoid unexpected 
> conditions/hangs:
> bq. I wonder if maybe we should have more of a fail-fast policy when there is 
> errors. For instance, if one node fail it's validation phase, maybe it might 
> be worth failing right away and let the user re-trigger a repair once he has 
> fixed whatever was the source of the error, rather than still 
> differencing/syncing the other nodes.
> bq. Going a bit further, I think we should add 2 messages to interrupt the 
> validation and sync phase. If only because that could be useful to users if 
> they need to stop a repair for some reason, but also, if we get an error 
> during validation from one node, we could use that to interrupt the other 
> nodes and thus fail fast while minimizing the amount of work done uselessly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11187) DESC table on a table with UDT's should also print its Types

2016-02-19 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-11187:
--
Priority: Minor  (was: Major)

> DESC table on a table with UDT's should also print its Types
> -
>
> Key: CASSANDRA-11187
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11187
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sebastian Estevez
>Priority: Minor
>
> Lots of folks use desc table to capture table definitions. When you describe 
> a table with UDTs today it doesn't also spit out its CREATE TYPE statements, 
> which makes it tricky and inconvenient to share table definitions with UDTs.
> Current functionality:
> {code}
> > desc TABLE payments.payments ;
> CREATE TABLE payments.payments (
> branch text,
> timebucket text,
> create_ts timestamp,
> eventid text,
> applicable_manufacturer_or_applicable_gpo_making_payment_country text,
> applicable_manufacturer_or_applicable_gpo_making_payment_id text,
> applicable_manufacturer_or_applicable_gpo_making_payment_name text,
> applicable_manufacturer_or_applicable_gpo_making_payment_state text,
> charity_indicator text,
> city_of_travel text,
> contextual_information text,
> country_of_travel text,
> covered_recipient_type text,
> date_of_payment timestamp,
> delay_in_publication_indicator text,
> dispute_status_for_publication text,
> form_of_payment_or_transfer_of_value text,
> name_of_associated_covered_device_or_medical_supply1 text,
> name_of_associated_covered_device_or_medical_supply2 text,
> name_of_associated_covered_device_or_medical_supply3 text,
> name_of_associated_covered_device_or_medical_supply4 text,
> name_of_associated_covered_device_or_medical_supply5 text,
> name_of_associated_covered_drug_or_biological1 text,
> name_of_associated_covered_drug_or_biological2 text,
> name_of_associated_covered_drug_or_biological3 text,
> name_of_associated_covered_drug_or_biological4 text,
> name_of_associated_covered_drug_or_biological5 text,
> name_of_third_party_entity_receiving_payment_or_transfer_of_value text,
> nature_of_payment_or_transfer_of_value text,
> ndc_of_associated_covered_drug_or_biological1 text,
> ndc_of_associated_covered_drug_or_biological2 text,
> ndc_of_associated_covered_drug_or_biological3 text,
> ndc_of_associated_covered_drug_or_biological4 text,
> ndc_of_associated_covered_drug_or_biological5 text,
> number_of_payments_included_in_total_amount double,
> payment_publication_date timestamp,
> physicians set,
> product_indicator text,
> program_year text,
> record_id text,
> solr_query text,
> state_of_travel text,
> submitting_applicable_manufacturer_or_applicable_gpo_name text,
> teaching_hospital_id text,
> teaching_hospital_name text,
> third_party_equals_covered_recipient_indicator text,
> third_party_payment_recipient_indicator text,
> total_amount_of_payment_usdollars double,
> PRIMARY KEY ((branch, timebucket), create_ts, eventid)
> )WITH CLUSTERING ORDER BY (create_ts ASC, eventid ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
> AND compression = {'sstable_compression': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99.0PERCENTILE';
> {code}
> Desired functionality:
> {code}
> CREATE TYPE physician(
> physician_first_name text,
> physician_last_name text,
> physician_license_state_code1 text,
> physician_license_state_code2 text,
> physician_license_state_code3 text,
> physician_license_state_code4 text,
> physician_license_state_code5 text,
> physician_middle_name text,
> physician_name_suffix text,
> physician_ownership_indicator text,
> physician_primary_type text,
> physician_profile_id text,
> physician_specialty text
> );
> CREATE TYPE recipient(
> recipient_city text,
> recipient_country text,
> recipient_postal_code text,
> recipient_primary_business_street_address_line1 text,
> recipient_primary_business_street_address_line2 text,
> recipient_province text,
> recipient_state text,
> recipient_zip_code text
> );
> CREATE TABLE payments (
> branch text,
> timebucket text,

[1/2] cassandra git commit: Ninja Fix: remove wrongly commited test

2016-02-19 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/trunk b3eeadf92 -> acb2ab072


Ninja Fix: remove wrongly commited test


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a2838908
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a2838908
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a2838908

Branch: refs/heads/trunk
Commit: a28389087abf7e8df2964bf212ab27d07a26849a
Parents: a76a8ef
Author: Benjamin Lerer 
Authored: Fri Feb 19 16:14:16 2016 +0100
Committer: Benjamin Lerer 
Committed: Fri Feb 19 16:14:16 2016 +0100

--
 .../commitlog/CommitLogSegmentManagerTest.java  | 63 
 1 file changed, 63 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a2838908/test/unit/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerTest.java 
b/test/unit/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerTest.java
deleted file mode 100644
index 59b380f..000
--- 
a/test/unit/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerTest.java
+++ /dev/null
@@ -1,63 +0,0 @@
-package org.apache.cassandra.db.commitlog;
-
-import java.nio.ByteBuffer;
-import java.util.Random;
-
-import javax.naming.ConfigurationException;
-
-import org.apache.cassandra.SchemaLoader;
-import org.apache.cassandra.config.Config.CommitLogSync;
-import org.apache.cassandra.config.DatabaseDescriptor;
-import org.apache.cassandra.config.ParameterizedClass;
-import org.apache.cassandra.db.ColumnFamilyStore;
-import org.apache.cassandra.db.Keyspace;
-import org.apache.cassandra.db.Mutation;
-import org.apache.cassandra.db.RowUpdateBuilder;
-import org.apache.cassandra.db.compaction.CompactionManager;
-import org.apache.cassandra.db.marshal.AsciiType;
-import org.apache.cassandra.db.marshal.BytesType;
-import org.apache.cassandra.schema.KeyspaceParams;
-import org.junit.BeforeClass;
-import org.junit.Test;
-
-import com.google.common.collect.ImmutableMap;
-
-public class CommitLogSegmentManagerTest
-{
-private static final String KEYSPACE1 = "CommitLogTest";
-private static final String STANDARD1 = "Standard1";
-private static final String STANDARD2 = "Standard2";
-
-private final static byte[] entropy = new byte[1024 * 256];
-@BeforeClass
-public static void defineSchema() throws ConfigurationException
-{
-new Random().nextBytes(entropy);
-DatabaseDescriptor.setCommitLogCompression(new 
ParameterizedClass("LZ4Compressor", ImmutableMap.of()));
-DatabaseDescriptor.setCommitLogSegmentSize(1);
-DatabaseDescriptor.setCommitLogSync(CommitLogSync.periodic);
-DatabaseDescriptor.setCommitLogSyncPeriod(10 * 1000);
-SchemaLoader.prepareServer();
-SchemaLoader.createKeyspace(KEYSPACE1,
-KeyspaceParams.simple(1),
-SchemaLoader.standardCFMD(KEYSPACE1, 
STANDARD1, 0, AsciiType.instance, BytesType.instance),
-SchemaLoader.standardCFMD(KEYSPACE1, 
STANDARD2, 0, AsciiType.instance, BytesType.instance));
-
-CompactionManager.instance.disableAutoCompaction();
-}
-
-@Test
-public void testCompressedCommitLogBackpressure() throws Throwable
-{
-CommitLog.instance.resetUnsafe(true);
-ColumnFamilyStore cfs1 = 
Keyspace.open(KEYSPACE1).getColumnFamilyStore(STANDARD1);
-
-Mutation m = new RowUpdateBuilder(cfs1.metadata, 0, "k")
- .clustering("bytes")
- .add("val", ByteBuffer.wrap(entropy))
- .build();
-
-for (int i = 0; i < 2; i++)
-CommitLog.instance.add(m);
-}
-}



cassandra git commit: Ninja Fix: remove wrongly commited test

2016-02-19 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 a76a8efcc -> a28389087


Ninja Fix: remove wrongly commited test


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a2838908
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a2838908
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a2838908

Branch: refs/heads/cassandra-3.0
Commit: a28389087abf7e8df2964bf212ab27d07a26849a
Parents: a76a8ef
Author: Benjamin Lerer 
Authored: Fri Feb 19 16:14:16 2016 +0100
Committer: Benjamin Lerer 
Committed: Fri Feb 19 16:14:16 2016 +0100

--
 .../commitlog/CommitLogSegmentManagerTest.java  | 63 
 1 file changed, 63 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a2838908/test/unit/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerTest.java 
b/test/unit/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerTest.java
deleted file mode 100644
index 59b380f..000
--- 
a/test/unit/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerTest.java
+++ /dev/null
@@ -1,63 +0,0 @@
-package org.apache.cassandra.db.commitlog;
-
-import java.nio.ByteBuffer;
-import java.util.Random;
-
-import javax.naming.ConfigurationException;
-
-import org.apache.cassandra.SchemaLoader;
-import org.apache.cassandra.config.Config.CommitLogSync;
-import org.apache.cassandra.config.DatabaseDescriptor;
-import org.apache.cassandra.config.ParameterizedClass;
-import org.apache.cassandra.db.ColumnFamilyStore;
-import org.apache.cassandra.db.Keyspace;
-import org.apache.cassandra.db.Mutation;
-import org.apache.cassandra.db.RowUpdateBuilder;
-import org.apache.cassandra.db.compaction.CompactionManager;
-import org.apache.cassandra.db.marshal.AsciiType;
-import org.apache.cassandra.db.marshal.BytesType;
-import org.apache.cassandra.schema.KeyspaceParams;
-import org.junit.BeforeClass;
-import org.junit.Test;
-
-import com.google.common.collect.ImmutableMap;
-
-public class CommitLogSegmentManagerTest
-{
-private static final String KEYSPACE1 = "CommitLogTest";
-private static final String STANDARD1 = "Standard1";
-private static final String STANDARD2 = "Standard2";
-
-private final static byte[] entropy = new byte[1024 * 256];
-@BeforeClass
-public static void defineSchema() throws ConfigurationException
-{
-new Random().nextBytes(entropy);
-DatabaseDescriptor.setCommitLogCompression(new 
ParameterizedClass("LZ4Compressor", ImmutableMap.of()));
-DatabaseDescriptor.setCommitLogSegmentSize(1);
-DatabaseDescriptor.setCommitLogSync(CommitLogSync.periodic);
-DatabaseDescriptor.setCommitLogSyncPeriod(10 * 1000);
-SchemaLoader.prepareServer();
-SchemaLoader.createKeyspace(KEYSPACE1,
-KeyspaceParams.simple(1),
-SchemaLoader.standardCFMD(KEYSPACE1, 
STANDARD1, 0, AsciiType.instance, BytesType.instance),
-SchemaLoader.standardCFMD(KEYSPACE1, 
STANDARD2, 0, AsciiType.instance, BytesType.instance));
-
-CompactionManager.instance.disableAutoCompaction();
-}
-
-@Test
-public void testCompressedCommitLogBackpressure() throws Throwable
-{
-CommitLog.instance.resetUnsafe(true);
-ColumnFamilyStore cfs1 = 
Keyspace.open(KEYSPACE1).getColumnFamilyStore(STANDARD1);
-
-Mutation m = new RowUpdateBuilder(cfs1.metadata, 0, "k")
- .clustering("bytes")
- .add("val", ByteBuffer.wrap(entropy))
- .build();
-
-for (int i = 0; i < 2; i++)
-CommitLog.instance.add(m);
-}
-}



[jira] [Updated] (CASSANDRA-11179) Parallel cleanup can lead to disk space exhaustion

2016-02-19 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-11179:
--
Issue Type: Improvement  (was: Bug)

> Parallel cleanup can lead to disk space exhaustion
> --
>
> Key: CASSANDRA-11179
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11179
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction, Tools
>Reporter: Tyler Hobbs
>
> In CASSANDRA-5547, we made cleanup (among other things) run in parallel 
> across multiple sstables.  There have been reports on IRC of this leading to 
> disk space exhaustion, because multiple sstables are (almost entirely) 
> rewritten at the same time.  This seems particularly problematic because 
> cleanup is frequently run after a cluster is expanded due to low disk space.
> I'm not really familiar with how we perform free disk space checks now, but 
> it sounds like we can make some improvements here.  It would be good to 
> reduce the concurrency of cleanup operations if there isn't enough free disk 
> space to support this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11158) AssertionError: null in Slice$Bound.create

2016-02-19 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154325#comment-15154325
 ] 

Branimir Lambov commented on CASSANDRA-11158:
-

{{ClusteringPrefix}} was using {{Slice.Bound}} to deserialize prefixes, which 
wasn't able to deal with boundaries. This only appeared as a problem when a 
boundary ended up as a name in the column (i.e. CQL row) index. The patch adds 
the relevant {{RangeTombstone.Bound}} method and changes {{ClusteringPrefix}} to 
use that.

|[code|https://github.com/blambov/cassandra/tree/11158]|[utest|http://cassci.datastax.com/job/blambov-11158-testall/]|[dtest|http://cassci.datastax.com/job/blambov-11158-dtest/]|
[~samukallio]: Yes, but I would wait until the u/dtests have completed.

> AssertionError: null in Slice$Bound.create
> --
>
> Key: CASSANDRA-11158
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11158
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction, Local Write-Read Paths
>Reporter: Samu Kallio
>Assignee: Branimir Lambov
>Priority: Critical
> Fix For: 3.0.x
>
>
> We've been running Cassandra 3.0.2 for around a week now. Yesterday, we had a 
> network event that briefly isolated one node from others in a 3 node cluster. 
> Since then, we've been seeing a constant stream of "Finished hinted handoff" 
> messages, as well as:
> {noformat}
> WARN  16:34:39 Uncaught exception on thread 
> Thread[SharedPool-Worker-1,5,main]: {}
> java.lang.AssertionError: null
> at org.apache.cassandra.db.Slice$Bound.create(Slice.java:365) 
> ~[apache-cassandra-3.0.2.jar:3.0.2]
> at 
> org.apache.cassandra.db.Slice$Bound$Serializer.deserializeValues(Slice.java:553)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
> at 
> org.apache.cassandra.db.ClusteringPrefix$Serializer.deserialize(ClusteringPrefix.java:274)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
> at org.apache.cassandra.db.Serializers$2.deserialize(Serializers.java:115) 
> ~[apache-cassandra-3.0.2.jar:3.0.2]
> at org.apache.cassandra.db.Serializers$2.deserialize(Serializers.java:107) 
> ~[apache-cassandra-3.0.2.jar:3.0.2]
> at 
> org.apache.cassandra.io.sstable.IndexHelper$IndexInfo$Serializer.deserialize(IndexHelper.java:149)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
> at 
> org.apache.cassandra.db.RowIndexEntry$Serializer.deserialize(RowIndexEntry.java:218)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableReader.getPosition(BigTableReader.java:216)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.getPosition(SSTableReader.java:1568)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
> at 
> org.apache.cassandra.db.columniterator.SSTableIterator.(SSTableIterator.java:36)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableReader.iterator(BigTableReader.java:62)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
> at 
> org.apache.cassandra.db.SinglePartitionReadCommand.queryMemtableAndSSTablesInTimestampOrder(SinglePartitionReadCommand.java:715)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
> at 
> org.apache.cassandra.db.SinglePartitionReadCommand.queryMemtableAndDiskInternal(SinglePartitionReadCommand.java:482)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
> at 
> org.apache.cassandra.db.SinglePartitionReadCommand.queryMemtableAndDisk(SinglePartitionReadCommand.java:459)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
> at 
> org.apache.cassandra.db.SinglePartitionReadCommand.queryStorage(SinglePartitionReadCommand.java:325)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
> at org.apache.cassandra.db.ReadCommand.executeLocally(ReadCommand.java:350) 
> ~[apache-cassandra-3.0.2.jar:3.0.2]
> at 
> org.apache.cassandra.db.ReadCommandVerbHandler.doVerb(ReadCommandVerbHandler.java:45)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
> at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67) 
> ~[apache-cassandra-3.0.2.jar:3.0.2]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_72]
> at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.0.2.jar:3.0.2]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_72]
> {noformat}
> and also
> {noformat}
> ERROR 06:10:11 Exception in thread Thread[CompactionExecutor:1,1,main]
> java.lang.AssertionError: null
> at org.apache.cassandra.db.Slice$Bound.create(Slice.java:365) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.Slice$Bound$Serializer.deserializeValues(Slice.java:553)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> 

cassandra git commit: Ninja Fix: remove wrongly commited test

2016-02-19 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/trunk fdee887b4 -> b3eeadf92


Ninja Fix: remove wrongly commited test


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b3eeadf9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b3eeadf9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b3eeadf9

Branch: refs/heads/trunk
Commit: b3eeadf924d2f886a2ae5a2c8708549afac4282c
Parents: fdee887
Author: Benjamin Lerer 
Authored: Fri Feb 19 16:11:08 2016 +0100
Committer: Benjamin Lerer 
Committed: Fri Feb 19 16:11:08 2016 +0100

--
 .../commitlog/CommitLogSegmentManagerTest.java  | 63 
 1 file changed, 63 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b3eeadf9/test/unit/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerTest.java 
b/test/unit/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerTest.java
deleted file mode 100644
index 59b380f..000
--- 
a/test/unit/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerTest.java
+++ /dev/null
@@ -1,63 +0,0 @@
-package org.apache.cassandra.db.commitlog;
-
-import java.nio.ByteBuffer;
-import java.util.Random;
-
-import javax.naming.ConfigurationException;
-
-import org.apache.cassandra.SchemaLoader;
-import org.apache.cassandra.config.Config.CommitLogSync;
-import org.apache.cassandra.config.DatabaseDescriptor;
-import org.apache.cassandra.config.ParameterizedClass;
-import org.apache.cassandra.db.ColumnFamilyStore;
-import org.apache.cassandra.db.Keyspace;
-import org.apache.cassandra.db.Mutation;
-import org.apache.cassandra.db.RowUpdateBuilder;
-import org.apache.cassandra.db.compaction.CompactionManager;
-import org.apache.cassandra.db.marshal.AsciiType;
-import org.apache.cassandra.db.marshal.BytesType;
-import org.apache.cassandra.schema.KeyspaceParams;
-import org.junit.BeforeClass;
-import org.junit.Test;
-
-import com.google.common.collect.ImmutableMap;
-
-public class CommitLogSegmentManagerTest
-{
-    private static final String KEYSPACE1 = "CommitLogTest";
-    private static final String STANDARD1 = "Standard1";
-    private static final String STANDARD2 = "Standard2";
-
-    private final static byte[] entropy = new byte[1024 * 256];
-    @BeforeClass
-    public static void defineSchema() throws ConfigurationException
-    {
-        new Random().nextBytes(entropy);
-        DatabaseDescriptor.setCommitLogCompression(new ParameterizedClass("LZ4Compressor", ImmutableMap.of()));
-        DatabaseDescriptor.setCommitLogSegmentSize(1);
-        DatabaseDescriptor.setCommitLogSync(CommitLogSync.periodic);
-        DatabaseDescriptor.setCommitLogSyncPeriod(10 * 1000);
-        SchemaLoader.prepareServer();
-        SchemaLoader.createKeyspace(KEYSPACE1,
-                                    KeyspaceParams.simple(1),
-                                    SchemaLoader.standardCFMD(KEYSPACE1, STANDARD1, 0, AsciiType.instance, BytesType.instance),
-                                    SchemaLoader.standardCFMD(KEYSPACE1, STANDARD2, 0, AsciiType.instance, BytesType.instance));
-
-        CompactionManager.instance.disableAutoCompaction();
-    }
-
-    @Test
-    public void testCompressedCommitLogBackpressure() throws Throwable
-    {
-        CommitLog.instance.resetUnsafe(true);
-        ColumnFamilyStore cfs1 = Keyspace.open(KEYSPACE1).getColumnFamilyStore(STANDARD1);
-
-        Mutation m = new RowUpdateBuilder(cfs1.metadata, 0, "k")
-                     .clustering("bytes")
-                     .add("val", ByteBuffer.wrap(entropy))
-                     .build();
-
-        for (int i = 0; i < 2; i++)
-            CommitLog.instance.add(m);
-    }
-}



[jira] [Updated] (CASSANDRA-11170) Uneven load can be created by cross DC mutation propagations, as remote coordinator is not randomly picked

2016-02-19 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-11170:
--
Issue Type: Improvement  (was: Bug)

> Uneven load can be created by cross DC mutation propagations, as remote 
> coordinator is not randomly picked
> --
>
> Key: CASSANDRA-11170
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11170
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Wei Deng
>Assignee: Wei Deng
>
> I was looking at the o.a.c.service.StorageProxy code and realized that it 
> always seems to pick the first IP in the remote DC target list as the 
> destination whenever it needs to forward a mutation to a remote DC. See these 
> lines in the code:
> https://github.com/apache/cassandra/blob/1944bf507d66b5c103c136319caeb4a9e3767a69/src/java/org/apache/cassandra/service/StorageProxy.java#L1280-L1301
> This can cause one node in the remote DC to receive more mutation messages 
> than the other nodes, and hence an uneven workload distribution.
> A trivial test (with the TRACE logging level enabled) on a 3+3 node cluster 
> demonstrated the problem; see the system.log entries below (a randomized pick 
> is sketched after the log excerpt):
> {code}
> INFO  [RMI TCP Connection(18)-54.173.227.52] 2016-02-13 09:54:55,948  StorageService.java:3353 - set log level to TRACE for classes under 'org.apache.cassandra.service.StorageProxy' (if the level doesn't look like 'TRACE' then the logger couldn't parse 'TRACE')
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:15,148  StorageProxy.java:1284 - Adding FWD message to 8996@/52.53.215.74
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:15,149  StorageProxy.java:1284 - Adding FWD message to 8997@/54.183.23.201
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:15,149  StorageProxy.java:1289 - Sending message to 8998@/54.183.209.219
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:22,939  StorageProxy.java:1284 - Adding FWD message to 9032@/52.53.215.74
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:22,940  StorageProxy.java:1284 - Adding FWD message to 9033@/54.183.23.201
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:22,941  StorageProxy.java:1289 - Sending message to 9034@/54.183.209.219
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:28,975  StorageProxy.java:1284 - Adding FWD message to 9064@/52.53.215.74
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:28,976  StorageProxy.java:1284 - Adding FWD message to 9065@/54.183.23.201
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:28,977  StorageProxy.java:1289 - Sending message to 9066@/54.183.209.219
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:33,464  StorageProxy.java:1284 - Adding FWD message to 9094@/52.53.215.74
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:33,465  StorageProxy.java:1284 - Adding FWD message to 9095@/54.183.23.201
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:33,478  StorageProxy.java:1289 - Sending message to 9096@/54.183.209.219
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:39,243  StorageProxy.java:1284 - Adding FWD message to 9121@/52.53.215.74
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:39,244  StorageProxy.java:1284 - Adding FWD message to 9122@/54.183.23.201
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:39,244  StorageProxy.java:1289 - Sending message to 9123@/54.183.209.219
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:44,248  StorageProxy.java:1284 - Adding FWD message to 9145@/52.53.215.74
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:44,249  StorageProxy.java:1284 - Adding FWD message to 9146@/54.183.23.201
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:44,249  StorageProxy.java:1289 - Sending message to 9147@/54.183.209.219
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:49,731  StorageProxy.java:1284 - Adding FWD message to 9170@/52.53.215.74
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:49,734  StorageProxy.java:1284 - Adding FWD message to 9171@/54.183.23.201
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:49,735  StorageProxy.java:1289 - Sending message to 9172@/54.183.209.219
> INFO  [RMI TCP Connection(22)-54.173.227.52] 2016-02-13 09:56:19,545  StorageService.java:3353 - set log level to INFO for classes under 'org.apache.cassandra.service.StorageProxy' (if the level doesn't look like 'INFO' then the logger couldn't parse 'INFO')
> {code}
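For illustration only, the following is a minimal sketch of what a randomized remote-coordinator pick could look like. It is not the actual StorageProxy code; the class, method, and parameter names are assumptions made for the example.

{code}
import java.net.InetAddress;
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical helper: choose the forwarding coordinator for a remote DC at random
// instead of always using the first endpoint in the target list.
public final class RemoteCoordinatorPicker
{
    private RemoteCoordinatorPicker() {}

    // remoteDcTargets is assumed to be the non-empty list of replicas of the
    // mutation that live in the remote datacenter.
    public static InetAddress pickForwardingCoordinator(List<InetAddress> remoteDcTargets)
    {
        int index = ThreadLocalRandom.current().nextInt(remoteDcTargets.size());
        return remoteDcTargets.get(index);
    }
}
{code}

With a pick like this, each remote replica has an equal chance of acting as the remote coordinator, so the forwarding load evens out over time instead of piling onto the first endpoint in the list.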



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11168) Hint Metrics are updated even if hinted_hand-offs=false

2016-02-19 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154317#comment-15154317
 ] 

Joshua McKenzie commented on CASSANDRA-11168:
-

A patch would be great. We'll get a reviewer for that when you have that 
available.

Thanks.

> Hint Metrics are updated even if hinted_hand-offs=false
> ---
>
> Key: CASSANDRA-11168
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11168
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Anubhav Kale
>Assignee: Anubhav Kale
>Priority: Minor
>
> In our PROD logs, we noticed a lot of hint metrics even though we have 
> disabled hinted handoffs.
> The reason is StorageProxy.ShouldHint has an inverted if condition. 
> We should also wrap the if (hintWindowExpired) block in if 
> (DatabaseDescriptor.hintedHandoffEnabled()).
> The fix is easy, and I can provide a patch.
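For illustration, here is a self-contained sketch of the proposed guard. It is not the actual patch: the configuration flag, downtime lookup, and metric are stubbed out so that only the control flow of the fix is shown.

{code}
import java.net.InetAddress;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicLong;

// Sketch only: stand-ins for DatabaseDescriptor.hintedHandoffEnabled(),
// DatabaseDescriptor.getMaxHintWindow(), endpoint downtime, and the hint metrics.
public final class ShouldHintSketch
{
    static volatile boolean hintedHandoffEnabled = false;
    static volatile long maxHintWindowMillis = 3 * 3600 * 1000L;
    static final ConcurrentMap<InetAddress, Long> downtimeMillis = new ConcurrentHashMap<>();
    static final AtomicLong pastWindowMetric = new AtomicLong();

    static boolean shouldHint(InetAddress target)
    {
        // Proposed guard: bail out before any hint bookkeeping when hinted
        // handoff is disabled, so no hint metrics are updated.
        if (!hintedHandoffEnabled)
            return false;

        // The hint-window check (and its metric update) is now only reachable
        // when hinted handoff is actually enabled.
        boolean hintWindowExpired = downtimeMillis.getOrDefault(target, 0L) > maxHintWindowMillis;
        if (hintWindowExpired)
        {
            pastWindowMetric.incrementAndGet();
            return false;
        }
        return true;
    }
}
{code}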



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11168) Hint Metrics are updated even if hinted_hand-offs=false

2016-02-19 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-11168:

Assignee: Anubhav Kale

> Hint Metrics are updated even if hinted_hand-offs=false
> ---
>
> Key: CASSANDRA-11168
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11168
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Anubhav Kale
>Assignee: Anubhav Kale
>Priority: Minor
>
> In our PROD logs, we noticed a lot of hint metrics even though we have 
> disabled hinted handoffs.
> The reason is StorageProxy.ShouldHint has an inverted if condition. 
> We should also wrap the if (hintWindowExpired) block in if 
> (DatabaseDescriptor.hintedHandoffEnabled()).
> The fix is easy, and I can provide a patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10397) Add local timezone support to cqlsh

2016-02-19 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154309#comment-15154309
 ] 

Paulo Motta commented on CASSANDRA-10397:
-

Dtests look good, marking as ready to commit.

> Add local timezone support to cqlsh
> ---
>
> Key: CASSANDRA-10397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10397
> Project: Cassandra
>  Issue Type: Improvement
> Environment: Ubuntu 14.04 LTS
>Reporter: Suleman Rai
>Assignee: Stefan Podkowinski
>Priority: Minor
>  Labels: cqlsh
> Fix For: 2.2.6, 3.0.4, 3.4
>
> Attachments: sub_precision_fix_trunk.diff
>
>
> CQLSH is not adding the timezone offset to the timestamp after it has been 
> inserted into a table.
> create table test(id int PRIMARY KEY, time timestamp);
> INSERT INTO test(id,time) values (1,dateof(now()));
> select * from test;
>  id | time
> ----+---------------------
>   1 | 2015-09-25 13:00:32
> It is just displaying the default UTC timestamp without adding the timezone 
> offset. It should be 2015-09-25 21:00:32 in my case, as my timezone offset is 
> +0800.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10458) cqlshrc: add option to always use ssl

2016-02-19 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154308#comment-15154308
 ] 

Paulo Motta commented on CASSANDRA-10458:
-

LGTM. Rebased over CASSANDRA-11124 to avoid merge conflicts and submitted 
cassci tests:

||2.2||3.0||trunk||
|[branch|https://github.com/apache/cassandra/compare/cassandra-2.2...pauloricardomg:2.2-10458]|[branch|https://github.com/apache/cassandra/compare/cassandra-3.0...pauloricardomg:3.0-10458]|[branch|https://github.com/apache/cassandra/compare/trunk...pauloricardomg:trunk-10458]|
|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-2.2-10458-testall/lastCompletedBuild/testReport/]|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-3.0-10458-testall/lastCompletedBuild/testReport/]|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-trunk-10458-testall/lastCompletedBuild/testReport/]|
|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-2.2-10458-dtest/lastCompletedBuild/testReport/]|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-3.0-10458-dtest/lastCompletedBuild/testReport/]|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-trunk-10458-dtest/lastCompletedBuild/testReport/]|

[~spo...@gmail.com] If you could double-check that the rebase went ok, that would be 
good, as there were some merge conflicts on the 3.0 branch. You may move the 
ticket to "ready to commit" if the tests look good. (Please ignore potential cqlsh 
subsecond-precision test failures, as those are already addressed by 
CASSANDRA-10397.)

commit info: 2.2 and 3.0 are different patches. 3.0 merges cleanly upwards.

> cqlshrc: add option to always use ssl
> -
>
> Key: CASSANDRA-10458
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10458
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Matt Wringe
>Assignee: Stefan Podkowinski
>  Labels: lhf
>
> I am currently running on a system in which my cassandra cluster is only 
> accessible over tls.
> The cqlshrc file is used to specify the host, the certificates and other 
> configuration, but one option it is missing is a way to always connect over ssl.
> I would like to be able to call 'cqlsh' instead of always having to specify 
> 'cqlsh --ssl'.
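For reference, the kind of cqlshrc entry being requested might look like the snippet below. The option name and section are assumptions until a patch is actually committed.

{noformat}
[connection]
hostname = 127.0.0.1
port = 9042
; proposed: always use SSL, so plain 'cqlsh' behaves like 'cqlsh --ssl'
ssl = true
{noformat}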



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

