[jira] [Updated] (CASSANDRA-15739) dtests fix due to cqlsh behavior change

2020-04-20 Thread Michael Semb Wever (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Semb Wever updated CASSANDRA-15739:
---
Status: Ready to Commit  (was: Review In Progress)

> dtests fix due to cqlsh behavior change
> ---
>
> Key: CASSANDRA-15739
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15739
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/cqlsh
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Normal
> Attachments: 15623-pipeline.png
>
>
> dtests are failing due to a behavior change in cqlsh that was introduced as 
> part of 15623. This patch fixes the issue.
> ||tests||
> |[cassandra|https://github.com/dineshjoshi/cassandra/tree/15623-fix-tests]|
> |[cassandra-dtests|https://github.com/dineshjoshi/cassandra-dtest-1/tree/15623-fix-tests]|
> |[utests  
> dtests|https://circleci.com/workflow-run/65ae49dd-e52f-4af0-b310-7e09f2c204ea]|
>  !15623-pipeline.png! 






[jira] [Commented] (CASSANDRA-15739) dtests fix due to cqlsh behavior change

2020-04-20 Thread Michael Semb Wever (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088320#comment-17088320
 ] 

Michael Semb Wever commented on CASSANDRA-15739:


bq. For the dtest patch, can we clean up the whitespace and import changes the 
editor likely made automatically? 

I can fix that before I push.

bq. Also, are we going to wait for the ccm PR to fix the TODO? 

Can we open another ticket for that, please?

> dtests fix due to cqlsh behavior change
> ---
>
> Key: CASSANDRA-15739
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15739
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/cqlsh
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Normal
> Attachments: 15623-pipeline.png
>
>
> dtests are failing due to a behavior change in cqlsh that was introduced as 
> part of 15623. This patch fixes the issue.
> ||tests||
> |[cassandra|https://github.com/dineshjoshi/cassandra/tree/15623-fix-tests]|
> |[cassandra-dtests|https://github.com/dineshjoshi/cassandra-dtest-1/tree/15623-fix-tests]|
> |[utests  
> dtests|https://circleci.com/workflow-run/65ae49dd-e52f-4af0-b310-7e09f2c204ea]|
>  !15623-pipeline.png! 






[jira] [Comment Edited] (CASSANDRA-15739) dtests fix due to cqlsh behavior change

2020-04-20 Thread Michael Semb Wever (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088320#comment-17088320
 ] 

Michael Semb Wever edited comment on CASSANDRA-15739 at 4/21/20, 5:56 AM:
--

bq. For the dtest patch, can we clean up the whitespace and import changes the 
editor likely made automatically? 

I can fix that before I push.

bq. Also, are we going to wait for the ccm PR to fix the TODO? 

Can we open another ticket for that, please?


was (Author: michaelsembwever):
bq. For the dtest patch, can we clean up the whitespace and import changes the 
editor likely made automatically? 

I can fix that before I push.

bq. Also, are we going to wait for the ccm PR to fix the TODO? 

Can we open another ticket for that, please?

> dtests fix due to cqlsh behavior change
> ---
>
> Key: CASSANDRA-15739
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15739
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/cqlsh
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Normal
> Attachments: 15623-pipeline.png
>
>
> dtests are failing due to a behavior change in cqlsh that was introduced as 
> part of 15623. This patch fixes the issue.
> ||tests||
> |[cassandra|https://github.com/dineshjoshi/cassandra/tree/15623-fix-tests]|
> |[cassandra-dtests|https://github.com/dineshjoshi/cassandra-dtest-1/tree/15623-fix-tests]|
> |[utests  
> dtests|https://circleci.com/workflow-run/65ae49dd-e52f-4af0-b310-7e09f2c204ea]|
>  !15623-pipeline.png! 






[jira] [Assigned] (CASSANDRA-15560) Change io.compressor.LZ4Compressor to LZ4SafeDecompressor

2020-04-20 Thread Jordan West (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jordan West reassigned CASSANDRA-15560:
---

Assignee: Berenguer Blasi  (was: Jordan West)

> Change io.compressor.LZ4Compressor to LZ4SafeDecompressor
> -
>
> Key: CASSANDRA-15560
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15560
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Compression
>Reporter: Jordan West
>Assignee: Berenguer Blasi
>Priority: Normal
> Fix For: 4.0, 4.0-rc
>
>
> CASSANDRA-15556 and related tickets showed that LZ4FastDecompressor can crash 
> the JVM and that LZ4SafeDecompressor performs better without the crash risk — it's 
> also not deprecated. While we protect ourselves by checksumming the 
> compressed data, that doesn't mean we should leave deprecated code that 
> can segfault the JVM (providing a potential DDoS vector, among other things) 
> in crucial places like io.compress. 
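For reference, a minimal sketch of the API difference with the lz4-java library these classes come from (the payload and class layout below are illustrative, not code from a Cassandra patch):

{code}
import java.nio.charset.StandardCharsets;

import net.jpountz.lz4.LZ4Compressor;
import net.jpountz.lz4.LZ4Factory;
import net.jpountz.lz4.LZ4SafeDecompressor;

public class Lz4SafeDecompressSketch
{
    public static void main(String[] args)
    {
        LZ4Factory factory = LZ4Factory.fastestInstance();
        LZ4Compressor compressor = factory.fastCompressor();

        byte[] original = "some payload".getBytes(StandardCharsets.UTF_8);
        byte[] compressed = compressor.compress(original);

        // LZ4FastDecompressor trusts a caller-supplied decompressed length and can read
        // past the end of a corrupt input (the JVM-crash risk mentioned above).
        // LZ4SafeDecompressor bounds-checks against the compressed length and throws
        // LZ4Exception on malformed input instead.
        LZ4SafeDecompressor safe = factory.safeDecompressor();
        byte[] restored = new byte[original.length];
        int decompressedLength = safe.decompress(compressed, 0, compressed.length, restored, 0);
        assert decompressedLength == original.length;
    }
}
{code}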






[jira] [Commented] (CASSANDRA-15560) Change io.compressor.LZ4Compressor to LZ4SafeDecompressor

2020-04-20 Thread Jordan West (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088300#comment-17088300
 ] 

Jordan West commented on CASSANDRA-15560:
-

That would be great! I was hoping to have time but other tickets have taken 
higher priority. Happy to be a reviewer. 

> Change io.compressor.LZ4Compressor to LZ4SafeDecompressor
> -
>
> Key: CASSANDRA-15560
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15560
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Compression
>Reporter: Jordan West
>Assignee: Jordan West
>Priority: Normal
> Fix For: 4.0, 4.0-rc
>
>
> CASSANDRA-15556 and related tickets showed that LZ4FastDecompressor can crash 
> the JVM and that LZ4SafeDecompressor performs better without the crash risk — it's 
> also not deprecated. While we protect ourselves by checksumming the 
> compressed data, that doesn't mean we should leave deprecated code that 
> can segfault the JVM (providing a potential DDoS vector, among other things) 
> in crucial places like io.compress. 






[jira] [Commented] (CASSANDRA-15718) Improve BatchMetricsTest

2020-04-20 Thread Jordan West (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088299#comment-17088299
 ] 

Jordan West commented on CASSANDRA-15718:
-

I wouldn't change the behavior of min/max just for the tests (if I understand 
right). That has been the historical behavior even before my recent changes. If 
you want to add a method like `getBucketValue` I would recommend a different 
name since `bucketValue` is already used to mean the count for a given bucket 
summed across stripes.

> Improve BatchMetricsTest 
> -
>
> Key: CASSANDRA-15718
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15718
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Test/unit
>Reporter: Stephen Mallette
>Assignee: Stephen Mallette
>Priority: Normal
>
> As noted in CASSANDRA-15582 {{BatchMetricsTest}} should test 
> {{BatchStatement.Type.COUNTER}} to cover all the {{BatchMetrics}}.  Some 
> changes were introduced to make this improvement at:
> https://github.com/apache/cassandra/compare/trunk...spmallette:CASSANDRA-15582-trunk-batchmetrics
> and the following suggestions were made in review (in addition to the 
> suggestion that a separate JIRA be created for this change) by [~dcapwell]:
> {quote}
> * I like the usage of BatchStatement.Type for the tests
> * honestly feel quick theories is better than random, but glad you added the 
> seed to all asserts =). Would still be better as a quick theories test since 
> you basically wrote a property anyways!
> * 
> https://github.com/apache/cassandra/compare/trunk...spmallette:CASSANDRA-15582-trunk-batchmetrics#diff-8948cec1f9d33f10b15c38de80141548R131
>  feel you should rename to expectedPartitionsPerLoggedBatch 
> {Count,Logged,Unlogged}
> * . pre is what the value is, post is what the value is expected to be 
> (rather than what it is).
> * 
> * 
> https://github.com/apache/cassandra/compare/trunk...spmallette:CASSANDRA-15582-trunk-batchmetrics#diff-8948cec1f9d33f10b15c38de80141548R150
>  this doesn't look correct. the batch has distinctPartitions mutations, so 
> shouldn't max reflect that? I ran the current test in a debugger and see that 
> that is the case (aka current test is wrong).
> most of the comments are nit picks, but the last one looks like a test bug to 
> me
> {quote}






[jira] [Commented] (CASSANDRA-15739) dtests fix due to cqlsh behavior change

2020-04-20 Thread Jordan West (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088294#comment-17088294
 ] 

Jordan West commented on CASSANDRA-15739:
-

For the dtest patch, can we clean up the whitespace and import changes the 
editor likely made automatically? Also, are we going to wait for the ccm PR to fix 
the TODO? Otherwise I believe it looks OK – I would like to see the results of 
the Jenkins run first, however. 

> dtests fix due to cqlsh behavior change
> ---
>
> Key: CASSANDRA-15739
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15739
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/cqlsh
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Normal
> Attachments: 15623-pipeline.png
>
>
> dtests are failing due to a behavior change in cqlsh that was introduced as 
> part of 15623. This patch fixes the issue.
> ||tests||
> |[cassandra|https://github.com/dineshjoshi/cassandra/tree/15623-fix-tests]|
> |[cassandra-dtests|https://github.com/dineshjoshi/cassandra-dtest-1/tree/15623-fix-tests]|
> |[utests  
> dtests|https://circleci.com/workflow-run/65ae49dd-e52f-4af0-b310-7e09f2c204ea]|
>  !15623-pipeline.png! 






[jira] [Commented] (CASSANDRA-15623) When running CQLSH with STDIN input, exit with error status code if script fails

2020-04-20 Thread Jacob Becker (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088278#comment-17088278
 ] 

Jacob Becker commented on CASSANDRA-15623:
--

[~djoshi], it was my pleasure.

> When running CQLSH with STDIN input, exit with error status code if script 
> fails
> 
>
> Key: CASSANDRA-15623
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15623
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Tools
>Reporter: Jacob Becker
>Assignee: Jacob Becker
>Priority: Normal
>  Labels: pull-request-available
> Fix For: 3.0.21, 3.11.7, 4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Given that CASSANDRA-6344 has been in place for years and that scripts 
> submitted with the `-e` option behave in a similar fashion, it is very 
> surprising that scripts submitted on STDIN (i.e. piped in) always exit with a 
> zero code, regardless of errors. I believe this should be fixed.
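As a rough illustration of the intended behavior, a hedged sketch that pipes a failing script into cqlsh and inspects the exit status (it assumes cqlsh is on the PATH and a node is reachable with default settings; it is not part of the patch):

{code}
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class CqlshExitCodeCheck
{
    public static void main(String[] args) throws Exception
    {
        // Equivalent of `echo "SELECT ..." | cqlsh`: the script arrives on STDIN, not via -e.
        Process cqlsh = new ProcessBuilder("cqlsh").redirectErrorStream(true).start();
        try (OutputStream stdin = cqlsh.getOutputStream())
        {
            stdin.write("SELECT * FROM no_such_ks.no_such_table;\n".getBytes(StandardCharsets.UTF_8));
        }
        int exitCode = cqlsh.waitFor();
        // Before this ticket a piped script always exited 0; with the fix, a failing
        // script should produce a non-zero exit code, matching the -e behavior.
        System.out.println("cqlsh exit code: " + exitCode);
    }
}
{code}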






[jira] [Commented] (CASSANDRA-15379) Make it possible to flush with a different compression strategy than we compact with

2020-04-20 Thread Joey Lynch (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088266#comment-17088266
 ] 

Joey Lynch commented on CASSANDRA-15379:


*Defaults Benchmark:*
 * Load pattern: 1.2k wps and 1.2k rps at LOCAL_ONE consistency with a random 
load pattern.
 * Data sizing: ~100 million partitions with 2 rows each of 10 columns, total 
size per partition of about 4 KiB of random data. ~120 GiB per node data size 
(replicated 6 ways)
 * Compaction settings: LCS with size=320MiB, fanout=20
 * Compression: Zstd with 16 KiB block size

I had to tweak some settings to make compaction less of the overall trace (it 
was 50% or more of the traces), since it was hiding the flush behavior. 
Specifically, I increased the size of the memtable before flush by raising 
the {{memtable_cleanup_threshold}} setting from 0.11 to 0.5, which allowed 
flushes to get up to 1.4 GiB, and by configuring compaction to defer as long as 
possible before doing the L0 -> L1 transition:
{noformat}
compaction = {'class': 'LeveledCompactionStrategy', 'fanout_size': '20', 
'max_threshold': '128', 'min_threshold': '32', 'sstable_size_in_mb': '320'}
compression = {'chunk_length_in_kb': '16', 'class': 
'org.apache.cassandra.io.compress.ZstdCompressor'}
{noformat}
I would prefer to raise fanout_size even more to defer compactions further, but 
with the increases in memtable size, sstable size, and fanout I was able to 
reduce the compaction load to the point where the cluster was stable (pending 
compactions not growing without bound) on both baseline and candidate.

*Zstd Defaults Benchmark Results*:

Candidate flushes were spaced about 4 minutes apart and took about 8 seconds to 
flush 1.4 GiB. Flamegraphs show 50% of on-CPU time in the flush writer and ~45% in 
compression. [^15379_candidate_flush_trace.png]

Baseline flushes were spaced about 4 minutes apart and took about 22 seconds to 
flush 1.4 GiB. Flamegraphs show 20% of on-CPU time in the flush writer and ~75% in 
compression.  [^15379_baseline_flush_trace.png]

No significant change in coordinator-level or replica-level latency or in system 
metrics. Some latencies were better on the candidate, some worse. 
[^15379_system_zstd_defaults.png] [^15379_coordinator_zstd_defaults.png] 
[^15379_replica_zstd_defaults.png]

I think the main finding here is that, already at the cheapest Zstd level, we 
are running closer to the flush interval than I'd like (if a flush takes longer 
than the time until the next flush, it's bad news bears for the cluster), and 
this is with a relatively small number of writes per second (~400 coordinator 
writes per second per node).

*Next steps:*

I've published a final squashed commit to:
||trunk||
|[657c39d4|https://github.com/jolynch/cassandra/commit/657c39d4aba0888c6db6a46d1b1febf899de9578]|
|[branch|https://github.com/apache/cassandra/compare/trunk...jolynch:CASSANDRA-15379-final]|
|[!https://circleci.com/gh/jolynch/cassandra/tree/CASSANDRA-15379-final.png?circle-token=1102a59698d04899ec971dd36e925928f7b521f5!|https://circleci.com/gh/jolynch/cassandra/tree/CASSANDRA-15379-final]|

There appear to be a lot of failures in the Java 8 runs that I'm pretty sure are 
unrelated to my change (unit tests and in-JVM dtests passed, along with the long 
unit tests). I'll look into all the failures and make sure they're unrelated 
(on a related note I'm :( that trunk is so red again).

I am now running a test with Zstd compression set to a block size of 256 KiB 
and level 10, which is how we typically run it in production for write-mostly, 
read-rarely datasets such as trace data (for the significant reduction in disk 
space). 

> Make it possible to flush with a different compression strategy than we 
> compact with
> 
>
> Key: CASSANDRA-15379
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15379
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Compaction, Local/Config, Local/Memtable
>Reporter: Joey Lynch
>Assignee: Joey Lynch
>Priority: Normal
> Fix For: 4.0-alpha
>
> Attachments: 15379_baseline_flush_trace.png, 
> 15379_candidate_flush_trace.png, 15379_coordinator_defaults.png, 
> 15379_coordinator_zstd_defaults.png, 15379_replica_defaults.png, 
> 15379_replica_zstd_defaults.png, 15379_system_defaults.png, 
> 15379_system_zstd_defaults.png
>
>
> [~josnyder] and I have been testing out CASSANDRA-14482 (Zstd compression) on 
> some of our most dense clusters and have been observing close to 50% 
> reduction in footprint with Zstd on some of our workloads! Unfortunately 
> though we have been running into an issue where the flush might take so long 
> (Zstd is slower to compress than LZ4) that we can actually block the next 
> flush and cause instability.
> Internally we are 

[jira] [Updated] (CASSANDRA-15379) Make it possible to flush with a different compression strategy than we compact with

2020-04-20 Thread Joey Lynch (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joey Lynch updated CASSANDRA-15379:
---
Attachment: 15379_replica_zstd_defaults.png
15379_coordinator_zstd_defaults.png
15379_system_zstd_defaults.png

> Make it possible to flush with a different compression strategy than we 
> compact with
> 
>
> Key: CASSANDRA-15379
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15379
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Compaction, Local/Config, Local/Memtable
>Reporter: Joey Lynch
>Assignee: Joey Lynch
>Priority: Normal
> Fix For: 4.0-alpha
>
> Attachments: 15379_baseline_flush_trace.png, 
> 15379_candidate_flush_trace.png, 15379_coordinator_defaults.png, 
> 15379_coordinator_zstd_defaults.png, 15379_replica_defaults.png, 
> 15379_replica_zstd_defaults.png, 15379_system_defaults.png, 
> 15379_system_zstd_defaults.png
>
>
> [~josnyder] and I have been testing out CASSANDRA-14482 (Zstd compression) on 
> some of our most dense clusters and have been observing close to 50% 
> reduction in footprint with Zstd on some of our workloads! Unfortunately 
> though we have been running into an issue where the flush might take so long 
> (Zstd is slower to compress than LZ4) that we can actually block the next 
> flush and cause instability.
> Internally we are working around this with a very simple patch which flushes 
> SSTables with the default compression strategy (LZ4) regardless of the table 
> params. This is a simple solution, but I think the ideal solution might 
> be for the flush compression strategy to be configurable separately from the 
> table compression strategy (while defaulting to the same thing). Instead of 
> adding yet another compression option to the yaml (like hints and commitlog) 
> I was thinking of just adding it to the table parameters and then adding a 
> {{default_table_parameters}} yaml option like:
> {noformat}
> # Default table properties to apply on freshly created tables. The currently 
> supported defaults are:
> # * compression   : How are SSTables compressed in general (flush, 
> compaction, etc ...)
> # * flush_compression : How are SSTables compressed as they flush
> # supported
> default_table_parameters:
>   compression:
> class_name: 'LZ4Compressor'
> parameters:
>   chunk_length_in_kb: 16
>   flush_compression:
> class_name: 'LZ4Compressor'
> parameters:
>   chunk_length_in_kb: 4
> {noformat}
> This would have the nice effect as well of giving our configuration a path 
> forward to providing user specified defaults for table creation (so e.g. if a 
> particular user wanted to use a different default chunk_length_in_kb they can 
> do that).
> So the proposed (~mandatory) scope is:
> * Flush with a faster compression strategy
> I'd like to implement the following at the same time:
> * Per table flush compression configuration
> * Ability to default the table flush and compaction compression in the yaml.






[jira] [Updated] (CASSANDRA-15379) Make it possible to flush with a different compression strategy than we compact with

2020-04-20 Thread Joey Lynch (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joey Lynch updated CASSANDRA-15379:
---
Attachment: 15379_baseline_flush_trace.png

> Make it possible to flush with a different compression strategy than we 
> compact with
> 
>
> Key: CASSANDRA-15379
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15379
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Compaction, Local/Config, Local/Memtable
>Reporter: Joey Lynch
>Assignee: Joey Lynch
>Priority: Normal
> Fix For: 4.0-alpha
>
> Attachments: 15379_baseline_flush_trace.png, 
> 15379_candidate_flush_trace.png, 15379_coordinator_defaults.png, 
> 15379_replica_defaults.png, 15379_system_defaults.png
>
>
> [~josnyder] and I have been testing out CASSANDRA-14482 (Zstd compression) on 
> some of our most dense clusters and have been observing close to 50% 
> reduction in footprint with Zstd on some of our workloads! Unfortunately 
> though we have been running into an issue where the flush might take so long 
> (Zstd is slower to compress than LZ4) that we can actually block the next 
> flush and cause instability.
> Internally we are working around this with a very simple patch which flushes 
> SSTables with the default compression strategy (LZ4) regardless of the table 
> params. This is a simple solution, but I think the ideal solution might 
> be for the flush compression strategy to be configurable separately from the 
> table compression strategy (while defaulting to the same thing). Instead of 
> adding yet another compression option to the yaml (like hints and commitlog) 
> I was thinking of just adding it to the table parameters and then adding a 
> {{default_table_parameters}} yaml option like:
> {noformat}
> # Default table properties to apply on freshly created tables. The currently 
> supported defaults are:
> # * compression   : How are SSTables compressed in general (flush, 
> compaction, etc ...)
> # * flush_compression : How are SSTables compressed as they flush
> # supported
> default_table_parameters:
>   compression:
> class_name: 'LZ4Compressor'
> parameters:
>   chunk_length_in_kb: 16
>   flush_compression:
> class_name: 'LZ4Compressor'
> parameters:
>   chunk_length_in_kb: 4
> {noformat}
> This would have the nice effect as well of giving our configuration a path 
> forward to providing user specified defaults for table creation (so e.g. if a 
> particular user wanted to use a different default chunk_length_in_kb they can 
> do that).
> So the proposed (~mandatory) scope is:
> * Flush with a faster compression strategy
> I'd like to implement the following at the same time:
> * Per table flush compression configuration
> * Ability to default the table flush and compaction compression in the yaml.






[jira] [Updated] (CASSANDRA-15379) Make it possible to flush with a different compression strategy than we compact with

2020-04-20 Thread Joey Lynch (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joey Lynch updated CASSANDRA-15379:
---
Attachment: 15379_candidate_flush_trace.png

> Make it possible to flush with a different compression strategy than we 
> compact with
> 
>
> Key: CASSANDRA-15379
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15379
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Compaction, Local/Config, Local/Memtable
>Reporter: Joey Lynch
>Assignee: Joey Lynch
>Priority: Normal
> Fix For: 4.0-alpha
>
> Attachments: 15379_candidate_flush_trace.png, 
> 15379_coordinator_defaults.png, 15379_replica_defaults.png, 
> 15379_system_defaults.png
>
>
> [~josnyder] and I have been testing out CASSANDRA-14482 (Zstd compression) on 
> some of our most dense clusters and have been observing close to 50% 
> reduction in footprint with Zstd on some of our workloads! Unfortunately 
> though we have been running into an issue where the flush might take so long 
> (Zstd is slower to compress than LZ4) that we can actually block the next 
> flush and cause instability.
> Internally we are working around this with a very simple patch which flushes 
> SSTables with the default compression strategy (LZ4) regardless of the table 
> params. This is a simple solution, but I think the ideal solution might 
> be for the flush compression strategy to be configurable separately from the 
> table compression strategy (while defaulting to the same thing). Instead of 
> adding yet another compression option to the yaml (like hints and commitlog) 
> I was thinking of just adding it to the table parameters and then adding a 
> {{default_table_parameters}} yaml option like:
> {noformat}
> # Default table properties to apply on freshly created tables. The currently 
> supported defaults are:
> # * compression   : How are SSTables compressed in general (flush, 
> compaction, etc ...)
> # * flush_compression : How are SSTables compressed as they flush
> # supported
> default_table_parameters:
>   compression:
> class_name: 'LZ4Compressor'
> parameters:
>   chunk_length_in_kb: 16
>   flush_compression:
> class_name: 'LZ4Compressor'
> parameters:
>   chunk_length_in_kb: 4
> {noformat}
> This would have the nice effect as well of giving our configuration a path 
> forward to providing user specified defaults for table creation (so e.g. if a 
> particular user wanted to use a different default chunk_length_in_kb they can 
> do that).
> So the proposed (~mandatory) scope is:
> * Flush with a faster compression strategy
> I'd like to implement the following at the same time:
> * Per table flush compression configuration
> * Ability to default the table flush and compaction compression in the yaml.






[jira] [Commented] (CASSANDRA-15701) Is Cassandra 3.11.3/3.11.5 affected by CVE-2019-10172 or not?

2020-04-20 Thread wht (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088223#comment-17088223
 ] 

wht commented on CASSANDRA-15701:
-

Well, I really don't know if it affects Cassandra, so I came here for help. 

> Is Cassandra 3.11.3/3.11.5 affected by CVE-2019-10172 or not?
> -
>
> Key: CASSANDRA-15701
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15701
> Project: Cassandra
>  Issue Type: Bug
>  Components: Dependencies
>Reporter: wht
>Priority: Normal
>
> Because Cassandra 3.11.3/3.11.5 relies on jackson-mapper-asl-1.9.13.jar, which 
> has a reported vulnerability, CVE-2019-10172 
> ([https://nvd.nist.gov/vuln/detail/CVE-2019-10172]), I want to know whether it 
> has an impact on Cassandra. Thanks!






[jira] [Commented] (CASSANDRA-15718) Improve BatchMetricsTest

2020-04-20 Thread David Capwell (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088142#comment-17088142
 ] 

David Capwell commented on CASSANDRA-15718:
---

[~spmallette] Sorry for the late reply; my comments are mostly small, so 
overall LGTM.  The main open one is whether we could do better with the min/max assert, 
but if that's too much trouble we can leave it as just max.

> Improve BatchMetricsTest 
> -
>
> Key: CASSANDRA-15718
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15718
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Test/unit
>Reporter: Stephen Mallette
>Assignee: Stephen Mallette
>Priority: Normal
>
> As noted in CASSANDRA-15582 {{BatchMetricsTest}} should test 
> {{BatchStatement.Type.COUNTER}} to cover all the {{BatchMetrics}}.  Some 
> changes were introduced to make this improvement at:
> https://github.com/apache/cassandra/compare/trunk...spmallette:CASSANDRA-15582-trunk-batchmetrics
> and the following suggestions were made in review (in addition to the 
> suggestion that a separate JIRA be created for this change) by [~dcapwell]:
> {quote}
> * I like the usage of BatchStatement.Type for the tests
> * honestly feel quick theories is better than random, but glad you added the 
> seed to all asserts =). Would still be better as a quick theories test since 
> you basically wrote a property anyways!
> * 
> https://github.com/apache/cassandra/compare/trunk...spmallette:CASSANDRA-15582-trunk-batchmetrics#diff-8948cec1f9d33f10b15c38de80141548R131
>  feel you should rename to expectedPartitionsPerLoggedBatch 
> {Count,Logged,Unlogged}
> * . pre is what the value is, post is what the value is expected to be 
> (rather than what it is).
> * 
> * 
> https://github.com/apache/cassandra/compare/trunk...spmallette:CASSANDRA-15582-trunk-batchmetrics#diff-8948cec1f9d33f10b15c38de80141548R150
>  this doesn't look correct. the batch has distinctPartitions mutations, so 
> shouldn't max reflect that? I ran the current test in a debugger and see that 
> that is the case (aka current test is wrong).
> most of the comments are nit picks, but the last one looks like a test bug to 
> me
> {quote}






[jira] [Commented] (CASSANDRA-15718) Improve BatchMetricsTest

2020-04-20 Thread David Capwell (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088140#comment-17088140
 ] 

David Capwell commented on CASSANDRA-15718:
---

* 
https://github.com/apache/cassandra/compare/trunk...spmallette:CASSANDRA-15582-trunk-batchmetrics#diff-8948cec1f9d33f10b15c38de80141548R170.
 Should call snapshot once, else you do more memory copies than needed.
* 
https://github.com/apache/cassandra/compare/trunk...spmallette:CASSANDRA-15582-trunk-batchmetrics#diff-8948cec1f9d33f10b15c38de80141548R133
 Can we remove ".withExamples(10)" and rely on the default example count (see the 
QuickTheories sketch after this list)?  We currently disable shrinking, so that 
won't cause the GC to freak out.  If you also fix the above statement you drop 
the amount of memory used considerably (clone 9 times per iteration, could be 1 time).
* 
https://github.com/apache/cassandra/compare/trunk...spmallette:CASSANDRA-15582-trunk-batchmetrics#diff-8948cec1f9d33f10b15c38de80141548R178
 this doesn't look stable: it assumes the very first test case has 1 partition; 
if the first test case has 2 or more partitions then it should fail (since the 
value is now 2 [1]).
* Would be good to call 
"org.apache.cassandra.metrics.DecayingEstimatedHistogramReservoir#clear" so 
each test doesn't see the results of the previous tests.
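
Since a QuickTheories-style property was suggested in the review quoted above, here is a minimal sketch of that shape (the two private helpers are hypothetical stand-ins for what the real test already does; qt(), integers() and checkAssert are real QuickTheories API):

{code}
import static org.quicktheories.QuickTheory.qt;
import static org.quicktheories.generators.SourceDSL.integers;

import org.junit.Test;

public class BatchMetricsPropertySketch
{
    @Test
    public void partitionsPerBatchProperty()
    {
        // Relies on the default example count, i.e. no .withExamples(10).
        qt().forAll(integers().between(1, 10))
            .checkAssert(distinctPartitions -> {
                executeBatch(distinctPartitions);
                assertBatchMetrics(distinctPartitions);
            });
    }

    // Hypothetical stand-ins for whatever the real test already does.
    private void executeBatch(int distinctPartitions) { /* run a batch touching N partitions */ }
    private void assertBatchMetrics(int distinctPartitions) { /* assert the BatchMetrics histograms */ }
}
{code}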

For min/max the only thing I can think of is to expose a test method to get the 
bucket for a specific value, that would let us refine min/max.  Something like 
the below would work

{code}
class EstimatedHistogramReservoirSnapshot
...
    public long getBucketValue(long value)
    {
        int index = findIndex(bucketOffsets, value);
        return bucketOffsets[index];
    }
{code}
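
If that method were added, a hedged sketch of how the test could then pin max to a bucket boundary ({{snapshot}} and {{distinctPartitions}} stand for what BatchMetricsTest already has in scope):

{code}
// Compare bucket boundary to bucket boundary instead of a raw value to a boundary.
long expectedMaxBucket = snapshot.getBucketValue(distinctPartitions);
assertEquals(expectedMaxBucket, snapshot.getMax());
{code}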

[~jrwest], since you have worked with this recently: thoughts?  Should we leave 
the max as is?

[1] - here are the first 30 buckets: 

{code}
0 = 1
1 = 2
2 = 3
3 = 4
4 = 5
5 = 6
6 = 7
7 = 8
8 = 10
9 = 12
10 = 14
11 = 17
12 = 20
13 = 24
14 = 29
15 = 35
16 = 42
17 = 50
18 = 60
19 = 72
20 = 86
21 = 103
22 = 124
23 = 149
24 = 179
25 = 215
26 = 258
27 = 310
28 = 372
29 = 446
{code}

> Improve BatchMetricsTest 
> -
>
> Key: CASSANDRA-15718
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15718
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Test/unit
>Reporter: Stephen Mallette
>Assignee: Stephen Mallette
>Priority: Normal
>
> As noted in CASSANDRA-15582 {{BatchMetricsTest}} should test 
> {{BatchStatement.Type.COUNTER}} to cover all the {{BatchMetrics}}.  Some 
> changes were introduced to make this improvement at:
> https://github.com/apache/cassandra/compare/trunk...spmallette:CASSANDRA-15582-trunk-batchmetrics
> and the following suggestions were made in review (in addition to the 
> suggestion that a separate JIRA be created for this change) by [~dcapwell]:
> {quote}
> * I like the usage of BatchStatement.Type for the tests
> * honestly feel quick theories is better than random, but glad you added the 
> seed to all asserts =). Would still be better as a quick theories test since 
> you basically wrote a property anyways!
> * 
> https://github.com/apache/cassandra/compare/trunk...spmallette:CASSANDRA-15582-trunk-batchmetrics#diff-8948cec1f9d33f10b15c38de80141548R131
>  feel you should rename to expectedPartitionsPerLoggedBatch 
> {Count,Logged,Unlogged}
> * . pre is what the value is, post is what the value is expected to be 
> (rather than what it is).
> * 
> * 
> https://github.com/apache/cassandra/compare/trunk...spmallette:CASSANDRA-15582-trunk-batchmetrics#diff-8948cec1f9d33f10b15c38de80141548R150
>  this doesn't look correct. the batch has distinctPartitions mutations, so 
> shouldn't max reflect that? I ran the current test in a debugger and see that 
> that is the case (aka current test is wrong).
> most of the comments are nit picks, but the last one looks like a test bug to 
> me
> {quote}






[jira] [Commented] (CASSANDRA-15718) Improve BatchMetricsTest

2020-04-20 Thread David Capwell (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088124#comment-17088124
 ] 

David Capwell commented on CASSANDRA-15718:
---

bq. even though we know the distinctPartitions I'm not sure that we know 
exactly what the returned value might be

{code}
public void update(long value)
{
    long now = clock.getTime();
    rescaleIfNeeded(now);

    int index = findIndex(bucketOffsets, value);

    updateBucket(decayingBuckets, index, Math.round(forwardDecayWeight(now)));
    updateBucket(buckets, index, 1);
}

public long getMax()
{
    final int lastBucket = decayingBuckets.length - 1;

    if (decayingBuckets[lastBucket] > 0)
        return Long.MAX_VALUE;

    for (int i = lastBucket - 1; i >= 0; i--)
    {
        if (decayingBuckets[i] > 0)
            return bucketOffsets[i];
    }
    return 0;
}
{code}

Yep, we lose the original value, and min/max actually reflect the bucket, so we 
would need to know the bucketing to be able to correctly assert the 
value.  The only clean way I can see is if the snapshot had a method to convert 
a value to its bucket; that would at least let us make sure we saw data for the 
correct bucket.
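
A small worked example of that point, using the bucket offsets listed in the other comment ({{histogram}} stands for the reservoir under test; illustrative only):

{code}
// The offsets run 1, 2, ..., 8, 10, 12, ... so there is no bucket boundary at 9.
histogram.update(9);                          // lands in the bucket whose offset is 10
long max = histogram.getSnapshot().getMax();  // reports 10, not the recorded 9
// So a test cannot assert max == the recorded value unless that value sits exactly on a boundary.
{code}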

> Improve BatchMetricsTest 
> -
>
> Key: CASSANDRA-15718
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15718
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Test/unit
>Reporter: Stephen Mallette
>Assignee: Stephen Mallette
>Priority: Normal
>
> As noted in CASSANDRA-15582 {{BatchMetricsTest}} should test 
> {{BatchStatement.Type.COUNTER}} to cover all the {{BatchMetrics}}.  Some 
> changes were introduced to make this improvement at:
> https://github.com/apache/cassandra/compare/trunk...spmallette:CASSANDRA-15582-trunk-batchmetrics
> and the following suggestions were made in review (in addition to the 
> suggestion that a separate JIRA be created for this change) by [~dcapwell]:
> {quote}
> * I like the usage of BatchStatement.Type for the tests
> * honestly feel quick theories is better than random, but glad you added the 
> seed to all asserts =). Would still be better as a quick theories test since 
> you basically wrote a property anyways!
> * 
> https://github.com/apache/cassandra/compare/trunk...spmallette:CASSANDRA-15582-trunk-batchmetrics#diff-8948cec1f9d33f10b15c38de80141548R131
>  feel you should rename to expectedPartitionsPerLoggedBatch 
> {Count,Logged,Unlogged}
> * . pre is what the value is, post is what the value is expected to be 
> (rather than what it is).
> * 
> * 
> https://github.com/apache/cassandra/compare/trunk...spmallette:CASSANDRA-15582-trunk-batchmetrics#diff-8948cec1f9d33f10b15c38de80141548R150
>  this doesn't look correct. the batch has distinctPartitions mutations, so 
> shouldn't max reflect that? I ran the current test in a debugger and see that 
> that is the case (aka current test is wrong).
> most of the comments are nit picks, but the last one looks like a test bug to 
> me
> {quote}






[jira] [Commented] (CASSANDRA-15718) Improve BatchMetricsTest

2020-04-20 Thread David Capwell (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088114#comment-17088114
 ] 

David Capwell commented on CASSANDRA-15718:
---

Sorry I have not looked sooner; checking it out now.

> Improve BatchMetricsTest 
> -
>
> Key: CASSANDRA-15718
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15718
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Test/unit
>Reporter: Stephen Mallette
>Assignee: Stephen Mallette
>Priority: Normal
>
> As noted in CASSANDRA-15582 {{BatchMetricsTest}} should test 
> {{BatchStatement.Type.COUNTER}} to cover all the {{BatchMetrics}}.  Some 
> changes were introduced to make this improvement at:
> https://github.com/apache/cassandra/compare/trunk...spmallette:CASSANDRA-15582-trunk-batchmetrics
> and the following suggestions were made in review (in addition to the 
> suggestion that a separate JIRA be created for this change) by [~dcapwell]:
> {quote}
> * I like the usage of BatchStatement.Type for the tests
> * honestly feel quick theories is better than random, but glad you added the 
> seed to all asserts =). Would still be better as a quick theories test since 
> you basically wrote a property anyways!
> * 
> https://github.com/apache/cassandra/compare/trunk...spmallette:CASSANDRA-15582-trunk-batchmetrics#diff-8948cec1f9d33f10b15c38de80141548R131
>  feel you should rename to expectedPartitionsPerLoggedBatch 
> {Count,Logged,Unlogged}
> * . pre is what the value is, post is what the value is expected to be 
> (rather than what it is).
> * 
> * 
> https://github.com/apache/cassandra/compare/trunk...spmallette:CASSANDRA-15582-trunk-batchmetrics#diff-8948cec1f9d33f10b15c38de80141548R150
>  this doesn't look correct. the batch has distinctPartitions mutations, so 
> shouldn't max reflect that? I ran the current test in a debugger and see that 
> that is the case (aka current test is wrong).
> most of the comments are nit picks, but the last one looks like a test bug to 
> me
> {quote}






[jira] [Commented] (CASSANDRA-15729) Jenkins Test Results Report in plaintext for ASF ML

2020-04-20 Thread Michael Semb Wever (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088096#comment-17088096
 ] 

Michael Semb Wever commented on CASSANDRA-15729:


dtest-large fix committed as 
[c5df94bf04ba41d8a077af8f4703a1a98fb7cfc9|https://github.com/apache/cassandra-dtest/commit/c5df94bf04ba41d8a077af8f4703a1a98fb7cfc9]

> Jenkins Test Results Report in plaintext for ASF ML
> ---
>
> Key: CASSANDRA-15729
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15729
> Project: Cassandra
>  Issue Type: Task
>  Components: Build, CI
>Reporter: Michael Semb Wever
>Assignee: Michael Semb Wever
>Priority: Normal
>  Labels: Jenkins
> Fix For: 4.0-beta
>
>
> The Jenkins pipeline builds now aggregate all test reports.
> For example: 
> - https://ci-cassandra.apache.org/job/Cassandra-trunk/68/testReport/
> - 
> https://ci-cassandra.apache.org/blue/organizations/jenkins/Cassandra-trunk/detail/Cassandra-trunk/68/tests
> But Jenkins can only keep a limited amount of build history, so those links 
> are not permanent, can't be used as references, and don't help for bisecting 
> and blame on regressions (and flakey tests) over a longer period of time.
> The builds@ ML can provide a permanent record of test results. 
> This was first brought up in these two threads: 
> - 
> https://lists.apache.org/thread.html/re8122e4fdd8629e7fbca2abf27d72054b3bc0e3690ece8b8e66f618b%40%3Cdev.cassandra.apache.org%3E
> - 
> https://lists.apache.org/thread.html/ra5f6aeea89546825fe7ccc4a80898c62f8ed57decabf709d81d9c720%40%3Cdev.cassandra.apache.org%3E
> An example plaintext report, to demonstrate feasibility, is available here: 
> https://lists.apache.org/thread.html/r80d13f7af706bf8dfbf2387fab46004c1fbd3917b7bc339c49e69aa8%40%3Cbuilds.cassandra.apache.org%3E
> Hurdles:
>  - the ASF mailing lists won't accept HTML, attachments, or any message body 
> over 1MB.
>  - packages are used as a differentiator in the final aggregated report. The 
> cqlsh and dtests currently don't specify it. It needs to be added as a 
> "dot-separated" prefix to the testsuite and testcase name.






[cassandra-dtest] branch master updated: Add `--only-resource-intensive-tests` command line option to only run the resource intensive annotated tests.

2020-04-20 Thread mck
This is an automated email from the ASF dual-hosted git repository.

mck pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/cassandra-dtest.git


The following commit(s) were added to refs/heads/master by this push:
 new c5df94b  Add `--only-resource-intensive-tests` command line option to 
only run the resource intensive annotated tests.
c5df94b is described below

commit c5df94bf04ba41d8a077af8f4703a1a98fb7cfc9
Author: Mick Semb Wever 
AuthorDate: Sun Apr 19 18:04:42 2020 +0200

Add `--only-resource-intensive-tests` command line option to only run the 
resource intensive annotated tests.

Previously on the nightly builds the `dtest-large` job was used as a 
replacement for the `dtest` job. In the pipelines today both dtest and 
dtest-large are executed, so dtest-large re-executing the non-intensive tests 
is a waste.

 patch by Mick Semb Wever; reviewed by Eduard Tudenhöfner for 
CASSANDRA-15729
---
 conftest.py | 9 +
 1 file changed, 9 insertions(+)

diff --git a/conftest.py b/conftest.py
index 7cc3acc..34de30b 100644
--- a/conftest.py
+++ b/conftest.py
@@ -55,6 +55,8 @@ def pytest_addoption(parser):
                      help="Control the number of data directories to create per instance")
     parser.addoption("--force-resource-intensive-tests", action="store_true", default=False,
                      help="Forces the execution of tests marked as resource_intensive")
+    parser.addoption("--only-resource-intensive-tests", action="store_true", default=False,
+                     help="Only run tests marked as resource_intensive")
     parser.addoption("--skip-resource-intensive-tests", action="store_true", default=False,
                      help="Skip all tests marked as resource_intensive")
     parser.addoption("--cassandra-dir", action="store", default=None,
@@ -476,6 +478,13 @@ def pytest_collection_modifyitems(items, config):
                     deselect_test = True
                     logger.info("SKIP: Deselecting resource_intensive test %s due to insufficient system resources" % item.name)
 
+        if not item.get_closest_marker("resource_intensive") and not collect_only:
+            only_resource_intensive = config.getoption("--only-resource-intensive-tests")
+            if only_resource_intensive:
+                deselect_test = True
+                logger.info("SKIP: Deselecting non resource_intensive test %s as --only-resource-intensive-tests specified" % item.name)
+
+
         if item.get_closest_marker("no_vnodes"):
             if config.getoption("--use-vnodes"):
                 deselect_test = True





[jira] [Updated] (CASSANDRA-15730) Batch statement preparation fails if multiple tables and parameters are used

2020-04-20 Thread Michael Semb Wever (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Semb Wever updated CASSANDRA-15730:
---
Fix Version/s: 4.0

> Batch statement preparation fails if multiple tables and parameters are used
> 
>
> Key: CASSANDRA-15730
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15730
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Bryn Cooke
>Assignee: Bryn Cooke
>Priority: Normal
>  Labels: pull-request-available
> Fix For: 4.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Batch statement preparation fails with an assertion error if multiple tables 
> and parameters are used.
> {{BEGIN BATCH }}
> {{ UPDATE tbl1 SET v1 = 1 WHERE k1 = ?}}
> {{ UPDATE tbl2 SET v2 = 2 WHERE k2 = ?}}
> {{APPLY BATCH}}
> The logic for affectsMultipleTables 
> [here|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java#L144]
>  looks inverted
>  This later causes an assertion failure 
> [here|https://github.com/apache/cassandra/blob/24c8a21c1c131abd89c6b646343ff098d1b3263b/src/java/org/apache/cassandra/cql3/VariableSpecifications.java#L75]
>  
>  
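For reference, a hedged reproduction sketch with the DataStax Java driver 4.x ({{session}}, {{tbl1}}/{{tbl2}} and their columns are assumed to exist; only the prepare call matters):

{code}
// Preparing a multi-table batch that carries bind markers is what trips the
// inverted affectsMultipleTables check and the later AssertionError described above.
PreparedStatement prepared = session.prepare(
    "BEGIN BATCH " +
    "UPDATE tbl1 SET v1 = 1 WHERE k1 = ? " +
    "UPDATE tbl2 SET v2 = 2 WHERE k2 = ? " +
    "APPLY BATCH");
{code}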






[jira] [Comment Edited] (CASSANDRA-15739) dtests fix due to cqlsh behavior change

2020-04-20 Thread Michael Semb Wever (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087629#comment-17087629
 ] 

Michael Semb Wever edited comment on CASSANDRA-15739 at 4/20/20, 9:22 PM:
--

Small comment on the dtest patch regarding naming.

Jenkins CI run 
[here|https://ci-cassandra.apache.org/blue/organizations/jenkins/Cassandra-devbranch/detail/Cassandra-devbranch/67/pipeline].

EDIT: I botched the dtest build in the pipeline (because I had hacked it to test 
[PR#66|https://github.com/apache/cassandra-dtest/pull/66]). A new build of it 
(unhacked) is 
[here|https://ci-cassandra.apache.org/job/Cassandra-devbranch-dtest/75/].

(Had to run with the branch in thelastpickle fork, as the build scripts don't 
work when the forked repository has a different name.)




was (Author: michaelsembwever):
Small comment on the dtest patch regarding naming.

Jenkins CI run 
[here|https://ci-cassandra.apache.org/blue/organizations/jenkins/Cassandra-devbranch/detail/Cassandra-devbranch/67/pipeline].

(Had to run with the branch in thelastpickle fork, as the build scripts don't 
work when the forked repository has a different name.)

> dtests fix due to cqlsh behavior change
> ---
>
> Key: CASSANDRA-15739
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15739
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/cqlsh
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Normal
> Attachments: 15623-pipeline.png
>
>
> dtests are failing due to a behavior change in cqlsh that was introduced as 
> part of 15623. This patch fixes the issue.
> ||tests||
> |[cassandra|https://github.com/dineshjoshi/cassandra/tree/15623-fix-tests]|
> |[cassandra-dtests|https://github.com/dineshjoshi/cassandra-dtest-1/tree/15623-fix-tests]|
> |[utests  
> dtests|https://circleci.com/workflow-run/65ae49dd-e52f-4af0-b310-7e09f2c204ea]|
>  !15623-pipeline.png! 






[jira] [Commented] (CASSANDRA-15449) Credentials out of sync after replacing the nodes

2020-04-20 Thread Jai Bheemsen Rao Dhanwada (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088090#comment-17088090
 ] 

Jai Bheemsen Rao Dhanwada commented on CASSANDRA-15449:
---

Any pointers here?
Today, I saw an issue on a 3 node cluster where, I just started adding new 
nodes (bootstrap) and see the issue.

In this case RF:3 
Consistency for Read Queries: Local_QUORUM.

As I mentioned initially I don't see any exceptions or errors in the Cassandra 
logs.

> Credentials out of sync after replacing the nodes
> -
>
> Key: CASSANDRA-15449
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15449
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jai Bheemsen Rao Dhanwada
>Priority: Normal
> Attachments: Screen Shot 2019-12-12 at 11.13.52 AM.png
>
>
> Hello,
> We are seeing a strange issue where, after replacing multiple C* nodes in 
> the cluster, we intermittently see that a few nodes don't have any 
> credentials and the client queries fail.
> Here is the sequence of steps:
> 1. on a Multi DC C* cluster(12 nodes in each DC), we replaced all the nodes 
> in one DC. 
> 2. The approach we took to replace the nodes is kill one node and launch a 
> new node with {{-Dcassandra.replace_address=}} and proceed with next node 
> once the node is bootstrapped and CQL is enabled.
>  3. This process works fine, but all of a sudden our 
> application started failing with the below errors in the logs:
> {quote}com.datastax.driver.core.exceptions.UnauthorizedException: User abc 
> has no SELECT permission on  or any of its parents at 
> com.datastax.driver.core.exceptions.UnauthorizedException.copy(UnauthorizedException.java:59)
>  at 
> com.datastax.driver.core.exceptions.UnauthorizedException.copy(UnauthorizedException.java:25)
>  at
> {quote}
> 4. At this stage we see that 3 nodes in the cluster take zero traffic, while 
> the rest of the nodes are serving ~100 requests (metrics attached).
>  5. We suspected a credentials sync issue, so we manually synced the 
> credentials and restarted the nodes with 0 requests, which fixed the problem.
> Also, on a few C* nodes we see the below exception immediately after the bootstrap 
> is completed and the process dies. Is this contributing to the credentials 
> issue?
> NOTE:  The C* nodes with zero traffic and the nodes with the below exception 
> are not the same.
> {quote}ERROR [main] 2019-12-12 05:34:40,412 CassandraDaemon.java:583 - 
> Exception encountered during startup
>  java.lang.AssertionError: 
> org.apache.cassandra.exceptions.InvalidRequestException: Undefined name 
> salted_hash in selection clause
>  at 
> org.apache.cassandra.auth.PasswordAuthenticator.setup(PasswordAuthenticator.java:202)
>  ~[apache-cassandra-2.1.16.jar:2.1.16]
>  at org.apache.cassandra.auth.Auth.setup(Auth.java:144) 
> ~[apache-cassandra-2.1.16.jar:2.1.16]
>  at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:996)
>  ~[apache-cassandra-2.1.16.jar:2.1.16]
>  at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:740)
>  ~[apache-cassandra-2.1.16.jar:2.1.16]
>  at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:617)
>  ~[apache-cassandra-2.1.16.jar:2.1.16]
>  at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:391) 
> [apache-cassandra-2.1.16.jar:2.1.16]
>  at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:566)
>  [apache-cassandra-2.1.16.jar:2.1.16]
>  at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:655) 
> [apache-cassandra-2.1.16.jar:2.1.16]
>  Caused by: org.apache.cassandra.exceptions.InvalidRequestException: 
> Undefined name salted_hash in selection clause
>  at 
> org.apache.cassandra.cql3.statements.Selection.fromSelectors(Selection.java:292)
>  ~[apache-cassandra-2.1.16.jar:2.1.16]
>  at 
> org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepare(SelectStatement.java:1592)
>  ~[apache-cassandra-2.1.16.jar:2.1.16]
>  at 
> org.apache.cassandra.auth.PasswordAuthenticator.setup(PasswordAuthenticator.java:198)
>  ~[apache-cassandra-2.1.16.jar:2.1.16]
>  ... 7 common frames omitted
> {quote}
> Not sure why this is happening. Is this a potential bug, or are there any other pointers 
> to fix the problem?
> C* Version: 2.1.16
>  Client: Datastax Java Driver.
>  system_auth RF: 3, dc-1:3 and dc-2:3






[jira] [Updated] (CASSANDRA-15713) InstanceClassLoader fails to load with the following previously initiated loading for a different type with name "org/w3c/dom/Document"

2020-04-20 Thread David Capwell (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Capwell updated CASSANDRA-15713:
--
  Fix Version/s: 4.0
Source Control Link: 
https://github.com/apache/cassandra-in-jvm-dtest-api/commit/d59833f2223a85a4dc3f4ea597384588d5d008df
 Resolution: Fixed
 Status: Resolved  (was: Ready to Commit)

> InstanceClassLoader fails to load with the following previously initiated 
> loading for a different type with name "org/w3c/dom/Document"
> ---
>
> Key: CASSANDRA-15713
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15713
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Test/dtest
>Reporter: David Capwell
>Assignee: David Capwell
>Priority: Normal
>  Labels: pull-request-available
> Fix For: 4.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> java.lang.LinkageError: loader constraint violation: loader (instance of 
> org/apache/cassandra/distributed/shared/InstanceClassLoader) previously 
> initiated loading for a different type with name "org/w3c/dom/Document"
> This is caused when using dtest outside of the normal Cassandra context.  
> There is no API to add more exclusions, so it is not possible to work around this.






[jira] [Commented] (CASSANDRA-15645) Can't send schema pull request: node /A.B.C.D is down

2020-04-20 Thread Jai Bheemsen Rao Dhanwada (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088063#comment-17088063
 ] 

Jai Bheemsen Rao Dhanwada commented on CASSANDRA-15645:
---

I had a similar issue, and it looks like this was introduced in 3.11:
https://fossies.org/diffs/apache-cassandra/3.10-src_vs_3.11.0-src/src/java/org/apache/cassandra/service/MigrationTask.java-diff.html

Does this cause any issues with the schema or data?

I am using C* version 3.11.3.

> Can't send schema pull request: node /A.B.C.D is down
> -
>
> Key: CASSANDRA-15645
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15645
> Project: Cassandra
>  Issue Type: Bug
>  Components: Cluster/Schema
>Reporter: Pierre Belanger apache.org
>Priority: Normal
>
> On a new cluster with Cassandra 3.11.5, each time a node joins the cluster 
> the schema pull request happens before at least 1 node is confirmed up.  On 
> the first node it's fine but node #2 and following are all complaining with 
> below WARN.
>  
> {noformat}
> INFO [MigrationStage:1] 2020-03-16 16:49:32,355 ColumnFamilyStore.java:426 - 
> Initializing system_auth.roles
> WARN [MigrationStage:1] 2020-03-16 16:49:32,368 MigrationTask.java:67 - Can't 
> send schema pull request: node /A.B.C.D is down.
> WARN [MigrationStage:1] 2020-03-16 16:49:32,369 MigrationTask.java:67 - Can't 
> send schema pull request: node /A.B.C.D is down.
> INFO [main] 2020-03-16 16:49:32,371 Gossiper.java:1780 - Waiting for gossip 
> to settle...
> INFO [GossipStage:1] 2020-03-16 16:49:32,493 Gossiper.java:1089 - InetAddress 
> /A.B.C.D is now UP
> INFO [HANDSHAKE-/10.205.45.19] 2020-03-16 16:49:32,545 
> OutboundTcpConnection.java:561 - Handshaking version with /A.B.C.D
> {noformat}
>  
> It's not urgent to fix, but the WARN creates noise for no reason. Before 
> trying to pull the schema, shouldn't the process wait for gossip to have at 
> least 1 node "up"?
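
A rough sketch of the kind of guard being suggested is below. It is not an actual patch, just an 
illustration written against the 3.11-era internal FailureDetector API; the helper class, the 
one-second retry interval, and the executor are assumptions, and a real implementation would cap 
the number of retries.

{code}
import java.net.InetAddress;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.apache.cassandra.gms.FailureDetector;

// Illustration only: rather than warning as soon as the endpoint is still marked
// down during startup, defer the schema pull and retry shortly, which is usually
// enough time for gossip to settle.
public final class DeferredSchemaPull
{
    public static void submit(InetAddress endpoint, Runnable pullTask, ScheduledExecutorService executor)
    {
        if (FailureDetector.instance.isAlive(endpoint))
            pullTask.run();
        else
            executor.schedule(() -> submit(endpoint, pullTask, executor), 1, TimeUnit.SECONDS);
    }
}
{code}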



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRASC-16) Incorporate sidecar java agent, allowing project to work with existing Cassandra releases

2020-04-20 Thread T Jake Luciani (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRASC-16?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRASC-16:
--
Description: 
The sidecar project should support many C* versions, from 3.0 to 4.0.

In order to provide a consistent set of APIs and de-couple the development 
cadence of the sidecar project from the Cassandra project, it would be most 
advantageous to use a Java agent to enable new operational hooks for the 
sidecar's use.

In the Management API we use an agent to support the following:
 * Add a local CQL syntax for executing operations - example: CALL 
compact('keyspace', 'table')
 * Adds a local unix socket for the sidecar to communicate with c* via the java 
driver
 * Enables functionality required for sidecar
 ** default system_auth to NTS
 ** avoid setting up default cassandra superuser 

 

The agent is built on the ByteBuddy Agent api.  

  was:
The sidecar project should support many C* versions, from 2.1 to 4.0.

In order to provide a consistent set of APIs and de-couple the development 
cadence of the sidecar project from the Cassandra project, it would be most 
advantageous to use a java agent to enable new operational hooks for the 
sidecars use.

In the Management API we use an agent to support the following:
 * Add a local CQL syntax for executing operations - example: CALL 
compact('keyspace', 'table')
 * Adds a local unix socket for the sidecar to communicate with c* via the java 
driver
 * Enables functionality required for sidecar
 ** default system_auth to NTS
 ** avoid setting up default cassandra superuser 

 

The agent is built on the ByteBuddy Agent api.  


> Incorporate sidecar java agent, allowing project to work with existing 
> Cassandra releases
> -
>
> Key: CASSANDRASC-16
> URL: https://issues.apache.org/jira/browse/CASSANDRASC-16
> Project: Sidecar for Apache Cassandra
>  Issue Type: Improvement
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
>Priority: Normal
>
> The sidecar project should support many C* versions, from 3.0 to 4.0.
> In order to provide a consistent set of APIs and de-couple the development 
> cadence of the sidecar project from the Cassandra project, it would be most 
> advantageous to use a java agent to enable new operational hooks for the 
> sidecars use.
> In the Management API we use an agent to support the following:
>  * Add a local CQL syntax for executing operations - example: CALL 
> compact('keyspace', 'table')
>  * Adds a local unix socket for the sidecar to communicate with c* via the 
> java driver
>  * Enables functionality required for sidecar
>  ** default system_auth to NTS
>  ** avoid setting up default cassandra superuser 
>  
> The agent is built on the ByteBuddy Agent api.  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15582) 4.0 quality testing: metrics

2020-04-20 Thread Stephen Mallette (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088060#comment-17088060
 ] 

Stephen Mallette commented on CASSANDRA-15582:
--

Getting back to the discussion on comparing metrics between 3 and 4, I think it 
makes sense to work on the manual process first and see whether it can at least 
be documented in a reasonable way. I spent a fair bit of time trying multiple 
ways to get two separate Cassandra clusters running locally (one for 3 and one 
for 4). I found it easy enough to do by installing Cassandra locally and editing 
the config a bit. I had less success with nicer approaches like ccm and Docker. 
The latter was pretty annoying, as it seems like such a simple way to get things 
working, but I was clearly confounded by something in the step of remotely 
connecting to Cassandra/JMX in a Docker container. For now, I will continue my 
analysis with the simple rig I currently have working.

> Possible to print out what was added and what was removed?

I was able to get a list of newly added metric names - so metric names added in 
4 that were not in 3:

https://gist.github.com/spmallette/c443716e1c0de40b4a5bb0ef5422aeee

There are a fair number of them and many do not appear to exist in the 
documentation, so perhaps this is another area that needs some attention in 
relation to this ticket.

> How do we ensure that all code paths that generate the metrics are exercised? 

I think this was a good point as well. I've yet to come across a metric that 
isn't initialized at Cassandra startup, but I've only determined that by random 
digging in the code, so I can't say with confidence that this is always the 
case. If anyone is familiar with metrics that only fire up when specific code 
paths are exercised, I'd be interested to know about them, as it would mean some 
new surface area to consider in all this.

I think I will venture to try to better understand the nature of the property 
keys shifting between 3 and 4 to see if I can firm up my understanding of what 
is happening there. And, hopefully, I can figure out a nicer way for anyone to 
easily set up an environment to run some of this analysis if they want to.
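
A minimal sketch of the kind of manual dump described above is shown below (the host/port 
localhost:7199, the class name, and the output handling are assumptions, not project code): dump 
every org.apache.cassandra.metrics MBean name over JMX once against a 3.x node and once against a 
4.0 node, then diff the two sorted files.

{code}
import java.io.PrintWriter;
import java.util.TreeSet;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class DumpMetricNames
{
    public static void main(String[] args) throws Exception
    {
        // args[0] = output file, e.g. "metrics-3.11.txt" or "metrics-4.0.txt"
        String url = "service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi";
        try (JMXConnector connector = JMXConnectorFactory.connect(new JMXServiceURL(url));
             PrintWriter out = new PrintWriter(args[0]))
        {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // A sorted set makes the two dumps directly diff-able.
            for (ObjectName name : new TreeSet<>(mbs.queryNames(new ObjectName("org.apache.cassandra.metrics:*"), null)))
                out.println(name.getCanonicalName());
        }
    }
}
{code}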



> 4.0 quality testing: metrics
> 
>
> Key: CASSANDRA-15582
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15582
> Project: Cassandra
>  Issue Type: Task
>  Components: Test/dtest
>Reporter: Josh McKenzie
>Assignee: Romain Hardouin
>Priority: Normal
> Fix For: 4.0-rc
>
> Attachments: Screen Shot 2020-04-07 at 5.47.17 PM.png
>
>
> In past releases we've unknowingly broken metrics integrations and introduced 
> performance regressions in metrics collection and reporting. We strive in 4.0 
> to not do that. Metrics should work well!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRASC-16) Incorporate sidecar java agent, allowing project to work with existing Cassandra releases

2020-04-20 Thread T Jake Luciani (Jira)
T Jake Luciani created CASSANDRASC-16:
-

 Summary: Incorporate sidecar java agent, allowing project to work 
with existing Cassandra releases
 Key: CASSANDRASC-16
 URL: https://issues.apache.org/jira/browse/CASSANDRASC-16
 Project: Sidecar for Apache Cassandra
  Issue Type: Improvement
Reporter: T Jake Luciani


The sidecar project should be supported by many C* versions from 2.1 to 4.0 

In order to provide a consistent set of APIs and de-couple the development 
cadence of the sidecar project from the Cassandra project, it would be most 
advantageous to use a Java agent to enable new operational hooks for the 
sidecar's use.

In the Management API we use an agent to support the following:
 * Add a local CQL syntax for executing operations - example: CALL 
compact('keyspace', 'table')
 * Adds a local unix socket for the sidecar to communicate with c* via the java 
driver
 * Enables functionality required for sidecar
 ** default system_auth to NTS
 ** avoid setting up default cassandra superuser 

 

The agent is built on the ByteBuddy Agent API; a minimal premain sketch of this approach follows below.
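
The sketch below is not the Management API agent itself; it is only a minimal premain written 
against ByteBuddy 1.10.x (where AgentBuilder.Transformer takes builder, type description, 
classloader, and module), showing the general shape of such an operational hook. The intercepted 
class and method, and the QueryInterceptor advice, are hypothetical.

{code}
import java.lang.instrument.Instrumentation;
import net.bytebuddy.agent.builder.AgentBuilder;
import net.bytebuddy.asm.Advice;
import static net.bytebuddy.matcher.ElementMatchers.named;

public class SidecarAgent
{
    public static void premain(String arguments, Instrumentation instrumentation)
    {
        new AgentBuilder.Default()
            .type(named("org.apache.cassandra.cql3.QueryProcessor"))      // hypothetical target class
            .transform((builder, typeDescription, classLoader, module) ->
                builder.visit(Advice.to(QueryInterceptor.class).on(named("process"))))
            .installOn(instrumentation);
    }

    public static class QueryInterceptor
    {
        @Advice.OnMethodEnter
        public static void enter(@Advice.Argument(0) String query)
        {
            // This is where a local operational syntax such as CALL compact('ks', 'table')
            // could be recognised and routed to a sidecar hook before normal CQL handling.
        }
    }
}
{code}

The agent jar additionally needs a Premain-Class manifest entry, and the same builder could install 
the other hooks listed above, such as the local unix socket endpoint.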



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRASC-16) Incorporate sidecar java agent, allowing project to work with existing Cassandra releases

2020-04-20 Thread T Jake Luciani (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRASC-16?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani reassigned CASSANDRASC-16:
-

Assignee: T Jake Luciani

> Incorporate sidecar java agent, allowing project to work with existing 
> Cassandra releases
> -
>
> Key: CASSANDRASC-16
> URL: https://issues.apache.org/jira/browse/CASSANDRASC-16
> Project: Sidecar for Apache Cassandra
>  Issue Type: Improvement
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
>Priority: Normal
>
> The sidecar project should be supported by many C* versions from 2.1 to 4.0 
> In order to provide a consistent set of APIs and de-couple the development 
> cadence of the sidecar project from the Cassandra project, it would be most 
> advantageous to use a java agent to enable new operational hooks for the 
> sidecars use.
> In the Management API we use an agent to support the following:
>  * Add a local CQL syntax for executing operations - example: CALL 
> compact('keyspace', 'table')
>  * Adds a local unix socket for the sidecar to communicate with c* via the 
> java driver
>  * Enables functionality required for sidecar
>  ** default system_auth to NTS
>  ** avoid setting up default cassandra superuser 
>  
> The agent is built on the ByteBuddy Agent api.  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRASC-16) Incorporate sidecar java agent, allowing project to work with existing Cassandra releases

2020-04-20 Thread T Jake Luciani (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRASC-16?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRASC-16:
--
Description: 
The sidecar project should support many C* versions, from 2.1 to 4.0.

In order to provide a consistent set of APIs and de-couple the development 
cadence of the sidecar project from the Cassandra project, it would be most 
advantageous to use a java agent to enable new operational hooks for the 
sidecars use.

In the Management API we use an agent to support the following:
 * Add a local CQL syntax for executing operations - example: CALL 
compact('keyspace', 'table')
 * Adds a local unix socket for the sidecar to communicate with c* via the java 
driver
 * Enables functionality required for sidecar
 ** default system_auth to NTS
 ** avoid setting up default cassandra superuser 

 

The agent is built on the ByteBuddy Agent api.  

  was:
The sidecar project should be supported by many C* versions from 2.1 to 4.0 

In order to provide a consistent set of APIs and de-couple the development 
cadence of the sidecar project from the Cassandra project, it would be most 
advantageous to use a java agent to enable new operational hooks for the 
sidecars use.

In the Management API we use an agent to support the following:
 * Add a local CQL syntax for executing operations - example: CALL 
compact('keyspace', 'table')
 * Adds a local unix socket for the sidecar to communicate with c* via the java 
driver
 * Enables functionality required for sidecar
 ** default system_auth to NTS
 ** avoid setting up default cassandra superuser 

 

The agent is built on the ByteBuddy Agent api.  


> Incorporate sidecar java agent, allowing project to work with existing 
> Cassandra releases
> -
>
> Key: CASSANDRASC-16
> URL: https://issues.apache.org/jira/browse/CASSANDRASC-16
> Project: Sidecar for Apache Cassandra
>  Issue Type: Improvement
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
>Priority: Normal
>
> The sidecar project should support many C* versions, from 2.1 to 4.0.
> In order to provide a consistent set of APIs and de-couple the development 
> cadence of the sidecar project from the Cassandra project, it would be most 
> advantageous to use a java agent to enable new operational hooks for the 
> sidecars use.
> In the Management API we use an agent to support the following:
>  * Add a local CQL syntax for executing operations - example: CALL 
> compact('keyspace', 'table')
>  * Adds a local unix socket for the sidecar to communicate with c* via the 
> java driver
>  * Enables functionality required for sidecar
>  ** default system_auth to NTS
>  ** avoid setting up default cassandra superuser 
>  
> The agent is built on the ByteBuddy Agent api.  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-15674) liveDiskSpaceUsed and totalDiskSpaceUsed get corrupted if IndexSummaryRedistribution gets interrupted

2020-04-20 Thread David Capwell (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088022#comment-17088022
 ] 

David Capwell edited comment on CASSANDRA-15674 at 4/20/20, 7:38 PM:
-

[~marcuse] pushed based off your feedback 
[here|https://github.com/apache/cassandra/pull/500/commits/27e5cd7132515ab0bd114731417913c40e4d7789].
 The only change from your branch is I still verify totalDiskSpaceUsed (was 
removed from your patch, not sure why).

[Circle 
CI|https://circleci.com/workflow-run/c5d5c7c5-9c75-4e7e-a122-9771663e5451]

The unit test failure looks to be CASSANDRA-15672, which isn't in my branch.


was (Author: dcapwell):
[~marcuse] pushed based off your feedback 
[here|https://github.com/apache/cassandra/pull/500/commits/27e5cd7132515ab0bd114731417913c40e4d7789].
 The only change from your branch is I still verify totalDiskSpaceUsed (was 
removed from your patch, not sure why).

[Circle 
CI|https://circleci.com/workflow-run/c5d5c7c5-9c75-4e7e-a122-9771663e5451]

> liveDiskSpaceUsed and totalDiskSpaceUsed get corrupted if 
> IndexSummaryRedistribution gets interrupted
> -
>
> Key: CASSANDRA-15674
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15674
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Compaction, Observability/Metrics
>Reporter: David Capwell
>Assignee: David Capwell
>Priority: Normal
> Fix For: 4.0-alpha
>
>
> IndexSummaryRedistribution is a compaction task and as such extends Holder 
> and supports cancelation by throwing a CompactionInterruptedException.  The 
> issue is that IndexSummaryRedistribution tries to use transactions, but 
> mutates the sstable in-place, so the transaction is unable to roll back.
> This would be fine (it only updates the summary) if it weren't for the fact 
> that the task also attempts to mutate the two metrics liveDiskSpaceUsed and 
> totalDiskSpaceUsed; since these can't be rolled back, any cancelation can 
> corrupt them.
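
An illustrative pattern for avoiding that corruption is shown below. It is not the committed fix, 
and the gauges are simplified to AtomicLongs: the delta is computed and applied only once the 
redistribution work has returned, so a CompactionInterruptedException thrown mid-way leaves both 
counters untouched.

{code}
import java.util.concurrent.atomic.AtomicLong;

public final class DeferredMetricUpdate
{
    /** Stand-in for the real redistribution work; returns the new on-disk size and may throw. */
    public interface Redistribution
    {
        long run() throws InterruptedException;
    }

    public static void redistributeAndAccount(Redistribution work, long oldSizeOnDisk,
                                              AtomicLong liveDiskSpaceUsed, AtomicLong totalDiskSpaceUsed)
            throws InterruptedException
    {
        long newSizeOnDisk = work.run();          // if this throws, nothing below runs
        long delta = newSizeOnDisk - oldSizeOnDisk;
        // Applied only on success, so an interrupted run can never corrupt the metrics.
        liveDiskSpaceUsed.addAndGet(delta);
        totalDiskSpaceUsed.addAndGet(delta);
    }
}
{code}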



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Issue Comment Deleted] (CASSANDRA-15674) liveDiskSpaceUsed and totalDiskSpaceUsed get corrupted if IndexSummaryRedistribution gets interrupted

2020-04-20 Thread David Capwell (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Capwell updated CASSANDRA-15674:
--
Comment: was deleted

(was: [Circle 
CI|https://circleci.com/workflow-run/c5d5c7c5-9c75-4e7e-a122-9771663e5451])

> liveDiskSpaceUsed and totalDiskSpaceUsed get corrupted if 
> IndexSummaryRedistribution gets interrupted
> -
>
> Key: CASSANDRA-15674
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15674
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Compaction, Observability/Metrics
>Reporter: David Capwell
>Assignee: David Capwell
>Priority: Normal
> Fix For: 4.0-alpha
>
>
> IndexSummaryRedistribution is a compaction task and as such extends Holder 
> and supports cancelation by throwing a CompactionInterruptedException.  The 
> issue is that IndexSummaryRedistribution tries to use transactions, but 
> mutates the sstable in-place; transaction is unable to roll back.
> This would be fine (only updates summary) if it wasn’t for the fact the task 
> attempts to also mutate the two metrics liveDiskSpaceUsed and 
> totalDiskSpaceUsed, since these can’t be rolled back any cancelation could 
> corrupt these metrics.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-15674) liveDiskSpaceUsed and totalDiskSpaceUsed get corrupted if IndexSummaryRedistribution gets interrupted

2020-04-20 Thread David Capwell (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088022#comment-17088022
 ] 

David Capwell edited comment on CASSANDRA-15674 at 4/20/20, 7:35 PM:
-

[~marcuse] pushed based off your feedback 
[here|https://github.com/apache/cassandra/pull/500/commits/27e5cd7132515ab0bd114731417913c40e4d7789].
 The only change from your branch is I still verify totalDiskSpaceUsed (was 
removed from your patch, not sure why).

[Circle 
CI|https://circleci.com/workflow-run/c5d5c7c5-9c75-4e7e-a122-9771663e5451]


was (Author: dcapwell):
[~marcuse] pushed based off your feedback 
[here|https://github.com/apache/cassandra/pull/500/commits/27e5cd7132515ab0bd114731417913c40e4d7789].
 The only change from your branch is I still verify totalDiskSpaceUsed (was 
removed from your patch, not sure why).

> liveDiskSpaceUsed and totalDiskSpaceUsed get corrupted if 
> IndexSummaryRedistribution gets interrupted
> -
>
> Key: CASSANDRA-15674
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15674
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Compaction, Observability/Metrics
>Reporter: David Capwell
>Assignee: David Capwell
>Priority: Normal
> Fix For: 4.0-alpha
>
>
> IndexSummaryRedistribution is a compaction task and as such extends Holder 
> and supports cancelation by throwing a CompactionInterruptedException.  The 
> issue is that IndexSummaryRedistribution tries to use transactions, but 
> mutates the sstable in-place; transaction is unable to roll back.
> This would be fine (only updates summary) if it wasn’t for the fact the task 
> attempts to also mutate the two metrics liveDiskSpaceUsed and 
> totalDiskSpaceUsed, since these can’t be rolled back any cancelation could 
> corrupt these metrics.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15663) DESCRIBE KEYSPACE does not properly quote table names

2020-04-20 Thread Benjamin Lerer (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088028#comment-17088028
 ] 

Benjamin Lerer commented on CASSANDRA-15663:


[~aholmber], [~Ge] If you prefer to continue with this ticket, that is fine 
with me :-). I just wanted to raise the fact that the problem will be fixed by 
another ticket.

> DESCRIBE KEYSPACE does not properly quote table names
> -
>
> Key: CASSANDRA-15663
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15663
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL/Syntax
>Reporter: Oskar Liljeblad
>Assignee: Aleksandr Sorokoumov
>Priority: Normal
>  Labels: pull-request-available
> Fix For: 3.11.x, 4.0-alpha
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> How to reproduce (3.11.6) - cqlsh:
> {code}
> CREATE KEYSPACE test1 WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': '1'} AND durable_writes = true;
> CREATE TABLE test1."default" (id text PRIMARY KEY, data text, etag text);
> DESCRIBE KEYSPACE test1;
> {code}
> Output will be:
> {code}
> CREATE TABLE test1.default (
>  id text PRIMARY KEY,
>  data text,
>  etag text
> ) WITH [..]
> {code}
> Output should be:
> {code}
> CREATE TABLE test1."default" (
>  id text PRIMARY KEY,
>  data text,
>  etag text
> ) WITH [..]
> {code}
>  If you try to run {{CREATE TABLE test1.default [..]}} you will get an error 
> SyntaxException: line 1:19 no viable alternative at input 'default' (CREATE 
> TABLE test1.[default]...)
> Oskar Liljeblad
>  
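
The rule the DESCRIBE output is missing can be stated in a few lines. The sketch below is not 
cqlsh's implementation (cqlsh is Python, and the keyword list here is only a tiny illustrative 
subset); it just shows the quoting logic: any identifier that is not a plain lowercase name, or 
that collides with a keyword such as default, must be double-quoted, with embedded quotes doubled.

{code}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.regex.Pattern;

public final class CqlIdentifiers
{
    private static final Pattern UNQUOTED_SAFE = Pattern.compile("[a-z][a-z0-9_]*");
    // Tiny illustrative subset of the CQL keywords; the real list is much longer.
    private static final Set<String> KEYWORDS =
            new HashSet<>(Arrays.asList("default", "table", "select", "set", "order"));

    public static String maybeQuote(String identifier)
    {
        if (UNQUOTED_SAFE.matcher(identifier).matches() && !KEYWORDS.contains(identifier))
            return identifier;
        return '"' + identifier.replace("\"", "\"\"") + '"';
    }
}
{code}

With this rule, maybeQuote("etag") returns etag unchanged while maybeQuote("default") returns 
"default" in quotes, which is exactly what the DESCRIBE output above is missing.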



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15674) liveDiskSpaceUsed and totalDiskSpaceUsed get corrupted if IndexSummaryRedistribution gets interrupted

2020-04-20 Thread David Capwell (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088022#comment-17088022
 ] 

David Capwell commented on CASSANDRA-15674:
---

[~marcuse] pushed based off your feedback 
[here|https://github.com/apache/cassandra/pull/500/commits/27e5cd7132515ab0bd114731417913c40e4d7789].
 The only change from your branch is I still verify totalDiskSpaceUsed (was 
removed from your patch, not sure why).

> liveDiskSpaceUsed and totalDiskSpaceUsed get corrupted if 
> IndexSummaryRedistribution gets interrupted
> -
>
> Key: CASSANDRA-15674
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15674
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Compaction, Observability/Metrics
>Reporter: David Capwell
>Assignee: David Capwell
>Priority: Normal
> Fix For: 4.0-alpha
>
>
> IndexSummaryRedistribution is a compaction task and as such extends Holder 
> and supports cancelation by throwing a CompactionInterruptedException.  The 
> issue is that IndexSummaryRedistribution tries to use transactions, but 
> mutates the sstable in-place; transaction is unable to roll back.
> This would be fine (only updates summary) if it wasn’t for the fact the task 
> attempts to also mutate the two metrics liveDiskSpaceUsed and 
> totalDiskSpaceUsed, since these can’t be rolled back any cancelation could 
> corrupt these metrics.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15674) liveDiskSpaceUsed and totalDiskSpaceUsed get corrupted if IndexSummaryRedistribution gets interrupted

2020-04-20 Thread David Capwell (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088023#comment-17088023
 ] 

David Capwell commented on CASSANDRA-15674:
---

[Circle 
CI|https://circleci.com/workflow-run/c5d5c7c5-9c75-4e7e-a122-9771663e5451]

> liveDiskSpaceUsed and totalDiskSpaceUsed get corrupted if 
> IndexSummaryRedistribution gets interrupted
> -
>
> Key: CASSANDRA-15674
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15674
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Compaction, Observability/Metrics
>Reporter: David Capwell
>Assignee: David Capwell
>Priority: Normal
> Fix For: 4.0-alpha
>
>
> IndexSummaryRedistribution is a compaction task and as such extends Holder 
> and supports cancelation by throwing a CompactionInterruptedException.  The 
> issue is that IndexSummaryRedistribution tries to use transactions, but 
> mutates the sstable in-place; transaction is unable to roll back.
> This would be fine (only updates summary) if it wasn’t for the fact the task 
> attempts to also mutate the two metrics liveDiskSpaceUsed and 
> totalDiskSpaceUsed, since these can’t be rolled back any cancelation could 
> corrupt these metrics.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra-website] branch master updated: ninja-fix: remove blank index.html in doc/ directory. Directory listing is intentional (and nobody links to it anyhow)

2020-04-20 Thread mck
This is an automated email from the ASF dual-hosted git repository.

mck pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/cassandra-website.git


The following commit(s) were added to refs/heads/master by this push:
 new c1628ab  ninja-fix: remove blank index.html in doc/ directory. 
Directory listing is intentional (and nobody links to it anyhow)
c1628ab is described below

commit c1628ab2c95dbab717d02d9b4b28b6e4531590bf
Author: mck 
AuthorDate: Mon Apr 20 21:03:43 2020 +0200

ninja-fix: remove blank index.html in doc/ directory. Directory listing is 
intentional (and nobody links to it anyhow)
---
 content/doc/index.html | 0
 src/doc/index.html | 0
 2 files changed, 0 insertions(+), 0 deletions(-)

diff --git a/content/doc/index.html b/content/doc/index.html
deleted file mode 100644
index e69de29..000
diff --git a/src/doc/index.html b/src/doc/index.html
deleted file mode 100644
index e69de29..000


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15733) jvm dtest builder should be provided to the factory and expose state

2020-04-20 Thread David Capwell (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088007#comment-17088007
 ] 

David Capwell commented on CASSANDRA-15733:
---

Since the dtest changes are OKed, I'll send a patch for all 4 branches.

> jvm dtest builder should be provided to the factory and expose state
> 
>
> Key: CASSANDRA-15733
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15733
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Test/dtest
>Reporter: David Capwell
>Assignee: David Capwell
>Priority: Normal
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Currently the builder is rather heavy: it creates the configs and calls the 
> factory with specific fields only. This isn't very flexible and makes it 
> harder to have custom cluster definitions that require additional fields to 
> be defined.  To solve this we should pass the builder itself to the factory 
> and expose its state so the factory can get all the fields it needs; the 
> factory should also be in charge of creating the configs.
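
A condensed sketch of that shape is below (class and method names differ from the real 
cassandra-in-jvm-dtest-api types; this is only an illustration of the idea): the factory receives 
the builder itself and reads whatever state it needs, rather than being handed a fixed argument 
list.

{code}
public abstract class AbstractBuilder<C, B extends AbstractBuilder<C, B>>
{
    /** The factory pulls node count, version, configs, etc. straight from the builder. */
    public interface Factory<C, B extends AbstractBuilder<C, B>>
    {
        C newCluster(B builder);
    }

    private final Factory<C, B> factory;
    private int nodeCount = 1;

    protected AbstractBuilder(Factory<C, B> factory)
    {
        this.factory = factory;
    }

    @SuppressWarnings("unchecked")
    public B withNodes(int nodeCount)
    {
        this.nodeCount = nodeCount;
        return (B) this;                 // self-typed so subclasses keep their fluent type
    }

    public int getNodeCount()
    {
        return nodeCount;
    }

    @SuppressWarnings("unchecked")
    public C createWithoutStarting()
    {
        return factory.newCluster((B) this);
    }
}
{code}

Adding a new knob then only requires a getter on the builder; existing factory implementations 
keep compiling because the factory signature never changes.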



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12144) Undeletable / duplicate rows after upgrading from 2.2.4 to 3.0.7

2020-04-20 Thread Josh McKenzie (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh McKenzie updated CASSANDRA-12144:
--
Component/s: Local/SSTable

> Undeletable / duplicate rows after upgrading from 2.2.4 to 3.0.7
> 
>
> Key: CASSANDRA-12144
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12144
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/SSTable
>Reporter: Stanislav Vishnevskiy
>Assignee: Alex Petrov
>Priority: Normal
> Fix For: 3.0.9, 3.8
>
>
> We upgraded our cluster today and now have some rows that refuse to delete.
> Here are some example traces.
> https://gist.github.com/vishnevskiy/36aa18c468344ea22d14f9fb9b99171d
> Even weirder.
> Updating the row and querying it back results in 2 rows even though the id is 
> the clustering key.
> {noformat}
> user_id| id | since| type
> ---++--+--
> 116138050710536192 | 153047019424972800 | null |0
> 116138050710536192 | 153047019424972800 | 2016-05-30 14:53:08+ |2
> {noformat}
> And then deleting it again only removes the new one.
> {noformat}
> cqlsh:discord_relationships> DELETE FROM relationships WHERE user_id = 
> 116138050710536192 AND id = 153047019424972800;
> cqlsh:discord_relationships> SELECT * FROM relationships WHERE user_id = 
> 116138050710536192 AND id = 153047019424972800;
>  user_id| id | since| type
> ++--+--
>  116138050710536192 | 153047019424972800 | 2016-05-30 14:53:08+ |2
> {noformat}
> We tried repairing, compacting, scrubbing. No Luck.
> Not sure what to do. Is anyone aware of this?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13152) UPDATE on counter columns with empty list as argument in IN disables cluster

2020-04-20 Thread Josh McKenzie (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh McKenzie updated CASSANDRA-13152:
--
Component/s: Local/SSTable
 CQL/Interpreter

> UPDATE on counter columns with empty list as argument in IN disables cluster
> 
>
> Key: CASSANDRA-13152
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13152
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL/Interpreter, Local/SSTable
> Environment: Linux Ubuntu 16
> 3 Virtual machines
>Reporter: jorge collinet
>Assignee: Benjamin Lerer
>Priority: Urgent
> Fix For: 3.0.11, 3.11.0, 4.0
>
>
> On a 3 node cluster
> with this table (replication factor of 2):
> {code}
> CREATE TABLE tracking.item_items_rec_history (
>   reference_id bigint,
>   country text,
>   portal text,
>   app_name text,
>   recommended_id bigint,
>   counter counter,
>   PRIMARY KEY (reference_id, country, portal, app_name, recommended_id)
> );
> {code}
> If I execute 
> {code}
> UPDATE user_items_rec_history 
> SET counter = counter + 1 
> WHERE reference_id = 1 AND country = '' AND portal = '' AND app_name = '' AND 
> recommended_id IN ();
> {code}
> Take note that the IN is empty
> The cluster starts to malfunction and responds with a lot of timeouts to any query.
> After resetting some of the nodes, the cluster starts to function normally 
> again.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13265) Expiration in OutboundTcpConnection can block the reader Thread

2020-04-20 Thread Josh McKenzie (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh McKenzie updated CASSANDRA-13265:
--
Component/s: Messaging/Internode

> Expiration in OutboundTcpConnection can block the reader Thread
> ---
>
> Key: CASSANDRA-13265
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13265
> Project: Cassandra
>  Issue Type: Bug
>  Components: Messaging/Internode
> Environment: Cassandra 3.0.9
> Java HotSpot(TM) 64-Bit Server VM version 25.112-b15 (Java version 
> 1.8.0_112-b15)
> Linux 3.16
>Reporter: Christian Esken
>Assignee: Christian Esken
>Priority: Normal
> Fix For: 3.0.14, 3.11.0, 4.0
>
> Attachments: cassandra-13265-2.2-dtest_stdout.txt, 
> cassandra-13265-trun-dtest_stdout.txt, 
> cassandra.pb-cache4-dus.2017-02-17-19-36-26.chist.xz, 
> cassandra.pb-cache4-dus.2017-02-17-19-36-26.td.xz
>
>
> I observed that sometimes a single node in a Cassandra cluster fails to 
> communicate to the other nodes. This can happen at any time, during peak load 
> or low load. Restarting that single node from the cluster fixes the issue.
> Before going in to details, I want to state that I have analyzed the 
> situation and am already developing a possible fix. Here is the analysis so 
> far:
> - A Threaddump in this situation showed  324 Threads in the 
> OutboundTcpConnection class that want to lock the backlog queue for doing 
> expiration.
> - A class histogram shows 262508 instances of 
> OutboundTcpConnection$QueuedMessage.
> What is the effect of it? As soon as the Cassandra node has reached a certain 
> amount of queued messages, it starts thrashing itself to death. Each of the 
> threads fully locks the queue for reading and writing by calling 
> iterator.next(), making the situation worse and worse.
> - Writing: Only after 262508 locking operation it can progress with actually 
> writing to the Queue.
> - Reading: Is also blocked, as 324 Threads try to do iterator.next(), and 
> fully lock the Queue
> This means: Writing blocks the Queue for reading, and readers might even be 
> starved which makes the situation even worse.
> -
> The setup is:
>  - 3-node cluster
>  - replication factor 2
>  - Consistency LOCAL_ONE
>  - No remote DC's
>  - high write throughput (10 INSERT statements per second and more during 
> peak times).
>  
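
One common remedy for this kind of pile-up is sketched below (illustrative only, not the exact 
patch that shipped, and with the backlog simplified to a ConcurrentLinkedQueue): let at most one 
thread walk the backlog for expiration at a time, with every other caller skipping the scan 
instead of queueing up on the lock.

{code}
import java.util.Iterator;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Predicate;

public final class BacklogExpirer<M>
{
    private final Queue<M> backlog = new ConcurrentLinkedQueue<>();
    private final AtomicBoolean expirationInProgress = new AtomicBoolean(false);

    public void add(M message)
    {
        backlog.add(message);
    }

    /** Only one thread performs the scan; concurrent callers return immediately. */
    public void expireMessages(Predicate<M> isExpired)
    {
        if (!expirationInProgress.compareAndSet(false, true))
            return;
        try
        {
            for (Iterator<M> it = backlog.iterator(); it.hasNext(); )
            {
                if (isExpired.test(it.next()))
                    it.remove();
            }
        }
        finally
        {
            expirationInProgress.set(false);
        }
    }
}
{code}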



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13620) Don't skip corrupt sstables on startup

2020-04-20 Thread Josh McKenzie (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh McKenzie updated CASSANDRA-13620:
--
Component/s: Local/SSTable

> Don't skip corrupt sstables on startup
> --
>
> Key: CASSANDRA-13620
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13620
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/SSTable
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Normal
> Fix For: 3.0.15, 3.11.1, 4.0
>
> Attachments: 13620-3.0.png, 13620-3.11.png, 13620-trunk.png
>
>
> If we get an IOException when opening an sstable on startup, we just 
> [skip|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java#L563-L567]
>  it and continue starting
> we should use the DiskFailurePolicy and never explicitly catch an IOException 
> here



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13646) Bind parameters of collection types are not properly validated

2020-04-20 Thread Josh McKenzie (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh McKenzie updated CASSANDRA-13646:
--
Component/s: CQL/Interpreter

> Bind parameters of collection types are not properly validated
> --
>
> Key: CASSANDRA-13646
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13646
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL/Interpreter
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
>Priority: Normal
> Fix For: 2.2.11, 3.0.15, 3.11.1, 4.0
>
>
> It looks like C* is not properly validating the bind parameters for 
> collection types. If an element of the collection is invalid, the value will 
> not be rejected and might cause an exception later on.
> The problem can be reproduced with the following test:
> {code}
> @Test
> public void testInvalidQueries() throws Throwable
> {
> createTable("CREATE TABLE %s (k int PRIMARY KEY, s 
> frozen>>)");
> execute("INSERT INTO %s (k, s) VALUES (0, ?)", 
> set(tuple(1,"1",1.0,1), tuple(2,"2",2.0,2)));
> }
> {code}
> The invalid Tuple will cause an "IndexOutOfBoundsException: Index: 3, Size: 3"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15666) Race condition when completing stream sessions

2020-04-20 Thread Sergio Bossa (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087966#comment-17087966
 ] 

Sergio Bossa commented on CASSANDRA-15666:
--

Good to merge for me.

> Race condition when completing stream sessions
> --
>
> Key: CASSANDRA-15666
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15666
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Streaming and Messaging
>Reporter: Sergio Bossa
>Assignee: ZhaoYang
>Priority: Normal
>  Labels: pull-request-available
> Fix For: 4.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {{StreamSession#prepareAsync()}} executes, as the name implies, 
> asynchronously from the IO thread: this opens up for race conditions between 
> the sending of the {{PrepareSynAckMessage}} and the call to 
> {{StreamSession#maybeCompleted()}}. I.e., the following could happen:
> 1) Node A sends {{PrepareSynAckMessage}} from the {{prepareAsync()}} thread.
> 2) Node B receives it and starts streaming.
> 3) Node A receives the streamed file and sends {{ReceivedMessage}}.
> 4) At this point, if this was the only file to stream, both nodes are ready 
> to close the session via {{maybeCompleted()}}, but:
> a) Node A will call it twice from both the IO thread and the thread at #1, 
> closing the session and its channels.
> b) Node B will attempt to send a {{CompleteMessage}}, but will fail because 
> the session has been closed in the meantime.
> There are other subtle variations of the pattern above, depending on the 
> order of concurrently sent/received messages.
> I believe the best fix would be to modify the message exchange so that:
> 1) Only the "follower" is allowed to send the {{CompleteMessage}}.
> 2) Only the "initiator" is allowed to close the session and its channels 
> after receiving the {{CompleteMessage}}.
> By doing so, the message exchange logic would be easier to reason about, 
> which is overall a win anyway.
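
A compact sketch of the proposed completion rules is shown below (illustrative only, heavily 
simplified from the real StreamSession state machine; the Channel interface is a stand-in): only 
the follower sends COMPLETE, and only the initiator closes the session, once it has both finished 
its own work and received that COMPLETE.

{code}
public final class CompletionProtocol
{
    public interface Channel
    {
        void send(String message);
        void close();
    }

    private final boolean isFollower;
    private boolean localWorkDone;
    private boolean completeReceived;

    public CompletionProtocol(boolean isFollower)
    {
        this.isFollower = isFollower;
    }

    public synchronized void onLocalWorkDone(Channel channel)
    {
        localWorkDone = true;
        if (isFollower)
            channel.send("COMPLETE");   // the follower only reports, it never closes
        else
            maybeClose(channel);
    }

    public synchronized void onCompleteReceived(Channel channel)
    {
        completeReceived = true;        // only the initiator ever receives COMPLETE
        maybeClose(channel);
    }

    private void maybeClose(Channel channel)
    {
        if (!isFollower && localWorkDone && completeReceived)
            channel.close();            // a single, well-defined side closes the session
    }
}
{code}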



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13776) Adding a field to an UDT can corrupt the tables using it

2020-04-20 Thread Josh McKenzie (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh McKenzie updated CASSANDRA-13776:
--
Component/s: Local/SSTable

> Adding a field to an UDT can corrupt the tables using it
> -
>
> Key: CASSANDRA-13776
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13776
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/SSTable
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
>Priority: Urgent
> Fix For: 3.0.15, 3.11.1, 4.0
>
>
> Adding a field to an UDT which is used as a {{Set}} element or as a {{Map}} 
> element can corrupt the table.
> The problem can be reproduced using the following test case:
> {code}
> @Test
> public void testReadAfterAlteringUserTypeNestedWithinSet() throws 
> Throwable
> {
> String ut1 = createType("CREATE TYPE %s (a int)");
> String columnType = KEYSPACE + "." + ut1;
> try
> {
> createTable("CREATE TABLE %s (x int PRIMARY KEY, y set columnType + ">>)");
> disableCompaction();
> execute("INSERT INTO %s (x, y) VALUES(1, ?)", set(userType(1), 
> userType(2)));
> assertRows(execute("SELECT * FROM %s"), row(1, set(userType(1), 
> userType(2;
> flush();
> assertRows(execute("SELECT * FROM %s WHERE x = 1"),
>row(1, set(userType(1), userType(2;
> execute("ALTER TYPE " + KEYSPACE + "." + ut1 + " ADD b int");
> execute("UPDATE %s SET y = y + ? WHERE x = 1",
> set(userType(1, 1), userType(1, 2), userType(2, 1)));
> flush();
> assertRows(execute("SELECT * FROM %s WHERE x = 1"),
>row(1, set(userType(1),
>   userType(1, 1),
>   userType(1, 2),
>   userType(2),
>   userType(2, 1;
> compact();
> assertRows(execute("SELECT * FROM %s WHERE x = 1"),
>row(1, set(userType(1),
>   userType(1, 1),
>   userType(1, 2),
>   userType(2),
>   userType(2, 1;
> }
> finally
> {
> enableCompaction();
> }
> }
> {code} 
> There are in fact 2 problems:
> # When the {{sets}} from the 2 versions are merged, the {{ColumnDefinition}} 
> being picked up can be the older one, in which case sorting the tuples may 
> lead to an {{IndexOutOfBoundsException}}.
> # During compaction, the old column definition can be the one kept for the 
> SSTable metadata. If that is the case, the SSTable will no longer be readable 
> and will be marked as {{corrupted}}.
> If one of the tables using the type has a Materialized View attached to it, 
> the MV updates can also fail with {{IndexOutOfBoundsException}}.
> This problem can be reproduced using the following test:
> {code}
> @Test
> public void testAlteringUserTypeNestedWithinSetWithView() throws Throwable
> {
> String columnType = typeWithKs(createType("CREATE TYPE %s (a int)"));
> createTable("CREATE TABLE %s (pk int, c int, v int, s set columnType + ">>, PRIMARY KEY (pk, c))");
> execute("CREATE MATERIALIZED VIEW " + keyspace() + ".view1 AS SELECT 
> c, pk, v FROM %s WHERE pk IS NOT NULL AND c IS NOT NULL AND v IS NOT NULL 
> PRIMARY KEY (c, pk)");
> execute("INSERT INTO %s (pk, c, v, s) VALUES(?, ?, ?, ?)", 1, 1, 1, 
> set(userType(1), userType(2)));
> flush();
> execute("ALTER TYPE " + columnType + " ADD b int");
> execute("UPDATE %s SET s = s + ?, v = ? WHERE pk = ? AND c = ?",
> set(userType(1, 1), userType(1, 2), userType(2, 1)), 2, 1, 1);
> assertRows(execute("SELECT * FROM %s WHERE pk = ? AND c = ?", 1, 1),
>row(1, 1, 2, set(userType(1),
> userType(1, 1),
> userType(1, 2),
> userType(2),
> userType(2, 1;
> }
> {code}  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15718) Improve BatchMetricsTest

2020-04-20 Thread Stephen Mallette (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087953#comment-17087953
 ] 

Stephen Mallette commented on CASSANDRA-15718:
--

[~dcapwell] (or others who might be interested) I was wondering if you'd had a 
moment to consider my comments above, as well as the changes in the branch?

> Improve BatchMetricsTest 
> -
>
> Key: CASSANDRA-15718
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15718
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Test/unit
>Reporter: Stephen Mallette
>Assignee: Stephen Mallette
>Priority: Normal
>
> As noted in CASSANDRA-15582 {{BatchMetricsTest}} should test 
> {{BatchStatement.Type.COUNTER}} to cover all the {{BatchMetrics}}.  Some 
> changes were introduced to make this improvement at:
> https://github.com/apache/cassandra/compare/trunk...spmallette:CASSANDRA-15582-trunk-batchmetrics
> and the following suggestions were made in review (in addition to the 
> suggestion that a separate JIRA be created for this change) by [~dcapwell]:
> {quote}
> * I like the usage of BatchStatement.Type for the tests
> * honestly feel quick theories is better than random, but glad you added the 
> seed to all asserts =). Would still be better as a quick theories test since 
> you basically wrote a property anyways!
> * 
> https://github.com/apache/cassandra/compare/trunk...spmallette:CASSANDRA-15582-trunk-batchmetrics#diff-8948cec1f9d33f10b15c38de80141548R131
>  feel you should rename to expectedPartitionsPerLoggedBatch 
> {Count,Logged,Unlogged}
> * . pre is what the value is, post is what the value is expected to be 
> (rather than what it is).
> * 
> * 
> https://github.com/apache/cassandra/compare/trunk...spmallette:CASSANDRA-15582-trunk-batchmetrics#diff-8948cec1f9d33f10b15c38de80141548R150
>  this doesn't look correct. the batch has distinctPartitions mutations, so 
> shouldn't max reflect that? I ran the current test in a debugger and see that 
> that is the case (aka current test is wrong).
> most of the comments are nit picks, but the last one looks like a test bug to 
> me
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13849) GossipStage blocks because of race in ActiveRepairService

2020-04-20 Thread Josh McKenzie (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh McKenzie updated CASSANDRA-13849:
--
Component/s: Cluster/Gossip

> GossipStage blocks because of race in ActiveRepairService
> -
>
> Key: CASSANDRA-13849
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13849
> Project: Cassandra
>  Issue Type: Bug
>  Components: Cluster/Gossip
>Reporter: Tom van der Woerdt
>Assignee: Sergey Lapukhov
>Priority: Normal
>  Labels: patch
> Fix For: 3.0.16, 3.11.2, 4.0
>
> Attachments: CAS-13849.patch, CAS-13849_2.patch, CAS-13849_3.patch
>
>
> Bad luck caused a kernel panic in a cluster, and that took another node with 
> it because GossipStage stopped responding.
> I think it's pretty obvious what's happening; here are the relevant excerpts 
> from the stack traces:
> {noformat}
> "Thread-24004" #393781 daemon prio=5 os_prio=0 tid=0x7efca9647400 
> nid=0xe75c waiting on condition [0x7efaa47fe000]
>java.lang.Thread.State: TIMED_WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x00052b63a7e8> (a 
> java.util.concurrent.CountDownLatch$Sync)
> at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
> at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
> at 
> org.apache.cassandra.service.ActiveRepairService.prepareForRepair(ActiveRepairService.java:332)
> - locked <0x0002e6bc99f0> (a 
> org.apache.cassandra.service.ActiveRepairService)
> at 
> org.apache.cassandra.repair.RepairRunnable.runMayThrow(RepairRunnable.java:211)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)   
>   
>   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory$$Lambda$3/1498438472.run(Unknown
>  Source)
> at java.lang.Thread.run(Thread.java:748)
> "GossipTasks:1" #367 daemon prio=5 os_prio=0 tid=0x7efc5e971000 
> nid=0x700b waiting for monitor entry [0x7dfb839fe000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.cassandra.service.ActiveRepairService.removeParentRepairSession(ActiveRepairService.java:421)
> - waiting to lock <0x0002e6bc99f0> (a 
> org.apache.cassandra.service.ActiveRepairService)
> at 
> org.apache.cassandra.service.ActiveRepairService.convict(ActiveRepairService.java:776)
> at 
> org.apache.cassandra.gms.FailureDetector.interpret(FailureDetector.java:306)
> at org.apache.cassandra.gms.Gossiper.doStatusCheck(Gossiper.java:775) 
>   
>  at 
> org.apache.cassandra.gms.Gossiper.access$800(Gossiper.java:67)
> at org.apache.cassandra.gms.Gossiper$GossipTask.run(Gossiper.java:187)
> at 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:118)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory$$Lambda$3/1498438472.run(Unknown
>  Source)
> at java.lang.Thread.run(Thread.java:748)
> "GossipStage:1" #320 daemon prio=5 os_prio=0 tid=0x7efc5b9f2c00 
> nid=0x6fcd waiting for monitor entry [0x7e260186a000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.cassandra.service.ActiveRepairService.removeParentRepairSession(ActiveRepairService.java:421)
>

[jira] [Updated] (CASSANDRA-14330) Handle repeat open bound from SRP in read repair

2020-04-20 Thread Josh McKenzie (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh McKenzie updated CASSANDRA-14330:
--
Component/s: Local/SSTable
 Consistency/Coordination

> Handle repeat open bound from SRP in read repair
> 
>
> Key: CASSANDRA-14330
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14330
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Coordination, Local/SSTable
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Normal
> Fix For: 3.0.17, 3.11.3, 4.0
>
>
> If there is an open range tombstone in an iterator, a short read protection 
> request for it will include a repeat open bound. Currently, {{DataResolver}} 
> doesn't expect this, and will raise an assertion, timing out the request:
> {code:java}
> java.lang.AssertionError: Error merging RTs on test.test: merged=null, 
> versions=[Marker EXCL_START_BOUND(0)@0, null], sources={[/127.0.0.1, 
> /127.0.0.2]}, responses:
> /127.0.0.1 => [test.test] key=0 
> partition_deletion=deletedAt=-9223372036854775808, localDeletion=2147483647 
> columns=[[] | []]
>Row[info=[ts=1] ]: ck=0 | ,
>/127.0.0.2 => [test.test] key=0 
> partition_deletion=deletedAt=-9223372036854775808, localDeletion=2147483647 
> columns=[[] | []]
>Row[info=[ts=-9223372036854775808] del=deletedAt=1, 
> localDeletion=1521572669 ]: ck=0 |
>Row[info=[ts=1] ]: ck=1 | 
> {code}
> As this is a completely normal/common scenario, we should allow for this, and 
> relax the assertion.
> Additionally, the linked branch makes the re-throwing {{AssertionError}} more 
> detailed and more correct: the responses are now printed out in the correct 
> order, respecting {{isReversed}}, the command causing the assertion is now 
> logged, as is {{isReversed}} itself, and local deletion times for RTs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14515) Short read protection in presence of almost-purgeable range tombstones may cause permanent data loss

2020-04-20 Thread Josh McKenzie (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh McKenzie updated CASSANDRA-14515:
--
Component/s: Local/SSTable
 Consistency/Coordination

> Short read protection in presence of almost-purgeable range tombstones may 
> cause permanent data loss
> 
>
> Key: CASSANDRA-14515
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14515
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Coordination, Local/SSTable
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Urgent
> Fix For: 3.0.17, 3.11.3, 4.0
>
>
> Because read responses don't necessarily close their open RT bounds, it's 
> possible to lose data during short read protection, if a closing bound is 
> compacted away between two adjacent reads from a node.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra-in-jvm-dtest-api] branch master updated: Revert "Cluster builder should be provided to the factory and expose state"

2020-04-20 Thread ifesdjeen
This is an automated email from the ASF dual-hosted git repository.

ifesdjeen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/cassandra-in-jvm-dtest-api.git


The following commit(s) were added to refs/heads/master by this push:
 new 43e6d54  Revert "Cluster builder should be provided to the factory and 
expose state"
43e6d54 is described below

commit 43e6d54a0f396598ecbffc52d6fb2f4f17bd69c6
Author: Alex Petrov 
AuthorDate: Mon Apr 20 18:37:22 2020 +0200

Revert "Cluster builder should be provided to the factory and expose state"

This reverts commit 50fdfefa11248e7b93507b8e66322dc7a5056744.
---
 .../shared/{AbstractBuilder.java => Builder.java}  | 130 ++---
 .../distributed/shared/DistributedTestBase.java|   2 +-
 2 files changed, 65 insertions(+), 67 deletions(-)

diff --git 
a/src/main/java/org/apache/cassandra/distributed/shared/AbstractBuilder.java 
b/src/main/java/org/apache/cassandra/distributed/shared/Builder.java
similarity index 71%
rename from 
src/main/java/org/apache/cassandra/distributed/shared/AbstractBuilder.java
rename to src/main/java/org/apache/cassandra/distributed/shared/Builder.java
index 993b8a3..b3b7db0 100644
--- a/src/main/java/org/apache/cassandra/distributed/shared/AbstractBuilder.java
+++ b/src/main/java/org/apache/cassandra/distributed/shared/Builder.java
@@ -25,7 +25,6 @@ import java.util.ArrayList;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
-import java.util.Objects;
 import java.util.function.Consumer;
 import java.util.stream.Collectors;
 import java.util.stream.IntStream;
@@ -37,14 +36,17 @@ import org.apache.cassandra.distributed.api.TokenSupplier;
 
 import static 
org.apache.cassandra.distributed.api.TokenSupplier.evenlyDistributedTokens;
 
-public abstract class AbstractBuilder>
+public abstract class Builder
 {
-public interface Factory>
+
+private final int BROADCAST_PORT = 7012;
+
+public interface Factory
 {
-C newCluster(B builder);
+C newCluster(File root, Versions.Version version, 
List configs, ClassLoader sharedClassLoader);
 }
 
-private final Factory factory;
+private final Factory factory;
 private int nodeCount;
 private int subnet;
 private Map nodeIdTopology;
@@ -52,45 +54,12 @@ public abstract class AbstractBuilder configUpdater;
-private ClassLoader sharedClassLoader = 
Thread.currentThread().getContextClassLoader();
 
-public AbstractBuilder(Factory factory)
+public Builder(Factory factory)
 {
 this.factory = factory;
 }
 
-public int getNodeCount() {
-return nodeCount;
-}
-
-public int getSubnet() {
-return subnet;
-}
-
-public Map getNodeIdTopology() {
-return nodeIdTopology;
-}
-
-public TokenSupplier getTokenSupplier() {
-return tokenSupplier;
-}
-
-public File getRoot() {
-return root;
-}
-
-public Versions.Version getVersion() {
-return version;
-}
-
-public Consumer getConfigUpdater() {
-return configUpdater;
-}
-
-public ClassLoader getSharedClassLoader() {
-return sharedClassLoader;
-}
-
 public C start() throws IOException
 {
 C cluster = createWithoutStarting();
@@ -106,50 +75,79 @@ public abstract class AbstractBuilder 
nodeId,
 nodeId -> 
NetworkTopology.dcAndRack(dcName(0), rackName(0;
+}
+
+root.mkdirs();
+
+ClassLoader sharedClassLoader = 
Thread.currentThread().getContextClassLoader();
+
+List configs = new ArrayList<>();
 
 // TODO: make token allocation strategy configurable
 if (tokenSupplier == null)
 tokenSupplier = evenlyDistributedTokens(nodeCount);
 
-return factory.newCluster((B) this);
+for (int i = 0; i < nodeCount; ++i)
+{
+int nodeNum = i + 1;
+configs.add(createInstanceConfig(nodeNum));
+}
+
+return factory.newCluster(root, version, configs, sharedClassLoader);
 }
 
-public B withSharedClassLoader(ClassLoader sharedClassLoader)
+public IInstanceConfig newInstanceConfig(C cluster)
 {
-this.sharedClassLoader = Objects.requireNonNull(sharedClassLoader, 
"sharedClassLoader");
-return (B) this;
+return createInstanceConfig(cluster.size() + 1);
 }
 
-public B withTokenSupplier(TokenSupplier tokenSupplier)
+protected IInstanceConfig createInstanceConfig(int nodeNum)
+{
+String ipPrefix = "127.0." + subnet + ".";
+String seedIp = ipPrefix + "1";
+String ipAddress = ipPrefix + nodeNum;
+long token = tokenSupplier.token(nodeNum);
+
+NetworkTopology topology = NetworkTopology.build(ipPrefix, 
BROADCAST_PORT, nodeIdTopology);
+
+IInstanceConfig config = generateConfig(nodeNum, ipAddress, topology, 
root, 

[cassandra-in-jvm-dtest-api] 01/02: Cluster builder should be provided to the factory and expose state

2020-04-20 Thread ifesdjeen
This is an automated email from the ASF dual-hosted git repository.

ifesdjeen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/cassandra-in-jvm-dtest-api.git

commit 50fdfefa11248e7b93507b8e66322dc7a5056744
Author: David Capwell 
AuthorDate: Wed Apr 15 13:30:08 2020 -0700

Cluster builder should be provided to the factory and expose state

Patch by David Capwell, reviewed by Alex Petrov for CASSANDRA-15733.
---
 .../shared/{Builder.java => AbstractBuilder.java}  | 130 +++--
 .../distributed/shared/DistributedTestBase.java|   2 +-
 2 files changed, 67 insertions(+), 65 deletions(-)

diff --git a/src/main/java/org/apache/cassandra/distributed/shared/Builder.java 
b/src/main/java/org/apache/cassandra/distributed/shared/AbstractBuilder.java
similarity index 71%
rename from src/main/java/org/apache/cassandra/distributed/shared/Builder.java
rename to 
src/main/java/org/apache/cassandra/distributed/shared/AbstractBuilder.java
index b3b7db0..993b8a3 100644
--- a/src/main/java/org/apache/cassandra/distributed/shared/Builder.java
+++ b/src/main/java/org/apache/cassandra/distributed/shared/AbstractBuilder.java
@@ -25,6 +25,7 @@ import java.util.ArrayList;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
+import java.util.Objects;
 import java.util.function.Consumer;
 import java.util.stream.Collectors;
 import java.util.stream.IntStream;
@@ -36,17 +37,14 @@ import org.apache.cassandra.distributed.api.TokenSupplier;
 
 import static 
org.apache.cassandra.distributed.api.TokenSupplier.evenlyDistributedTokens;
 
-public abstract class Builder
+public abstract class AbstractBuilder>
 {
-
-private final int BROADCAST_PORT = 7012;
-
-public interface Factory
+public interface Factory>
 {
-C newCluster(File root, Versions.Version version, 
List configs, ClassLoader sharedClassLoader);
+C newCluster(B builder);
 }
 
-private final Factory factory;
+private final Factory factory;
 private int nodeCount;
 private int subnet;
 private Map nodeIdTopology;
@@ -54,12 +52,45 @@ public abstract class Builder
 private File root;
 private Versions.Version version;
 private Consumer configUpdater;
+private ClassLoader sharedClassLoader = 
Thread.currentThread().getContextClassLoader();
 
-public Builder(Factory factory)
+public AbstractBuilder(Factory factory)
 {
 this.factory = factory;
 }
 
+public int getNodeCount() {
+return nodeCount;
+}
+
+public int getSubnet() {
+return subnet;
+}
+
+public Map getNodeIdTopology() {
+return nodeIdTopology;
+}
+
+public TokenSupplier getTokenSupplier() {
+return tokenSupplier;
+}
+
+public File getRoot() {
+return root;
+}
+
+public Versions.Version getVersion() {
+return version;
+}
+
+public Consumer getConfigUpdater() {
+return configUpdater;
+}
+
+public ClassLoader getSharedClassLoader() {
+return sharedClassLoader;
+}
+
 public C start() throws IOException
 {
 C cluster = createWithoutStarting();
@@ -75,79 +106,50 @@ public abstract class Builder
 if (nodeCount <= 0)
 throw new IllegalStateException("Cluster must have at least one 
node");
 
+root.mkdirs();
+
 if (nodeIdTopology == null)
-{
 nodeIdTopology = IntStream.rangeClosed(1, nodeCount).boxed()
   .collect(Collectors.toMap(nodeId -> 
nodeId,
 nodeId -> 
NetworkTopology.dcAndRack(dcName(0), rackName(0;
-}
-
-root.mkdirs();
-
-ClassLoader sharedClassLoader = 
Thread.currentThread().getContextClassLoader();
-
-List configs = new ArrayList<>();
 
 // TODO: make token allocation strategy configurable
 if (tokenSupplier == null)
 tokenSupplier = evenlyDistributedTokens(nodeCount);
 
-for (int i = 0; i < nodeCount; ++i)
-{
-int nodeNum = i + 1;
-configs.add(createInstanceConfig(nodeNum));
-}
-
-return factory.newCluster(root, version, configs, sharedClassLoader);
+return factory.newCluster((B) this);
 }
 
-public IInstanceConfig newInstanceConfig(C cluster)
+public B withSharedClassLoader(ClassLoader sharedClassLoader)
 {
-return createInstanceConfig(cluster.size() + 1);
+this.sharedClassLoader = Objects.requireNonNull(sharedClassLoader, 
"sharedClassLoader");
+return (B) this;
 }
 
-protected IInstanceConfig createInstanceConfig(int nodeNum)
-{
-String ipPrefix = "127.0." + subnet + ".";
-String seedIp = ipPrefix + "1";
-String ipAddress = ipPrefix + nodeNum;
-long token = tokenSupplier.token(nodeNum);
-
-NetworkTopology topology = 

[cassandra-in-jvm-dtest-api] 02/02: Fix compile errors

2020-04-20 Thread ifesdjeen
This is an automated email from the ASF dual-hosted git repository.

ifesdjeen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/cassandra-in-jvm-dtest-api.git

commit b41ea494427e8eaf18682bacab72d273023844a4
Author: Alex Petrov 
AuthorDate: Mon Apr 20 18:29:34 2020 +0200

Fix compile errors
---
 pom.xml | 1 +
 1 file changed, 1 insertion(+)

diff --git a/pom.xml b/pom.xml
index ed69c31..df733c6 100644
--- a/pom.xml
+++ b/pom.xml
@@ -61,6 +61,7 @@
 
 
 README.md
+CHANGES.txt
 
 
 


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra-in-jvm-dtest-api] branch master updated (7ddfe52 -> b41ea49)

2020-04-20 Thread ifesdjeen
This is an automated email from the ASF dual-hosted git repository.

ifesdjeen pushed a change to branch master
in repository 
https://gitbox.apache.org/repos/asf/cassandra-in-jvm-dtest-api.git.


from 7ddfe52  Support for replacing logback with alternate logger config 
(like log4j2)
 new 50fdfef  Cluster builder should be provided to the factory and expose 
state
 new b41ea49  Fix compile errors

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 pom.xml|   1 +
 .../shared/{Builder.java => AbstractBuilder.java}  | 130 +++--
 .../distributed/shared/DistributedTestBase.java|   2 +-
 3 files changed, 68 insertions(+), 65 deletions(-)
 rename src/main/java/org/apache/cassandra/distributed/shared/{Builder.java => 
AbstractBuilder.java} (71%)
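
For context, commit 50fdfef above hands the cluster builder itself to the factory and exposes its state through getters (the change reverted later in this thread). A minimal, hypothetical sketch of that builder-to-factory pattern, with simplified names that are not the actual in-jvm dtest API:

{code:java}
// Hypothetical sketch of the "builder passed to the factory" pattern; names are
// simplified and do not match the real org.apache.cassandra.distributed.shared API.
public class BuilderFactorySketch
{
    interface ICluster {}

    interface Factory<C extends ICluster, B extends AbstractBuilder<C, B>>
    {
        C newCluster(B builder);
    }

    static abstract class AbstractBuilder<C extends ICluster, B extends AbstractBuilder<C, B>>
    {
        private final Factory<C, B> factory;
        private int nodeCount;

        AbstractBuilder(Factory<C, B> factory)
        {
            this.factory = factory;
        }

        @SuppressWarnings("unchecked")
        public B withNodes(int nodeCount)
        {
            this.nodeCount = nodeCount;
            return (B) this;
        }

        // The factory reads whatever state it needs through getters like this one.
        public int getNodeCount()
        {
            return nodeCount;
        }

        @SuppressWarnings("unchecked")
        public C createWithoutStarting()
        {
            // Instead of passing (root, version, configs, class loader) individually,
            // the whole builder is handed to the factory.
            return factory.newCluster((B) this);
        }
    }
}
{code}

The revert restores the previous signature, where the factory receives the individual pieces (root, version, configs, shared class loader) directly.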


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15742) Cassandra-Stress : Performance degraded with Cassandra on Single node cluster

2020-04-20 Thread Jon Meredith (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jon Meredith updated CASSANDRA-15742:
-
Resolution: Invalid
Status: Resolved  (was: Triage Needed)

Thanks for sharing your benchmarking results. Cassandra does not make any 
guarantees of relative performance to other systems, so this is not a bug as we 
would define it.

JIRA is a place where you can record any defects you discover with Cassandra or 
ideas for possible improvement. You could try searching for tuning guides in 
books and blog posts and see if you can improve the performance.

If you discover anything you would like to share, you could contact the Users 
mailing list described here http://cassandra.apache.org/community/

> Cassandra-Stress : Performance degraded with Cassandra on Single node cluster
> -
>
> Key: CASSANDRA-15742
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15742
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Boopalan 
>Priority: Normal
>
> Steps to recreate:
>  # I have created RAID 0 with 8 NVMe disks and created a ext4 filesystem 
> (800GB volume)
>  # Replace all default storage /var/lib/cassandra to " /mnt" in 
> cassandra.yaml file.
>  # Run cassandra-stress tool for performance benchmarking, am getting lower 
> op/s than Aerospike NoSQL. 
> SSD Drive is capable of 
> Read Throughput (max MiB/s, 128KiB) : 3.1K
> Read Throughput (max MiB/s, 128KiB) : 1.8K
> Read IOPS (max, Rnd 4KiB) : 467K
> Read Throughput (max, Rnd 4KiB) :65K
> {noformat}
> root@cassandra-master:~# cassandra-stress write n=100 -rate threads=64
>  Stress Settings 
> Command:
>   Type: write
>   Count: 1,000,000
>   No Warmup: false
>   Consistency Level: LOCAL_ONE
>   Target Uncertainty: not applicable
>   Key Size (bytes): 10
>   Counter Increment Distibution: add=fixed(1)
> Rate:
>   Auto: false
>   Thread Count: 64
>   OpsPer Sec: 0
> Population:
>   Sequence: 1..100
>   Order: ARBITRARY
>   Wrap: true
> Insert:
>   Revisits: Uniform:  min=1,max=100
>   Visits: Fixed:  key=1
>   Row Population Ratio: Ratio: divisor=1.00;delegate=Fixed:  key=1
>   Batch Type: not batching
> Columns:
>   Max Columns Per Key: 5
>   Column Names: [C0, C1, C2, C3, C4]
>   Comparator: AsciiType
>   Timestamp: null
>   Variable Column Count: false
>   Slice: false
>   Size Distribution: Fixed:  key=34
>   Count Distribution: Fixed:  key=5
> Errors:
>   Ignore: false
>   Tries: 10
> Log:
>   No Summary: false
>   No Settings: false
>   File: null
>   Interval Millis: 1000
>   Level: NORMAL
> Mode:
>   API: JAVA_DRIVER_NATIVE
>   Connection Style: CQL_PREPARED
>   CQL Version: CQL3
>   Protocol Version: V4
>   Username: null
>   Password: null
>   Auth Provide Class: null
>   Max Pending Per Connection: 128
>   Connections Per Host: 8
>   Compression: NONE
> Node:
>   Nodes: [localhost]
>   Is White List: false
>   Datacenter: null
> Schema:
>   Keyspace: keyspace1
>   Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
>   Replication Strategy Pptions: {replication_factor=1}
>   Table Compression: null
>   Table Compaction Strategy: null
>   Table Compaction Strategy Options: {}
> Transport:
>   factory=org.apache.cassandra.thrift.TFramedTransportFactory; 
> truststore=null; truststore-password=null; keystore=null; 
> keystore-password=null; ssl-protocol=TLS; ssl-alg=SunX509; store-type=JKS; 
> ssl-ciphers=TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA; 
> Port:
>   Native Port: 9042
>   Thrift Port: 9160
>   JMX Port: 7199
> Send To Daemon:
>   *not set*
> Graph:
>   File: null
>   Revision: unknown
>   Title: null
>   Operation: WRITE
> TokenRange:
>   Wrap: false
>   Split Factor: 1
> WARN  19:01:48,641 You listed localhost/0:0:0:0:0:0:0:1:9042 in your contact 
> points, but it wasn't found in the control host's system.peers at startup
> Connected to cluster: Test Cluster, max pending requests per connection 128, 
> max connections per host 8
> Datatacenter: datacenter1; Host: localhost/127.0.0.1; Rack: rack1
> Created keyspaces. Sleeping 1s for propagation.
> Sleeping 2s...
> Warming up WRITE with 5 iterations...
> Running WRITE with 64 threads for 100 iteration
> type       total ops,    op/s,    pk/s,   row/s,    mean,     med,     .95,     .99,    .999,     max,   time,   stderr, errors,  gc: #,  max ms,  sum ms,  sdv ms,      mb
> total,        120763,  120763,  120763,  120763,     0.5,     0.4,     1.0,     2.1,    24.9,    29.1,    1.0,  0.0,      0,      1,      60,      60,       0,    1499
> total,        254564,  133801,  133801,  133801,     0.5,     0.3,     1.0,     1.8,     8.7,    64.5,    2.0,  0.04931,      0,      0,       0,       0,       0,

[jira] [Commented] (CASSANDRA-15739) dtests fix due to cqlsh behavior change

2020-04-20 Thread Dinesh Joshi (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087897#comment-17087897
 ] 

Dinesh Joshi commented on CASSANDRA-15739:
--

Thanks [~mck]. I have addressed your comments.

> dtests fix due to cqlsh behavior change
> ---
>
> Key: CASSANDRA-15739
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15739
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/cqlsh
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Normal
> Attachments: 15623-pipeline.png
>
>
> dtests are failing due to a behavior change in cqlsh that was introduced as 
> part of 15623. This patch fixes the issue.
> ||tests||
> |[cassandra|https://github.com/dineshjoshi/cassandra/tree/15623-fix-tests]|
> |[cassandra-dtests|https://github.com/dineshjoshi/cassandra-dtest-1/tree/15623-fix-tests]|
> |[utests  
> dtests|https://circleci.com/workflow-run/65ae49dd-e52f-4af0-b310-7e09f2c204ea]|
>  !15623-pipeline.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15593) A possible read repair bug

2020-04-20 Thread Alex Petrov (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087888#comment-17087888
 ] 

Alex Petrov commented on CASSANDRA-15593:
-

[~james1] could you answer the question above?

What are the responses from the nodes that make you think that read repair 
isn't triggered? 

It is sort of expected that when you run with local_one against a coordinator 
that owns the range, you won't trigger read repair. 

> A possible read repair bug
> --
>
> Key: CASSANDRA-15593
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15593
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Antonio
>Assignee: Alex Lumpov
>Priority: Normal
>
> Cassandra version: 2.1.15
> I have one DC and 3 nodes.
> 1. 
> {code}
> create KEYSPACE test WITH replication = \{'class': 'NetworkTopologyStrategy', 
> 'DC1':'3' } and durable_writes = 'true';
> {code}
> 2. 
> {code}
> create table test(a int , b int , c int , primary key(a)) with 
> dclocal_read_repair_chance = 1.0;
> {code}
> 3. 
> {code}
> insert one row into the table: insert into test(a, b, c) values (1, 1, 1);
> {code}
>  
> Then remove the sstable on two nodes; the result is below:
> {code}
>     node1: has the correct row 1 1 1
>     node2: doesn't have the row
>     node3: doesn't have the row
> {code}
> 4. Then I select with local_one from each node in turn (un = up, dn = down):
> {code}
>     node1 un,node2 dn,node3 dn: return 1 1 1
>     node1 dn,node2 un,node3 dn: return null
>     node1 dn,node2 dn,node3 dn: return null 
> {code}
>     This proves node1 has the correct row.
> 5. Then I bring all nodes up and use local_quorum to run {{select * from test 
> where a = 1;}}
>     but read repair does not work every time (I tested many times); that's the 
> problem (same in 3.0.15).
>  
> I expect that with dclocal_read_repair_chance = 1.0, every time I read at 
> local_quorum, if any replica's digest doesn't match, read repair will run and 
> repair all nodes.
>  
> I'm not sure whether the problem happens in this code:
> Looking forward to your reply, thanks.
> {code}
> public void response(MessageIn message)
> {
>     resolver.preprocess(message);
>     int n = waitingFor(message)
>           ? recievedUpdater.incrementAndGet(this)
>           : received;
>     if (n >= blockfor && resolver.isDataPresent())
>     {
>         // [reporter's note] this means that once blockfor (rf/2 + 1) responses and a data
>         // response have arrived, the comparison starts without waiting for all responses
>         condition.signalAll();
>         // kick off a background digest comparison if this is a result that (may have) arrived after
>         // the original resolve that get() kicks off as soon as the condition is signaled
>         if (blockfor < endpoints.size() && n == endpoints.size())
>         {
>             TraceState traceState = Tracing.instance.get();
>             if (traceState != null)
>                 traceState.trace("Initiating read-repair");
>             StageManager.getStage(Stage.READ_REPAIR).execute(new AsyncRepairRunner(traceState));
>         }
>     }
> }
> {code}
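
For the reporter's setup (RF = 3, LOCAL_QUORUM, so blockfor = 2 and endpoints = 3), the guard quoted above only kicks off the background digest comparison once every replica has responded, while the read itself is unblocked as soon as blockfor responses arrive. A minimal, hypothetical walk-through of that condition (not Cassandra code):

{code:java}
// Hypothetical walk-through of the quoted condition for RF = 3 and LOCAL_QUORUM.
public class ReadRepairConditionSketch
{
    public static void main(String[] args)
    {
        int blockfor = 2;   // LOCAL_QUORUM with RF = 3
        int endpoints = 3;  // replicas contacted

        for (int n = 1; n <= endpoints; n++)
        {
            boolean backgroundComparison = blockfor < endpoints && n == endpoints;
            System.out.printf("responses=%d -> background digest comparison: %b%n",
                              n, backgroundComparison);
        }
        // Only n == 3 triggers the comparison, while the read itself is already
        // unblocked once n reaches blockfor (2).
    }
}
{code}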



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15694) Statistics upon streaming of entire SSTables in Netstats is wrong

2020-04-20 Thread Dinesh Joshi (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Joshi updated CASSANDRA-15694:
-
Reviewers: Dinesh Joshi

> Statistics upon streaming of entire SSTables in Netstats is wrong
> -
>
> Key: CASSANDRA-15694
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15694
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/nodetool
>Reporter: Stefan Miklosovic
>Assignee: Stefan Miklosovic
>Priority: Normal
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There is a bug in the current code (trunk on 6th April 2020): when we are 
> streaming entire SSTables via CassandraEntireSSTableStreamWriter and 
> CassandraOutgoingFile respectively, the progress is not updated for the individual 
> components of an SSTable, because only the "db" file is counted. That 
> introduces this bug:
>  
> {code:java}
> Mode: NORMAL
> Rebuild 2c0b43f0-735d-11ea-9346-fb0ffe238736
> /127.0.0.2 Sending 19 files, 27664559 bytes total. Already sent 133 
> files, 27664559 bytes total
> 
> /tmp/dtests15682026295742741219/node2/data/distributed_test_keyspace/cf-196b3...
> 
> {code}
> Basically, the number of files to be sent is lower than the number of files already sent.
>  
> The straightforward fix here is to detect when we are streaming entire 
> sstables and, in that case, include all component files from the manifest in the computation. 
>  
> This issue depends on CASSANDRA-15657 because whether we 
> stream entirely or not is determined by a method which is performance sensitive 
> and computed every time. Once CASSANDRA-15657 (hence CASSANDRA-14586) is 
> done, this ticket can be worked on.
>  
> branch with fix is here: 
> [https://github.com/smiklosovic/cassandra/tree/CASSANDRA-15694]
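
A minimal sketch of the accounting issue described above, assuming a hard-coded component list for illustration (this is not the CassandraOutgoingFile implementation):

{code:java}
import java.util.List;

// Illustration only: when an entire SSTable is streamed, every component becomes a
// file on the wire, so the "files to send" figure has to count all components, not
// just the Data.db file. The component names below are typical ones, hard-coded here.
public class NetstatsFileCountSketch
{
    private static final List<String> COMPONENTS =
            List.of("Data.db", "Index.db", "Summary.db", "Filter.db", "Statistics.db", "TOC.txt");

    static int filesToSend(boolean entireSSTable)
    {
        return entireSSTable ? COMPONENTS.size() : 1;
    }

    public static void main(String[] args)
    {
        System.out.println("partial streaming:        " + filesToSend(false) + " file(s) per SSTable");
        System.out.println("entire-sstable streaming: " + filesToSend(true) + " file(s) per SSTable");
    }
}
{code}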



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15694) Statistics upon streaming of entire SSTables in Netstats is wrong

2020-04-20 Thread Dinesh Joshi (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Joshi updated CASSANDRA-15694:
-
Test and Documentation Plan: Unit, dtests, CircleCI/Jenkins
 Status: Patch Available  (was: Open)

> Statistics upon streaming of entire SSTables in Netstats is wrong
> -
>
> Key: CASSANDRA-15694
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15694
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/nodetool
>Reporter: Stefan Miklosovic
>Assignee: Stefan Miklosovic
>Priority: Normal
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There is a bug in the current code (trunk on 6th April 2020): when we are 
> streaming entire SSTables via CassandraEntireSSTableStreamWriter and 
> CassandraOutgoingFile respectively, the progress is not updated for the individual 
> components of an SSTable, because only the "db" file is counted. That 
> introduces this bug:
>  
> {code:java}
> Mode: NORMAL
> Rebuild 2c0b43f0-735d-11ea-9346-fb0ffe238736
> /127.0.0.2 Sending 19 files, 27664559 bytes total. Already sent 133 
> files, 27664559 bytes total
> 
> /tmp/dtests15682026295742741219/node2/data/distributed_test_keyspace/cf-196b3...
> 
> {code}
> Basically, the number of files to be sent is lower than the number of files already sent.
>  
> The straightforward fix here is to detect when we are streaming entire 
> sstables and, in that case, include all component files from the manifest in the computation. 
>  
> This issue depends on CASSANDRA-15657 because whether we 
> stream entirely or not is determined by a method which is performance sensitive 
> and computed every time. Once CASSANDRA-15657 (hence CASSANDRA-14586) is 
> done, this ticket can be worked on.
>  
> branch with fix is here: 
> [https://github.com/smiklosovic/cassandra/tree/CASSANDRA-15694]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15593) A possible read repair bug

2020-04-20 Thread Alex Petrov (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-15593:

Description: 
cassandra version:2.1.15

i have one dc and 3nodes 

1. 
{code}
create KEYSPACE test WITH replication = \{'class': 'NetworkTopologyStrategy', 
'DC1':'3' } and durable_writes = 'true';
{code}

2. 

{code}
create table test(a int , b int , c int , primary key(a)) with 
dclocal_read_repair_chance = 1.0;
{code}

3. 
{code}
insert one row into table test,instert into test(a, b, c) values (1, 1, 1);
{code}
 
and remove sstable on two nodes and result below:
{code}
    node1:have correct row 1 1 1

    node2:doesn't have rf

    node3:doesn't have rf
{code}
4. and i use local_one select one by one like this:
{code}
    node1 un,node2 dn,node3 dn:return 1 1 1

    node1 dn,node2 un,node3 dn:return null

    node1 dn,node2 dn,node3 dn:return null 
{code}
    this prove node1 have correct rf

5. and i let all node un,user local_quarum to {{select , select * from test 
where a = 1;}}

    but the read repair does't work every time(i test many times),that's the 
problem(same in 3.0.15)

 

i hope if i set dclocal_read_repair_chance = 1.0,every time i read by 
local_quarum, if any rf digest does't match,read repair will work,and repair 
all nodes

 

i.m not sure does's the problem happends in this code()

wish for your reply,thanks

{code}
public void response(MessageIn message)
{
    resolver.preprocess(message);
    int n = waitingFor(message)
          ? recievedUpdater.incrementAndGet(this)
          : received;
    if (n >= blockfor && resolver.isDataPresent())
    {
        // [reporter's note] this mean if return responses >= rf/2 +1 and a data response
        // return, it start compare, does't all response
        condition.signalAll();
        // kick off a background digest comparison if this is a result that (may have) arrived after
        // the original resolve that get() kicks off as soon as the condition is signaled
        if (blockfor < endpoints.size() && n == endpoints.size())
        {
            TraceState traceState = Tracing.instance.get();
            if (traceState != null)
                traceState.trace("Initiating read-repair");
            StageManager.getStage(Stage.READ_REPAIR).execute(new AsyncRepairRunner(traceState));
        }
    }
}
{code}

  was:
cassandra version:2.1.15

i have one dc and 3nodes 

1. 
{code}
create KEYSPACE test WITH replication = \{'class': 'NetworkTopologyStrategy', 
'DC1':'3' } and durable_writes = 'true';
{code}

2. 

{code}
create table test(a int , b int , c int , primary key(a)) with 
dclocal_read_repair_chance = 1.0;
{code}

3. 
{code}
insert one row into table test,instert into test(a, b, c) values (1, 1, 1);
{code}
 
and remove sstable on two nodes and result below:
{code}
    node1:have correct row 1 1 1

    node2:doesn't have rf

    node3:doesn't have rf
{code}
4. and i use local_one select one by one like this:
{code}
    node1 un,node2 dn,node3 dn:return 1 1 1

    node1 dn,node2 un,node3 dn:return null

    node1 dn,node2 dn,node3 dn:return null 
{code}
    this prove node1 have correct rf

5. and i let all node un,user local_quarum to select , select * from test where 
a = 1;

    but the read repair does't work every time(i test many times),that's the 
problem(same in 3.0.15)

 

i hope if i set dclocal_read_repair_chance = 1.0,every time i read by 
local_quarum, if any rf digest does't match,read repair will work,and repair 
all nodes

 

i.m not sure does's the problem happends in this code()

wish for your reply,thanks

{code}
public void response(MessageIn message)
{
    resolver.preprocess(message);
    int n = waitingFor(message)
          ? recievedUpdater.incrementAndGet(this)
          : received;
    if (n >= blockfor && resolver.isDataPresent())
    {
        // [reporter's note] this mean if return responses >= rf/2 +1 and a data response
        // return, it start compare, does't all response
        condition.signalAll();
        // kick off a background digest comparison if this is a result that (may have) arrived after
        // the original resolve that get() kicks off as soon as the condition is signaled
        if (blockfor < endpoints.size() && n == endpoints.size())
        {
            TraceState traceState = Tracing.instance.get();
            if (traceState != null)
                traceState.trace("Initiating read-repair");
            StageManager.getStage(Stage.READ_REPAIR).execute(new AsyncRepairRunner(traceState));
        }
    }
}
{code}


> A possible read repair bug
> --
>
> Key: CASSANDRA-15593
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15593
> Project: Cassandra
>  Issue Type: Bug
>

[jira] [Updated] (CASSANDRA-15593) A possible read repair bug

2020-04-20 Thread Alex Petrov (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-15593:

Description: 
cassandra version:2.1.15

i have one dc and 3nodes 

1. 
{code}
create KEYSPACE test WITH replication = \{'class': 'NetworkTopologyStrategy', 
'DC1':'3' } and durable_writes = 'true';
{code}

2. 

{code}
create table test(a int , b int , c int , primary key(a)) with 
dclocal_read_repair_chance = 1.0;
{code}

3. 
{code}
insert one row into table test,instert into test(a, b, c) values (1, 1, 1);
{code}
 
and remove sstable on two nodes and result below:
{code}
    node1:have correct row 1 1 1

    node2:doesn't have rf

    node3:doesn't have rf
{code}
4. and i use local_one select one by one like this:
{code}
    node1 un,node2 dn,node3 dn:return 1 1 1

    node1 dn,node2 un,node3 dn:return null

    node1 dn,node2 dn,node3 dn:return null 
{code}
    this prove node1 have correct rf

5. and i let all node un,user local_quarum to select , select * from test where 
a = 1;

    but the read repair does't work every time(i test many times),that's the 
problem(same in 3.0.15)

 

i hope if i set dclocal_read_repair_chance = 1.0,every time i read by 
local_quarum, if any rf digest does't match,read repair will work,and repair 
all nodes

 

i.m not sure does's the problem happends in this code()

wish for your reply,thanks

{code}
public void response(MessageIn message)
{
    resolver.preprocess(message);
    int n = waitingFor(message)
          ? recievedUpdater.incrementAndGet(this)
          : received;
    if (n >= blockfor && resolver.isDataPresent())
    {
        // [reporter's note] this mean if return responses >= rf/2 +1 and a data response
        // return, it start compare, does't all response
        condition.signalAll();
        // kick off a background digest comparison if this is a result that (may have) arrived after
        // the original resolve that get() kicks off as soon as the condition is signaled
        if (blockfor < endpoints.size() && n == endpoints.size())
        {
            TraceState traceState = Tracing.instance.get();
            if (traceState != null)
                traceState.trace("Initiating read-repair");
            StageManager.getStage(Stage.READ_REPAIR).execute(new AsyncRepairRunner(traceState));
        }
    }
}
{code}

  was:
cassandra version:2.1.15

i have one dc and 3nodes 

1. create KEYSPACE test WITH replication = \{'class': 
'NetworkTopologyStrategy', 'DC1':'3' } and durable_writes = 'true';

2. create table test(a int , b int , c int , primary key(a)) with 
dclocal_read_repair_chance = 1.0;

3. insert one row into table test,instert into test(a, b, c) values (1, 1, 1); 
and remove sstable on two nodes and result below:
{code}
    node1:have correct row 1 1 1

    node2:doesn't have rf

    node3:doesn't have rf
{code}
4. and i use local_one select one by one like this:
{code}
    node1 un,node2 dn,node3 dn:return 1 1 1

    node1 dn,node2 un,node3 dn:return null

    node1 dn,node2 dn,node3 dn:return null 
{code}
    this prove node1 have correct rf

5. and i let all node un,user local_quarum to select , select * from test where 
a = 1;

    but the read repair does't work every time(i test many times),that's the 
problem(same in 3.0.15)

 

i hope if i set dclocal_read_repair_chance = 1.0,every time i read by 
local_quarum, if any rf digest does't match,read repair will work,and repair 
all nodes

 

i.m not sure does's the problem happends in this code()

wish for your reply,thanks

{code}
public void response(MessageIn message)
{
    resolver.preprocess(message);
    int n = waitingFor(message)
          ? recievedUpdater.incrementAndGet(this)
          : received;
    if (n >= blockfor && resolver.isDataPresent())
    {
        // [reporter's note] this mean if return responses >= rf/2 +1 and a data response
        // return, it start compare, does't all response
        condition.signalAll();
        // kick off a background digest comparison if this is a result that (may have) arrived after
        // the original resolve that get() kicks off as soon as the condition is signaled
        if (blockfor < endpoints.size() && n == endpoints.size())
        {
            TraceState traceState = Tracing.instance.get();
            if (traceState != null)
                traceState.trace("Initiating read-repair");
            StageManager.getStage(Stage.READ_REPAIR).execute(new AsyncRepairRunner(traceState));
        }
    }
}
{code}


> A possible read repair bug
> --
>
> Key: CASSANDRA-15593
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15593
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Antonio
>Assignee: Alex Lumpov

[jira] [Updated] (CASSANDRA-15593) A possible read repair bug

2020-04-20 Thread Alex Petrov (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-15593:

Summary: A possible read repair bug  (was: seems reading repair bug)

> A possible read repair bug
> --
>
> Key: CASSANDRA-15593
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15593
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Antonio
>Assignee: Alex Lumpov
>Priority: Normal
>
> Cassandra version: 2.1.15
> I have one DC and 3 nodes.
> 1. create KEYSPACE test WITH replication = \{'class': 
> 'NetworkTopologyStrategy', 'DC1':'3' } and durable_writes = 'true';
> 2. create table test(a int , b int , c int , primary key(a)) with 
> dclocal_read_repair_chance = 1.0;
> 3. insert one row into the table: insert into test(a, b, c) values (1, 1, 
> 1); then remove the sstable on two nodes; the result is below:
> {code}
>     node1: has the correct row 1 1 1
>     node2: doesn't have the row
>     node3: doesn't have the row
> {code}
> 4. Then I select with local_one from each node in turn (un = up, dn = down):
> {code}
>     node1 un,node2 dn,node3 dn: return 1 1 1
>     node1 dn,node2 un,node3 dn: return null
>     node1 dn,node2 dn,node3 dn: return null 
> {code}
>     This proves node1 has the correct row.
> 5. Then I bring all nodes up and use local_quorum to run select * from test 
> where a = 1;
>     but read repair does not work every time (I tested many times); that's the 
> problem (same in 3.0.15).
>  
> I expect that with dclocal_read_repair_chance = 1.0, every time I read at 
> local_quorum, if any replica's digest doesn't match, read repair will run and 
> repair all nodes.
>  
> I'm not sure whether the problem happens in this code:
> Looking forward to your reply, thanks.
> {code}
> public void response(MessageIn message)
> {
>     resolver.preprocess(message);
>     int n = waitingFor(message)
>           ? recievedUpdater.incrementAndGet(this)
>           : received;
>     if (n >= blockfor && resolver.isDataPresent())
>     {
>         // [reporter's note] this means that once blockfor (rf/2 + 1) responses and a data
>         // response have arrived, the comparison starts without waiting for all responses
>         condition.signalAll();
>         // kick off a background digest comparison if this is a result that (may have) arrived after
>         // the original resolve that get() kicks off as soon as the condition is signaled
>         if (blockfor < endpoints.size() && n == endpoints.size())
>         {
>             TraceState traceState = Tracing.instance.get();
>             if (traceState != null)
>                 traceState.trace("Initiating read-repair");
>             StageManager.getStage(Stage.READ_REPAIR).execute(new AsyncRepairRunner(traceState));
>         }
>     }
> }



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15064) Wrong ordering for timeuuid fields

2020-04-20 Thread Jon Meredith (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jon Meredith updated CASSANDRA-15064:
-
Resolution: Won't Fix
Status: Resolved  (was: Open)

> Wrong ordering for timeuuid fields
> --
>
> Key: CASSANDRA-15064
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15064
> Project: Cassandra
>  Issue Type: Bug
>  Components: Cluster/Schema
>Reporter: Andreas Andersen
>Assignee: Jon Meredith
>Priority: Normal
> Attachments: example.cql
>
>
> Hi!
> We're seeing some strange behavior for the ordering of timeuuid fields. They 
> seem to be sorted in the wrong order when the clock_seq_low field in a 
> timeuuid goes from 7f to 80. Consider the following example:
> {noformat}
> cqlsh:test> show version; 
> [cqlsh 5.0.1 | Cassandra 3.11.4 | CQL spec 3.4.4 | Native protocol v4] 
> cqlsh:test> CREATE TABLE t ( 
>     ... partition   int, 
>     ... t   timeuuid, 
>     ... i   int, 
>     ...  
>     ... PRIMARY KEY(partition, t) 
>     ... ) 
>     ... WITH CLUSTERING ORDER BY(t ASC); 
> cqlsh:test> INSERT INTO t(partition, t, i) VALUES(1, 
> 84e2c963-4ef9-11e9-b57e-f0def1d0755e, 1); 
> cqlsh:test> INSERT INTO t(partition, t, i) VALUES(1, 
> 84e2c963-4ef9-11e9-b57f-f0def1d0755e, 2); 
> cqlsh:test> INSERT INTO t(partition, t, i) VALUES(1, 
> 84e2c963-4ef9-11e9-b580-f0def1d0755e, 3); 
> cqlsh:test> INSERT INTO t(partition, t, i) VALUES(1, 
> 84e2c963-4ef9-11e9-b581-f0def1d0755e, 4); 
> cqlsh:test> INSERT INTO t(partition, t, i) VALUES(1, 
> 84e2c963-4ef9-11e9-b582-f0def1d0755e, 5); 
> cqlsh:test> SELECT * FROM t WHERE partition = 1 ORDER BY t ASC; 
>  
>  partition | t    | i 
> ---+--+--- 
>  1 | 84e2c963-4ef9-11e9-b580-f0def1d0755e | 3 
>  1 | 84e2c963-4ef9-11e9-b581-f0def1d0755e | 4 
>  1 | 84e2c963-4ef9-11e9-b582-f0def1d0755e | 5 
>  1 | 84e2c963-4ef9-11e9-b57e-f0def1d0755e | 1 
>  1 | 84e2c963-4ef9-11e9-b57f-f0def1d0755e | 2 
>  
> (5 rows) 
> cqlsh:test>
> {noformat}
> The expected behavior is that the rows are returned in the same order as they 
> were inserted (we inserted them with their clustering key in an ascending 
> order). Instead, the order "wraps" in the middle.
> This issue only arises when the 9th octet (clock_seq_low) in the uuid goes 
> from 7f to 80. A guess would be that the comparison is implemented as a 
> signed integer instead of an unsigned integer, as 0x7f = 127 and 0x80 = -128. 
> According to the RFC, the field should be treated as an unsigned integer: 
> [https://tools.ietf.org/html/rfc4122#section-4.1.2]
> Changing the field from a timeuuid to a uuid gives the expected correct 
> behavior:
> {noformat}
> cqlsh:test> CREATE TABLE t ( 
>     ... partition   int, 
>     ... t   uuid, 
>     ... i   int, 
>     ...  
>     ... PRIMARY KEY(partition, t) 
>     ... ) 
>     ... WITH CLUSTERING ORDER BY(t ASC); 
> cqlsh:test> INSERT INTO t(partition, t, i) VALUES(1, 
> 84e2c963-4ef9-11e9-b57e-f0def1d0755e, 1); 
> cqlsh:test> INSERT INTO t(partition, t, i) VALUES(1, 
> 84e2c963-4ef9-11e9-b57f-f0def1d0755e, 2); 
> cqlsh:test> INSERT INTO t(partition, t, i) VALUES(1, 
> 84e2c963-4ef9-11e9-b580-f0def1d0755e, 3); 
> cqlsh:test> INSERT INTO t(partition, t, i) VALUES(1, 
> 84e2c963-4ef9-11e9-b581-f0def1d0755e, 4); 
> cqlsh:test> INSERT INTO t(partition, t, i) VALUES(1, 
> 84e2c963-4ef9-11e9-b582-f0def1d0755e, 5); 
> cqlsh:test> SELECT * FROM t WHERE partition = 1 ORDER BY t ASC; 
>  
>  partition | t    | i 
> ---+--+--- 
>  1 | 84e2c963-4ef9-11e9-b57e-f0def1d0755e | 1 
>  1 | 84e2c963-4ef9-11e9-b57f-f0def1d0755e | 2 
>  1 | 84e2c963-4ef9-11e9-b580-f0def1d0755e | 3 
>  1 | 84e2c963-4ef9-11e9-b581-f0def1d0755e | 4 
>  1 | 84e2c963-4ef9-11e9-b582-f0def1d0755e | 5 
>  
> (5 rows) 
> cqlsh:test>{noformat}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15714) Support in cassandra-in-jvm-dtest-api for replacing logback with alternate logger

2020-04-20 Thread Jon Meredith (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jon Meredith updated CASSANDRA-15714:
-
  Fix Version/s: 4.0-alpha
  Since Version: 4.0-alpha
Source Control Link: 
https://gitbox.apache.org/repos/asf?p=cassandra-in-jvm-dtest-api.git;a=commit;h=7ddfe52d51639817c6c5be86c0c8e317e33620eba
 Resolution: Fixed
 Status: Resolved  (was: Ready to Commit)

Not sure what version to set here, as JIRA is based on the main project's versions, 
not the dependency version - it is really fixed in 0.0.2.

>  Support in cassandra-in-jvm-dtest-api for replacing logback with alternate 
> logger
> --
>
> Key: CASSANDRA-15714
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15714
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest
>Reporter: Jon Meredith
>Assignee: Jon Meredith
>Priority: Normal
>  Labels: pull-request-available
> Fix For: 4.0-alpha
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Not all forks use logback, and there is an (prematurely) closed ticket 
> indicating that it would be valuable CASSANDRA-13212.
>  
> Add support for making the log file configuration property and log file 
> pathname configurable rather than hard-coding to logback.
>  
> Also had to add 'org.w3c.dom' to the InstanceClassLoader so that log4j2 could 
> load its configuration, but looks like that can be handled with the changes 
> in CASSANDRA-15713



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15698) DOC - Review Download page

2020-04-20 Thread Alex Petrov (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-15698:

Change Category: Operability
 Complexity: Low Hanging Fruit
Component/s: Documentation/Website
   Priority: Low  (was: Normal)
 Status: Open  (was: Triage Needed)

> DOC - Review Download page
> --
>
> Key: CASSANDRA-15698
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15698
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation/Website
>Reporter: Erick Ramirez
>Assignee: Erick Ramirez
>Priority: Low
>
> h2. Background
> With updates to the [Installing 
> Cassandra|http://cassandra.apache.org/doc/latest/getting_started/installing.html]
>  page in CASSANDRA-15466, there's an opportunity to review and tidy up the 
> [Downloading Cassandra|http://cassandra.apache.org/download/] page.
> h2. Scope
> Retire sections of the document relating to C* installation and direct users 
> to the Installation page instead so we don't have to main installation 
> instructions in multiple places.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15699) DOC - Add known installation issues to the Troubleshooting page

2020-04-20 Thread Alex Petrov (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-15699:

Change Category: Operability
 Complexity: Low Hanging Fruit
   Priority: Low  (was: Normal)
 Status: Open  (was: Triage Needed)

> DOC - Add known installation issues to the Troubleshooting page
> ---
>
> Key: CASSANDRA-15699
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15699
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation/Blog
>Reporter: Erick Ramirez
>Assignee: Erick Ramirez
>Priority: Low
>  Labels: docs
>
> h2. Background
> With updates to the [Installing 
> Cassandra|http://cassandra.apache.org/doc/latest/getting_started/installing.html]
>  page in CASSANDRA-15466, we should add known installation issues to the 
> [Troubleshooting|http://cassandra.apache.org/doc/latest/troubleshooting/index.html]
>  page.
> h2. Topics
> * GPG error unavailable public key with Debian installation
> * C* service disabled on systemd distributions
> * C* service does not automatically start on reboot



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15699) DOC - Add known installation issues to the Troubleshooting page

2020-04-20 Thread Alex Petrov (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-15699:

Labels: docs  (was: )

> DOC - Add known installation issues to the Troubleshooting page
> ---
>
> Key: CASSANDRA-15699
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15699
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation/Blog
>Reporter: Erick Ramirez
>Assignee: Erick Ramirez
>Priority: Normal
>  Labels: docs
>
> h2. Background
> With updates to the [Installing 
> Cassandra|http://cassandra.apache.org/doc/latest/getting_started/installing.html]
>  page in CASSANDRA-15466, we should add known installation issues to the 
> [Troubleshooting|http://cassandra.apache.org/doc/latest/troubleshooting/index.html]
>  page.
> h2. Topics
> * GPG error unavailable public key with Debian installation
> * C* service disabled on systemd distributions
> * C* service does not automatically start on reboot



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15699) DOC - Add known installation issues to the Troubleshooting page

2020-04-20 Thread Alex Petrov (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-15699:

Component/s: Documentation/Blog

> DOC - Add known installation issues to the Troubleshooting page
> ---
>
> Key: CASSANDRA-15699
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15699
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation/Blog
>Reporter: Erick Ramirez
>Assignee: Erick Ramirez
>Priority: Normal
>
> h2. Background
> With updates to the [Installing 
> Cassandra|http://cassandra.apache.org/doc/latest/getting_started/installing.html]
>  page in CASSANDRA-15466, we should add known installation issues to the 
> [Troubleshooting|http://cassandra.apache.org/doc/latest/troubleshooting/index.html]
>  page.
> h2. Topics
> * GPG error unavailable public key with Debian installation
> * C* service disabled on systemd distributions
> * C* service does not automatically start on reboot



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15701) Does Cassandra 3.11.3/3.11.5 is affected by CVE-2019-10712 or not ?

2020-04-20 Thread Alex Petrov (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087877#comment-17087877
 ] 

Alex Petrov commented on CASSANDRA-15701:
-

This looks more like a question for the mailing list than a bug report. I'd recommend 
opening an issue once you actually know the impact.

> Does  Cassandra 3.11.3/3.11.5  is affected by CVE-2019-10712 or not ?
> -
>
> Key: CASSANDRA-15701
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15701
> Project: Cassandra
>  Issue Type: Bug
>  Components: Dependencies
>Reporter: wht
>Priority: Normal
>
> Cassandra 3.11.3/3.11.5 relies on jackson-mapper-asl-1.9.13.jar, for which 
> the vulnerability CVE-2019-10172 has been reported 
> ([https://nvd.nist.gov/vuln/detail/CVE-2019-10172]), so I want to know whether it 
> has an impact on Cassandra. Thanks!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15714) Support in cassandra-in-jvm-dtest-api for replacing logback with alternate logger

2020-04-20 Thread Jon Meredith (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jon Meredith updated CASSANDRA-15714:
-
Status: Ready to Commit  (was: Review In Progress)

>  Support in cassandra-in-jvm-dtest-api for replacing logback with alternate 
> logger
> --
>
> Key: CASSANDRA-15714
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15714
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest
>Reporter: Jon Meredith
>Assignee: Jon Meredith
>Priority: Normal
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Not all forks use logback, and there is an (prematurely) closed ticket 
> indicating that it would be valuable CASSANDRA-13212.
>  
> Add support for making the log file configuration property and log file 
> pathname configurable rather than hard-coding to logback.
>  
> Also had to add 'org.w3c.dom' to the InstanceClassLoader so that log4j2 could 
> load its configuration, but looks like that can be handled with the changes 
> in CASSANDRA-15713



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15701) Does Cassandra 3.11.3/3.11.5 is affected by CVE-2019-10712 or not ?

2020-04-20 Thread Alex Petrov (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-15701:

Resolution: Invalid
Status: Resolved  (was: Triage Needed)

> Does  Cassandra 3.11.3/3.11.5  is affected by CVE-2019-10712 or not ?
> -
>
> Key: CASSANDRA-15701
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15701
> Project: Cassandra
>  Issue Type: Bug
>  Components: Dependencies
>Reporter: wht
>Priority: Normal
>
> Cassandra 3.11.3/3.11.5 relies on jackson-mapper-asl-1.9.13.jar, for which 
> the vulnerability CVE-2019-10172 has been reported 
> ([https://nvd.nist.gov/vuln/detail/CVE-2019-10172]), so I want to know whether it 
> has an impact on Cassandra. Thanks!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15642) Inconsistent failure messages on distributed queries

2020-04-20 Thread Alex Petrov (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-15642:

Change Category: Operability
 Complexity: Normal
   Priority: Low  (was: Normal)
 Status: Open  (was: Triage Needed)

> Inconsistent failure messages on distributed queries
> 
>
> Key: CASSANDRA-15642
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15642
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Consistency/Coordination
>Reporter: Kevin Gallardo
>Priority: Low
>
> As a follow up to some exploration I have done for CASSANDRA-15543, I 
> realized the following behavior in both {{ReadCallback}} and 
> {{AbstractWriteHandler}}:
>  - await for responses
>  - when all required number of responses have come back: unblock the wait
>  - when a single failure happens: unblock the wait
>  - when unblocked, look to see if the counter of failures is > 1 and if so 
> return an error message based on the {{failures}} map that's been filled
> Error messages that can result from this behavior can be a ReadTimeout, a 
> ReadFailure, a WriteTimeout or a WriteFailure.
> In case of a Write/ReadFailure, the user will get back an error looking like 
> the following:
> "Failure: Received X responses, and Y failures"
> (if this behavior I describe is incorrect, please correct me)
> This causes a usability problem. Since the handler will fail and throw an 
> exception as soon as 1 failure happens, the error message that is returned to 
> the user may not be accurate.
> (note: I am not entirely sure of the behavior in case of timeouts for now)
> For example, for a request at CL = QUORUM = 3, a failed response may arrive 
> first, then a successful one completes, and another fails. If the exception 
> is thrown fast enough, the error message could say 
>  "Failure: Received 0 response, and 1 failure at CL = 3"
> Which:
> 1. doesn't make a lot of sense because the CL doesn't match the number of 
> results in the message, so you end up thinking "what happened with the rest 
> of the required CL?"
> 2. the information is incorrect. We did receive a successful response, only 
> it came after the initial failure.
> From that logic, I think it is safe to assume that the information returned 
> in the error message cannot be trusted in case of a failure. Only information 
> users should extract out of it is that at least 1 node has failed.
> For a big improvement in usability, the {{ReadCallback}} and 
> {AbstractWriteResponseHandler}} could instead wait for all responses to come 
> back before unblocking the wait, or let it time out. This way, the users 
> will be able to have some trust in the information returned to them.
> Additionally, an error that happens first prevents a timeout to happen 
> because it fails immediately, and so potentially it hides problems with other 
> replicas. If we were to wait for all responses, we might get a timeout; in 
> that case we'd also be able to tell whether failures have happened *before* 
> that timeout, and have a more complete diagnostic where today you can't detect both 
> errors at the same time.
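
A minimal sketch of the unblock-on-first-failure pattern described above (a hypothetical class, not the actual ReadCallback or AbstractWriteResponseHandler), showing why the counts in the error message can be stale:

{code:java}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: the waiter is released on the first failure, so the snapshot of
// counters it reads may not include successful responses that arrive a moment later.
public class FirstFailureUnblockSketch
{
    private final AtomicInteger received = new AtomicInteger();
    private final AtomicInteger failures = new AtomicInteger();
    private final CountDownLatch unblocked = new CountDownLatch(1);
    private final int blockFor;

    FirstFailureUnblockSketch(int blockFor)
    {
        this.blockFor = blockFor;
    }

    void onResponse()
    {
        if (received.incrementAndGet() >= blockFor)
            unblocked.countDown();      // enough responses: release the waiter
    }

    void onFailure()
    {
        failures.incrementAndGet();
        unblocked.countDown();          // a single failure also releases the waiter immediately
    }

    String awaitAndReport() throws InterruptedException
    {
        unblocked.await();
        // Snapshot taken right after unblocking; late successes are not reflected.
        return "Received " + received.get() + " responses, and " + failures.get() + " failures";
    }

    public static void main(String[] args) throws Exception
    {
        FirstFailureUnblockSketch handler = new FirstFailureUnblockSketch(3); // CL requiring 3 responses
        handler.onFailure();                          // first reply is a failure
        System.out.println(handler.awaitAndReport()); // "Received 0 responses, and 1 failures"
        handler.onResponse();                         // a success arriving later is never reported
    }
}
{code}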



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15694) Statistics upon streaming of entire SSTables in Netstats is wrong

2020-04-20 Thread Alex Petrov (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-15694:

 Bug Category: Parent values: Code(13163)Level 1 values: Bug - Unclear 
Impact(13164)
   Complexity: Normal
Discovered By: User Report
 Severity: Low
   Status: Open  (was: Triage Needed)

> Statistics upon streaming of entire SSTables in Netstats is wrong
> -
>
> Key: CASSANDRA-15694
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15694
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/nodetool
>Reporter: Stefan Miklosovic
>Assignee: Stefan Miklosovic
>Priority: Normal
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There is a bug in the current code (trunk on 6th April 2020): when we are 
> streaming entire SSTables via CassandraEntireSSTableStreamWriter and 
> CassandraOutgoingFile respectively, the progress is not updated for the individual 
> components of an SSTable, because only the "db" file is counted. That 
> introduces this bug:
>  
> {code:java}
> Mode: NORMAL
> Rebuild 2c0b43f0-735d-11ea-9346-fb0ffe238736
> /127.0.0.2 Sending 19 files, 27664559 bytes total. Already sent 133 
> files, 27664559 bytes total
> 
> /tmp/dtests15682026295742741219/node2/data/distributed_test_keyspace/cf-196b3...
> 
> {code}
> Basically, the number of files to be sent is lower than the number of files already sent.
>  
> The straightforward fix here is to detect when we are streaming entire 
> sstables and, in that case, include all component files from the manifest in the computation. 
>  
> This issue depends on CASSANDRA-15657 because whether we 
> stream entirely or not is determined by a method which is performance sensitive 
> and computed every time. Once CASSANDRA-15657 (hence CASSANDRA-14586) is 
> done, this ticket can be worked on.
>  
> branch with fix is here: 
> [https://github.com/smiklosovic/cassandra/tree/CASSANDRA-15694]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15568) Message filtering should apply on the inboundSink in In-JVM dtest

2020-04-20 Thread Alex Petrov (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-15568:

Resolution: Duplicate
Status: Resolved  (was: Triage Needed)

> Message filtering should apply on the inboundSink in In-JVM dtest
> -
>
> Key: CASSANDRA-15568
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15568
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Test/dtest
>Reporter: Yifan Cai
>Assignee: Yifan Cai
>Priority: Normal
>
> The message filtering mechanism in the in-jvm dtest helps to simulate network 
> partition/delay. 
> The problem with the current approach, which adds all filters to the 
> {{MessagingService#outboundSink}}, is that a blocking filter prevents the 
> following filters from being evaluated, since only a single thread 
> evaluates them. It also blocks the other outgoing messages. The typical 
> internode messaging pattern is that the coordinator node sends out multiple 
> messages to other nodes upon receiving a query, so the described blocking 
> can happen quite often.
> The problem can be solved by moving the message filtering to the 
> {{MessagingService#inboundSink}}, so that each inbound message is 
> naturally filtered in parallel.
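
A minimal sketch of the threading difference described above, using plain executors as stand-ins for the two sinks (this is not the dtest MessagingService API): a blocking filter on the single outbound thread delays every following message, while per-inbound evaluation lets the delays overlap:

{code:java}
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Predicate;

// Hypothetical model of the two filtering points; not the in-jvm dtest API.
public class FilterSinkSketch
{
    public static void main(String[] args) throws Exception
    {
        Predicate<String> blockingFilter = message -> {
            sleep(500);        // a test filter injecting network delay
            return true;
        };
        List<String> messages = List.of("to node1", "to node2", "to node3");

        // Outbound model: a single thread applies the filter to each message in turn,
        // so the injected delay is paid serially by every message.
        ExecutorService outbound = Executors.newSingleThreadExecutor();
        long t0 = System.nanoTime();
        for (Future<?> f : submitAll(outbound, blockingFilter, messages)) f.get();
        System.out.printf("outbound (single thread): %d ms%n", (System.nanoTime() - t0) / 1_000_000);

        // Inbound model: each receiving instance evaluates the filter independently,
        // so the delays overlap.
        ExecutorService inbound = Executors.newFixedThreadPool(messages.size());
        long t1 = System.nanoTime();
        for (Future<?> f : submitAll(inbound, blockingFilter, messages)) f.get();
        System.out.printf("inbound (per instance):   %d ms%n", (System.nanoTime() - t1) / 1_000_000);

        outbound.shutdown();
        inbound.shutdown();
    }

    private static List<Future<Boolean>> submitAll(ExecutorService pool, Predicate<String> filter, List<String> messages)
    {
        return messages.stream()
                       .map(m -> pool.submit((Callable<Boolean>) () -> filter.test(m)))
                       .toList();
    }

    private static void sleep(long millis)
    {
        try { Thread.sleep(millis); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
{code}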



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15537) 4.0 quality testing: Local Read/Write Path: Upgrade and Diff Test

2020-04-20 Thread Alex Petrov (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-15537:

Change Category: Quality Assurance
 Complexity: Normal
 Status: Open  (was: Triage Needed)

> 4.0 quality testing: Local Read/Write Path: Upgrade and Diff Test
> -
>
> Key: CASSANDRA-15537
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15537
> Project: Cassandra
>  Issue Type: Task
>  Components: Test/dtest
>Reporter: Josh McKenzie
>Assignee: Yifan Cai
>Priority: Normal
> Fix For: 4.0
>
>
> Execution of upgrade and diff tests via cassandra-diff have proven to be one 
> of the most effective approaches toward identifying issues with the local 
> read/write path. These include instances of data loss, data corruption, data 
> resurrection, incorrect responses to queries, incomplete responses, and 
> others. Upgrade and diff tests can be executed concurrent with fault 
> injection (such as host or network failure); as well as during mixed-version 
> scenarios (such as upgrading half of the instances in a cluster, and running 
> upgradesstables on only half of the upgraded instances).
> Upgrade and diff tests are expected to continue through the release cycle, 
> and are a great way for contributors to gain confidence in the correctness of 
> the database under their own workloads.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15581) 4.0 quality testing: Compaction

2020-04-20 Thread Alex Petrov (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-15581:

Change Category: Quality Assurance
 Complexity: Normal
 Status: Open  (was: Triage Needed)

> 4.0 quality testing: Compaction
> ---
>
> Key: CASSANDRA-15581
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15581
> Project: Cassandra
>  Issue Type: Task
>  Components: Test/dtest
>Reporter: Josh McKenzie
>Assignee: Marcus Eriksson
>Priority: Normal
> Fix For: 4.0-rc
>
>
> Alongside the local and distributed read/write paths, we'll also want to 
> validate compaction. CASSANDRA-6696 introduced substantial 
> changes/improvements that require testing (esp. JBOD).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14638) Column result order can change in 'SELECT *' results when upgrading from 2.1 to 3.0 causing response corruption for queries using prepared statements when static col

2020-04-20 Thread Josh McKenzie (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh McKenzie updated CASSANDRA-14638:
--
Component/s: CQL/Interpreter

> Column result order can change in 'SELECT *' results when upgrading from 2.1 
> to 3.0 causing response corruption for queries using prepared statements when 
> static columns are used
> --
>
> Key: CASSANDRA-14638
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14638
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL/Interpreter
> Environment: Single C* node ccm cluster upgraded from C* 2.1.20 to 
> 3.0.17
>Reporter: Andy Tolbert
>Assignee: Aleksey Yeschenko
>Priority: Normal
> Fix For: 3.0.18, 3.11.4, 4.0
>
>
> When performing an upgrade from C* 2.1.20 to 3.0.17 I observed that the order 
> of columns returned from a 'SELECT *' query changes, particularly when static 
> columns are involved.
> This may not seem like much of a problem; however, if using prepared 
> statements, any clients that remain connected during the upgrade may 
> encounter issues consuming results from these queries, as data is reordered 
> and the client is not aware of it.  The result definition is sent in the 
> original prepared statement response, so if the order changes the client has 
> no way of knowing (until C* 4.0 via CASSANDRA-10786) without re-preparing, 
> which is non-trivial as most client drivers cache prepared statements.
> This could lead to reading the wrong values for columns, which could result 
> in some kind of deserialization exception or, if the data types of the 
> switched columns are compatible, the wrong values.  This happens even if the 
> client attempts to retrieve a column value by name (e.g. row.getInt("colx")).
> Unfortunately I don't think there is an easy fix for this.  If the order were 
> changed back to the previous format, you would risk issues for users upgrading 
> from older 3.0 versions.  I think it would be nice to add a note in the NEWS 
> file in the 3.0 upgrade section that describes this issue, and how to work 
> around it (specify all column names of interest explicitly in the query; see 
> the sketch after the reproduction code below).
> Example schema and code to reproduce:
>  
> {noformat}
> create keyspace ks with replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 1};
> create table ks.tbl (p0 text,
>   p1 text,
>   m map<text, text> static,
>   t text,
>   u text static,
>   primary key (p0, p1)
> );
> insert into ks.tbl (p0, p1, m, t, u) values ('p0', 'p1', { 'm0' : 'm1' }, 
> 't', 'u');{noformat}
>  
> When querying with 2.1 you'll observe the following order via cqlsh:
> {noformat}
>  p0 | p1 | m| u | t
> ++--+---+---
>  p0 | p1 | {'m0': 'm1'} | u | t{noformat}
>  
> With 3.0, observe that u and m are transposed:
>  
> {noformat}
>  p0 | p1 | u | m    | t
> ++---+--+---
>  p0 | p1 | u | {'m0': 'm1'} | t{noformat}
>  
>  
> {code:java}
> import com.datastax.driver.core.BoundStatement;
> import com.datastax.driver.core.Cluster;
> import com.datastax.driver.core.ColumnDefinitions;
> import com.datastax.driver.core.PreparedStatement;
> import com.datastax.driver.core.ResultSet;
> import com.datastax.driver.core.Row;
> import com.datastax.driver.core.Session;
> import com.google.common.util.concurrent.Uninterruptibles;
> import java.util.concurrent.TimeUnit;
> public class LiveUpgradeTest {
>   public static void main(String args[]) {
> Cluster cluster = Cluster.builder().addContactPoints("127.0.0.1").build();
> try {
>   Session session = cluster.connect();
>   PreparedStatement p = session.prepare("SELECT * from ks.tbl");
>   BoundStatement bs = p.bind();
>   // continually query every 30 seconds
>   while (true) {
> try {
>   ResultSet r = session.execute(bs);
>   Row row = r.one();
>   int i = 0;
>   // iterate over the result metadata in order printing the
>   // index, name, type, and length of the first row of data.
>   for (ColumnDefinitions.Definition d : r.getColumnDefinitions()) {
> System.out.println(
> i++
> + ": "
> + d.getName()
> + " -> "
> + d.getType()
> + " -> val = "
> + row.getBytesUnsafe(d.getName()).array().length);
>   }
> } catch (Throwable t) {
>   t.printStackTrace();
> } finally {
>   Uninterruptibles.sleepUninterruptibly(30, TimeUnit.SECONDS);
> }
>   }
> } finally {
>   cluster.close();
> }
>   }
> }
> {code}
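> A sketch of the workaround mentioned above, using the same driver API as the 
> reproduction code: list the columns explicitly instead of using SELECT *, so the 
> column order is fixed by the query text rather than silently reordered by the server.
> {code:java}
> // Workaround sketch (reusing the session from the reproduction above): name the
> // columns explicitly so the cached prepared statement's metadata stays valid.
> PreparedStatement p = session.prepare("SELECT p0, p1, m, u, t FROM ks.tbl");
> BoundStatement bs = p.bind();
> Row row = session.execute(bs).one();
> System.out.println(row.getString("u"));   // 'u' is always the fourth column here
> {code}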
> To 

[jira] [Updated] (CASSANDRA-15004) Anti-compaction briefly corrupts sstable state for reads

2020-04-20 Thread Josh McKenzie (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh McKenzie updated CASSANDRA-15004:
--
Component/s: Local/Compaction

> Anti-compaction briefly corrupts sstable state for reads
> 
>
> Key: CASSANDRA-15004
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15004
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Compaction
>Reporter: Blake Eggleston
>Assignee: Benedict Elliott Smith
>Priority: Urgent
> Fix For: 4.0, 3.0.x, 3.11.x
>
>
> Since we use multiple sstable rewriters in anticompaction, the first call to 
> prepareToCommit will remove the original sstables from the tracker view 
> before the other rewriters add their sstables. This creates a brief window 
> where reads can miss data.
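> A purely illustrative sketch of the window (hypothetical types, not the actual 
> tracker/rewriter code): committing rewriters one at a time removes the originals 
> before all replacements are visible, whereas an atomic swap would not.
> {code:java}
> import java.util.List;
> import java.util.Set;
> import java.util.concurrent.CopyOnWriteArraySet;
> 
> final class TrackerSketch
> {
>     final Set<String> liveSSTables = new CopyOnWriteArraySet<>();
> 
>     // Non-atomic: after the first commit the originals are gone but the outputs of
>     // the remaining rewriters are not yet added, so a concurrent read can miss data.
>     void commitOneByOne(List<String> originals, List<List<String>> perRewriterOutputs)
>     {
>         liveSSTables.removeAll(originals);                // first prepareToCommit()
>         for (List<String> outputs : perRewriterOutputs)
>             liveSSTables.addAll(outputs);                 // later rewriters commit afterwards
>     }
> 
>     // Atomic alternative: readers see either the originals or the full replacement set.
>     synchronized void commitAtomically(List<String> originals, List<String> allOutputs)
>     {
>         liveSSTables.addAll(allOutputs);
>         liveSSTables.removeAll(originals);
>     }
> }
> {code}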



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-15739) dtests fix due to cqlsh behavior change

2020-04-20 Thread Michael Semb Wever (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087629#comment-17087629
 ] 

Michael Semb Wever edited comment on CASSANDRA-15739 at 4/20/20, 1:45 PM:
--

Small comment on the dtest patch regarding naming.

Jenkins CI run 
[here|https://ci-cassandra.apache.org/blue/organizations/jenkins/Cassandra-devbranch/detail/Cassandra-devbranch/67/pipeline].

(Had to run with the branch in the thelastpickle fork, as the build scripts don't 
work when the forked repository has a different name.)


was (Author: michaelsembwever):
Small comment on the dtest patch regarding naming.

Jenkins CI run 
[here|https://ci-cassandra.apache.org/blue/organizations/jenkins/Cassandra-devbranch/detail/Cassandra-devbranch/66/pipeline].

> dtests fix due to cqlsh behavior change
> ---
>
> Key: CASSANDRA-15739
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15739
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/cqlsh
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Normal
> Attachments: 15623-pipeline.png
>
>
> dtests are failing due to a behavior change in cqlsh that was introduced as 
> part of 15623. This patch fixes the issue.
> ||tests||
> |[cassandra|https://github.com/dineshjoshi/cassandra/tree/15623-fix-tests]|
> |[cassandra-dtests|https://github.com/dineshjoshi/cassandra-dtest-1/tree/15623-fix-tests]|
> |[utests  
> dtests|https://circleci.com/workflow-run/65ae49dd-e52f-4af0-b310-7e09f2c204ea]|
>  !15623-pipeline.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-15742) Cassandra-Stress : Performance degraded with Cassandra on Single node cluster

2020-04-20 Thread Boopalan (Jira)
Boopalan  created CASSANDRA-15742:
-

 Summary: Cassandra-Stress : Performance degraded with Cassandra on 
Single node cluster
 Key: CASSANDRA-15742
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15742
 Project: Cassandra
  Issue Type: Bug
Reporter: Boopalan 


Steps to recreate:
 # I created RAID 0 with 8 NVMe disks and created an ext4 filesystem (800GB 
volume).
 # Replaced all default storage paths from /var/lib/cassandra to /mnt in the 
cassandra.yaml file.
 # Ran the cassandra-stress tool for performance benchmarking; I am getting lower 
op/s than with Aerospike NoSQL. 

The SSD drive is capable of:

Read Throughput (max MiB/s, 128KiB): 3.1K

Read Throughput (max MiB/s, 128KiB): 1.8K

Read IOPS (max, Rnd 4KiB): 467K

Read Throughput (max, Rnd 4KiB): 65K
{noformat}
root@cassandra-master:~# cassandra-stress write n=100 -rate threads=64
 Stress Settings 
Command:
  Type: write
  Count: 1,000,000
  No Warmup: false
  Consistency Level: LOCAL_ONE
  Target Uncertainty: not applicable
  Key Size (bytes): 10
  Counter Increment Distibution: add=fixed(1)
Rate:
  Auto: false
  Thread Count: 64
  OpsPer Sec: 0
Population:
  Sequence: 1..100
  Order: ARBITRARY
  Wrap: true
Insert:
  Revisits: Uniform:  min=1,max=100
  Visits: Fixed:  key=1
  Row Population Ratio: Ratio: divisor=1.00;delegate=Fixed:  key=1
  Batch Type: not batching
Columns:
  Max Columns Per Key: 5
  Column Names: [C0, C1, C2, C3, C4]
  Comparator: AsciiType
  Timestamp: null
  Variable Column Count: false
  Slice: false
  Size Distribution: Fixed:  key=34
  Count Distribution: Fixed:  key=5
Errors:
  Ignore: false
  Tries: 10
Log:
  No Summary: false
  No Settings: false
  File: null
  Interval Millis: 1000
  Level: NORMAL
Mode:
  API: JAVA_DRIVER_NATIVE
  Connection Style: CQL_PREPARED
  CQL Version: CQL3
  Protocol Version: V4
  Username: null
  Password: null
  Auth Provide Class: null
  Max Pending Per Connection: 128
  Connections Per Host: 8
  Compression: NONE
Node:
  Nodes: [localhost]
  Is White List: false
  Datacenter: null
Schema:
  Keyspace: keyspace1
  Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
  Replication Strategy Pptions: {replication_factor=1}
  Table Compression: null
  Table Compaction Strategy: null
  Table Compaction Strategy Options: {}
Transport:
  factory=org.apache.cassandra.thrift.TFramedTransportFactory; truststore=null; 
truststore-password=null; keystore=null; keystore-password=null; 
ssl-protocol=TLS; ssl-alg=SunX509; store-type=JKS; 
ssl-ciphers=TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA; 
Port:
  Native Port: 9042
  Thrift Port: 9160
  JMX Port: 7199
Send To Daemon:
  *not set*
Graph:
  File: null
  Revision: unknown
  Title: null
  Operation: WRITE
TokenRange:
  Wrap: false
  Split Factor: 1


WARN  19:01:48,641 You listed localhost/0:0:0:0:0:0:0:1:9042 in your contact 
points, but it wasn't found in the control host's system.peers at startup
Connected to cluster: Test Cluster, max pending requests per connection 128, 
max connections per host 8
Datatacenter: datacenter1; Host: localhost/127.0.0.1; Rack: rack1
Created keyspaces. Sleeping 1s for propagation.
Sleeping 2s...
Warming up WRITE with 5 iterations...
Running WRITE with 64 threads for 100 iteration
type       total ops,    op/s,    pk/s,   row/s,    mean,     med,     .95,     
.99,    .999,     max,   time,   stderr, errors,  gc: #,  max ms,  sum ms,  sdv 
ms,      mb
total,        120763,  120763,  120763,  120763,     0.5,     0.4,     1.0,     
2.1,    24.9,    29.1,    1.0,  0.0,      0,      1,      60,      60,      
 0,    1499
total,        254564,  133801,  133801,  133801,     0.5,     0.3,     1.0,     
1.8,     8.7,    64.5,    2.0,  0.04931,      0,      0,       0,       0,      
 0,       0
total,        363914,  109350,  109350,  109350,     0.6,     0.3,     0.9,     
1.7,    20.9,   254.7,    3.0,  0.04089,      0,      1,     251,     251,      
 0,    1470
total,        514894,  150980,  150980,  150980,     0.4,     0.3,     0.8,     
1.6,     4.9,    21.8,    4.0,  0.03878,      0,      1,     107,     107,      
 0,    1466
total,        665446,  150552,  150552,  150552,     0.4,     0.3,     0.8,     
1.6,     4.8,   109.4,    5.0,  0.04428,      0,      0,       0,       0,      
 0,       0
total,        788349,  122903,  122903,  122903,     0.5,     0.3,     1.0,     
1.7,    27.5,   110.4,    6.0,  0.03855,      0,      1,     106,     106,      
 0,    1480
total,        915193,  126844,  126844,  126844,     0.5,     0.3,     1.0,     
1.7,     3.8,   109.4,    7.0,  0.03331,      0,      1,     107,     107,      
 0,    1477
total,       100,  128730,  128730,  128730,     0.5,     0.4,     1.0,     
1.6,     4.3,    20.6,    7.7,  0.03118,      0,      0,       0,       0,      
 0,       0


Results:
Op rate      

[jira] [Updated] (CASSANDRA-14973) Bring v5 driver out of beta, introduce v6 before 4.0 release is cut

2020-04-20 Thread Eduard Tudenhoefner (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eduard Tudenhoefner updated CASSANDRA-14973:

Reviewers: Eduard Tudenhoefner, Sam Tunnicliffe  (was: Sam Tunnicliffe)

> Bring v5 driver out of beta, introduce v6 before 4.0 release is cut
> ---
>
> Key: CASSANDRA-14973
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14973
> Project: Cassandra
>  Issue Type: Task
>Reporter: Alex Petrov
>Assignee: Alex Petrov
>Priority: Urgent
>  Labels: protocolv5
> Fix For: 4.0, 4.0-rc
>
>
> In http://issues.apache.org/jira/browse/CASSANDRA-12142, we’ve introduced 
> Beta flag for v5 protocol. However, up till now, v5 is in beta both in 
> [Cassandra|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/transport/ProtocolVersion.java#L46]
>  and in 
> [java-driver|https://github.com/datastax/java-driver/blob/3.x/driver-core/src/main/java/com/datastax/driver/core/ProtocolVersion.java#L35].
>  
> Before the final 4.0 release is cut, we need to bring v5 out of beta and 
> finalise the native protocol spec, and start bringing all new changes to the 
> v6 protocol, which will be in beta.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15034) cassandra-stress fails to retry user profile insert and query operations

2020-04-20 Thread Dmitry Kropachev (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087675#comment-17087675
 ] 

Dmitry Kropachev commented on CASSANDRA-15034:
--

Dinesh,

This issue exists in all c-s versions; it cannot be environmental.

Please follow this recipe:

1. docker run --name cassandra cassandra:latest

2. cassandra-stress user profile=stress.yaml n=10 'ops(simple=1)' no-warmup 
cl=QUORUM -node 172.17.0.3 -rate threads=10

 

You will get:

at org.apache.cassandra.stress.Operation.error(Operation.java:127) at 
org.apache.cassandra.stress.Operation.error(Operation.java:127) at 
org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:105) at 
org.apache.cassandra.stress.operations.userdefined.SchemaQuery.run(SchemaQuery.java:108)
 at 
org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:466)java.io.IOException:
 Operation x10 on key(s) [0bf1ffced038d586fe38]: Error executing: 
(NoSuchElementException)
 at org.apache.cassandra.stress.Operation.error(Operation.java:127) at 
org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:105) at 
org.apache.cassandra.stress.operations.userdefined.SchemaQuery.run(SchemaQuery.java:108)
 at 
org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:466)com.datastax.driver.core.exceptions.NoHostAvailableException:
 All host(s) tried for query failed (tried: /172.17.0.3:9042 
(com.datastax.driver.core.exceptions.UnavailableException: Not enough replicas 
available for query at consistency QUORUM (2 required but only 1 
alive)))java.util.NoSuchElementExceptionjava.util.NoSuchElementExceptionjava.util.NoSuchElementExceptionjava.util.NoSuchElementExceptionjava.util.NoSuchElementExceptionjava.util.NoSuchElementExceptionjava.util.NoSuchElementExceptionjava.util.NoSuchElementExceptionjava.util.NoSuchElementExceptionjava.io.IOException:
 Operation x10 on key(s) [347f2f5b27831048d6f5]: Error executing: 
(NoSuchElementException)
 at org.apache.cassandra.stress.Operation.error(Operation.java:127) at 
org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:105) at 
org.apache.cassandra.stress.operations.userdefined.SchemaQuery.run(SchemaQuery.java:108)
 at 
org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:466)com.datastax.driver.core.exceptions.NoHostAvailableException:
 All host(s) tried for query failed (tried: /172.17.0.3:9042 
(com.datastax.driver.core.exceptions.UnavailableException: Not enough replicas 
available for query at consistency QUORUM (2 required but only 1 
alive)))java.util.NoSuchElementExceptioncom.datastax.driver.core.exceptions.NoHostAvailableException:
 All host(s) tried for query failed (tried: /172.17.0.3:9042 
(com.datastax.driver.core.exceptions.UnavailableException: Not enough replicas 
available for query at consistency QUORUM (2 required but only 1 
alive)))java.util.NoSuchElementExceptionjava.util.NoSuchElementExceptionjava.util.NoSuchElementExceptionjava.util.NoSuchElementExceptionjava.util.NoSuchElementExceptionjava.util.NoSuchElementExceptionjava.util.NoSuchElementExceptionjava.util.NoSuchElementExceptionjava.util.NoSuchElementExceptionjava.util.NoSuchElementExceptionjava.util.NoSuchElementExceptionjava.util.NoSuchElementExceptionjava.util.NoSuchElementExceptionjava.util.NoSuchElementExceptionjava.util.NoSuchElementExceptionjava.util.NoSuchElementExceptionjava.util.NoSuchElementExceptionjava.io.IOException:
 Operation x10 on key(s) [e52ea483e772a62404b5]: Error executing: 
(NoSuchElementException)
 at org.apache.cassandra.stress.Operation.error(Operation.java:127) at 
org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:105) at 
org.apache.cassandra.stress.operations.userdefined.SchemaQuery.run(SchemaQuery.java:108)
 at 
org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:466)java.io.IOException:
 Operation x10 on key(s) [9ec3a2b7383babcf9d70]: Error executing: 
(NoSuchElementException)
 at org.apache.cassandra.stress.Operation.error(Operation.java:127) at 
org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:105) at 
org.apache.cassandra.stress.operations.userdefined.SchemaQuery.run(SchemaQuery.java:108)
 at org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:466)

> cassandra-stress fails to retry user profile insert and query operations 
> -
>
> Key: CASSANDRA-15034
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15034
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/stress
>Reporter: Shlomi Livne
>Assignee: Dinesh Joshi
>Priority: Normal
> Attachments: 0001-Fix-retry-of-userdefined-operations.patch, 
> stress.yaml
>
>
> cassandra-stress that is run with a user profile 

[jira] [Updated] (CASSANDRA-15739) dtests fix due to cqlsh behavior change

2020-04-20 Thread Michael Semb Wever (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Semb Wever updated CASSANDRA-15739:
---
Reviewers: Jordan West, Michael Semb Wever, Michael Semb Wever  (was: 
Jordan West, Michael Semb Wever)
   Status: Review In Progress  (was: Patch Available)

> dtests fix due to cqlsh behavior change
> ---
>
> Key: CASSANDRA-15739
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15739
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/cqlsh
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Normal
> Attachments: 15623-pipeline.png
>
>
> dtests are failing due to a behavior change in cqlsh that was introduced as 
> part of 15623. This patch fixes the issue.
> ||tests||
> |[cassandra|https://github.com/dineshjoshi/cassandra/tree/15623-fix-tests]|
> |[cassandra-dtests|https://github.com/dineshjoshi/cassandra-dtest-1/tree/15623-fix-tests]|
> |[utests  
> dtests|https://circleci.com/workflow-run/65ae49dd-e52f-4af0-b310-7e09f2c204ea]|
>  !15623-pipeline.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15739) dtests fix due to cqlsh behavior change

2020-04-20 Thread Michael Semb Wever (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087629#comment-17087629
 ] 

Michael Semb Wever commented on CASSANDRA-15739:


Small comment on the dtest patch regarding naming.

Jenkins CI run 
[here|https://ci-cassandra.apache.org/blue/organizations/jenkins/Cassandra-devbranch/detail/Cassandra-devbranch/66/pipeline].

> dtests fix due to cqlsh behavior change
> ---
>
> Key: CASSANDRA-15739
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15739
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/cqlsh
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Normal
> Attachments: 15623-pipeline.png
>
>
> dtests are failing due to a behavior change in cqlsh that was introduced as 
> part of 15623. This patch fixes the issue.
> ||tests||
> |[cassandra|https://github.com/dineshjoshi/cassandra/tree/15623-fix-tests]|
> |[cassandra-dtests|https://github.com/dineshjoshi/cassandra-dtest-1/tree/15623-fix-tests]|
> |[utests  
> dtests|https://circleci.com/workflow-run/65ae49dd-e52f-4af0-b310-7e09f2c204ea]|
>  !15623-pipeline.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15560) Change io.compressor.LZ4Compressor to LZ4SafeDecompressor

2020-04-20 Thread Berenguer Blasi (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087627#comment-17087627
 ] 

Berenguer Blasi commented on CASSANDRA-15560:
-

[~jrwest] do you mind if I take this one?

> Change io.compressor.LZ4Compressor to LZ4SafeDecompressor
> -
>
> Key: CASSANDRA-15560
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15560
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Compression
>Reporter: Jordan West
>Assignee: Jordan West
>Priority: Normal
> Fix For: 4.0, 4.0-rc
>
>
> CASSANDRA-15556 and related tickets showed that LZ4FastDecompressor can crash 
> the JVM and that LZ4SafeDecompressor performs better without the crash risk; it 
> is also not deprecated. We protect ourselves by checksumming the compressed 
> data, but that doesn't mean we should leave deprecated code that can segfault 
> the JVM (providing a potential DDoS vector, among other things) in crucial 
> places like io.compress. 
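> A minimal standalone sketch of the two lz4-java decompressor APIs (this only 
> illustrates the library usage, not the Cassandra io.compress integration):
> {code:java}
> import net.jpountz.lz4.LZ4Compressor;
> import net.jpountz.lz4.LZ4Factory;
> import net.jpountz.lz4.LZ4SafeDecompressor;
> 
> public class Lz4SafeExample
> {
>     public static void main(String[] args)
>     {
>         LZ4Factory factory = LZ4Factory.fastestInstance();
>         LZ4Compressor compressor = factory.fastCompressor();
> 
>         byte[] input = "some data to compress".getBytes();
>         byte[] compressed = new byte[compressor.maxCompressedLength(input.length)];
>         int compressedLen = compressor.compress(input, 0, input.length, compressed, 0, compressed.length);
> 
>         // The safe decompressor is given the compressed length and bounds-checks the
>         // input, so corrupt data raises an exception instead of over-reading memory
>         // (the fast decompressor is given only the expected decompressed length).
>         LZ4SafeDecompressor safe = factory.safeDecompressor();
>         byte[] restored = new byte[input.length];
>         int restoredLen = safe.decompress(compressed, 0, compressedLen, restored, 0);
>         System.out.println(restoredLen + " bytes restored");
>     }
> }
> {code}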



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-15729) Jenkins Test Results Report in plaintext for ASF ML

2020-04-20 Thread Michael Semb Wever (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087602#comment-17087602
 ] 

Michael Semb Wever edited comment on CASSANDRA-15729 at 4/20/20, 11:02 AM:
---

First CI run is 
[here|https://ci-cassandra.apache.org/job/Cassandra-devbranch-dtest/72/]. This 
tests the dtest-large [fix|https://github.com/apache/cassandra-dtest/pull/66] 
and pytest packaging 
[fix|https://github.com/apache/cassandra-builds/compare/master...thelastpickle:mck/jenkins-test-report-format].


was (Author: michaelsembwever):
First CI run is 
[here|https://ci-cassandra.apache.org/job/Cassandra-devbranch-dtest/72/]. This 
tests the dtest-large [fix|https://github.com/apache/cassandra-dtest/pull/66] 
and builds pytest packaging 
[fix|https://github.com/apache/cassandra-builds/compare/master...thelastpickle:mck/jenkins-test-report-format].

> Jenkins Test Results Report in plaintext for ASF ML
> ---
>
> Key: CASSANDRA-15729
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15729
> Project: Cassandra
>  Issue Type: Task
>  Components: Build, CI
>Reporter: Michael Semb Wever
>Assignee: Michael Semb Wever
>Priority: Normal
>  Labels: Jenkins
> Fix For: 4.0-beta
>
>
> The Jenkins pipeline builds now aggregate all test reports.
> For example: 
> - https://ci-cassandra.apache.org/job/Cassandra-trunk/68/testReport/
> - 
> https://ci-cassandra.apache.org/blue/organizations/jenkins/Cassandra-trunk/detail/Cassandra-trunk/68/tests
> But Jenkins can only keep a limited amount of build history, so those links 
> are not permanent, can't be used as references, and don't help for bisecting 
> and blame on regressions (and flakey tests) over a longer period of time.
> The builds@ ML can provide a permanent record of test results. 
> This was first brought up in these two threads: 
> - 
> https://lists.apache.org/thread.html/re8122e4fdd8629e7fbca2abf27d72054b3bc0e3690ece8b8e66f618b%40%3Cdev.cassandra.apache.org%3E
> - 
> https://lists.apache.org/thread.html/ra5f6aeea89546825fe7ccc4a80898c62f8ed57decabf709d81d9c720%40%3Cdev.cassandra.apache.org%3E
> An example plaintext report, to demonstrate feasibility, is available here: 
> https://lists.apache.org/thread.html/r80d13f7af706bf8dfbf2387fab46004c1fbd3917b7bc339c49e69aa8%40%3Cbuilds.cassandra.apache.org%3E
> Hurdles:
>  - the ASF mailing lists won't accept html, attachments, or any message body 
> over 1MB.
>  - packages are used as a differentiator in the final aggregated report. The 
> cqlsh and dtests currently don't specify it. It needs to be added as a 
> "dot-separated" prefix to the testsuite and testcase names (a sketch follows 
> below).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15729) Jenkins Test Results Report in plaintext for ASF ML

2020-04-20 Thread Michael Semb Wever (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087602#comment-17087602
 ] 

Michael Semb Wever commented on CASSANDRA-15729:


First CI run is 
[here|https://ci-cassandra.apache.org/job/Cassandra-devbranch-dtest/72/]. This 
tests the dtest-large [fix|https://github.com/apache/cassandra-dtest/pull/66] 
and builds pytest packaging 
[fix|https://github.com/apache/cassandra-builds/compare/master...thelastpickle:mck/jenkins-test-report-format].

> Jenkins Test Results Report in plaintext for ASF ML
> ---
>
> Key: CASSANDRA-15729
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15729
> Project: Cassandra
>  Issue Type: Task
>  Components: Build, CI
>Reporter: Michael Semb Wever
>Assignee: Michael Semb Wever
>Priority: Normal
>  Labels: Jenkins
> Fix For: 4.0-beta
>
>
> The Jenkins pipeline builds now aggregate all test reports.
> For example: 
> - https://ci-cassandra.apache.org/job/Cassandra-trunk/68/testReport/
> - 
> https://ci-cassandra.apache.org/blue/organizations/jenkins/Cassandra-trunk/detail/Cassandra-trunk/68/tests
> But Jenkins can only keep a limited amount of build history, so those links 
> are not permanent, can't be used as references, and don't help for bisecting 
> and blame on regressions (and flakey tests) over a longer period of time.
> The builds@ ML can provide a permanent record of test results. 
> This was first brought up in these two threads: 
> - 
> https://lists.apache.org/thread.html/re8122e4fdd8629e7fbca2abf27d72054b3bc0e3690ece8b8e66f618b%40%3Cdev.cassandra.apache.org%3E
> - 
> https://lists.apache.org/thread.html/ra5f6aeea89546825fe7ccc4a80898c62f8ed57decabf709d81d9c720%40%3Cdev.cassandra.apache.org%3E
> An example plaintext report, to demonstrate feasibility, is available here: 
> https://lists.apache.org/thread.html/r80d13f7af706bf8dfbf2387fab46004c1fbd3917b7bc339c49e69aa8%40%3Cbuilds.cassandra.apache.org%3E
> Hurdles:
>  - the ASF mailing lists won't accept html, attachments, or any message body 
> over 1MB.
>  - packages are used as a differentiator in the final aggregated report. The 
> cqlsh and dtests currently don't specify it. It needs to be added as a 
> "dot-separated" prefix to the testsuite and testcase name.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra-website] branch master updated: ninja-fix: fix missing icons

2020-04-20 Thread mck
This is an automated email from the ASF dual-hosted git repository.

mck pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/cassandra-website.git


The following commit(s) were added to refs/heads/master by this push:
 new 767c59e  ninja-fix: fix missing icons
767c59e is described below

commit 767c59e7e416b1f46dabcba0c109ea295a3f0c06
Author: mck 
AuthorDate: Mon Apr 20 11:44:47 2020 +0200

ninja-fix: fix missing icons
---
 content/icons/back.gif | Bin 0 -> 42 bytes
 src/icons/back.gif | Bin 0 -> 42 bytes
 2 files changed, 0 insertions(+), 0 deletions(-)

diff --git a/content/icons/back.gif b/content/icons/back.gif
new file mode 100644
index 000..f191b28
Binary files /dev/null and b/content/icons/back.gif differ
diff --git a/src/icons/back.gif b/src/icons/back.gif
new file mode 100644
index 000..f191b28
Binary files /dev/null and b/src/icons/back.gif differ


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra-website] branch master updated: ninja-fix: fix missing icons

2020-04-20 Thread mck
This is an automated email from the ASF dual-hosted git repository.

mck pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/cassandra-website.git


The following commit(s) were added to refs/heads/master by this push:
 new cbd687e  ninja-fix: fix missing icons
cbd687e is described below

commit cbd687e1cd11e0252f4ead7b6ef32a3cddaaa55e
Author: mck 
AuthorDate: Mon Apr 20 11:43:19 2020 +0200

ninja-fix: fix missing icons
---
 content/icons/blank.gif  | Bin 0 -> 42 bytes
 content/icons/folder.gif | Bin 0 -> 42 bytes
 src/icons/blank.gif  | Bin 0 -> 42 bytes
 src/icons/folder.gif | Bin 0 -> 42 bytes
 4 files changed, 0 insertions(+), 0 deletions(-)

diff --git a/content/icons/blank.gif b/content/icons/blank.gif
new file mode 100644
index 000..f191b28
Binary files /dev/null and b/content/icons/blank.gif differ
diff --git a/content/icons/folder.gif b/content/icons/folder.gif
new file mode 100644
index 000..f191b28
Binary files /dev/null and b/content/icons/folder.gif differ
diff --git a/src/icons/blank.gif b/src/icons/blank.gif
new file mode 100644
index 000..f191b28
Binary files /dev/null and b/src/icons/blank.gif differ
diff --git a/src/icons/folder.gif b/src/icons/folder.gif
new file mode 100644
index 000..f191b28
Binary files /dev/null and b/src/icons/folder.gif differ


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15694) Statistics upon streaming of entire SSTables in Netstats is wrong

2020-04-20 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087482#comment-17087482
 ] 

Stefan Miklosovic commented on CASSANDRA-15694:
---

[~djoshi] The PR is prepared here; please let me know if there is anything I should 
do to make this happen.

 

[https://github.com/apache/cassandra/pull/546]

> Statistics upon streaming of entire SSTables in Netstats is wrong
> -
>
> Key: CASSANDRA-15694
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15694
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/nodetool
>Reporter: Stefan Miklosovic
>Assignee: Stefan Miklosovic
>Priority: Normal
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There is a bug in the current code (trunk on 6th April 2020): if we are 
> streaming entire SSTables via CassandraEntireSSTableStreamWriter and 
> CassandraOutgoingFile respectively, the progress of the individual components 
> of an SSTable is not reported, as only the "db" file is counted. That 
> introduces this bug:
>  
> {code:java}
> Mode: NORMAL
> Rebuild 2c0b43f0-735d-11ea-9346-fb0ffe238736
> /127.0.0.2 Sending 19 files, 27664559 bytes total. Already sent 133 
> files, 27664559 bytes total
> 
> /tmp/dtests15682026295742741219/node2/data/distributed_test_keyspace/cf-196b3...
> 
> {code}
> Basically, the number of files to be sent is lower than the number of files 
> already sent.
>  
> The straightforward fix here is to distinguish when we are streaming entire 
> sstables and in that case include all of the files in the manifest in the 
> computation (see the sketch below). 
>  
> This issue depends on CASSANDRA-15657 because the resolution of whether we 
> stream entirely or not is obtained from a method which is performance-sensitive 
> and computed every time. Once CASSANDRA-15657 (hence CASSANDRA-14586) is 
> done, this ticket can be worked on.
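> A purely illustrative sketch of the counting idea (hypothetical names, not the 
> actual Cassandra streaming classes):
> {code:java}
> import java.util.Collection;
> 
> final class OutgoingFileStats
> {
>     // When streaming an entire SSTable every component in the manifest (Data.db,
>     // Index.db, Summary.db, ...) is transferred, so each must be reflected in the
>     // "files to send" figure; otherwise netstats reports fewer files to send than
>     // were actually sent.
>     static int filesToSend(boolean entireSSTable, Collection<String> manifestComponents)
>     {
>         return entireSSTable ? manifestComponents.size() : 1;
>     }
> }
> {code}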
>  
> branch with fix is here: 
> [https://github.com/smiklosovic/cassandra/tree/CASSANDRA-15694]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15665) StreamManager should clearly differentiate between "initiator" and "receiver" sessions

2020-04-20 Thread ZhaoYang (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang updated CASSANDRA-15665:
-
Test and Documentation Plan: CI pending: 
[https://circleci.com/workflow-run/bd115daa-88d4-49f6-b710-fe8a38315cc8]  (was: 
CI pending: 
[https://circleci.com/workflow-run/b8aed041-cb39-4e86-9a8f-189c02a08857])

> StreamManager should clearly differentiate between "initiator" and "receiver" 
> sessions
> --
>
> Key: CASSANDRA-15665
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15665
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Streaming and Messaging
>Reporter: Sergio Bossa
>Assignee: ZhaoYang
>Priority: Normal
> Fix For: 4.0
>
>
> {{StreamManager}} currently does a suboptimal job of differentiating between 
> stream sessions (in the form of {{StreamResultFuture}}) which have been either 
> initiated or "received", for the following reasons:
> 1) Naming is IMO confusing: a "receiver" session could actually both send and 
> receive files, so technically an initiator is also a receiver.
> 2) {{StreamManager#findSession()}} assumes we should first look into 
> "initiator" sessions, then into "receiver" ones: this is a dangerous 
> assumption, in particular for test environments where the same process could 
> work as both an initiator and a receiver.
> I would recommend the following changes:
> 1) Rename "receiver" to "follower" everywhere the former is used.
> 2) Introduce a new flag in {{StreamMessageHeader}} to signal whether the message 
> comes from an initiator or a follower session, in order to correctly 
> differentiate and look up sessions in {{StreamManager}} (a minimal sketch 
> follows below).
> While my arguments above might seem trivial, I believe they will improve 
> clarity and save us from potential bugs/headaches at testing time, and now 
> that we're revamping streaming for 4.0 seems the right time to make such 
> changes.
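> A minimal sketch of the proposed lookup, with hypothetical names (not the actual 
> {{StreamManager}} code):
> {code:java}
> import java.util.Map;
> import java.util.UUID;
> import java.util.concurrent.ConcurrentHashMap;
> 
> final class StreamSessionRegistry
> {
>     private final Map<UUID, Object> initiatorSessions = new ConcurrentHashMap<>();
>     private final Map<UUID, Object> followerSessions = new ConcurrentHashMap<>();
> 
>     // The header flag tells us which side sent the message, so the lookup targets
>     // exactly one map instead of falling back from "initiator" to "receiver". A
>     // message sent by an initiator is handled by our follower session for that plan,
>     // and vice versa, even when one process plays both roles (e.g. in-JVM dtests).
>     Object findSession(UUID planId, boolean sentByInitiator)
>     {
>         return sentByInitiator ? followerSessions.get(planId) : initiatorSessions.get(planId);
>     }
> }
> {code}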



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org