[jira] [Commented] (CASSANDRA-19650) CCM wrongly interprets CASSANDRA_USE_JDK11 for Cassandra 4.x
[ https://issues.apache.org/jira/browse/CASSANDRA-19650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849261#comment-17849261 ]

Jacek Lewandowski commented on CASSANDRA-19650:
-----------------------------------------------

Thank you [~mck] for reviewing and running the tests!

> CCM wrongly interprets CASSANDRA_USE_JDK11 for Cassandra 4.x
> ------------------------------------------------------------
>
>                 Key: CASSANDRA-19650
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-19650
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Build, Test/dtest/python
>            Reporter: Jacek Lewandowski
>            Assignee: Jacek Lewandowski
>            Priority: Normal
>             Fix For: 3.0.x, 3.11.x, 4.0.x, 4.1.x
>
>         Attachments: CASSANDRA-19650_50_84_ci_summary.html, CASSANDRA-19650_50_84_results_details.tar.xz
>
> CCM interprets {{CASSANDRA_USE_JDK11}} only by its existence in the environment rather than by its actual value (true/false).
> I can see two solutions:
> - make it interpret {{CASSANDRA_USE_JDK11}} properly
> - do not take {{CASSANDRA_USE_JDK11}} from the current env into account, and set or unset it automatically when starting a node, based on which Java version was selected

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-19650) CCM wrongly interprets CASSANDRA_USE_JDK11 for Cassandra 4.x
[ https://issues.apache.org/jira/browse/CASSANDRA-19650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jacek Lewandowski updated CASSANDRA-19650:
------------------------------------------
          Since Version: NA
    Source Control Link: https://github.com/riptano/ccm/commit/4aae08061347075f25964db7aebc889719ffc83b
             Resolution: Fixed
                 Status: Resolved  (was: Ready to Commit)
[jira] [Updated] (CASSANDRA-19650) CCM wrongly interprets CASSANDRA_USE_JDK11 for Cassandra 4.x
[ https://issues.apache.org/jira/browse/CASSANDRA-19650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jacek Lewandowski updated CASSANDRA-19650:
------------------------------------------
    Test and Documentation Plan: manual test
                         Status: Patch Available  (was: Open)
[jira] [Updated] (CASSANDRA-19650) CCM wrongly interprets CASSANDRA_USE_JDK11 for Cassandra 4.x
[ https://issues.apache.org/jira/browse/CASSANDRA-19650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jacek Lewandowski updated CASSANDRA-19650:
------------------------------------------
     Bug Category: Parent values: Correctness(12982), Level 1 values: Test Failure(12990)
       Complexity: Low Hanging Fruit
    Discovered By: User Report
    Fix Version/s: NA
         Severity: Low
           Status: Open  (was: Triage Needed)
[jira] [Assigned] (CASSANDRA-19650) CCM wrongly interprets CASSANDRA_USE_JDK11 for Cassandra 4.x
[ https://issues.apache.org/jira/browse/CASSANDRA-19650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jacek Lewandowski reassigned CASSANDRA-19650:
---------------------------------------------
    Assignee: Jacek Lewandowski
[jira] [Created] (CASSANDRA-19650) CCM wrongly interprets CASSANDRA_USE_JDK11 for Cassandra 4.x
Jacek Lewandowski created CASSANDRA-19650:
------------------------------------------
             Summary: CCM wrongly interprets CASSANDRA_USE_JDK11 for Cassandra 4.x
                 Key: CASSANDRA-19650
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-19650
             Project: Cassandra
          Issue Type: Bug
          Components: Build, Test/dtest/python
            Reporter: Jacek Lewandowski

CCM interprets {{CASSANDRA_USE_JDK11}} only by its existence in the environment rather than by its actual value (true/false).

I can see two solutions:
- make it interpret {{CASSANDRA_USE_JDK11}} properly
- do not take {{CASSANDRA_USE_JDK11}} from the current env into account, and set or unset it automatically when starting a node, based on which Java version was selected
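The first of the two solutions amounts to parsing the variable's value instead of testing for its presence. A minimal Python sketch of the difference (the function names and the set of accepted "true" spellings are illustrative, not CCM's actual code):

```python
import os

# Assumed set of "true" spellings for this sketch; CCM's parsing may differ.
_TRUE_VALUES = ('true', 'yes', '1')

def use_jdk11_buggy(env=os.environ):
    # Buggy pattern: only the variable's presence is checked, so
    # CASSANDRA_USE_JDK11=false still enables JDK 11.
    return 'CASSANDRA_USE_JDK11' in env

def use_jdk11_fixed(env=os.environ):
    # Fixed pattern: interpret the variable's actual value.
    return env.get('CASSANDRA_USE_JDK11', 'false').strip().lower() in _TRUE_VALUES

# With the variable explicitly set to "false", the two disagree:
env = {'CASSANDRA_USE_JDK11': 'false'}
assert use_jdk11_buggy(env) is True   # wrongly enables JDK 11
assert use_jdk11_fixed(env) is False  # respects the value
```

The second solution sidesteps parsing entirely: CCM would overwrite the variable in the child process environment based on the Java version it selected itself.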
[jira] [Updated] (CASSANDRA-19636) Fix CCM for Cassandra 5.0 and add arg to the command line which let the user explicitly select JVM
[ https://issues.apache.org/jira/browse/CASSANDRA-19636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jacek Lewandowski updated CASSANDRA-19636:
------------------------------------------
          Fix Version/s: NA
          Since Version: NA
    Source Control Link: https://github.com/riptano/ccm/commit/dade9ece5e0199813e180f7915cc342832a2329e
             Resolution: Fixed
                 Status: Resolved  (was: Ready to Commit)

> Fix CCM for Cassandra 5.0 and add arg to the command line which let the user explicitly select JVM
> --------------------------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-19636
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-19636
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Test/dtest/python
>            Reporter: Jacek Lewandowski
>            Assignee: Jacek Lewandowski
>            Priority: Normal
>             Fix For: NA
>
>         Attachments: CASSANDRA-19636_50_75_ci_summary.html, CASSANDRA-19636_50_75_results_details.tar.xz, CASSANDRA-19636_trunk_76_ci_summary.html, CASSANDRA-19636_trunk_76_results_details.tar.xz
>
> CCM fails to select the right Java version for the Cassandra 5 binary distribution.
> There are also two additional changes proposed here:
> * add a {{--jvm-version}} argument to let the user explicitly select the Java version when starting a node from the command line
> * fail if the {{java}} command available on the {{PATH}} points to a different Java version than the Java distribution defined in {{JAVA_HOME}}, because there is no obvious way for the user to figure out which one is going to be used
[jira] [Updated] (CASSANDRA-19636) Fix CCM for Cassandra 5.0 and add arg to the command line which let the user explicitly select JVM
[ https://issues.apache.org/jira/browse/CASSANDRA-19636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jacek Lewandowski updated CASSANDRA-19636:
------------------------------------------
    Status: Ready to Commit  (was: Review In Progress)
[jira] [Commented] (CASSANDRA-19636) Fix CCM for Cassandra 5.0 and add arg to the command line which let the user explicitly select JVM
[ https://issues.apache.org/jira/browse/CASSANDRA-19636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17847061#comment-17847061 ]

Jacek Lewandowski commented on CASSANDRA-19636:
-----------------------------------------------

[~aweisberg] can I assume that you are ok with this ticket?
[jira] [Commented] (CASSANDRA-19636) Fix CCM for Cassandra 5.0 and add arg to the command line which let the user explicitly select JVM
[ https://issues.apache.org/jira/browse/CASSANDRA-19636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846984#comment-17846984 ]

Jacek Lewandowski commented on CASSANDRA-19636:
-----------------------------------------------

Yes, this is a separate thing. I'll create a ticket for further cleanups.
[jira] [Commented] (CASSANDRA-19636) Fix CCM for Cassandra 5.0 and add arg to the command line which let the user explicitly select JVM
[ https://issues.apache.org/jira/browse/CASSANDRA-19636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846966#comment-17846966 ]

Jacek Lewandowski commented on CASSANDRA-19636:
-----------------------------------------------

This reverts my previous changes, so in terms of automatic selection of the Java version it works mostly as before.

{quote}
* Allowing specifying JDK version as a parameter and then look up the actual JDK location from JAVAX_HOME
{quote}

This is implemented.
[jira] [Updated] (CASSANDRA-19636) Fix CCM for Cassandra 5.0 and add arg to the command line which let the user explicitly select JVM
[ https://issues.apache.org/jira/browse/CASSANDRA-19636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jacek Lewandowski updated CASSANDRA-19636:
------------------------------------------
    Test and Documentation Plan: automatic ccm tests, manual
                         Status: Patch Available  (was: Open)
[jira] [Updated] (CASSANDRA-19636) Fix CCM for Cassandra 5.0 and add arg to the command line which let the user explicitly select JVM
[ https://issues.apache.org/jira/browse/CASSANDRA-19636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jacek Lewandowski updated CASSANDRA-19636:
------------------------------------------
     Bug Category: Parent values: Correctness(12982), Level 1 values: Test Failure(12990)
       Complexity: Low Hanging Fruit
    Discovered By: User Report
         Severity: Low
           Status: Open  (was: Triage Needed)
[jira] [Created] (CASSANDRA-19636) Fix CCM for Cassandra 5.0 and add arg to the command line which let the user explicitly select JVM
Jacek Lewandowski created CASSANDRA-19636:
------------------------------------------
             Summary: Fix CCM for Cassandra 5.0 and add arg to the command line which let the user explicitly select JVM
                 Key: CASSANDRA-19636
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-19636
             Project: Cassandra
          Issue Type: Bug
          Components: Test/dtest/python
            Reporter: Jacek Lewandowski

CCM fails to select the right Java version for the Cassandra 5 binary distribution.

There are also two additional changes proposed here:
* add a {{--jvm-version}} argument to let the user explicitly select the Java version when starting a node from the command line
* fail if the {{java}} command available on the {{PATH}} points to a different Java version than the Java distribution defined in {{JAVA_HOME}}, because there is no obvious way for the user to figure out which one is going to be used
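The second proposed change, failing fast on a PATH/JAVA_HOME mismatch, can be sketched roughly as below. This is a hypothetical illustration, not CCM's implementation; the function name, error message, and the choice of comparing resolved paths (rather than reported `java -version` output) are all assumptions:

```python
import os
import shutil

def resolve_java(environ, which=shutil.which):
    """Return the java executable to use, or raise if the `java` on the
    PATH and the one under JAVA_HOME point to different binaries.
    Hypothetical sketch, not CCM's actual code."""
    java_home = environ.get('JAVA_HOME')
    path_java = which('java')
    if not java_home:
        return path_java  # nothing to cross-check against
    home_java = os.path.join(java_home, 'bin', 'java')
    if path_java and os.path.realpath(path_java) != os.path.realpath(home_java):
        # Refuse to guess: the user cannot easily tell which JVM wins.
        raise RuntimeError(
            'java on PATH (%s) differs from JAVA_HOME java (%s); '
            'set them consistently' % (path_java, home_java))
    return home_java
```

A real implementation would likely compare the versions the two binaries report rather than their paths, since two distinct paths can point at the same JDK release.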
[jira] [Assigned] (CASSANDRA-19636) Fix CCM for Cassandra 5.0 and add arg to the command line which let the user explicitly select JVM
[ https://issues.apache.org/jira/browse/CASSANDRA-19636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jacek Lewandowski reassigned CASSANDRA-19636:
---------------------------------------------
    Assignee: Jacek Lewandowski
[jira] [Updated] (CASSANDRA-19624) ModificationStatement#casInternal leaks RowIterator
[ https://issues.apache.org/jira/browse/CASSANDRA-19624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jacek Lewandowski updated CASSANDRA-19624:
------------------------------------------
    Test and Documentation Plan: regression tests
                         Status: Patch Available  (was: Open)

> ModificationStatement#casInternal leaks RowIterator
> ---------------------------------------------------
>
>                 Key: CASSANDRA-19624
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-19624
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Legacy/Core
>            Reporter: Michael Marshall
>            Assignee: Michael Marshall
>            Priority: Normal
>             Fix For: 4.0.x, 4.1.x, 5.0.x, 5.x
>
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> In the `ModificationStatement` class, the `casInternal` method opens a row iterator without closing it, causing the iterator to leak resources. Here is a link to the relevant code in the `trunk` branch:
> https://github.com/apache/cassandra/blob/a77a2d10b1d247ed920b75df79f982a3b7c6a431/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java#L680-L684
> It seems that `cassandra-3.0.30` has the bug, but `cassandra-2.2.19` does not have it. Is it correct to target `cassandra-3.0.30`?
> What is the best practice for testing this kind of bug fix? It seems like a low-complexity fix. This is my first contribution to the Cassandra community, so any guidance is appreciated. Thanks!
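The leak pattern is easiest to see in miniature. The sketch below is a hedged Python analogy of the bug and its fix: the actual code is Java, where the fix is a try-with-resources around the `RowIterator`; `RowSource` here is a made-up stand-in, not a Cassandra class:

```python
from contextlib import closing

class RowSource:
    """Made-up stand-in for a resource-backed row iterator."""
    def __init__(self, rows):
        self._rows = iter(rows)
        self.closed = False
    def __iter__(self):
        return self._rows
    def close(self):
        self.closed = True

def cas_internal_leaky(source):
    # Leak: the rows are consumed but the source is never closed.
    return list(source)

def cas_internal_fixed(source):
    # Fix: scope the resource so it is closed on every path,
    # analogous to Java's try-with-resources.
    with closing(source) as rows:
        return list(rows)

leaked = RowSource([1, 2, 3])
cas_internal_leaky(leaked)
assert not leaked.closed       # resource left open

handled = RowSource([1, 2, 3])
assert cas_internal_fixed(handled) == [1, 2, 3]
assert handled.closed          # closed even though the body may raise
```

Cassandra detects this class of bug via its leak-detection machinery on unreleased `Ref`-counted resources, which is why regression tests are listed as the test plan.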
[jira] [Updated] (CASSANDRA-19624) ModificationStatement#casInternal leaks RowIterator
[ https://issues.apache.org/jira/browse/CASSANDRA-19624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jacek Lewandowski updated CASSANDRA-19624:
------------------------------------------
    Reviewers: Jacek Lewandowski
[jira] [Updated] (CASSANDRA-19479) Fix type issues and provide tests for type compatibility between 4.1 and 5.0
[ https://issues.apache.org/jira/browse/CASSANDRA-19479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jacek Lewandowski updated CASSANDRA-19479:
------------------------------------------
          Fix Version/s: 4.0.13
                         4.1.5
                         5.0-rc
                         5.1
                         (was: 5.x)
                         (was: 4.0.x)
                         (was: 4.1.x)
                         (was: 5.0.x)
    Source Control Link: https://github.com/apache/cassandra/commit/f92998190ccfc688e22d035318848a2f61987585
             Resolution: Fixed
                 Status: Resolved  (was: Ready to Commit)

> Fix type issues and provide tests for type compatibility between 4.1 and 5.0
> ----------------------------------------------------------------------------
>
>                 Key: CASSANDRA-19479
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-19479
>             Project: Cassandra
>          Issue Type: Task
>          Components: Legacy/Core, Test/unit
>            Reporter: Jacek Lewandowski
>            Assignee: Jacek Lewandowski
>            Priority: Normal
>             Fix For: 4.0.13, 4.1.5, 5.0-rc, 5.1
>
>         Attachments: ci_summary.html, results_details.tar.xz
>
> This is a part of CASSANDRA-14476 - we should verify whether the type compatibility matrix is upgradable from 4.0 and 4.1 to 5.0, and if not, fix the remaining issues.
>
> The implemented tests verify the following:
> - assumed compatibility between primitive types
> - symmetry of the equals method
> - freezing/unfreezing
> - value compatibility, by using a serializer of one type to deserialize a value serialized using a serializer of another type
> - serialization compatibility, by serializing a row with a column of one type as a column of another type, for simple and complex cells (multicell types)
> - comparison compatibility, by comparing serialized values of one type using a comparator of another type; for multicell types - build rows and compare cell paths of a complex type using a cell path comparator of another complex type
> - verify whether types that are (value/serialization/comparison) compatible in a previous release are still compatible with this release
> - store the compatibility matrix in a compressed JSON file so that we can copy it to future releases to assert backward compatibility (similar approach to LegacySSTableTest)
> - verify that type serializers are different for non-compatible type pairs which use custom comparisons
>
> Additionally:
> - the equals method in {{TupleType}} and {{UserType}} was fixed to be symmetric. Previously, comparing two values gave a different outcome when the operands were inverted.
> - fixed a condition in the comparison method of {{AbstractCompositeType}}
> - ported a fix for composite and dynamic composite types which adds distinct serializers for them, so that the serializers for those types and for {{BytesType}} are considered different; a similar thing was done for {{LexicalUUIDType}} to make its serializer different from the {{UUIDType}} serializer (see https://the-asf.slack.com/archives/CK23JSY2K/p1712060572432959)
> - fixed a problem with the DCT builder - in 5.0+ the {{DynamicCompositeType}} generation has a problem with inverse alias-type mapping, which makes it vulnerable to problems when the same type has two different aliases
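The "value compatibility" check listed above, deserializing one type's bytes with another type's serializer, can be illustrated with text codecs. This sketch uses Python's built-in ASCII and UTF-8 codecs as stand-ins for Cassandra's AsciiType/UTF8Type serializers; it is an analogy for the directional nature of the check, not the ticket's test code:

```python
# Stand-in "serializers": ASCII bytes deserialize fine through the UTF-8
# codec, but not the other way around. Value compatibility is therefore
# directional, which is why it is recorded as a matrix rather than as
# symmetric pairs.

def serialize_ascii(value):
    return value.encode('ascii')

def deserialize_ascii(data):
    return data.decode('ascii')

def serialize_utf8(value):
    return value.encode('utf-8')

def deserialize_utf8(data):
    return data.decode('utf-8')

def value_compatible(serialize, deserialize, samples):
    """True if `deserialize` accepts every value produced by `serialize`."""
    try:
        for v in samples:
            deserialize(serialize(v))
        return True
    except UnicodeError:
        return False

# ASCII values are readable as UTF-8...
assert value_compatible(serialize_ascii, deserialize_utf8, ['hello']) is True
# ...but UTF-8 values are not generally readable as ASCII.
assert value_compatible(serialize_utf8, deserialize_ascii, ['zażółć']) is False
```

Persisting the resulting matrix to a JSON file, as the ticket does, lets a future release assert that no previously compatible pair silently became incompatible.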
[jira] [Updated] (CASSANDRA-19479) Fix type issues and provide tests for type compatibility between 4.1 and 5.0
[ https://issues.apache.org/jira/browse/CASSANDRA-19479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jacek Lewandowski updated CASSANDRA-19479:
------------------------------------------
    Attachment: results_details.tar.xz
                ci_summary.html
[jira] [Updated] (CASSANDRA-19479) Fix type issues and provide tests for type compatibility between 4.1 and 5.0
[ https://issues.apache.org/jira/browse/CASSANDRA-19479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacek Lewandowski updated CASSANDRA-19479: -- Status: Ready to Commit (was: Review In Progress) > Fix type issues and provide tests for type compatibility between 4.1 and 5.0 > > > Key: CASSANDRA-19479 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19479 > Project: Cassandra > Issue Type: Task > Components: Legacy/Core, Test/unit >Reporter: Jacek Lewandowski >Assignee: Jacek Lewandowski >Priority: Normal > Fix For: 4.0.x, 4.1.x, 5.0.x, 5.x > > Attachments: ci_summary.html, results_details.tar.xz > > > This is a part of CASSANDRA-14476 - we should verify whether the type > compatibility matrix is upgradable from 4.0 and 4.1 to 5.0, and if not, fix > the remaining issues. > The implemented tests verify the following: > - assumed compatibility between primitive types > - equals method symmetricity > - freezing/unfreezing > - value compatibility by using a serializer of one type to deserialize a > value serialized using a serializer of another type > - serialization compatibility by serializing a row with a column of one type > as a column of another type for simple and complex cells (multicell types) > - (comparison) compatibility by comparing serialized values of one type using > a comparator of another type; for multicell types - build rows and compare > cell paths of a complex type using a cell path comparator of another complex > type > - verify whether types that are (value/serialization/comparison) compatible > in a previous release are still compatible with this release > - store the compatibility matrix in a compressed JSON file so that we can > copy it to future releases to assert backward compatibility (similar approach > to LegacySSTableTest) > - verify that type serializers are different for non-compatible type pairs > which use custom comparisons > Additionally: > - the equals method in {{TupleType}} and {{UserType}} was 
fixed to be > symmetric. Previously, comparing two values gave a different outcome when > inverted. > - fixed a condition in the comparison method of {{AbstractCompositeType}} > - ported a fix for composite and dynamic composite types which adds > distinct serializers for them so that the serializers for those types and for > {{BytesType}} are considered different; a similar change was made for > {{LexicalUUIDType}} to make its serializer different from the {{UUIDType}} > serializer (see > https://the-asf.slack.com/archives/CK23JSY2K/p1712060572432959) > - fixed a problem with the DCT builder - in 5.0+ the {{DynamicCompositeType}} > generation has a problem with inverse alias-type mapping which makes it > vulnerable to problems when the same type has two different aliases -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
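The value-compatibility check described in the list above (use a serializer of one type to deserialize a value serialized by a serializer of another type) can be sketched in miniature. The `Codec` interface and the two sample codecs below are hypothetical stand-ins for Cassandra's `TypeSerializer`, not the actual test code:

```java
import java.nio.ByteBuffer;

// Minimal sketch of the cross-serializer value-compatibility check: a value
// serialized by one type's codec must round-trip through the other type's
// codec unchanged, otherwise the pair is not value-compatible.
public class ValueCompatSketch {
    interface Codec { byte[] ser(long v); long deser(byte[] b); }

    static final Codec INT32 = new Codec() {
        public byte[] ser(long v) { return ByteBuffer.allocate(4).putInt((int) v).array(); }
        public long deser(byte[] b) { return ByteBuffer.wrap(b).getInt(); }
    };
    static final Codec INT64 = new Codec() {
        public byte[] ser(long v) { return ByteBuffer.allocate(8).putLong(v).array(); }
        public long deser(byte[] b) { return ByteBuffer.wrap(b).getLong(); }
    };

    // "to" is value-compatible with "from" if it decodes every value "from" encodes.
    static boolean valueCompatible(Codec from, Codec to, long sample) {
        try {
            return to.deser(from.ser(sample)) == sample;
        } catch (RuntimeException e) { // e.g. BufferUnderflowException on a short buffer
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(valueCompatible(INT32, INT32, 42)); // true
        System.out.println(valueCompatible(INT32, INT64, 42)); // false: 4 bytes cannot be read as 8
    }
}
```

The real tests additionally record results in a compatibility matrix so a later release can assert it never *loses* a compatibility that an earlier release guaranteed.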
[jira] [Updated] (CASSANDRA-19479) Fix type issues and provide tests for type compatibility between 4.1 and 5.0
[ https://issues.apache.org/jira/browse/CASSANDRA-19479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacek Lewandowski updated CASSANDRA-19479: -- Reviewers: Stefan Miklosovic Status: Review In Progress (was: Patch Available) > Fix type issues and provide tests for type compatibility between 4.1 and 5.0 > > > Key: CASSANDRA-19479 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19479 > Project: Cassandra > Issue Type: Task > Components: Legacy/Core, Test/unit >Reporter: Jacek Lewandowski >Assignee: Jacek Lewandowski >Priority: Normal > Fix For: 4.0.x, 4.1.x, 5.0.x, 5.x > > > This is a part of CASSANDRA-14476 - we should verify whether the type > compatibility matrix is upgradable from 4.0 and 4.1 to 5.0, and if not, fix > the remaining issues. > The implemented tests verify the following: > - assumed compatibility between primitive types > - equals method symmetricity > - freezing/unfreezing > - value compatibility by using a serializer of one type to deserialize a > value serialized using a serializer of another type > - serialization compatibility by serializing a row with a column of one type > as a column of another type for simple and complex cells (multicell types) > - (comparison) compatibility by comparing serialized values of one type using > a comparator of another type; for multicell types - build rows and compare > cell paths of a complex type using a cell path comparator of another complex > type > - verify whether types that are (value/serialization/comparison) compatible > in a previous release are still compatible with this release > - store the compatibility matrix in a compressed JSON file so that we can > copy it to future releases to assert backward compatibility (similar approach > to LegacySSTableTest) > - verify that type serializers are different for non-compatible type pairs > which use custom comparisons > Additionally: > - the equals method in {{TupleType}} and {{UserType}} was fixed to be > symmetric. 
Previously, comparing two values gave a different outcome when > inverted. > - fixed a condition in the comparison method of {{AbstractCompositeType}} > - ported a fix for composite and dynamic composite types which adds > distinct serializers for them so that the serializers for those types and for > {{BytesType}} are considered different; a similar change was made for > {{LexicalUUIDType}} to make its serializer different from the {{UUIDType}} > serializer (see > https://the-asf.slack.com/archives/CK23JSY2K/p1712060572432959) > - fixed a problem with the DCT builder - in 5.0+ the {{DynamicCompositeType}} > generation has a problem with inverse alias-type mapping which makes it > vulnerable to problems when the same type has two different aliases -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-12937) Default setting (yaml) for SSTable compression
[ https://issues.apache.org/jira/browse/CASSANDRA-12937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837230#comment-17837230 ] Jacek Lewandowski commented on CASSANDRA-12937: --- [~samt] let's leave aside the discussion about whether CQL is good or not as a serialization format - that is not what I meant. I meant that we store the raw CQL passed by the user, which is known before processing the transformation. The resolved table settings are available only after the transformation and require rebuilding the statement, as we do in "describe". OK, it looks like the table should simply not store the resolved default but just a "default marker" meaning: use whatever default is set on the node. That is, if we do not provide any compression/compaction/memtable configuration explicitly, we will apply the node-specific defaults from the yaml. > Default setting (yaml) for SSTable compression > -- > > Key: CASSANDRA-12937 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12937 > Project: Cassandra > Issue Type: Improvement > Components: Local/Config >Reporter: Michael Semb Wever >Assignee: Stefan Miklosovic >Priority: Low > Labels: AdventCalendar2021 > Fix For: 5.x > > Time Spent: 8h > Remaining Estimate: 0h > > In many situations the choice of compression for sstables is more relevant to > the disks attached than to the schema and data. > This issue is to add to cassandra.yaml a default value for sstable > compression that new tables will inherit (instead of the defaults found in > {{CompressionParams.DEFAULT}}). > Examples where this can be relevant are filesystems that do on-the-fly > compression (btrfs, zfs) or specific disk configurations or even specific C* > versions (see CASSANDRA-10995 ). > +Additional information for newcomers+ > Some new fields need to be added to {{cassandra.yaml}} to allow specifying > the fields required for defining the default compression parameters.
In > {{DatabaseDescriptor}} a new {{CompressionParams}} field should be added for > the default compression. This field should be initialized in > {{DatabaseDescriptor.applySimpleConfig()}}. At the places where > {{CompressionParams.DEFAULT}} was used, the code should instead call > {{DatabaseDescriptor#getDefaultCompressionParams}}, which should return a > copy of the configured {{CompressionParams}}. > A unit test using {{OverrideConfigurationLoader}} should verify > that the table schema uses the new default when a new table is created (see > CreateTest for an example). -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
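The newcomer notes above describe a yaml-backed default exposed through a getter that returns a defensive copy. A minimal sketch of that pattern is below; the `CompressionParams` stand-in and field names are simplified illustrations, not Cassandra's actual classes:

```java
// Sketch: a configuration-supplied default that replaces a hard-coded
// CompressionParams.DEFAULT. Callers always receive a copy so that mutating
// one table's params can never leak back into the shared default.
public class DefaultsSketch {
    static class CompressionParams {
        final String klass; final int chunkKb;
        CompressionParams(String klass, int chunkKb) { this.klass = klass; this.chunkKb = chunkKb; }
        CompressionParams copy() { return new CompressionParams(klass, chunkKb); }
    }

    // Stand-in for the value parsed from cassandra.yaml in applySimpleConfig().
    private static CompressionParams defaultCompression =
            new CompressionParams("LZ4Compressor", 16);

    // Callers get a defensive copy of the configured default.
    static CompressionParams getDefaultCompressionParams() {
        return defaultCompression.copy();
    }

    public static void main(String[] args) {
        CompressionParams a = getDefaultCompressionParams();
        CompressionParams b = getDefaultCompressionParams();
        System.out.println(a != b);                  // true: distinct copies
        System.out.println(a.klass.equals(b.klass)); // true: same configured values
    }
}
```

Returning a copy rather than the shared instance is what makes a per-node default safe to hand to table creation code that may alter it.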
[jira] [Commented] (CASSANDRA-12937) Default setting (yaml) for SSTable compression
[ https://issues.apache.org/jira/browse/CASSANDRA-12937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837191#comment-17837191 ] Jacek Lewandowski commented on CASSANDRA-12937: --- {quote}Ideally we should store the value that is actually resolved during initial execution on each node so that it can be re-used if/when the transformation is reapplied. {quote} So, if nodes had configurations out-of-sync, they would each end up with local schemas with different compression, compaction, etc. settings? I didn't mean storing defaults as defaults, just materializing all the settings when the table is created. Currently that is not easy because the schema transformation is stored as raw CQL. > Default setting (yaml) for SSTable compression > -- > > Key: CASSANDRA-12937 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12937 > Project: Cassandra > Issue Type: Improvement > Components: Local/Config >Reporter: Michael Semb Wever >Assignee: Stefan Miklosovic >Priority: Low > Labels: AdventCalendar2021 > Fix For: 5.x > > Time Spent: 8h > Remaining Estimate: 0h > > In many situations the choice of compression for sstables is more relevant to > the disks attached than to the schema and data. > This issue is to add to cassandra.yaml a default value for sstable > compression that new tables will inherit (instead of the defaults found in > {{CompressionParams.DEFAULT}}). > Examples where this can be relevant are filesystems that do on-the-fly > compression (btrfs, zfs) or specific disk configurations or even specific C* > versions (see CASSANDRA-10995 ). > +Additional information for newcomers+ > Some new fields need to be added to {{cassandra.yaml}} to allow specifying > the fields required for defining the default compression parameters. In > {{DatabaseDescriptor}} a new {{CompressionParams}} field should be added for > the default compression. This field should be initialized in > {{DatabaseDescriptor.applySimpleConfig()}}.
At the places where > {{CompressionParams.DEFAULT}} was used, the code should instead call > {{DatabaseDescriptor#getDefaultCompressionParams}}, which should return a > copy of the configured {{CompressionParams}}. > A unit test using {{OverrideConfigurationLoader}} should verify > that the table schema uses the new default when a new table is created (see > CreateTest for an example). -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Comment Edited] (CASSANDRA-12937) Default setting (yaml) for SSTable compression
[ https://issues.apache.org/jira/browse/CASSANDRA-12937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837176#comment-17837176 ] Jacek Lewandowski edited comment on CASSANDRA-12937 at 4/15/24 11:01 AM: - The problem with the failing test is probably that the default configuration for compression parameters (and other defaults for table / keyspace creation/alteration) should be part of the schema transformation data and stored in the TCM log. This is not an issue related only to this ticket, because it applies to various settings; for example, even without this PR, a similar test would fail while manipulating the value of the "cassandra.sstable_compression_default" property. We would then have the same problem with the default compaction and memtable options, which are also taken from the configuration. was (Author: jlewandowski): The problem with the failing test is probably that the default configuration for compression parameters (and other defaults for table / keyspace creation/alteration) should be part of the schema transformation data and stored in TCM log. > Default setting (yaml) for SSTable compression > -- > > Key: CASSANDRA-12937 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12937 > Project: Cassandra > Issue Type: Improvement > Components: Local/Config >Reporter: Michael Semb Wever >Assignee: Stefan Miklosovic >Priority: Low > Labels: AdventCalendar2021 > Fix For: 5.x > > Time Spent: 8h > Remaining Estimate: 0h > > In many situations the choice of compression for sstables is more relevant to > the disks attached than to the schema and data. > This issue is to add to cassandra.yaml a default value for sstable > compression that new tables will inherit (instead of the defaults found in > {{CompressionParams.DEFAULT}}). > Examples where this can be relevant are filesystems that do on-the-fly > compression (btrfs, zfs) or specific disk configurations or even specific C* > versions (see CASSANDRA-10995 ).
> +Additional information for newcomers+ > Some new fields need to be added to {{cassandra.yaml}} to allow specifying > the fields required for defining the default compression parameters. In > {{DatabaseDescriptor}} a new {{CompressionParams}} field should be added for > the default compression. This field should be initialized in > {{DatabaseDescriptor.applySimpleConfig()}}. At the places where > {{CompressionParams.DEFAULT}} was used, the code should instead call > {{DatabaseDescriptor#getDefaultCompressionParams}}, which should return a > copy of the configured {{CompressionParams}}. > A unit test using {{OverrideConfigurationLoader}} should verify > that the table schema uses the new default when a new table is created (see > CreateTest for an example). -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-12937) Default setting (yaml) for SSTable compression
[ https://issues.apache.org/jira/browse/CASSANDRA-12937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837176#comment-17837176 ] Jacek Lewandowski commented on CASSANDRA-12937: --- The problem with the failing test is probably that the default configuration for compression parameters (and other defaults for table / keyspace creation/alteration) should be part of the schema transformation data and stored in TCM log. > Default setting (yaml) for SSTable compression > -- > > Key: CASSANDRA-12937 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12937 > Project: Cassandra > Issue Type: Improvement > Components: Local/Config >Reporter: Michael Semb Wever >Assignee: Stefan Miklosovic >Priority: Low > Labels: AdventCalendar2021 > Fix For: 5.x > > Time Spent: 8h > Remaining Estimate: 0h > > In many situations the choice of compression for sstables is more relevant to > the disks attached than to the schema and data. > This issue is to add to cassandra.yaml a default value for sstable > compression that new tables will inherit (instead of the defaults found in > {{CompressionParams.DEFAULT}}. > Examples where this can be relevant are filesystems that do on-the-fly > compression (btrfs, zfs) or specific disk configurations or even specific C* > versions (see CASSANDRA-10995 ). > +Additional information for newcomers+ > Some new fields need to be added to {{cassandra.yaml}} to allow specifying > the field required for defining the default compression parameters. In > {{DatabaseDescriptor}} a new {{CompressionParams}} field should be added for > the default compression. This field should be initialized in > {{DatabaseDescriptor.applySimpleConfig()}}. At the different places where > {{CompressionParams.DEFAULT}} was used the code should call > {{DatabaseDescriptor#getDefaultCompressionParams}} that should return some > copy of configured {{CompressionParams}}. 
> A unit test using {{OverrideConfigurationLoader}} should verify > that the table schema uses the new default when a new table is created (see > CreateTest for an example). -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-18954) Transformations should be pure so that replaying them results in the same outcome regardless of the node state or configuration
[ https://issues.apache.org/jira/browse/CASSANDRA-18954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837173#comment-17837173 ] Jacek Lewandowski commented on CASSANDRA-18954: --- [~samt] - the CASSANDRA-12937 problem is caused by the fact that the transformations are not pure. It is not enough that they are side-effect-free; they also cannot depend on any external properties other than the current cluster state and the stored transformation data. I haven't looked at the fixes you mentioned, but my PR contained just an example fix for one schema transformation (alter table): one of the applied fixes was to not check "enableDropCompactStorage" from the configuration in replay mode, because the configuration is not part of the transformation data. That could lead to different outcomes after replaying a log, in particular to an inability to start the cluster after changing the configuration. > Transformations should be pure so that replaying them results in the same > outcome regardless of the node state or configuration > --- > > Key: CASSANDRA-18954 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18954 > Project: Cassandra > Issue Type: Bug > Components: Transactional Cluster Metadata >Reporter: Jacek Lewandowski >Assignee: Jacek Lewandowski >Priority: Normal > > Discussed on Slack -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
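The purity requirement discussed in the comment above can be sketched as follows. All names here are illustrative stand-ins (not Cassandra's `Transformation` API): the point is only that a replayable transformation must read its decision from its own recorded payload, never from node-local configuration:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: a transformation's outcome may depend only on cluster state plus
// the data stored with the transformation itself, so replaying the metadata
// log yields the same result on every node regardless of local config.
public class TransformSketch {
    // Impure: consults a node-local flag, so replay outcome varies per node.
    static boolean impureAllowsDrop(Map<String, String> nodeConfig) {
        return Boolean.parseBoolean(nodeConfig.getOrDefault("enable_drop_compact_storage", "false"));
    }

    // Pure: the decision taken at original execution time was captured in the
    // transformation's own payload and is simply read back on replay.
    static boolean pureAllowsDrop(Map<String, String> transformPayload) {
        return Boolean.parseBoolean(transformPayload.getOrDefault("drop_allowed", "false"));
    }

    public static void main(String[] args) {
        Map<String, String> payload = new HashMap<>();
        payload.put("drop_allowed", "true"); // recorded when the statement first ran
        // Replaying on a node whose local config has since changed still agrees:
        System.out.println(pureAllowsDrop(payload)); // true, regardless of node config
    }
}
```

This is the same reasoning as the "enableDropCompactStorage" example: the check moved from the node's configuration into the transformation data, so a later config change cannot alter the replayed outcome.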
[jira] [Commented] (CASSANDRA-18954) Transformations should be pure so that replaying them results in the same outcome regardless of the node state or configuration
[ https://issues.apache.org/jira/browse/CASSANDRA-18954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17835787#comment-17835787 ] Jacek Lewandowski commented on CASSANDRA-18954: --- I don't know [~samt] to be honest. My patch is pretty old and I don't know what you guys did in those tickets. > Transformations should be pure so that replaying them results in the same > outcome regardless of the node state or configuration > --- > > Key: CASSANDRA-18954 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18954 > Project: Cassandra > Issue Type: Bug > Components: Transactional Cluster Metadata >Reporter: Jacek Lewandowski >Assignee: Jacek Lewandowski >Priority: Normal > > Discussed on Slack -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Created] (CASSANDRA-19549) Test failure: rebuild_test.TestRebuild.test_resumable_rebuild
Jacek Lewandowski created CASSANDRA-19549: - Summary: Test failure: rebuild_test.TestRebuild.test_resumable_rebuild Key: CASSANDRA-19549 URL: https://issues.apache.org/jira/browse/CASSANDRA-19549 Project: Cassandra Issue Type: Bug Components: Test/dtest/python Reporter: Jacek Lewandowski Interrupted exception thrown during shutdown and caught by {{JVMStabilityInspector}} - does not look serious but we may want to ignore interrupted exception during shutdown. https://app.circleci.com/pipelines/github/jacek-lewandowski/cassandra/1326/workflows/021d350a-4b62-44af-9650-f5a0eb105522/jobs/70413/tests {noformat} failed on teardown with "Unexpected error found in node logs (see stdout for full details). Errors: [[node2] 'ERROR [NettyStreaming-Outbound-/127.0.0.3.7000:3] 2024-04-09 08:32:19,662 JVMStabilityInspector.java:70 - Exception in thread Thread NettyStreaming-Outbound-/127.0.0.3.7000:3,5,NettyStreaming-Outbound-/127.0.0.3.7000] org.apache.cassandra.utils.concurrent.UncheckedInterruptedException: java.lang.InterruptedException at org.apache.cassandra.streaming.async.StreamingMultiplexedChannel$FileStreamTask.acquirePermit(StreamingMultiplexedChannel.java:373) at org.apache.cassandra.streaming.async.StreamingMultiplexedChannel$FileStreamTask.run(StreamingMultiplexedChannel.java:309) at org.apache.cassandra.concurrent.FutureTask$1.call(FutureTask.java:96) at org.apache.cassandra.concurrent.FutureTask.call(FutureTask.java:61) at org.apache.cassandra.concurrent.FutureTask.run(FutureTask.java:71) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.base/java.lang.Thread.run(Thread.java:833) Caused by: java.lang.InterruptedException: null at 
java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1081) at java.base/java.util.concurrent.Semaphore.tryAcquire(Semaphore.java:592) at org.apache.cassandra.streaming.async.StreamingMultiplexedChannel$FileStreamTask.acquirePermit(StreamingMultiplexedChannel.java:356) ... 8 common frames omitted', [node2] 'ERROR [NettyStreaming-Outbound-/127.0.0.3.7000:3] 2024-04-09 08:32:19,664 ExecutionFailure.java:72 - Unexpected error while handling unexpected error org.apache.cassandra.utils.concurrent.UncheckedInterruptedException: java.lang.InterruptedException at org.apache.cassandra.utils.JVMStabilityInspector.inspectThrowable(JVMStabilityInspector.java:142) at org.apache.cassandra.utils.JVMStabilityInspector.inspectThrowable(JVMStabilityInspector.java:170) at org.apache.cassandra.utils.JVMStabilityInspector.inspectThrowable(JVMStabilityInspector.java:89) at org.apache.cassandra.utils.JVMStabilityInspector.uncaughtException(JVMStabilityInspector.java:78) at org.apache.cassandra.concurrent.ExecutionFailure.handle(ExecutionFailure.java:67) at org.apache.cassandra.concurrent.FutureTask.tryFailure(FutureTask.java:86) at org.apache.cassandra.concurrent.FutureTask.run(FutureTask.java:75) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.base/java.lang.Thread.run(Thread.java:833) Caused by: java.lang.InterruptedException: null at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1081) at java.base/java.util.concurrent.Semaphore.tryAcquire(Semaphore.java:592) at org.apache.cassandra.streaming.async.StreamingMultiplexedChannel$FileStreamTask.acquirePermit(StreamingMultiplexedChannel.java:356) at 
org.apache.cassandra.streaming.async.StreamingMultiplexedChannel$FileStreamTask.run(StreamingMultiplexedChannel.java:309) at org.apache.cassandra.concurrent.FutureTask$1.call(FutureTask.java:96) at org.apache.cassandra.concurrent.FutureTask.call(FutureTask.java:61) at org.apache.cassandra.concurrent.FutureTask.run(FutureTask.java:71) ... 4 common frames omitted']" Unexpected error found in node logs (see stdout for full details). Errors: [[node2] 'ERROR [NettyStreaming-Outbound-/127.0.0.3.7000:3] 2024-04-09 08:32:19,662 JVMStabilityInspector.java:70 - Exception in thread Thread[NettyStreaming-Outbound-/127.0.0.3.7000:3,5,NettyStreaming-Outbound-/127.0.0.3.7000]
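The report suggests treating the interrupt as benign during shutdown. A common pattern for that (a sketch only, not the `StreamingMultiplexedChannel` code; the `shuttingDown` flag and `awaitPermit` name are hypothetical) is to check a shutdown flag when interrupted, absorb the interrupt if teardown is in progress, and otherwise re-assert the thread's interrupt status before escalating:

```java
// Sketch: swallow InterruptedException only when we know the process is
// shutting down; otherwise preserve the interrupt and escalate as before.
public class ShutdownSketch {
    static volatile boolean shuttingDown = false;

    static boolean awaitPermit() {
        try {
            Thread.sleep(10); // stand-in for the blocking Semaphore.tryAcquire(...)
            return true;
        } catch (InterruptedException e) {
            if (shuttingDown)
                return false;                   // expected during teardown; not an error
            Thread.currentThread().interrupt(); // preserve interrupt status for callers
            throw new RuntimeException(e);      // genuinely unexpected interrupt
        }
    }

    public static void main(String[] args) {
        shuttingDown = true;
        Thread.currentThread().interrupt(); // simulate shutdown interrupting the task
        System.out.println(awaitPermit());  // false: the interrupt was absorbed quietly
    }
}
```

With this shape, JVMStabilityInspector would never see the shutdown-time interrupt, while interrupts at any other time still surface.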
[jira] [Updated] (CASSANDRA-19549) Test failure: rebuild_test.TestRebuild.test_resumable_rebuild
[ https://issues.apache.org/jira/browse/CASSANDRA-19549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacek Lewandowski updated CASSANDRA-19549: -- Fix Version/s: 5.0.x > Test failure: rebuild_test.TestRebuild.test_resumable_rebuild > - > > Key: CASSANDRA-19549 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19549 > Project: Cassandra > Issue Type: Bug > Components: Test/dtest/python >Reporter: Jacek Lewandowski >Priority: Normal > Fix For: 5.0.x > > > Interrupted exception thrown during shutdown and caught by > {{JVMStabilityInspector}} - does not look serious but we may want to ignore > interrupted exception during shutdown. > https://app.circleci.com/pipelines/github/jacek-lewandowski/cassandra/1326/workflows/021d350a-4b62-44af-9650-f5a0eb105522/jobs/70413/tests > {noformat} > failed on teardown with "Unexpected error found in node logs (see stdout for > full details). > Errors: [[node2] 'ERROR [NettyStreaming-Outbound-/127.0.0.3.7000:3] > 2024-04-09 08:32:19,662 JVMStabilityInspector.java:70 - Exception in thread > Thread > NettyStreaming-Outbound-/127.0.0.3.7000:3,5,NettyStreaming-Outbound-/127.0.0.3.7000] > org.apache.cassandra.utils.concurrent.UncheckedInterruptedException: > java.lang.InterruptedException > at > org.apache.cassandra.streaming.async.StreamingMultiplexedChannel$FileStreamTask.acquirePermit(StreamingMultiplexedChannel.java:373) > at > org.apache.cassandra.streaming.async.StreamingMultiplexedChannel$FileStreamTask.run(StreamingMultiplexedChannel.java:309) > at org.apache.cassandra.concurrent.FutureTask$1.call(FutureTask.java:96) > at org.apache.cassandra.concurrent.FutureTask.call(FutureTask.java:61) > at org.apache.cassandra.concurrent.FutureTask.run(FutureTask.java:71) > at > java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) > at > java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) > at > 
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) > at java.base/java.lang.Thread.run(Thread.java:833) > Caused by: java.lang.InterruptedException: null > at > java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1081) > at > java.base/java.util.concurrent.Semaphore.tryAcquire(Semaphore.java:592) > at > org.apache.cassandra.streaming.async.StreamingMultiplexedChannel$FileStreamTask.acquirePermit(StreamingMultiplexedChannel.java:356) > ... 8 common frames omitted', [node2] 'ERROR > [NettyStreaming-Outbound-/127.0.0.3.7000:3] 2024-04-09 08:32:19,664 > ExecutionFailure.java:72 - Unexpected error while handling unexpected error > org.apache.cassandra.utils.concurrent.UncheckedInterruptedException: > java.lang.InterruptedException > at > org.apache.cassandra.utils.JVMStabilityInspector.inspectThrowable(JVMStabilityInspector.java:142) > at > org.apache.cassandra.utils.JVMStabilityInspector.inspectThrowable(JVMStabilityInspector.java:170) > at > org.apache.cassandra.utils.JVMStabilityInspector.inspectThrowable(JVMStabilityInspector.java:89) > at > org.apache.cassandra.utils.JVMStabilityInspector.uncaughtException(JVMStabilityInspector.java:78) > at > org.apache.cassandra.concurrent.ExecutionFailure.handle(ExecutionFailure.java:67) > at > org.apache.cassandra.concurrent.FutureTask.tryFailure(FutureTask.java:86) > at org.apache.cassandra.concurrent.FutureTask.run(FutureTask.java:75) > at > java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) > at > java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) > at > io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) > at java.base/java.lang.Thread.run(Thread.java:833) > Caused by: java.lang.InterruptedException: null > at > 
java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1081) > at > java.base/java.util.concurrent.Semaphore.tryAcquire(Semaphore.java:592) > at > org.apache.cassandra.streaming.async.StreamingMultiplexedChannel$FileStreamTask.acquirePermit(StreamingMultiplexedChannel.java:356) > at > org.apache.cassandra.streaming.async.StreamingMultiplexedChannel$FileStreamTask.run(StreamingMultiplexedChannel.java:309) > at org.apache.cassandra.concurrent.FutureTask$1.call(FutureTask.java:96) > at org.apache.cassandra.concurrent.FutureTask.call(FutureTask.java:61) > at org.apache.cassandra.concurrent.FutureTask.run(FutureTask.java:71) > ... 4 common frames
[jira] [Updated] (CASSANDRA-19548) IntergerIntervalsTest may fail due to integer overflow
[ https://issues.apache.org/jira/browse/CASSANDRA-19548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacek Lewandowski updated CASSANDRA-19548: -- Summary: IntergerIntervalsTest may fail due to integer overflow (was: IntergetIntervalsTest may fail due to integer overflow) > IntergerIntervalsTest may fail due to integer overflow > -- > > Key: CASSANDRA-19548 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19548 > Project: Cassandra > Issue Type: Bug > Components: Test/unit >Reporter: Jacek Lewandowski >Priority: Normal > Fix For: 4.0.x, 4.1.x, 5.0.x, 5.1 > > > {noformat} > junit.framework.AssertionFailedError > at > org.apache.cassandra.utils.IntegerInterval$Set.add(IntegerInterval.java:138) > at > org.apache.cassandra.utils.IntegerIntervalsTest.lambda$testSetAddMultiThread$5(IntegerIntervalsTest.java:252) > at > java.base/jdk.internal.util.random.RandomSupport$RandomIntsSpliterator.forEachRemaining(RandomSupport.java:1002) > at > java.base/java.util.stream.IntPipeline$Head.forEach(IntPipeline.java:617) > at > org.apache.cassandra.utils.IntegerIntervalsTest.testSetAddMultiThread(IntegerIntervalsTest.java:252) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > {noformat} > https://app.circleci.com/pipelines/github/jacek-lewandowski/cassandra/1326/workflows/021d350a-4b62-44af-9650-f5a0eb105522/jobs/70420/tests -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
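The failure class named in the title, int arithmetic wrapping near `Integer.MAX_VALUE`, can be shown in a few lines. This is a generic illustration of the bug and the usual overflow-safe rewrite, not the `IntegerIntervalsTest` code itself:

```java
// Sketch: the naive midpoint (lo + hi) / 2 overflows when lo + hi exceeds
// Integer.MAX_VALUE; computing lo + (hi - lo) / 2 stays in range for lo <= hi.
public class OverflowSketch {
    static int naiveMid(int lo, int hi) { return (lo + hi) / 2; }

    // Safe: the difference hi - lo cannot overflow when lo <= hi.
    static int safeMid(int lo, int hi) { return lo + (hi - lo) / 2; }

    public static void main(String[] args) {
        int lo = Integer.MAX_VALUE - 10, hi = Integer.MAX_VALUE;
        System.out.println(naiveMid(lo, hi)); // negative: the sum wrapped around
        System.out.println(safeMid(lo, hi));  // the correct midpoint
    }
}
```

Any test that draws random ints near the extremes of the range, as `testSetAddMultiThread` does via a random int stream, will trip this class of bug unless the interval arithmetic is written in the overflow-safe form.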
[jira] [Updated] (CASSANDRA-19548) IntergetIntervalsTest may fail due to integer overflow
[ https://issues.apache.org/jira/browse/CASSANDRA-19548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacek Lewandowski updated CASSANDRA-19548: -- Fix Version/s: 4.0.x 4.1.x 5.0.x 5.1 > IntergetIntervalsTest may fail due to integer overflow > -- > > Key: CASSANDRA-19548 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19548 > Project: Cassandra > Issue Type: Bug > Components: Test/unit >Reporter: Jacek Lewandowski >Priority: Normal > Fix For: 4.0.x, 4.1.x, 5.0.x, 5.1 > > > {noformat} > junit.framework.AssertionFailedError > at > org.apache.cassandra.utils.IntegerInterval$Set.add(IntegerInterval.java:138) > at > org.apache.cassandra.utils.IntegerIntervalsTest.lambda$testSetAddMultiThread$5(IntegerIntervalsTest.java:252) > at > java.base/jdk.internal.util.random.RandomSupport$RandomIntsSpliterator.forEachRemaining(RandomSupport.java:1002) > at > java.base/java.util.stream.IntPipeline$Head.forEach(IntPipeline.java:617) > at > org.apache.cassandra.utils.IntegerIntervalsTest.testSetAddMultiThread(IntegerIntervalsTest.java:252) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > {noformat} > https://app.circleci.com/pipelines/github/jacek-lewandowski/cassandra/1326/workflows/021d350a-4b62-44af-9650-f5a0eb105522/jobs/70420/tests -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Created] (CASSANDRA-19548) IntergetIntervalsTest may fail due to integer overflow
Jacek Lewandowski created CASSANDRA-19548: - Summary: IntergetIntervalsTest may fail due to integer overflow Key: CASSANDRA-19548 URL: https://issues.apache.org/jira/browse/CASSANDRA-19548 Project: Cassandra Issue Type: Bug Components: Test/unit Reporter: Jacek Lewandowski {noformat} junit.framework.AssertionFailedError at org.apache.cassandra.utils.IntegerInterval$Set.add(IntegerInterval.java:138) at org.apache.cassandra.utils.IntegerIntervalsTest.lambda$testSetAddMultiThread$5(IntegerIntervalsTest.java:252) at java.base/jdk.internal.util.random.RandomSupport$RandomIntsSpliterator.forEachRemaining(RandomSupport.java:1002) at java.base/java.util.stream.IntPipeline$Head.forEach(IntPipeline.java:617) at org.apache.cassandra.utils.IntegerIntervalsTest.testSetAddMultiThread(IntegerIntervalsTest.java:252) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) {noformat} https://app.circleci.com/pipelines/github/jacek-lewandowski/cassandra/1326/workflows/021d350a-4b62-44af-9650-f5a0eb105522/jobs/70420/tests -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-18974) Failed test: BatchTest
[ https://issues.apache.org/jira/browse/CASSANDRA-18974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacek Lewandowski updated CASSANDRA-18974: -- Description: [https://app.circleci.com/pipelines/github/jacek-lewandowski/cassandra/1042/workflows/5e568327-53a6-4214-aba8-23dc6ac717a2/jobs/42694/tests] [https://app.circleci.com/pipelines/github/jacek-lewandowski/cassandra/1324/workflows/4ab8e461-b506-4868-aeb4-fcb4ebee89e4/jobs/70028/tests] {{testTableWithClusteringInLoggedBatch}} {noformat} com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra timeout during BATCH write query at consistency LOCAL_ONE (1 replica were required but only 0 acknowledged the write) at com.datastax.driver.core.exceptions.WriteTimeoutException.copy(WriteTimeoutException.java:85) at com.datastax.driver.core.exceptions.WriteTimeoutException.copy(WriteTimeoutException.java:23) at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:35) at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:293) at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:58) at org.apache.cassandra.cql3.BatchTest.sendBatch(BatchTest.java:183) at org.apache.cassandra.cql3.BatchTest.testTableWithClusteringInLoggedBatch(BatchTest.java:129) Caused by: com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra timeout during BATCH write query at consistency LOCAL_ONE (1 replica were required but only 0 acknowledged the write) at com.datastax.driver.core.exceptions.WriteTimeoutException.copy(WriteTimeoutException.java:113) at com.datastax.driver.core.Responses$Error.asException(Responses.java:167) at com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:651) at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1290) at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1208) at 
com.datastax.shaded.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) at com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356) at com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342) at com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335) at com.datastax.shaded.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356) at com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342) at com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335) at com.datastax.shaded.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) at com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356) at com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342) at com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335) at com.datastax.shaded.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) at com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356) at com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342) at com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335) at 
com.datastax.shaded.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:312) at com.datastax.shaded.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:286) at com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356) at com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342) at com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335) at com.datastax.shaded.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) at
[jira] [Commented] (CASSANDRA-19479) Fix type issues and provide tests for type compatibility between 4.1 and 5.0
[ https://issues.apache.org/jira/browse/CASSANDRA-19479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17833432#comment-17833432 ] Jacek Lewandowski commented on CASSANDRA-19479: --- There are some tests running for 4.0 - https://app.circleci.com/pipelines/github/jacek-lewandowski/cassandra/1317/workflows/ec01696c-8075-4f77-834a-c55be2491100 > Fix type issues and provide tests for type compatibility between 4.1 and 5.0 > > > Key: CASSANDRA-19479 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19479 > Project: Cassandra > Issue Type: Task > Components: Legacy/Core, Test/unit >Reporter: Jacek Lewandowski >Assignee: Jacek Lewandowski >Priority: Normal > Fix For: 4.0.13, 4.1.5, 5.0, 5.1 > > > This is a part of CASSANDRA-14476 - we should verify whether the type > compatibility matrix is upgradable from 4.0 and 4.1 to 5.0, and if not, fix > the remaining issues. > The implemented tests verify the following: > - assumed compatibility between primitive types > - equals method symmetricity > - freezing/unfreezing > - value compatibility by using a serializer of one type to deserialize a > value serialized using a serializer of another type > - serialization compatibility by serializing a row with a column of one type > as a column of another type for simple and complex cells (multicell types) > - (comparison) compatibility by comparing serialized values of one type using > a comparator of another type; for multicell types - build rows and compare > cell paths of a complex type using a cell path comparator of another complex > type > - verify whether types that are (value/serialization/comparison) compatible > in a previous release are still compatible with this release > - store the compatibility matrix in a compressed JSON file so that we can > copy it to future releases to assert backward compatibility (similar approach > to LegacySSTableTest) > - verify that type serializers are different for non-compatible type pairs > which use custom 
comparisons > Additionally: > - the equals method in {{TupleType}} and {{UserType}} was fixed to be > symmetric. Previously, comparing two values gave a different outcome when > inverted. > - fixed a condition in comparison method of {{AbstractCompositeType}} > - ported a fix for composite and dynamic composite types which adds a > distinct serializers for them so that the serializers for those types and for > {{BytesType}} are considered different; similar thing was done for > {{LexicalUUIDType}} to make its serializer different to {{UUIDType}} > serializer (see > https://the-asf.slack.com/archives/CK23JSY2K/p1712060572432959) > - fixed a problem with DCT builder - in 5.0+ the {{DynamicCompositeType}} > generation has a problem with inverse alias-type mapping which makes it > vulnerable to problems when the same type has two different aliases -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
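The equals fix described above (TupleType/UserType comparisons giving a different outcome when inverted) is an instance of a general Java pitfall: an equals() that is subtype-aware on one side only. A minimal, hypothetical sketch of how that breaks symmetry (these are illustrative classes, not Cassandra's):

```java
// Illustrative only: Base accepts any Base (including subclasses) in
// equals(), while the subclass Frozen insists on its own exact kind.
// The result is a.equals(b) != b.equals(a), violating the equals contract.
class Base {
    final int v;
    Base(int v) { this.v = v; }
    @Override public boolean equals(Object o) {
        return o instanceof Base && ((Base) o).v == v; // accepts subclasses
    }
    @Override public int hashCode() { return v; }
}

class Frozen extends Base {
    Frozen(int v) { super(v); }
    @Override public boolean equals(Object o) {
        return o instanceof Frozen && ((Frozen) o).v == v; // rejects plain Base
    }
}

public class SymmetrySketch {
    public static void main(String[] args) {
        Base b = new Base(1);
        Frozen f = new Frozen(1);
        System.out.println(b.equals(f)); // true
        System.out.println(f.equals(b)); // false: asymmetric
    }
}
```

Making both sides agree on which attributes participate in equality (here, ignoring or consistently honoring the subtype) restores symmetry.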
[jira] [Updated] (CASSANDRA-19479) Fix type issues and provide tests for type compatibility between 4.1 and 5.0
[ https://issues.apache.org/jira/browse/CASSANDRA-19479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacek Lewandowski updated CASSANDRA-19479: -- Component/s: Legacy/Core > Fix type issues and provide tests for type compatibility between 4.1 and 5.0 > > > Key: CASSANDRA-19479 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19479 > Project: Cassandra > Issue Type: Task > Components: Legacy/Core, Test/unit >Reporter: Jacek Lewandowski >Assignee: Jacek Lewandowski >Priority: Normal > Fix For: 4.0.13, 4.1.5, 5.0, 5.1 > > > This is a part of CASSANDRA-14476 - we should verify whether the type > compatibility matrix is upgradable from 4.0 and 4.1 to 5.0, and if not, fix > the remaining issues. > The implemented tests verify the following: > - assumed compatibility between primitive types > - equals method symmetricity > - freezing/unfreezing > - value compatibility by using a serializer of one type to deserialize a > value serialized using a serializer of another type > - serialization compatibility by serializing a row with a column of one type > as a column of another type for simple and complex cells (multicell types) > - (comparison) compatibility by comparing serialized values of one type using > a comparator of another type; for multicell types - build rows and compare > cell paths of a complex type using a cell path comparator of another complex > type > - verify whether types that are (value/serialization/comparison) compatible > in a previous release are still compatible with this release > - store the compatibility matrix in a compressed JSON file so that we can > copy it to future releases to assert backward compatibility (similar approach > to LegacySSTableTest) > - verify that type serializers are different for non-compatible type pairs > which use custom comparisons > Additionally: > - the equals method in {{TupleType}} and {{UserType}} was fixed to be > symmetric. 
Previously, comparing two values gave a different outcome when > inverted. > - fixed a condition in comparison method of {{AbstractCompositeType}} > - ported a fix for composite and dynamic composite types which adds a > distinct serializers for them so that the serializers for those types and for > {{BytesType}} are considered different; similar thing was done for > {{LexicalUUIDType}} to make its serializer different to {{UUIDType}} > serializer (see > https://the-asf.slack.com/archives/CK23JSY2K/p1712060572432959) > - fixed a problem with DCT builder - in 5.0+ the {{DynamicCompositeType}} > generation has a problem with inverse alias-type mapping which makes it > vulnerable to problems when the same type has two different aliases -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-19479) Fix type issues and provide tests for type compatibility between 4.1 and 5.0
[ https://issues.apache.org/jira/browse/CASSANDRA-19479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacek Lewandowski updated CASSANDRA-19479: -- Summary: Fix type issues and provide tests for type compatibility between 4.1 and 5.0 (was: Provide tests for type compatibility between 4.1 and 5.0) > Fix type issues and provide tests for type compatibility between 4.1 and 5.0 > > > Key: CASSANDRA-19479 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19479 > Project: Cassandra > Issue Type: Task > Components: Test/unit >Reporter: Jacek Lewandowski >Assignee: Jacek Lewandowski >Priority: Normal > Fix For: 4.0.13, 4.1.5, 5.0, 5.1 > > > This is a part of CASSANDRA-14476 - we should verify whether the type > compatibility matrix is upgradable from 4.0 and 4.1 to 5.0, and if not, fix > the remaining issues. > The implemented tests verify the following: > - assumed compatibility between primitive types > - equals method symmetricity > - freezing/unfreezing > - value compatibility by using a serializer of one type to deserialize a > value serialized using a serializer of another type > - serialization compatibility by serializing a row with a column of one type > as a column of another type for simple and complex cells (multicell types) > - (comparison) compatibility by comparing serialized values of one type using > a comparator of another type; for multicell types - build rows and compare > cell paths of a complex type using a cell path comparator of another complex > type > - verify whether types that are (value/serialization/comparison) compatible > in a previous release are still compatible with this release > - store the compatibility matrix in a compressed JSON file so that we can > copy it to future releases to assert backward compatibility (similar approach > to LegacySSTableTest) > - verify that type serializers are different for non-compatible type pairs > which use custom comparisons > Additionally: > - the equals method in 
{{TupleType}} and {{UserType}} was fixed to be > symmetric. Previously, comparing two values gave a different outcome when > inverted. > - fixed a condition in comparison method of {{AbstractCompositeType}} > - ported a fix for composite and dynamic composite types which adds a > distinct serializers for them so that the serializers for those types and for > {{BytesType}} are considered different; similar thing was done for > {{LexicalUUIDType}} to make its serializer different to {{UUIDType}} > serializer (see > https://the-asf.slack.com/archives/CK23JSY2K/p1712060572432959) > - fixed a problem with DCT builder - in 5.0+ the {{DynamicCompositeType}} > generation has a problem with inverse alias-type mapping which makes it > vulnerable to problems when the same type has two different aliases -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-19479) Provide tests for type compatibility between 4.1 and 5.0
[ https://issues.apache.org/jira/browse/CASSANDRA-19479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacek Lewandowski updated CASSANDRA-19479: -- Description: This is a part of CASSANDRA-14476 - we should verify whether the type compatibility matrix is upgradable from 4.0 and 4.1 to 5.0, and if not, fix the remaining issues. The implemented tests verify the following: - assumed compatibility between primitive types - equals method symmetricity - freezing/unfreezing - value compatibility by using a serializer of one type to deserialize a value serialized using a serializer of another type - serialization compatibility by serializing a row with a column of one type as a column of another type for simple and complex cells (multicell types) - (comparison) compatibility by comparing serialized values of one type using a comparator of another type; for multicell types - build rows and compare cell paths of a complex type using a cell path comparator of another complex type - verify whether types that are (value/serialization/comparison) compatible in a previous release are still compatible with this release - store the compatibility matrix in a compressed JSON file so that we can copy it to future releases to assert backward compatibility (similar approach to LegacySSTableTest) - verify that type serializers are different for non-compatible type pairs which use custom comparisons Additionally: - the equals method in {{TupleType}} and {{UserType}} was fixed to be symmetric. Previously, comparing two values gave a different outcome when inverted. 
- fixed a condition in comparison method of {{AbstractCompositeType}} - ported a fix for composite and dynamic composite types which adds a distinct serializers for them so that the serializers for those types and for {{BytesType}} are considered different; similar thing was done for {{LexicalUUIDType}} to make its serializer different to {{UUIDType}} serializer (see https://the-asf.slack.com/archives/CK23JSY2K/p1712060572432959) - fixed a problem with DCT builder - in 5.0+ the {{DynamicCompositeType}} generation has a problem with inverse alias-type mapping which makes it vulnerable to problems when the same type has two different aliases was: This is a part of CASSANDRA-14476 - we should verify whether the type compatibility matrix is upgradable from 4.0 and 4.1 to 5.0, and if not, fix the remaining issues. The implemented tests verify the following: - assumed compatibility between primitive types - equals method symmetricity - freezing/unfreezing - value compatibility by using a serializer of one type to deserialize a value serialized using a serializer of another type - serialization compatibility by serializing a row with a column of one type as a column of another type for simple and complex cells (multicell types) - (comparison) compatibility by comparing serialized values of one type using a comparator of another type; for multicell types - build rows and compare cell paths of a complex type using a cell path comparator of another complex type - verify whether types that are (value/serialization/comparison) compatible in a previous release are still compatible with this release - store the compatibility matrix in a compressed JSON file so that we can copy it to future releases to assert backward compatibility (similar approach to LegacySSTableTest) Additionally, the equals method in TupleType and UserType was fixed to be symmetric. Previously, comparing two values gave a different outcome when inverted. 
> Provide tests for type compatibility between 4.1 and 5.0 > > > Key: CASSANDRA-19479 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19479 > Project: Cassandra > Issue Type: Task > Components: Test/unit >Reporter: Jacek Lewandowski >Assignee: Jacek Lewandowski >Priority: Normal > Fix For: 4.0.13, 4.1.5, 5.0, 5.1 > > > This is a part of CASSANDRA-14476 - we should verify whether the type > compatibility matrix is upgradable from 4.0 and 4.1 to 5.0, and if not, fix > the remaining issues. > The implemented tests verify the following: > - assumed compatibility between primitive types > - equals method symmetricity > - freezing/unfreezing > - value compatibility by using a serializer of one type to deserialize a > value serialized using a serializer of another type > - serialization compatibility by serializing a row with a column of one type > as a column of another type for simple and complex cells (multicell types) > - (comparison) compatibility by comparing serialized values of one type using > a comparator of another type; for multicell types - build rows and compare > cell paths of a complex type using a cell path comparator of another complex > type > - verify
[jira] [Updated] (CASSANDRA-19479) Provide tests for type compatibility between 4.1 and 5.0
[ https://issues.apache.org/jira/browse/CASSANDRA-19479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacek Lewandowski updated CASSANDRA-19479: -- Test and Documentation Plan: regression tests Status: Patch Available (was: In Progress) > Provide tests for type compatibility between 4.1 and 5.0 > > > Key: CASSANDRA-19479 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19479 > Project: Cassandra > Issue Type: Task > Components: Test/unit >Reporter: Jacek Lewandowski >Assignee: Jacek Lewandowski >Priority: Normal > Fix For: 4.0.13, 4.1.5, 5.0, 5.1 > > > This is a part of CASSANDRA-14476 - we should verify whether the type > compatibility matrix is upgradable from 4.0 and 4.1 to 5.0, and if not, fix > the remaining issues. > The implemented tests verify the following: > - assumed compatibility between primitive types > - equals method symmetricity > - freezing/unfreezing > - value compatibility by using a serializer of one type to deserialize a > value serialized using a serializer of another type > - serialization compatibility by serializing a row with a column of one type > as a column of another type for simple and complex cells (multicell types) > - (comparison) compatibility by comparing serialized values of one type using > a comparator of another type; for multicell types - build rows and compare > cell paths of a complex type using a cell path comparator of another complex > type > - verify whether types that are (value/serialization/comparison) compatible > in a previous release are still compatible with this release > - store the compatibility matrix in a compressed JSON file so that we can > copy it to future releases to assert backward compatibility (similar approach > to LegacySSTableTest) > Additionally, the equals method in TupleType and UserType was fixed to be > symmetric. Previously, comparing two values gave a different outcome when > inverted. 
-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-19479) Provide tests for type compatibility between 4.1 and 5.0
[ https://issues.apache.org/jira/browse/CASSANDRA-19479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacek Lewandowski updated CASSANDRA-19479: -- Description: This is a part of CASSANDRA-14476 - we should verify whether the type compatibility matrix is upgradable from 4.0 and 4.1 to 5.0, and if not, fix the remaining issues. The implemented tests verify the following: - assumed compatibility between primitive types - equals method symmetricity - freezing/unfreezing - value compatibility by using a serializer of one type to deserialize a value serialized using a serializer of another type - serialization compatibility by serializing a row with a column of one type as a column of another type for simple and complex cells (multicell types) - (comparison) compatibility by comparing serialized values of one type using a comparator of another type; for multicell types - build rows and compare cell paths of a complex type using a cell path comparator of another complex type - verify whether types that are (value/serialization/comparison) compatible in a previous release are still compatible with this release - store the compatibility matrix in a compressed JSON file so that we can copy it to future releases to assert backward compatibility (similar approach to LegacySSTableTest) Additionally, the equals method in TupleType and UserType was fixed to be symmetric. Previously, comparing two values gave a different outcome when inverted. was: Part of CASSANDRA-14476, we should verify whether the type compatibility matrix is upgradable from 4.0 and 4.1 to 5.0 and if not, fix the remaining issues. The test were implemented under CASSANDRA-14476, we need to verify that in that certain upgrade paths. 
> Provide tests for type compatibility between 4.1 and 5.0 > > > Key: CASSANDRA-19479 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19479 > Project: Cassandra > Issue Type: Task > Components: Test/unit >Reporter: Jacek Lewandowski >Assignee: Jacek Lewandowski >Priority: Normal > Fix For: 4.0.13, 4.1.5, 5.0, 5.1 > > > This is a part of CASSANDRA-14476 - we should verify whether the type > compatibility matrix is upgradable from 4.0 and 4.1 to 5.0, and if not, fix > the remaining issues. > The implemented tests verify the following: > - assumed compatibility between primitive types > - equals method symmetricity > - freezing/unfreezing > - value compatibility by using a serializer of one type to deserialize a > value serialized using a serializer of another type > - serialization compatibility by serializing a row with a column of one type > as a column of another type for simple and complex cells (multicell types) > - (comparison) compatibility by comparing serialized values of one type using > a comparator of another type; for multicell types - build rows and compare > cell paths of a complex type using a cell path comparator of another complex > type > - verify whether types that are (value/serialization/comparison) compatible > in a previous release are still compatible with this release > - store the compatibility matrix in a compressed JSON file so that we can > copy it to future releases to assert backward compatibility (similar approach > to LegacySSTableTest) > Additionally, the equals method in TupleType and UserType was fixed to be > symmetric. Previously, comparing two values gave a different outcome when > inverted. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-19479) Provide tests for type compatibility between 4.1 and 5.0
[ https://issues.apache.org/jira/browse/CASSANDRA-19479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacek Lewandowski updated CASSANDRA-19479: -- Fix Version/s: 4.0.13 4.1.5 5.1 (was: 5.0-rc) > Provide tests for type compatibility between 4.1 and 5.0 > > > Key: CASSANDRA-19479 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19479 > Project: Cassandra > Issue Type: Task > Components: Test/unit >Reporter: Jacek Lewandowski >Assignee: Jacek Lewandowski >Priority: Normal > Fix For: 4.0.13, 4.1.5, 5.0, 5.1 > > > Part of CASSANDRA-14476, we should verify whether the type compatibility > matrix is upgradable from 4.0 and 4.1 to 5.0 and if not, fix the remaining > issues. > The tests were implemented under CASSANDRA-14476; we need to verify them for > certain upgrade paths. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-19479) Provide tests for type compatibility between 4.1 and 5.0
[ https://issues.apache.org/jira/browse/CASSANDRA-19479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacek Lewandowski updated CASSANDRA-19479: -- Change Category: Quality Assurance Complexity: Normal Fix Version/s: 5.0-rc 5.0 Status: Open (was: Triage Needed) > Provide tests for type compatibility between 4.1 and 5.0 > > > Key: CASSANDRA-19479 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19479 > Project: Cassandra > Issue Type: Task > Components: Test/unit >Reporter: Jacek Lewandowski >Assignee: Jacek Lewandowski >Priority: Normal > Fix For: 5.0-rc, 5.0 > > > Part of CASSANDRA-14476, we should verify whether the type compatibility > matrix is upgradable from 4.0 and 4.1 to 5.0 and if not, fix the remaining > issues. > The tests were implemented under CASSANDRA-14476; we need to verify them for > certain upgrade paths. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Assigned] (CASSANDRA-19479) Provide tests for type compatibility between 4.1 and 5.0
[ https://issues.apache.org/jira/browse/CASSANDRA-19479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacek Lewandowski reassigned CASSANDRA-19479: - Assignee: Jacek Lewandowski > Provide tests for type compatibility between 4.1 and 5.0 > > > Key: CASSANDRA-19479 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19479 > Project: Cassandra > Issue Type: Task > Components: Test/unit >Reporter: Jacek Lewandowski >Assignee: Jacek Lewandowski >Priority: Normal > > Part of CASSANDRA-14476, we should verify whether the type compatibility > matrix is upgradable from 4.0 and 4.1 to 5.0 and if not, fix the remaining > issues. > The tests were implemented under CASSANDRA-14476; we need to verify them for > certain upgrade paths. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Created] (CASSANDRA-19479) Provide tests for type compatibility between 4.1 and 5.0
Jacek Lewandowski created CASSANDRA-19479: - Summary: Provide tests for type compatibility between 4.1 and 5.0 Key: CASSANDRA-19479 URL: https://issues.apache.org/jira/browse/CASSANDRA-19479 Project: Cassandra Issue Type: Task Components: Test/unit Reporter: Jacek Lewandowski Part of CASSANDRA-14476, we should verify whether the type compatibility matrix is upgradable from 4.0 and 4.1 to 5.0 and if not, fix the remaining issues. The tests were implemented under CASSANDRA-14476; we need to verify them for certain upgrade paths. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-18504) Added support for type VECTOR
[ https://issues.apache.org/jira/browse/CASSANDRA-18504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825998#comment-17825998 ] Jacek Lewandowski commented on CASSANDRA-18504: --- Why was {{SSTableHeaderFix}} removed? Are we no longer affected by the issue? Can we have an upgrade dtest to prove that? > Added support for type VECTOR > -- > > Key: CASSANDRA-18504 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18504 > Project: Cassandra > Issue Type: Improvement > Components: Cluster/Schema, CQL/Syntax >Reporter: David Capwell >Assignee: David Capwell >Priority: Normal > Fix For: 5.0-alpha1, 5.0 > > Time Spent: 20h 40m > Remaining Estimate: 0h > > Based off several mailing list threads (see "[POLL] Vector type for ML”, > "[DISCUSS] New data type for vector search”, and "Adding vector search to SAI > with heirarchical navigable small world graph index”), it's desirable to add a > new type “VECTOR” that has the following properties > 1) fixed length array > 2) elements may not be null > 3) flatten array (aka multi-cell = false) -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-14476) ShortType and ByteType are incorrectly considered variable-length types
[ https://issues.apache.org/jira/browse/CASSANDRA-14476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824810#comment-17824810 ] Jacek Lewandowski commented on CASSANDRA-14476: --- Here it is: https://github.com/apache/cassandra/pull/3169 > ShortType and ByteType are incorrectly considered variable-length types > --- > > Key: CASSANDRA-14476 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14476 > Project: Cassandra > Issue Type: Bug > Components: Legacy/Core >Reporter: Vladimir Krivopalov >Assignee: Jacek Lewandowski >Priority: Low > Labels: lhf > Fix For: 5.0.x, 5.1 > > > The AbstractType class has a method valueLengthIfFixed() that returns -1 for > data types with a variable length and a positive value for types with a fixed > length. This is primarily used for efficient serialization and > deserialization. > > It turns out that there is an inconsistency in types ShortType and ByteType > as those are in fact fixed-length types (2 bytes and 1 byte, respectively) > but they don't have the valueLengthIfFixed() method overloaded and it returns > -1 as if they were of variable length. > > It would be good to fix that at some appropriate point, for example, when > introducing a new version of SSTables format, to keep the meaning of the > function consistent across data types. Saving some bytes in serialized format > is a minor but pleasant bonus. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
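The valueLengthIfFixed() contract quoted in the ticket can be sketched in a few lines. This is a simplified, illustrative hierarchy, not Cassandra's actual AbstractType code: the default reports variable length (-1), and the fix is for fixed-length types such as ShortType and ByteType to override it with their byte width.

```java
// Simplified sketch of the contract described in CASSANDRA-14476:
// -1 means "variable length"; a positive value is the fixed byte width.
abstract class TypeSketch {
    int valueLengthIfFixed() { return -1; } // default: variable length
}

class ShortTypeSketch extends TypeSketch {
    @Override int valueLengthIfFixed() { return 2; } // short = 2 bytes
}

class ByteTypeSketch extends TypeSketch {
    @Override int valueLengthIfFixed() { return 1; } // byte = 1 byte
}

public class FixedLengthSketch {
    public static void main(String[] args) {
        // Without the overrides, both would report -1 and be serialized
        // with an explicit length prefix, wasting bytes on disk.
        System.out.println(new ShortTypeSketch().valueLengthIfFixed()); // 2
        System.out.println(new ByteTypeSketch().valueLengthIfFixed());  // 1
    }
}
```

As the ticket notes, flipping this return value changes the on-disk encoding, which is why the change is tied to a new SSTable format version.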
[jira] [Commented] (CASSANDRA-14476) ShortType and ByteType are incorrectly considered variable-length types
[ https://issues.apache.org/jira/browse/CASSANDRA-14476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824785#comment-17824785 ] Jacek Lewandowski commented on CASSANDRA-14476: --- I'm going to first submit some tests which will validate serialization and comparisons, as well as backward compatibility. > ShortType and ByteType are incorrectly considered variable-length types > --- > > Key: CASSANDRA-14476 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14476 > Project: Cassandra > Issue Type: Bug > Components: Legacy/Core >Reporter: Vladimir Krivopalov >Assignee: Jacek Lewandowski >Priority: Low > Labels: lhf > Fix For: 5.0.x, 5.1 > > > The AbstractType class has a method valueLengthIfFixed() that returns -1 for > data types with a variable length and a positive value for types with a fixed > length. This is primarily used for efficient serialization and > deserialization. > > It turns out that there is an inconsistency in types ShortType and ByteType > as those are in fact fixed-length types (2 bytes and 1 byte, respectively) > but they don't have the valueLengthIfFixed() method overloaded and it returns > -1 as if they were of variable length. > > It would be good to fix that at some appropriate point, for example, when > introducing a new version of SSTables format, to keep the meaning of the > function consistent across data types. Saving some bytes in serialized format > is a minor but pleasant bonus. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Comment Edited] (CASSANDRA-14476) ShortType and ByteType are incorrectly considered variable-length types
[ https://issues.apache.org/jira/browse/CASSANDRA-14476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17822240#comment-17822240 ] Jacek Lewandowski edited comment on CASSANDRA-14476 at 3/8/24 1:05 PM: --- There are more problems with type compatibility: 1. Fixed length types reported as variable length: *ByteType*, *ShortType*, *CounterColumnType*, *SimpleDateType*, *TimeType*, and types like *TupleType*, *UserType* when all subtypes are of fixed length 2. Value compatibility issues: * *IntegerType* should be compatible with *ShortType*, *ByteType*, *SimpleDateType*, and *TimeType* - all of them are simple integers serialized with Big-Endian byte order * *LongType* is compatible with *TimestampType* and *TimestampType* is compatible with *LongType*, which makes a cycle in the type compatibility hierarchy - I don't know if it is ok because the relation {{isValueCompatibleWith}} is used when merging data from different sources to determine the resulting type. It may end up with a result depending on the order of data sources. Is it ok for compaction and querying? - I don't know. * *TimeType* is compatible with *LongType*, but it should be the opposite, as *LongType* is more generic than *TimeType* * *SimpleDateType* is compatible with *Int32Type*, but it should be the opposite, as *Int32Type* is more generic than *SimpleDateType* 3. Painful lack of tests for this stuff 4. {{isCompatibleWith}} seems to be used for very few things: * validating the return type of the replaced function or aggregate * validating the new table metadata against the previous metadata - the new metadata must have all the types compatible with the previous metadata. Some conclusions: * for the return type of functions and aggregates, it does not matter whether the compared types are multi-cell or not, all in all we deal with an opaque value - it would be enough to ensure value compatibility (compose/decompose) and comparison consistency. 
* I suspect a bug there, though - the return type is required to satisfy {{returnType.isCompatibleWith(existingAggregate.returnType())}} condition. I believe the condition should be the opposite - assuming that relation {{isCompatibleWith}} is a partial order, the *existing return type should be the same or more generic than the new type* so that the function will continue to work correctly with the existing usages. If we allow changing the type from, say, {{UTF8}} to {{Bytes}} (which is valid according to the current condition), the usages expecting {{UTF8}} return type will stop working. * For the metadata compatibility checks, we never use multi-cell serialized values for sorting. If a multi-cell type is ever used in an order requiring context (part of the primary key), it is always frozen. Therefore, there is no need to consider different rules for multi-cell / frozen variants. --- I haven't investigated the compatibility of complex types yet was (Author: jlewandowski): There are more problems with type compatibility: 1. Fixed length types reported as variable length: *ByteType*, *ShortType*, *CounterColumnType*, *SimpleDateType*, *TimeType*, and types like *TupleType*, *UserType* when all subtypes are of fixed length 2. Value compatibility issues: * *IntegerType* should be compatible with *ShortType*, *ByteType*, *SimpleDateType* and *TimeType* - all of them are simple integers serialized with Big-Endian byte order * *LongType* is compatible with *TimestampType* and *TimestampType* is compatible with *LongType* which makes a cycle in the type compatibility hierarchy - I don't know if it is ok because the relation {{isValueCompatibleWith}} is used when merging data from different sources in order to determine the resulting type. It may end up with a result depending on the order of data sources - is it ok for compaction and querying? 
* *TimeType* is compatible with *LongType*, but it should be opposite as the *LongType* is more generic than *TimeType* * *SimpleDateType* is compatible with *Int32Type*, but is should be opposite as the *Int32Type* is more generic than *SimpleDateType* 3. Painful lack of tests for this stuff --- I haven't investigated the compatibility of complex types yet > ShortType and ByteType are incorrectly considered variable-length types > --- > > Key: CASSANDRA-14476 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14476 > Project: Cassandra > Issue Type: Bug > Components: Legacy/Core >Reporter: Vladimir Krivopalov >Assignee: Jacek Lewandowski >Priority: Low > Labels: lhf > Fix For: 5.0.x, 5.1 > > > The AbstractType class has a method valueLengthIfFixed() that returns -1 for > data types with a variable length and a positive value
[jira] [Commented] (CASSANDRA-14476) ShortType and ByteType are incorrectly considered variable-length types
[ https://issues.apache.org/jira/browse/CASSANDRA-14476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17822240#comment-17822240 ] Jacek Lewandowski commented on CASSANDRA-14476: --- There are more problems with type compatibility: 1. Fixed length types reported as variable length: *ByteType*, *ShortType*, *CounterColumnType*, *SimpleDateType*, *TimeType*, and types like *TupleType*, *UserType* when all subtypes are of fixed length 2. Value compatibility issues: * *IntegerType* should be compatible with *ShortType*, *ByteType*, *SimpleDateType* and *TimeType* - all of them are simple integers serialized with Big-Endian byte order * *LongType* is compatible with *TimestampType* and *TimestampType* is compatible with *LongType* which makes a cycle in the type compatibility hierarchy - I don't know if it is ok because the relation {{isValueCompatibleWith}} is used when merging data from different sources in order to determine the resulting type. It may end up with a result depending on the order of data sources - is it ok for compaction and querying? * *TimeType* is compatible with *LongType*, but it should be the opposite, as *LongType* is more generic than *TimeType* * *SimpleDateType* is compatible with *Int32Type*, but it should be the opposite, as *Int32Type* is more generic than *SimpleDateType* 3. Painful lack of tests for this stuff --- I haven't investigated the compatibility of complex types yet > ShortType and ByteType are incorrectly considered variable-length types > --- > > Key: CASSANDRA-14476 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14476 > Project: Cassandra > Issue Type: Bug > Components: Legacy/Core >Reporter: Vladimir Krivopalov >Assignee: Jacek Lewandowski >Priority: Low > Labels: lhf > Fix For: 5.0.x, 5.1 > > > The AbstractType class has a method valueLengthIfFixed() that returns -1 for > data types with a variable length and a positive value for types with a fixed > length. 
This is primarily used for efficient serialization and > deserialization. > > It turns out that there is an inconsistency in types ShortType and ByteType > as those are in fact fixed-length types (2 bytes and 1 byte, respectively) > but they don't have the valueLengthIfFixed() method overloaded and it returns > -1 as if they were of variable length. > > It would be good to fix that at some appropriate point, for example, when > introducing a new version of SSTables format, to keep the meaning of the > function consistent across data types. Saving some bytes in serialized format > is a minor but pleasant bonus. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
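As a hedged illustration of the compatibility cycle mentioned in point 2 of the comment above (the class names are Cassandra's, but the map and merge logic here are invented for demonstration, not the project's actual code), a symmetric isValueCompatibleWith relation makes the merged type depend on the order in which sources are visited:

```java
import java.util.Map;
import java.util.Set;

// Toy model of the LongType <-> TimestampType cycle: each type is declared
// value-compatible with the other, so a naive "keep the current type if the
// incoming one is compatible" merge rule gives order-dependent results.
public class CompatCycle {
    static final Map<String, Set<String>> COMPAT = Map.of(
        "LongType", Set.of("LongType", "TimestampType"),
        "TimestampType", Set.of("TimestampType", "LongType"));

    // Keep the current type if the incoming type is compatible with it;
    // otherwise switch to the incoming type.
    public static String merge(String current, String incoming) {
        return COMPAT.get(incoming).contains(current) ? current : incoming;
    }

    public static void main(String[] args) {
        // Same pair of sources, different visiting order -> different result.
        System.out.println(merge("LongType", "TimestampType"));  // LongType
        System.out.println(merge("TimestampType", "LongType"));  // TimestampType
    }
}
```

This is exactly why a cycle in the compatibility hierarchy is suspicious: with a proper partial order, merging the same set of sources would always converge to the same (most generic) type.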
[jira] [Updated] (CASSANDRA-14476) ShortType and ByteType are incorrectly considered variable-length types
[ https://issues.apache.org/jira/browse/CASSANDRA-14476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacek Lewandowski updated CASSANDRA-14476: -- Fix Version/s: 5.0.x 5.1 > ShortType and ByteType are incorrectly considered variable-length types > --- > > Key: CASSANDRA-14476 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14476 > Project: Cassandra > Issue Type: Bug > Components: Legacy/Core >Reporter: Vladimir Krivopalov >Assignee: Jacek Lewandowski >Priority: Low > Labels: lhf > Fix For: 5.0.x, 5.1 > > > The AbstractType class has a method valueLengthIfFixed() that returns -1 for > data types with a variable length and a positive value for types with a fixed > length. This is primarily used for efficient serialization and > deserialization. > > It turns out that there is an inconsistency in types ShortType and ByteType > as those are in fact fixed-length types (2 bytes and 1 byte, respectively) > but they don't have the valueLengthIfFixed() method overloaded and it returns > -1 as if they were of variable length. > > It would be good to fix that at some appropriate point, for example, when > introducing a new version of SSTables format, to keep the meaning of the > function consistent across data types. Saving some bytes in serialized format > is a minor but pleasant bonus. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-14476) ShortType and ByteType are incorrectly considered variable-length types
[ https://issues.apache.org/jira/browse/CASSANDRA-14476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17821308#comment-17821308 ] Jacek Lewandowski commented on CASSANDRA-14476: --- In 5.0 the problem affects more types: {{ByteType}}, {{ShortType}}, {{SimpleDateType}}, {{TimeType}}, {{TimestampType}}. I'm going to fix it and move the original method checking for whether the type serialization is variable or fixed length directly to {{TypeSerializer}}. I'll also provide some upgrade tests to make sure the old sstables can be read without problems. I don't think we need to bump SSTable version though because it does not change anything with serialization. It may certainly break some implicit casting in CQL though. > ShortType and ByteType are incorrectly considered variable-length types > --- > > Key: CASSANDRA-14476 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14476 > Project: Cassandra > Issue Type: Bug > Components: Legacy/Core >Reporter: Vladimir Krivopalov >Assignee: Jacek Lewandowski >Priority: Low > Labels: lhf > > The AbstractType class has a method valueLengthIfFixed() that returns -1 for > data types with a variable length and a positive value for types with a fixed > length. This is primarily used for efficient serialization and > deserialization. > > It turns out that there is an inconsistency in types ShortType and ByteType > as those are in fact fixed-length types (2 bytes and 1 byte, respectively) > but they don't have the valueLengthIfFixed() method overloaded and it returns > -1 as if they were of variable length. > > It would be good to fix that at some appropriate point, for example, when > introducing a new version of SSTables format, to keep the meaning of the > function consistent across data types. Saving some bytes in serialized format > is a minor but pleasant bonus. 
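A rough sketch of the refactoring proposed in the comment above, moving the fixed/variable-length question onto the serializer. All names and signatures here are illustrative assumptions, not the actual patch:

```java
import java.util.OptionalInt;

// Illustrative sketch only: let the serializer, rather than the type itself,
// declare whether its wire format has a fixed length. Names are invented.
public class SerializerLengthSketch {
    public abstract static class TypeSerializer<T> {
        // Empty = variable length; present = fixed serialized size in bytes.
        public OptionalInt fixedLength() { return OptionalInt.empty(); }
    }

    public static class ShortSerializer extends TypeSerializer<Short> {
        @Override
        public OptionalInt fixedLength() { return OptionalInt.of(2); }
    }

    public static class UTF8Serializer extends TypeSerializer<String> {
        // Strings are variable length, so the default (empty) stands.
    }

    public static void main(String[] args) {
        System.out.println(new ShortSerializer().fixedLength()); // OptionalInt[2]
        System.out.println(new UTF8Serializer().fixedLength());  // OptionalInt.empty
    }
}
```

Keeping the answer next to the serialization code is what makes it possible to fix the reported lengths without bumping the SSTable version: the on-disk bytes do not change, only the metadata about them.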
[jira] [Assigned] (CASSANDRA-14476) ShortType and ByteType are incorrectly considered variable-length types
[ https://issues.apache.org/jira/browse/CASSANDRA-14476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacek Lewandowski reassigned CASSANDRA-14476: - Assignee: Jacek Lewandowski (was: Jearvon Dharrie) > ShortType and ByteType are incorrectly considered variable-length types > --- > > Key: CASSANDRA-14476 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14476 > Project: Cassandra > Issue Type: Bug > Components: Legacy/Core >Reporter: Vladimir Krivopalov >Assignee: Jacek Lewandowski >Priority: Low > Labels: lhf > > The AbstractType class has a method valueLengthIfFixed() that returns -1 for > data types with a variable length and a positive value for types with a fixed > length. This is primarily used for efficient serialization and > deserialization. > > It turns out that there is an inconsistency in types ShortType and ByteType > as those are in fact fixed-length types (2 bytes and 1 byte, respectively) > but they don't have the valueLengthIfFixed() method overloaded and it returns > -1 as if they were of variable length. > > It would be good to fix that at some appropriate point, for example, when > introducing a new version of SSTables format, to keep the meaning of the > function consistent across data types. Saving some bytes in serialized format > is a minor but pleasant bonus. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Created] (CASSANDRA-19408) Define smoke tests set to run with both JVMs
Jacek Lewandowski created CASSANDRA-19408: - Summary: Define smoke tests set to run with both JVMs Key: CASSANDRA-19408 URL: https://issues.apache.org/jira/browse/CASSANDRA-19408 Project: Cassandra Issue Type: Task Reporter: Jacek Lewandowski -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Created] (CASSANDRA-19407) Drop one JVM tests run from pre-commit pipeline
Jacek Lewandowski created CASSANDRA-19407: - Summary: Drop one JVM tests run from pre-commit pipeline Key: CASSANDRA-19407 URL: https://issues.apache.org/jira/browse/CASSANDRA-19407 Project: Cassandra Issue Type: Task Reporter: Jacek Lewandowski -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Created] (CASSANDRA-19406) Optimize pre-commit test suite
Jacek Lewandowski created CASSANDRA-19406: - Summary: Optimize pre-commit test suite Key: CASSANDRA-19406 URL: https://issues.apache.org/jira/browse/CASSANDRA-19406 Project: Cassandra Issue Type: Epic Reporter: Jacek Lewandowski -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-18824) Backport CASSANDRA-16418: Cleanup behaviour during node decommission caused missing replica
[ https://issues.apache.org/jira/browse/CASSANDRA-18824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacek Lewandowski updated CASSANDRA-18824: -- Fix Version/s: 3.0.30 3.11.17 4.0.12 4.1.4 5.0-rc 5.1 (was: 3.0.x) (was: 3.11.x) (was: 5.x) (was: 4.0.x) (was: 4.1.x) (was: 5.0.x) Source Control Link: https://github.com/apache/cassandra/commit/5be57829b03ef980933ba52ecc0549787f653da4 Resolution: Fixed Status: Resolved (was: Ready to Commit) > Backport CASSANDRA-16418: Cleanup behaviour during node decommission caused > missing replica > --- > > Key: CASSANDRA-18824 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18824 > Project: Cassandra > Issue Type: Bug > Components: Consistency/Bootstrap and Decommission >Reporter: Szymon Miezal >Assignee: Szymon Miezal >Priority: Normal > Fix For: 3.0.30, 3.11.17, 4.0.12, 4.1.4, 5.0-rc, 5.1 > > Time Spent: 1.5h > Remaining Estimate: 0h > > Node decommission triggers data transfer to other nodes. While this transfer > is in progress, > receiving nodes temporarily hold token ranges in a pending state. However, > the cleanup process currently doesn't consider these pending ranges when > calculating token ownership. > As a consequence, data that is already stored in sstables gets inadvertently > cleaned up. > STR: > * Create two node cluster > * Create keyspace with RF=1 > * Insert sample data (assert data is available when querying both nodes) > * Start decommission process of node 1 > * Start running cleanup in a loop on node 2 until decommission on node 1 > finishes > * Verify all rows are in the cluster - it will fail as the previous step > removed some of the rows > It seems that the cleanup process does not take into account the pending > ranges, it uses only the local ranges - > [https://github.com/apache/cassandra/blob/caad2f24f95b494d05c6b5d86a8d25fbee58d7c2/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L466]. > There are two solutions to the problem. 
> One would be to change the cleanup process in a way that it starts taking > pending ranges into account. Even though it might sound tempting at first, it > will require involved changes and a lot of testing effort. > Alternatively, we could interrupt/prevent the cleanup process from running > when any pending range on a node is detected. That sounds like a reasonable > alternative to the problem and something that is relatively easy to implement. > The bug has already been fixed in 4.x with CASSANDRA-16418; the goal of this > ticket is to backport it to 3.x.
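The second option described above (the one the backport took) can be sketched as a simple guard: refuse to run cleanup at all while the node has pending ranges. This is a minimal, hypothetical sketch with invented names, not Cassandra's actual CompactionManager code:

```java
import java.util.Collection;
import java.util.List;

// Hypothetical sketch of the "interrupt/prevent" alternative: instead of
// teaching cleanup to compute ownership from pending ranges, bail out
// whenever any range on this node is still pending.
public class CleanupGuard {
    public static void performCleanup(Collection<String> pendingRanges) {
        if (!pendingRanges.isEmpty())
            throw new IllegalStateException(
                "Cleanup cannot run while the node has pending ranges: " + pendingRanges);
        // ... proceed with normal cleanup over the node's local ranges ...
    }

    public static void main(String[] args) {
        performCleanup(List.of());              // no pending ranges: proceeds
        try {
            performCleanup(List.of("(0,100]")); // pending range: rejected
        } catch (IllegalStateException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```

The design trade-off matches the ticket's reasoning: rejecting cleanup under pending ranges is easy to implement and verify, whereas folding pending ranges into the ownership calculation would touch the core cleanup path and demand far more testing.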
[jira] [Updated] (CASSANDRA-18824) Backport CASSANDRA-16418: Cleanup behaviour during node decommission caused missing replica
[ https://issues.apache.org/jira/browse/CASSANDRA-18824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacek Lewandowski updated CASSANDRA-18824: -- Reviewers: Brandon Williams, Jacek Lewandowski, Jacek Lewandowski (was: Brandon Williams, Jacek Lewandowski) Brandon Williams, Jacek Lewandowski, Jacek Lewandowski (was: Brandon Williams, Jacek Lewandowski) Status: Review In Progress (was: Patch Available) > Backport CASSANDRA-16418: Cleanup behaviour during node decommission caused > missing replica > --- > > Key: CASSANDRA-18824 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18824 > Project: Cassandra > Issue Type: Bug > Components: Consistency/Bootstrap and Decommission >Reporter: Szymon Miezal >Assignee: Szymon Miezal >Priority: Normal > Fix For: 3.0.x, 3.11.x, 4.0.x, 4.1.x, 5.0.x, 5.x > > Time Spent: 1.5h > Remaining Estimate: 0h > > Node decommission triggers data transfer to other nodes. While this transfer > is in progress, > receiving nodes temporarily hold token ranges in a pending state. However, > the cleanup process currently doesn't consider these pending ranges when > calculating token ownership. > As a consequence, data that is already stored in sstables gets inadvertently > cleaned up. > STR: > * Create two node cluster > * Create keyspace with RF=1 > * Insert sample data (assert data is available when querying both nodes) > * Start decommission process of node 1 > * Start running cleanup in a loop on node 2 until decommission on node 1 > finishes > * Verify of all rows are in the cluster - it will fail as the previous step > removed some of the rows > It seems that the cleanup process does not take into account the pending > ranges, it uses only the local ranges - > [https://github.com/apache/cassandra/blob/caad2f24f95b494d05c6b5d86a8d25fbee58d7c2/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L466]. > There are two solutions to the problem. 
> One would be to change the cleanup process in a way that it start taking > pending ranges into account. Even thought it might sound tempting at first it > will require involving changes and a lot of testing effort. > Alternatively we could interrupt/prevent the cleanup process from running > when any pending range on a node is detected. That sounds like a reasonable > alternative to the problem and something that is relatively easy to implement. > The bug has been already fixed in 4.x with CASSANDRA-16418, the goal of this > ticket is to backport it to 3.x. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-18824) Backport CASSANDRA-16418: Cleanup behaviour during node decommission caused missing replica
[ https://issues.apache.org/jira/browse/CASSANDRA-18824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacek Lewandowski updated CASSANDRA-18824: -- Status: Ready to Commit (was: Review In Progress) > Backport CASSANDRA-16418: Cleanup behaviour during node decommission caused > missing replica > --- > > Key: CASSANDRA-18824 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18824 > Project: Cassandra > Issue Type: Bug > Components: Consistency/Bootstrap and Decommission >Reporter: Szymon Miezal >Assignee: Szymon Miezal >Priority: Normal > Fix For: 3.0.x, 3.11.x, 4.0.x, 4.1.x, 5.0.x, 5.x > > Time Spent: 1.5h > Remaining Estimate: 0h > > Node decommission triggers data transfer to other nodes. While this transfer > is in progress, > receiving nodes temporarily hold token ranges in a pending state. However, > the cleanup process currently doesn't consider these pending ranges when > calculating token ownership. > As a consequence, data that is already stored in sstables gets inadvertently > cleaned up. > STR: > * Create two node cluster > * Create keyspace with RF=1 > * Insert sample data (assert data is available when querying both nodes) > * Start decommission process of node 1 > * Start running cleanup in a loop on node 2 until decommission on node 1 > finishes > * Verify of all rows are in the cluster - it will fail as the previous step > removed some of the rows > It seems that the cleanup process does not take into account the pending > ranges, it uses only the local ranges - > [https://github.com/apache/cassandra/blob/caad2f24f95b494d05c6b5d86a8d25fbee58d7c2/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L466]. > There are two solutions to the problem. > One would be to change the cleanup process in a way that it start taking > pending ranges into account. Even thought it might sound tempting at first it > will require involving changes and a lot of testing effort. 
> Alternatively we could interrupt/prevent the cleanup process from running > when any pending range on a node is detected. That sounds like a reasonable > alternative to the problem and something that is relatively easy to implement. > The bug has been already fixed in 4.x with CASSANDRA-16418, the goal of this > ticket is to backport it to 3.x. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-18824) Backport CASSANDRA-16418: Cleanup behaviour during node decommission caused missing replica
[ https://issues.apache.org/jira/browse/CASSANDRA-18824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17815251#comment-17815251 ] Jacek Lewandowski commented on CASSANDRA-18824: --- ok, merging > Backport CASSANDRA-16418: Cleanup behaviour during node decommission caused > missing replica > --- > > Key: CASSANDRA-18824 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18824 > Project: Cassandra > Issue Type: Bug > Components: Consistency/Bootstrap and Decommission >Reporter: Szymon Miezal >Assignee: Szymon Miezal >Priority: Normal > Fix For: 3.0.x, 3.11.x, 4.0.x, 4.1.x, 5.0.x, 5.x > > Time Spent: 1.5h > Remaining Estimate: 0h > > Node decommission triggers data transfer to other nodes. While this transfer > is in progress, > receiving nodes temporarily hold token ranges in a pending state. However, > the cleanup process currently doesn't consider these pending ranges when > calculating token ownership. > As a consequence, data that is already stored in sstables gets inadvertently > cleaned up. > STR: > * Create two node cluster > * Create keyspace with RF=1 > * Insert sample data (assert data is available when querying both nodes) > * Start decommission process of node 1 > * Start running cleanup in a loop on node 2 until decommission on node 1 > finishes > * Verify of all rows are in the cluster - it will fail as the previous step > removed some of the rows > It seems that the cleanup process does not take into account the pending > ranges, it uses only the local ranges - > [https://github.com/apache/cassandra/blob/caad2f24f95b494d05c6b5d86a8d25fbee58d7c2/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L466]. > There are two solutions to the problem. > One would be to change the cleanup process in a way that it start taking > pending ranges into account. Even thought it might sound tempting at first it > will require involving changes and a lot of testing effort. 
> Alternatively we could interrupt/prevent the cleanup process from running > when any pending range on a node is detected. That sounds like a reasonable > alternative to the problem and something that is relatively easy to implement. > The bug has been already fixed in 4.x with CASSANDRA-16418, the goal of this > ticket is to backport it to 3.x. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-18824) Backport CASSANDRA-16418: Cleanup behaviour during node decommission caused missing replica
[ https://issues.apache.org/jira/browse/CASSANDRA-18824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814322#comment-17814322 ] Jacek Lewandowski commented on CASSANDRA-18824: --- I've created https://issues.apache.org/jira/browse/CASSANDRA-19363 and https://issues.apache.org/jira/browse/CASSANDRA-19364 as a result of investigating the flakiness. The fact that it didn't fail in 5k runs, assuming all of those runs were executed under very similar cluster conditions, can be misleading. Adding a slight delay in an async code of pending ranges calculator leads to consistent test failures even on 4.0. This is not related to this issue though - it is only the test added here which can accidentally detect the problem. Since those separate tickets are now created, I think we can merge this ticket. However, those who asked for this fix should be notified about those possible issues. > Backport CASSANDRA-16418: Cleanup behaviour during node decommission caused > missing replica > --- > > Key: CASSANDRA-18824 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18824 > Project: Cassandra > Issue Type: Bug > Components: Consistency/Bootstrap and Decommission >Reporter: Szymon Miezal >Assignee: Szymon Miezal >Priority: Normal > Fix For: 3.0.x, 3.11.x, 4.0.x, 4.1.x, 5.0.x, 5.x > > Time Spent: 1.5h > Remaining Estimate: 0h > > Node decommission triggers data transfer to other nodes. While this transfer > is in progress, > receiving nodes temporarily hold token ranges in a pending state. However, > the cleanup process currently doesn't consider these pending ranges when > calculating token ownership. > As a consequence, data that is already stored in sstables gets inadvertently > cleaned up. 
> STR: > * Create two node cluster > * Create keyspace with RF=1 > * Insert sample data (assert data is available when querying both nodes) > * Start decommission process of node 1 > * Start running cleanup in a loop on node 2 until decommission on node 1 > finishes > * Verify of all rows are in the cluster - it will fail as the previous step > removed some of the rows > It seems that the cleanup process does not take into account the pending > ranges, it uses only the local ranges - > [https://github.com/apache/cassandra/blob/caad2f24f95b494d05c6b5d86a8d25fbee58d7c2/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L466]. > There are two solutions to the problem. > One would be to change the cleanup process in a way that it start taking > pending ranges into account. Even thought it might sound tempting at first it > will require involving changes and a lot of testing effort. > Alternatively we could interrupt/prevent the cleanup process from running > when any pending range on a node is detected. That sounds like a reasonable > alternative to the problem and something that is relatively easy to implement. > The bug has been already fixed in 4.x with CASSANDRA-16418, the goal of this > ticket is to backport it to 3.x. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-19363) Weird data loss in 3.11 flakiness during decommission
[ https://issues.apache.org/jira/browse/CASSANDRA-19363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacek Lewandowski updated CASSANDRA-19363: -- Description: While testing CASSANDRA-18824 on 3.11, we noticed one flaky result of the newly added decommission test. It looked innocent; however, when digging into the logs, it turned out that, for some reason, the data that were being pumped into the cluster went to the decommissioned node instead of going to the working node. That is, the data were inserted into a 2-node cluster (RF=1) while, say, node1 got decommissioned. The expected behavior would be that the data land in node2 after that. However, for some reason, in this 1/1000 flaky test, the situation was the opposite, and the data went to the decommissioned node, resulting in a total loss. I haven't found the reason. I don't know if it is a test failure or a production code problem. I cannot prove that it is only a 3.11 problem. I'm creating this ticket because if this is a real issue and exists on newer branches, it is serious. The logs artifact has been lost in CircleCI, so I'm attaching the one I downloaded earlier; unfortunately, it is cleaned up a bit. 
The relevant part is:
{noformat}
DEBUG [node1_isolatedExecutor:3] node1 ColumnFamilyStore.java:949 - Enqueuing flush of tbl: 38.965KiB (0%) on-heap, 0.000KiB (0%) off-heap
DEBUG [node1_PerDiskMemtableFlushWriter_1:1] node1 Memtable.java:477 - Writing Memtable-tbl(5.176KiB serialized bytes, 100 ops, 0%/0% of on/off-heap limit), flushed range = (max(-3074457345618258603), max(3074457345618258602)]
DEBUG [node1_PerDiskMemtableFlushWriter_2:1] node1 Memtable.java:477 - Writing Memtable-tbl(5.176KiB serialized bytes, 100 ops, 0%/0% of on/off-heap limit), flushed range = (max(3074457345618258602), max(9223372036854775807)]
DEBUG [node1_PerDiskMemtableFlushWriter_0:1] node1 Memtable.java:477 - Writing Memtable-tbl(5.176KiB serialized bytes, 100 ops, 0%/0% of on/off-heap limit), flushed range = (min(-9223372036854775808), max(-3074457345618258603)]
DEBUG [node1_PerDiskMemtableFlushWriter_2:1] node1 Memtable.java:506 - Completed flushing /node1/data2/distributed_test_keyspace/tbl-7fb7aa20ab3a11eeac381f661fe8b82f/me-3-big-Data.db (1.059KiB) for commitlog position CommitLogPosition(segmentId=1704397819937, position=47614)
DEBUG [node1_PerDiskMemtableFlushWriter_1:1] node1 Memtable.java:506 - Completed flushing /node1/data1/distributed_test_keyspace/tbl-7fb7aa20ab3a11eeac381f661fe8b82f/me-2-big-Data.db (1.091KiB) for commitlog position CommitLogPosition(segmentId=1704397819937, position=47614)
DEBUG [node1_PerDiskMemtableFlushWriter_0:1] node1 Memtable.java:506 - Completed flushing /node1/data0/distributed_test_keyspace/tbl-7fb7aa20ab3a11eeac381f661fe8b82f/me-1-big-Data.db (1.260KiB) for commitlog position CommitLogPosition(segmentId=1704397819937, position=47614)
DEBUG [node1_MemtableFlushWriter:1] node1 ColumnFamilyStore.java:1267 - Flushed to [BigTableReader(path='/node1/data0/distributed_test_keyspace/tbl-7fb7aa20ab3a11eeac381f661fe8b82f/me-1-big-Data.db'), BigTableReader(path='/node1/data1/distributed_test_keyspace/tbl-7fb7aa20ab3a11eeac381f661fe8b82f/me-2-big-Data.db'), BigTableReader(path='/node1/data2/distributed_test_keyspace/tbl-7fb7aa20ab3a11eeac381f661fe8b82f/me-3-big-Data.db')] (3 sstables, 17.521KiB), biggest 5.947KiB, smallest 5.773KiB
DEBUG [node2_isolatedExecutor:1] node2 ColumnFamilyStore.java:949 - Enqueuing flush of tbl: 38.379KiB (0%) on-heap, 0.000KiB (0%) off-heap
DEBUG [node2_PerDiskMemtableFlushWriter_0:1] node2 Memtable.java:477 - Writing Memtable-tbl(5.176KiB serialized bytes, 100 ops, 0%/0% of on/off-heap limit), flushed range = (null, null]
DEBUG [node2_PerDiskMemtableFlushWriter_0:1] node2 Memtable.java:506 - Completed flushing /node2/data2/distributed_test_keyspace/tbl-7fb7aa20ab3a11eeac381f661fe8b82f/me-1-big-Data.db (3.409KiB) for commitlog position CommitLogPosition(segmentId=1704397821653, position=54585)
DEBUG [node2_MemtableFlushWriter:1] node2 ColumnFamilyStore.java:1267 - Flushed to [BigTableReader(path='/node2/data2/distributed_test_keyspace/tbl-7fb7aa20ab3a11eeac381f661fe8b82f/me-1-big-Data.db')] (1 sstables, 7.731KiB), biggest 7.731KiB,
{noformat}
As one can see, node1 flushed 3 sstables of {{tbl}} even though it had already been decommissioned, while node2 barely flushed anything. This is the opposite of the passing run of the test. The test code is as follows:
{code:java}
try (Cluster cluster = init(builder().withNodes(2)
                                     .withTokenSupplier(evenlyDistributedTokens(2))
                                     .withNodeIdTopology(NetworkTopology.singleDcNetworkTopology(2, "dc0", "rack0"))
                                     .withConfig(config -> config.with(NETWORK, GOSSIP))
                                     .start(), 1))
{
    IInvokableInstance nodeToDecommission = cluster.get(1);
    IInvokableInstance nodeToRemainInCluster = cluster.get(2);

    // Start decommission on nodeToDecommission
    cluster.forEach(statusToDecommission(nodeToDecommission));
    logger.info("Decommissioning node {}", nodeToDecommission.broadcastAddress());

    // Add data to the cluster while the node is decommissioning
    int numRows = 100;
    cluster.schemaChange("CREATE TABLE IF NOT EXISTS " + KEYSPACE + ".tbl (pk int, ck int, v int, PRIMARY KEY (pk, ck))");
    insertData(cluster, 1, numRows, ConsistencyLevel.ONE);

    // Check data before cleanup on nodeToRemainInCluster
    assertEquals(100, nodeToRemainInCluster.executeInternal("SELECT * FROM " + KEYSPACE + ".tbl").length);
}
{code}
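Incidentally, the three per-disk flush ranges in the node1 log above correspond to splitting the full Murmur3 token space into three equal parts, one per data directory. A quick arithmetic check (an illustrative sketch only, not Cassandra's actual disk-boundary code; the class name is made up):

```java
// Illustrative check (not Cassandra's DiskBoundaryManager): splitting the
// Murmur3 token space [-2^63, 2^63-1] into three equal parts reproduces the
// per-disk boundaries -3074457345618258603 and 3074457345618258602 seen in
// the node1 flush log.
public class TokenSpaceSplit {
    public static void main(String[] args) {
        // (2^64 - 1) / 3 == 6148914691236517205, which fits in a signed long
        long step = Long.divideUnsigned(-1L, 3L);
        long b1 = Long.MIN_VALUE + step;  // end of data0's range
        long b2 = b1 + step;              // end of data1's range
        if (b1 != -3074457345618258603L) throw new AssertionError(b1);
        if (b2 != 3074457345618258602L) throw new AssertionError(b2);
        System.out.println("data0: (min(" + Long.MIN_VALUE + "), max(" + b1 + ")]");
        System.out.println("data1: (max(" + b1 + "), max(" + b2 + ")]");
        System.out.println("data2: (max(" + b2 + "), max(" + Long.MAX_VALUE + ")]");
    }
}
```

The node2 log shows `flushed range = (null, null]` instead, because with a single data directory there is no per-disk split to compute.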
[jira] [Updated] (CASSANDRA-19363) Weird data loss in 3.11 flakiness during decommission
[ https://issues.apache.org/jira/browse/CASSANDRA-19363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacek Lewandowski updated CASSANDRA-19363: -- Attachment: bad.txt > Weird data loss in 3.11 flakiness during decommission > - > > Key: CASSANDRA-19363 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19363 > Project: Cassandra > Issue Type: Bug > Components: Consistency/Bootstrap and Decommission >Reporter: Jacek Lewandowski >Priority: Normal > Fix For: 3.11.x > > Attachments: bad.txt > > > While testing CASSANDRA-18824 on 3.11, we noticed one flaky result of the > newly added decommission test. It looked innocent; however, when digging into > the logs, it turned out that, for some reason, the data that were being > pumped into the cluster went to the decommissioned node instead of going to > the working node. > That is, the data were inserted into a 2-node cluster (RF=1) while, say, > node2 got decommissioned. The expected behavior would be that the data land > in node1 after that. However, for some reason, in this 1/1000 flaky test, the > situation was the opposite, and the data went to the decommissioned node, > resulting in a total loss. > I haven't found the reason. I don't know if it is a test failure or a > production code problem. I cannot prove that it is only a 3.11 problem. I'm > creating this ticket because if this is a real issue and exists on newer > branches, it is serious. > -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-19364) Data loss during decommission possible due to a delayed and unsynced pending ranges calculation
[ https://issues.apache.org/jira/browse/CASSANDRA-19364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacek Lewandowski updated CASSANDRA-19364: -- Description: This possible issue was discovered while inspecting flaky tests of CASSANDRA-18824. Pending ranges calculation is executed asynchronously when a node is decommissioned. If data is inserted during decommissioning and the pending ranges calculation is delayed for some reason (which can happen, as it is not synchronous), we may end up with partial data loss. It may simply be a flawed test; thus, I see this ticket more as a memo for further investigation or discussion. Note that this has obviously been fixed by TCM.

The test in question was:
{code:java}
try (Cluster cluster = init(builder().withNodes(2)
                                     .withTokenSupplier(evenlyDistributedTokens(2))
                                     .withNodeIdTopology(NetworkTopology.singleDcNetworkTopology(2, "dc0", "rack0"))
                                     .withConfig(config -> config.with(NETWORK, GOSSIP))
                                     .start(), 1))
{
    IInvokableInstance nodeToDecommission = cluster.get(1);
    IInvokableInstance nodeToRemainInCluster = cluster.get(2);

    // Start decommission on nodeToDecommission
    cluster.forEach(statusToDecommission(nodeToDecommission));
    logger.info("Decommissioning node {}", nodeToDecommission.broadcastAddress());

    // Add data to the cluster while the node is decommissioning
    int numRows = 100;
    cluster.schemaChange("CREATE TABLE IF NOT EXISTS " + KEYSPACE + ".tbl (pk int, ck int, v int, PRIMARY KEY (pk, ck))");
    insertData(cluster, 1, numRows, ConsistencyLevel.ONE); // <--- HERE - when the pending ranges calculation is delayed, only ~50% of the inserted rows arrive

    // Check data before cleanup on nodeToRemainInCluster
    assertEquals(100, nodeToRemainInCluster.executeInternal("SELECT * FROM " + KEYSPACE + ".tbl").length);
}
{code}

was: This possible issue has been discovered while inspecting flaky tests of CASSANDRA-18824. Pending ranges calculation is executed asynchronously when the node is decommissioned.
If the data is inserted during decommissioning, and pending ranges calculation is delayed for some reason (it can be as it is not synchronous), we may end up with partial data loss. That can be just a wrong test. Thus, I perceive this ticket more like a memo for further investigation or discussion. Note that this has obviously been fixed by TCM. > Data loss during decommission possible due to a delayed and unsynced pending > ranges calculation > --- > > Key: CASSANDRA-19364 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19364 > Project: Cassandra > Issue Type: Bug > Components: Consistency/Bootstrap and Decommission >Reporter: Jacek Lewandowski >Priority: Normal > > This possible issue has been discovered while inspecting flaky tests of > CASSANDRA-18824. Pending ranges calculation is executed asynchronously when > the node is decommissioned. If the data is inserted during decommissioning, > and pending ranges calculation is delayed for some reason (it can be as it is > not synchronous), we may end up with partial data loss. That can be just a > wrong test. Thus, I perceive this ticket more like a memo for further > investigation or discussion. > Note that this has obviously been fixed by TCM. 
> The test in question was: > {code:java} > try (Cluster cluster = init(builder().withNodes(2) > > .withTokenSupplier(evenlyDistributedTokens(2)) > > .withNodeIdTopology(NetworkTopology.singleDcNetworkTopology(2, "dc0", > "rack0")) > .withConfig(config -> > config.with(NETWORK, GOSSIP)) > .start(), 1)) > { > IInvokableInstance nodeToDecommission = cluster.get(1); > IInvokableInstance nodeToRemainInCluster = cluster.get(2); > // Start decomission on nodeToDecommission > cluster.forEach(statusToDecommission(nodeToDecommission)); > logger.info("Decommissioning node {}", > nodeToDecommission.broadcastAddress()); > // Add data to cluster while node is decomissioning > int numRows = 100; > cluster.schemaChange("CREATE TABLE IF NOT EXISTS " + KEYSPACE + > ".tbl (pk int, ck int, v int, PRIMARY KEY (pk, ck))"); > insertData(cluster, 1, numRows, ConsistencyLevel.ONE); // >
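The failure mode described above can be illustrated with a toy model (purely illustrative Java, not Cassandra's routing code; every name below is invented): as long as the coordinator's view of token ownership has not been rewritten by the pending ranges calculation, writes keep landing on the leaving node.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the race: a coordinator that routes writes by a stale
// ownership map sends them to the decommissioning node until the
// (asynchronous) pending-ranges recalculation updates the map.
public class PendingRangesRace {
    static String route(Map<String, String> ownership, String token) {
        return ownership.get(token);
    }

    public static void main(String[] args) {
        Map<String, String> ownership = new HashMap<>();
        ownership.put("tokenA", "node1");
        ownership.put("tokenB", "node2"); // node2 is decommissioning

        // Before the recalculation runs, writes for tokenB still go to
        // node2, which is about to leave -> risk of losing those rows.
        String before = route(ownership, "tokenB");

        // After the recalculation rewrites ownership, node1 takes over.
        ownership.put("tokenB", "node1");
        String after = route(ownership, "tokenB");

        if (!"node2".equals(before) || !"node1".equals(after))
            throw new AssertionError(before + " / " + after);
        System.out.println("before recalc -> " + before + ", after recalc -> " + after);
    }
}
```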
[jira] [Created] (CASSANDRA-19364) Data loss during decommission possible due to a delayed and unsynced pending ranges calculation
Jacek Lewandowski created CASSANDRA-19364: - Summary: Data loss during decommission possible due to a delayed and unsynced pending ranges calculation Key: CASSANDRA-19364 URL: https://issues.apache.org/jira/browse/CASSANDRA-19364 Project: Cassandra Issue Type: Bug Components: Consistency/Bootstrap and Decommission Reporter: Jacek Lewandowski This possible issue was discovered while inspecting flaky tests of CASSANDRA-18824. Pending ranges calculation is executed asynchronously when a node is decommissioned. If data is inserted during decommissioning and the pending ranges calculation is delayed for some reason (which can happen, as it is not synchronous), we may end up with partial data loss. It may simply be a flawed test; thus, I see this ticket more as a memo for further investigation or discussion. Note that this has obviously been fixed by TCM. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-19363) Weird data loss in 3.11 flakiness during decommission
[ https://issues.apache.org/jira/browse/CASSANDRA-19363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacek Lewandowski updated CASSANDRA-19363: -- Fix Version/s: 3.11.x > Weird data loss in 3.11 flakiness during decommission > - > > Key: CASSANDRA-19363 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19363 > Project: Cassandra > Issue Type: Bug > Components: Consistency/Bootstrap and Decommission >Reporter: Jacek Lewandowski >Priority: Normal > Fix For: 3.11.x > > > While testing CASSANDRA-18824 on 3.11, we noticed one flaky result of the > newly added decommission test. It looked innocent; however, when digging into > the logs, it turned out that, for some reason, the data that were being > pumped into the cluster went to the decommissioned node instead of going to > the working node. > That is, the data were inserted into a 2-node cluster (RF=1) while, say, > node2 got decommissioned. The expected behavior would be that the data land > in node1 after that. However, for some reason, in this 1/1000 flaky test, the > situation was the opposite, and the data went to the decommissioned node, > resulting in a total loss. > I haven't found the reason. I don't know if it is a test failure or a > production code problem. I cannot prove that it is only a 3.11 problem. I'm > creating this ticket because if this is a real issue and exists on newer > branches, it is serious. > -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Created] (CASSANDRA-19363) Weird data loss in 3.11 flakiness during decommission
Jacek Lewandowski created CASSANDRA-19363: - Summary: Weird data loss in 3.11 flakiness during decommission Key: CASSANDRA-19363 URL: https://issues.apache.org/jira/browse/CASSANDRA-19363 Project: Cassandra Issue Type: Bug Components: Consistency/Bootstrap and Decommission Reporter: Jacek Lewandowski While testing CASSANDRA-18824 on 3.11, we noticed one flaky result of the newly added decommission test. It looked innocent; however, when digging into the logs, it turned out that, for some reason, the data that were being pumped into the cluster went to the decommissioned node instead of going to the working node. That is, the data were inserted into a 2-node cluster (RF=1) while, say, node2 got decommissioned. The expected behavior would be that the data land in node1 after that. However, for some reason, in this 1/1000 flaky test, the situation was the opposite, and the data went to the decommissioned node, resulting in a total loss. I haven't found the reason. I don't know if it is a test failure or a production code problem. I cannot prove that it is only a 3.11 problem. I'm creating this ticket because if this is a real issue and exists on newer branches, it is serious. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-13855) Implement Http Seed provider
[ https://issues.apache.org/jira/browse/CASSANDRA-13855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17811217#comment-17811217 ] Jacek Lewandowski commented on CASSANDRA-13855: --- though we already have a variety of cloud providers implemented > Implement Http Seed provider > > > Key: CASSANDRA-13855 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13855 > Project: Cassandra > Issue Type: Improvement > Components: Legacy/Coordination, Legacy/Core >Reporter: Jon Haddad >Assignee: Claude Warren >Priority: Low > Labels: lhf > Fix For: 5.x > > Attachments: 0001-Add-URL-Seed-Provider-trunk.txt, signature.asc, > signature.asc, signature.asc > > Time Spent: 0.5h > Remaining Estimate: 0h > > Seems like including a dead simple seed provider that can fetch from a URL, 1 > line per seed, would be useful. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-13855) Implement Http Seed provider
[ https://issues.apache.org/jira/browse/CASSANDRA-13855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17811209#comment-17811209 ] Jacek Lewandowski commented on CASSANDRA-13855: --- Can you guys provide some justification beyond that it would be nice to have this feature? Are there still real use cases for this? > Implement Http Seed provider > > > Key: CASSANDRA-13855 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13855 > Project: Cassandra > Issue Type: Improvement > Components: Legacy/Coordination, Legacy/Core >Reporter: Jon Haddad >Assignee: Claude Warren >Priority: Low > Labels: lhf > Fix For: 5.x > > Attachments: 0001-Add-URL-Seed-Provider-trunk.txt, signature.asc, > signature.asc, signature.asc > > Time Spent: 0.5h > Remaining Estimate: 0h > > Seems like including a dead simple seed provider that can fetch from a URL, 1 > line per seed, would be useful. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-13855) Implement Http Seed provider
[ https://issues.apache.org/jira/browse/CASSANDRA-13855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacek Lewandowski updated CASSANDRA-13855: -- Reviewers: Jacek Lewandowski, Jon Haddad, Stefan Miklosovic (was: Jon Haddad, Stefan Miklosovic) > Implement Http Seed provider > > > Key: CASSANDRA-13855 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13855 > Project: Cassandra > Issue Type: Improvement > Components: Legacy/Coordination, Legacy/Core >Reporter: Jon Haddad >Assignee: Claude Warren >Priority: Low > Labels: lhf > Fix For: 5.x > > Attachments: 0001-Add-URL-Seed-Provider-trunk.txt, signature.asc, > signature.asc, signature.asc > > Time Spent: 0.5h > Remaining Estimate: 0h > > Seems like including a dead simple seed provider that can fetch from a URL, 1 > line per seed, would be useful. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-19239) jvm-dtests crash on java 17
[ https://issues.apache.org/jira/browse/CASSANDRA-19239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17811206#comment-17811206 ] Jacek Lewandowski commented on CASSANDRA-19239: --- Hey, sorry for the delay. The context: when running {{NativeTransportEncryptionOptionsTest}} on 5.0, it passes; the diagrams above show the heap memory usage over the whole test. It looks OK to me; the peak usage is far below the 1 GB max heap size. With G1GC in the J17 tests, the memory usage is a bit higher, but this is unrelated to the topic. On the charts for trunk, however, you can see that the heap memory usage is greater; it actually grows nearly monotonically across subsequent test cases. It is not related to the metaspace, but I found it is correlated with the number of threads. I haven't attached a chart for that, but I saw that threads are started and stopped for individual test cases on 5.0, while on trunk some threads remain running and the total number of threads grows on average. This can also be verified by uncommenting the thread leak detector in the dtest cluster implementation. > jvm-dtests crash on java 17 > --- > > Key: CASSANDRA-19239 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19239 > Project: Cassandra > Issue Type: Bug > Components: Test/dtest/java >Reporter: Jacek Lewandowski >Assignee: Jacek Lewandowski >Priority: Normal > Fix For: 5.x > > Attachments: image-2024-01-23-13-11-50-313.png, > image-2024-01-23-13-12-33-954.png, screenshot-1.png, screenshot-2.png > > > This is a similar problem to the one mentioned in > https://issues.apache.org/jira/browse/CASSANDRA-15981 > I'm filing it because I've noticed the same problem on JDK17; perhaps we > should also disable unloading classes with CMS for JDK17. > However, I'm in favour of moving tests to G1 instead.
-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
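The thread leak detector mentioned in the comment above can be approximated outside the dtest framework with a before/after snapshot of live threads. This is only a sketch of the idea, not the in-tree implementation, and all names are invented:

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of a before/after thread-leak check: snapshot live threads, run
// the "test", snapshot again, and report threads that appeared and never
// went away. Not the actual dtest-framework detector.
public class ThreadLeakSketch {
    static Set<Thread> liveThreads() {
        return new HashSet<>(Thread.getAllStackTraces().keySet());
    }

    public static void main(String[] args) throws InterruptedException {
        Set<Thread> before = liveThreads();

        // Simulate a test that forgets to stop a worker thread.
        Thread leaked = new Thread(() -> {
            try { Thread.sleep(60_000); } catch (InterruptedException ignored) {}
        }, "leaked-worker");
        leaked.setDaemon(true);
        leaked.start();
        Thread.sleep(100); // give the thread a moment to show up in the snapshot

        Set<Thread> leftovers = liveThreads();
        leftovers.removeAll(before);
        for (Thread t : leftovers)
            System.out.println("possibly leaked: " + t.getName());
        if (leftovers.stream().noneMatch(t -> "leaked-worker".equals(t.getName())))
            throw new AssertionError("expected to detect leaked-worker");
        leaked.interrupt(); // clean up the demo thread
    }
}
```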
[jira] [Commented] (CASSANDRA-18840) Leakage of references to SSTable on unsuccessful operations
[ https://issues.apache.org/jira/browse/CASSANDRA-18840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17811196#comment-17811196 ] Jacek Lewandowski commented on CASSANDRA-18840: --- The issue is the assertion itself - it is actually unnecessary, because it fires too early. The aborted streams are closed in a later step of the streaming session, so the fix is simply to remove the assertion and provide a test case that verifies all references are eventually released. *Explanation* We open an {{SSTableReader}} and take an additional reference to it for each {{CassandraOutgoingFile}}. In the method with the assertion, we close only the primary {{SSTableReader}} reference, assuming all the remaining {{CassandraOutgoingFile}} objects were already closed. This is the case when everything completes correctly. When it doesn't, however, the remaining {{CassandraOutgoingFile}} objects are closed around [this place|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/streaming/StreamSession.java#L557]: {code:java} StreamSession.java logger.debug("[Stream #{}] Will close attached inbound {} and outbound {} channels", planId(), inbound, outbound); {code} The method with the assertion is called earlier, concretely [here|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/streaming/StreamSession.java#L545]: {code:java} StreamSession.java streamResult.handleSessionComplete(this); {code} That's why the references are eventually released regardless of the assertion. > Leakage of references to SSTable on unsuccessful operations > --- > > Key: CASSANDRA-18840 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18840 > Project: Cassandra > Issue Type: Bug > Components: Local/SSTable >Reporter: Stefan Miklosovic >Assignee: Jacek Lewandowski >Priority: Normal > Fix For: 5.0, 5.1 > > Time Spent: 1h 10m > Remaining Estimate: 0h > > This is a little bit tricky to describe correctly as I can talk about the > symptoms only.
I hit this issue when testing CASSANDRA-18781. > In a nutshell, when we go to bulkload an SSTable, it opens it in > SSTableLoader. If bulkloading fails on server side and exception is > propagated to the client, on releasing of references, it fails on this assert > (1). This practically means that we are leaking resources as something still > references that SSTable but it was not tidied up (on failure). On a happy > path, it is all de-referenced correctly. > I think that this might have implications beyond SSTable loading, e.g. this > could happen upon streaming too. > (1) > https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java#L245 -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
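The ordering problem explained in the comment above can be reduced to a toy reference-counting model (illustrative only; this is not Cassandra's `Ref`/`SSTableReader` machinery, and all names are made up): checking that all references are released before the per-stream consumers have been closed fires spuriously on the failure path, while checking after the teardown step succeeds.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy reference-counting model of the bug: the "assert refs == 0" check
// sat between releasing the primary reference and closing the remaining
// consumers, so on the failure path it fired before the count could drain.
public class RefCountSketch {
    static final AtomicInteger refs = new AtomicInteger(1); // the primary reference

    public static void main(String[] args) {
        // Each outgoing file takes its own reference, as described above.
        refs.incrementAndGet(); // outgoing file 1
        refs.incrementAndGet(); // outgoing file 2

        refs.decrementAndGet(); // release the primary reference
        // Asserting refs == 0 here (the removed assertion) is too early:
        if (refs.get() == 0) throw new AssertionError("should still be referenced");

        // The aborted outgoing files are closed in a later teardown step,
        // after which every reference really is released.
        refs.decrementAndGet(); // outgoing file 1 closed
        refs.decrementAndGet(); // outgoing file 2 closed
        System.out.println("remaining refs: " + refs.get()); // prints 0
        if (refs.get() != 0) throw new AssertionError("leak");
    }
}
```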
[jira] [Commented] (CASSANDRA-19239) jvm-dtests crash on java 17
[ https://issues.apache.org/jira/browse/CASSANDRA-19239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17810045#comment-17810045 ] Jacek Lewandowski commented on CASSANDRA-19239: --- [~samt] / [~ifesdjeen] - there is a thread leak on trunk - do you have any hints how to fix it? > jvm-dtests crash on java 17 > --- > > Key: CASSANDRA-19239 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19239 > Project: Cassandra > Issue Type: Bug > Components: Test/dtest/java >Reporter: Jacek Lewandowski >Assignee: Jacek Lewandowski >Priority: Normal > Fix For: 5.x > > Attachments: image-2024-01-23-13-11-50-313.png, > image-2024-01-23-13-12-33-954.png, screenshot-1.png, screenshot-2.png > > > This is a similar problem to the one mentioned in > https://issues.apache.org/jira/browse/CASSANDRA-15981 > I'm filling it because I've noticed the same problem on JDK17, perhaps we > should also disable unloading classes with CMS for JDK17. > However, I'm in favour of moving tests to G1 instead. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-19239) jvm-dtests crash on java 17
[ https://issues.apache.org/jira/browse/CASSANDRA-19239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17810030#comment-17810030 ] Jacek Lewandowski commented on CASSANDRA-19239: --- and now the biggest mystery - when I run this test from IntelliJ, the memory usage looks the same as on 5.0! > jvm-dtests crash on java 17 > --- > > Key: CASSANDRA-19239 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19239 > Project: Cassandra > Issue Type: Bug > Components: Test/dtest/java >Reporter: Jacek Lewandowski >Assignee: Jacek Lewandowski >Priority: Normal > Fix For: 5.x > > Attachments: image-2024-01-23-13-11-50-313.png, > image-2024-01-23-13-12-33-954.png, screenshot-1.png, screenshot-2.png > > > This is a similar problem to the one mentioned in > https://issues.apache.org/jira/browse/CASSANDRA-15981 > I'm filling it because I've noticed the same problem on JDK17, perhaps we > should also disable unloading classes with CMS for JDK17. > However, I'm in favour of moving tests to G1 instead. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-19239) jvm-dtests crash on java 17
[ https://issues.apache.org/jira/browse/CASSANDRA-19239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17810023#comment-17810023 ] Jacek Lewandowski commented on CASSANDRA-19239: --- Let me see what we have on heap... > jvm-dtests crash on java 17 > --- > > Key: CASSANDRA-19239 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19239 > Project: Cassandra > Issue Type: Bug > Components: Test/dtest/java >Reporter: Jacek Lewandowski >Assignee: Jacek Lewandowski >Priority: Normal > Fix For: 5.x > > Attachments: image-2024-01-23-13-11-50-313.png, > image-2024-01-23-13-12-33-954.png, screenshot-1.png, screenshot-2.png > > > This is a similar problem to the one mentioned in > https://issues.apache.org/jira/browse/CASSANDRA-15981 > I'm filling it because I've noticed the same problem on JDK17, perhaps we > should also disable unloading classes with CMS for JDK17. > However, I'm in favour of moving tests to G1 instead. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-19239) jvm-dtests crash on java 17
[ https://issues.apache.org/jira/browse/CASSANDRA-19239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17810022#comment-17810022 ] Jacek Lewandowski commented on CASSANDRA-19239: --- And actually it seems like we must have a leak in trunk :( this is from 5.0 / j11 !screenshot-1.png! 5.0 / j17 !screenshot-2.png! J17 usage is higher than J11, but trunk uses twice as much memory > jvm-dtests crash on java 17 > --- > > Key: CASSANDRA-19239 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19239 > Project: Cassandra > Issue Type: Bug > Components: Test/dtest/java >Reporter: Jacek Lewandowski >Assignee: Jacek Lewandowski >Priority: Normal > Fix For: 5.x > > Attachments: image-2024-01-23-13-11-50-313.png, > image-2024-01-23-13-12-33-954.png, screenshot-1.png, screenshot-2.png > > > This is a similar problem to the one mentioned in > https://issues.apache.org/jira/browse/CASSANDRA-15981 > I'm filling it because I've noticed the same problem on JDK17, perhaps we > should also disable unloading classes with CMS for JDK17. > However, I'm in favour of moving tests to G1 instead. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-19239) jvm-dtests crash on java 17
[ https://issues.apache.org/jira/browse/CASSANDRA-19239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacek Lewandowski updated CASSANDRA-19239: -- Attachment: screenshot-2.png > jvm-dtests crash on java 17 > --- > > Key: CASSANDRA-19239 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19239 > Project: Cassandra > Issue Type: Bug > Components: Test/dtest/java >Reporter: Jacek Lewandowski >Assignee: Jacek Lewandowski >Priority: Normal > Fix For: 5.x > > Attachments: image-2024-01-23-13-11-50-313.png, > image-2024-01-23-13-12-33-954.png, screenshot-1.png, screenshot-2.png > > > This is a similar problem to the one mentioned in > https://issues.apache.org/jira/browse/CASSANDRA-15981 > I'm filling it because I've noticed the same problem on JDK17, perhaps we > should also disable unloading classes with CMS for JDK17. > However, I'm in favour of moving tests to G1 instead. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-19239) jvm-dtests crash on java 17
[ https://issues.apache.org/jira/browse/CASSANDRA-19239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacek Lewandowski updated CASSANDRA-19239: -- Attachment: screenshot-1.png > jvm-dtests crash on java 17 > --- > > Key: CASSANDRA-19239 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19239 > Project: Cassandra > Issue Type: Bug > Components: Test/dtest/java >Reporter: Jacek Lewandowski >Assignee: Jacek Lewandowski >Priority: Normal > Fix For: 5.x > > Attachments: image-2024-01-23-13-11-50-313.png, > image-2024-01-23-13-12-33-954.png, screenshot-1.png > > > This is a similar problem to the one mentioned in > https://issues.apache.org/jira/browse/CASSANDRA-15981 > I'm filling it because I've noticed the same problem on JDK17, perhaps we > should also disable unloading classes with CMS for JDK17. > However, I'm in favour of moving tests to G1 instead. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-19239) jvm-dtests crash on java 17
[ https://issues.apache.org/jira/browse/CASSANDRA-19239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17810018#comment-17810018 ] Jacek Lewandowski commented on CASSANDRA-19239: --- Referring to your other comment - why only on trunk - well, trunk differs significantly from 5.0 (TCM and other stuff) and thus uses more memory. On the other hand, as you can see, there is not that much difference between what we have and what we need. I'll try to analyze the memory usage on 5.0 as well to have a complete view of the problem. > jvm-dtests crash on java 17 > --- > > Key: CASSANDRA-19239 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19239 > Project: Cassandra > Issue Type: Bug > Components: Test/dtest/java >Reporter: Jacek Lewandowski >Assignee: Jacek Lewandowski >Priority: Normal > Fix For: 5.x > > Attachments: image-2024-01-23-13-11-50-313.png, > image-2024-01-23-13-12-33-954.png > > > This is a similar problem to the one mentioned in > https://issues.apache.org/jira/browse/CASSANDRA-15981 > I'm filing it because I've noticed the same problem on JDK17; perhaps we > should also disable unloading classes with CMS for JDK17. > However, I'm in favour of moving tests to G1 instead. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-19239) jvm-dtests crash on java 17
[ https://issues.apache.org/jira/browse/CASSANDRA-19239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17810016#comment-17810016 ] Jacek Lewandowski commented on CASSANDRA-19239: --- If I change those settings, I'll do that only for the tests; I'm not going to touch the production configuration. Anyway, the immediate solution is to increase the heap size for tests. Those tests use the {{medium}} configuration in CircleCI, which means 2 vCPUs and 4 GB of RAM. Increasing the heap size from 1 GB to 1.5 GB or 2 GB should be fine; 1 GB is quite small for JVM dtests, especially if we run multiple nodes. Also, the diagrams above show that the cleanup is not perfect and we have a lot of leftovers after each test case. > jvm-dtests crash on java 17 > --- > > Key: CASSANDRA-19239 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19239 > Project: Cassandra > Issue Type: Bug > Components: Test/dtest/java >Reporter: Jacek Lewandowski >Assignee: Jacek Lewandowski >Priority: Normal > Fix For: 5.x > > Attachments: image-2024-01-23-13-11-50-313.png, > image-2024-01-23-13-12-33-954.png > > > This is a similar problem to the one mentioned in > https://issues.apache.org/jira/browse/CASSANDRA-15981 > I'm filing it because I've noticed the same problem on JDK17; perhaps we > should also disable unloading classes with CMS for JDK17. > However, I'm in favour of moving tests to G1 instead. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
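For whoever ends up bumping the heap: the limit a test JVM actually got can be sanity-checked from inside the process. This is a trivial sketch under the standard assumption that {{Runtime.maxMemory()}} reports roughly the configured {{-Xmx}} value; the class name is made up:

```java
// Sanity check for the effective heap limit of a test JVM, e.g. after
// raising -Xmx for jvm-dtest runs. Runtime.maxMemory() reports the
// maximum amount of memory the JVM will attempt to use.
public class HeapCheck {
    public static void main(String[] args) {
        long maxMiB = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("max heap ~= " + maxMiB + " MiB");
        // For the sizing discussed above: on a 4 GB medium executor, two
        // nodes with 1.5-2 GB heaps plus metaspace and native memory is
        // tight but workable.
    }
}
```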
[jira] [Updated] (CASSANDRA-19126) Streaming appears to be incompatible with different storage_compatibility_mode settings
[ https://issues.apache.org/jira/browse/CASSANDRA-19126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacek Lewandowski updated CASSANDRA-19126: -- Fix Version/s: 5.1 (was: 5.x) Since Version: 5.0-beta1 Source Control Link: https://github.com/apache/cassandra/commit/d422eb1f353d27264665bfe3357dac1160814ea1 Resolution: Fixed Status: Resolved (was: Ready to Commit) > Streaming appears to be incompatible with different > storage_compatibility_mode settings > --- > > Key: CASSANDRA-19126 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19126 > Project: Cassandra > Issue Type: Bug > Components: Consistency/Streaming, Legacy/Streaming and Messaging, > Messaging/Internode, Tool/bulk load >Reporter: Branimir Lambov >Assignee: Jacek Lewandowski >Priority: Normal > Fix For: 5.0-rc, 5.1 > > > In particular, SSTableLoader appears to be incompatible with > storage_compatibility_mode: NONE, which manifests as a failure of > {{org.apache.cassandra.distributed.test.SSTableLoaderEncryptionOptionsTest}} > when the flag is turned on (found during CASSANDRA-18753 testing). Setting > {{storage_compatibility_mode: NONE}} in the tool configuration yaml does not > help (according to the docs, this setting is not picked up). > This is likely a bigger problem as the acceptable streaming version for C* 5 > is 12 only in legacy mode and 13 only in none, i.e. two C* 5 nodes do not > appear to be able to stream with each other if their setting for the > compatibility mode is different. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-19126) Streaming appears to be incompatible with different storage_compatibility_mode settings
[ https://issues.apache.org/jira/browse/CASSANDRA-19126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacek Lewandowski updated CASSANDRA-19126: -- Reviewers: Berenguer Blasi, Branimir Lambov (was: Branimir Lambov, Jacek Lewandowski) > Streaming appears to be incompatible with different > storage_compatibility_mode settings > --- > > Key: CASSANDRA-19126 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19126 > Project: Cassandra > Issue Type: Bug > Components: Consistency/Streaming, Legacy/Streaming and Messaging, > Messaging/Internode, Tool/bulk load >Reporter: Branimir Lambov >Assignee: Jacek Lewandowski >Priority: Normal > Fix For: 5.0-rc, 5.x > > > In particular, SSTableLoader appears to be incompatible with > storage_compatibility_mode: NONE, which manifests as a failure of > {{org.apache.cassandra.distributed.test.SSTableLoaderEncryptionOptionsTest}} > when the flag is turned on (found during CASSANDRA-18753 testing). Setting > {{storage_compatibility_mode: NONE}} in the tool configuration yaml does not > help (according to the docs, this setting is not picked up). > This is likely a bigger problem as the acceptable streaming version for C* 5 > is 12 only in legacy mode and 13 only in none, i.e. two C* 5 nodes do not > appear to be able to stream with each other if their setting for the > compatibility mode is different. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-19126) Streaming appears to be incompatible with different storage_compatibility_mode settings
[ https://issues.apache.org/jira/browse/CASSANDRA-19126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacek Lewandowski updated CASSANDRA-19126: -- Status: Ready to Commit (was: Changes Suggested) > Streaming appears to be incompatible with different > storage_compatibility_mode settings > --- > > Key: CASSANDRA-19126 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19126 > Project: Cassandra > Issue Type: Bug > Components: Consistency/Streaming, Legacy/Streaming and Messaging, > Messaging/Internode, Tool/bulk load >Reporter: Branimir Lambov >Assignee: Jacek Lewandowski >Priority: Normal > Fix For: 5.0-rc, 5.x > > > In particular, SSTableLoader appears to be incompatible with > storage_compatibility_mode: NONE, which manifests as a failure of > {{org.apache.cassandra.distributed.test.SSTableLoaderEncryptionOptionsTest}} > when the flag is turned on (found during CASSANDRA-18753 testing). Setting > {{storage_compatibility_mode: NONE}} in the tool configuration yaml does not > help (according to the docs, this setting is not picked up). > This is likely a bigger problem as the acceptable streaming version for C* 5 > is 12 only in legacy mode and 13 only in none, i.e. two C* 5 nodes do not > appear to be able to stream with each other if their setting for the > compatibility mode is different. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Comment Edited] (CASSANDRA-19239) jvm-dtests crash on java 17
[ https://issues.apache.org/jira/browse/CASSANDRA-19239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17809924#comment-17809924 ] Jacek Lewandowski edited comment on CASSANDRA-19239 at 1/23/24 12:12 PM: - I think I have some idea for this mystery. Java 17 uses G1GC, and G1GC is known for using more memory than declared, but that is not important in this case. We have {{G1HeapRegionSize=16m}} in the G1 settings, which is the problem. The region size is large enough to cause ~30% greater heap usage in the case of that particular test. Another problem, which can also be responsible for the simulation test failures, is that we force a 1 GB heap size for tests regardless of any Xmx parameters defined for particular test tasks. I'm going to fix all of those issues and we will see. tl;dr with that parameter set to 1m !image-2024-01-23-13-11-50-313.png! with that parameter set to 16m !image-2024-01-23-13-12-33-954.png! was (Author: jlewandowski): I think I have some idea for this mystery. Java 17 uses G1GC and G1GC is known for using more memory that declared. But it is not important in this case. We have {{G1HeapRegionSize=16m}} in G1 settings which is the problem. The region size large enough to cause 30% greater heap usage in case of that particular test. Another problem, which can be also responsible for simulation tests failures is that we force set 1G heap size for tests regardless any Xmx parameters defined for particular test tasks. I'm going to fix all of those issues and we will see. 
> jvm-dtests crash on java 17 > --- > > Key: CASSANDRA-19239 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19239 > Project: Cassandra > Issue Type: Bug > Components: Test/dtest/java >Reporter: Jacek Lewandowski >Assignee: Jacek Lewandowski >Priority: Normal > Fix For: 5.x > > Attachments: image-2024-01-23-13-11-50-313.png, > image-2024-01-23-13-12-33-954.png > > > This is a similar problem to the one mentioned in > https://issues.apache.org/jira/browse/CASSANDRA-15981 > I'm filling it because I've noticed the same problem on JDK17, perhaps we > should also disable unloading classes with CMS for JDK17. > However, I'm in favour of moving tests to G1 instead. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-19239) jvm-dtests crash on java 17
[ https://issues.apache.org/jira/browse/CASSANDRA-19239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17809924#comment-17809924 ] Jacek Lewandowski commented on CASSANDRA-19239: --- I think I have some idea for this mystery. Java 17 uses G1GC, and G1GC is known for using more memory than declared, but that is not important in this case. We have {{G1HeapRegionSize=16m}} in the G1 settings, which is the problem. The region size is large enough to cause ~30% greater heap usage in the case of that particular test. Another problem, which can also be responsible for the simulation test failures, is that we force a 1 GB heap size for tests regardless of any Xmx parameters defined for particular test tasks. I'm going to fix all of those issues and we will see. > jvm-dtests crash on java 17 > --- > > Key: CASSANDRA-19239 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19239 > Project: Cassandra > Issue Type: Bug > Components: Test/dtest/java >Reporter: Jacek Lewandowski >Priority: Normal > Fix For: 5.x > > > This is a similar problem to the one mentioned in > https://issues.apache.org/jira/browse/CASSANDRA-15981 > I'm filing it because I've noticed the same problem on JDK17; perhaps we > should also disable unloading classes with CMS for JDK17. > However, I'm in favour of moving tests to G1 instead. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
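The overhead described above has a simple arithmetic core: G1 treats objects larger than half a region as humongous and rounds each such allocation up to a whole number of regions, so a larger region size wastes more heap per allocation. A rough sketch of that rounding (the 9 MB object size is an illustrative assumption, not a measurement from the failing test):

```java
public class RegionWaste {
    // Bytes actually reserved when an allocation of `size` bytes is humongous:
    // G1 rounds it up to a whole number of regions of `regionSize` bytes.
    static long occupied(long size, long regionSize) {
        long regions = (size + regionSize - 1) / regionSize; // ceiling division
        return regions * regionSize;
    }

    public static void main(String[] args) {
        long nineMb = 9L * 1024 * 1024;
        // With 1 MB regions a hypothetical 9 MB object fills 9 regions exactly.
        System.out.println(occupied(nineMb, 1L * 1024 * 1024));  // 9437184
        // With 16 MB regions the same object reserves one full 16 MB region.
        System.out.println(occupied(nineMb, 16L * 1024 * 1024)); // 16777216
    }
}
```

Many such allocations of awkward sizes can plausibly add up to the ~30% extra heap usage observed with {{G1HeapRegionSize=16m}} versus 1m.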
[jira] [Assigned] (CASSANDRA-19239) jvm-dtests crash on java 17
[ https://issues.apache.org/jira/browse/CASSANDRA-19239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacek Lewandowski reassigned CASSANDRA-19239: - Assignee: Jacek Lewandowski > jvm-dtests crash on java 17 > --- > > Key: CASSANDRA-19239 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19239 > Project: Cassandra > Issue Type: Bug > Components: Test/dtest/java >Reporter: Jacek Lewandowski >Assignee: Jacek Lewandowski >Priority: Normal > Fix For: 5.x > > > This is a similar problem to the one mentioned in > https://issues.apache.org/jira/browse/CASSANDRA-15981 > I'm filling it because I've noticed the same problem on JDK17, perhaps we > should also disable unloading classes with CMS for JDK17. > However, I'm in favour of moving tests to G1 instead. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-19239) jvm-dtests crash on java 17
[ https://issues.apache.org/jira/browse/CASSANDRA-19239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17809317#comment-17809317 ] Jacek Lewandowski commented on CASSANDRA-19239: --- Thanks [~e.dimitrova], I agree, if there is no 5.0 failure, it does not make sense to make it a blocker for 5.0-rc. > jvm-dtests crash on java 17 > --- > > Key: CASSANDRA-19239 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19239 > Project: Cassandra > Issue Type: Bug > Components: Test/dtest/java >Reporter: Jacek Lewandowski >Priority: Normal > Fix For: 5.x > > > This is a similar problem to the one mentioned in > https://issues.apache.org/jira/browse/CASSANDRA-15981 > I'm filling it because I've noticed the same problem on JDK17, perhaps we > should also disable unloading classes with CMS for JDK17. > However, I'm in favour of moving tests to G1 instead. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-19126) Streaming appears to be incompatible with different storage_compatibility_mode settings
[ https://issues.apache.org/jira/browse/CASSANDRA-19126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804419#comment-17804419 ] Jacek Lewandowski commented on CASSANDRA-19126: --- What is the conclusion here - is the PR acceptable as an immediate solution? [~blambov], [~Bereng] ? > Streaming appears to be incompatible with different > storage_compatibility_mode settings > --- > > Key: CASSANDRA-19126 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19126 > Project: Cassandra > Issue Type: Bug > Components: Consistency/Streaming, Legacy/Streaming and Messaging, > Messaging/Internode, Tool/bulk load >Reporter: Branimir Lambov >Assignee: Jacek Lewandowski >Priority: Normal > Fix For: 5.0-rc, 5.x > > > In particular, SSTableLoader appears to be incompatible with > storage_compatibility_mode: NONE, which manifests as a failure of > {{org.apache.cassandra.distributed.test.SSTableLoaderEncryptionOptionsTest}} > when the flag is turned on (found during CASSANDRA-18753 testing). Setting > {{storage_compatibility_mode: NONE}} in the tool configuration yaml does not > help (according to the docs, this setting is not picked up). > This is likely a bigger problem as the acceptable streaming version for C* 5 > is 12 only in legacy mode and 13 only in none, i.e. two C* 5 nodes do not > appear to be able to stream with each other if their setting for the > compatibility mode is different. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-18824) Backport CASSANDRA-16418: Cleanup behaviour during node decommission caused missing replica
[ https://issues.apache.org/jira/browse/CASSANDRA-18824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804407#comment-17804407 ] Jacek Lewandowski commented on CASSANDRA-18824: --- [~brandon.williams] thank you for running the tests. I didn't run them; I was waiting for feedback > Backport CASSANDRA-16418: Cleanup behaviour during node decommission caused > missing replica > --- > > Key: CASSANDRA-18824 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18824 > Project: Cassandra > Issue Type: Bug > Components: Consistency/Bootstrap and Decommission >Reporter: Szymon Miezal >Assignee: Szymon Miezal >Priority: Normal > Fix For: 3.0.x, 3.11.x, 4.0.x, 4.1.x, 5.0.x, 5.x > > Time Spent: 1.5h > Remaining Estimate: 0h > > Node decommission triggers data transfer to other nodes. While this transfer > is in progress, > receiving nodes temporarily hold token ranges in a pending state. However, > the cleanup process currently doesn't consider these pending ranges when > calculating token ownership. > As a consequence, data that is already stored in sstables gets inadvertently > cleaned up. > STR: > * Create a two-node cluster > * Create a keyspace with RF=1 > * Insert sample data (assert data is available when querying both nodes) > * Start the decommission process on node 1 > * Start running cleanup in a loop on node 2 until decommission on node 1 > finishes > * Verify all rows are in the cluster - it will fail, as the previous step > removed some of the rows > It seems that the cleanup process does not take into account the pending > ranges; it uses only the local ranges - > [https://github.com/apache/cassandra/blob/caad2f24f95b494d05c6b5d86a8d25fbee58d7c2/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L466]. > There are two solutions to the problem. > One would be to change the cleanup process so that it starts taking > pending ranges into account. 
Even though it might sound tempting at first, it > will require involved changes and a lot of testing effort. > Alternatively, we could interrupt/prevent the cleanup process from running > when any pending range on a node is detected. That sounds like a reasonable > alternative to the problem and something that is relatively easy to implement. > The bug has already been fixed in 4.x with CASSANDRA-16418; the goal of this > ticket is to backport it to 3.x. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-18824) Backport CASSANDRA-16418: Cleanup behaviour during node decommission caused missing replica
[ https://issues.apache.org/jira/browse/CASSANDRA-18824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17802896#comment-17802896 ] Jacek Lewandowski commented on CASSANDRA-18824: --- [~brandon.williams] - would you review my PRs? > Backport CASSANDRA-16418: Cleanup behaviour during node decommission caused > missing replica > --- > > Key: CASSANDRA-18824 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18824 > Project: Cassandra > Issue Type: Bug > Components: Consistency/Bootstrap and Decommission >Reporter: Szymon Miezal >Assignee: Szymon Miezal >Priority: Normal > Fix For: 3.0.x, 3.11.x, 4.0.x, 4.1.x, 5.0.x, 5.x > > Time Spent: 1h > Remaining Estimate: 0h > > Node decommission triggers data transfer to other nodes. While this transfer > is in progress, > receiving nodes temporarily hold token ranges in a pending state. However, > the cleanup process currently doesn't consider these pending ranges when > calculating token ownership. > As a consequence, data that is already stored in sstables gets inadvertently > cleaned up. > STR: > * Create two node cluster > * Create keyspace with RF=1 > * Insert sample data (assert data is available when querying both nodes) > * Start decommission process of node 1 > * Start running cleanup in a loop on node 2 until decommission on node 1 > finishes > * Verify of all rows are in the cluster - it will fail as the previous step > removed some of the rows > It seems that the cleanup process does not take into account the pending > ranges, it uses only the local ranges - > [https://github.com/apache/cassandra/blob/caad2f24f95b494d05c6b5d86a8d25fbee58d7c2/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L466]. > There are two solutions to the problem. > One would be to change the cleanup process in a way that it start taking > pending ranges into account. Even thought it might sound tempting at first it > will require involving changes and a lot of testing effort. 
> Alternatively we could interrupt/prevent the cleanup process from running > when any pending range on a node is detected. That sounds like a reasonable > alternative to the problem and something that is relatively easy to implement. > The bug has been already fixed in 4.x with CASSANDRA-16418, the goal of this > ticket is to backport it to 3.x. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-18824) Backport CASSANDRA-16418: Cleanup behaviour during node decommission caused missing replica
[ https://issues.apache.org/jira/browse/CASSANDRA-18824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17802776#comment-17802776 ] Jacek Lewandowski commented on CASSANDRA-18824: --- np, I'll handle that in the PRs > Backport CASSANDRA-16418: Cleanup behaviour during node decommission caused > missing replica > --- > > Key: CASSANDRA-18824 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18824 > Project: Cassandra > Issue Type: Bug > Components: Consistency/Bootstrap and Decommission >Reporter: Szymon Miezal >Assignee: Szymon Miezal >Priority: Normal > Fix For: 3.0.x, 3.11.x, 4.0.x, 4.1.x, 5.0.x, 5.x > > Time Spent: 1h > Remaining Estimate: 0h > > Node decommission triggers data transfer to other nodes. While this transfer > is in progress, > receiving nodes temporarily hold token ranges in a pending state. However, > the cleanup process currently doesn't consider these pending ranges when > calculating token ownership. > As a consequence, data that is already stored in sstables gets inadvertently > cleaned up. > STR: > * Create two node cluster > * Create keyspace with RF=1 > * Insert sample data (assert data is available when querying both nodes) > * Start decommission process of node 1 > * Start running cleanup in a loop on node 2 until decommission on node 1 > finishes > * Verify of all rows are in the cluster - it will fail as the previous step > removed some of the rows > It seems that the cleanup process does not take into account the pending > ranges, it uses only the local ranges - > [https://github.com/apache/cassandra/blob/caad2f24f95b494d05c6b5d86a8d25fbee58d7c2/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L466]. > There are two solutions to the problem. > One would be to change the cleanup process in a way that it start taking > pending ranges into account. Even thought it might sound tempting at first it > will require involving changes and a lot of testing effort. 
> Alternatively we could interrupt/prevent the cleanup process from running > when any pending range on a node is detected. That sounds like a reasonable > alternative to the problem and something that is relatively easy to implement. > The bug has been already fixed in 4.x with CASSANDRA-16418, the goal of this > ticket is to backport it to 3.x. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-18824) Backport CASSANDRA-16418: Cleanup behaviour during node decommission caused missing replica
[ https://issues.apache.org/jira/browse/CASSANDRA-18824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17802659#comment-17802659 ] Jacek Lewandowski commented on CASSANDRA-18824: --- Compilation fails on some branches - I've created PRs yesterday, they are attached in the links section. I'm applying some fixes on each of them. When ready, I'll rerun the CI > Backport CASSANDRA-16418: Cleanup behaviour during node decommission caused > missing replica > --- > > Key: CASSANDRA-18824 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18824 > Project: Cassandra > Issue Type: Bug > Components: Consistency/Bootstrap and Decommission >Reporter: Szymon Miezal >Assignee: Szymon Miezal >Priority: Normal > Fix For: 3.0.x, 3.11.x, 4.0.x, 4.1.x, 5.0.x, 5.x > > Time Spent: 1h > Remaining Estimate: 0h > > Node decommission triggers data transfer to other nodes. While this transfer > is in progress, > receiving nodes temporarily hold token ranges in a pending state. However, > the cleanup process currently doesn't consider these pending ranges when > calculating token ownership. > As a consequence, data that is already stored in sstables gets inadvertently > cleaned up. > STR: > * Create two node cluster > * Create keyspace with RF=1 > * Insert sample data (assert data is available when querying both nodes) > * Start decommission process of node 1 > * Start running cleanup in a loop on node 2 until decommission on node 1 > finishes > * Verify of all rows are in the cluster - it will fail as the previous step > removed some of the rows > It seems that the cleanup process does not take into account the pending > ranges, it uses only the local ranges - > [https://github.com/apache/cassandra/blob/caad2f24f95b494d05c6b5d86a8d25fbee58d7c2/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L466]. > There are two solutions to the problem. > One would be to change the cleanup process in a way that it start taking > pending ranges into account. 
Even though it might sound tempting at first, it > will require involved changes and a lot of testing effort. > Alternatively, we could interrupt/prevent the cleanup process from running > when any pending range on a node is detected. That sounds like a reasonable > alternative to the problem and something that is relatively easy to implement. > The bug has already been fixed in 4.x with CASSANDRA-16418; the goal of this > ticket is to backport it to 3.x. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-19126) Streaming appears to be incompatible with different storage_compatibility_mode settings
[ https://issues.apache.org/jira/browse/CASSANDRA-19126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17802208#comment-17802208 ] Jacek Lewandowski commented on CASSANDRA-19126: --- So the rule will be: if it is a client, it is assumed to run outside the server and should accept all protocol versions. If it is a tool, it will just use the server configuration. Looking at what we consider tools and clients, it seems tools operate on existing sstable files in data directories. BulkLoader was also considered a tool, but I think it is more appropriate to consider it a client. I changed that and made it work with client initialization. > Streaming appears to be incompatible with different > storage_compatibility_mode settings > --- > > Key: CASSANDRA-19126 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19126 > Project: Cassandra > Issue Type: Bug > Components: Consistency/Streaming, Legacy/Streaming and Messaging, > Messaging/Internode, Tool/bulk load >Reporter: Branimir Lambov >Assignee: Jacek Lewandowski >Priority: Normal > Fix For: 5.0-rc, 5.x > > > In particular, SSTableLoader appears to be incompatible with > storage_compatibility_mode: NONE, which manifests as a failure of > {{org.apache.cassandra.distributed.test.SSTableLoaderEncryptionOptionsTest}} > when the flag is turned on (found during CASSANDRA-18753 testing). Setting > {{storage_compatibility_mode: NONE}} in the tool configuration yaml does not > help (according to the docs, this setting is not picked up). > This is likely a bigger problem as the acceptable streaming version for C* 5 > is 12 only in legacy mode and 13 only in none, i.e. two C* 5 nodes do not > appear to be able to stream with each other if their setting for the > compatibility mode is different. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
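The client-versus-tool rule from the comment above can be sketched as a small decision helper. The class, enum, and method names here are illustrative only, not Cassandra's actual API; the version numbers 12 (legacy mode) and 13 (storage_compatibility_mode NONE) come from the ticket description:

```java
import java.util.Set;

public class StreamVersionPolicy {
    static final int LEGACY_STREAM_VERSION = 12; // C* 5 in legacy compatibility mode
    static final int NONE_STREAM_VERSION = 13;   // C* 5 with storage_compatibility_mode: NONE

    enum Context { SERVER_LEGACY, SERVER_NONE, CLIENT }

    // A client (e.g. BulkLoader) runs outside the server and must accept every
    // protocol version; server-side contexts use the single configured version.
    static Set<Integer> acceptedVersions(Context ctx) {
        switch (ctx) {
            case SERVER_LEGACY: return Set.of(LEGACY_STREAM_VERSION);
            case SERVER_NONE:   return Set.of(NONE_STREAM_VERSION);
            case CLIENT:        return Set.of(LEGACY_STREAM_VERSION, NONE_STREAM_VERSION);
            default: throw new IllegalArgumentException("unknown context: " + ctx);
        }
    }

    public static void main(String[] args) {
        System.out.println(acceptedVersions(Context.CLIENT).size()); // 2
    }
}
```

Under this sketch, two servers with different compatibility modes still cannot stream with each other, which is the larger problem the ticket description points out.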
[jira] [Commented] (CASSANDRA-16565) Remove dependency on sigar
[ https://issues.apache.org/jira/browse/CASSANDRA-16565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17802122#comment-17802122 ] Jacek Lewandowski commented on CASSANDRA-16565: --- I've left a few comments. > Remove dependency on sigar > -- > > Key: CASSANDRA-16565 > URL: https://issues.apache.org/jira/browse/CASSANDRA-16565 > Project: Cassandra > Issue Type: Improvement > Components: Build >Reporter: David Capwell >Assignee: Claude Warren >Priority: Normal > Fix For: 5.x > > > sigar is used to check if the environment has good settings for running C*, > but requires we bundle a lot of native libraries to perform this check (which > can also be done elsewhere). This project also appears to be dead, as the > last commit was around 6 years ago. > With the move to resolve artifacts rather than commit them, removing this > dependency would remove the majority of the artifacts fetched from GitHub. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Comment Edited] (CASSANDRA-19239) jvm-dtests crash on java 17
[ https://issues.apache.org/jira/browse/CASSANDRA-19239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17802118#comment-17802118 ] Jacek Lewandowski edited comment on CASSANDRA-19239 at 1/3/24 11:02 AM: https://app.circleci.com/pipelines/github/jacek-lewandowski/cassandra/1252/workflows/234ccc92-65f2-4adb-a68a-a5505398f4d0/jobs/63795/parallel-runs/7?filterBy=FAILED https://app.circleci.com/pipelines/github/jacek-lewandowski/cassandra/1252/workflows/33925172-aab5-43be-9707-4ffece98d926/jobs/63798 https://app.circleci.com/pipelines/github/jacek-lewandowski/cassandra/1252/workflows/b10132a7-1b4f-44d0-8808-f19a3b5fde69/jobs/63797 was (Author: jlewandowski): https://app.circleci.com/pipelines/github/jacek-lewandowski/cassandra/1252/workflows/234ccc92-65f2-4adb-a68a-a5505398f4d0/jobs/63795/parallel-runs/7?filterBy=FAILED > jvm-dtests crash on java 17 > --- > > Key: CASSANDRA-19239 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19239 > Project: Cassandra > Issue Type: Bug > Components: Test/dtest/java >Reporter: Jacek Lewandowski >Priority: Normal > Fix For: 5.0-rc, 5.1 > > > This is a similar problem to the one mentioned in > https://issues.apache.org/jira/browse/CASSANDRA-15981 > I'm filling it because I've noticed the same problem on JDK17, perhaps we > should also disable unloading classes with CMS for JDK17. > However, I'm in favour of moving tests to G1 instead. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-19239) jvm-dtests crash on java 17
[ https://issues.apache.org/jira/browse/CASSANDRA-19239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17802118#comment-17802118 ] Jacek Lewandowski commented on CASSANDRA-19239: --- https://app.circleci.com/pipelines/github/jacek-lewandowski/cassandra/1252/workflows/234ccc92-65f2-4adb-a68a-a5505398f4d0/jobs/63795/parallel-runs/7?filterBy=FAILED > jvm-dtests crash on java 17 > --- > > Key: CASSANDRA-19239 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19239 > Project: Cassandra > Issue Type: Bug > Components: Test/dtest/java >Reporter: Jacek Lewandowski >Priority: Normal > Fix For: 5.0-rc, 5.1 > > > This is a similar problem to the one mentioned in > https://issues.apache.org/jira/browse/CASSANDRA-15981 > I'm filling it because I've noticed the same problem on JDK17, perhaps we > should also disable unloading classes with CMS for JDK17. > However, I'm in favour of moving tests to G1 instead. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Comment Edited] (CASSANDRA-19126) Streaming appears to be incompatible with different storage_compatibility_mode settings
[ https://issues.apache.org/jira/browse/CASSANDRA-19126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17799836#comment-17799836 ] Jacek Lewandowski edited comment on CASSANDRA-19126 at 1/2/24 8:40 AM: --- By running tests, I realized I don't know what the Scrubber should do: which resulting format should it choose? Maybe we need to provide that explicitly? What do you think? EDIT: After thinking about it a bit, the Scrubber is not actually something to be run outside of the server, or at least, it would be pretty rare. Therefore, it should take the server configuration. I'll also consider letting the user provide the target format/version through the command line. was (Author: jlewandowski): By running tests, I realized I don't know what the Scrubber should do, which resulting format should it choose? Maybe we need to provide that explicitly? what do you think? > Streaming appears to be incompatible with different > storage_compatibility_mode settings > --- > > Key: CASSANDRA-19126 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19126 > Project: Cassandra > Issue Type: Bug > Components: Consistency/Streaming, Legacy/Streaming and Messaging, > Messaging/Internode, Tool/bulk load >Reporter: Branimir Lambov >Assignee: Jacek Lewandowski >Priority: Normal > Fix For: 5.0-rc, 5.x > > > In particular, SSTableLoader appears to be incompatible with > storage_compatibility_mode: NONE, which manifests as a failure of > {{org.apache.cassandra.distributed.test.SSTableLoaderEncryptionOptionsTest}} > when the flag is turned on (found during CASSANDRA-18753 testing). Setting > {{storage_compatibility_mode: NONE}} in the tool configuration yaml does not > help (according to the docs, this setting is not picked up). > This is likely a bigger problem as the acceptable streaming version for C* 5 > is 12 only in legacy mode and 13 only in none, i.e. 
two C* 5 nodes do not > appear to be able to stream with each other if their setting for the > compatibility mode is different. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-19126) Streaming appears to be incompatible with different storage_compatibility_mode settings
[ https://issues.apache.org/jira/browse/CASSANDRA-19126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17799836#comment-17799836 ] Jacek Lewandowski commented on CASSANDRA-19126: --- By running tests, I realized I don't know what the Scrubber should do: which resulting format should it choose? Maybe we need to provide that explicitly? What do you think? > Streaming appears to be incompatible with different > storage_compatibility_mode settings > --- > > Key: CASSANDRA-19126 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19126 > Project: Cassandra > Issue Type: Bug > Components: Consistency/Streaming, Legacy/Streaming and Messaging, > Messaging/Internode, Tool/bulk load >Reporter: Branimir Lambov >Assignee: Jacek Lewandowski >Priority: Normal > Fix For: 5.0-rc, 5.x > > > In particular, SSTableLoader appears to be incompatible with > storage_compatibility_mode: NONE, which manifests as a failure of > {{org.apache.cassandra.distributed.test.SSTableLoaderEncryptionOptionsTest}} > when the flag is turned on (found during CASSANDRA-18753 testing). Setting > {{storage_compatibility_mode: NONE}} in the tool configuration yaml does not > help (according to the docs, this setting is not picked up). > This is likely a bigger problem as the acceptable streaming version for C* 5 > is 12 only in legacy mode and 13 only in none, i.e. two C* 5 nodes do not > appear to be able to stream with each other if their setting for the > compatibility mode is different. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-18902) Test failure: org.apache.cassandra.distributed.test.MigrationCoordinatorTest.explicitEndpointIgnore
[ https://issues.apache.org/jira/browse/CASSANDRA-18902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacek Lewandowski updated CASSANDRA-18902: -- Since Version: 4.1.0 Source Control Link: https://github.com/apache/cassandra/commit/3edca0041caf95a03453c533dc70bdc62e6dabd9 Resolution: Fixed Status: Resolved (was: Ready to Commit) > Test failure: > org.apache.cassandra.distributed.test.MigrationCoordinatorTest.explicitEndpointIgnore > --- > > Key: CASSANDRA-18902 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18902 > Project: Cassandra > Issue Type: Bug > Components: Test/dtest/java >Reporter: Jacek Lewandowski >Assignee: Jacek Lewandowski >Priority: Normal > Fix For: 4.1.x, 5.0-rc, 5.x > > > Repeated run from `cassandra-4.1` > [https://app.circleci.com/pipelines/github/jacek-lewandowski/cassandra/941/workflows/46fc6cb7-135e-4862-b9d3-6996c0993de8]
[jira] [Updated] (CASSANDRA-18902) Test failure: org.apache.cassandra.distributed.test.MigrationCoordinatorTest.explicitEndpointIgnore
[ https://issues.apache.org/jira/browse/CASSANDRA-18902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacek Lewandowski updated CASSANDRA-18902: -- Status: Ready to Commit (was: Review In Progress)
[jira] [Commented] (CASSANDRA-18263) Update gc settings in build.xml
[ https://issues.apache.org/jira/browse/CASSANDRA-18263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17799768#comment-17799768 ] Jacek Lewandowski commented on CASSANDRA-18263: --- There is a reason to move to G1GC. With CMS we need to explicitly disable class unloading, which causes OOM problems in some runs. We also need to devote much higher resources to the longer JVM dtests. Since we use G1GC in production as the default configuration, we should use it in tests as well. I'm going to fix this because I'm tired of dealing with OOMs on CI every now and then. > Update gc settings in build.xml > --- > > Key: CASSANDRA-18263 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18263 > Project: Cassandra > Issue Type: Task > Components: Local/Config >Reporter: Ekaterina Dimitrova >Priority: Normal > Fix For: 5.x > > > As part of CASSANDRA-18027 we switched trunk to default to G1GC. We need to > update also our test settings in build.xml to test with what we default to in > trunk > CC [~mck]
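A change of the kind discussed here could look roughly like the following build.xml fragment. This is an illustrative sketch only: the property name {{test.jvm.gc.args}} and the exact flag sets are assumptions, not the committed patch.

```xml
<!-- Illustrative sketch only; the property name and flag values below are
     assumptions, not the actual patch for this ticket. -->
<!-- Old-style CMS settings would look something like:
     -XX:+UseConcMarkSweepGC -XX:-CMSClassUnloadingEnabled -->
<property name="test.jvm.gc.args" value="-XX:+UseG1GC -XX:MaxGCPauseMillis=300"/>

<junit fork="on" forkmode="perBatch">
  <!-- Use the same collector in tests as the trunk production default. -->
  <jvmarg line="${test.jvm.gc.args}"/>
</junit>
```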
[jira] [Assigned] (CASSANDRA-18263) Update gc settings in build.xml
[ https://issues.apache.org/jira/browse/CASSANDRA-18263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacek Lewandowski reassigned CASSANDRA-18263: - Assignee: Jacek Lewandowski > Update gc settings in build.xml > --- > > Key: CASSANDRA-18263 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18263 > Project: Cassandra > Issue Type: Task > Components: Local/Config >Reporter: Ekaterina Dimitrova >Assignee: Jacek Lewandowski >Priority: Normal > Fix For: 5.x > > > As part of CASSANDRA-18027 we switched trunk to default to G1GC. We need to > update also our test settings in build.xml to test with what we default to in > trunk > CC [~mck]