[jira] [Updated] (CASSANDRA-18646) Add Azure snitch
[ https://issues.apache.org/jira/browse/CASSANDRA-18646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jacek Lewandowski updated CASSANDRA-18646:
    Reviewers: Jacek Lewandowski
       Status: Review In Progress  (was: Patch Available)

> Add Azure snitch
>
> Key: CASSANDRA-18646
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18646
> Project: Cassandra
> Issue Type: New Feature
> Components: Legacy/Core
> Reporter: Stefan Miklosovic
> Assignee: Stefan Miklosovic
> Priority: Normal
> Fix For: 5.x
> Time Spent: 20m
> Remaining Estimate: 0h
>
> Create Azure snitch to support Azure clouds.

--
This message was sent by Atlassian Jira (v8.20.10#820010)

To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org
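The ticket above only says "create Azure snitch", so as background: cloud snitches conventionally map the instance's cloud region to the Cassandra datacenter and its availability zone to the rack. The sketch below illustrates that convention only; the class and method names, and the fallback rack value, are hypothetical and are not taken from the CASSANDRA-18646 patch.

```java
// Hypothetical sketch of how a cloud snitch typically derives topology,
// shown for Azure-style instance metadata (region + availability zone).
// Names and the "rack-default" fallback are illustrative assumptions.
public final class AzureTopology
{
    private AzureTopology() {}

    /** Datacenter is conventionally the cloud region, e.g. "eastus". */
    public static String datacenter(String region)
    {
        return region;
    }

    /**
     * Rack is conventionally the availability zone; instances without a
     * zone fall back to a default rack (an assumption for this sketch).
     */
    public static String rack(String region, String zone)
    {
        return (zone == null || zone.isEmpty()) ? "rack-default" : region + "-" + zone;
    }
}
```

On Azure the raw region/zone values would come from the Instance Metadata Service; how the actual patch names and queries them is not shown here.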
[jira] [Updated] (CASSANDRA-18643) jackson-core vulnerability: CVE-2022-45688
[ https://issues.apache.org/jira/browse/CASSANDRA-18643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Berenguer Blasi updated CASSANDRA-18643:
    Reviewers: Berenguer Blasi
       Status: Review In Progress  (was: Patch Available)

> jackson-core vulnerability: CVE-2022-45688
>
> Key: CASSANDRA-18643
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18643
> Project: Cassandra
> Issue Type: Bug
> Components: Dependencies
> Reporter: Brandon Williams
> Assignee: Brandon Williams
> Priority: Normal
> Fix For: 3.11.x, 4.0.x, 4.1.x, 5.x
>
> This is failing owasp.
> https://nvd.nist.gov/vuln/detail/CVE-2022-45688
> {quote}
> A stack overflow in the XML.toJSONObject component of hutool-json v5.8.10
> allows attackers to cause a Denial of Service (DoS) via crafted JSON or XML
> data.
> {quote}
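To make the quoted NVD description concrete: the CVE is about recursive-descent XML/JSON parsing, where each nesting level consumes a stack frame, so a deeply nested document can trigger a StackOverflowError. The sketch below only builds such a payload string to show its shape; it deliberately does not call any vulnerable library, and the class name is illustrative.

```java
// Hypothetical illustration of the class of input behind CVE-2022-45688:
// a deeply nested XML document. A recursive parser uses one stack frame
// per level, so large depths can exhaust the stack. This builder only
// constructs the string; it invokes no parser.
public final class NestedPayload
{
    private NestedPayload() {}

    public static String deeplyNestedXml(int depth)
    {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < depth; i++)
            sb.append("<a>");      // one opening tag per nesting level
        sb.append("x");            // innermost text node
        for (int i = 0; i < depth; i++)
            sb.append("</a>");     // matching closing tags
        return sb.toString();
    }

    public static void main(String[] args)
    {
        String payload = deeplyNestedXml(100_000);
        // Each level adds "<a>" + "</a>" = 7 chars, plus the single "x".
        System.out.println(payload.length()); // prints 700001
    }
}
```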
[jira] [Comment Edited] (CASSANDRA-18617) Disable the deprecated keyspace/table thresholds and convert them to Guardrails
[ https://issues.apache.org/jira/browse/CASSANDRA-18617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740382#comment-17740382 ]

dan jatnieks edited comment on CASSANDRA-18617 at 7/6/23 3:17 AM:
I agree ... let's try the static table names and see how that goes. I pushed an update and opened a PR to comment on details: [https://github.com/apache/cassandra/pull/2467]

was (Author: djatnieks):
I agree ... let's try the static table names and see how that goes. I pushed an update and opened a PR to comment on details: [https://github.com/apache/cassandra/pull/2467]

> Disable the deprecated keyspace/table thresholds and convert them to Guardrails
>
> Key: CASSANDRA-18617
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18617
> Project: Cassandra
> Issue Type: Improvement
> Components: Feature/Guardrails
> Reporter: dan jatnieks
> Assignee: dan jatnieks
> Priority: Normal
> Fix For: 5.x
> Time Spent: 10m
> Remaining Estimate: 0h
>
> The non-guardrail thresholds 'keyspace_count_warn_threshold' and 'table_count_warn_threshold' configuration settings were first added with CASSANDRA-16309 in 4.0-beta4 and have been deprecated since 4.1-alpha in CASSANDRA-17195, when they were replaced/migrated to guardrails as part of CEP-3 (Guardrails).
> These thresholds should now be removed from cassandra.yaml, while still being allowed in existing yaml files.
> The old thresholds will be disabled by removing their default values from Config.java, and any existing values for these thresholds will be converted to the new guardrails using the '@Replaces' tag on the corresponding guardrail values.
> Since the old thresholds included the number of system keyspaces/tables in their values, the '@Replaces' conversion will subtract the current number of system tables from the old value and log a descriptive message.
> See dev list discussion: https://lists.apache.org/thread/0zjg08hrd6xv7lhvo96frz456b2rvr8b
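The conversion the ticket describes (old thresholds counted system tables, the new guardrails count only user tables, so the converter subtracts the current system-table count) can be sketched as below. This is a standalone illustration, not the actual Cassandra converter: the class/method names and the -1 "disabled" sentinel are assumptions for this sketch.

```java
// Hypothetical sketch of the '@Replaces'-style conversion described in
// CASSANDRA-18617: translate a legacy threshold that counted system
// tables into a guardrail value that counts only user tables.
public final class ThresholdConverter
{
    private ThresholdConverter() {}

    /**
     * Convert a legacy table_count_warn_threshold (which included system
     * tables) to a guardrail value counting only user tables.
     * A disabled legacy value (<= 0) stays disabled; -1 is the
     * "disabled" sentinel assumed by this sketch.
     */
    public static int convertTableThreshold(int legacyValue, int systemTableCount)
    {
        if (legacyValue <= 0)
            return -1;
        // Clamp at 0 so a legacy value below the system-table count
        // does not go negative.
        return Math.max(legacyValue - systemTableCount, 0);
    }

    public static void main(String[] args)
    {
        // e.g. legacy warn at 150 total tables with 45 system tables
        // becomes a warn at 105 user tables
        System.out.println(convertTableThreshold(150, 45)); // prints 105
        System.out.println(convertTableThreshold(-1, 45));  // prints -1 (stays disabled)
    }
}
```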
[jira] [Updated] (CASSANDRA-18617) Disable the deprecated keyspace/table thresholds and convert them to Guardrails
[ https://issues.apache.org/jira/browse/CASSANDRA-18617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

dan jatnieks updated CASSANDRA-18617:
    Test and Documentation Plan: Unit tests added; submitted to the free-tier CI pre-commit pipeline: [https://app.circleci.com/pipelines/github/djatnieks/cassandra?branch=CASSANDRA-18617]
    Status: Patch Available  (was: In Progress)

> Disable the deprecated keyspace/table thresholds and convert them to Guardrails
>
> Key: CASSANDRA-18617
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18617
> Project: Cassandra
> Issue Type: Improvement
> Components: Feature/Guardrails
> Reporter: dan jatnieks
> Assignee: dan jatnieks
> Priority: Normal
> Fix For: 5.x
> Time Spent: 10m
> Remaining Estimate: 0h
>
> The non-guardrail thresholds 'keyspace_count_warn_threshold' and 'table_count_warn_threshold' configuration settings were first added with CASSANDRA-16309 in 4.0-beta4 and have been deprecated since 4.1-alpha in CASSANDRA-17195, when they were replaced/migrated to guardrails as part of CEP-3 (Guardrails).
> These thresholds should now be removed from cassandra.yaml, while still being allowed in existing yaml files.
> The old thresholds will be disabled by removing their default values from Config.java, and any existing values for these thresholds will be converted to the new guardrails using the '@Replaces' tag on the corresponding guardrail values.
> Since the old thresholds included the number of system keyspaces/tables in their values, the '@Replaces' conversion will subtract the current number of system tables from the old value and log a descriptive message.
> See dev list discussion: https://lists.apache.org/thread/0zjg08hrd6xv7lhvo96frz456b2rvr8b
[jira] [Commented] (CASSANDRA-18617) Disable the deprecated keyspace/table thresholds and convert them to Guardrails
[ https://issues.apache.org/jira/browse/CASSANDRA-18617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740382#comment-17740382 ]

dan jatnieks commented on CASSANDRA-18617:
I agree ... let's try the static table names and see how that goes. I pushed an update and opened a PR to comment on details: [https://github.com/apache/cassandra/pull/2467]

> Disable the deprecated keyspace/table thresholds and convert them to Guardrails
>
> Key: CASSANDRA-18617
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18617
> Project: Cassandra
> Issue Type: Improvement
> Components: Feature/Guardrails
> Reporter: dan jatnieks
> Assignee: dan jatnieks
> Priority: Normal
> Fix For: 5.x
> Time Spent: 10m
> Remaining Estimate: 0h
>
> The non-guardrail thresholds 'keyspace_count_warn_threshold' and 'table_count_warn_threshold' configuration settings were first added with CASSANDRA-16309 in 4.0-beta4 and have been deprecated since 4.1-alpha in CASSANDRA-17195, when they were replaced/migrated to guardrails as part of CEP-3 (Guardrails).
> These thresholds should now be removed from cassandra.yaml, while still being allowed in existing yaml files.
> The old thresholds will be disabled by removing their default values from Config.java, and any existing values for these thresholds will be converted to the new guardrails using the '@Replaces' tag on the corresponding guardrail values.
> Since the old thresholds included the number of system keyspaces/tables in their values, the '@Replaces' conversion will subtract the current number of system tables from the old value and log a descriptive message.
> See dev list discussion: https://lists.apache.org/thread/0zjg08hrd6xv7lhvo96frz456b2rvr8b
[jira] [Updated] (CASSANDRA-18570) Fix org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression-.jdk17
[ https://issues.apache.org/jira/browse/CASSANDRA-18570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ekaterina Dimitrova updated CASSANDRA-18570:
    Fix Version/s: 5.0  (was: 5.x)

> Fix org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression-.jdk17
>
> Key: CASSANDRA-18570
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18570
> Project: Cassandra
> Issue Type: Bug
> Components: CI
> Reporter: Ekaterina Dimitrova
> Assignee: Ningzi Zhan
> Priority: Normal
> Fix For: 5.0
>
> {code:java}
> Regression
> org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression-.jdk17
> (from org.apache.cassandra.transport.DriverBurnTest-.jdk17)
> Failing for the past 1 build (Since #1590). Took 30 sec. Failed 5 times in the last 30 runs.
> Flakiness: 24%, Stability: 83%
>
> Stacktrace:
> junit.framework.AssertionFailedError
>     at org.apache.cassandra.transport.DriverBurnTest.perfTest(DriverBurnTest.java:425)
>     at org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression(DriverBurnTest.java:316)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
>     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> {code}
> The test has been flaky recently, failing every other time in Jenkins (burn tests are not running in CircleCI). First seen with run #1572, this commit - CASSANDRA-18025
> CC [~stefan.miklosovic] and [~brandon.williams]
[jira] [Updated] (CASSANDRA-16895) Build with Java 17
[ https://issues.apache.org/jira/browse/CASSANDRA-16895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ekaterina Dimitrova updated CASSANDRA-16895:
    Description:

This ticket is intended to group all issues found to support Java 17 in the future.

Upgrade steps:
* [Dependencies|https://mvnrepository.com/artifact/org.apache.cassandra/cassandra-all/4.0.1] to be updated (not all, but at least those that require an update in order to work with Java 17)
* More encapsulated JDK internal APIs. Some of the issues might be solved by the dependency updates
* Currently trunk compiles if we remove the Nashorn dependency (ant script tag, used for the test environment; UDFs). The Oracle recommendation to use nashorn-core won't work for the project as it is under GPL 2.0. Most probably we will opt for graal-sdk, licensed under UPL
* All tests to be cleaned
* CI environment to be set up

*NOTE:* GC tuning and performance testing were never agreed to be part of this ticket.

Below is a snapshot of current CI failures with JDK17; it will be updated on a regular basis. Date of update: *June 15th 2023*

|| ||Failing Test Classes||Ticket Numbers||
| |_Python DTests_| |
|1|-test_sjk-|CASSANDRA-18343|
| |_Java Distributed Tests_| |
|1-6|org.apache.cassandra.distributed.test.ReprepareOldBehaviourTest - all tests, org.apache.cassandra.distributed.test.PrepareBatchStatementsTest - all tests, org.apache.cassandra.distributed.test.IPMembershipTest - both tests, org.apache.cassandra.distributed.test.MixedModeFuzzTest, org.apache.cassandra.distributed.test.ReprepareFuzzTest, org.apache.cassandra.distributed.test.ReprepareNewBehaviourTest|CASSANDRA-16304|
|7,8|-org.apache.cassandra.distributed.test.NativeTransportEncryptionOptionsTest - all tests- -org.apache.cassandra.distributed.test.InternodeEncryptionOptionsTest - all tests-|Both tests suffer from CASSANDRA-18180 - *ready to commit; blocked on being ready to drop JDK8*. fwiw, using the CASSANDRA-18180 branch, only the negotiatedProtocolMustBeAcceptedProtocolTest fails in both these tests. EDIT: We will need a ticket for this one post CASSANDRA-18180. TLSv1.1 failed to negotiate (netty complains about certificate_unknown). Changes in JDK17 config to be checked - done. EDIT2: CASSANDRA-18540|
|-9-|-org.apache.cassandra.distributed.test.SSTableLoaderEncryptionOptionsTest - 2 tests-|CASSANDRA-18180 ready to commit; blocked on being ready to drop JDK8|
| |_Unit Tests_| |
|1|org.apache.cassandra.repair.RepairJobTest - 1 test|CASSANDRA-17884|
|2|org.apache.cassandra.security.SSLFactoryTest - all tests|CASSANDRA-17992|
|3,4|org.apache.cassandra.db.memtable.MemtableSizeOffheapBuffersTest, org.apache.cassandra.utils.concurrent.RefCountedTest|CASSANDRA-18329|
|5,6|-org.apache.cassandra.cql3.validation.entities.UFJavaTest,- -org.apache.cassandra.cql3.validation.entities.UFSecurityTest-|CASSANDRA-18190; ready to commit; blocked on being ready to drop JDK8|
|7|-org.apache.cassandra.cql3.EmptyValuesTest-|CASSANDRA-18436|
|8|-org.apache.cassandra.transport.MessagePayloadTest.jdk17-|CASSANDRA-18437|
| |_Burn tests_| |
|1|-org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression-|CASSANDRA-18570|
[jira] [Comment Edited] (CASSANDRA-18570) Fix org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression-.jdk17
[ https://issues.apache.org/jira/browse/CASSANDRA-18570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740356#comment-17740356 ]

Ekaterina Dimitrova edited comment on CASSANDRA-18570 at 7/5/23 11:54 PM:
I agree; I'm closing for now. It was also not seen recently in CI. We can always revisit the decision if it pops up again. Thank you both for looking into it!

was (Author: e.dimitrova):
I agree; I'm closing for now. It was also not seen recently in CI. Thank you both for looking into it!

> Fix org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression-.jdk17
>
> Key: CASSANDRA-18570
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18570
> Project: Cassandra
> Issue Type: Bug
> Components: CI
> Reporter: Ekaterina Dimitrova
> Assignee: Ningzi Zhan
> Priority: Normal
> Fix For: 5.x
>
> {code:java}
> Regression
> org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression-.jdk17
> (from org.apache.cassandra.transport.DriverBurnTest-.jdk17)
> Failing for the past 1 build (Since #1590). Took 30 sec. Failed 5 times in the last 30 runs.
> Flakiness: 24%, Stability: 83%
>
> Stacktrace:
> junit.framework.AssertionFailedError
>     at org.apache.cassandra.transport.DriverBurnTest.perfTest(DriverBurnTest.java:425)
>     at org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression(DriverBurnTest.java:316)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
>     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> {code}
> The test has been flaky recently, failing every other time in Jenkins (burn tests are not running in CircleCI). First seen with run #1572, this commit - CASSANDRA-18025
> CC [~stefan.miklosovic] and [~brandon.williams]
[jira] [Updated] (CASSANDRA-18570) Fix org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression-.jdk17
[ https://issues.apache.org/jira/browse/CASSANDRA-18570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ekaterina Dimitrova updated CASSANDRA-18570:
    Resolution: Cannot Reproduce
        Status: Resolved  (was: Open)

> Fix org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression-.jdk17
>
> Key: CASSANDRA-18570
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18570
> Project: Cassandra
> Issue Type: Bug
> Components: CI
> Reporter: Ekaterina Dimitrova
> Assignee: Ekaterina Dimitrova
> Priority: Normal
> Fix For: 5.x
>
> {code:java}
> Regression
> org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression-.jdk17
> (from org.apache.cassandra.transport.DriverBurnTest-.jdk17)
> Failing for the past 1 build (Since #1590). Took 30 sec. Failed 5 times in the last 30 runs.
> Flakiness: 24%, Stability: 83%
>
> Stacktrace:
> junit.framework.AssertionFailedError
>     at org.apache.cassandra.transport.DriverBurnTest.perfTest(DriverBurnTest.java:425)
>     at org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression(DriverBurnTest.java:316)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
>     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> {code}
> The test has been flaky recently, failing every other time in Jenkins (burn tests are not running in CircleCI). First seen with run #1572, this commit - CASSANDRA-18025
> CC [~stefan.miklosovic] and [~brandon.williams]
[jira] [Assigned] (CASSANDRA-18570) Fix org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression-.jdk17
[ https://issues.apache.org/jira/browse/CASSANDRA-18570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ekaterina Dimitrova reassigned CASSANDRA-18570:
    Assignee: Ningzi Zhan  (was: Ekaterina Dimitrova)

> Fix org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression-.jdk17
>
> Key: CASSANDRA-18570
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18570
> Project: Cassandra
> Issue Type: Bug
> Components: CI
> Reporter: Ekaterina Dimitrova
> Assignee: Ningzi Zhan
> Priority: Normal
> Fix For: 5.x
>
> {code:java}
> Regression
> org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression-.jdk17
> (from org.apache.cassandra.transport.DriverBurnTest-.jdk17)
> Failing for the past 1 build (Since #1590). Took 30 sec. Failed 5 times in the last 30 runs.
> Flakiness: 24%, Stability: 83%
>
> Stacktrace:
> junit.framework.AssertionFailedError
>     at org.apache.cassandra.transport.DriverBurnTest.perfTest(DriverBurnTest.java:425)
>     at org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression(DriverBurnTest.java:316)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
>     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> {code}
> The test has been flaky recently, failing every other time in Jenkins (burn tests are not running in CircleCI). First seen with run #1572, this commit - CASSANDRA-18025
> CC [~stefan.miklosovic] and [~brandon.williams]
[jira] [Commented] (CASSANDRA-18570) Fix org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression-.jdk17
[ https://issues.apache.org/jira/browse/CASSANDRA-18570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740356#comment-17740356 ]

Ekaterina Dimitrova commented on CASSANDRA-18570:
I agree; I'm closing for now. It was also not seen recently in CI. Thank you both for looking into it!

> Fix org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression-.jdk17
>
> Key: CASSANDRA-18570
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18570
> Project: Cassandra
> Issue Type: Bug
> Components: CI
> Reporter: Ekaterina Dimitrova
> Assignee: Ningzi Zhan
> Priority: Normal
> Fix For: 5.x
>
> {code:java}
> Regression
> org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression-.jdk17
> (from org.apache.cassandra.transport.DriverBurnTest-.jdk17)
> Failing for the past 1 build (Since #1590). Took 30 sec. Failed 5 times in the last 30 runs.
> Flakiness: 24%, Stability: 83%
>
> Stacktrace:
> junit.framework.AssertionFailedError
>     at org.apache.cassandra.transport.DriverBurnTest.perfTest(DriverBurnTest.java:425)
>     at org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression(DriverBurnTest.java:316)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
>     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> {code}
> The test has been flaky recently, failing every other time in Jenkins (burn tests are not running in CircleCI). First seen with run #1572, this commit - CASSANDRA-18025
> CC [~stefan.miklosovic] and [~brandon.williams]
[jira] [Assigned] (CASSANDRA-18570) Fix org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression-.jdk17
[ https://issues.apache.org/jira/browse/CASSANDRA-18570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ekaterina Dimitrova reassigned CASSANDRA-18570:
    Assignee: Ekaterina Dimitrova  (was: Ningzi Zhan)

> Fix org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression-.jdk17
>
> Key: CASSANDRA-18570
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18570
> Project: Cassandra
> Issue Type: Bug
> Components: CI
> Reporter: Ekaterina Dimitrova
> Assignee: Ekaterina Dimitrova
> Priority: Normal
> Fix For: 5.x
>
> {code:java}
> Regression
> org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression-.jdk17
> (from org.apache.cassandra.transport.DriverBurnTest-.jdk17)
> Failing for the past 1 build (Since #1590). Took 30 sec. Failed 5 times in the last 30 runs.
> Flakiness: 24%, Stability: 83%
>
> Stacktrace:
> junit.framework.AssertionFailedError
>     at org.apache.cassandra.transport.DriverBurnTest.perfTest(DriverBurnTest.java:425)
>     at org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression(DriverBurnTest.java:316)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
>     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> {code}
> The test has been flaky recently, failing every other time in Jenkins (burn tests are not running in CircleCI). First seen with run #1572, this commit - CASSANDRA-18025
> CC [~stefan.miklosovic] and [~brandon.williams]
[jira] [Commented] (CASSANDRA-18490) Add checksum validation to all index components on startup, streaming, and SSTable import
[ https://issues.apache.org/jira/browse/CASSANDRA-18490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740354#comment-17740354 ]

Caleb Rackliffe commented on CASSANDRA-18490:
After my initial review, I went back and commented on [something we probably need to understand|https://github.com/apache/cassandra/pull/2460/files#r1253762041] before merging. (Just linking here, and we can discuss in the PR...)

> Add checksum validation to all index components on startup, streaming, and SSTable import
>
> Key: CASSANDRA-18490
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18490
> Project: Cassandra
> Issue Type: Improvement
> Components: Feature/2i Index
> Reporter: Mike Adamson
> Assignee: Piotr Kolaczkowski
> Priority: Normal
> Fix For: 5.x
>
> The SAI code currently does not checksum-validate per-column index data files at any point. It does checksum-validate per-sstable components after a full rebuild, and it checksum-validates the per-column metadata on opening.
> We should checksum-validate all index components on startup, full rebuild and streaming.
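The validation the ticket asks for boils down to recomputing a checksum over each component's bytes and comparing it with the recorded value. The sketch below shows that pattern with CRC32 from the JDK; the class name, the choice of CRC32, and the API shape are assumptions for illustration, not the actual SAI component format or checksum scheme.

```java
// Hypothetical sketch of component checksum validation as described in
// CASSANDRA-18490: recompute a checksum over the component bytes and
// compare it with the previously recorded value.
import java.util.zip.CRC32;

public final class ChecksumCheck
{
    private ChecksumCheck() {}

    /** Compute the CRC32 of the given bytes. */
    public static long crc32(byte[] data)
    {
        CRC32 crc = new CRC32();
        crc.update(data, 0, data.length);
        return crc.getValue();
    }

    /** True iff the component bytes match the expected checksum. */
    public static boolean isValid(byte[] component, long expected)
    {
        return crc32(component) == expected;
    }
}
```

In the real feature this check would run at the points the ticket lists (startup, full rebuild, streaming, SSTable import), failing the component when the values disagree.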
[jira] [Comment Edited] (CASSANDRA-18570) Fix org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression-.jdk17
[ https://issues.apache.org/jira/browse/CASSANDRA-18570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740346#comment-17740346 ]

Ningzi Zhan edited comment on CASSANDRA-18570 at 7/5/23 11:29 PM:
Agree! I ran the test using the commit [a2dc44f072|https://github.com/apache/cassandra/commit/a2dc44f0725b02294071e67d0cab57a7629f25a1] to reproduce the error, but there is no error message related to it.

was (Author: JIRAUSER299826):
Agree! I ran it using the commit [a2dc44f072|https://github.com/apache/cassandra/commit/a2dc44f0725b02294071e67d0cab57a7629f25a1] to reproduce the error, but there is no error message related to it.

> Fix org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression-.jdk17
>
> Key: CASSANDRA-18570
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18570
> Project: Cassandra
> Issue Type: Bug
> Components: CI
> Reporter: Ekaterina Dimitrova
> Assignee: Ningzi Zhan
> Priority: Normal
> Fix For: 5.x
>
> {code:java}
> Regression
> org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression-.jdk17
> (from org.apache.cassandra.transport.DriverBurnTest-.jdk17)
> Failing for the past 1 build (Since #1590). Took 30 sec. Failed 5 times in the last 30 runs.
> Flakiness: 24%, Stability: 83%
>
> Stacktrace:
> junit.framework.AssertionFailedError
>     at org.apache.cassandra.transport.DriverBurnTest.perfTest(DriverBurnTest.java:425)
>     at org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression(DriverBurnTest.java:316)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
>     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> {code}
> The test has been flaky recently, failing every other time in Jenkins (burn tests are not running in CircleCI). First seen with run #1572, this commit - CASSANDRA-18025
> CC [~stefan.miklosovic] and [~brandon.williams]
[jira] [Comment Edited] (CASSANDRA-18570) Fix org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression-.jdk17
[ https://issues.apache.org/jira/browse/CASSANDRA-18570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740346#comment-17740346 ]

Ningzi Zhan edited comment on CASSANDRA-18570 at 7/5/23 11:28 PM:
Agree! I ran it using the commit [a2dc44f072|https://github.com/apache/cassandra/commit/a2dc44f0725b02294071e67d0cab57a7629f25a1] to reproduce the error, but there is no error message related to it.

was (Author: JIRAUSER299826):
Agree! I ran it using the commit [a2dc44f072|https://github.com/apache/cassandra/commit/a2dc44f0725b02294071e67d0cab57a7629f25a1] to reproduce the error, but there is no error message related to it.

> Fix org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression-.jdk17
>
> Key: CASSANDRA-18570
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18570
> Project: Cassandra
> Issue Type: Bug
> Components: CI
> Reporter: Ekaterina Dimitrova
> Assignee: Ningzi Zhan
> Priority: Normal
> Fix For: 5.x
>
> {code:java}
> Regression
> org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression-.jdk17
> (from org.apache.cassandra.transport.DriverBurnTest-.jdk17)
> Failing for the past 1 build (Since #1590). Took 30 sec. Failed 5 times in the last 30 runs.
> Flakiness: 24%, Stability: 83%
>
> Stacktrace:
> junit.framework.AssertionFailedError
>     at org.apache.cassandra.transport.DriverBurnTest.perfTest(DriverBurnTest.java:425)
>     at org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression(DriverBurnTest.java:316)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
>     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> {code}
> The test has been flaky recently, failing every other time in Jenkins (burn tests are not running in CircleCI). First seen with run #1572, this commit - CASSANDRA-18025
> CC [~stefan.miklosovic] and [~brandon.williams]
[jira] [Commented] (CASSANDRA-18570) Fix org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression-.jdk17
[ https://issues.apache.org/jira/browse/CASSANDRA-18570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740341#comment-17740341 ] Brandon Williams commented on CASSANDRA-18570: -- I think we should close for now and we can reopen if we see this again in a more stable environment. > Fix > org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression-.jdk17 > > --- > > Key: CASSANDRA-18570 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18570 > Project: Cassandra > Issue Type: Bug > Components: CI >Reporter: Ekaterina Dimitrova >Assignee: Ningzi Zhan >Priority: Normal > Fix For: 5.x > > > h1. > {code:java} > Regression > org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression-.jdk17 > (from org.apache.cassandra.transport.DriverBurnTest-.jdk17) > Failing for the past 1 build (Since #1590 ) Took 30 sec. Failed 5 times > in the last 30 runs. Flakiness: 24%, Stability: 83% Stacktrace > junit.framework.AssertionFailedError at > org.apache.cassandra.transport.DriverBurnTest.perfTest(DriverBurnTest.java:425) > at > org.apache.cassandra.transport.DriverBurnTest.measureLargeV4WithCompression(DriverBurnTest.java:316) > at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native > Method) at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > > {code} > The test is flaky since recently, failing every other time in Jenkins (burn > tests are not running in CircleCI) First seen with run #1572 this commit - > CASSANDRA-18025 > CC [~stefan.miklosovic] and [~brandon.williams] > -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Comment Edited] (CASSANDRA-17992) Upgrade Netty on 5.0
[ https://issues.apache.org/jira/browse/CASSANDRA-17992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740329#comment-17740329 ] Ekaterina Dimitrova edited comment on CASSANDRA-17992 at 7/5/23 9:53 PM: - As asked for in Slack, I looked over the test failures. - simulationTest-cassandra.testtag_IS_UNDEFINED seems like the only new unit test failure. BUT I tend to see lately (Butler can confirm) tests failing because the testtag_IS_UNDEFINED, so I suspect we need an umbrella ticket for that type of failure, which has nothing to do with what we do here. I also cannot reproduce this failure locally with the Netty upgrade branch. (the testtag seems fine) - The rest of the unit test failures will be fixed when we commit CASSANDRA-18190. - With JDK17, test_login_new_node is the only failure I haven't seen anything about in Butler, and no tickets. Though, I am not sure it can be related to the netty upgrade: {code:java} failed on teardown with "Unexpected error found in node logs (see stdout for full details). 
Errors: [[node3] 'ERROR [Native-Transport-Auth-Requests-1] 2023-06-29 19:00:01,502 ExceptionHandlers.java:229 - Unexpected exception during request; channel = [id: 0xbf439ff8, L:/127.0.0.3:9042 - R:/127.0.0.1:58128]\njava.lang.AssertionError: null\n\tat org.apache.cassandra.locator.TokenMetadata.firstTokenIndex(TokenMetadata.java:1179)\n\tat org.apache.cassandra.locator.TokenMetadata.firstToken(TokenMetadata.java:1193)\n\tat org.apache.cassandra.locator.AbstractReplicationStrategy.getNaturalReplicas(AbstractReplicationStrategy.java:95)\n\tat org.apache.cassandra.locator.AbstractReplicationStrategy.getNaturalReplicasForToken(AbstractReplicationStrategy.java:88)\n\tat org.apache.cassandra.locator.ReplicaLayout.forTokenReadLiveSorted(ReplicaLayout.java:330)\n\tat org.apache.cassandra.locator.ReplicaPlans.forRead(ReplicaPlans.java:593)\n\tat org.apache.cassandra.service.reads.AbstractReadExecutor.getReadExecutor(AbstractReadExecutor.java:190)\n\tat org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:2097)\n\tat org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1995)\n\tat org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1873)\n\tat org.apache.cassandra.db.SinglePartitionReadCommand$Group.execute(SinglePartitionReadCommand.java:1286)\n\tat org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:364)\n\tat org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:293)\n\tat org.apache.cassandra.auth.PasswordAuthenticator.select(PasswordAuthenticator.java:201)\n\tat org.apache.cassandra.auth.PasswordAuthenticator.queryHashedPassword(PasswordAuthenticator.java:177)\n\tat com.github.benmanes.caffeine.cache.LocalLoadingCache.lambda$newMappingFunction$2(LocalLoadingCache.java:141)\n\tat com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$14(BoundedLocalCache.java:2413)\n\tat 
java.base/java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1916)\n\tat com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:2411)\n\tat com.github.benmanes.caffeine.cache.BoundedLocalCache.computeIfAbsent(BoundedLocalCache.java:2394)\n\tat com.github.benmanes.caffeine.cache.LocalCache.computeIfAbsent(LocalCache.java:108)\n\tat com.github.benmanes.caffeine.cache.LocalLoadingCache.get(LocalLoadingCache.java:54)\n\tat org.apache.cassandra.auth.AuthCache.get(AuthCache.java:228)\n\tat org.apache.cassandra.auth.PasswordAuthenticator.authenticate(PasswordAuthenticator.java:142)\n\tat org.apache.cassandra.auth.PasswordAuthenticator$PlainTextSaslAuthenticator.getAuthenticatedUser(PasswordAuthenticator.java:268)\n\tat org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:80)\n\tat org.apache.cassandra.transport.Message$Request.execute(Message.java:256)\n\tat org.apache.cassandra.transport.Dispatcher.processRequest(Dispatcher.java:194)\n\tat org.apache.cassandra.transport.Dispatcher.processRequest(Dispatcher.java:213)\n\tat org.apache.cassandra.transport.Dispatcher.processRequest(Dispatcher.java:240)\n\tat org.apache.cassandra.transport.Dispatcher$RequestProcessor.run(Dispatcher.java:137)\n\tat org.apache.cassandra.concurrent.FutureTask$1.call(FutureTask.java:96)\n\tat org.apache.cassandra.concurrent.FutureTask.call(FutureTask.java:61)\n\tat org.apache.cassandra.concurrent.FutureTask.run(FutureTask.java:71)\n\tat org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:143)\n\tat io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)\n\tat java.base/java.lang.Thread.run(Thread.java:833)', [node3] 'ERROR [Native-Transport-Auth-Requests-1] 2023-06-29 19:00:01,774 ExceptionHandlers.java:229 - Unexpected exception durin
[jira] [Commented] (CASSANDRA-17992) Upgrade Netty on 5.0
[ https://issues.apache.org/jira/browse/CASSANDRA-17992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740329#comment-17740329 ] Ekaterina Dimitrova commented on CASSANDRA-17992: - As asked for in Slack, I looked over the test failures. - simulationTest-cassandra.testtag_IS_UNDEFINED seems like the only new one. BUT I tend to see lately (butler can confirm) tests failing because the testtag_IS_UNDEFINED, so I suspect we need an umbrella ticket for that type of failure, which has nothing to do with what we do here. I also cannot reproduce this failure locally with the Netty upgrade branch. - The rest of the unit test failures will be fixed when we commit CASSANDRA-18190. - With JDK17, test_login_new_node is the only failure I haven't seen anything about in Butler, and no tickets. Though, I am not sure it can be related to the netty upgrade: {code:java} failed on teardown with "Unexpected error found in node logs (see stdout for full details). Errors: [[node3] 'ERROR [Native-Transport-Auth-Requests-1] 2023-06-29 19:00:01,502 ExceptionHandlers.java:229 - Unexpected exception during request; channel = [id: 0xbf439ff8, L:/127.0.0.3:9042 - R:/127.0.0.1:58128]\njava.lang.AssertionError: null\n\tat org.apache.cassandra.locator.TokenMetadata.firstTokenIndex(TokenMetadata.java:1179)\n\tat org.apache.cassandra.locator.TokenMetadata.firstToken(TokenMetadata.java:1193)\n\tat org.apache.cassandra.locator.AbstractReplicationStrategy.getNaturalReplicas(AbstractReplicationStrategy.java:95)\n\tat org.apache.cassandra.locator.AbstractReplicationStrategy.getNaturalReplicasForToken(AbstractReplicationStrategy.java:88)\n\tat org.apache.cassandra.locator.ReplicaLayout.forTokenReadLiveSorted(ReplicaLayout.java:330)\n\tat org.apache.cassandra.locator.ReplicaPlans.forRead(ReplicaPlans.java:593)\n\tat org.apache.cassandra.service.reads.AbstractReadExecutor.getReadExecutor(AbstractReadExecutor.java:190)\n\tat 
org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:2097)\n\tat org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1995)\n\tat org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1873)\n\tat org.apache.cassandra.db.SinglePartitionReadCommand$Group.execute(SinglePartitionReadCommand.java:1286)\n\tat org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:364)\n\tat org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:293)\n\tat org.apache.cassandra.auth.PasswordAuthenticator.select(PasswordAuthenticator.java:201)\n\tat org.apache.cassandra.auth.PasswordAuthenticator.queryHashedPassword(PasswordAuthenticator.java:177)\n\tat com.github.benmanes.caffeine.cache.LocalLoadingCache.lambda$newMappingFunction$2(LocalLoadingCache.java:141)\n\tat com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$14(BoundedLocalCache.java:2413)\n\tat java.base/java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1916)\n\tat com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:2411)\n\tat com.github.benmanes.caffeine.cache.BoundedLocalCache.computeIfAbsent(BoundedLocalCache.java:2394)\n\tat com.github.benmanes.caffeine.cache.LocalCache.computeIfAbsent(LocalCache.java:108)\n\tat com.github.benmanes.caffeine.cache.LocalLoadingCache.get(LocalLoadingCache.java:54)\n\tat org.apache.cassandra.auth.AuthCache.get(AuthCache.java:228)\n\tat org.apache.cassandra.auth.PasswordAuthenticator.authenticate(PasswordAuthenticator.java:142)\n\tat org.apache.cassandra.auth.PasswordAuthenticator$PlainTextSaslAuthenticator.getAuthenticatedUser(PasswordAuthenticator.java:268)\n\tat org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:80)\n\tat org.apache.cassandra.transport.Message$Request.execute(Message.java:256)\n\tat org.apache.cassandra.transport.Dispatcher.processRequest(Dispatcher.java:194)\n\tat 
org.apache.cassandra.transport.Dispatcher.processRequest(Dispatcher.java:213)\n\tat org.apache.cassandra.transport.Dispatcher.processRequest(Dispatcher.java:240)\n\tat org.apache.cassandra.transport.Dispatcher$RequestProcessor.run(Dispatcher.java:137)\n\tat org.apache.cassandra.concurrent.FutureTask$1.call(FutureTask.java:96)\n\tat org.apache.cassandra.concurrent.FutureTask.call(FutureTask.java:61)\n\tat org.apache.cassandra.concurrent.FutureTask.run(FutureTask.java:71)\n\tat org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:143)\n\tat io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)\n\tat java.base/java.lang.Thread.run(Thread.java:833)', [node3] 'ERROR [Native-Transport-Auth-Requests-1] 2023-06-29 19:00:01,774 ExceptionHandlers.java:229 - Unexpected exception during request; channel = [id: 0x1205363e, L:/127.0.0.3:9042 - R:/127.0.0.1:58142]\njava.l
[cassandra-website] branch asf-staging updated (86aee3099 -> e890f4542)
This is an automated email from the ASF dual-hosted git repository. git-site-role pushed a change to branch asf-staging in repository https://gitbox.apache.org/repos/asf/cassandra-website.git discard 86aee3099 generate docs for 466d6ffe new e890f4542 generate docs for 466d6ffe This update added new revisions after undoing existing revisions. That is to say, some revisions that were in the old version of the branch are not in the new version. This situation occurs when a user --force pushes a change and generates a repository containing something like this: * -- * -- B -- O -- O -- O (86aee3099) \ N -- N -- N refs/heads/asf-staging (e890f4542) You should already have received notification emails for all of the O revisions, and so the following emails describe only the N revisions from the common base, B. Any revisions marked "omit" are not gone; other references still refer to them. Any revisions marked "discard" are gone forever. The 1 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference. Summary of changes: .../tools/nodetool/getstreamthroughput.html| 3 +-- .../tools/nodetool/getstreamthroughput.html| 3 +-- content/search-index.js| 2 +- site-ui/build/ui-bundle.zip| Bin 4796900 -> 4796900 bytes 4 files changed, 3 insertions(+), 5 deletions(-) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-18639) Add duration and partition key count to sstablemetadata
[ https://issues.apache.org/jira/browse/CASSANDRA-18639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740325#comment-17740325 ] Brandon Williams commented on CASSANDRA-18639: -- eclipse-warnings is mad about the KeyIterator not being closed, but only on j8. It's stupid, but we can't break j8 compat yet and eclipse-warnings hasn't been replaced yet, so I think we just need to make it happy. > Add duration and partition key count to sstablemetadata > --- > > Key: CASSANDRA-18639 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18639 > Project: Cassandra > Issue Type: Improvement > Components: Tool/sstable >Reporter: Timothy Tu >Assignee: Timothy Tu >Priority: Normal > Fix For: 5.x > > Time Spent: 50m > Remaining Estimate: 0h > > The new -m option will output metadata information for: > * Partition Key Count > * Duration > Partition key count is the total number of partitions in the sstable. > For Time Window Compaction (TWC), the min and max timestamps together with > duration describe the bounds of the time window in the table. > {quote}{{Total partitions: 2430}} > {{Total rows: 8000}} > {{Total column set: 10}} > {{...}} > {{Min Timestamp: 06/28/2023 15:15:04 (1688067443651650)}} > {{Max Timestamp: 06/28/2023 15:15:58 (1688067500268865)}} > {{Duration Days: 0 Hours: 0 Minutes: 0 Seconds: 53}} > {quote} > The online docs in sstablemetadata.adoc will need to be updated as well. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
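The Duration line in the sample output quoted above is derived from the min and max timestamps (epoch microseconds) broken into days, hours, minutes, and seconds. A rough sketch of that conversion; the helper name and the truncating arithmetic are illustrative, not the tool's actual implementation:

```python
def duration_breakdown(min_ts_us: int, max_ts_us: int):
    """Split the span between two epoch-microsecond timestamps into
    (days, hours, minutes, seconds), truncating sub-second remainder."""
    total_seconds = (max_ts_us - min_ts_us) // 1_000_000
    days, rem = divmod(total_seconds, 86_400)
    hours, rem = divmod(rem, 3_600)
    minutes, seconds = divmod(rem, 60)
    return days, hours, minutes, seconds

# Timestamps taken from the sample output above (epoch microseconds):
d, h, m, s = duration_breakdown(1688067443651650, 1688067500268865)
print(f"Duration Days: {d} Hours: {h} Minutes: {m} Seconds: {s}")
# prints: Duration Days: 0 Hours: 0 Minutes: 0 Seconds: 56
```

Note that this naive subtraction yields 56 seconds, while the quoted sample shows 53; the tool evidently derives or rounds the duration somewhat differently, so the sketch only illustrates the general day/hour/minute/second decomposition.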
[jira] [Updated] (CASSANDRA-17909) Clean SyncUtil from dead code and update it for new JDK versions
[ https://issues.apache.org/jira/browse/CASSANDRA-17909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ekaterina Dimitrova updated CASSANDRA-17909: Reviewers: Jacek Lewandowski, Ekaterina Dimitrova (was: Jacek Lewandowski) Status: Review In Progress (was: Patch Available) > Clean SyncUtil from dead code and update it for new JDK versions > > > Key: CASSANDRA-17909 > URL: https://issues.apache.org/jira/browse/CASSANDRA-17909 > Project: Cassandra > Issue Type: Bug > Components: Local/Other >Reporter: Ekaterina Dimitrova >Assignee: Ekaterina Dimitrova >Priority: Normal > Fix For: 5.x > > > During code inspection I noticed [dead > code|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/utils/SyncUtil.java#L77-L87] > (JDK 7) in SyncUtil. > From a very quick skim _I think_ the [Java 8 > section|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/utils/SyncUtil.java#L65-L75] > is applicable in JDK 11, not sure for JDK 17 but it seems it should stay at > least until we have JDK11. To be revisited. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-17909) Clean SyncUtil from dead code and update it for new JDK versions
[ https://issues.apache.org/jira/browse/CASSANDRA-17909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740324#comment-17740324 ] Ekaterina Dimitrova commented on CASSANDRA-17909: - The PR was approved on GH. Blocking final rebase, CI run and commit on CASSANDRA-18255 > Clean SyncUtil from dead code and update it for new JDK versions > > > Key: CASSANDRA-17909 > URL: https://issues.apache.org/jira/browse/CASSANDRA-17909 > Project: Cassandra > Issue Type: Bug > Components: Local/Other >Reporter: Ekaterina Dimitrova >Assignee: Ekaterina Dimitrova >Priority: Normal > Fix For: 5.x > > > During code inspection I noticed [dead > code|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/utils/SyncUtil.java#L77-L87] > (JDK 7) in SyncUtil. > From a very quick skim _I think_ the [Java 8 > section|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/utils/SyncUtil.java#L65-L75] > is applicable in JDK 11, not sure for JDK 17 but it seems it should stay at > least until we have JDK11. To be revisited. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-17909) Clean SyncUtil from dead code and update it for new JDK versions
[ https://issues.apache.org/jira/browse/CASSANDRA-17909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ekaterina Dimitrova updated CASSANDRA-17909: Reviewers: Jacek Lewandowski (was: Ekaterina Dimitrova, Jacek Lewandowski) > Clean SyncUtil from dead code and update it for new JDK versions > > > Key: CASSANDRA-17909 > URL: https://issues.apache.org/jira/browse/CASSANDRA-17909 > Project: Cassandra > Issue Type: Bug > Components: Local/Other >Reporter: Ekaterina Dimitrova >Assignee: Ekaterina Dimitrova >Priority: Normal > Fix For: 5.x > > > During code inspection I noticed [dead > code|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/utils/SyncUtil.java#L77-L87] > (JDK 7) in SyncUtil. > From a very quick skim _I think_ the [Java 8 > section|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/utils/SyncUtil.java#L65-L75] > is applicable in JDK 11, not sure for JDK 17 but it seems it should stay at > least until we have JDK11. To be revisited. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-17909) Clean SyncUtil from dead code and update it for new JDK versions
[ https://issues.apache.org/jira/browse/CASSANDRA-17909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ekaterina Dimitrova updated CASSANDRA-17909: Status: Ready to Commit (was: Review In Progress) > Clean SyncUtil from dead code and update it for new JDK versions > > > Key: CASSANDRA-17909 > URL: https://issues.apache.org/jira/browse/CASSANDRA-17909 > Project: Cassandra > Issue Type: Bug > Components: Local/Other >Reporter: Ekaterina Dimitrova >Assignee: Ekaterina Dimitrova >Priority: Normal > Fix For: 5.x > > > During code inspection I noticed [dead > code|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/utils/SyncUtil.java#L77-L87] > (JDK 7) in SyncUtil. > From a very quick skim _I think_ the [Java 8 > section|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/utils/SyncUtil.java#L65-L75] > is applicable in JDK 11, not sure for JDK 17 but it seems it should stay at > least until we have JDK11. To be revisited. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-18642) cqlsh on Cassandra 4.1.2 fails on Amazon Linux
[ https://issues.apache.org/jira/browse/CASSANDRA-18642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740321#comment-17740321 ] Brandon Williams commented on CASSANDRA-18642: -- Well, [this|https://unix.stackexchange.com/questions/488135/how-to-build-an-rpm-package-for-python-app-that-works-in-fedora-28-and-29-which] seems to indicate that multiple packages is the way. > cqlsh on Cassandra 4.1.2 fails on Amazon Linux > --- > > Key: CASSANDRA-18642 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18642 > Project: Cassandra > Issue Type: Bug > Components: CQL/Interpreter, Packaging >Reporter: Stefan Miklosovic >Assignee: Brandon Williams >Priority: Normal > Fix For: 4.0.x, 4.1.x, 5.x > > Time Spent: 10m > Remaining Estimate: 0h > > I am on the newest Amazon Linux Version 2023.1.20230629 > When I install cassandra-4.1.2 from Yum repository, it starts fine but cqlsh > prints this: > {code} > [ec2-user@ip-172-31-27-5 ~]$ cqlsh > Traceback (most recent call last): > File "/usr/bin/cqlsh.py", line 148, in <module> > from cqlshlib import cql3handling, pylexotron, sslhandling, > cqlshhandling, authproviderhandling > ModuleNotFoundError: No module named 'cqlshlib' > {code} > If I change in /usr/bin/cqlsh.py > {code} > cqlshlibdir = os.path.join(CASSANDRA_PATH, 'pylib') > {code} > to this > {code} > cqlshlibdir = os.path.join('/usr/lib/python3.6', 'site-packages') > {code} > it works. > I am not sure if this is the correct way to handle that as not everybody has > python3.6. There is also no symlink pointing to this. I guess we would need > to find where packages are for Python we are going to use in cassandra.spec > and then change cqlsh.py to reflect that? 
> {code} > [ec2-user@ip-172-31-27-5 /]$ sudo find -type d -name site-packages > ./usr/lib/python3.9/site-packages > ./usr/lib/python3.6/site-packages > ./usr/lib64/python3.9/site-packages > {code} > I think we need to pass whatever this (1) expands to here (2) > (1) https://github.com/apache/cassandra/blob/trunk/redhat/cassandra.spec#L167 > (2) https://github.com/apache/cassandra/blob/trunk/bin/cqlsh.py#L78 -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-18642) cqlsh on Cassandra 4.1.2 fails on Amazon Linux
[ https://issues.apache.org/jira/browse/CASSANDRA-18642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740319#comment-17740319 ] Brandon Williams commented on CASSANDRA-18642: -- The problem with installing the files in %post is they won't be properly tracked by the package manager like the ones in %files going to python_sitelib. From my reading of https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/installing_and_using_dynamic_programming_languages/assembly_packaging-python-3-rpms_installing-and-using-dynamic-programming-languages it would seem that we need to package for each 3.x version we want to support, which is unfortunate since we actually have support for all of them in a single code base. I'm going to keep thinking about this, I really don't want to have to do that. > cqlsh on Cassandra 4.1.2 fails on Amazon Linux > --- > > Key: CASSANDRA-18642 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18642 > Project: Cassandra > Issue Type: Bug > Components: CQL/Interpreter, Packaging >Reporter: Stefan Miklosovic >Assignee: Brandon Williams >Priority: Normal > Fix For: 4.0.x, 4.1.x, 5.x > > Time Spent: 10m > Remaining Estimate: 0h > > I am on the newest Amazon Linux Version 2023.1.20230629 > When I install cassandra-4.1.2 from Yum repository, it starts fine but cqlsh > prints this: > {code} > [ec2-user@ip-172-31-27-5 ~]$ cqlsh > Traceback (most recent call last): > File "/usr/bin/cqlsh.py", line 148, in > from cqlshlib import cql3handling, pylexotron, sslhandling, > cqlshhandling, authproviderhandling > ModuleNotFoundError: No module named 'cqlshlib' > {code} > If I change in /usr/bin/cqlsh.py > {code} > cqlshlibdir = os.path.join(CASSANDRA_PATH, 'pylib') > {code} > to this > {code} > cqlshlibdir = os.path.join('/usr/lib/python3.6', 'site-packages') > {code} > it works. > I am not sure if this is the correct way to handle that as not everybody has > python3.6. 
There is also no symlink pointing to this. I guess we would need > to find where packages are for Python we are going to use in cassandra.spec > and then change cqlsh.py to reflect that? > {code} > [ec2-user@ip-172-31-27-5 /]$ sudo find -type d -name site-packages > ./usr/lib/python3.9/site-packages > ./usr/lib/python3.6/site-packages > ./usr/lib64/python3.9/site-packages > {code} > I think we need to pass whatever this (1) expands to here (2) > (1) https://github.com/apache/cassandra/blob/trunk/redhat/cassandra.spec#L167 > (2) https://github.com/apache/cassandra/blob/trunk/bin/cqlsh.py#L78 -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Comment Edited] (CASSANDRA-18490) Add checksum validation to all index components on startup, streaming, and SSTable import
[ https://issues.apache.org/jira/browse/CASSANDRA-18490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740317#comment-17740317 ] Caleb Rackliffe edited comment on CASSANDRA-18490 at 7/5/23 8:45 PM: - Reviewed the PR and left my comments, including some suggestions around the new test failures. Overall, LGTM. The only thing I guess I'm not 100% clear on is whether we're definitely not going to want optional checksumming of anything on startup. (SSTable-level, column index, or both?) I'm not going to push super hard for it, and I understand the reasons for leaving it out (at least for now). We've got what might be the same exact problem to solve in CASSANDRA-18535, etc. CC [~mike_tr_adamson] was (Author: maedhroz): Reviewed the PR and left my comments, including some suggestions around the new test failures. Overall, LGTM. The only thing I guess I'm not 100% clear on is whether we're definitely not going to want optional checksumming of anything on startup. I'm not going to push super hard for it, and I understand the reasons for leaving it out (at least for now). We've got what might be the same exact problem to solve in CASSANDRA-18535, etc. CC [~mike_tr_adamson] > Add checksum validation to all index components on startup, streaming, and > SSTable import > - > > Key: CASSANDRA-18490 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18490 > Project: Cassandra > Issue Type: Improvement > Components: Feature/2i Index >Reporter: Mike Adamson >Assignee: Piotr Kolaczkowski >Priority: Normal > Fix For: 5.x > > > The SAI code currently does not checksum validate per-column index data files > at any point. It does checksum validate per-sstable components after a full > rebuild and it checksum validates the per-column metadata on opening. > We should checksum validate all index components on startup, full rebuild and > streaming. 
-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-18490) Add checksum validation to all index components on startup, streaming, and SSTable import
[ https://issues.apache.org/jira/browse/CASSANDRA-18490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740317#comment-17740317 ] Caleb Rackliffe commented on CASSANDRA-18490: - Reviewed the PR and left my comments, including some suggestions around the new test failures. Overall, LGTM. The only thing I guess I'm not 100% clear on is whether we're definitely not going to want optional checksumming of anything on startup. I'm not going to push super hard for it, and I understand the reasons for leaving it out (at least for now). We've got what might be the same exact problem to solve in CASSANDRA-18535, etc. CC [~mike_tr_adamson] > Add checksum validation to all index components on startup, streaming, and > SSTable import > - > > Key: CASSANDRA-18490 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18490 > Project: Cassandra > Issue Type: Improvement > Components: Feature/2i Index >Reporter: Mike Adamson >Assignee: Piotr Kolaczkowski >Priority: Normal > Fix For: 5.x > > > The SAI code currently does not checksum validate per-column index data files > at any point. It does checksum validate per-sstable components after a full > rebuild and it checksum validates the per-column metadata on opening. > We should checksum validate all index components on startup, full rebuild and > streaming. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-18490) Add checksum validation to all index components on startup, streaming, and SSTable import
[ https://issues.apache.org/jira/browse/CASSANDRA-18490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Caleb Rackliffe updated CASSANDRA-18490: Summary: Add checksum validation to all index components on startup, streaming, and SSTable import (was: Add checksum validation to all index components on startup, full rebuild and streaming) > Add checksum validation to all index components on startup, streaming, and > SSTable import > - > > Key: CASSANDRA-18490 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18490 > Project: Cassandra > Issue Type: Improvement > Components: Feature/2i Index >Reporter: Mike Adamson >Assignee: Piotr Kolaczkowski >Priority: Normal > Fix For: 5.x > > > The SAI code currently does not checksum validate per-column index data files > at any point. It does checksum validate per-sstable components after a full > rebuild and it checksum validates the per-column metadata on opening. > We should checksum validate all index components on startup, full rebuild and > streaming. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
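As a generic illustration of the kind of check the ticket asks for (this is not SAI's actual on-disk format or API), validating a component against a checksum recorded at write time looks like:

```python
import zlib

# Generic sketch, not the SAI implementation: read a component back and
# compare its CRC32 against the checksum stored when it was written.
# Startup, streaming, and SSTable import would each run such a check.
def validate_crc32(data: bytes, expected_crc: int) -> bool:
    return (zlib.crc32(data) & 0xFFFFFFFF) == expected_crc

payload = b"per-column index component bytes"
stored = zlib.crc32(payload) & 0xFFFFFFFF
print(validate_crc32(payload, stored))                  # True
print(validate_crc32(payload + b"corruption", stored))  # False
```

The review question above is about when this kind of validation runs (always, or optionally at startup), not about the mechanism itself.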
[jira] [Commented] (CASSANDRA-18639) Add duration and partition key count to sstablemetadata
[ https://issues.apache.org/jira/browse/CASSANDRA-18639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740316#comment-17740316 ] Brandon Williams commented on CASSANDRA-18639: -- The nits can also be fixed on commit, but either way: ||Branch||CI|| |[trunk|https://github.com/driftx/cassandra/tree/CASSANDRA-18639-trunk]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/1115/workflows/f635f612-08a6-4507-a343-a983931d6f73], [j11|https://app.circleci.com/pipelines/github/driftx/cassandra/1115/workflows/aeb89429-508f-4b38-bafb-2b66959f9e9a]| > Add duration and partition key count to sstablemetadata > --- > > Key: CASSANDRA-18639 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18639 > Project: Cassandra > Issue Type: Improvement > Components: Tool/sstable >Reporter: Timothy Tu >Assignee: Timothy Tu >Priority: Normal > Fix For: 5.x > > Time Spent: 50m > Remaining Estimate: 0h > > The new -m option will output metadata information for: > * Partition Key Count > * Duration > Partition key count is the total number of partitions in the sstable.. > For Time Window Compaction (TWC), the min and max timestamps together with > duration describe the bounds of the time window in the table. > {quote}{{Total partitions: 2430}} > {{Total rows: 8000}} > {{Total column set: 10}} > {{...}} > {{Min Timestamp: 06/28/2023 15:15:04 (1688067443651650)}} > {{Max Timestamp: 06/28/2023 15:15:58 (1688067500268865)}} > {{Duration Days: 0 Hours: 0 Minutes: 0 Seconds: 53}} > {quote} > The online docs in sstablemetadata.adoc will need to be updated as well. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
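The Duration line in the sample output above is just the delta between the min and max timestamps (microseconds since the epoch) broken into days/hours/minutes/seconds; a sketch of that arithmetic (the function name is illustrative, not taken from the patch):

```python
# Illustrative only: split a (max - min) timestamp delta, given in
# microseconds since the epoch, into the Days/Hours/Minutes/Seconds
# fields shown in the sample sstablemetadata output.
def duration_parts(min_ts_us, max_ts_us):
    seconds = (max_ts_us - min_ts_us) // 1_000_000
    days, rem = divmod(seconds, 86_400)
    hours, rem = divmod(rem, 3_600)
    minutes, secs = divmod(rem, 60)
    return days, hours, minutes, secs

print(duration_parts(0, 53_000_000))  # (0, 0, 0, 53)
```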
[jira] [Comment Edited] (CASSANDRA-18639) Add duration and partition key count to sstablemetadata
[ https://issues.apache.org/jira/browse/CASSANDRA-18639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740310#comment-17740310 ] Stefan Miklosovic edited comment on CASSANDRA-18639 at 7/5/23 8:27 PM: --- I commented few nits on the PR. Might be done upon commit. Apart from briefly looking at it, I have NOT tested that. Note: this might be a candidate to rewrite it to use TableBuilder instead of StringBuilder but might be done in a separate ticket etc ... No need to deal with this now (if ever, but still ... an idea) was (Author: smiklosovic): I commented few nits on the PR. Might be done upon commit. Apart from briefly looking at it, I have tested that. Note: this might be a candidate to rewrite it to use TableBuilder instead of StringBuilder but might be done in a separate ticket etc ... No need to deal with this now (if ever, but still ... an idea) > Add duration and partition key count to sstablemetadata > --- > > Key: CASSANDRA-18639 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18639 > Project: Cassandra > Issue Type: Improvement > Components: Tool/sstable >Reporter: Timothy Tu >Assignee: Timothy Tu >Priority: Normal > Fix For: 5.x > > Time Spent: 40m > Remaining Estimate: 0h > > The new -m option will output metadata information for: > * Partition Key Count > * Duration > Partition key count is the total number of partitions in the sstable.. > For Time Window Compaction (TWC), the min and max timestamps together with > duration describe the bounds of the time window in the table. > {quote}{{Total partitions: 2430}} > {{Total rows: 8000}} > {{Total column set: 10}} > {{...}} > {{Min Timestamp: 06/28/2023 15:15:04 (1688067443651650)}} > {{Max Timestamp: 06/28/2023 15:15:58 (1688067500268865)}} > {{Duration Days: 0 Hours: 0 Minutes: 0 Seconds: 53}} > {quote} > The online docs in sstablemetadata.adoc will need to be updated as well. 
-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Comment Edited] (CASSANDRA-18639) Add duration and partition key count to sstablemetadata
[ https://issues.apache.org/jira/browse/CASSANDRA-18639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740315#comment-17740315 ] Brad Schoening edited comment on CASSANDRA-18639 at 7/5/23 8:27 PM: [~brandon.williams] I've reviewed it and it looks good, [~timothytu] can fix the nits. was (Author: bschoeni): [~brandon.williams] I've review it and it looks good, [~timothytu] can fix the nits. > Add duration and partition key count to sstablemetadata > --- > > Key: CASSANDRA-18639 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18639 > Project: Cassandra > Issue Type: Improvement > Components: Tool/sstable >Reporter: Timothy Tu >Assignee: Timothy Tu >Priority: Normal > Fix For: 5.x > > Time Spent: 40m > Remaining Estimate: 0h > > The new -m option will output metadata information for: > * Partition Key Count > * Duration > Partition key count is the total number of partitions in the sstable.. > For Time Window Compaction (TWC), the min and max timestamps together with > duration describe the bounds of the time window in the table. > {quote}{{Total partitions: 2430}} > {{Total rows: 8000}} > {{Total column set: 10}} > {{...}} > {{Min Timestamp: 06/28/2023 15:15:04 (1688067443651650)}} > {{Max Timestamp: 06/28/2023 15:15:58 (1688067500268865)}} > {{Duration Days: 0 Hours: 0 Minutes: 0 Seconds: 53}} > {quote} > The online docs in sstablemetadata.adoc will need to be updated as well. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-18639) Add duration and partition key count to sstablemetadata
[ https://issues.apache.org/jira/browse/CASSANDRA-18639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740315#comment-17740315 ] Brad Schoening commented on CASSANDRA-18639: [~brandon.williams] I've reviewed it and it looks good, [~timothytu] can fix the nits. > Add duration and partition key count to sstablemetadata > --- > > Key: CASSANDRA-18639 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18639 > Project: Cassandra > Issue Type: Improvement > Components: Tool/sstable >Reporter: Timothy Tu >Assignee: Timothy Tu >Priority: Normal > Fix For: 5.x > > Time Spent: 40m > Remaining Estimate: 0h > > The new -m option will output metadata information for: > * Partition Key Count > * Duration > Partition key count is the total number of partitions in the sstable. > For Time Window Compaction (TWC), the min and max timestamps together with > duration describe the bounds of the time window in the table. > {quote}{{Total partitions: 2430}} > {{Total rows: 8000}} > {{Total column set: 10}} > {{...}} > {{Min Timestamp: 06/28/2023 15:15:04 (1688067443651650)}} > {{Max Timestamp: 06/28/2023 15:15:58 (1688067500268865)}} > {{Duration Days: 0 Hours: 0 Minutes: 0 Seconds: 53}} > {quote} > The online docs in sstablemetadata.adoc will need to be updated as well. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-17808) Optionally avoid hint transfer during decommission
[ https://issues.apache.org/jira/browse/CASSANDRA-17808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740311#comment-17740311 ] Stefan Miklosovic commented on CASSANDRA-17808: --- [~maedhroz] are you ok with backporting to 4.1? How is this ad-hoc addition of features done? We just agree on that? > Optionally avoid hint transfer during decommission > -- > > Key: CASSANDRA-17808 > URL: https://issues.apache.org/jira/browse/CASSANDRA-17808 > Project: Cassandra > Issue Type: Improvement > Components: Consistency/Hints >Reporter: Caleb Rackliffe >Assignee: Caleb Rackliffe >Priority: Normal > Fix For: 5.0 > > Time Spent: 1h 50m > Remaining Estimate: 0h > > Both because they aren’t strictly necessary to maintain consistency, and > because throttling induced by their rate-limiter (see > {{hinted_handoff_throttle}}) may stall progress, transferring hints during > decommission (specifically unbootstrap) rather than just pausing, disabling, > and truncating them probably doesn’t make sense. The only other concern would > be the BatchLog, which nominally depends on hint delivery to maintain its > "guarantees". However, during BatchLog replay on unbootstrap, > {{ReplayingBatch}} ignores batches older the gcgs anyway. > Here's a proposal from [~aleksey] that might strike a reasonable balance: > 1.) We continue to transfer hints by default during decommission, but at a > higher rate. We could, for instance, stop having {{DispatchHintsTask}} divide > its effective rate by the number of nodes in the cluster. > {noformat} > int nodesCount = Math.max(1, > StorageService.instance.getTokenMetadata().getAllEndpoints().size() - 1); > double throttleInBytes = DatabaseDescriptor.getHintedHandoffThrottleInKiB() * > 1024.0 / nodesCount; > this.rateLimiter = RateLimiter.create(throttleInBytes == 0 ? Double.MAX_VALUE > : throttleInBytes); > {noformat} > 2.) We provide an option to simply avoid transferring hints during > unbootstrap. 
Even this would only take the BatchLog from "best effort" to > "slightly less effort" ;) -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
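The throttle arithmetic in the quoted `DispatchHintsTask` snippet can be sketched as follows (a Python illustration, not the Java code): the configured throttle is divided across the cluster, so the effective per-task rate shrinks as the cluster grows, which is exactly what proposal (1) would stop doing during decommission.

```python
# Sketch of the quoted rate computation. A throttle of 0 means
# "unthrottled" (the Java code substitutes Double.MAX_VALUE).
def dispatch_rate_bytes_per_sec(throttle_kib, cluster_size, divide_by_nodes=True):
    nodes = max(1, cluster_size - 1) if divide_by_nodes else 1
    rate = throttle_kib * 1024.0 / nodes
    return float("inf") if rate == 0 else rate

print(dispatch_rate_bytes_per_sec(1024, 10))         # per-node share of 1 MiB/s
print(dispatch_rate_bytes_per_sec(1024, 10, False))  # full configured 1 MiB/s
```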
[jira] [Commented] (CASSANDRA-18639) Add duration and partition key count to sstablemetadata
[ https://issues.apache.org/jira/browse/CASSANDRA-18639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740310#comment-17740310 ] Stefan Miklosovic commented on CASSANDRA-18639: --- I commented few nits on the PR. Might be done upon commit. Apart from briefly looking at it, I have tested that. Note: this might be a candidate to rewrite it to use TableBuilder instead of StringBuilder but might be done in a separate ticket etc ... No need to deal with this now (if ever, but still ... an idea) > Add duration and partition key count to sstablemetadata > --- > > Key: CASSANDRA-18639 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18639 > Project: Cassandra > Issue Type: Improvement > Components: Tool/sstable >Reporter: Timothy Tu >Assignee: Timothy Tu >Priority: Normal > Fix For: 5.x > > Time Spent: 40m > Remaining Estimate: 0h > > The new -m option will output metadata information for: > * Partition Key Count > * Duration > Partition key count is the total number of partitions in the sstable.. > For Time Window Compaction (TWC), the min and max timestamps together with > duration describe the bounds of the time window in the table. > {quote}{{Total partitions: 2430}} > {{Total rows: 8000}} > {{Total column set: 10}} > {{...}} > {{Min Timestamp: 06/28/2023 15:15:04 (1688067443651650)}} > {{Max Timestamp: 06/28/2023 15:15:58 (1688067500268865)}} > {{Duration Days: 0 Hours: 0 Minutes: 0 Seconds: 53}} > {quote} > The online docs in sstablemetadata.adoc will need to be updated as well. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-18639) Add duration and partition key count to sstablemetadata
[ https://issues.apache.org/jira/browse/CASSANDRA-18639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740309#comment-17740309 ] Brandon Williams commented on CASSANDRA-18639: -- If this looks good to you [~bschoeni] I'll run CI. > Add duration and partition key count to sstablemetadata > --- > > Key: CASSANDRA-18639 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18639 > Project: Cassandra > Issue Type: Improvement > Components: Tool/sstable >Reporter: Timothy Tu >Assignee: Timothy Tu >Priority: Normal > Fix For: 5.x > > Time Spent: 10m > Remaining Estimate: 0h > > The new -m option will output metadata information for: > * Partition Key Count > * Duration > Partition key count is the total number of partitions in the sstable.. > For Time Window Compaction (TWC), the min and max timestamps together with > duration describe the bounds of the time window in the table. > {quote}{{Total partitions: 2430}} > {{Total rows: 8000}} > {{Total column set: 10}} > {{...}} > {{Min Timestamp: 06/28/2023 15:15:04 (1688067443651650)}} > {{Max Timestamp: 06/28/2023 15:15:58 (1688067500268865)}} > {{Duration Days: 0 Hours: 0 Minutes: 0 Seconds: 53}} > {quote} > The online docs in sstablemetadata.adoc will need to be updated as well. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-18639) Add duration and partition key count to sstablemetadata
[ https://issues.apache.org/jira/browse/CASSANDRA-18639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Timothy Tu updated CASSANDRA-18639: --- Impacts: Docs (was: None) Test and Documentation Plan: Injected mock data and validated output Added updated documentation for various options and updated output Status: Patch Available (was: Open) > Add duration and partition key count to sstablemetadata > --- > > Key: CASSANDRA-18639 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18639 > Project: Cassandra > Issue Type: Improvement > Components: Tool/sstable >Reporter: Timothy Tu >Assignee: Timothy Tu >Priority: Normal > Fix For: 5.x > > Time Spent: 10m > Remaining Estimate: 0h > > The new -m option will output metadata information for: > * Partition Key Count > * Duration > Partition key count is the total number of partitions in the sstable.. > For Time Window Compaction (TWC), the min and max timestamps together with > duration describe the bounds of the time window in the table. > {quote}{{Total partitions: 2430}} > {{Total rows: 8000}} > {{Total column set: 10}} > {{...}} > {{Min Timestamp: 06/28/2023 15:15:04 (1688067443651650)}} > {{Max Timestamp: 06/28/2023 15:15:58 (1688067500268865)}} > {{Duration Days: 0 Hours: 0 Minutes: 0 Seconds: 53}} > {quote} > The online docs in sstablemetadata.adoc will need to be updated as well. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Assigned] (CASSANDRA-18647) CASTing a float to decimal adds wrong digits
[ https://issues.apache.org/jira/browse/CASSANDRA-18647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefan Miklosovic reassigned CASSANDRA-18647: - Assignee: Stefan Miklosovic > CASTing a float to decimal adds wrong digits > > > Key: CASSANDRA-18647 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18647 > Project: Cassandra > Issue Type: Bug > Components: CQL/Semantics >Reporter: Nadav Har'El >Assignee: Stefan Miklosovic >Priority: Normal > Fix For: 5.x > > > If I create a table with a *float* (32-bit) column, and cast it to the > *decimal* type, the casting wrongly passes through the double (64-bit) type > and picks up extra, wrong, digits. For example, if we have a column e of type > "float", and run > INSERT INTO tbl (p, e) VALUES (1, 5.2) > SELECT CAST(e AS decimal) FROM tbl WHERE p=1 > The result is the "decimal" value 5.199999809265137, with all those extra > wrong digits. It would have been better to get back the decimal value 5.2, > with only two significant digits. > It appears that this happens because Cassandra's implementation first > converts the 32-bit float into a 64-bit double, and only then converts that - > with all the silly extra digits it picked up in the first conversion - into a > "decimal" value. > Contrast this with CAST(e AS text) which works correctly - it returns the > string "5.2" - only the actual digits of the 32-bit floating point value are > converted to the string, without inventing additional digits in the process. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
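The widening the ticket describes can be reproduced outside Cassandra; here is a Python illustration, using struct to emulate a 32-bit float since native Python floats are 64-bit doubles:

```python
import struct
from decimal import Decimal

# Round-trip 5.2 through a 32-bit float, as storing it in a "float" column
# does, then widen it back to a 64-bit double: the extra digits appear.
f32_as_double = struct.unpack("f", struct.pack("f", 5.2))[0]
print(f32_as_double)           # 5.199999809265137
print(Decimal(f32_as_double))  # 5.19999980926513671875 (exact binary value)

# The CAST(e AS text) behaviour corresponds to Java's Float.toString(5.2f),
# which prints the shortest decimal that round-trips the 32-bit value: "5.2".
```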
[jira] [Commented] (CASSANDRA-17992) Upgrade Netty on 5.0
[ https://issues.apache.org/jira/browse/CASSANDRA-17992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740295#comment-17740295 ] Jacek Lewandowski commented on CASSANDRA-17992: --- I don't really know what we actually need at runtime. We had the freedom to do that before by choosing individual Netty dependencies instead of including the uber netty-all jar. Though, we haven't done that. > Upgrade Netty on 5.0 > > > Key: CASSANDRA-17992 > URL: https://issues.apache.org/jira/browse/CASSANDRA-17992 > Project: Cassandra > Issue Type: Task > Components: Dependencies >Reporter: Ekaterina Dimitrova >Assignee: Jacek Lewandowski >Priority: Low > Fix For: 5.x > > > I haven't been able to identify from the Netty docs which was the lowest > version where JDK17 was added but we are about 40 versions behind in netty 4 > so I suspect we better update. > -We need to consider there was an issue with class cast exceptions when > building with JDK17 with newer versions of netty (the newest available in > March 2022). For the record, we didn't see those when running CI on JDK8 and > JDK11. We also need to carefully revise the changes between the netty > versions. -->- CASSANDRA-18180 > Upgrading will cover also a fix in netty that was discussed in > [this|https://the-asf.slack.com/archives/CK23JSY2K/p1665567660202989] ASF > Slack thread. > CC [~benedict] , [~aleksey] -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-18643) jackson-core vulnerability: CVE-2022-45688
[ https://issues.apache.org/jira/browse/CASSANDRA-18643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-18643: - Test and Documentation Plan: run CI Status: Patch Available (was: Open) > jackson-core vulnerability: CVE-2022-45688 > -- > > Key: CASSANDRA-18643 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18643 > Project: Cassandra > Issue Type: Bug > Components: Dependencies >Reporter: Brandon Williams >Assignee: Brandon Williams >Priority: Normal > Fix For: 3.11.x, 4.0.x, 4.1.x, 5.x > > > This is failing owasp. > https://nvd.nist.gov/vuln/detail/CVE-2022-45688 > {quote} > A stack overflow in the XML.toJSONObject component of hutool-json v5.8.10 > allows attackers to cause a Denial of Service (DoS) via crafted JSON or XML > data. > {quote} -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-18649) netty-all vulnerability: CVE-2023-34462
[ https://issues.apache.org/jira/browse/CASSANDRA-18649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-18649: - Test and Documentation Plan: run CI Status: Patch Available (was: Open) > netty-all vulnerability: CVE-2023-34462 > --- > > Key: CASSANDRA-18649 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18649 > Project: Cassandra > Issue Type: Bug > Components: Feature/Encryption >Reporter: Brandon Williams >Assignee: Brandon Williams >Priority: Normal > Fix For: 3.0.x, 3.11.x, 4.0.x, 4.1.x, 5.x > > > This is failing owasp: > https://nvd.nist.gov/vuln/detail/CVE-2023-34462 > {quote} > The `SniHandler` can allocate up to 16MB of heap for each channel during the > TLS handshake. When the handler or the channel does not have an idle timeout, > it can be used to make a TCP server using the `SniHandler` to allocate 16MB > of heap. > {quote} -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[cassandra-website] branch asf-staging updated (4fb6471e5 -> 86aee3099)
This is an automated email from the ASF dual-hosted git repository. git-site-role pushed a change to branch asf-staging in repository https://gitbox.apache.org/repos/asf/cassandra-website.git discard 4fb6471e5 generate docs for 466d6ffe new 86aee3099 generate docs for 466d6ffe This update added new revisions after undoing existing revisions. That is to say, some revisions that were in the old version of the branch are not in the new version. This situation occurs when a user --force pushes a change and generates a repository containing something like this: * -- * -- B -- O -- O -- O (4fb6471e5) \ N -- N -- N refs/heads/asf-staging (86aee3099) You should already have received notification emails for all of the O revisions, and so the following emails describe only the N revisions from the common base, B. Any revisions marked "omit" are not gone; other references still refer to them. Any revisions marked "discard" are gone forever. The 1 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference. Summary of changes: .../tools/nodetool/getstreamthroughput.html| 3 ++- .../tools/nodetool/getstreamthroughput.html| 3 ++- content/search-index.js| 2 +- site-ui/build/ui-bundle.zip| Bin 4796900 -> 4796900 bytes 4 files changed, 5 insertions(+), 3 deletions(-) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Comment Edited] (CASSANDRA-18645) Upgrade guava on trunk
[ https://issues.apache.org/jira/browse/CASSANDRA-18645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17739239#comment-17739239 ] Ekaterina Dimitrova edited comment on CASSANDRA-18645 at 7/5/23 4:47 PM: - We must remove the exclusion of failureaccess - Guava InternalFutureFailureAccess and InternalFuturesContains. From maven: "com.google.common.util.concurrent.internal.InternalFutureFailureAccess and InternalFutures. Most Guava users will never need to use this artifact. Its classes are conceptually a part of Guava, but they were moved to a separate artifact so that Android libraries can use them without pulling in all of Guava (just as they can use ListenableFuture by depending on the listenablefuture artifact)." I did a quick preliminary run in CI a few weeks ago when I realized Guava added JDK17 - [https://app.circleci.com/pipelines/github/ekaterinadimitrova2/cassandra?branch=test-guava]. I do not see any failures, but I am not surprised because, in newer Guava versions, there is a promise for no breakages in API, even if a method is deprecated. I have to finish next week the review of the Guava [release notes|https://github.com/google/guava/releases] before pushing this for review. was (Author: e.dimitrova): We must remove the exclusion of failureaccess - Guava InternalFutureFailureAccess and InternalFuturesContains. From maven: "com.google.common.util.concurrent.internal.InternalFutureFailureAccess and InternalFutures. Most Guava users will never need to use this artifact. Its classes are conceptually a part of Guava, but they were moved to a separate artifact so that Android libraries can use them without pulling in all of Guava (just as they can use ListenableFuture by depending on the listenablefuture artifact)." I did a quick preliminary run in CI a few weeks ago when I realized Guava added JDK17 - [https://app.circleci.com/pipelines/github/ekaterinadimitrova2/cassandra?branch=test-guava].
I do not see any failures, but I am not surprised because, in newer Guava versions, there is a promise for no breakages in API, even if a method is deprecated. I have to finish next week the review of the Guava [release notes|https://github.com/google/guava/releases?page=2] before pushing this for review. > Upgrade guava on trunk > -- > > Key: CASSANDRA-18645 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18645 > Project: Cassandra > Issue Type: Task > Components: Build >Reporter: Ekaterina Dimitrova >Assignee: Ekaterina Dimitrova >Priority: Normal > Labels: Dependency > Fix For: 5.x > > > Recently guava added JDK17 in CI and fixed some bugs down the road. > Upgrading before the major 5.0 release is something we should do. > Also, the current version that Cassandra uses is from 2018. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
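For readers following along, a build-file exclusion of the failureaccess artifact — the thing the comment above says must be removed — would look roughly like the Maven fragment below. This is an illustrative sketch only: the exact coordinates and the file the project's real exclusion lives in are assumptions, not a quote from the patch.

```xml
<!-- Hypothetical sketch: a failureaccess exclusion on the guava dependency.
     Removing a block like this re-admits the artifact, which newer Guava
     versions need for InternalFutureFailureAccess/InternalFutures. -->
<dependency>
  <groupId>com.google.guava</groupId>
  <artifactId>guava</artifactId>
  <exclusions>
    <exclusion>
      <groupId>com.google.guava</groupId>
      <artifactId>failureaccess</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```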
[jira] [Commented] (CASSANDRA-18649) netty-all vulnerability: CVE-2023-34462
[ https://issues.apache.org/jira/browse/CASSANDRA-18649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740275#comment-17740275 ] Brandon Williams commented on CASSANDRA-18649: -- Given that none of our netty-based services should be exposed to untrusted networks, we can suppress this. Trunk will get an upgraded netty in CASSANDRA-17992. ||Branch||CI|| |[3.0|https://github.com/driftx/cassandra/tree/CASSANDRA-18649-3.0]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/1108/workflows/ac3c8e31-f776-49c5-9365-7654f9e7fa15]| |[3.11|https://github.com/driftx/cassandra/tree/CASSANDRA-18649-3.11]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/1107/workflows/36e6988d-00f5-46a5-bd36-051f63a97bc0]| |[4.0|https://github.com/driftx/cassandra/tree/CASSANDRA-18649-4.0]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/1110/workflows/e32dd6d7-b99f-4764-9006-88acaece57d9], [j11|https://app.circleci.com/pipelines/github/driftx/cassandra/1110/workflows/aefd6698-3e62-452e-9b9d-d1ff5ebbf0ea]| |[4.1|https://github.com/driftx/cassandra/tree/CASSANDRA-18649-4.1]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/1109/workflows/e8fd29d0-5166-4b86-8e8a-53a48651488a], [j11|https://app.circleci.com/pipelines/github/driftx/cassandra/1109/workflows/9d90fcd1-05c9-4c60-88c8-5e0a39dda4f4]| |[trunk|https://github.com/driftx/cassandra/tree/CASSANDRA-18649-trunk]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/1106/workflows/6f36192a-1ced-44e2-8c9d-2e0c748af672], [j11|https://app.circleci.com/pipelines/github/driftx/cassandra/1106/workflows/17f2bfaf-2db2-4f75-839b-b121658f46ae]| > netty-all vulnerability: CVE-2023-34462 > --- > > Key: CASSANDRA-18649 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18649 > Project: Cassandra > Issue Type: Bug > Components: Feature/Encryption >Reporter: Brandon Williams >Assignee: Brandon Williams >Priority: Normal > Fix For: 3.0.x, 3.11.x, 4.0.x, 
4.1.x, 5.x > > > This is failing owasp: > https://nvd.nist.gov/vuln/detail/CVE-2023-34462 > {quote} > The `SniHandler` can allocate up to 16MB of heap for each channel during the > TLS handshake. When the handler or the channel does not have an idle timeout, > it can be used to make a TCP server using the `SniHandler` to allocate 16MB > of heap. > {quote} -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Comment Edited] (CASSANDRA-18643) jackson-core vulnerability: CVE-2022-45688
[ https://issues.apache.org/jira/browse/CASSANDRA-18643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17739111#comment-17739111 ] Brandon Williams edited comment on CASSANDRA-18643 at 7/5/23 4:01 PM: -- We can suppress, this appears to be a false positive but regardless we aren't calling any methods like this. Owasp is currently down so I'll start these after I can verify. ||Branch||CI|| |[3.11|https://github.com/driftx/cassandra/tree/CASSANDRA-18643-3.11]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/1104/workflows/f7aec536-ac9a-4baf-8023-425dd25e595b]| |[4.0|https://github.com/driftx/cassandra/tree/CASSANDRA-18643-4.0]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/1105/workflows/6b2fffdb-8e10-4e08-b25a-a4ce0bc057ea], [j11|https://app.circleci.com/pipelines/github/driftx/cassandra/1105/workflows/816d8f9a-d50f-432f-a120-543125a18736]| |[4.1|https://github.com/driftx/cassandra/tree/CASSANDRA-18643-4.1]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/1102/workflows/fc5de164-b0a8-412a-a9cc-a065e720e9e0], [j11|https://app.circleci.com/pipelines/github/driftx/cassandra/1102/workflows/247840f4-55d3-4c96-bb1b-895280509579]| |[trunk|https://github.com/driftx/cassandra/tree/CASSANDRA-18643-trunk]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/1103/workflows/d3c4198c-30de-4152-a773-13aad7a12c26], [j11|https://app.circleci.com/pipelines/github/driftx/cassandra/1103/workflows/80039f33-e26d-4706-b771-d56b02903b50]| edit: figured out [the problem|https://github.com/apache/cassandra/commit/00cf31882beaf5e28dc600489f9ea8b69d1803df] and started CI was (Author: brandon.williams): We can suppress, this appears to be a false positive but regardless we aren't calling any methods like this. Owasp is currently down so I'll start these after I can verify. 
||Branch||CI|| |[3.11|https://github.com/driftx/cassandra/tree/CASSANDRA-18643-3.11]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/1097/workflows/c20978b9-e60c-48d6-a172-5df5a56561e0]| |[4.0|https://github.com/driftx/cassandra/tree/CASSANDRA-18643-4.0]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/1100/workflows/defd016a-a101-448b-8eb8-bf969daa9441], [j11|https://app.circleci.com/pipelines/github/driftx/cassandra/1100/workflows/c0c280b7-ea52-4d2e-bbaa-16df4dda1f66]| |[4.1|https://github.com/driftx/cassandra/tree/CASSANDRA-18643-4.1]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/1099/workflows/f20b9a04-f091-4893-962d-9245abfa9a23], [j11|https://app.circleci.com/pipelines/github/driftx/cassandra/1099/workflows/3bf61e5d-98f4-4dd4-a994-fc53f77e7eb8]| |[trunk|https://github.com/driftx/cassandra/tree/CASSANDRA-18643-trunk]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/1098/workflows/35288fbc-d5b4-4b8b-9ff9-1b8a1c872fd5], [j11|https://app.circleci.com/pipelines/github/driftx/cassandra/1098/workflows/d3311324-d3c0-4ed1-bb8f-ccfc47aef040]| edit: figured out [the problem|https://github.com/apache/cassandra/commit/00cf31882beaf5e28dc600489f9ea8b69d1803df] and started CI > jackson-core vulnerability: CVE-2022-45688 > -- > > Key: CASSANDRA-18643 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18643 > Project: Cassandra > Issue Type: Bug > Components: Dependencies >Reporter: Brandon Williams >Assignee: Brandon Williams >Priority: Normal > Fix For: 3.11.x, 4.0.x, 4.1.x, 5.x > > > This is failing owasp. > https://nvd.nist.gov/vuln/detail/CVE-2022-45688 > {quote} > A stack overflow in the XML.toJSONObject component of hutool-json v5.8.10 > allows attackers to cause a Denial of Service (DoS) via crafted JSON or XML > data. 
> {quote} -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Comment Edited] (CASSANDRA-18651) When starting up after a failed repair was detected Cassandra should clean up automatically
[ https://issues.apache.org/jira/browse/CASSANDRA-18651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740257#comment-17740257 ] Brandon Williams edited comment on CASSANDRA-18651 at 7/5/23 3:48 PM: -- bq. the operator has to manually clear out the data directory before restarting -Will the server not start?- Description was amended to clarify this. was (Author: brandon.williams): bq. the operator has to manually clear out the data directory before restarting Will the server not start? > When starting up after a failed repair was detected Cassandra should clean up > automatically > --- > > Key: CASSANDRA-18651 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18651 > Project: Cassandra > Issue Type: Improvement >Reporter: Jordan West >Priority: Normal > > When a repair fails and is not restarted using "resume" the operator has to > manually clear out the data directory before restarting. Instead Cassandra > could detect this case (or does detect this case) and can clean up itself. > This removes one error prone step for the operator or control plane. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Comment Edited] (CASSANDRA-18643) jackson-core vulnerability: CVE-2022-45688
[ https://issues.apache.org/jira/browse/CASSANDRA-18643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17739111#comment-17739111 ] Brandon Williams edited comment on CASSANDRA-18643 at 7/5/23 3:45 PM: -- We can suppress, this appears to be a false positive but regardless we aren't calling any methods like this. Owasp is currently down so I'll start these after I can verify. ||Branch||CI|| |[3.11|https://github.com/driftx/cassandra/tree/CASSANDRA-18643-3.11]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/1097/workflows/c20978b9-e60c-48d6-a172-5df5a56561e0]| |[4.0|https://github.com/driftx/cassandra/tree/CASSANDRA-18643-4.0]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/1100/workflows/defd016a-a101-448b-8eb8-bf969daa9441], [j11|https://app.circleci.com/pipelines/github/driftx/cassandra/1100/workflows/c0c280b7-ea52-4d2e-bbaa-16df4dda1f66]| |[4.1|https://github.com/driftx/cassandra/tree/CASSANDRA-18643-4.1]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/1099/workflows/f20b9a04-f091-4893-962d-9245abfa9a23], [j11|https://app.circleci.com/pipelines/github/driftx/cassandra/1099/workflows/3bf61e5d-98f4-4dd4-a994-fc53f77e7eb8]| |[trunk|https://github.com/driftx/cassandra/tree/CASSANDRA-18643-trunk]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/1098/workflows/35288fbc-d5b4-4b8b-9ff9-1b8a1c872fd5], [j11|https://app.circleci.com/pipelines/github/driftx/cassandra/1098/workflows/d3311324-d3c0-4ed1-bb8f-ccfc47aef040]| edit: figured out [the problem|https://github.com/apache/cassandra/commit/00cf31882beaf5e28dc600489f9ea8b69d1803df] and started CI was (Author: brandon.williams): We can suppress, this appears to be a false positive but regardless we aren't calling any methods like this. Owasp is currently down so I'll start these after I can verify. 
||Branch||CI|| |[3.11|https://github.com/driftx/cassandra/tree/CASSANDRA-18643-3.11]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/1097/workflows/c20978b9-e60c-48d6-a172-5df5a56561e0]| |[4.0|https://github.com/driftx/cassandra/tree/CASSANDRA-18643-4.0]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/1100/workflows/defd016a-a101-448b-8eb8-bf969daa9441], [j11|https://app.circleci.com/pipelines/github/driftx/cassandra/1100/workflows/c0c280b7-ea52-4d2e-bbaa-16df4dda1f66]| |[4.1|https://github.com/driftx/cassandra/tree/CASSANDRA-18643-4.1]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/1099/workflows/f20b9a04-f091-4893-962d-9245abfa9a23], [j11|https://app.circleci.com/pipelines/github/driftx/cassandra/1099/workflows/3bf61e5d-98f4-4dd4-a994-fc53f77e7eb8]| |[trunk|https://github.com/driftx/cassandra/tree/CASSANDRA-18643-trunk]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/1098/workflows/35288fbc-d5b4-4b8b-9ff9-1b8a1c872fd5], [j11|https://app.circleci.com/pipelines/github/driftx/cassandra/1098/workflows/d3311324-d3c0-4ed1-bb8f-ccfc47aef040]| > jackson-core vulnerability: CVE-2022-45688 > -- > > Key: CASSANDRA-18643 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18643 > Project: Cassandra > Issue Type: Bug > Components: Dependencies >Reporter: Brandon Williams >Assignee: Brandon Williams >Priority: Normal > Fix For: 3.11.x, 4.0.x, 4.1.x, 5.x > > > This is failing owasp. > https://nvd.nist.gov/vuln/detail/CVE-2022-45688 > {quote} > A stack overflow in the XML.toJSONObject component of hutool-json v5.8.10 > allows attackers to cause a Denial of Service (DoS) via crafted JSON or XML > data. > {quote} -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-18651) When starting up after a failed repair was detected Cassandra should clean up automatically
[ https://issues.apache.org/jira/browse/CASSANDRA-18651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740257#comment-17740257 ] Brandon Williams commented on CASSANDRA-18651: -- bq. the operator has to manually clear out the data directory before restarting Will the server not start? > When starting up after a failed repair was detected Cassandra should clean up > automatically > --- > > Key: CASSANDRA-18651 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18651 > Project: Cassandra > Issue Type: Improvement >Reporter: Jordan West >Priority: Normal > > When a repair fails and is not restarted using "resume" the operator has to > manually clear out the data directory before restarting. Instead Cassandra > could detect this case (or does detect this case) and can clean up itself. > This removes one error prone step for the operator or control plane. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Created] (CASSANDRA-18651) When starting up after a failed repair was detected Cassandra should clean up automatically
Jordan West created CASSANDRA-18651: --- Summary: When starting up after a failed repair was detected Cassandra should clean up automatically Key: CASSANDRA-18651 URL: https://issues.apache.org/jira/browse/CASSANDRA-18651 Project: Cassandra Issue Type: Improvement Reporter: Jordan West When a repair fails and is not restarted using "resume" the operator has to manually clear out the data directory before restarting. Instead Cassandra could detect this case (or does detect this case) and can clean up itself. This removes one error prone step for the operator or control plane. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[cassandra] 01/01: Merge branch 'cassandra-4.1' into trunk
This is an automated email from the ASF dual-hosted git repository. brandonwilliams pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/cassandra.git commit a75b35d7912f851208ede5c61d08b2e089cbed56 Merge: ac25943876 4a9fafb310 Author: Brandon Williams AuthorDate: Wed Jul 5 10:08:26 2023 -0500 Merge branch 'cassandra-4.1' into trunk - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[cassandra] 01/01: Merge branch 'cassandra-4.0' into cassandra-4.1
This is an automated email from the ASF dual-hosted git repository. brandonwilliams pushed a commit to branch cassandra-4.1 in repository https://gitbox.apache.org/repos/asf/cassandra.git commit 4a9fafb310d9218db044ddacd2f6b6f3497022c8 Merge: d2ad51c2f6 0c79b2857a Author: Brandon Williams AuthorDate: Wed Jul 5 10:08:20 2023 -0500 Merge branch 'cassandra-4.0' into cassandra-4.1 - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[cassandra] 01/01: Merge branch 'cassandra-3.11' into cassandra-4.0
This is an automated email from the ASF dual-hosted git repository. brandonwilliams pushed a commit to branch cassandra-4.0 in repository https://gitbox.apache.org/repos/asf/cassandra.git commit 0c79b2857ac86e1e6fa40ddbff04a8bb5c603375 Merge: 0a53770ddc 00cf31882b Author: Brandon Williams AuthorDate: Wed Jul 5 10:07:47 2023 -0500 Merge branch 'cassandra-3.11' into cassandra-4.0 - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[cassandra] branch cassandra-4.0 updated (0a53770ddc -> 0c79b2857a)
This is an automated email from the ASF dual-hosted git repository. brandonwilliams pushed a change to branch cassandra-4.0 in repository https://gitbox.apache.org/repos/asf/cassandra.git from 0a53770ddc Merge branch 'cassandra-3.11' into cassandra-4.0 new 00cf31882b Ninja fix my bad merge new 0c79b2857a Merge branch 'cassandra-3.11' into cassandra-4.0 The 2 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference. Summary of changes: - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[cassandra] branch cassandra-4.1 updated (d2ad51c2f6 -> 4a9fafb310)
This is an automated email from the ASF dual-hosted git repository. brandonwilliams pushed a change to branch cassandra-4.1 in repository https://gitbox.apache.org/repos/asf/cassandra.git from d2ad51c2f6 Merge branch 'cassandra-4.0' into cassandra-4.1 new 00cf31882b Ninja fix my bad merge new 0c79b2857a Merge branch 'cassandra-3.11' into cassandra-4.0 new 4a9fafb310 Merge branch 'cassandra-4.0' into cassandra-4.1 The 3 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference. Summary of changes: - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[cassandra] branch trunk updated (ac25943876 -> a75b35d791)
This is an automated email from the ASF dual-hosted git repository. brandonwilliams pushed a change to branch trunk in repository https://gitbox.apache.org/repos/asf/cassandra.git from ac25943876 Make `ant generate-idea-files` support the current JDK new 00cf31882b Ninja fix my bad merge new 0c79b2857a Merge branch 'cassandra-3.11' into cassandra-4.0 new 4a9fafb310 Merge branch 'cassandra-4.0' into cassandra-4.1 new a75b35d791 Merge branch 'cassandra-4.1' into trunk The 4 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference. Summary of changes: - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[cassandra] branch cassandra-3.11 updated: Ninja fix my bad merge
This is an automated email from the ASF dual-hosted git repository. brandonwilliams pushed a commit to branch cassandra-3.11 in repository https://gitbox.apache.org/repos/asf/cassandra.git The following commit(s) were added to refs/heads/cassandra-3.11 by this push: new 00cf31882b Ninja fix my bad merge 00cf31882b is described below commit 00cf31882beaf5e28dc600489f9ea8b69d1803df Author: Brandon Williams AuthorDate: Wed Jul 5 10:07:34 2023 -0500 Ninja fix my bad merge --- .build/dependency-check-suppressions.xml | 4 1 file changed, 4 deletions(-) diff --git a/.build/dependency-check-suppressions.xml b/.build/dependency-check-suppressions.xml index e6fe535381..9b10f433f7 100644 --- a/.build/dependency-check-suppressions.xml +++ b/.build/dependency-check-suppressions.xml @@ -124,9 +124,5 @@ CVE-2022-42004 CVE-2023-35116 - - -^pkg:maven/com\.fasterxml\.jackson\.core/jackson\-databind@.*$ - - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
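The XML element names in the diff above were lost in archiving; only the packageUrl regex for jackson-databind survives. For context, an OWASP dependency-check suppression entry of the kind this "Ninja fix" removes follows the standard suppression-file schema, approximately as below. The notes text is illustrative; only the regex is taken from the diff.

```xml
<!-- Approximate reconstruction of a dependency-check suppression entry;
     element names follow the standard schema, not the literal removed lines. -->
<suppress>
   <notes>CVE-2022-45688 appears to be a false positive for jackson-databind;
          the affected methods are not called.</notes>
   <packageUrl regex="true">^pkg:maven/com\.fasterxml\.jackson\.core/jackson\-databind@.*$</packageUrl>
   <cve>CVE-2022-45688</cve>
</suppress>
```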
[jira] [Updated] (CASSANDRA-17850) Find a way to get FileDescriptor.fd and sun.nio.ch.FileChannelImpl.fd without opening internals
[ https://issues.apache.org/jira/browse/CASSANDRA-17850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ekaterina Dimitrova updated CASSANDRA-17850: Description: With Java 17 if we do not add below to the jvm17 server options: {code:java} --add-opens java.base/sun.nio.ch=ALL-UNNAMED --add-opens java.base/java.io=ALL-UNNAMED{code} we get on startup (considering I comment out the scripted UDFs and apply a few changes to the startup scripts): {code:java} ERROR [ScheduledTasks:1] 2022-08-23 12:29:25,652 JVMStabilityInspector.java:68 - Exception in thread Thread[ScheduledTasks:1,5,ScheduledTasks] java.lang.AssertionError: java.lang.reflect.InaccessibleObjectException: Unable to make field private int java.io.FileDescriptor.fd accessible: module java.base does not "opens java.io" to unnamed module @11d8ae8b at org.apache.cassandra.utils.FBUtilities.getProtectedField(FBUtilities.java:801) at org.apache.cassandra.utils.NativeLibrary.(NativeLibrary.java:84) at org.apache.cassandra.utils.TimeUUID$Generator.hash(TimeUUID.java:496) at org.apache.cassandra.utils.TimeUUID$Generator.makeNode(TimeUUID.java:474) at org.apache.cassandra.utils.TimeUUID$Generator.makeClockSeqAndNode(TimeUUID.java:452) at org.apache.cassandra.utils.TimeUUID$Generator.(TimeUUID.java:368) at org.apache.cassandra.streaming.StreamingState.(StreamingState.java:50) at org.apache.cassandra.streaming.StreamManager.(StreamManager.java:257) at org.apache.cassandra.streaming.StreamManager.(StreamManager.java:58) at org.apache.cassandra.service.StorageService.(StorageService.java:376) at org.apache.cassandra.service.StorageService.(StorageService.java:226) at org.apache.cassandra.locator.DynamicEndpointSnitch.updateScores(DynamicEndpointSnitch.java:274) at org.apache.cassandra.locator.DynamicEndpointSnitch$1.run(DynamicEndpointSnitch.java:91) at org.apache.cassandra.concurrent.ExecutionFailure$1.run(ExecutionFailure.java:133) at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.base/java.lang.Thread.run(Thread.java:833) Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make field private int java.io.FileDescriptor.fd accessible: module java.base does not "opens java.io" to unnamed module @11d8ae8b at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:354) at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297) at java.base/java.lang.reflect.Field.checkCanSetAccessible(Field.java:178) at java.base/java.lang.reflect.Field.setAccessible(Field.java:172) at org.apache.cassandra.utils.FBUtilities.getProtectedField(FBUtilities.java:796) ... 
20 common frames omitted {code} and {code:java} ERROR [ScheduledTasks:1] 2022-08-23 12:31:18,443 JVMStabilityInspector.java:68 - Exception in thread Thread[ScheduledTasks:1,5,ScheduledTasks] java.lang.AssertionError: java.lang.reflect.InaccessibleObjectException: Unable to make field private final java.io.FileDescriptor sun.nio.ch.FileChannelImpl.fd accessible: module java.base does not "opens sun.nio.ch" to unnamed module @4c012563 at org.apache.cassandra.utils.FBUtilities.getProtectedField(FBUtilities.java:801) at org.apache.cassandra.utils.NativeLibrary.(NativeLibrary.java:87) at org.apache.cassandra.utils.TimeUUID$Generator.hash(TimeUUID.java:496) at org.apache.cassandra.utils.TimeUUID$Generator.makeNode(TimeUUID.java:474) at org.apache.cassandra.utils.TimeUUID$Generator.makeClockSeqAndNode(TimeUUID.java:452) at org.apache.cassandra.utils.TimeUUID$Generator.(TimeUUID.java:368) at org.apache.cassandra.streaming.StreamingState.(StreamingState.java:50) at org.apache.cassandra.streaming.StreamManager.(StreamManager.java:257) at org.apache.cassandra.streaming.StreamManager.(StreamManager.java:58) at org.apache.cassandra.service.StorageService.(StorageService.java:376) at org.apache.cassandra.service.StorageService.(StorageService.java:226) at org.apache.cassandra.locator.DynamicEndpointSnitch.updateScores(DynamicEndpointSnitch.java:274) at org.apache.cassandra.locator.DynamicEndpointSnitch$1.run(DynamicEndpointSnitch.java:91) at org.apache.cassandra.concurrent.ExecutionFailure$1.run(ExecutionFailure.java:133) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) at java.base/java.util.concurrent.ScheduledTh
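The failure mode in the stack traces above can be reproduced outside Cassandra: FBUtilities.getProtectedField ultimately calls Field.setAccessible, which JDK 17 rejects for java.base internals unless the module is opened with --add-opens. A minimal standalone probe (class and method names here are illustrative, not Cassandra code):

```java
import java.io.FileDescriptor;
import java.lang.reflect.Field;

// Minimal probe for the failure in the traces above: reflective access to the
// private java.io.FileDescriptor.fd field. On JDK 17 setAccessible throws
// InaccessibleObjectException unless the JVM is started with
// --add-opens java.base/java.io=ALL-UNNAMED; on JDK 8 it succeeds.
public class AddOpensProbe {
    // Returns "accessible" when the private field can be opened, otherwise
    // the simple name of the thrown exception.
    public static String probeFdField() {
        try {
            Field fd = FileDescriptor.class.getDeclaredField("fd");
            fd.setAccessible(true);
            return "accessible";
        } catch (ReflectiveOperationException | RuntimeException e) {
            return e.getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        System.out.println(probeFdField());
    }
}
```

Running this with and without the --add-opens flags shows whether a given JDK enforces strong encapsulation, which is exactly what decides whether the jvm17 server options above are needed.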
[jira] [Updated] (CASSANDRA-18650) Upgrade owasp to 8.3.1
[ https://issues.apache.org/jira/browse/CASSANDRA-18650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-18650: - Bug Category: Parent values: Security(12985) Complexity: Low Hanging Fruit Component/s: Build Discovered By: User Report Fix Version/s: 3.0.x 3.11.x 4.0.x 4.1.x 5.x Severity: Normal Assignee: Brandon Williams Status: Open (was: Triage Needed) > Upgrade owasp to 8.3.1 > -- > > Key: CASSANDRA-18650 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18650 > Project: Cassandra > Issue Type: Bug > Components: Build >Reporter: Brandon Williams >Assignee: Brandon Williams >Priority: Normal > Fix For: 3.0.x, 3.11.x, 4.0.x, 4.1.x, 5.x > > > I believe I'm fighting with an issue this upgrade solves, but also I cannot > think of any reason to not run the latest version. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Created] (CASSANDRA-18650) Upgrade owasp to 8.3.1
Brandon Williams created CASSANDRA-18650: Summary: Upgrade owasp to 8.3.1 Key: CASSANDRA-18650 URL: https://issues.apache.org/jira/browse/CASSANDRA-18650 Project: Cassandra Issue Type: Bug Reporter: Brandon Williams I believe I'm fighting with an issue this upgrade solves, but also I cannot think of any reason to not run the latest version. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-18467) Update generate-idea-files for J17
[ https://issues.apache.org/jira/browse/CASSANDRA-18467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ekaterina Dimitrova updated CASSANDRA-18467: Fix Version/s: 5.0 (was: 5.x) Source Control Link: https://github.com/apache/cassandra/commit/ac259438763ed96c402bab771567df59d18ad280 Resolution: Fixed Status: Resolved (was: Ready to Commit) > Update generate-idea-files for J17 > -- > > Key: CASSANDRA-18467 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18467 > Project: Cassandra > Issue Type: Task > Components: Build >Reporter: Ekaterina Dimitrova >Assignee: Jakub Zytka >Priority: Low > Fix For: 5.0 > > Time Spent: 2.5h > Remaining Estimate: 0h > > There was a discussion in CASSANDRA-18258 how to update generate-idea-files. > The final agreement was to create one target to cover both Java 11 and Java > 17. > It will be good to figure out CASSANDRA-18263 and reshuffle arguments and > tasks based on what we decide to use as gc in testing for both Java 11 and > Java 17. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-18467) Update generate-idea-files for J17
[ https://issues.apache.org/jira/browse/CASSANDRA-18467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740242#comment-17740242 ] Ekaterina Dimitrova commented on CASSANDRA-18467: - Committed to https://github.com/apache/cassandra [5735a9ccaa..ac25943876|https://github.com/apache/cassandra/commit/ac259438763ed96c402bab771567df59d18ad280] trunk -> trunk > Update generate-idea-files for J17 > -- > > Key: CASSANDRA-18467 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18467 > Project: Cassandra > Issue Type: Task > Components: Build >Reporter: Ekaterina Dimitrova >Assignee: Jakub Zytka >Priority: Low > Fix For: 5.x > > Time Spent: 2.5h > Remaining Estimate: 0h > > There was a discussion in CASSANDRA-18258 how to update generate-idea-files. > The final agreement was to create one target to cover both Java 11 and Java > 17. > It will be good to figure out CASSANDRA-18263 and reshuffle arguments and > tasks based on what we decide to use as gc in testing for both Java 11 and > Java 17. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[cassandra] branch trunk updated: Make `ant generate-idea-files` support the current JDK
This is an automated email from the ASF dual-hosted git repository. edimitrova pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/cassandra.git The following commit(s) were added to refs/heads/trunk by this push: new ac25943876 Make `ant generate-idea-files` support the current JDK ac25943876 is described below commit ac259438763ed96c402bab771567df59d18ad280 Author: Jakub Zytka AuthorDate: Thu Apr 27 13:33:41 2023 +0200 Make `ant generate-idea-files` support the current JDK `ant generate-idea-files` now supports JDK 8, JDK 11, and JDK 17. To add support for another JDK, the java-jvmargs property must be set for the JDK in question (see how it is done in build.xml for Java 11 and 17). Other minor but notable changes: - test jvmargs are now added to IDEA run configurations - the .idea dir and project iml file are first removed and then recreated during `ant generate-idea-files` patch by Jakub Zytka; reviewed by Mick Semb Wever, Štefan Miklošovič, Ekaterina Dimitrova for CASSANDRA-18467 --- build.xml | 72 ++- 1 file changed, 34 insertions(+), 38 deletions(-) [The XML body of the build.xml diff did not survive extraction; the user-facing messages it echoes are:] "IDE configuration in .idea/ updated for use with JDK${ant.java.version}." "In IntelliJ verify that the SDK is ${ant.java.version}, and its path is valid. This can be verified in 'Project Structure/Project Setting/Project' and 'Project Structure/Platform Setting/SDKs'." - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
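Per the commit message, extending `ant generate-idea-files` to a new JDK means defining the java-jvmargs property for that JDK. A hypothetical build.xml fragment sketching that (the property name comes from the commit message; the condition, JDK version, and example value are placeholders, not the actual build.xml contents):

```xml
<!-- Hypothetical sketch only: define java-jvmargs for a future JDK,
     mirroring what build.xml does for Java 11 and 17. -->
<condition property="java-jvmargs"
           value="--add-opens java.base/java.lang=ALL-UNNAMED">
    <equals arg1="${ant.java.version}" arg2="21"/>
</condition>
```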
[jira] [Commented] (CASSANDRA-18467) Update generate-idea-files for J17
[ https://issues.apache.org/jira/browse/CASSANDRA-18467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740229#comment-17740229 ] Ekaterina Dimitrova commented on CASSANDRA-18467: - Thanks, everyone; I will commit this one in a bit. > Update generate-idea-files for J17 > -- > > Key: CASSANDRA-18467 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18467 > Project: Cassandra > Issue Type: Task > Components: Build >Reporter: Ekaterina Dimitrova >Assignee: Jakub Zytka >Priority: Low > Fix For: 5.x > > Time Spent: 2.5h > Remaining Estimate: 0h > > There was a discussion in CASSANDRA-18258 how to update generate-idea-files. > The final agreement was to create one target to cover both Java 11 and Java > 17. > It will be good to figure out CASSANDRA-18263 and reshuffle arguments and > tasks based on what we decide to use as gc in testing for both Java 11 and > Java 17. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-17992) Upgrade Netty on 5.0
[ https://issues.apache.org/jira/browse/CASSANDRA-17992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740226#comment-17740226 ] Ekaterina Dimitrova commented on CASSANDRA-17992: - We often add exclusions for dependencies that will not be needed, like [here|https://github.com/apache/cassandra/blob/trunk/.build/parent-pom-template.xml#L273-L298], for example. Can we do that for some of those not required here (when it is confirmed that some of them are not needed, I might be wrong)? > Upgrade Netty on 5.0 > > > Key: CASSANDRA-17992 > URL: https://issues.apache.org/jira/browse/CASSANDRA-17992 > Project: Cassandra > Issue Type: Task > Components: Dependencies >Reporter: Ekaterina Dimitrova >Assignee: Jacek Lewandowski >Priority: Low > Fix For: 5.x > > > I haven't been able to identify from the Netty docs which was the lowest > version where JDK17 was added but we are about 40 versions behind in netty 4 > so I suspect we better update. > -We need to consider there was an issue with class cast exceptions when > building with JDK17 with newer versions of netty (the newest available in > March 2022). For the record, we didn't see those when running CI on JDK8 and > JDK11. We also need to carefully revise the changes between the netty > versions. -->- CASSANDRA-18180 > Upgrading will cover also a fix in netty that was discussed in > [this|https://the-asf.slack.com/archives/CK23JSY2K/p1665567660202989] ASF > Slack thread. > CC [~benedict] , [~aleksey] -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-18647) CASTing a float to decimal adds wrong digits
[ https://issues.apache.org/jira/browse/CASSANDRA-18647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-18647: - Bug Category: Parent values: Correctness(12982) Complexity: Normal Component/s: CQL/Semantics Discovered By: User Report Fix Version/s: 5.x Severity: Normal Status: Open (was: Triage Needed) > CASTing a float to decimal adds wrong digits > > > Key: CASSANDRA-18647 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18647 > Project: Cassandra > Issue Type: Bug > Components: CQL/Semantics >Reporter: Nadav Har'El >Priority: Normal > Fix For: 5.x > > > If I create a table with a *float* (32-bit) column, and cast it to the > *decimal* type, the casting wrongly passes through the double (64-bit) type > and picks up extra, wrong, digits. For example, if we have a column e of type > "float", and run > INSERT INTO tbl (p, e) VALUES (1, 5.2) > SELECT CAST(e AS decimal) FROM tbl WHERE p=1 > The result is the "decimal" value 5.19809265137, with all those extra > wrong digits. It would have been better to get back the decimal value 5.2, > with only two significant digits. > It appears that this happens because Cassandra's implementation first > converts the 32-bit float into a 64-bit double, and only then converts that - > with all the silly extra digits it picked up in the first conversion - into a > "decimal" value. > Contrast this with CAST(e AS text) which works correctly - it returns the > string "5.2" - only the actual digits of the 32-bit floating point value are > converted to the string, without inventing additional digits in the process. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
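The float-widening path this report describes can be reproduced outside Cassandra with plain `java.math.BigDecimal` (a standalone sketch, not Cassandra's actual cast code):

```java
import java.math.BigDecimal;

public class FloatToDecimalDemo {
    public static void main(String[] args) {
        float e = 5.2f;

        // BigDecimal.valueOf takes a double, so 5.2f is silently widened
        // and picks up the extra digits the report complains about.
        BigDecimal viaDouble = BigDecimal.valueOf(e);
        System.out.println(viaDouble); // 5.199999809265137

        // Going through the float's own string form keeps only the digits
        // needed to round-trip the 32-bit value.
        BigDecimal viaString = new BigDecimal(Float.toString(e));
        System.out.println(viaString); // 5.2
    }
}
```

This also mirrors why CAST(e AS text) behaves correctly: Float.toString emits only the digits of the 32-bit value.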
[jira] [Updated] (CASSANDRA-18615) CREATE INDEX Modifications for Initial Release of SAI
[ https://issues.apache.org/jira/browse/CASSANDRA-18615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andres de la Peña updated CASSANDRA-18615: -- Status: Review In Progress (was: Needs Committer) > CREATE INDEX Modifications for Initial Release of SAI > - > > Key: CASSANDRA-18615 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18615 > Project: Cassandra > Issue Type: Improvement > Components: CQL/Syntax, Feature/SAI >Reporter: Caleb Rackliffe >Assignee: Caleb Rackliffe >Priority: Normal > Fix For: 5.x > > Time Spent: 5.5h > Remaining Estimate: 0h > > After a lengthy discussion on the dev list, the community seems to have > arrived at the following list of TODOs before we release SAI in 5.0: > 1.) CREATE INDEX should be expanded to support {{USING … WITH OPTIONS…}} > Essentially, we should be able to do something like {{CREATE INDEX ON tbl(v) > USING ’sai’ WITH OPTIONS = ...}} and {{CREATE INDEX ON tbl(v) USING > ‘cassandra’}} as a more specific/complete way to emulate the current behavior > of {{CREATE INDEX}}. > 2.) Allow operators to configure, in the YAML, a.) whether an index > implementation must be specified w/ USING and {{CREATE INDEX}} and b.) what > the default implementation will be, if {{USING}} isn’t required. > 3.) The defaults we ship w/ will avoid breaking existing {{CREATE INDEX}} > usage. (i.e. A default is allowed, and that default will remain ‘cassandra’, > or the legacy 2i) > With all this in place, users should be able create SAI indexes w/ the > simplest possible syntax, no defaults will change, and operators will have > the ability to change defaults to favor SAI whenever they like. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
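Concretely, the syntax from point 1 of the description would look like this in CQL (the table, column, and option key shown are illustrative placeholders, not part of the ticket):

```sql
-- SAI index with implementation-specific options (hypothetical option key)
CREATE INDEX ON tbl(v) USING 'sai' WITH OPTIONS = {'case_sensitive': 'false'};

-- Explicitly request the legacy secondary index implementation
CREATE INDEX ON tbl(v) USING 'cassandra';
```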
[jira] [Updated] (CASSANDRA-18479) Add basic text tokenisation and analysis
[ https://issues.apache.org/jira/browse/CASSANDRA-18479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andres de la Peña updated CASSANDRA-18479: -- Status: Review In Progress (was: Patch Available) > Add basic text tokenisation and analysis > > > Key: CASSANDRA-18479 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18479 > Project: Cassandra > Issue Type: Improvement > Components: Feature/2i Index >Reporter: Mike Adamson >Assignee: Mike Adamson >Priority: Normal > > [CASSANDRA-16092|https://issues.apache.org/jira/browse/CASSANDRA-16092] > removed support for any text analysis or tokenisation. > SAI currently supports the following analyzers: > * normalize - text normalization using NFC normalization > * case_sensitive - allow control over the case sensitivity of an index > * ascii - allow ascii folding of text -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-18649) netty-all vulnerability: CVE-2023-34462
[ https://issues.apache.org/jira/browse/CASSANDRA-18649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-18649: - Description: This is failing owasp: https://nvd.nist.gov/vuln/detail/CVE-2023-34462 {quote} The `SniHandler` can allocate up to 16MB of heap for each channel during the TLS handshake. When the handler or the channel does not have an idle timeout, it can be used to make a TCP server using the `SniHandler` to allocate 16MB of heap. {quote} was: This is failing owasp: https://nvd.nist.gov/vuln/detail/CVE-2023-34462 The `SniHandler` can allocate up to 16MB of heap for each channel during the TLS handshake. When the handler or the channel does not have an idle timeout, it can be used to make a TCP server using the `SniHandler` to allocate 16MB of heap. > netty-all vulnerability: CVE-2023-34462 > --- > > Key: CASSANDRA-18649 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18649 > Project: Cassandra > Issue Type: Bug > Components: Feature/Encryption >Reporter: Brandon Williams >Assignee: Brandon Williams >Priority: Normal > Fix For: 3.0.x, 3.11.x, 4.0.x, 4.1.x, 5.x > > > This is failing owasp: > https://nvd.nist.gov/vuln/detail/CVE-2023-34462 > {quote} > The `SniHandler` can allocate up to 16MB of heap for each channel during the > TLS handshake. When the handler or the channel does not have an idle timeout, > it can be used to make a TCP server using the `SniHandler` to allocate 16MB > of heap. > {quote} -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-18649) netty-all vulnerability: CVE-2023-34462
[ https://issues.apache.org/jira/browse/CASSANDRA-18649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-18649: - Bug Category: Parent values: Degradation(12984)Level 1 values: Resource Management(12995) Complexity: Normal Component/s: Feature/Encryption Discovered By: User Report Fix Version/s: 3.0.x 3.11.x 4.0.x 4.1.x 5.x Severity: Normal Assignee: Brandon Williams Status: Open (was: Triage Needed) > netty-all vulnerability: CVE-2023-34462 > --- > > Key: CASSANDRA-18649 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18649 > Project: Cassandra > Issue Type: Bug > Components: Feature/Encryption >Reporter: Brandon Williams >Assignee: Brandon Williams >Priority: Normal > Fix For: 3.0.x, 3.11.x, 4.0.x, 4.1.x, 5.x > > > This is failing owasp: > https://nvd.nist.gov/vuln/detail/CVE-2023-34462 > > The `SniHandler` can allocate up to 16MB of heap for each channel during the > TLS handshake. When the handler or the channel does not have an idle timeout, > it can be used to make a TCP server using the `SniHandler` to allocate 16MB > of heap. > -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Created] (CASSANDRA-18649) netty-all vulnerability: CVE-2023-34462
Brandon Williams created CASSANDRA-18649: Summary: netty-all vulnerability: CVE-2023-34462 Key: CASSANDRA-18649 URL: https://issues.apache.org/jira/browse/CASSANDRA-18649 Project: Cassandra Issue Type: Bug Reporter: Brandon Williams This is failing owasp: https://nvd.nist.gov/vuln/detail/CVE-2023-34462 The `SniHandler` can allocate up to 16MB of heap for each channel during the TLS handshake. When the handler or the channel does not have an idle timeout, it can be used to make a TCP server using the `SniHandler` to allocate 16MB of heap. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-18515) Optimize Initial Concurrency Selection for Range Read Algorithm During SAI Queries
[ https://issues.apache.org/jira/browse/CASSANDRA-18515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740156#comment-17740156 ] Andres de la Peña commented on CASSANDRA-18515: --- [~mike_tr_adamson] if you set {{BASE_BRANCH=cep-7-sai}} on {{.circleci/generate_11_and_17.sh}} and run this script with {{-p}} it will create a job to repeatedly run the new dtest. > Optimize Initial Concurrency Selection for Range Read Algorithm During SAI > Queries > -- > > Key: CASSANDRA-18515 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18515 > Project: Cassandra > Issue Type: Improvement > Components: Feature/2i Index >Reporter: Mike Adamson >Assignee: Mike Adamson >Priority: Normal > Time Spent: 2h 40m > Remaining Estimate: 0h > > The range read algorithm relies on the Index API’s notion of estimated result > rows to decide how many replicas to contact in parallel during its first > round of requests. The more results expected from a replica for a token > range, the fewer replicas the range read will initially try to contact. Like > SASI, SAI floors that estimate to a huge negative number to make sure it’s > selected over other indexes, and this floors the concurrency factor to 1. The > actual formula looks like this: > {code:java} > // resultsPerRange, from SAI, is a giant negative number > concurrencyFactor = Math.max(1, Math.min(ranges.rangeCount(), (int) > Math.ceil(command.limits().count() / resultsPerRange))); > {code} > Although that concurrency factor is updated as actual results stream in, only > sending a single range request to a single replica in every case for SAI is > not ideal. For example, assume I have a 3 node cluster and a keyspace at > RF=1, with 10 rows spread across the 3 nodes, without vnodes. Issuing a query > that matches all 10 rows with a LIMIT of 10 will make 2 or 3 serial range > requests from the coordinator, one to each of the 3 nodes. 
> This can be fixed by allowing indexes to bypass the initial concurrency > calculation allowing SAI queries to contact the entire ring in a single round > of queries, or at worst the minimum number of rounds as bounded by the > existing statutory maximum ranges per round. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
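Plugging the description's numbers into the formula above shows how the factor floors to 1 (a standalone sketch; the negative estimate is a stand-in for SAI's sentinel, not the actual constant):

```java
public class ConcurrencyFactorDemo {
    public static void main(String[] args) {
        int rangeCount = 3;  // 3-node cluster, RF=1, no vnodes
        int limit = 10;      // LIMIT 10
        // Stand-in for SAI's "giant negative number" of estimated result rows.
        double resultsPerRange = -Long.MAX_VALUE;

        int concurrencyFactor = Math.max(1,
                Math.min(rangeCount, (int) Math.ceil(limit / resultsPerRange)));

        // ceil(10 / -huge) is -0.0, the int cast yields 0, min(3, 0) is 0,
        // and max(1, 0) floors the factor to 1: a single range request per round.
        System.out.println(concurrencyFactor); // 1
    }
}
```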
[jira] [Updated] (CASSANDRA-18479) Add basic text tokenisation and analysis
[ https://issues.apache.org/jira/browse/CASSANDRA-18479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike Adamson updated CASSANDRA-18479: - Test and Documentation Plan: The current circle-ci test run for this patch is here: https://app.circleci.com/pipelines/github/mike-tr-adamson/cassandra/179/workflows/0fb47446-7a4b-46c8-8b0d-e85057c26308 (was: The current circle-ci test run for this patch is here: https://app.circleci.com/pipelines/github/mike-tr-adamson/cassandra/152/workflows/00e8deeb-fe42-4076-be3d-d2da83d454dd) > Add basic text tokenisation and analysis > > > Key: CASSANDRA-18479 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18479 > Project: Cassandra > Issue Type: Improvement > Components: Feature/2i Index >Reporter: Mike Adamson >Assignee: Mike Adamson >Priority: Normal > > [CASSANDRA-16092|https://issues.apache.org/jira/browse/CASSANDRA-16092] > removed support for any text analysis or tokenisation. > SAI currently supports the following analyzers: > * normalize - text normalization using NFC normalization > * case_sensitive - allow control over the case sensitivity of an index > * ascii - allow ascii folding of text -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-18515) Optimize Initial Concurrency Selection for Range Read Algorithm During SAI Queries
[ https://issues.apache.org/jira/browse/CASSANDRA-18515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike Adamson updated CASSANDRA-18515: - Test and Documentation Plan: The latest circle-ci test run is here: https://app.circleci.com/pipelines/github/mike-tr-adamson/cassandra/178/workflows/938e05f9-339a-4d9d-bfbb-56aaf6cd8a5a (was: The latest circle-ci test run is here: https://app.circleci.com/pipelines/github/mike-tr-adamson/cassandra/154/workflows/b22cf67c-1e71-4c93-bdc8-c2c1ab6c3773) > Optimize Initial Concurrency Selection for Range Read Algorithm During SAI > Queries > -- > > Key: CASSANDRA-18515 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18515 > Project: Cassandra > Issue Type: Improvement > Components: Feature/2i Index >Reporter: Mike Adamson >Assignee: Mike Adamson >Priority: Normal > Time Spent: 2.5h > Remaining Estimate: 0h > > The range read algorithm relies on the Index API’s notion of estimated result > rows to decide how many replicas to contact in parallel during its first > round of requests. The more results expected from a replica for a token > range, the fewer replicas the range read will initially try to contact. Like > SASI, SAI floors that estimate to a huge negative number to make sure it’s > selected over other indexes, and this floors the concurrency factor to 1. The > actual formula looks like this: > {code:java} > // resultsPerRange, from SAI, is a giant negative number > concurrencyFactor = Math.max(1, Math.min(ranges.rangeCount(), (int) > Math.ceil(command.limits().count() / resultsPerRange))); > {code} > Although that concurrency factor is updated as actual results stream in, only > sending a single range request to a single replica in every case for SAI is > not ideal. For example, assume I have a 3 node cluster and a keyspace at > RF=1, with 10 rows spread across the 3 nodes, without vnodes. 
Issuing a query > that matches all 10 rows with a LIMIT of 10 will make 2 or 3 serial range > requests from the coordinator, one to each of the 3 nodes. > This can be fixed by allowing indexes to bypass the initial concurrency > calculation allowing SAI queries to contact the entire ring in a single round > of queries, or at worst the minimum number of rounds as bounded by the > existing statutory maximum ranges per round. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-18479) Add basic text tokenisation and analysis
[ https://issues.apache.org/jira/browse/CASSANDRA-18479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike Adamson updated CASSANDRA-18479: - Source Control Link: https://github.com/apache/cassandra/pull/2465 (was: https://github.com/maedhroz/cassandra/pull/12) > Add basic text tokenisation and analysis > > > Key: CASSANDRA-18479 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18479 > Project: Cassandra > Issue Type: Improvement > Components: Feature/2i Index >Reporter: Mike Adamson >Assignee: Mike Adamson >Priority: Normal > > [CASSANDRA-16092|https://issues.apache.org/jira/browse/CASSANDRA-16092] > removed support for any text analysis or tokenisation. > SAI currently supports the following analyzers: > * normalize - text normalization using NFC normalization > * case_sensitive - allow control over the case sensitivity of an index > * ascii - allow ascii folding of text -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-18515) Optimize Initial Concurrency Selection for Range Read Algorithm During SAI Queries
[ https://issues.apache.org/jira/browse/CASSANDRA-18515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740144#comment-17740144 ] Mike Adamson commented on CASSANDRA-18515: -- [~bereng] I have just added a commit for the PAID ci and have a run going. I will post results when it is complete. > Optimize Initial Concurrency Selection for Range Read Algorithm During SAI > Queries > -- > > Key: CASSANDRA-18515 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18515 > Project: Cassandra > Issue Type: Improvement > Components: Feature/2i Index >Reporter: Mike Adamson >Assignee: Mike Adamson >Priority: Normal > Time Spent: 1h 50m > Remaining Estimate: 0h > > The range read algorithm relies on the Index API’s notion of estimated result > rows to decide how many replicas to contact in parallel during its first > round of requests. The more results expected from a replica for a token > range, the fewer replicas the range read will initially try to contact. Like > SASI, SAI floors that estimate to a huge negative number to make sure it’s > selected over other indexes, and this floors the concurrency factor to 1. The > actual formula looks like this: > {code:java} > // resultsPerRange, from SAI, is a giant negative number > concurrencyFactor = Math.max(1, Math.min(ranges.rangeCount(), (int) > Math.ceil(command.limits().count() / resultsPerRange))); > {code} > Although that concurrency factor is updated as actual results stream in, only > sending a single range request to a single replica in every case for SAI is > not ideal. For example, assume I have a 3 node cluster and a keyspace at > RF=1, with 10 rows spread across the 3 nodes, without vnodes. Issuing a query > that matches all 10 rows with a LIMIT of 10 will make 2 or 3 serial range > requests from the coordinator, one to each of the 3 nodes. 
> This can be fixed by allowing indexes to bypass the initial concurrency > calculation allowing SAI queries to contact the entire ring in a single round > of queries, or at worst the minimum number of rounds as bounded by the > existing statutory maximum ranges per round. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-18490) Add checksum validation to all index components on startup, full rebuild and streaming
[ https://issues.apache.org/jira/browse/CASSANDRA-18490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Berenguer Blasi updated CASSANDRA-18490: Status: Review In Progress (was: Patch Available) > Add checksum validation to all index components on startup, full rebuild and > streaming > -- > > Key: CASSANDRA-18490 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18490 > Project: Cassandra > Issue Type: Improvement > Components: Feature/2i Index >Reporter: Mike Adamson >Assignee: Piotr Kolaczkowski >Priority: Normal > Fix For: 5.x > > > The SAI code currently does not checksum validate per-column index data files > at any point. It does checksum validate per-sstable components after a full > rebuild and it checksum validates the per-column metadata on opening. > We should checksum validate all index components on startup, full rebuild and > streaming. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-18613) Add support for vectors on UDFs
[ https://issues.apache.org/jira/browse/CASSANDRA-18613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Berenguer Blasi updated CASSANDRA-18613: Reviewers: Berenguer Blasi, Maxwell Guo (was: Maxwell Guo) > Add support for vectors on UDFs > --- > > Key: CASSANDRA-18613 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18613 > Project: Cassandra > Issue Type: New Feature > Components: Cluster/Schema >Reporter: Andres de la Peña >Assignee: Andres de la Peña >Priority: Normal > Fix For: 5.x > > Time Spent: 2h > Remaining Estimate: 0h > > CASSANDRA-18504 will add a new vector type, but [it won't be supported on > UDFs|https://github.com/apache/cassandra/blob/5027e688da006e5d5bf9bfdf4719caddbf85986a/test/unit/org/apache/cassandra/cql3/validation/operations/CQLVectorTest.java#L248-L271]. > The goal of this ticket is to add that support. > This will require adding a new {{o.a.c.cql3.functions.types.TypeCodec}} for > vectors. Those codecs are mostly duplicates of the codecs on the Java driver. > They are used for UDFs instead of the regular {{AbstractType}} to prevent > pulling too many internal dependencies. The driver's vector codec has > recently been added by > [JAVA-3060|https://datastax-oss.atlassian.net/browse/JAVA-3060]. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-18515) Optimize Initial Concurrency Selection for Range Read Algorithm During SAI Queries
[ https://issues.apache.org/jira/browse/CASSANDRA-18515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740069#comment-17740069 ] Berenguer Blasi commented on CASSANDRA-18515: - The CI run seems outdated and not PAID hence the failures? > Optimize Initial Concurrency Selection for Range Read Algorithm During SAI > Queries > -- > > Key: CASSANDRA-18515 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18515 > Project: Cassandra > Issue Type: Improvement > Components: Feature/2i Index >Reporter: Mike Adamson >Assignee: Mike Adamson >Priority: Normal > Time Spent: 1h 20m > Remaining Estimate: 0h > > The range read algorithm relies on the Index API’s notion of estimated result > rows to decide how many replicas to contact in parallel during its first > round of requests. The more results expected from a replica for a token > range, the fewer replicas the range read will initially try to contact. Like > SASI, SAI floors that estimate to a huge negative number to make sure it’s > selected over other indexes, and this floors the concurrency factor to 1. The > actual formula looks like this: > {code:java} > // resultsPerRange, from SAI, is a giant negative number > concurrencyFactor = Math.max(1, Math.min(ranges.rangeCount(), (int) > Math.ceil(command.limits().count() / resultsPerRange))); > {code} > Although that concurrency factor is updated as actual results stream in, only > sending a single range request to a single replica in every case for SAI is > not ideal. For example, assume I have a 3 node cluster and a keyspace at > RF=1, with 10 rows spread across the 3 nodes, without vnodes. Issuing a query > that matches all 10 rows with a LIMIT of 10 will make 2 or 3 serial range > requests from the coordinator, one to each of the 3 nodes. 
> This can be fixed by allowing indexes to bypass the initial concurrency > calculation allowing SAI queries to contact the entire ring in a single round > of queries, or at worst the minimum number of rounds as bounded by the > existing statutory maximum ranges per round. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Comment Edited] (CASSANDRA-18647) CASTing a float to decimal adds wrong digits
[ https://issues.apache.org/jira/browse/CASSANDRA-18647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740047#comment-17740047 ] Nadav Har'El edited comment on CASSANDRA-18647 at 7/5/23 7:35 AM: -- By the way, there is a unit test - testNumericCastsInSelectionClause in test/unit/org/apache/cassandra/cql3/functions/CastFctsTest.java - that should have caught this bug. The problem is that it compares the result of the cast not to any specific value but to BigDecimal.valueOf(5.2F), and this BigDecimal.valueOf(float) is exactly the same function that the Cassandra implementation uses for this purpose, so the implementation and the test have the same bug and the test doesn't verify anything. I found the cause of this bug. It turns out that BigDecimal does *not* have a float overload, only a double. The Java documentation says that: {quote}valueOf(double val) Translates a double into a BigDecimal, using the double's canonical string representation provided by the Double.toString(double) method. {quote} So the solution of how to turn a float into a Decimal is easy - just use *Float.toString(float)* and then construct a BigDecimal using that string - do *not* use BigDecimal.valueOf(double) on a float. So it seems the fix would be a two-line patch to getDecimalConversionFunction() in src/java/org/apache/cassandra/cql3/functions/CastFcts.java to do that. And also fix the test, of course. was (Author: nyh): By the way, there is a unit test - testNumericCastsInSelectionClause in test/unit/org/apache/cassandra/cql3/functions/CastFctsTest.java - that should have caught this bug. The problem is that it compares the result of the cast not to any specific value but to BigDecimal.valueOf(5.2F), and this BigDecimal.valueOf(float) is apparently the same function that the Cassandra implementation uses for this purpose, so if the implementation has a bug the test doesn't verify anything. I know the cause of this bug. 
It turns out that BigDecimal does *not* have a float overload, only a double. The Java documentation says that: {quote}valueOf(double val) Translates a double into a BigDecimal, using the double's canonical string representation provided by the Double.toString(double) method. {quote} So the solution of how to turn a float into a Decimal is easy - just use *Float.toString(float)* and then construct a BigDecimal using that string - do *not* use BigDecimal.valueOf(double) on a float. So it seems the fix would be a two-line patch to getDecimalConversionFunction() in src/java/org/apache/cassandra/cql3/functions/CastFcts.java to do that. And also fix the test, of course. > CASTing a float to decimal adds wrong digits > > > Key: CASSANDRA-18647 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18647 > Project: Cassandra > Issue Type: Bug >Reporter: Nadav Har'El >Priority: Normal > > If I create a table with a *float* (32-bit) column, and cast it to the > *decimal* type, the casting wrongly passes through the double (64-bit) type > and picks up extra, wrong, digits. For example, if we have a column e of type > "float", and run > INSERT INTO tbl (p, e) VALUES (1, 5.2) > SELECT CAST(e AS decimal) FROM tbl WHERE p=1 > The result is the "decimal" value 5.19809265137, with all those extra > wrong digits. It would have been better to get back the decimal value 5.2, > with only two significant digits. > It appears that this happens because Cassandra's implementation first > converts the 32-bit float into a 64-bit double, and only then converts that - > with all the silly extra digits it picked up in the first conversion - into a > "decimal" value. > Contrast this with CAST(e AS text) which works correctly - it returns the > string "5.2" - only the actual digits of the 32-bit floating point value are > converted to the string, without inventing additional digits in the process. 
-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Comment Edited] (CASSANDRA-18647) CASTing a float to decimal adds wrong digits

[ https://issues.apache.org/jira/browse/CASSANDRA-18647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740047#comment-17740047 ] Nadav Har'El edited comment on CASSANDRA-18647 at 7/5/23 7:22 AM:

By the way, there is a unit test - testNumericCastsInSelectionClause in test/unit/org/apache/cassandra/cql3/functions/CastFctsTest.java - that should have caught this bug. The problem is that it compares the result of the cast not to any specific value but to BigDecimal.valueOf(5.2F), and BigDecimal.valueOf(float) is apparently the same function that the Cassandra implementation uses for this purpose, so if the implementation has a bug the test doesn't verify anything.

I think I know the cause of this bug. It turns out that BigDecimal.valueOf does *not* have a float overload, only a double one. The Java documentation says:

{quote}valueOf(double val) Translates a double into a BigDecimal, using the double's canonical string representation provided by the Double.toString(double) method.{quote}

So the solution for turning a float into a decimal is easy - use *Float.toString(float)* and construct a BigDecimal from that string; do *not* call BigDecimal.valueOf(double) on a float. The fix would be a two-line patch to getDecimalConversionFunction() in src/java/org/apache/cassandra/cql3/functions/CastFcts.java to do that. And also fix the test, of course.

> CASTing a float to decimal adds wrong digits
>
> Key: CASSANDRA-18647
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18647
> Project: Cassandra
> Issue Type: Bug
> Reporter: Nadav Har'El
> Priority: Normal
>
> If I create a table with a *float* (32-bit) column and cast it to the *decimal* type, the cast wrongly passes through the double (64-bit) type and picks up extra, wrong digits. For example, if we have a column e of type "float", and run
>
> INSERT INTO tbl (p, e) VALUES (1, 5.2)
> SELECT CAST(e AS decimal) FROM tbl WHERE p=1
>
> the result is the "decimal" value 5.199999809265137, with all those extra wrong digits. It would have been better to get back the decimal value 5.2, with only two significant digits.
>
> It appears that this happens because Cassandra's implementation first converts the 32-bit float into a 64-bit double, and only then converts that - with all the extra digits it picked up in the first conversion - into a "decimal" value.
>
> Contrast this with CAST(e AS text), which works correctly - it returns the string "5.2": only the actual digits of the 32-bit floating-point value are converted to the string, without inventing additional digits in the process.

-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
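The widening described in the comment is easy to reproduce outside Cassandra. The following is a minimal standalone sketch (not Cassandra code) contrasting BigDecimal.valueOf(double), which silently widens a float argument, with constructing the BigDecimal from Float.toString, as the comment suggests:

```java
import java.math.BigDecimal;

public class FloatToDecimal {
    public static void main(String[] args) {
        float f = 5.2f;

        // Buggy path: the float is widened to double before valueOf(double)
        // runs, so Double.toString sees the extra digits of the widened value.
        System.out.println(BigDecimal.valueOf(f));              // 5.199999809265137

        // Suggested fix: render the float with Float.toString first, then
        // build the BigDecimal from that shortest float representation.
        System.out.println(new BigDecimal(Float.toString(f)));  // 5.2
    }
}
```

This also shows why the test gives no coverage: asserting against BigDecimal.valueOf(5.2F) compares the implementation to itself, so the expected value must instead be built via Float.toString (or a literal string).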