[jira] [Commented] (CASSJAVA-89) Using deprecated features in latest java driver

2025-04-18 Thread Michael Karsten (Jira)


[ https://issues.apache.org/jira/browse/CASSJAVA-89?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17945721#comment-17945721 ]

Michael Karsten commented on CASSJAVA-89:
-

[~andrew.tolbert] let me know your thoughts on my proposal above, and how you'd 
like to collaborate. I tried joining the Cassandra Slack channel, but got the 
runaround in the Apache docs because I don't have an apache.org email address.

> Using deprecated features in latest java driver
> ---
>
> Key: CASSJAVA-89
> URL: https://issues.apache.org/jira/browse/CASSJAVA-89
> Project: Apache Cassandra Java driver
>  Issue Type: Bug
>Reporter: Michael Karsten
>Assignee: Michael Karsten
>Priority: Normal
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
>  
> The latest version of the java driver, 4.19.0, uses the deprecated LZ4 
> compression setting `chunk_length_kb` when it should use 
> `chunk_length_in_kb`. This works fine with Cassandra 4.1 but will fail with 
> Cassandra 5.0.
>  
> In RelationOptions.java:
> {code:java}
> @NonNull
>   @CheckReturnValue
>   default SelfT withCompression(
>       @NonNull String compressionAlgorithmName, int chunkLengthKB, double 
> crcCheckChance) {
>     return withOption(
>         "compression",
>         ImmutableMap.of(
>             "class",
>             compressionAlgorithmName,
>             "chunk_length_kb",
>             chunkLengthKB,
>             "crc_check_chance",
>             crcCheckChance));
>   }{code}
>  
>  
> How to reproduce:
>  
> {code:java}
> client.execute(SchemaBuilder.createTable("new_table")
> .ifNotExists()
> .withPartitionKey("id", DataTypes.BIGINT)
> .withColumn("some_column", DataTypes.DOUBLE)
> .withLZ4Compression(64, 1.0)
> .build()) {code}
> Stack trace:
> {code:java}
> com.datastax.oss.driver.api.core.servererrors.InvalidConfigurationInQueryException:
>  Unknown compression options chunk_length_kb
> at 
> com.datastax.oss.driver.api.core.servererrors.InvalidConfigurationInQueryException.copy(InvalidConfigurationInQueryException.java:54)
> at 
> com.datastax.oss.driver.internal.core.util.concurrent.CompletableFutures.getUninterruptibly(CompletableFutures.java:151)
> at 
> com.datastax.oss.driver.internal.core.cql.CqlRequestSyncProcessor.process(CqlRequestSyncProcessor.java:55)
> at 
> com.datastax.oss.driver.internal.core.cql.CqlRequestSyncProcessor.process(CqlRequestSyncProcessor.java:32)
> at 
> com.datastax.oss.driver.internal.core.session.DefaultSession.execute(DefaultSession.java:234)
> at 
> com.datastax.oss.driver.api.core.cql.SyncCqlSession.execute(SyncCqlSession.java:56){code}
>  






[jira] [Commented] (CASSJAVA-89) Using deprecated features in latest java driver

2025-04-10 Thread Michael Karsten (Jira)


[ https://issues.apache.org/jira/browse/CASSJAVA-89?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17941670#comment-17941670 ]

Michael Karsten commented on CASSJAVA-89:
-

There is always the option to use OptionProvider::withOptions to pass these 
deprecated options directly. That's not to say we shouldn't try to support 
backward compatibility, but there are workarounds for users still on older 
versions of C*. Do you have any ideas or is there similar work elsewhere in the 
driver we could leverage?
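
For example, something along these lines should work as a stopgap against C* 5.0. This is only a rough, untested sketch using the singular withOption(String, Object) shown in the RelationOptions snippet below (not the withOptions variant), with option names that newer Cassandra expects:
{code:java}
// Sketch of a workaround: bypass withLZ4Compression(...) and set the
// compression map (and the now top-level crc_check_chance) via withOption.
client.execute(SchemaBuilder.createTable("new_table")
    .ifNotExists()
    .withPartitionKey("id", DataTypes.BIGINT)
    .withColumn("some_column", DataTypes.DOUBLE)
    .withOption("compression", ImmutableMap.of(
        "class", "org.apache.cassandra.io.compress.LZ4Compressor",
        "chunk_length_in_kb", 64))          // option name expected by C* 5.0
    .withOption("crc_check_chance", 1.0)    // top-level table option on newer C*
    .build());
{code}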

Agreed on crc_check_chance, good catch. I will look into NoOpCompressor.

One could argue that adding Zstd methods is outside the scope of this ticket, 
especially because it can be set with
{code:java}
.withCompression("org.apache.cassandra.io.compress.ZstdCompressor"){code}
However, that only uses the default compression_level, so it could be a good 
addition. I've had very positive experiences using Zstd over LZ4 on large 
tables.
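
If a non-default level is needed in the meantime, a raw-option sketch like the following should do it. It assumes withOption as above; compression_level is ZstdCompressor's option name, and 3 is, I believe, its default:
{code:java}
// Sketch: Zstd with an explicit compression_level via the raw option map.
.withOption("compression", ImmutableMap.of(
    "class", "org.apache.cassandra.io.compress.ZstdCompressor",
    "chunk_length_in_kb", 64,
    "compression_level", 3))  // 3 is believed to be ZstdCompressor's default level
{code}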

Collaborating sounds good to me.




[jira] [Commented] (CASSJAVA-89) Using deprecated features in latest java driver

2025-04-07 Thread Thomas Steinmaurer (Jira)


[ https://issues.apache.org/jira/browse/CASSJAVA-89?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17941774#comment-17941774 ]

Thomas Steinmaurer commented on CASSJAVA-89:


[~mkarsten]
{noformat}
I've had very positive experiences using Zstd over LZ4 on large tables
{noformat}

Me too. Up to 35% disk space savings in our use case (time-series data) at 
negligible overhead. If you're curious about getting Zstd for internode network 
compression, please join/vote: 
https://issues.apache.org/jira/browse/CASSANDRA-20488. Sorry, could not resist. 
:)





[jira] [Commented] (CASSJAVA-89) Using deprecated features in latest java driver

2025-04-07 Thread Michael Karsten (Jira)


[ https://issues.apache.org/jira/browse/CASSJAVA-89?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17941733#comment-17941733 ]

Michael Karsten commented on CASSJAVA-89:
-

After looking at this again, perhaps we're overthinking the backward 
compatibility here. Moving crc_check_chance to the top level gave us an 
opportunity to make this easy on ourselves.

What we can do is leave withCompression(String, int, double) alone but mark it 
as deprecated. A new method withCompression(String, int) would use 
chunk_length_in_kb and not reference crc_check_chance at all. C* 5.0 users can 
then migrate to the new method while users on older versions keep using the old 
one.

Other methods would be deprecated the same way. For example, 
withLZ4Compression(int, double) would be deprecated in favor of 
withLZ4Compression(int).
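
Roughly, the new overload could look like the following. This is just a sketch of the proposal, mirroring the existing withCompression method quoted in the description:
{code:java}
// Sketch of the proposed overload: chunk_length_in_kb only, no crc_check_chance.
// (crc_check_chance would be set separately as a top-level table option on newer C*.)
@NonNull
@CheckReturnValue
default SelfT withCompression(@NonNull String compressionAlgorithmName, int chunkLengthInKB) {
  return withOption(
      "compression",
      ImmutableMap.of(
          "class", compressionAlgorithmName,
          "chunk_length_in_kb", chunkLengthInKB));
}
{code}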




[jira] [Commented] (CASSJAVA-89) Using deprecated features in latest java driver

2025-04-07 Thread Andy Tolbert (Jira)


[ https://issues.apache.org/jira/browse/CASSJAVA-89?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17941637#comment-17941637 ]

Andy Tolbert commented on CASSJAVA-89:
--

(y) This is something I've stumbled into as well; I had taken a few notes on 
what I thought needed to change:

RelationOptions in aci-cassandra-driver has a few issues with C* 5.0:
 * chunk_length_kb was deprecated in 3.0 and removed in OSS 5.0 in favor of 
'chunk_length_in_kb'; the PR changes this, but it would be nice to preserve the 
old name somehow for backward compatibility.
 * As [~tsteinmaurer] reported, crc_check_chance was removed in OSS 5.0 in 
favor of being moved to a top-level table option 
([https://github.com/apache/cassandra/commit/b59b832eba014e8d2fc93133cb3db41b509a1c26]),
 so we need to use that.

I think we should do the following:
 * Add the capability for the client to specify the target C* version when 
addressing the schema API. I don't think we can detect it from the cluster, as 
statement builders are traditionally version agnostic, but we can explore our 
options.
 * Update methods that use chunk_length_kb to use chunk_length_in_kb for newer 
versions.
 * Update methods that take a crc_check_chance to pass it as the table-level 
option for newer versions (rough sketch after this list).
 * Update withNoCompression to use NoOpCompressor.
 * Add methods for Zstd compression, as Zstd can save a decent amount (~10%) of 
on-disk data over LZ4 for very minimal cost.
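
For the crc_check_chance and NoOpCompressor items, something like the following could be the shape of it. This is a rough sketch only; the method names are placeholders, the NoOpCompressor package is assumed to be the usual org.apache.cassandra.io.compress, and it reuses the withOption(String, Object) from RelationOptions:
{code:java}
// Sketch only: placeholder method shapes, not merged API.
@NonNull
@CheckReturnValue
default SelfT withNoCompression() {
  // Disable compression explicitly via the NoOpCompressor class on newer C*.
  return withOption(
      "compression",
      ImmutableMap.of("class", "org.apache.cassandra.io.compress.NoOpCompressor"));
}

@NonNull
@CheckReturnValue
default SelfT withCrcCheckChance(double crcCheckChance) {
  // crc_check_chance is a top-level table option on newer C*, not a compression sub-option.
  return withOption("crc_check_chance", crcCheckChance);
}
{code}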

Added myself as reviewer on the PR and voted on this Jira. Would be great to 
collaborate on fixing this.




[jira] [Commented] (CASSJAVA-89) Using deprecated features in latest java driver

2025-04-07 Thread Thomas Steinmaurer (Jira)


[ https://issues.apache.org/jira/browse/CASSJAVA-89?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17941544#comment-17941544 ]

Thomas Steinmaurer commented on CASSJAVA-89:


Once the chunk length param is fixed / renamed, it will likely still fail for 
_crc_check_chance_ as well, due to 
https://issues.apache.org/jira/browse/CASSANDRA-18872.
