[jira] [Commented] (KAFKA-16404) Flaky test org.apache.kafka.streams.examples.wordcount.WordCountDemoTest.testGetStreamsConfig

2024-03-27 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17831202#comment-17831202
 ] 

Matthias J. Sax commented on KAFKA-16404:
-

Sounds like an env issue to me. I would propose to close this as "not a bug".

> Flaky test 
> org.apache.kafka.streams.examples.wordcount.WordCountDemoTest.testGetStreamsConfig
> -
>
> Key: KAFKA-16404
> URL: https://issues.apache.org/jira/browse/KAFKA-16404
> Project: Kafka
>  Issue Type: Bug
>Reporter: Igor Soarez
>Priority: Major
>
>  
> {code:java}
> org.apache.kafka.streams.examples.wordcount.WordCountDemoTest.testGetStreamsConfig()
>  failed, log available in 
> /home/jenkins/workspace/Kafka_kafka-pr_PR-14903@2/streams/examples/build/reports/testOutput/org.apache.kafka.streams.examples.wordcount.WordCountDemoTest.testGetStreamsConfig().test.stdout
> Gradle Test Run :streams:examples:test > Gradle Test Executor 87 > 
> WordCountDemoTest > testGetStreamsConfig() FAILED
>     org.apache.kafka.streams.errors.ProcessorStateException: Error opening 
> store KSTREAM-AGGREGATE-STATE-STORE-03 at location 
> /tmp/kafka-streams/streams-wordcount/0_0/rocksdb/KSTREAM-AGGREGATE-STATE-STORE-03
>         at 
> app//org.apache.kafka.streams.state.internals.RocksDBStore.openRocksDB(RocksDBStore.java:329)
>         at 
> app//org.apache.kafka.streams.state.internals.RocksDBTimestampedStore.openRocksDB(RocksDBTimestampedStore.java:69)
>         at 
> app//org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:254)
>         at 
> app//org.apache.kafka.streams.state.internals.RocksDBStore.init(RocksDBStore.java:175)
>         at 
> app//org.apache.kafka.streams.state.internals.WrappedStateStore.init(WrappedStateStore.java:71)
>         at 
> app//org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStore.init(ChangeLoggingKeyValueBytesStore.java:56)
>         at 
> app//org.apache.kafka.streams.state.internals.WrappedStateStore.init(WrappedStateStore.java:71)
>         at 
> app//org.apache.kafka.streams.state.internals.MeteredKeyValueStore.lambda$init$3(MeteredKeyValueStore.java:151)
>         at 
> app//org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl.maybeMeasureLatency(StreamsMetricsImpl.java:872)
>         at 
> app//org.apache.kafka.streams.state.internals.MeteredKeyValueStore.init(MeteredKeyValueStore.java:151)
>         at 
> app//org.apache.kafka.streams.processor.internals.ProcessorStateManager.registerStateStores(ProcessorStateManager.java:232)
>         at 
> app//org.apache.kafka.streams.processor.internals.StateManagerUtil.registerStateStores(StateManagerUtil.java:102)
>         at 
> app//org.apache.kafka.streams.processor.internals.StreamTask.initializeIfNeeded(StreamTask.java:258)
>         at 
> app//org.apache.kafka.streams.TopologyTestDriver.setupTask(TopologyTestDriver.java:530)
>         at 
> app//org.apache.kafka.streams.TopologyTestDriver.<init>(TopologyTestDriver.java:373)
>         at 
> app//org.apache.kafka.streams.TopologyTestDriver.<init>(TopologyTestDriver.java:300)
>         at 
> app//org.apache.kafka.streams.TopologyTestDriver.<init>(TopologyTestDriver.java:276)
>         at 
> app//org.apache.kafka.streams.examples.wordcount.WordCountDemoTest.setup(WordCountDemoTest.java:60)
>         Caused by:
>         org.rocksdb.RocksDBException: While lock file: 
> /tmp/kafka-streams/streams-wordcount/0_0/rocksdb/KSTREAM-AGGREGATE-STATE-STORE-03/LOCK:
>  Resource temporarily unavailable
>             at app//org.rocksdb.RocksDB.open(Native Method)
>             at app//org.rocksdb.RocksDB.open(RocksDB.java:307)
>             at 
> app//org.apache.kafka.streams.state.internals.RocksDBStore.openRocksDB(RocksDBStore.java:323)
>             ... 17 more
> {code}
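Both this failure and KAFKA-16403 point at the default state directory {{/tmp/kafka-streams}} being shared between concurrent test executors on the same host, so one JVM trips over the RocksDB LOCK file (or the half-deleted store) of another. A minimal sketch, assuming the test is free to override {{StreamsConfig.STATE_DIR_CONFIG}}, of how a {{TopologyTestDriver}} test can be pointed at a unique temporary state directory; the trivial topology below is a placeholder, not the word-count demo:
{code:java}
import java.nio.file.Files;
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.TopologyTestDriver;
import org.apache.kafka.streams.kstream.Materialized;

public class IsolatedStateDirTest {

    public static void main(final String[] args) throws Exception {
        final StreamsBuilder builder = new StreamsBuilder();
        // Placeholder topology that materializes a RocksDB-backed store.
        builder.table("input-topic", Materialized.as("counts-store"));

        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-wordcount");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        // Unique state directory per run instead of the shared default /tmp/kafka-streams,
        // so a second executor on the same host cannot grab the same RocksDB LOCK file.
        props.put(StreamsConfig.STATE_DIR_CONFIG,
                Files.createTempDirectory("kafka-streams-test").toString());

        try (TopologyTestDriver driver = new TopologyTestDriver(builder.build(), props)) {
            // ... pipe input records and assert on output as the test normally would ...
        }
    }
}
{code}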



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16403) Flaky test org.apache.kafka.streams.examples.wordcount.WordCountDemoTest.testCountListOfWords

2024-03-27 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17831201#comment-17831201
 ] 

Matthias J. Sax commented on KAFKA-16403:
-

Sounds like an env issue but not a bug to me. I would tend to close this as "not 
a bug".

> Flaky test 
> org.apache.kafka.streams.examples.wordcount.WordCountDemoTest.testCountListOfWords
> -
>
> Key: KAFKA-16403
> URL: https://issues.apache.org/jira/browse/KAFKA-16403
> Project: Kafka
>  Issue Type: Bug
>Reporter: Igor Soarez
>Priority: Major
>
> {code:java}
> org.apache.kafka.streams.examples.wordcount.WordCountDemoTest.testCountListOfWords()
>  failed, log available in 
> /home/jenkins/workspace/Kafka_kafka-pr_PR-14903/streams/examples/build/reports/testOutput/org.apache.kafka.streams.examples.wordcount.WordCountDemoTest.testCountListOfWords().test.stdout
> Gradle Test Run :streams:examples:test > Gradle Test Executor 82 > 
> WordCountDemoTest > testCountListOfWords() FAILED
>     org.apache.kafka.streams.errors.ProcessorStateException: Error opening 
> store KSTREAM-AGGREGATE-STATE-STORE-03 at location 
> /tmp/kafka-streams/streams-wordcount/0_0/rocksdb/KSTREAM-AGGREGATE-STATE-STORE-03
>         at 
> org.apache.kafka.streams.state.internals.RocksDBStore.openRocksDB(RocksDBStore.java:329)
>         at 
> org.apache.kafka.streams.state.internals.RocksDBTimestampedStore.openRocksDB(RocksDBTimestampedStore.java:69)
>         at 
> org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:254)
>         at 
> org.apache.kafka.streams.state.internals.RocksDBStore.init(RocksDBStore.java:175)
>         at 
> org.apache.kafka.streams.state.internals.WrappedStateStore.init(WrappedStateStore.java:71)
>         at 
> org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStore.init(ChangeLoggingKeyValueBytesStore.java:56)
>         at 
> org.apache.kafka.streams.state.internals.WrappedStateStore.init(WrappedStateStore.java:71)
>         at 
> org.apache.kafka.streams.state.internals.MeteredKeyValueStore.lambda$init$3(MeteredKeyValueStore.java:151)
>         at 
> org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl.maybeMeasureLatency(StreamsMetricsImpl.java:872)
>         at 
> org.apache.kafka.streams.state.internals.MeteredKeyValueStore.init(MeteredKeyValueStore.java:151)
>         at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.registerStateStores(ProcessorStateManager.java:232)
>         at 
> org.apache.kafka.streams.processor.internals.StateManagerUtil.registerStateStores(StateManagerUtil.java:102)
>         at 
> org.apache.kafka.streams.processor.internals.StreamTask.initializeIfNeeded(StreamTask.java:258)
>         at 
> org.apache.kafka.streams.TopologyTestDriver.setupTask(TopologyTestDriver.java:530)
>         at 
> org.apache.kafka.streams.TopologyTestDriver.<init>(TopologyTestDriver.java:373)
>         at 
> org.apache.kafka.streams.TopologyTestDriver.<init>(TopologyTestDriver.java:300)
>         at 
> org.apache.kafka.streams.TopologyTestDriver.<init>(TopologyTestDriver.java:276)
>         at 
> org.apache.kafka.streams.examples.wordcount.WordCountDemoTest.setup(WordCountDemoTest.java:60)
>         Caused by:
>         org.rocksdb.RocksDBException: Corruption: IO error: No such file or 
> directory: While open a file for random read: 
> /tmp/kafka-streams/streams-wordcount/0_0/rocksdb/KSTREAM-AGGREGATE-STATE-STORE-03/10.ldb:
>  No such file or directory in file 
> /tmp/kafka-streams/streams-wordcount/0_0/rocksdb/KSTREAM-AGGREGATE-STATE-STORE-03/MANIFEST-05
>             at org.rocksdb.RocksDB.open(Native Method)
>             at org.rocksdb.RocksDB.open(RocksDB.java:307)
>             at 
> org.apache.kafka.streams.state.internals.RocksDBStore.openRocksDB(RocksDBStore.java:323)
>             ... 17 more
>  {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16377) Fix flaky HighAvailabilityTaskAssignorIntegrationTest.shouldScaleOutWithWarmupTasksAndPersistentStores

2024-03-14 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16377:

Component/s: streams
 unit tests

> Fix flaky 
> HighAvailabilityTaskAssignorIntegrationTest.shouldScaleOutWithWarmupTasksAndPersistentStores
> --
>
> Key: KAFKA-16377
> URL: https://issues.apache.org/jira/browse/KAFKA-16377
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Reporter: Chia-Ping Tsai
>Priority: Major
>
> {quote}
> [2024-03-13T16:07:11.125Z] Gradle Test Run :streams:test > Gradle Test 
> Executor 95 > HighAvailabilityTaskAssignorIntegrationTest > 
> shouldScaleOutWithWarmupTasksAndPersistentStores(String, TestInfo) > 
> "shouldScaleOutWithWarmupTasksAndPersistentStores(String, 
> TestInfo).balance_subtopology" FAILED
> [2024-03-13T16:07:11.125Z] java.lang.AssertionError: the first assignment 
> after adding a node should be unstable while we warm up the state.
> [2024-03-13T16:07:11.125Z] at 
> org.apache.kafka.streams.integration.HighAvailabilityTaskAssignorIntegrationTest.assertFalseNoRetry(HighAvailabilityTaskAssignorIntegrationTest.java:310)
> [2024-03-13T16:07:11.125Z] at 
> org.apache.kafka.streams.integration.HighAvailabilityTaskAssignorIntegrationTest.lambda$shouldScaleOutWithWarmupTasks$7(HighAvailabilityTaskAssignorIntegrationTest.java:237)
> [2024-03-13T16:07:11.125Z] at 
> org.apache.kafka.test.TestUtils.lambda$waitForCondition$3(TestUtils.java:395)
> [2024-03-13T16:07:11.125Z] at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:443)
> [2024-03-13T16:07:11.125Z] at 
> org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:392)
> [2024-03-13T16:07:11.125Z] at 
> org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:376)
> [2024-03-13T16:07:11.125Z] at 
> org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:366)
> [2024-03-13T16:07:11.125Z] at 
> org.apache.kafka.streams.integration.HighAvailabilityTaskAssignorIntegrationTest.shouldScaleOutWithWarmupTasks(HighAvailabilityTaskAssignorIntegrationTest.java:232)
> [2024-03-13T16:07:11.125Z] at 
> org.apache.kafka.streams.integration.HighAvailabilityTaskAssignorIntegrationTest.shouldScaleOutWithWarmupTasksAndPersistentStores(HighAvailabilityTaskAssignorIntegrationTest.java:130)
> {quote}
> https://ci-builds.apache.org/blue/rest/organizations/jenkins/pipelines/Kafka/pipelines/kafka-pr/branches/PR-15474/runs/12/nodes/9/steps/88/log/?start=0



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16359) kafka-clients-3.7.0.jar published to Maven Central is defective

2024-03-11 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825496#comment-17825496
 ] 

Matthias J. Sax commented on KAFKA-16359:
-

Just to clarify: we cannot re-publish a 3.7.0 artifact. – We can only fix this 
with a 3.7.1 release. Seems we should push 3.7.1 out sooner rather than later.

> kafka-clients-3.7.0.jar published to Maven Central is defective
> ---
>
> Key: KAFKA-16359
> URL: https://issues.apache.org/jira/browse/KAFKA-16359
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 3.7.0
>Reporter: Jeremy Norris
>Assignee: Apoorv Mittal
>Priority: Critical
>
> The {{kafka-clients-3.7.0.jar}} that has been published to Maven Central is 
> defective: its {{META-INF/MANIFEST.MF}} bogusly includes a {{Class-Path}} 
> element:
> {code}
> Manifest-Version: 1.0
> Class-Path: zstd-jni-1.5.5-6.jar lz4-java-1.8.0.jar snappy-java-1.1.10
> .5.jar slf4j-api-1.7.36.jar
> {code}
> This bogus {{Class-Path}} element leads to compiler warnings for projects 
> that utilize it as a dependency:
> {code}
> [WARNING] [path] bad path element 
> ".../.m2/repository/org/apache/kafka/kafka-clients/3.7.0/zstd-jni-1.5.5-6.jar":
>  no such file or directory
> [WARNING] [path] bad path element 
> ".../.m2/repository/org/apache/kafka/kafka-clients/3.7.0/lz4-java-1.8.0.jar": 
> no such file or directory
> [WARNING] [path] bad path element 
> ".../.m2/repository/org/apache/kafka/kafka-clients/3.7.0/snappy-java-1.1.10.5.jar":
>  no such file or directory
> [WARNING] [path] bad path element 
> ".../.m2/repository/org/apache/kafka/kafka-clients/3.7.0/slf4j-api-1.7.36.jar":
>  no such file or directory
> {code}
> Either the {{kafka-clients-3.7.0.jar}} needs to be respun and published 
> without the bogus {{Class-Path}} element in its {{META-INF/MANIFEST.MF}}, or 
> a new release should be published that corrects this defect.
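For anyone wanting to confirm the defect locally, a small sketch (not from the ticket) that prints the {{Class-Path}} attribute of the published jar using the standard {{java.util.jar}} API; the local repository path is an assumption and should be adjusted:
{code:java}
import java.io.IOException;
import java.util.jar.JarFile;
import java.util.jar.Manifest;

public class ManifestCheck {

    public static void main(final String[] args) throws IOException {
        // Hypothetical local path; adjust to your Maven repository layout.
        final String jarPath = args.length > 0
            ? args[0]
            : System.getProperty("user.home")
                + "/.m2/repository/org/apache/kafka/kafka-clients/3.7.0/kafka-clients-3.7.0.jar";

        try (JarFile jar = new JarFile(jarPath)) {
            final Manifest manifest = jar.getManifest();
            // For the defective 3.7.0 artifact this prints the sibling-jar entries
            // (zstd-jni, lz4-java, snappy-java, slf4j-api) that do not exist next to the jar.
            System.out.println("Class-Path: "
                + manifest.getMainAttributes().getValue("Class-Path"));
        }
    }
}
{code}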



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16357) Kafka Client JAR manifest breaks javac linting

2024-03-11 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-16357.
-
Resolution: Duplicate

> Kafka Client JAR manifest breaks javac linting
> --
>
> Key: KAFKA-16357
> URL: https://issues.apache.org/jira/browse/KAFKA-16357
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 3.7.0
> Environment: Linux, JDK 21 (Docker image eclipse-temurin:21-jdk-jammy)
>Reporter: Jacek Wojciechowski
>Priority: Critical
>
> I upgraded kafka-clients from 3.6.1 to 3.7.0 and discovered that my project 
> is not building anymore.
> The reason is that kafka-clients-3.7.0.jar contains the following entry in 
> its JAR manifest file:
> Class-Path: zstd-jni-1.5.5-6.jar lz4-java-1.8.0.jar snappy-java-1.1.10
>  .5.jar slf4j-api-1.7.36.jar
> I'm using Maven repo to keep my dependencies and those files are not in the 
> same directory as kafka-clients-3.7.0.jar, so the paths in the manifest's 
> Class-Path are not correct. It fails my build because we build with javac 
> with all linting options on, in particular -Xlint:-path. It produces the 
> following warnings coming from javac:
> [WARNING] COMPILATION WARNING : 
> [INFO] -
> [WARNING] [path] bad path element 
> "/home/ci/.m2/repository/org/apache/kafka/kafka-clients/3.7.0/zstd-jni-1.5.5-6.jar":
>  no such file or directory
> [WARNING] [path] bad path element 
> "/home/ci/.m2/repository/org/apache/kafka/kafka-clients/3.7.0/lz4-java-1.8.0.jar":
>  no such file or directory
> [WARNING] [path] bad path element 
> "/home/ci/.m2/repository/org/apache/kafka/kafka-clients/3.7.0/snappy-java-1.1.10.5.jar":
>  no such file or directory
> [WARNING] [path] bad path element 
> "/home/ci/.m2/repository/org/apache/kafka/kafka-clients/3.7.0/slf4j-api-1.7.36.jar":
>  no such file or directory
> Since we also have the {{-Werror}} option enabled, it turns warnings into errors 
> and fails our build.
> I think our setup is quite typical: using a Maven repo to store dependencies, 
> having linting on and -Werror. Unfortunately, it doesn't work with the 
> latest kafka-clients because of the entries in the manifest's Class-Path. 
> And I think it might affect quite a lot of projects set up in a similar way.
> I don't know what the reason was to add a Class-Path entry to the JAR manifest 
> file, but perhaps this effect was not considered.
> It would be great if you removed the Class-Path entry from the JAR manifest 
> file.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16357) Kafka Client JAR manifest breaks javac linting

2024-03-11 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16357:

Priority: Critical  (was: Major)

> Kafka Client JAR manifest breaks javac linting
> --
>
> Key: KAFKA-16357
> URL: https://issues.apache.org/jira/browse/KAFKA-16357
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 3.7.0
> Environment: Linux, JDK 21 (Docker image eclipse-temurin:21-jdk-jammy)
>Reporter: Jacek Wojciechowski
>Priority: Critical
>
> I upgraded kafka-clients from 3.6.1 to 3.7.0 and discovered that my project 
> is not building anymore.
> The reason is that kafka-clients-3.7.0.jar contains the following entry in 
> its JAR manifest file:
> Class-Path: zstd-jni-1.5.5-6.jar lz4-java-1.8.0.jar snappy-java-1.1.10
>  .5.jar slf4j-api-1.7.36.jar
> I'm using Maven repo to keep my dependencies and those files are not in the 
> same directory as kafka-clients-3.7.0.jar, so the paths in the manifest's 
> Class-Path are not correct. It fails my build because we build with javac 
> with all linting options on, in particular -Xlint:-path. It produces the 
> following warnings coming from javac:
> [WARNING] COMPILATION WARNING : 
> [INFO] -
> [WARNING] [path] bad path element 
> "/home/ci/.m2/repository/org/apache/kafka/kafka-clients/3.7.0/zstd-jni-1.5.5-6.jar":
>  no such file or directory
> [WARNING] [path] bad path element 
> "/home/ci/.m2/repository/org/apache/kafka/kafka-clients/3.7.0/lz4-java-1.8.0.jar":
>  no such file or directory
> [WARNING] [path] bad path element 
> "/home/ci/.m2/repository/org/apache/kafka/kafka-clients/3.7.0/snappy-java-1.1.10.5.jar":
>  no such file or directory
> [WARNING] [path] bad path element 
> "/home/ci/.m2/repository/org/apache/kafka/kafka-clients/3.7.0/slf4j-api-1.7.36.jar":
>  no such file or directory
> Since we also have the {{-Werror}} option enabled, it turns warnings into errors 
> and fails our build.
> I think our setup is quite typical: using a Maven repo to store dependencies, 
> having linting on and -Werror. Unfortunately, it doesn't work with the 
> latest kafka-clients because of the entries in the manifest's Class-Path. 
> And I think it might affect quite a lot of projects set up in a similar way.
> I don't know what the reason was to add a Class-Path entry to the JAR manifest 
> file, but perhaps this effect was not considered.
> It would be great if you removed the Class-Path entry from the JAR manifest 
> file.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16360) Release plan of 3.x kafka releases.

2024-03-11 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-16360.
-
Resolution: Invalid

Please don't use Jira to ask questions. Jira tickets are for bug reports and 
features only.

Questions should be asked on the user and/or dev mailing lists: 
https://kafka.apache.org/contact

> Release plan of 3.x kafka releases.
> ---
>
> Key: KAFKA-16360
> URL: https://issues.apache.org/jira/browse/KAFKA-16360
> Project: Kafka
>  Issue Type: Improvement
>Reporter: kaushik srinivas
>Priority: Major
>
> KIP 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-833%3A+Mark+KRaft+as+Production+Ready#KIP833:MarkKRaftasProductionReady-ReleaseTimeline]
>  mentions ,
> h2. Kafka 3.7
>  * January 2024
>  * Final release with ZK mode
> But we see in Jira, some tickets are marked for 3.8 release. Does apache 
> continue to make 3.x releases having zookeeper and kraft supported 
> independent of pure kraft 4.x releases ?
> If yes, how many more releases can be expected on 3.x release line ?
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16366) Refactor KTable source optimization

2024-03-11 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16366:
---

 Summary: Refactor KTable source optimization
 Key: KAFKA-16366
 URL: https://issues.apache.org/jira/browse/KAFKA-16366
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Matthias J. Sax


Kafka Streams DSL offers an optimization to re-use an input topic as the table 
changelog, instead of creating a dedicated changelog topic.

So far, the Processor API did not support any such feature, and thus when the 
DSL compiles down into a Topology, we needed to access topology internals 
to allow for this optimization.

With KIP-813 (merged for AK 3.8), we added `Topology#addReadOnlyStateStore` as 
public API, and thus we should refactor the DSL compilation code to use this 
public API to build the `Topology` instead of internal APIs.
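A rough sketch of what the refactored DSL compilation could target, built directly against `Topology#addReadOnlyStateStore`. The exact parameter order of the overload shown here is from memory and should be treated as an assumption, as are the store, source, processor, and topic names; the input topic serves as the changelog, which is why the store builder has logging disabled:
{code:java}
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.Stores;

public class ReadOnlyStoreSketch {

    public static Topology build() {
        final Topology topology = new Topology();
        topology.addReadOnlyStateStore(
            // Logging disabled: the source topic itself acts as the changelog.
            Stores.keyValueStoreBuilder(
                Stores.persistentKeyValueStore("table-store"),
                Serdes.String(),
                Serdes.String()).withLoggingDisabled(),
            "table-source",
            Serdes.String().deserializer(),
            Serdes.String().deserializer(),
            "table-input-topic",
            "table-updater",
            () -> new Processor<String, String, Void, Void>() {
                private KeyValueStore<String, String> store;

                @Override
                public void init(final ProcessorContext<Void, Void> context) {
                    store = context.getStateStore("table-store");
                }

                @Override
                public void process(final Record<String, String> record) {
                    // Maintain the table from the source topic; no downstream output.
                    store.put(record.key(), record.value());
                }
            });
        return topology;
    }
}
{code}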



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-14748) Relax non-null FK left-join requirement

2024-03-07 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-14748:

Fix Version/s: 3.7.0

> Relax non-null FK left-join requirement
> ---
>
> Key: KAFKA-14748
> URL: https://issues.apache.org/jira/browse/KAFKA-14748
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Florin Akermann
>Priority: Major
>  Labels: kip
> Fix For: 3.7.0
>
>
> Kafka Streams enforces a strict non-null-key policy in the DSL across all 
> key-dependent operations (like aggregations and joins).
> This also applies to FK-joins, in particular to the ForeignKeyExtractor. If 
> it returns `null`, it's treated as invalid. For left-joins, it might make 
> sense to still accept a `null`, and add the left-hand record with an empty 
> right-hand-side to the result.
> KIP-962: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-962%3A+Relax+non-null+key+requirement+in+Kafka+Streams]
>  
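A minimal sketch of a foreign-key left join whose extractor can return `null`, to make the proposed relaxation concrete; topic names, the value format, and the joiner are placeholders, not from the ticket:
{code:java}
import java.util.function.Function;

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.ValueJoiner;

public class FkLeftJoinSketch {

    public static void build() {
        final StreamsBuilder builder = new StreamsBuilder();
        // Hypothetical topics: an order value has the form "customerId,amount".
        final KTable<String, String> orders = builder.table("orders");
        final KTable<String, String> customers = builder.table("customers");

        // The extractor may return null, e.g. for orders without a customer reference.
        final Function<String, String> customerIdExtractor = order -> {
            final String[] parts = order.split(",");
            return parts[0].isEmpty() ? null : parts[0];
        };

        final ValueJoiner<String, String, String> joiner =
            (order, customer) -> order + " / " + customer;

        // Before KIP-962, a null foreign key is treated as invalid and the record dropped;
        // with the relaxed semantics, the left record is emitted with a null right-hand side.
        final KTable<String, String> enriched =
            orders.leftJoin(customers, customerIdExtractor, joiner);
    }
}
{code}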



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-12317) Relax non-null key requirement for left/outer KStream joins

2024-03-07 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-12317:

Fix Version/s: 3.7.0

> Relax non-null key requirement for left/outer KStream joins
> ---
>
> Key: KAFKA-12317
> URL: https://issues.apache.org/jira/browse/KAFKA-12317
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Florin Akermann
>Priority: Major
>  Labels: kip
> Fix For: 3.7.0
>
>
> Currently, for a stream-stream and stream-table/globalTable join 
> KafkaStreams drops all stream records with a `null`-key (`null`-join-key 
> for stream-globalTable), because for a `null`-(join)key the join is 
> undefined: i.e., we don't have an attribute to do the table lookup (we 
> consider the stream-record as malformed). Note that we define the semantics 
> of _left/outer_ join as: keep the stream record if no matching join record 
> was found.
> We could relax the definition of _left_ stream-table/globalTable and 
> _left/outer_ stream-stream join though, and not drop `null`-(join)key stream 
> records, and call the ValueJoiner with a `null` "other-side" value instead: 
> if the stream record key (or join-key) is `null`, we could treat it as a 
> "failed lookup" instead of treating the stream record as corrupted.
> If we make this change, users that want to keep the current behavior can add 
> a `filter()` before the join to drop `null`-(join)key records from the stream 
> explicitly.
> Note that this change also requires changing the behavior if we insert a 
> repartition topic before the join: currently, we drop `null`-key records 
> before writing into the repartition topic (as we know they would be dropped 
> later anyway). We need to relax this behavior for a left stream-table and 
> left/outer stream-stream join. Users need to be aware (i.e., we might need to 
> put this into the docs and JavaDocs) that records with a `null`-key would be 
> partitioned randomly.
> KIP-962: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-962%3A+Relax+non-null+key+requirement+in+Kafka+Streams]
>  
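To make the opt-out mentioned above concrete, a minimal sketch (hypothetical topic names) of adding an explicit `filter()` ahead of a left join to keep the old drop-`null`-keys behavior:
{code:java}
import java.time.Duration;

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;

public class NullKeyFilterSketch {

    public static void build() {
        final StreamsBuilder builder = new StreamsBuilder();
        final KStream<String, String> left = builder.stream("left-topic");
        final KStream<String, String> right = builder.stream("right-topic");

        // Users who want the old semantics (drop null-key records) can filter explicitly
        // before the left join, instead of relying on the join to discard them.
        final KStream<String, String> leftNonNullKeys =
            left.filter((key, value) -> key != null);

        leftNonNullKeys.leftJoin(
            right,
            (leftValue, rightValue) -> leftValue + "/" + rightValue,
            JoinWindows.ofTimeDifferenceWithNoGrace(Duration.ofMinutes(5)));
    }
}
{code}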



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15576) Add 3.6.0 to broker/client and streams upgrade/compatibility tests

2024-03-07 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-15576.
-
Resolution: Fixed

> Add 3.6.0 to broker/client and streams upgrade/compatibility tests
> --
>
> Key: KAFKA-15576
> URL: https://issues.apache.org/jira/browse/KAFKA-15576
> Project: Kafka
>  Issue Type: Task
>Reporter: Satish Duggana
>Priority: Major
> Fix For: 3.8.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16025) Streams StateDirectory has orphaned locks after rebalancing, blocking future rebalancing

2024-03-07 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16025:

Fix Version/s: 3.7.1

> Streams StateDirectory has orphaned locks after rebalancing, blocking future 
> rebalancing
> 
>
> Key: KAFKA-16025
> URL: https://issues.apache.org/jira/browse/KAFKA-16025
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 3.4.0
> Environment: Linux
>Reporter: Sabit
>Assignee: Sabit
>Priority: Major
> Fix For: 3.8.0, 3.7.1
>
>
> Hello,
>  
> We are encountering an issue where during rebalancing, we see streams threads 
> on one client get stuck in rebalancing. Upon enabling debug logs, we saw that 
> some tasks were having issues initializing due to failure to grab a lock in 
> the StateDirectory:
>  
> {{2023-12-14 22:51:57.352000Z stream-thread 
> [i-0f1a5e7a42158e04b-StreamThread-14] Could not initialize task 0_51 since: 
> stream-thread [i-0f1a5e7a42158e04b-StreamThread-14] standby-task [0_51] 
> Failed to lock the state directory for task 0_51; will retry}}
>  
> We were able to reproduce this behavior reliably on 3.4.0. This is the 
> sequence that triggers the bug.
> Assume in a streams consumer group, there are 5 instances (A, B, C, D, E), 
> each with 5 threads (1-5), and the consumer is using stateful tasks which 
> have state stores on disk. There are 10 active tasks and 10 standby tasks.
>  # Instance A is deactivated
>  # As an example, lets say task 0_1, previously on instance B, moves to 
> instance C
>  # Task 0_1 leaves behind its state directory on Instance B's disk, 
> currently unused, and no lock for it exists in Instance B's StateDirectory 
> in-memory lock tracker
>  # Instance A is re-activated
>  # Streams thread 1 on Instance B is asked to re-join the consumer group due 
> to a new member being added
>  # As part of re-joining, thread 1 lists non-empty state directories in order 
> to report the offsets it has in its state stores as part of its metadata. 
> Thread 1 sees that the directory for 0_1 is not empty.
>  # The cleanup thread on instance B runs. The cleanup thread locks state 
> store 0_1, sees the directory for 0_1 was last modified more than 
> `state.cleanup.delay.ms` ago, deletes it, and unlocks it successfully
>  # Thread 1 takes a lock on directory 0_1 due to it being found not-empty 
> before, unaware that the cleanup has run between the time of the check and 
> the lock. It tracks this lock in its own in-memory store, in addition to 
> StateDirectory's in-memory lock store
>  # Thread 1 successfully joins the consumer group
>  # After every consumer in the group joins the group, assignments are 
> calculated, and then every consumer calls sync group to receive the new 
> assignments
>  # Thread 1 on Instance B calls sync group but gets an error - the group 
> coordinator has triggered a new rebalance and all members must rejoin the 
> group
>  # Thread 1 again lists non-empty state directories in order to report the 
> offsets it has in its state stores as part of its metadata. Prior to doing 
> so, it clears its in-memory store tracking the locks it has taken for the 
> purpose of gathering rebalance metadata
>  # Thread 1 no longer takes a lock on 0_1 as it is empty
>  # However, that lock on 0_1 owned by Thread 1 remains in StateDirectory
>  # All consumers re-join and sync successfully, receiving their new 
> assignments
>  # Thread 2 on Instance B is assigned task 0_1
>  # Thread 2 cannot take a lock on 0_1 in the StateDirectory because it is 
> still being held by Thread 1
>  # Thread 2 remains in rebalancing state, and cannot make progress on task 
> 0_1, or any other tasks it has assigned.
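To make the sequence above easier to follow, a deliberately simplified model (not Kafka code) of the lock bookkeeping it describes: a lock taken while gathering rebalance metadata is never released because the thread only clears its local tracking, so a later owner can no longer acquire the task directory:
{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified model of a StateDirectory-style task -> owning-thread lock registry.
public class OrphanedLockSketch {

    private final Map<String, String> taskLocks = new ConcurrentHashMap<>();

    boolean lock(final String taskId, final String thread) {
        // Re-entrant for the owning thread, exclusive otherwise.
        return taskLocks.putIfAbsent(taskId, thread) == null
            || thread.equals(taskLocks.get(taskId));
    }

    void unlock(final String taskId, final String thread) {
        taskLocks.remove(taskId, thread);
    }

    public static void main(final String[] args) {
        final OrphanedLockSketch dir = new OrphanedLockSketch();

        dir.lock("0_1", "Thread-1");   // step 8: taken while gathering rebalance metadata
        // steps 12-14: Thread-1 only clears its *local* tracking and never calls
        // dir.unlock("0_1", "Thread-1"), so the registry entry is orphaned.
        System.out.println(dir.lock("0_1", "Thread-2"));  // step 17: false -> task 0_1 blocked
    }
}
{code}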



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16350) StateUpdater does not init transaction after canceling task close action

2024-03-06 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16350:

Summary: StateUpdater does not init transaction after canceling task close 
action  (was: StateUpdated does not init transaction after canceling task close 
action)

> StateUpdater does not init transaction after canceling task close action
> 
>
> Key: KAFKA-16350
> URL: https://issues.apache.org/jira/browse/KAFKA-16350
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Matthias J. Sax
>Priority: Major
> Attachments: 
> tyh5pkfmgwfoe-org.apache.kafka.streams.integration.EosIntegrationTest-shouldWriteLatestOffsetsToCheckpointOnShutdown[exactly_once_v2,
>  processing threads true]-1-output.txt
>
>
> With EOSv2, we use a thread producer shared across all tasks. We init tx on 
> the producer with each _task_ (due to EOSv1 which uses a producer per task), 
> and have a guard in place to only init tx a single time.
> If we hit an error, we close the producer and create a new one, which is 
> still not initialized for transaction. At the same time, with state updater, 
> we schedule a "close task" action on error.
> For each task we get back, we do cancel the "close task" action, to actually 
> keep the task. If this happens for _all_ tasks, we don't have any task in 
> state CREATED at hand, and thus we never init the producer for transactions, 
> because we assume this was already done.
> On the first `send` request, we crash with an IllegalStateException:
> {code:java}
> Invalid transition attempted from state UNINITIALIZED to state IN_TRANSACTION 
> {code}
> This bug is exposed via EOSIntegrationTest (logs attached).
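A deliberately simplified sketch (not the actual StreamsProducer code) of the guard pattern described above: the init flag survives the producer swap, so initTransactions() is never called on the new producer and the first transactional send fails. Class and method names are illustrative only:
{code:java}
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.common.serialization.StringSerializer;

public class TxInitGuardSketch {

    private Producer<String, String> producer;
    private boolean transactionInitialized = false;   // guard shared across tasks (EOSv2)

    TxInitGuardSketch(final Properties config) {
        producer = new KafkaProducer<>(config, new StringSerializer(), new StringSerializer());
    }

    void maybeInitTransactions() {
        if (!transactionInitialized) {          // only the first task triggers init
            producer.initTransactions();
            transactionInitialized = true;
        }
    }

    void resetProducerOnError(final Properties config) {
        producer.close();
        producer = new KafkaProducer<>(config, new StringSerializer(), new StringSerializer());
        // Bug pattern: if every "close task" action is canceled, no task in CREATED state
        // re-runs maybeInitTransactions(), the flag stays true, and the fresh producer is
        // never initialized -> IllegalStateException (UNINITIALIZED -> IN_TRANSACTION)
        // on the first transactional send.
    }
}
{code}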



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16350) StateUpdated does not init transaction after canceling task close action

2024-03-06 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16350:

Attachment: 
tyh5pkfmgwfoe-org.apache.kafka.streams.integration.EosIntegrationTest-shouldWriteLatestOffsetsToCheckpointOnShutdown[exactly_once_v2,
 processing threads true]-1-output.txt

> StateUpdated does not init transaction after canceling task close action
> 
>
> Key: KAFKA-16350
> URL: https://issues.apache.org/jira/browse/KAFKA-16350
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Matthias J. Sax
>Priority: Major
> Attachments: 
> tyh5pkfmgwfoe-org.apache.kafka.streams.integration.EosIntegrationTest-shouldWriteLatestOffsetsToCheckpointOnShutdown[exactly_once_v2,
>  processing threads true]-1-output.txt
>
>
> With EOSv2, we use a thread producer shared across all tasks. We init tx on 
> the producer with each _task_ (due to EOSv1 which uses a producer per task), 
> and have a guard in place to only init tx a single time.
> If we hit an error, we close the producer and create a new one, which is 
> still not initialized for transaction. At the same time, with state updater, 
> we schedule a "close task" action on error.
> For each task we get back, we do cancel the "close task" action, to actually 
> keep the task. If this happens for _all_ tasks, we don't have any task in 
> state CREATED at hand, and thus we never init the producer for transactions, 
> because we assume this was already done.
> On the first `send` request, we crash with an IllegalStateException:
> {code:java}
> Invalid transition attempted from state UNINITIALIZED to state IN_TRANSACTION 
> {code}
> This bug is exposed via EOSIntegrationTest (logs attached).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16350) StateUpdated does not init transaction after canceling task close action

2024-03-06 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16350:
---

 Summary: StateUpdated does not init transaction after canceling 
task close action
 Key: KAFKA-16350
 URL: https://issues.apache.org/jira/browse/KAFKA-16350
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Matthias J. Sax


With EOSv2, we use a thread producer shared across all tasks. We init tx on the 
producer with each _task_ (due to EOSv1 which uses a producer per task), and 
have a guard in place to only init tx a single time.

If we hit an error, we close the producer and create a new one, which is still 
not initialized for transaction. At the same time, with state updater, we 
schedule a "close task" action on error.

For each task we get back, we do cancel the "close task" action, to actually 
keep the task. If this happens for _all_ tasks, we don't have any task in state 
CREATED at hand, and thus we never init the producer for transactions, because 
we assume this was already done.

On the first `send` request, we crash with an IllegalStateException:
{code:java}
Invalid transition attempted from state UNINITIALIZED to state IN_TRANSACTION 
{code}
This bug is exposed via EOSIntegrationTest (logs attached).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-15964) Flaky test: testHighAvailabilityTaskAssignorLargeNumConsumers – org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest

2024-03-06 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-15964:
---

Assignee: Matthias J. Sax

> Flaky test: testHighAvailabilityTaskAssignorLargeNumConsumers – 
> org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest
> ---
>
> Key: KAFKA-15964
> URL: https://issues.apache.org/jira/browse/KAFKA-15964
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Reporter: Apoorv Mittal
>Assignee: Matthias J. Sax
>Priority: Major
>  Labels: flaky-test
>
> PR build: 
> [https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-14767/15/tests/]
>  
> {code:java}
> java.lang.AssertionError: The first assignment took too long to complete at 94250ms.
> Stacktrace
> java.lang.AssertionError: The first assignment took too long to complete at 94250ms.
>     at org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest.completeLargeAssignment(StreamsAssignmentScaleTest.java:220)
>     at org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest.testHighAvailabilityTaskAssignorLargeNumConsumers(StreamsAssignmentScaleTest.java:85)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
>     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.base/java.lang.reflect.Method.invoke(Method.java:568)
>     at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>     at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>     at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
>     at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16343) Improve tests of streams foreignkeyjoin package

2024-03-06 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16343:

Component/s: unit tests

> Improve tests of streams foreignkeyjoin package
> ---
>
> Key: KAFKA-16343
> URL: https://issues.apache.org/jira/browse/KAFKA-16343
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams, unit tests
>Affects Versions: 3.7.0
>Reporter: Ayoub Omari
>Assignee: Ayoub Omari
>Priority: Major
>
> Some classes are not tested in streams foreignkeyjoin package, such as 
> SubscriptionSendProcessorSupplier and ForeignTableJoinProcessorSupplier. 
> Corresponding tests should be added.
> The class ForeignTableJoinProcessorSupplierTest should be renamed as it is 
> not testing ForeignTableJoinProcessor, but rather 
> SubscriptionJoinProcessorSupplier.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16335) Remove Deprecated method on StreamPartitioner

2024-03-06 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824137#comment-17824137
 ] 

Matthias J. Sax commented on KAFKA-16335:
-

Thanks for your interest in working on this ticket. Note that the next release 
will be 3.8, and thus we cannot complete this ticket right now.

> Remove Deprecated method on StreamPartitioner
> -
>
> Key: KAFKA-16335
> URL: https://issues.apache.org/jira/browse/KAFKA-16335
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Caio César
>Priority: Blocker
> Fix For: 4.0.0
>
>
> Deprecated in 3.4 release via 
> [https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=211883356]
>  * StreamPartitioner#partition (singular)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15417) JoinWindow does not seem to work properly with a KStream - KStream - LeftJoin()

2024-03-05 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-15417.
-
Fix Version/s: 3.8.0
   Resolution: Fixed

> JoinWindow does not  seem to work properly with a KStream - KStream - 
> LeftJoin()
> 
>
> Key: KAFKA-15417
> URL: https://issues.apache.org/jira/browse/KAFKA-15417
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 3.4.0
>Reporter: Victor van den Hoven
>Assignee: Victor van den Hoven
>Priority: Major
> Fix For: 3.8.0
>
> Attachments: Afbeelding 1-1.png, Afbeelding 1.png, 
> SimpleStreamTopology.java, SimpleStreamTopologyTest.java
>
>
> In Kafka Streams 3.4.0:
> According to the javadoc of JoinWindows:
> _There are three different window configurations supported:_
>  * _before = after = time-difference_
>  * _before = 0 and after = time-difference_
>  * _*before = time-difference and after = 0*_
>  
> However, if I use a joinWindow with *before = time-difference and after = 0* 
> on a kstream-kstream leftJoin, the *after=0* part does not seem to work.
> When using _stream1.leftJoin(stream2, joinWindow)_ with 
> _joinWindow.after=0 and joinWindow.before=30s_, any new message on 
> stream 1 that cannot be joined with any message on stream2 should be joined 
> with a null-record after the _joinWindow.after_ has ended and a new message 
> has arrived on stream1.
> It does not.
> Only if the new message arrives after the value of _joinWindow.before_ is the 
> previous message joined with a null-record.
>  
> Attached you can find two files with a TopologyTestDriver Unit test to 
> reproduce.
> topology:   stream1.leftjoin( stream2, joiner, joinwindow)
> joinWindow has before=5000ms and after=0
> message1(key1) ->  stream1
> after 4000ms message2(key2) -> stream1  ->  NO null-record join was made, even 
> though the after period had expired.
> after 4900ms message2(key2) -> stream1  ->  NO null-record join was made, even 
> though the after period had expired.
> after 5000ms message2(key2) -> stream1  ->  A null-record join was made, as the 
> before period had expired.
> after 6000ms message2(key2) -> stream1  ->  A null-record join was made, as the 
> before period had expired.
>  
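A minimal sketch (not the attached test) of the window configuration under discussion, before = 5s and after = 0, using the current JoinWindows builder; topic names and the joiner are placeholders:
{code:java}
import java.time.Duration;

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;

public class AsymmetricJoinWindowSketch {

    public static void build() {
        final StreamsBuilder builder = new StreamsBuilder();
        final KStream<String, String> stream1 = builder.stream("stream1-topic");
        final KStream<String, String> stream2 = builder.stream("stream2-topic");

        // before = 5s, after = 0: a stream1 record may only join stream2 records
        // that arrived up to 5 seconds *earlier*.
        final JoinWindows window =
            JoinWindows.ofTimeDifferenceWithNoGrace(Duration.ofSeconds(5))
                       .after(Duration.ZERO);

        // Per the report, the left record should be emitted with a null right side
        // once window.after has passed, but the null-record only shows up after
        // window.before expires.
        stream1.leftJoin(stream2, (v1, v2) -> v1 + "/" + v2, window);
    }
}
{code}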



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15625) Do not flush global state store at each commit

2024-03-05 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-15625:

Fix Version/s: 3.8.0

> Do not flush global state store at each commit
> --
>
> Key: KAFKA-15625
> URL: https://issues.apache.org/jira/browse/KAFKA-15625
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Bruno Cadonna
>Assignee: Ayoub Omari
>Priority: Major
> Fix For: 3.8.0
>
>
> Global state stores are flushed at each commit. While that is not a big issue 
> with at-least-once processing mode since the commit interval is by default 
> 30s, it might become an issue with EOS where the commit interval is 200ms by 
> default.
> One option would be to flush and checkpoint global state stores when the 
> delta of the content exceeds a given threshold as we do for other stores. See 
> https://github.com/apache/kafka/blob/a1f3c6d16061566a4f53c72a95e2679b8ee229e0/streams/src/main/java/org/apache/kafka/streams/processor/internals/AbstractTask.java#L97
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14747) FK join should record discarded subscription responses

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-14747.
-
Fix Version/s: 3.8.0
   Resolution: Fixed

> FK join should record discarded subscription responses
> --
>
> Key: KAFKA-14747
> URL: https://issues.apache.org/jira/browse/KAFKA-14747
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Ayoub Omari
>Priority: Minor
>  Labels: beginner, newbie
> Fix For: 3.8.0
>
>
> FK-joins are subject to a race condition: If the left-hand side record is 
> updated, a subscription is sent to the right-hand side (including a hash 
> value of the left-hand side record), and the right-hand side might send back 
> join responses (also including the original hash). The left-hand side only 
> processes the responses if the returned hash matches the current hash of the 
> left-hand side record, because a different hash implies that the left-hand 
> side record was updated in the meantime (including sending a new 
> subscription to the right-hand side), and thus the data is stale and the 
> response should not be processed (joining the response to the new record 
> could lead to incorrect results).
> A similar thing can happen on a right-hand side update that triggers a 
> response, which might be dropped if the left-hand side record was updated in 
> parallel.
> While the behavior is correct, we don't record if this happens. We should 
> consider recording this using the existing "dropped record" sensor or maybe 
> adding a new sensor.
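A deliberately simplified sketch (not the actual subscription-response handling code) of the hash comparison described above, with a hypothetical hook where the requested sensor could be recorded; all names here are illustrative:
{code:java}
import java.util.Arrays;

public class SubscriptionResponseCheckSketch {

    // Hypothetical callback invoked for each subscription response on the left-hand side.
    static <V, VR> VR resolveResponse(final V currentLeftValue,
                                      final byte[] currentHash,
                                      final byte[] responseHash,
                                      final VR joinedResult,
                                      final Runnable droppedResponseSensor) {
        // The response carries the hash of the left-hand value it was computed for.
        // If the left-hand record changed in the meantime, the hashes differ and the
        // stale response must be discarded -- the case the ticket wants recorded.
        if (currentLeftValue == null || !Arrays.equals(currentHash, responseHash)) {
            droppedResponseSensor.run();   // e.g. bump a "dropped record" style sensor
            return null;                   // drop the stale response
        }
        return joinedResult;
    }
}
{code}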



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16338) Removed Deprecated configs from StreamsConfig

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16338:

Description: 
* "buffered.records.per.partition" were deprecated via 
[https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=186878390] 
(KIP not fully implemented yet, so move this from the 4.0 into this 5.0 ticket)
 * "default.store.config" was deprecated 

  was:
* "buffered.records.per.partition" was deprecated in 3.4 via 
[https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=186878390], 
however the KIP is not fully implemented yet, so we move this from the 4.0 into 
this 5.0 ticket
 * "default.dsl.store" was deprecated in 3.7 via 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-954%3A+expand+default+DSL+store+configuration+to+custom+types]
 


> Removed Deprecated configs from StreamsConfig
> -
>
> Key: KAFKA-16338
> URL: https://issues.apache.org/jira/browse/KAFKA-16338
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Priority: Blocker
> Fix For: 5.0.0
>
>
> * "buffered.records.per.partition" were deprecated via 
> [https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=186878390] 
> (KIP not fully implemented yet, so move this from the 4.0 into this 5.0 
> ticket)
>  * "default.store.config" was deprecated 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16338) Removed Deprecated configs from StreamsConfig

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16338:

Description: 
* "buffered.records.per.partition" was deprecated in 3.4 via 
[https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=186878390], 
however the KIP is not fully implemented yet, so we move this from the 4.0 into 
this 5.0 ticket
 * "default.dsl.store" was deprecated in 3.7 via 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-954%3A+expand+default+DSL+store+configuration+to+custom+types]
 

  was:* "buffered.records.per.partition" was deprecated via 
[https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=186878390] 
(KIP not fully implemented yet, so this moves from the 4.0 ticket into this 5.0 ticket)


> Removed Deprecated configs from StreamsConfig
> -
>
> Key: KAFKA-16338
> URL: https://issues.apache.org/jira/browse/KAFKA-16338
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Priority: Blocker
> Fix For: 5.0.0
>
>
> * "buffered.records.per.partition" was deprecated in 3.4 via 
> [https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=186878390], 
> however the KIP is not fully implemented yet, so we move this from the 4.0 
> into this 5.0 ticket
>  * "default.dsl.store" was deprecated in 3.7 via 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-954%3A+expand+default+DSL+store+configuration+to+custom+types]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-12829) Remove Deprecated methods and classes of old Processor API

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-12829:

Description: 
The following methods were deprecated in version 2.7:
 * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.StreamsBuilder#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.kstream.Consumed, 
org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.processor.StateStore#init(org.apache.kafka.streams.processor.ProcessorContext,
 org.apache.kafka.streams.processor.StateStore)
 * org.apache.kafka.streams.kstream.KStream.process (multiple, taking the old 
`ProcessorSupplier`)
 * org.apache.kafka.streams.scala.KStream.process (multiple, taking the old 
`ProcessorSupplier`)

The following classes were deprecated:
 * org.apache.kafka.streams.processor.\{Processor, ProcessorSupplier, 
ProcessorContext} (note that `ProcessorContext` is also used by `Transformer` 
which was also deprecated in 3.3 and can be removed, too)
 * Cf https://issues.apache.org/jira/browse/KAFKA-16339 – both tickets should 
be worked on together

Also cf related tickets:
 * https://issues.apache.org/jira/browse/KAFKA-12832
 * https://issues.apache.org/jira/browse/KAFKA-12833

See KAFKA-10605 and KIP-478.

  was:
The following methods were deprecated in version 2.7:
 * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.StreamsBuilder#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.kstream.Consumed, 
org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.processor.StateStore#init(org.apache.kafka.streams.processor.ProcessorContext,
 org.apache.kafka.streams.processor.StateStore)
 * org.apache.kafka.streams.kstream.KStream.process (multiple, taking the old 
`ProcessorSupplier`)
 * org.apache.kafka.streams.scala.KStream.process (multiple, taking the old 
`ProcessorSupplier`)

The following classes were deprecated:
 * org.apache.kafka.streams.processor.\{Processor, ProcessorSupplier, 
ProcessorContext} (note that `ProcessorContext` is also used by `Transformer` 
which was also deprecated in 3.3 and can be removed, too)
 * Cf https://issues.apache.org/jira/browse/KAFKA-16339 – both tickets should 
be worked on together

 

See KAFKA-10605 and KIP-478.


> Remove Deprecated methods and classes of old Processor API
> --
>
> Key: KAFKA-12829
> URL: https://issues.apache.org/jira/browse/KAFKA-12829
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Josep Prat
>Priority: Blocker
> Fix For: 4.0.0
>
>
> The following methods were deprecated in version 2.7:
>  * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
> org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...)
>  * 
> org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
>  java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
> org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
> java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
>  * 
> 

[jira] [Resolved] (KAFKA-10603) Re-design KStream.process() and K*.transform*() operations

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-10603.
-
Resolution: Fixed

> Re-design KStream.process() and K*.transform*() operations
> --
>
> Key: KAFKA-10603
> URL: https://issues.apache.org/jira/browse/KAFKA-10603
> Project: Kafka
>  Issue Type: New Feature
>Reporter: John Roesler
>Priority: Major
>  Labels: needs-kip
>
> After the implementation of KIP-478, we have the ability to reconsider all 
> these APIs, and maybe just replace them with
> {code:java}
> // KStream
> KStream process(ProcessorSupplier) 
> // KTable
> KTable process(ProcessorSupplier){code}
>  
> but it needs more thought and a KIP for sure.
>  
> This ticket probably supersedes 
> https://issues.apache.org/jira/browse/KAFKA-8396



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16339) Remove Deprecated "transformer" methods and classes

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16339:
---

 Summary: Remove Deprecated "transformer" methods and classes
 Key: KAFKA-16339
 URL: https://issues.apache.org/jira/browse/KAFKA-16339
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Matthias J. Sax
 Fix For: 4.0.0


Cf 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-820%3A+Extend+KStream+process+with+new+Processor+API]
 * KStream#transform
 * KStream#flatTransform
 * KStream#transformValues
 * KStream#flatTransformValues
 * and the corresponding Scala methods

Related to https://issues.apache.org/jira/browse/KAFKA-12829, and both tickets 
should be worked on together.
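
For orientation, a hedged sketch of the application-side migration once these methods
are removed, using the KIP-820 process() / api.Processor replacement; the topic name
and record types below are made up for the example:

{code:java}
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;

// Sketch: what used to be written with KStream#transform() can be written
// with KStream#process() and the typed Processor API from KIP-820.
public class ProcessInsteadOfTransform {

    public static KStream<String, Long> valueLengths(final StreamsBuilder builder) {
        final KStream<String, String> input = builder.stream("input-topic"); // hypothetical topic

        return input.process(() -> new Processor<String, String, String, Long>() {
            private ProcessorContext<String, Long> context;

            @Override
            public void init(final ProcessorContext<String, Long> context) {
                this.context = context;
            }

            @Override
            public void process(final Record<String, String> record) {
                // forward the length of each value downstream
                context.forward(record.withValue((long) record.value().length()));
            }
        });
    }
}
{code}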



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-12829) Remove Deprecated methods and classes of old Processor API

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-12829:

Description: 
The following methods were deprecated in version 2.7:
 * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.StreamsBuilder#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.kstream.Consumed, 
org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.processor.StateStore#init(org.apache.kafka.streams.processor.ProcessorContext,
 org.apache.kafka.streams.processor.StateStore)
 * org.apache.kafka.streams.kstream.KStream.process (multiple, taking the old 
`ProcessorSupplier`)
 * org.apache.kafka.streams.scala.KStream.process (multiple, taking the old 
`ProcessorSupplier`)

The following classes were deprecated:
 * org.apache.kafka.streams.processor.\{Processor, ProcessorSupplier, 
ProcessorContext} (note that `ProcessorContext` is also used by `Transformer` 
which was also deprecated in 3.3 and can be removed, too)
 * Cf https://issues.apache.org/jira/browse/KAFKA-16339 – both tickets should 
be worked on together

 

See KAFKA-10605 and KIP-478.

  was:
The following methods were deprecated in version 2.7:
 * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.StreamsBuilder#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.kstream.Consumed, 
org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.processor.StateStore#init(org.apache.kafka.streams.processor.ProcessorContext,
 org.apache.kafka.streams.processor.StateStore)
 * org.apache.kafka.streams.kstream.KStream.process (multiple, taking the old 
`ProcessorSupplier`)
 * org.apache.kafka.streams.scala.KStream.process (multiple, taking the old 
`ProcessorSupplier`)

The following classes were deprecated:
 * org.apache.kafka.streams.processor.\{Processor, ProcessorSupplier, 
ProcessorContext} (note that `ProcessorContext` is also used by `Transformer` 
which was also deprecated in 3.3 and can be removed, too)

 

See KAFKA-10605 and KIP-478.


> Remove Deprecated methods and classes of old Processor API
> --
>
> Key: KAFKA-12829
> URL: https://issues.apache.org/jira/browse/KAFKA-12829
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Josep Prat
>Priority: Blocker
> Fix For: 4.0.0
>
>
> The following methods were deprecated in version 2.7:
>  * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
> org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...)
>  * 
> org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
>  java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
> org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
> java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
>  * 
> org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
>  java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
> org.apache.kafka.common.serialization.Deserializer, 
> 

[jira] [Updated] (KAFKA-12829) Remove Deprecated methods and classes of old Processor API

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-12829:

Description: 
The following methods were deprecated in version 2.7:
 * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.StreamsBuilder#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.kstream.Consumed, 
org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.processor.StateStore#init(org.apache.kafka.streams.processor.ProcessorContext,
 org.apache.kafka.streams.processor.StateStore)
 * org.apache.kafka.streams.kstream.KStream.process (multiple, taking the old 
`ProcessorSupplier`)
 * org.apache.kafka.streams.scala.KStream.process (multiple, taking the old 
`ProcessorSupplier`)

The following classes were deprecated:
 * org.apache.kafka.streams.processor.\{Processor, ProcessorSupplier, 
ProcessorContext} (note that `ProcessorContext` is also used by `Transformer` 
which was also deprecated in 3.3 and can be removed, too)

 

See KAFKA-10605 and KIP-478.

  was:
The following methods were deprecated in version 2.7:
 * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.StreamsBuilder#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.kstream.Consumed, 
org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.processor.StateStore#init(org.apache.kafka.streams.processor.ProcessorContext,
 org.apache.kafka.streams.processor.StateStore)
 * org.apache.kafka.streams.kstream.KStream.process (multiple, taking the old 
`ProcessorSupplier`)

 * org.apache.kafka.streams.scala.KStream.process (multiple, taking the old 
`ProcessorSupplier`)

The following classes were deprecated:
 * org.apache.kafka.streams.processor.\{Processor, ProcessorSupplier, 
ProcessorContext} (note that `ProcessorContext` is also used by `Transformer` 
which was also deprecated in 3.3 and can be removed, too)

 

See KAFKA-10605 and KIP-478.


> Remove Deprecated methods and classes of old Processor API
> --
>
> Key: KAFKA-12829
> URL: https://issues.apache.org/jira/browse/KAFKA-12829
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Josep Prat
>Priority: Blocker
> Fix For: 4.0.0
>
>
> The following methods were deprecated in version 2.7:
>  * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
> org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...)
>  * 
> org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
>  java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
> org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
> java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
>  * 
> org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
>  java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
> org.apache.kafka.common.serialization.Deserializer, 
> org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
> java.lang.String, 

[jira] [Updated] (KAFKA-12822) Remove Deprecated APIs of Kafka Streams in 4.0

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-12822:

Component/s: streams-test-utils

> Remove Deprecated APIs of Kafka Streams in 4.0
> --
>
> Key: KAFKA-12822
> URL: https://issues.apache.org/jira/browse/KAFKA-12822
> Project: Kafka
>  Issue Type: Task
>  Components: streams, streams-test-utils
>Reporter: Josep Prat
>Priority: Blocker
> Fix For: 4.0.0
>
>
> This is an umbrella ticket that tries to collect all APIs under Kafka Streams 
> that were deprecated after 2.5 and up to 3.5 (given the current release plan 
> to ship 4.0 after 3.8, we don't want to include changes introduced in 3.6 or 
> later releases).
> Each subtask will be focusing on a specific API, so it's easy to discuss if 
> it should be removed by 4.0.0 or maybe even at a later point.
> While KIP-770 was approved for 3.4 and is partially implemented, it was not 
> completed yet, and thus should not be subject to 4.0 release: 
> [https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=186878390] 
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16338) Removed Deprecated configs from StreamsConfig

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16338:
---

 Summary: Removed Deprecated configs from StreamsConfig
 Key: KAFKA-16338
 URL: https://issues.apache.org/jira/browse/KAFKA-16338
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Matthias J. Sax
 Fix For: 5.0.0


* "buffered.records.per.partition" were deprecated via 
[https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=186878390] 
(KIP not fully implemented yet, so move this from the 4.0 into this 5.0 ticket)
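
For context, a hedged sketch of the application-side migration, assuming the
replacement key proposed by KIP-770 ("input.buffer.max.bytes"); this ticket itself
only covers removing the deprecated key:

{code:java}
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

// Sketch (replacement config name assumed from KIP-770): switch from the
// deprecated per-partition record count to the bytes-based input buffer.
public class BufferConfigMigration {

    public static Properties streamsConfig() {
        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");    // hypothetical
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical

        // deprecated, to be removed by this ticket:
        // props.put("buffered.records.per.partition", 1000);

        // assumed replacement per KIP-770:
        props.put("input.buffer.max.bytes", 512L * 1024 * 1024);
        return props;
    }
}
{code}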



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16337) Remove Deprecated APIs of Kafka Streams in 5.0

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16337:
---

 Summary: Remove Deprecated APIs of Kafka Streams in 5.0
 Key: KAFKA-16337
 URL: https://issues.apache.org/jira/browse/KAFKA-16337
 Project: Kafka
  Issue Type: Task
  Components: streams, streams-test-utils
Reporter: Matthias J. Sax
 Fix For: 5.0.0


This is an umbrella ticket that tries to collect all APIs under Kafka Streams 
that were deprecated in 3.6 or later. When the release schedule for 5.0 is 
set, we might need to remove sub-tasks if they don't hit the 1-year 
threshold.

Each subtask will be focusing on a specific API, so it's easy to discuss if it 
should be removed by 5.0.0 or maybe even at a later point.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-12822) Remove Deprecated APIs of Kafka Streams in 4.0

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-12822:

Description: 
This is an umbrella ticket that tries to collect all APIs under Kafka Streams 
that were deprecated after 2.5 and up to 3.5 (given the current release plan to 
ship 4.0 after 3.8, we don't want to include changes introduced in 3.6 or later 
releases).

Each subtask will be focusing on a specific API, so it's easy to discuss if it 
should be removed by 4.0.0 or maybe even at a later point.

While KIP-770 was approved for 3.4 and is partially implemented, it was not 
completed yet, and thus should not be subject to 4.0 release: 
[https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=186878390] 

 

  was:
This is an umbrella ticket that tries to collect all APIs under Kafka Streams 
that were deprecated after 2.5 and up to 3.5 (given the current release plan to 
ship 4.0 after 3.8, we don't want to include changes introduced in 3.6 or later 
releases).

Each subtask will be focusing on a specific API, so it's easy to discuss if it 
should be removed by 4.0.0 or maybe even at a later point.

 


> Remove Deprecated APIs of Kafka Streams in 4.0
> --
>
> Key: KAFKA-12822
> URL: https://issues.apache.org/jira/browse/KAFKA-12822
> Project: Kafka
>  Issue Type: Task
>  Components: streams
>Reporter: Josep Prat
>Priority: Blocker
> Fix For: 4.0.0
>
>
> This is an umbrella ticket that tries to collect all APIs under Kafka Streams 
> that were deprecated after 2.5 and up to 3.5 (given the current release plan 
> to ship 4.0 after 3.8, we don't want to include changes introduced in 3.6 or 
> later releases).
> Each subtask will be focusing on a specific API, so it's easy to discuss if 
> it should be removed by 4.0.0 or maybe even at a later point.
> While KIP-770 was approved for 3.4 and is partially implemented, it was not 
> completed yet, and thus should not be subject to 4.0 release: 
> [https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=186878390] 
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16336) Remove Deprecated metric standby-process-ratio

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16336:
---

 Summary: Remove Deprecated metric standby-process-ratio
 Key: KAFKA-16336
 URL: https://issues.apache.org/jira/browse/KAFKA-16336
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Matthias J. Sax
 Fix For: 4.0.0


Metric "standby-process-ratio" was deprecated in 3.5 release via 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-869%3A+Improve+Streams+State+Restoration+Visibility



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-12822) Remove Deprecated APIs of Kafka Streams in 4.0

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-12822:

Description: 
This is an umbrella ticket that tries to collect all APIs under Kafka Streams 
that were deprecated after 2.5 and up to 3.5 (given the current release plan to 
ship 4.0 after 3.8, we don't want to include changes introduced in 3.6 or later 
releases).

Each subtask will be focusing on a specific API, so it's easy to discuss if it 
should be removed by 4.0.0 or maybe even at a later point.

 

  was:
This is an umbrella ticket that tries to collect all APIs under Kafka Streams 
that were deprecated after 2.5 and up to 3.4 (given the current release plan to 
ship 4.0 after 3.8, we don't want to include changes introduced in the 3.5 release).

Each subtask will be focusing on a specific API, so it's easy to discuss if it 
should be removed by 4.0.0 or maybe even at a later point.

 


> Remove Deprecated APIs of Kafka Streams in 4.0
> --
>
> Key: KAFKA-12822
> URL: https://issues.apache.org/jira/browse/KAFKA-12822
> Project: Kafka
>  Issue Type: Task
>  Components: streams
>Reporter: Josep Prat
>Priority: Blocker
> Fix For: 4.0.0
>
>
> This is an umbrella ticket that tries to collect all APIs under Kafka Streams 
> that were deprecated after 2.5 and up to 3.5 (given the current release plan 
> to ship 4.0 after 3.8, we don't want to include changes introduced in 3.6 or 
> later releases).
> Each subtask will be focusing on a specific API, so it's easy to discuss if 
> it should be removed by 4.0.0 or maybe even at a later point.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-12689) Remove deprecated EOS configs

2024-03-04 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17823353#comment-17823353
 ] 

Matthias J. Sax commented on KAFKA-12689:
-

As commented on https://issues.apache.org/jira/browse/KAFKA-16331, I am not 
sure if we should really remove EOSv1 already.

> Remove deprecated EOS configs
> -
>
> Key: KAFKA-12689
> URL: https://issues.apache.org/jira/browse/KAFKA-12689
> Project: Kafka
>  Issue Type: Task
>  Components: streams
>Reporter: A. Sophie Blee-Goldman
>Priority: Blocker
> Fix For: 4.0.0
>
>
> In 
> [KIP-732|https://cwiki.apache.org/confluence/display/KAFKA/KIP-732%3A+Deprecate+eos-alpha+and+replace+eos-beta+with+eos-v2]
>  we deprecated the EXACTLY_ONCE and EXACTLY_ONCE_BETA configs in 
> StreamsConfig, to be removed in 4.0



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-12822) Remove Deprecated APIs of Kafka Streams in 4.0

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-12822:

Description: 
This is an umbrella ticket that tries to collect all APIs under Kafka Streams 
that were deprecated after 2.5 and up to 3.4 (given the current release plan to 
ship 4.0 after 3.8, we don't want to include changes introduced in the 3.5 release).

Each subtask will be focusing on a specific API, so it's easy to discuss if it 
should be removed by 4.0.0 or maybe even at a later point.

 

  was:
This is an umbrella ticket that tries to collect all APIs under Kafka Streams 
that were deprecated after 2.5 (the current threshold for being removed in 
version 3.0.0).

Each subtask will be focusing on a specific API, so it's easy to discuss if it 
should be removed by 4.0.0 or maybe even at a later point.

 


> Remove Deprecated APIs of Kafka Streams in 4.0
> --
>
> Key: KAFKA-12822
> URL: https://issues.apache.org/jira/browse/KAFKA-12822
> Project: Kafka
>  Issue Type: Task
>  Components: streams
>Reporter: Josep Prat
>Priority: Blocker
> Fix For: 4.0.0
>
>
> This is an umbrella ticket that tries to collect all APIs under Kafka Streams 
> that were deprecated after 2.5 and up to 3.4 (given the current release plan 
> to ship 4.0 after 3.8, we don't want to include changes introduced in the 3.5 
> release).
> Each subtask will be focusing on a specific API, so it's easy to discuss if 
> it should be removed by 4.0.0 or maybe even at a later point.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16335) Remove Deprecated method on StreamPartitioner

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16335:
---

 Summary: Remove Deprecated method on StreamPartitioner
 Key: KAFKA-16335
 URL: https://issues.apache.org/jira/browse/KAFKA-16335
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Matthias J. Sax
 Fix For: 4.0.0


Deprecated in 3.4 release via 
[https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=211883356]
 * StreamPartitioner#partition (singular)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16334) Remove Deprecated command line option from reset tool

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16334:
---

 Summary: Remove Deprecated command line option from reset tool
 Key: KAFKA-16334
 URL: https://issues.apache.org/jira/browse/KAFKA-16334
 Project: Kafka
  Issue Type: Sub-task
  Components: streams, tools
Reporter: Matthias J. Sax
 Fix For: 4.0.0


--bootstrap-server (singular) was deprecated in 3.4 release via 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-865%3A+Support+--bootstrap-server+in+kafka-streams-application-reset]
 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16333) Removed Deprecated methods KTable#join

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16333:
---

 Summary: Removed Deprecated methods KTable#join
 Key: KAFKA-16333
 URL: https://issues.apache.org/jira/browse/KAFKA-16333
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Matthias J. Sax
 Fix For: 4.0.0


KTable#join() methods taking a `Named` parameter got deprecated in 3.1 release 
via https://issues.apache.org/jira/browse/KAFKA-13813 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16332) Remove Deprecated builder methods for Time/Session/Join/SlidingWindows

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16332:

Component/s: streams

> Remove Deprecated builder methods for Time/Session/Join/SlidingWindows
> --
>
> Key: KAFKA-16332
> URL: https://issues.apache.org/jira/browse/KAFKA-16332
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Priority: Blocker
>
> Deprecated in 3.0: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-633%3A+Deprecate+24-hour+Default+Grace+Period+for+Windowed+Operations+in+Streams]
>  
>  * TimeWindows#of
>  * TimeWindows#grace
>  * SessionWindows#with
>  * SessionWindows#grace
>  * JoinWindows#of
>  * JoinWindows#grace
>  * SlidingWindows#withTimeDifferenceAndGrace
> We might want to hold off on cleaning up JoinWindows due to 
> https://issues.apache.org/jira/browse/KAFKA-13813 (open for discussion)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16332) Remove Deprecated builder methods for Time/Session/Join/SlidingWindows

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16332:

Priority: Blocker  (was: Major)

> Remove Deprecated builder methods for Time/Session/Join/SlidingWindows
> --
>
> Key: KAFKA-16332
> URL: https://issues.apache.org/jira/browse/KAFKA-16332
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Matthias J. Sax
>Priority: Blocker
>
> Deprecated in 3.0: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-633%3A+Deprecate+24-hour+Default+Grace+Period+for+Windowed+Operations+in+Streams]
>  
>  * TimeWindows#of
>  * TimeWindows#grace
>  * SessionWindows#with
>  * SessionWindows#grace
>  * JoinWindows#of
>  * JoinWindows#grace
>  * SlidingWindows#withTimeDifferenceAndGrace
> We might want to hold off on cleaning up JoinWindows due to 
> https://issues.apache.org/jira/browse/KAFKA-13813 (open for discussion)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16332) Remove Deprecated builder methods for Time/Session/Join/SlidingWindows

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16332:
---

 Summary: Remove Deprecated builder methods for 
Time/Session/Join/SlidingWindows
 Key: KAFKA-16332
 URL: https://issues.apache.org/jira/browse/KAFKA-16332
 Project: Kafka
  Issue Type: Sub-task
Reporter: Matthias J. Sax


Deprecated in 3.0: 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-633%3A+Deprecate+24-hour+Default+Grace+Period+for+Windowed+Operations+in+Streams]
 
 * TimeWindows#of
 * TimeWindows#grace
 * SessionWindows#with
 * SessionWindows#grace
 * JoinWindows#of
 * JoinWindows#grace
 * SlidingWindows#withTimeDifferenceAndGrace

We might want to hold off on cleaning up JoinWindows due to 
https://issues.apache.org/jira/browse/KAFKA-13813 (open for discussion)
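
For orientation, a hedged sketch of the corresponding migration onto the
explicit-grace builders that KIP-633 introduced as replacements (durations are
made up for the example):

{code:java}
import java.time.Duration;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.SessionWindows;
import org.apache.kafka.streams.kstream.TimeWindows;

// Sketch of moving off the deprecated builders listed above.
public class WindowBuilderMigration {

    public static void examples() {
        // instead of TimeWindows.of(Duration.ofMinutes(5)).grace(Duration.ofMinutes(1)):
        final TimeWindows timeWindows =
            TimeWindows.ofSizeAndGrace(Duration.ofMinutes(5), Duration.ofMinutes(1));

        // instead of SessionWindows.with(Duration.ofMinutes(10)):
        final SessionWindows sessionWindows =
            SessionWindows.ofInactivityGapAndGrace(Duration.ofMinutes(10), Duration.ofMinutes(1));

        // instead of JoinWindows.of(Duration.ofMinutes(5)).grace(Duration.ofMinutes(1)):
        final JoinWindows joinWindows =
            JoinWindows.ofTimeDifferenceAndGrace(Duration.ofMinutes(5), Duration.ofMinutes(1));
    }
}
{code}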



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16332) Remove Deprecated builder methods for Time/Session/Join/SlidingWindows

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16332:

Fix Version/s: 4.0.0

> Remove Deprecated builder methods for Time/Session/Join/SlidingWindows
> --
>
> Key: KAFKA-16332
> URL: https://issues.apache.org/jira/browse/KAFKA-16332
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Priority: Blocker
> Fix For: 4.0.0
>
>
> Deprecated in 3.0: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-633%3A+Deprecate+24-hour+Default+Grace+Period+for+Windowed+Operations+in+Streams]
>  
>  * TimeWindows#of
>  * TimeWindows#grace
>  * SessionWindows#with
>  * SessionWindows#grace
>  * JoinWindows#of
>  * JoinWindows#grace
>  * SlidingWindows#withTimeDifferenceAndGrace
> We might want to hold off on cleaning up JoinWindows due to 
> https://issues.apache.org/jira/browse/KAFKA-13813 (open for discussion)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16331) Remove Deprecated EOSv1

2024-03-04 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17823337#comment-17823337
 ] 

Matthias J. Sax commented on KAFKA-16331:
-

While I think we should for sure remove Producer#sendOffsetsToTransaction, I 
am not sure if we should remove EOSv1 already. We might still want to add 
"multi-cluster support" and might need EOSv1 for this case.

> Remove Deprecated EOSv1
> ---
>
> Key: KAFKA-16331
> URL: https://issues.apache.org/jira/browse/KAFKA-16331
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Priority: Blocker
> Fix For: 4.0.0
>
>
> EOSv1 was deprecated in AK 3.0 via 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-732%3A+Deprecate+eos-alpha+and+replace+eos-beta+with+eos-v2]
>  * remove config
>  * remove Producer#sendOffsetsToTransaction
>  * cleanup code



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16331) Remove Deprecated EOSv1

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16331:
---

 Summary: Remove Deprecated EOSv1
 Key: KAFKA-16331
 URL: https://issues.apache.org/jira/browse/KAFKA-16331
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Matthias J. Sax
 Fix For: 4.0.0


EOSv1 was deprecated in AK 3.0 via 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-732%3A+Deprecate+eos-alpha+and+replace+eos-beta+with+eos-v2]
 * remove config
 * remove Producer#sendOffsetsToTransaction
 * cleanup code
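
For orientation, a hedged sketch of the client-side changes this removal implies,
assuming the usual KIP-732/KIP-447 replacements (exactly_once_v2 and the
ConsumerGroupMetadata overload):

{code:java}
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.streams.StreamsConfig;

// Sketch: what applications need to change before EOSv1 is removed.
public class EosV2Migration {

    public static void streamsConfig(final Properties props) {
        // instead of the deprecated StreamsConfig.EXACTLY_ONCE / EXACTLY_ONCE_BETA:
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);
    }

    public static void commitOffsets(final Producer<byte[], byte[]> producer,
                                     final Consumer<byte[], byte[]> consumer,
                                     final Map<TopicPartition, OffsetAndMetadata> offsets) {
        // instead of the deprecated producer.sendOffsetsToTransaction(offsets, "group-id"):
        producer.sendOffsetsToTransaction(offsets, consumer.groupMetadata());
    }
}
{code}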



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16328) Remove Deprecated config from StreamsConfig

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16328:

Description: 
* #retries was deprecated in AK 2.7 – already unused – via 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-572%3A+Improve+timeouts+and+retries+in+Kafka+Streams]
 * "inner window serde" configs were deprecated in AK 3.0 via 
[https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=177047930] 

  was:Deprecated in AK 2.7 – already unused – via 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-572%3A+Improve+timeouts+and+retries+in+Kafka+Streams


> Remove Deprecated config from StreamsConfig
> ---
>
> Key: KAFKA-16328
> URL: https://issues.apache.org/jira/browse/KAFKA-16328
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Priority: Blocker
> Fix For: 4.0.0
>
>
> * #retries was deprecated in AK 2.7 – already unused – via 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-572%3A+Improve+timeouts+and+retries+in+Kafka+Streams]
>  * "inner window serde" configs were deprecated in AK 3.0 via 
> [https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=177047930] 
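
For context, a hedged sketch of the migration for the "retries" part, assuming the
KIP-572 replacement task.timeout.ms (StreamsConfig.TASK_TIMEOUT_MS_CONFIG):

{code:java}
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

// Sketch (assuming the KIP-572 replacement): instead of the unused "retries"
// config, bound how long a task may keep retrying via task.timeout.ms.
public class RetriesConfigMigration {

    public static Properties streamsConfig() {
        final Properties props = new Properties();
        // deprecated and ignored by Streams since KIP-572:
        // props.put(StreamsConfig.RETRIES_CONFIG, 5);

        // replacement: overall task-level timeout, e.g. 5 minutes
        props.put(StreamsConfig.TASK_TIMEOUT_MS_CONFIG, 300_000L);
        return props;
    }
}
{code}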



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16328) Remove Deprecated config from StreamsConfig

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16328:

Summary: Remove Deprecated config from StreamsConfig  (was: Remove 
Deprecated config StreamsConfig#retries)

> Remove Deprecated config from StreamsConfig
> ---
>
> Key: KAFKA-16328
> URL: https://issues.apache.org/jira/browse/KAFKA-16328
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Priority: Blocker
> Fix For: 4.0.0
>
>
> Deprecated in AK 2.7 – already unused – via 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-572%3A+Improve+timeouts+and+retries+in+Kafka+Streams



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16330) Remove Deprecated methods/variables from TaskId

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16330:

Description: 
Cf [https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=181306557]

 

This ticket relates to https://issues.apache.org/jira/browse/KAFKA-16329 and 
both should be worked on together.

  was:Cf 
[https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=181306557]


> Remove Deprecated methods/variables from TaskId
> ---
>
> Key: KAFKA-16330
> URL: https://issues.apache.org/jira/browse/KAFKA-16330
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Priority: Blocker
> Fix For: 4.0.0
>
>
> Cf 
> [https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=181306557]
>  
> This ticket relates to https://issues.apache.org/jira/browse/KAFKA-16329 and 
> both should be worked on together.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16329) Remove Deprecated Task/ThreadMetadata classes and related methods

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16329:

Description: 
Deprecated in AK 3.0 via 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-744%3A+Migrate+TaskMetadata+and+ThreadMetadata+to+an+interface+with+internal+implementation]
 
 * org.apache.kafka.streams.processor.TaskMetadata
 * org.apache.kafka.streams.processor.ThreadMetadata
 * org.apache.kafka.streams.KafkaStreams#localThreadsMetadata
 * org.apache.kafka.streams.state.StreamsMetadata
 * org.apache.kafka.streams.KafkaStreams#allMetadata
 * org.apache.kafka.streams.KafkaStreams#allMetadataForStore

This is related to https://issues.apache.org/jira/browse/KAFKA-16330 and both 
tickets should be worked on together.

  was:
Deprecated in AK 3.0 via 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-744%3A+Migrate+TaskMetadata+and+ThreadMetadata+to+an+interface+with+internal+implementation]
 
 * org.apache.kafka.streams.processor.TaskMetadata
 * org.apache.kafka.streams.processor.ThreadMetadata
 * org.apache.kafka.streams.KafkaStreams#localThreadsMetadata
 * org.apache.kafka.streams.state.StreamsMetadata
 * org.apache.kafka.streams.KafkaStreams#allMetadata
 * org.apache.kafka.streams.KafkaStreams#allMetadataForStore

 


> Remove Deprecated Task/ThreadMetadata classes and related methods
> -
>
> Key: KAFKA-16329
> URL: https://issues.apache.org/jira/browse/KAFKA-16329
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Priority: Blocker
> Fix For: 4.0.0
>
>
> Deprecated in AK 3.0 via 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-744%3A+Migrate+TaskMetadata+and+ThreadMetadata+to+an+interface+with+internal+implementation]
>  
>  * org.apache.kafka.streams.processor.TaskMetadata
>  * org.apache.kafka.streams.processor.ThreadMetadata
>  * org.apache.kafka.streams.KafkaStreams#localThreadsMetadata
>  * org.apache.kafka.streams.state.StreamsMetadata
>  * org.apache.kafka.streams.KafkaStreams#allMetadata
>  * org.apache.kafka.streams.KafkaStreams#allMetadataForStore
> This is related to https://issues.apache.org/jira/browse/KAFKA-16330 and both 
> tickets should be worked on together.
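
For orientation, a hedged sketch of moving to the replacement APIs, assuming the
usual KIP-744 successors such as KafkaStreams#metadataForLocalThreads() (not part
of the ticket text):

{code:java}
import java.util.Collection;
import java.util.Set;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsMetadata;
import org.apache.kafka.streams.ThreadMetadata;

// Sketch (assumed KIP-744 replacements): use the interfaces in
// org.apache.kafka.streams instead of the deprecated classes listed above.
public class MetadataApiMigration {

    public static void inspect(final KafkaStreams streams, final String storeName) {
        // replaces KafkaStreams#localThreadsMetadata()
        final Set<ThreadMetadata> threads = streams.metadataForLocalThreads();

        // replaces KafkaStreams#allMetadata()
        final Collection<StreamsMetadata> clients = streams.metadataForAllStreamsClients();

        // replaces KafkaStreams#allMetadataForStore(storeName)
        final Collection<StreamsMetadata> storeHosts = streams.streamsMetadataForStore(storeName);

        threads.forEach(t -> System.out.println(t.threadName()));
        clients.forEach(m -> System.out.println(m.hostInfo()));
        storeHosts.forEach(m -> System.out.println(m.stateStoreNames()));
    }
}
{code}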



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16330) Remove Deprecated methods/variables from TaskId

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16330:
---

 Summary: Remove Deprecated methods/variables from TaskId
 Key: KAFKA-16330
 URL: https://issues.apache.org/jira/browse/KAFKA-16330
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Matthias J. Sax
 Fix For: 4.0.0


Cf [https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=181306557]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16329) Remove Deprecated Task/ThreadMetadata classes and related methods

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16329:

Description: 
Deprecated in AK 3.0 via 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-744%3A+Migrate+TaskMetadata+and+ThreadMetadata+to+an+interface+with+internal+implementation]
 
 * org.apache.kafka.streams.processor.TaskMetadata
 * org.apache.kafka.streams.processor.ThreadMetadata
 * org.apache.kafka.streams.KafkaStreams#localThreadsMetadata
 * org.apache.kafka.streams.state.StreamsMetadata
 * org.apache.kafka.streams.KafkaStreams#allMetadata
 * org.apache.kafka.streams.KafkaStreams#allMetadataForStore

 

  was:Deprecated in AK 3.0 via 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-744%3A+Migrate+TaskMetadata+and+ThreadMetadata+to+an+interface+with+internal+implementation]
 


> Remove Deprecated Task/ThreadMetadata classes and related methods
> -
>
> Key: KAFKA-16329
> URL: https://issues.apache.org/jira/browse/KAFKA-16329
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Priority: Blocker
> Fix For: 4.0.0
>
>
> Deprecated in AK 3.0 via 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-744%3A+Migrate+TaskMetadata+and+ThreadMetadata+to+an+interface+with+internal+implementation]
>  
>  * org.apache.kafka.streams.processor.TaskMetadata
>  * org.apache.kafka.streams.processor.ThreadMetadata
>  * org.apache.kafka.streams.KafkaStreams#localThreadsMetadata
>  * org.apache.kafka.streams.state.StreamsMetadata
>  * org.apache.kafka.streams.KafkaStreams#allMetadata
>  * org.apache.kafka.streams.KafkaStreams#allMetadataForStore
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16329) Remove Deprecated Task/ThreadMetadata classes and related methods

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16329:
---

 Summary: Remove Deprecated Task/ThreadMetadata classes and related 
methods
 Key: KAFKA-16329
 URL: https://issues.apache.org/jira/browse/KAFKA-16329
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Matthias J. Sax
 Fix For: 4.0.0


Deprecated in AK 3.0 via 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-744%3A+Migrate+TaskMetadata+and+ThreadMetadata+to+an+interface+with+internal+implementation]
 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-12829) Remove Deprecated methods and classes of old Processor API

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-12829:

Description: 
The following methods were deprecated in version 2.7:
 * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.StreamsBuilder#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.kstream.Consumed, 
org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.processor.StateStore#init(org.apache.kafka.streams.processor.ProcessorContext,
 org.apache.kafka.streams.processor.StateStore)
 * org.apache.kafka.streams.kstream.KStream.process (multiple, taking the old 
`ProcessorSupplier`)

 * org.apache.kafka.streams.scala.KStream.process (multiple, taking the old 
`ProcessorSupplier`)

The following classes were deprecated:
 * org.apache.kafka.streams.processor.\{Processor, ProcessorSupplier, 
ProcessorContext} (note that `ProcessorContext` is also used by `Transformer` 
which was also deprecated in 3.3 and can be removed, too)

 

See KAFKA-10605 and KIP-478.

  was:
The following methods were deprecated in version 2.7:
 * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.StreamsBuilder#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.kstream.Consumed, 
org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.processor.StateStore#init(org.apache.kafka.streams.processor.ProcessorContext,
 org.apache.kafka.streams.processor.StateStore)
 * org.apache.kafka.streams.kstream.KStream.process (multiple, taking the old 
`ProcessorSupplier`)

 * 
org.apache.kafka.streams.scala.KStream.process (multiple, taking the old 
`ProcessorSupplier`)

The following classes were deprecated:
 * org.apache.kafka.streams.processor.\{Processor, ProcessorSupplier, 
ProcessorContext}

 

See KAFKA-10605 and KIP-478.


> Remove Deprecated methods and classes of old Processor API
> --
>
> Key: KAFKA-12829
> URL: https://issues.apache.org/jira/browse/KAFKA-12829
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Josep Prat
>Priority: Blocker
> Fix For: 4.0.0
>
>
> The following methods were deprecated in version 2.7:
>  * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
> org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...)
>  * 
> org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
>  java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
> org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
> java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
>  * 
> org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
>  java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
> org.apache.kafka.common.serialization.Deserializer, 
> org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
> java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
>  * 
> 
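
For context, a hedged sketch of the migration away from the deprecated `Topology#addProcessor` overload listed above, using the typed `org.apache.kafka.streams.processor.api` package from KIP-478; the processor class and topic names are made up for illustration.

{code:java}
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.ProcessorSupplier;
import org.apache.kafka.streams.processor.api.Record;

public class NewPapiExample {

    // Typed processor from the org.apache.kafka.streams.processor.api package (KIP-478).
    static class UppercaseProcessor implements Processor<String, String, String, String> {
        private ProcessorContext<String, String> context;

        @Override
        public void init(final ProcessorContext<String, String> context) {
            this.context = context;
        }

        @Override
        public void process(final Record<String, String> record) {
            context.forward(record.withValue(record.value().toUpperCase()));
        }
    }

    public static void main(final String[] args) {
        final Topology topology = new Topology();
        topology.addSource("source", "input-topic");

        // The typed supplier picks the non-deprecated addProcessor overload; the overload
        // taking the old org.apache.kafka.streams.processor.ProcessorSupplier is the one
        // listed for removal above.
        final ProcessorSupplier<String, String, String, String> supplier = UppercaseProcessor::new;
        topology.addProcessor("uppercase", supplier, "source");

        topology.addSink("sink", "output-topic", "uppercase");
        System.out.println(topology.describe());
    }
}
{code}
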

[jira] [Updated] (KAFKA-12829) Remove Deprecated methods and classes of old Processor API

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-12829:

Description: 
The following methods were deprecated in version 2.7:
 * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.StreamsBuilder#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.kstream.Consumed, 
org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.processor.StateStore#init(org.apache.kafka.streams.processor.ProcessorContext,
 org.apache.kafka.streams.processor.StateStore)
 * org.apache.kafka.streams.kstream.KStream.process (multiple, taking the old 
`ProcessorSupplier`)

 * 
org.apache.kafka.streams.scala.KStream.process (multiple, taking the old 
`ProcessorSupplier`)

The following classes were deprecated:
 * org.apache.kafka.streams.processor.\{Processor, ProcessorSupplier, 
ProcessorContext}

 

See KAFKA-10605 and KIP-478.

  was:
The following methods were deprecated in version 2.7:
 * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.StreamsBuilder#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.kstream.Consumed, 
org.apache.kafka.streams.processor.ProcessorSupplier)

 

See KAFKA-10605 and KIP-478.


> Remove Deprecated methods and classes of old Processor API
> --
>
> Key: KAFKA-12829
> URL: https://issues.apache.org/jira/browse/KAFKA-12829
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Josep Prat
>Priority: Blocker
> Fix For: 4.0.0
>
>
> The following methods were deprecated in version 2.7:
>  * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
> org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...)
>  * 
> org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
>  java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
> org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
> java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
>  * 
> org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
>  java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
> org.apache.kafka.common.serialization.Deserializer, 
> org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
> java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
>  * 
> org.apache.kafka.streams.StreamsBuilder#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
>  java.lang.String, org.apache.kafka.streams.kstream.Consumed, 
> org.apache.kafka.streams.processor.ProcessorSupplier)
>  * 
> org.apache.kafka.streams.processor.StateStore#init(org.apache.kafka.streams.processor.ProcessorContext,
>  org.apache.kafka.streams.processor.StateStore)
>  * org.apache.kafka.streams.kstream.KStream.process (multiple, taking the old 
> `ProcessorSupplier`)
>  * 
> org.apache.kafka.streams.scala.KStream.process (multiple, taking the old 
> `ProcessorSupplier`)
> The following classes were deprecated:
>  * 

[jira] [Resolved] (KAFKA-12831) Remove Deprecated method StateStore#init

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-12831.
-
Resolution: Fixed

> Remove Deprecated method StateStore#init
> 
>
> Key: KAFKA-12831
> URL: https://issues.apache.org/jira/browse/KAFKA-12831
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Josep Prat
>Priority: Blocker
> Fix For: 4.0.0
>
>
> The method 
> org.apache.kafka.streams.processor.StateStore#init(org.apache.kafka.streams.processor.ProcessorContext,
>  org.apache.kafka.streams.processor.StateStore) was deprecated in version 2.7
>  
> See KAFKA-10562 and KIP-478
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
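
For context, a minimal custom-store sketch built against the `StateStoreContext` overload from KIP-478 that replaces the removed method; the store is a deliberately empty stand-in, and the old overload is kept only so the sketch compiles on release lines where it is still part of the interface.

{code:java}
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.StateStore;
import org.apache.kafka.streams.processor.StateStoreContext;

public class NoOpStore implements StateStore {
    private volatile boolean open = false;

    @Override
    public String name() {
        return "no-op-store";
    }

    // New registration path from KIP-478; this is the overload to implement going forward.
    @Override
    public void init(final StateStoreContext context, final StateStore root) {
        context.register(root, (key, value) -> { /* nothing to restore in this sketch */ });
        open = true;
    }

    // Old overload, the one this ticket removes; kept here only for older release lines
    // where it is still declared on the interface.
    @Deprecated
    public void init(final ProcessorContext context, final StateStore root) {
        context.register(root, (key, value) -> { });
        open = true;
    }

    @Override
    public void flush() { }

    @Override
    public void close() {
        open = false;
    }

    @Override
    public boolean persistent() {
        return false;
    }

    @Override
    public boolean isOpen() {
        return open;
    }
}
{code}
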


[jira] [Updated] (KAFKA-12829) Remove Deprecated methods and classes of old Processor API

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-12829:

Description: 
The following methods were deprecated in version 2.7:
 * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.StreamsBuilder#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.kstream.Consumed, 
org.apache.kafka.streams.processor.ProcessorSupplier)

 

See KAFKA-10605 and KIP-478.

  was:
The following methods were deprecated in version 2.7:
 * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...) 
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
 * 
org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
 java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
org.apache.kafka.common.serialization.Deserializer, 
org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier) 

 

See KAFKA-10605 and KIP-478.


> Remove Deprecated methods and classes of old Processor API
> --
>
> Key: KAFKA-12829
> URL: https://issues.apache.org/jira/browse/KAFKA-12829
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Josep Prat
>Priority: Blocker
> Fix For: 4.0.0
>
>
> The following methods were deprecated in version 2.7:
>  * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
> org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...)
>  * 
> org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
>  java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
> org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
> java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
>  * 
> org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
>  java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
> org.apache.kafka.common.serialization.Deserializer, 
> org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
> java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
>  * 
> org.apache.kafka.streams.StreamsBuilder#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
>  java.lang.String, org.apache.kafka.streams.kstream.Consumed, 
> org.apache.kafka.streams.processor.ProcessorSupplier)
>  
> See KAFKA-10605 and KIP-478.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-12825) Remove Deprecated method StreamsBuilder#addGlobalStore

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-12825.
-
Resolution: Fixed

> Remove Deprecated method StreamsBuilder#addGlobalStore
> --
>
> Key: KAFKA-12825
> URL: https://issues.apache.org/jira/browse/KAFKA-12825
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Josep Prat
>Priority: Blocker
> Fix For: 4.0.0
>
>
> Methods:
> org.apache.kafka.streams.scala.StreamsBuilder#addGlobalStore
> org.apache.kafka.streams.StreamsBuilder#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
>  java.lang.String, org.apache.kafka.streams.kstream.Consumed, 
> org.apache.kafka.streams.processor.ProcessorSupplier)
> were deprecated in 2.7
>  
> See KAFKA-10379 and KIP-478



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
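
For context, a hedged sketch of the surviving `StreamsBuilder#addGlobalStore` variant that takes the typed `api.ProcessorSupplier` (KIP-478); store name, topic, and serdes are illustrative.

{code:java}
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.ProcessorSupplier;
import org.apache.kafka.streams.processor.api.Record;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;

public class GlobalStoreExample {

    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();

        final StoreBuilder<KeyValueStore<String, String>> storeBuilder =
            Stores.keyValueStoreBuilder(
                    Stores.inMemoryKeyValueStore("global-store"),
                    Serdes.String(),
                    Serdes.String())
                .withLoggingDisabled();   // global stores must not have a changelog topic

        // State-update processor using the typed api.ProcessorSupplier overload (KIP-478);
        // the removed variant is the one taking the old processor.ProcessorSupplier.
        final ProcessorSupplier<String, String, Void, Void> stateUpdateSupplier =
            () -> new Processor<String, String, Void, Void>() {
                private KeyValueStore<String, String> store;

                @Override
                public void init(final ProcessorContext<Void, Void> context) {
                    store = context.getStateStore("global-store");
                }

                @Override
                public void process(final Record<String, String> record) {
                    store.put(record.key(), record.value());
                }
            };

        builder.addGlobalStore(
            storeBuilder,
            "global-topic",
            Consumed.with(Serdes.String(), Serdes.String()),
            stateUpdateSupplier);

        System.out.println(builder.build().describe());
    }
}
{code}
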


[jira] [Updated] (KAFKA-12829) Remove Deprecated methods and classes of old Processor API

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-12829:

Summary: Remove Deprecated methods and classes of old Processor API  (was: 
Remove Deprecated methods under Topology)

> Remove Deprecated methods and classes of old Processor API
> --
>
> Key: KAFKA-12829
> URL: https://issues.apache.org/jira/browse/KAFKA-12829
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Josep Prat
>Priority: Blocker
> Fix For: 4.0.0
>
>
> The following methods were deprecated in version 2.7:
>  * org.apache.kafka.streams.Topology#addProcessor(java.lang.String, 
> org.apache.kafka.streams.processor.ProcessorSupplier, java.lang.String...) 
>  * 
> org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
>  java.lang.String, org.apache.kafka.common.serialization.Deserializer, 
> org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
> java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier)
>  * 
> org.apache.kafka.streams.Topology#addGlobalStore(org.apache.kafka.streams.state.StoreBuilder,
>  java.lang.String, org.apache.kafka.streams.processor.TimestampExtractor, 
> org.apache.kafka.common.serialization.Deserializer, 
> org.apache.kafka.common.serialization.Deserializer, java.lang.String, 
> java.lang.String, org.apache.kafka.streams.processor.ProcessorSupplier) 
>  
> See KAFKA-10605 and KIP-478.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16328) Remove Deprecated config StreamsConfig#retries

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16328:

Summary: Remove Deprecated config StreamsConfig#retries  (was: Remove 
deprecated config StreamsConfig#retries)

> Remove Deprecated config StreamsConfig#retries
> --
>
> Key: KAFKA-16328
> URL: https://issues.apache.org/jira/browse/KAFKA-16328
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Priority: Blocker
> Fix For: 4.0.0
>
>
> Deprecated in AK 2.7 – already unused – via 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-572%3A+Improve+timeouts+and+retries+in+Kafka+Streams



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
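
For context, a hedged sketch of moving off the removed setting: per KIP-572 the relevant bound is `task.timeout.ms` (`StreamsConfig.TASK_TIMEOUT_MS_CONFIG`); the application id, bootstrap servers, and the 5-minute value below are placeholders.

{code:java}
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class RetriesConfigExample {
    public static void main(final String[] args) {
        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "example-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        // Instead of the removed, already-ignored "retries" setting, bound how long
        // Streams keeps retrying a failed task-level operation (KIP-572).
        props.put(StreamsConfig.TASK_TIMEOUT_MS_CONFIG, 300_000L);

        System.out.println(new StreamsConfig(props).getLong(StreamsConfig.TASK_TIMEOUT_MS_CONFIG));
    }
}
{code}
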


[jira] [Created] (KAFKA-16328) Remove deprecated config StreamsConfig#retries

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16328:
---

 Summary: Remove deprecated config StreamsConfig#retries
 Key: KAFKA-16328
 URL: https://issues.apache.org/jira/browse/KAFKA-16328
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Matthias J. Sax
 Fix For: 4.0.0


Deprecated in AK 2.7 – already unused – via 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-572%3A+Improve+timeouts+and+retries+in+Kafka+Streams



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16327) Remove Deprecated variable StreamsConfig#TOPOLOGY_OPTIMIZATION

2024-03-04 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16327:
---

 Summary: Remove Deprecated variable 
StreamsConfig#TOPOLOGY_OPTIMIZATION 
 Key: KAFKA-16327
 URL: https://issues.apache.org/jira/browse/KAFKA-16327
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Matthias J. Sax


Deprecated in 2.7 release via 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-626%3A+Rename+StreamsConfig+config+variable+name]
 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
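
For context, a small sketch of the KIP-626 rename this removal completes, using the surviving `StreamsConfig.TOPOLOGY_OPTIMIZATION_CONFIG` constant; application id and bootstrap servers are placeholders.

{code:java}
import java.util.Properties;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class TopologyOptimizationConfigExample {
    public static void main(final String[] args) {
        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "example-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        // Renamed constant from KIP-626; the deprecated StreamsConfig.TOPOLOGY_OPTIMIZATION
        // variable is the one being removed here. Both point at "topology.optimization".
        props.put(StreamsConfig.TOPOLOGY_OPTIMIZATION_CONFIG, StreamsConfig.OPTIMIZE);

        // Optimizations are applied when the topology is built with the config.
        final StreamsBuilder builder = new StreamsBuilder();
        System.out.println(builder.build(props).describe());
    }
}
{code}
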


[jira] [Updated] (KAFKA-16327) Remove Deprecated variable StreamsConfig#TOPOLOGY_OPTIMIZATION

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16327:

Priority: Blocker  (was: Minor)

> Remove Deprecated variable StreamsConfig#TOPOLOGY_OPTIMIZATION 
> ---
>
> Key: KAFKA-16327
> URL: https://issues.apache.org/jira/browse/KAFKA-16327
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Priority: Blocker
>
> Deprecated in 2.7 release via 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-626%3A+Rename+StreamsConfig+config+variable+name]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16327) Remove Deprecated variable StreamsConfig#TOPOLOGY_OPTIMIZATION

2024-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16327:

Fix Version/s: 4.0.0

> Remove Deprecated variable StreamsConfig#TOPOLOGY_OPTIMIZATION 
> ---
>
> Key: KAFKA-16327
> URL: https://issues.apache.org/jira/browse/KAFKA-16327
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Priority: Blocker
> Fix For: 4.0.0
>
>
> Deprecated in 2.7 release via 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-626%3A+Rename+StreamsConfig+config+variable+name]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-15797) Flaky test EosV2UpgradeIntegrationTest.shouldUpgradeFromEosAlphaToEosV2[true]

2024-02-29 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-15797:
---

Assignee: Matthias J. Sax

> Flaky test EosV2UpgradeIntegrationTest.shouldUpgradeFromEosAlphaToEosV2[true] 
> --
>
> Key: KAFKA-15797
> URL: https://issues.apache.org/jira/browse/KAFKA-15797
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Reporter: Justine Olshan
>Assignee: Matthias J. Sax
>Priority: Major
>  Labels: flaky-test
>
> I found two recent failures:
> [https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-14629/22/testReport/junit/org.apache.kafka.streams.integration/EosV2UpgradeIntegrationTest/Build___JDK_8_and_Scala_2_12___shouldUpgradeFromEosAlphaToEosV2_true_/]
> [https://ci-builds.apache.org/job/Kafka/job/kafka/job/trunk/2365/testReport/junit/org.apache.kafka.streams.integration/EosV2UpgradeIntegrationTest/Build___JDK_21_and_Scala_2_13___shouldUpgradeFromEosAlphaToEosV2_true__2/]
>  
> Output generally looks like:
> {code:java}
> java.lang.AssertionError: Did not receive all 138 records from topic 
> multiPartitionOutputTopic within 6 ms, currently accumulated data is 
> [KeyValue(0, 0), KeyValue(0, 1), KeyValue(0, 3), KeyValue(0, 6), KeyValue(0, 
> 10), KeyValue(0, 15), KeyValue(0, 21), KeyValue(0, 28), KeyValue(0, 36), 
> KeyValue(0, 45), KeyValue(0, 55), KeyValue(0, 66), KeyValue(0, 78), 
> KeyValue(0, 91), KeyValue(0, 55), KeyValue(0, 66), KeyValue(0, 78), 
> KeyValue(0, 91), KeyValue(0, 105), KeyValue(0, 120), KeyValue(0, 136), 
> KeyValue(0, 153), KeyValue(0, 171), KeyValue(0, 190), KeyValue(3, 0), 
> KeyValue(3, 1), KeyValue(3, 3), KeyValue(3, 6), KeyValue(3, 10), KeyValue(3, 
> 15), KeyValue(3, 21), KeyValue(3, 28), KeyValue(3, 36), KeyValue(3, 45), 
> KeyValue(3, 55), KeyValue(3, 66), KeyValue(3, 78), KeyValue(3, 91), 
> KeyValue(3, 105), KeyValue(3, 120), KeyValue(3, 136), KeyValue(3, 153), 
> KeyValue(3, 171), KeyValue(3, 190), KeyValue(3, 190), KeyValue(3, 210), 
> KeyValue(3, 231), KeyValue(3, 253), KeyValue(3, 276), KeyValue(3, 300), 
> KeyValue(3, 325), KeyValue(3, 351), KeyValue(3, 378), KeyValue(3, 406), 
> KeyValue(3, 435), KeyValue(1, 0), KeyValue(1, 1), KeyValue(1, 3), KeyValue(1, 
> 6), KeyValue(1, 10), KeyValue(1, 15), KeyValue(1, 21), KeyValue(1, 28), 
> KeyValue(1, 36), KeyValue(1, 45), KeyValue(1, 55), KeyValue(1, 66), 
> KeyValue(1, 78), KeyValue(1, 91), KeyValue(1, 105), KeyValue(1, 120), 
> KeyValue(1, 136), KeyValue(1, 153), KeyValue(1, 171), KeyValue(1, 190), 
> KeyValue(1, 120), KeyValue(1, 136), KeyValue(1, 153), KeyValue(1, 171), 
> KeyValue(1, 190), KeyValue(1, 210), KeyValue(1, 231), KeyValue(1, 253), 
> KeyValue(1, 276), KeyValue(1, 300), KeyValue(1, 325), KeyValue(1, 351), 
> KeyValue(1, 378), KeyValue(1, 406), KeyValue(1, 435), KeyValue(2, 0), 
> KeyValue(2, 1), KeyValue(2, 3), KeyValue(2, 6), KeyValue(2, 10), KeyValue(2, 
> 15), KeyValue(2, 21), KeyValue(2, 28), KeyValue(2, 36), KeyValue(2, 45), 
> KeyValue(2, 55), KeyValue(2, 66), KeyValue(2, 78), KeyValue(2, 91), 
> KeyValue(2, 105), KeyValue(2, 55), KeyValue(2, 66), KeyValue(2, 78), 
> KeyValue(2, 91), KeyValue(2, 105), KeyValue(2, 120), KeyValue(2, 136), 
> KeyValue(2, 153), KeyValue(2, 171), KeyValue(2, 190), KeyValue(2, 210), 
> KeyValue(2, 231), KeyValue(2, 253), KeyValue(2, 276), KeyValue(2, 300), 
> KeyValue(2, 325), KeyValue(2, 351), KeyValue(2, 378), KeyValue(2, 406), 
> KeyValue(0, 210), KeyValue(0, 231), KeyValue(0, 253), KeyValue(0, 276), 
> KeyValue(0, 300), KeyValue(0, 325), KeyValue(0, 351), KeyValue(0, 378), 
> KeyValue(0, 406), KeyValue(0, 435)] Expected: is a value equal to or greater 
> than <138> but: <134> was less than <138>{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16316) Make the restore behavior of GlobalKTables with custom processors configurable

2024-02-29 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16316:

Labels: needs-kip  (was: )

> Make the restore behavior of GlobalKTables with custom processors 
> configurable
> ---
>
> Key: KAFKA-16316
> URL: https://issues.apache.org/jira/browse/KAFKA-16316
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Walker Carlson
>Assignee: Walker Carlson
>Priority: Major
>  Labels: needs-kip
>
> Take the change implemented in 
> https://issues.apache.org/jira/browse/KAFKA-7663 and make it optional by 
> adding a couple of methods to the API



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-9062) Handle stalled writes to RocksDB

2024-02-26 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820819#comment-17820819
 ] 

Matthias J. Sax commented on KAFKA-9062:


Given that bulk loading was disabled, should we close this ticket?

> Handle stalled writes to RocksDB
> 
>
> Key: KAFKA-9062
> URL: https://issues.apache.org/jira/browse/KAFKA-9062
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: A. Sophie Blee-Goldman
>Priority: Major
>  Labels: new-streams-runtime-should-fix
>
> RocksDB may stall writes at times when background compactions or flushes are 
> having trouble keeping up. This means we can effectively end up blocking 
> indefinitely during a StateStore#put call within Streams, and may get kicked 
> from the group if the throttling does not ease up within the max poll 
> interval.
> Example: when restoring large amounts of state from scratch, we use the 
> strategy recommended by RocksDB of turning off automatic compactions and 
> dumping everything into L0. We do batch somewhat, but do not sort these small 
> batches before loading into the db, so we end up with a large number of 
> unsorted L0 files.
> When restoration is complete and we toggle the db back to normal (not bulk 
> loading) settings, a background compaction is triggered to merge all these 
> into the next level. This background compaction can take a long time to merge 
> unsorted keys, especially when the amount of data is quite large.
> Any new writes while the number of L0 files exceeds the max will be stalled 
> until the compaction can finish, and processing after restoring from scratch 
> can block beyond the polling interval.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
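
Not a fix for the stall itself, but for context a hedged sketch of the knob users typically reach for: a `RocksDBConfigSetter` that raises the L0 slowdown/stop triggers and the background-job count; the concrete numbers are illustrative only.

{code:java}
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.Options;

public class WriteStallConfigSetter implements RocksDBConfigSetter {

    @Override
    public void setConfig(final String storeName, final Options options, final Map<String, Object> configs) {
        // More headroom before RocksDB slows down / stops writes due to L0 build-up,
        // plus more background threads to drain compactions (example values only).
        options.setLevel0SlowdownWritesTrigger(40);
        options.setLevel0StopWritesTrigger(60);
        options.setMaxBackgroundJobs(4);
    }

    @Override
    public void close(final String storeName, final Options options) {
        // nothing to release in this sketch
    }

    public static void main(final String[] args) {
        final Properties props = new Properties();
        // Wire the setter into a Streams application:
        props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, WriteStallConfigSetter.class);
        System.out.println(props);
    }
}
{code}
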


[jira] [Resolved] (KAFKA-12549) Allow state stores to opt-in transactional support

2024-02-22 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-12549.
-
Resolution: Duplicate

Closing this ticket in favor of K14412.

> Allow state stores to opt-in transactional support
> --
>
> Key: KAFKA-12549
> URL: https://issues.apache.org/jira/browse/KAFKA-12549
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Guozhang Wang
>Priority: Major
>
> Right now Kafka Streams' EOS implementation does not make any assumptions 
> about the state store's transactional support. Allowing the state stores to 
> optionally provide transactional support can have multiple benefits. E.g., we 
> could add some APIs to the {{StateStore}} interface, like {{beginTxn}}, 
> {{commitTxn}} and {{abortTxn}}. The Streams library can determine if these are 
> supported via an additional {{boolean transactional()}} API, and if yes, these 
> APIs can be used under both ALOS and EOS like the following (otherwise just 
> fall back to the normal processing logic):
> Within thread processing loops:
> 1. store.beginTxn
> 2. store.put // during processing
> 3. streams commit // either through eos protocol or not
> 4. store.commitTxn
> 5. start the next txn by store.beginTxn
> If the state stores allow Streams to do something like above, we can have the 
> following benefits:
> * Reduce the duplicated records upon crashes for ALOS (note this is not EOS 
> still, but some middle-ground where uncommitted data within a state store 
> would not be retained if store.commitTxn failed).
> * No need to wipe the state store and re-bootstrap from scratch upon crashes 
> for EOS. E.g., if a crash-failure happened between streams commit completes 
> and store.commitTxn. We can instead just roll-forward the transaction by 
> replaying the changelog from the second most recent streams committed offset 
> towards the most recent committed offset.
> * Remote stores that support txn then do not need to support wiping 
> (https://issues.apache.org/jira/browse/KAFKA-12475).
> * We can fix the known issues of emit-on-change 
> (https://cwiki.apache.org/confluence/display/KAFKA/KIP-557%3A+Add+emit+on+change+support+for+Kafka+Streams).
> * We can support "query committed data only" for interactive queries (see 
> below for reasons).
> As for the implementation of these APIs, there are several options:
> * The state store itself has natural transaction features (e.g. RocksDB).
> * Use an in-memory buffer for all puts within a transaction, and upon 
> `commitTxn` write the whole buffer as a batch to the underlying state store, 
> or just drop the whole buffer upon aborting. Then for interactive queries, 
> one can optionally only query the underlying store for committed data only.
> * Use a separate store as the transient persistent buffer. Upon `beginTxn` 
> create a new empty transient store, and upon `commitTxn` merge the store into 
> the underlying store. Same applies for interactive querying committed-only 
> data. This has a benefit compared with the one above that there's no memory 
> pressure even with long transactions, but incurs more complexity / 
> performance overhead with the separate persistent store.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
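
The per-thread loop above, written out as a sketch; every store method here (`transactional()`, `beginTxn()`, `commitTxn()`, `abortTxn()`) is hypothetical API taken from this ticket's proposal and does not exist on `StateStore`.

{code:java}
import org.apache.kafka.streams.processor.StateStore;

// Hypothetical interface mirroring the proposal above; not part of Kafka.
interface TransactionalStateStore extends StateStore {
    boolean transactional();
    void beginTxn();
    void commitTxn();
    void abortTxn();
}

class TxnProcessingLoopSketch {
    // Pseudo-flow of one commit interval under the proposal; streamsCommit is a stand-in
    // for the commit Streams performs itself (EOS protocol or plain offset commit).
    static void commitInterval(final TransactionalStateStore store, final Runnable streamsCommit) {
        if (!store.transactional()) {
            return; // fall back to the normal processing logic
        }
        store.beginTxn();      // 1. open a store transaction
        // 2. store.put(...) happens during normal processing
        streamsCommit.run();   // 3. streams commit
        store.commitTxn();     // 4. make the store changes durable/visible
        store.beginTxn();      // 5. start the next transaction
    }
}
{code}
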


[jira] [Updated] (KAFKA-16302) Builds failing due to streams test execution failures

2024-02-22 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16302:

Component/s: streams
 unit tests

> Builds failing due to streams test execution failures
> -
>
> Key: KAFKA-16302
> URL: https://issues.apache.org/jira/browse/KAFKA-16302
> Project: Kafka
>  Issue Type: Task
>  Components: streams, unit tests
>Reporter: Justine Olshan
>Priority: Major
>
> I'm seeing this on master and many PR builds for all versions:
>  
> {code:java}
> [2024-02-22T14:37:07.076Z] * What went wrong:
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15417/1/pipeline#step-89-log-1426[2024-02-22T14:37:07.076Z]
>  Execution failed for task ':streams:test'.
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15417/1/pipeline#step-89-log-1427[2024-02-22T14:37:07.076Z]
>  > The following test methods could not be retried, which is unexpected. 
> Please file a bug report at 
> https://github.com/gradle/test-retry-gradle-plugin/issues
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15417/1/pipeline#step-89-log-1428[2024-02-22T14:37:07.076Z]
>  
> org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest#shouldLogAndMeasureExpiredRecords[org.apache.kafka.streams.state.internals.SessionKeySchema@78d39a69]
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15417/1/pipeline#step-89-log-1429[2024-02-22T14:37:07.076Z]
>  
> org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest#shouldLogAndMeasureExpiredRecords[org.apache.kafka.streams.state.internals.WindowKeySchema@3c818ac4]
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15417/1/pipeline#step-89-log-1430[2024-02-22T14:37:07.076Z]
>  
> org.apache.kafka.streams.state.internals.RocksDBTimestampedSegmentedBytesStoreTest#shouldLogAndMeasureExpiredRecords[org.apache.kafka.streams.state.internals.WindowKeySchema@251f7d26]
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15417/1/pipeline#step-89-log-1431[2024-02-22T14:37:07.076Z]
>  
> org.apache.kafka.streams.state.internals.RocksDBTimestampedSegmentedBytesStoreTest#shouldLogAndMeasureExpiredRecords[org.apache.kafka.streams.state.internals.SessionKeySchema@52c8295b]
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15417/1/pipeline#step-89-log-1432[2024-02-22T14:37:07.076Z]
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-16302) Builds failing due to streams test execution failures

2024-02-22 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-16302:
---

Assignee: Justine Olshan

> Builds failing due to streams test execution failures
> -
>
> Key: KAFKA-16302
> URL: https://issues.apache.org/jira/browse/KAFKA-16302
> Project: Kafka
>  Issue Type: Task
>  Components: streams, unit tests
>Reporter: Justine Olshan
>Assignee: Justine Olshan
>Priority: Major
>
> I'm seeing this on master and many PR builds for all versions:
>  
> {code:java}
> [2024-02-22T14:37:07.076Z] * What went wrong:
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15417/1/pipeline#step-89-log-1426[2024-02-22T14:37:07.076Z]
>  Execution failed for task ':streams:test'.
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15417/1/pipeline#step-89-log-1427[2024-02-22T14:37:07.076Z]
>  > The following test methods could not be retried, which is unexpected. 
> Please file a bug report at 
> https://github.com/gradle/test-retry-gradle-plugin/issues
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15417/1/pipeline#step-89-log-1428[2024-02-22T14:37:07.076Z]
>  
> org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest#shouldLogAndMeasureExpiredRecords[org.apache.kafka.streams.state.internals.SessionKeySchema@78d39a69]
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15417/1/pipeline#step-89-log-1429[2024-02-22T14:37:07.076Z]
>  
> org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest#shouldLogAndMeasureExpiredRecords[org.apache.kafka.streams.state.internals.WindowKeySchema@3c818ac4]
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15417/1/pipeline#step-89-log-1430[2024-02-22T14:37:07.076Z]
>  
> org.apache.kafka.streams.state.internals.RocksDBTimestampedSegmentedBytesStoreTest#shouldLogAndMeasureExpiredRecords[org.apache.kafka.streams.state.internals.WindowKeySchema@251f7d26]
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15417/1/pipeline#step-89-log-1431[2024-02-22T14:37:07.076Z]
>  
> org.apache.kafka.streams.state.internals.RocksDBTimestampedSegmentedBytesStoreTest#shouldLogAndMeasureExpiredRecords[org.apache.kafka.streams.state.internals.SessionKeySchema@52c8295b]
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15417/1/pipeline#step-89-log-1432[2024-02-22T14:37:07.076Z]
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16295) Align RocksDB and in-memory store init() sequence

2024-02-21 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16295:
---

 Summary: Align RocksDB and in-memory store init() sequence
 Key: KAFKA-16295
 URL: https://issues.apache.org/jira/browse/KAFKA-16295
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Matthias J. Sax


Cf [https://lists.apache.org/thread/f4z1vmpb21xhyxl6966xtcb3958fyx5d] 
{quote}For RocksDB stores, we open the store first and then call #register, 
[...] However the in-memory store actually registers itself *first*, before 
marking itself as open [...].

I suppose it would make sense to align the two store implementations to have 
the same behavior, and the in-memory store is probably technically more correct.
{quote}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
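
A sketch of the two orderings described in the quote, with stand-in method bodies rather than the real RocksDB and in-memory store code.

{code:java}
import org.apache.kafka.streams.processor.StateStore;
import org.apache.kafka.streams.processor.StateStoreContext;

// Stand-in illustrating only the ordering difference; not the actual store implementations.
abstract class InitOrderSketch implements StateStore {
    protected volatile boolean open = false;

    // RocksDB-style today: open the store first, then register with the context.
    void rocksDbStyleInit(final StateStoreContext context, final StateStore root) {
        openUnderlyingStore();
        open = true;
        context.register(root, (key, value) -> { });
    }

    // In-memory-style today: register first, and only then mark the store as open.
    void inMemoryStyleInit(final StateStoreContext context, final StateStore root) {
        context.register(root, (key, value) -> { });
        open = true;
    }

    abstract void openUnderlyingStore();
}
{code}
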


[jira] [Updated] (KAFKA-16284) Performance regression in RocksDB

2024-02-20 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-16284:

Affects Version/s: 3.8.0

> Performance regression in RocksDB
> -
>
> Key: KAFKA-16284
> URL: https://issues.apache.org/jira/browse/KAFKA-16284
> Project: Kafka
>  Issue Type: Task
>  Components: streams
>Affects Versions: 3.8.0
>Reporter: Lucas Brutschy
>Assignee: Lucas Brutschy
>Priority: Major
>
> In benchmarks, we are noticing a performance regression in 
> `RocksDBStore`.
> The regression happens between those two commits:
>  
> {code:java}
> trunk - 70c8b8d0af - regressed - 2024-01-06T14:00:20Z
> trunk - d5aa341a18 - not regressed - 2023-12-31T11:47:14Z
> {code}
> The regression can be reproduced by the following test:
>  
> {code:java}
> package org.apache.kafka.streams.state.internals;
>
> import org.apache.kafka.common.serialization.Serdes;
> import org.apache.kafka.common.utils.Bytes;
> import org.apache.kafka.streams.StreamsConfig;
> import org.apache.kafka.streams.processor.StateStoreContext;
> import org.apache.kafka.test.InternalMockProcessorContext;
> import org.apache.kafka.test.MockRocksDbConfigSetter;
> import org.apache.kafka.test.StreamsTestUtils;
> import org.apache.kafka.test.TestUtils;
> import org.junit.Before;
> import org.junit.Test;
> import java.io.File;
> import java.nio.ByteBuffer;
> import java.util.Properties;
>
> public class RocksDBStorePerfTest {
>     InternalMockProcessorContext context;
>     RocksDBStore rocksDBStore;
>
>     final static String DB_NAME = "db-name";
>     final static String METRICS_SCOPE = "metrics-scope";
>
>     RocksDBStore getRocksDBStore() {
>         return new RocksDBStore(DB_NAME, METRICS_SCOPE);
>     }
>
>     @Before
>     public void setUp() {
>         final Properties props = StreamsTestUtils.getStreamsConfig();
>         props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, MockRocksDbConfigSetter.class);
>         File dir = TestUtils.tempDirectory();
>         context = new InternalMockProcessorContext<>(
>             dir,
>             Serdes.String(),
>             Serdes.String(),
>             new StreamsConfig(props)
>         );
>     }
>
>     @Test
>     public void testPerf() {
>         long start = System.currentTimeMillis();
>         for (int i = 0; i < 10; i++) {
>             System.out.println("Iteration: " + i + " Time: " + (System.currentTimeMillis() - start));
>             RocksDBStore rocksDBStore = getRocksDBStore();
>             rocksDBStore.init((StateStoreContext) context, rocksDBStore);
>             for (int j = 0; j < 100; j++) {
>                 rocksDBStore.put(new Bytes(ByteBuffer.allocate(4).putInt(j).array()), "perf".getBytes());
>             }
>             rocksDBStore.close();
>         }
>         long end = System.currentTimeMillis();
>         System.out.println("Time: " + (end - start));
>     }
> }
>  {code}
>  
> I have isolated the regression to commit 
> [5bc3aa4|https://github.com/apache/kafka/commit/5bc3aa428067dff1f2b9075ff5d1351fb05d4b10].
>  On my machine, the test takes ~8 seconds before 
> [5bc3aa4|https://github.com/apache/kafka/commit/5bc3aa428067dff1f2b9075ff5d1351fb05d4b10]
>  and ~30 seconds after 
> [5bc3aa4|https://github.com/apache/kafka/commit/5bc3aa428067dff1f2b9075ff5d1351fb05d4b10].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16279) Avoid leaking abstractions of `StateStore`

2024-02-19 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16279:
---

 Summary: Avoid leaking abstractions of `StateStore`
 Key: KAFKA-16279
 URL: https://issues.apache.org/jira/browse/KAFKA-16279
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Matthias J. Sax


The `StateStore` interface is user facing and contains a few life-cycle 
management methods (like `init()` and `close()`) – those methods are exposed 
for users to develop custom state stores.

However, we also use `StateStore` as the base interface for store handles in the 
PAPI, and thus life-cycle management methods are leaking into the PAPI (maybe 
also others – it would need a dedicated discussion which ones we consider useful 
for PAPI users and which not).

We should consider changing what we expose in the PAPI (atm, we only document 
via JavaDocs that e.g. `close()` should never be called; but that's of course not 
ideal, and it would be better if `close()` et al. were not exposed to `PAPI` 
users to begin with.)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
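
For illustration, a hedged sketch of the leak: the handle a PAPI processor obtains is typed as a `StateStore` subtype, so life-cycle methods are visible to user code even though they must never be called; the store name is assumed to be registered on the topology elsewhere.

{code:java}
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;
import org.apache.kafka.streams.state.KeyValueStore;

public class LifecycleLeakIllustration implements Processor<String, String, String, String> {
    private KeyValueStore<String, String> store;

    @Override
    public void init(final ProcessorContext<String, String> context) {
        // KeyValueStore extends StateStore, so the handle also carries init()/close()/flush().
        store = context.getStateStore("my-store");
    }

    @Override
    public void process(final Record<String, String> record) {
        store.put(record.key(), record.value());   // intended usage of the handle
        // store.close();                          // compiles, but must never be called by users:
        //                                         // exactly the leaked life-cycle surface described above
    }
}
{code}
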


[jira] [Updated] (KAFKA-14049) Relax Non Null Requirement for KStreamGlobalKTable Left Join

2024-02-18 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-14049:

Fix Version/s: 3.7.0

> Relax Non Null Requirement for KStreamGlobalKTable Left Join
> 
>
> Key: KAFKA-14049
> URL: https://issues.apache.org/jira/browse/KAFKA-14049
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Saumya Gupta
>Assignee: Florin Akermann
>Priority: Major
> Fix For: 3.7.0
>
>
> Null values in the stream for a left join would indicate a tombstone message 
> that needs to be propagated if not actually joined with the GlobalKTable 
> message; hence these messages should not be ignored.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
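
For context, a minimal KStream-GlobalKTable left-join sketch; topic names and serdes are illustrative, and the ticket's point is that stream records carrying a null value should no longer be silently ignored before the join.

{code:java}
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

public class StreamGlobalTableLeftJoinExample {
    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();

        final KStream<String, String> orders =
            builder.stream("orders", Consumed.with(Serdes.String(), Serdes.String()));
        final GlobalKTable<String, String> customers =
            builder.globalTable("customers", Consumed.with(Serdes.String(), Serdes.String()));

        // Stream-GlobalKTable left join: the key selector picks the lookup key, and the
        // ValueJoiner sees null for the table side when there is no match.
        orders
            .leftJoin(
                customers,
                (orderKey, orderValue) -> orderKey,
                (orderValue, customerValue) -> orderValue + "/" + customerValue)
            .to("enriched-orders", Produced.with(Serdes.String(), Serdes.String()));

        System.out.println(builder.build().describe());
    }
}
{code}
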


[jira] [Updated] (KAFKA-15143) MockFixedKeyProcessorContext is missing from test-utils

2024-02-16 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-15143:

Labels: needs-kip  (was: )

> MockFixedKeyProcessorContext is missing from test-utils
> ---
>
> Key: KAFKA-15143
> URL: https://issues.apache.org/jira/browse/KAFKA-15143
> Project: Kafka
>  Issue Type: Bug
>  Components: streams-test-utils
>Affects Versions: 3.5.0
>Reporter: Tomasz Kaszuba
>Assignee: Shashwat Pandey
>Priority: Major
>  Labels: needs-kip
>
> I am trying to test a ContextualFixedKeyProcessor but it is not possible to 
> call the init method from within a unit test since the MockProcessorContext 
> doesn't implement  
> {code:java}
> FixedKeyProcessorContext {code}
> but only
> {code:java}
> ProcessorContext
> {code}
> Shouldn't there also be a *MockFixedKeyProcessorContext* in the test utils?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
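
For context, a sketch of the kind of processor the reporter describes; `MockFixedKeyProcessorContext` in the trailing comment is the missing class this ticket asks for, not existing API.

{code:java}
import org.apache.kafka.streams.processor.api.ContextualFixedKeyProcessor;
import org.apache.kafka.streams.processor.api.FixedKeyRecord;

public class UppercaseValueProcessor extends ContextualFixedKeyProcessor<String, String, String> {

    @Override
    public void process(final FixedKeyRecord<String, String> record) {
        context().forward(record.withValue(record.value().toUpperCase()));
    }
}

// In a unit test one would like to write something like:
//
//   final UppercaseValueProcessor processor = new UppercaseValueProcessor();
//   processor.init(new MockFixedKeyProcessorContext<>());   // <-- does not exist in test-utils yet
//
// The existing org.apache.kafka.streams.processor.api.MockProcessorContext implements
// ProcessorContext but not FixedKeyProcessorContext, so it cannot be handed to init().
{code}
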

