[jira] [Resolved] (KAFKA-16825) CVE vulnerabilities in Jetty and netty

2024-05-23 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-16825.

Fix Version/s: 3.8.0
   Resolution: Fixed

> CVE vulnerabilities in Jetty and netty
> --
>
> Key: KAFKA-16825
> URL: https://issues.apache.org/jira/browse/KAFKA-16825
> Project: Kafka
>  Issue Type: Task
>Affects Versions: 3.7.0
>Reporter: mooner
>Assignee: Mickael Maison
>Priority: Major
> Fix For: 3.8.0
>
>
> There is a vulnerability (CVE-2024-29025) in Netty, a transitive dependency 
> of Kafka, which has been fixed in Netty 4.1.108.Final.
> There is also a vulnerability (CVE-2024-22201) in the transitive dependency 
> Jetty, which has been fixed in Jetty 9.4.54.v20240208.
> When will Kafka upgrade Netty and Jetty to fix these two vulnerabilities?
> Reference website:
> https://nvd.nist.gov/vuln/detail/CVE-2024-29025
> https://nvd.nist.gov/vuln/detail/CVE-2024-22201
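Upgrades like this are typically a one-line version bump in Kafka's dependency list. A hedged sketch of what the change could look like; the file layout and key names are assumptions modeled on gradle/dependencies.gradle, not the actual patch:

```groovy
// gradle/dependencies.gradle (sketch; key names are assumptions)
versions += [
  netty: "4.1.108.Final",    // fixes CVE-2024-29025
  jetty: "9.4.54.v20240208"  // fixes CVE-2024-22201
]
```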



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-12399) Deprecate Log4J Appender KIP-719

2024-05-22 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-12399.

Fix Version/s: 3.8.0
   Resolution: Fixed

> Deprecate Log4J Appender KIP-719
> 
>
> Key: KAFKA-12399
> URL: https://issues.apache.org/jira/browse/KAFKA-12399
> Project: Kafka
>  Issue Type: Improvement
>  Components: logging
>Reporter: Dongjin Lee
>Assignee: Mickael Maison
>Priority: Major
>  Labels: needs-kip
> Fix For: 3.8.0
>
>
> As a follow-up to KAFKA-9366, we have to entirely remove the log4j 1.2.7 
> dependency from the classpath by removing the dependency on log4j-appender.
> KIP-719: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-719%3A+Deprecate+Log4J+Appender





[jira] [Resolved] (KAFKA-7632) Support Compression Level

2024-05-21 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-7632.
---
Fix Version/s: 3.8.0
 Assignee: Mickael Maison  (was: Dongjin Lee)
   Resolution: Fixed

> Support Compression Level
> -
>
> Key: KAFKA-7632
> URL: https://issues.apache.org/jira/browse/KAFKA-7632
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 2.1.0
> Environment: all
>Reporter: Dave Waters
>Assignee: Mickael Maison
>Priority: Major
>  Labels: needs-kip
> Fix For: 3.8.0
>
>
> The compression level for ZSTD is currently set to the default level (3), 
> which is a conservative setting that in some use cases eliminates the value 
> ZSTD provides through improved compression. Each use case varies, so 
> exposing the level as a producer, broker, and topic configuration setting 
> will allow users to adjust it.
> Since the same consideration applies to the other compression codecs, we 
> should expose their levels as well.
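A sketch of what the resulting producer configuration could look like. The per-codec level key follows the KIP-390 naming convention; treat the exact key names as assumptions until you check the client version you run:

```java
import java.util.Properties;

public class CompressionLevelConfig {
    // Builds producer properties with an explicit zstd compression level.
    public static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // Pick the codec, then tune its level (key name per KIP-390; an assumption here).
        props.put("compression.type", "zstd");
        props.put("compression.zstd.level", "10"); // default level is 3
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerProps().getProperty("compression.zstd.level"));
    }
}
```

Higher levels trade CPU for compression ratio, so the right value is workload-specific.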





[jira] [Created] (KAFKA-16771) First log directory printed twice when formatting storage

2024-05-15 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16771:
--

 Summary: First log directory printed twice when formatting storage
 Key: KAFKA-16771
 URL: https://issues.apache.org/jira/browse/KAFKA-16771
 Project: Kafka
  Issue Type: Task
  Components: tools
Affects Versions: 3.7.0
Reporter: Mickael Maison


If multiple log directories are set, when running bin/kafka-storage.sh format, 
the first directory is printed twice. For example:

{noformat}
bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c 
config/kraft/server.properties --release-version 3.6
metaPropertiesEnsemble=MetaPropertiesEnsemble(metadataLogDir=Optional.empty, 
dirs={/tmp/kraft-combined-logs: EMPTY, /tmp/kraft-combined-logs2: EMPTY})
Formatting /tmp/kraft-combined-logs with metadata.version 3.6-IV2.
Formatting /tmp/kraft-combined-logs with metadata.version 3.6-IV2.
Formatting /tmp/kraft-combined-logs2 with metadata.version 3.6-IV2.
{noformat}
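The duplicate line suggests the first directory ends up in the list of directories to format twice (for example, once as the metadata log dir and once from the dir list). A minimal sketch of the kind of order-preserving de-duplication that would fix the output; the names are hypothetical, not the actual StorageTool code:

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class FormatOutput {
    // De-duplicate while preserving order, so each directory is formatted once.
    public static Set<String> dirsToFormat(List<String> configured) {
        return new LinkedHashSet<>(configured);
    }

    public static void main(String[] args) {
        // The first entry appearing twice in the input reproduces the symptom.
        List<String> dirs = List.of("/tmp/kraft-combined-logs",
                                    "/tmp/kraft-combined-logs",
                                    "/tmp/kraft-combined-logs2");
        for (String dir : dirsToFormat(dirs)) {
            System.out.println("Formatting " + dir + " with metadata.version 3.6-IV2.");
        }
    }
}
```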








[jira] [Created] (KAFKA-16769) Delete deprecated add.source.alias.to.metrics configuration

2024-05-15 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16769:
--

 Summary: Delete deprecated add.source.alias.to.metrics 
configuration
 Key: KAFKA-16769
 URL: https://issues.apache.org/jira/browse/KAFKA-16769
 Project: Kafka
  Issue Type: Task
  Components: mirrormaker
Reporter: Mickael Maison
Assignee: Mickael Maison
 Fix For: 4.0.0








[jira] [Created] (KAFKA-16646) Consider only running the CVE scanner action on apache/kafka and not in forks

2024-04-30 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16646:
--

 Summary: Consider only running the CVE scanner action on 
apache/kafka and not in forks
 Key: KAFKA-16646
 URL: https://issues.apache.org/jira/browse/KAFKA-16646
 Project: Kafka
  Issue Type: Sub-task
Reporter: Mickael Maison


Currently the CVE scanner action is failing due to CVEs in the base image. It 
seems that anybody who has a fork is getting daily emails about it.
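A common way to do this in GitHub Actions is a repository guard on the job. A sketch; the workflow file name and job id here are assumptions:

```yaml
# .github/workflows/docker_scan.yml (sketch)
jobs:
  scan:
    # Skip the scheduled scan in forks so fork owners don't get daily failure emails.
    if: github.repository == 'apache/kafka'
    runs-on: ubuntu-latest
```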





[jira] [Created] (KAFKA-16645) CVEs in 3.7.0 docker image

2024-04-30 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16645:
--

 Summary: CVEs in 3.7.0 docker image
 Key: KAFKA-16645
 URL: https://issues.apache.org/jira/browse/KAFKA-16645
 Project: Kafka
  Issue Type: Task
Affects Versions: 3.7.0
Reporter: Mickael Maison


Our Docker Image CVE Scanner GitHub action reports 2 high CVEs in our base 
image:

apache/kafka:3.7.0 (alpine 3.19.1)
==
Total: 2 (HIGH: 2, CRITICAL: 0)

Library   Vulnerability   Severity  Status  Installed Version  Fixed Version     Title
libexpat  CVE-2023-52425  HIGH      fixed   2.5.0-r2           2.6.0-r0          expat: parsing large tokens can trigger a denial of service
                                                                                 https://avd.aquasec.com/nvd/cve-2023-52425
libexpat  CVE-2024-28757  HIGH      fixed   2.5.0-r2           2.6.2-r0          expat: XML Entity Expansion
                                                                                 https://avd.aquasec.com/nvd/cve-2024-28757

Looking at the 
[KIP|https://cwiki.apache.org/confluence/display/KAFKA/KIP-975%3A+Docker+Image+for+Apache+Kafka#KIP975:DockerImageforApacheKafka-WhatifweobserveabugoracriticalCVEinthereleasedApacheKafkaDockerImage?]
 that introduced the docker images, it seems we should release a bugfix when 
high CVEs are detected. It would be good to investigate and assess whether 
Kafka is impacted or not.






[jira] [Resolved] (KAFKA-16478) Links for Kafka 3.5.2 release are broken

2024-04-08 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-16478.

Resolution: Fixed

> Links for Kafka 3.5.2 release are broken
> 
>
> Key: KAFKA-16478
> URL: https://issues.apache.org/jira/browse/KAFKA-16478
> Project: Kafka
>  Issue Type: Bug
>  Components: website
>Affects Versions: 3.5.2
>Reporter: Philipp Trulson
>Assignee: Mickael Maison
>Priority: Major
>
> While trying to update our setup, I noticed that the download links for the 
> 3.5.2 release are broken. They all point to a different host and also contain 
> an additional `/kafka` in their URL. Compare:
> not working:
> [https://downloads.apache.org/kafka/kafka/3.5.2/RELEASE_NOTES.html]
> working:
> [https://archive.apache.org/dist/kafka/3.5.1/RELEASE_NOTES.html]
> [https://downloads.apache.org/kafka/3.6.2/RELEASE_NOTES.html]
> This goes for all links in the release - archives, checksums, signatures.





[jira] [Resolved] (KAFKA-15882) Scheduled nightly github actions workflow for CVE reports on published docker images

2024-03-25 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15882.

Fix Version/s: 3.8.0
   Resolution: Fixed

> Scheduled nightly github actions workflow for CVE reports on published docker 
> images
> 
>
> Key: KAFKA-15882
> URL: https://issues.apache.org/jira/browse/KAFKA-15882
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Vedarth Sharma
>Assignee: Vedarth Sharma
>Priority: Major
> Fix For: 3.8.0
>
>
> This scheduled github actions workflow will check supported published docker 
> images for CVEs and generate reports.





[jira] [Resolved] (KAFKA-16206) KRaftMigrationZkWriter tries to delete deleted topic configs twice

2024-03-21 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-16206.

Fix Version/s: 3.8.0
   Resolution: Fixed

> KRaftMigrationZkWriter tries to delete deleted topic configs twice
> --
>
> Key: KAFKA-16206
> URL: https://issues.apache.org/jira/browse/KAFKA-16206
> Project: Kafka
>  Issue Type: Bug
>  Components: kraft, migration
>Reporter: David Arthur
>Assignee: Alyssa Huang
>Priority: Minor
> Fix For: 3.8.0
>
>
> When deleting a topic, we see spurious ERROR logs from 
> kafka.zk.migration.ZkConfigMigrationClient:
>  
> {code:java}
> Did not delete ConfigResource(type=TOPIC, name='xxx') since the node did not 
> exist. {code}
> This seems to happen because ZkTopicMigrationClient#deleteTopic is deleting 
> the topic, partitions, and config ZNodes in one shot. Subsequent calls from 
> KRaftMigrationZkWriter to delete the config encounter a NO_NODE since the 
> ZNode is already gone.





[jira] [Created] (KAFKA-16355) ConcurrentModificationException in InMemoryTimeOrderedKeyValueBuffer.evictWhile

2024-03-08 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16355:
--

 Summary: ConcurrentModificationException in 
InMemoryTimeOrderedKeyValueBuffer.evictWhile
 Key: KAFKA-16355
 URL: https://issues.apache.org/jira/browse/KAFKA-16355
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 3.5.1
Reporter: Mickael Maison


While a Streams application was restoring its state after an outage, it hit the 
following:

org.apache.kafka.streams.errors.StreamsException: Exception caught in process. 
taskId=0_16, processor=KSTREAM-SOURCE-00, topic=, partition=16, 
offset=454875695, stacktrace=java.util.ConcurrentModificationException
at java.base/java.util.TreeMap$PrivateEntryIterator.remove(TreeMap.java:1507)
at 
org.apache.kafka.streams.state.internals.InMemoryTimeOrderedKeyValueBuffer.evictWhile(InMemoryTimeOrderedKeyValueBuffer.java:423)
at 
org.apache.kafka.streams.kstream.internals.suppress.KTableSuppressProcessorSupplier$KTableSuppressProcessor.enforceConstraints(KTableSuppressProcessorSupplier.java:178)
at 
org.apache.kafka.streams.kstream.internals.suppress.KTableSuppressProcessorSupplier$KTableSuppressProcessor.process(KTableSuppressProcessorSupplier.java:165)
at 
org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:157)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forwardInternal(ProcessorContextImpl.java:290)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:269)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:228)
at 
org.apache.kafka.streams.kstream.internals.TimestampedCacheFlushListener.apply(TimestampedCacheFlushListener.java:45)
at 
org.apache.kafka.streams.state.internals.MeteredWindowStore.lambda$setFlushListener$4(MeteredWindowStore.java:181)
at 
org.apache.kafka.streams.state.internals.CachingWindowStore.putAndMaybeForward(CachingWindowStore.java:124)
at 
org.apache.kafka.streams.state.internals.CachingWindowStore.lambda$initInternal$0(CachingWindowStore.java:99)
at 
org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:158)
at 
org.apache.kafka.streams.state.internals.NamedCache.evict(NamedCache.java:252)
at 
org.apache.kafka.streams.state.internals.ThreadCache.maybeEvict(ThreadCache.java:302)
at 
org.apache.kafka.streams.state.internals.ThreadCache.put(ThreadCache.java:179)
at 
org.apache.kafka.streams.state.internals.CachingWindowStore.put(CachingWindowStore.java:173)
at 
org.apache.kafka.streams.state.internals.CachingWindowStore.put(CachingWindowStore.java:47)
at 
org.apache.kafka.streams.state.internals.MeteredWindowStore.lambda$put$5(MeteredWindowStore.java:201)
at 
org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl.maybeMeasureLatency(StreamsMetricsImpl.java:872)
at 
org.apache.kafka.streams.state.internals.MeteredWindowStore.put(MeteredWindowStore.java:200)
at 
org.apache.kafka.streams.processor.internals.AbstractReadWriteDecorator$WindowStoreReadWriteDecorator.put(AbstractReadWriteDecorator.java:201)
at 
org.apache.kafka.streams.kstream.internals.KStreamWindowAggregate$KStreamWindowAggregateProcessor.process(KStreamWindowAggregate.java:138)
at 
org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:157)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forwardInternal(ProcessorContextImpl.java:290)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:269)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:228)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:215)
at 
org.apache.kafka.streams.kstream.internals.KStreamPeek$KStreamPeekProcessor.process(KStreamPeek.java:42)
at 
org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:159)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forwardInternal(ProcessorContextImpl.java:290)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:269)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:228)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:215)
at 
org.apache.kafka.streams.kstream.internals.KStreamFilter$KStreamFilterProcessor.process(KStreamFilter.java:44)
at 
org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:159)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forwardInternal(ProcessorContextImpl.java:290)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:269)
at 
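The top of the stack trace points at TreeMap's fail-fast iterator: the buffer's map was structurally modified behind the iterator's back, so the iterator's next operation throws. A minimal standalone repro of that failure mode (illustrative only, not the Streams buffer code itself):

```java
import java.util.Iterator;
import java.util.Map;
import java.util.TreeMap;

public class TreeMapCmeDemo {
    // Demonstrates the ConcurrentModificationException seen in the stack trace:
    // modifying the TreeMap directly while an iterator over it is in use.
    public static boolean triggersCme() {
        TreeMap<Integer, String> map = new TreeMap<>(Map.of(1, "a", 2, "b", 3, "c"));
        Iterator<Map.Entry<Integer, String>> it = map.entrySet().iterator();
        try {
            it.next();
            map.remove(2); // structural change behind the iterator's back...
            it.remove();   // ...so the fail-fast check in TreeMap's iterator throws
            return false;
        } catch (java.util.ConcurrentModificationException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(triggersCme()); // prints "true"
    }
}
```

The usual fix is to perform all removals through the iterator itself, or to collect keys and remove them after iteration completes.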

[jira] [Created] (KAFKA-16347) Bump ZooKeeper to 3.8.4

2024-03-06 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16347:
--

 Summary: Bump ZooKeeper to 3.8.4
 Key: KAFKA-16347
 URL: https://issues.apache.org/jira/browse/KAFKA-16347
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.6.1, 3.7.0
Reporter: Mickael Maison
Assignee: Mickael Maison


ZooKeeper 3.8.4 was released and contains a few CVE fixes: 
https://zookeeper.apache.org/doc/r3.8.4/releasenotes.html

We should update 3.6, 3.7 and trunk to use this new ZooKeeper release.





[jira] [Created] (KAFKA-16318) Add javadoc to KafkaMetric

2024-03-01 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16318:
--

 Summary: Add javadoc to KafkaMetric
 Key: KAFKA-16318
 URL: https://issues.apache.org/jira/browse/KAFKA-16318
 Project: Kafka
  Issue Type: Bug
  Components: docs
Reporter: Mickael Maison


KafkaMetric is part of the public API but it's missing javadoc describing the 
class and several of its methods.





[jira] [Created] (KAFKA-16292) Revamp upgrade.html page

2024-02-21 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16292:
--

 Summary: Revamp upgrade.html page 
 Key: KAFKA-16292
 URL: https://issues.apache.org/jira/browse/KAFKA-16292
 Project: Kafka
  Issue Type: Improvement
  Components: documentation
Reporter: Mickael Maison


At the moment we keep adding to this page for each release. The upgrade.html 
file is now over 2000 lines long. It still contains steps for upgrading from 0.8 
to 0.9! These steps are already in the docs for each version, which can be 
accessed via //documentation.html.





[jira] [Resolved] (KAFKA-13566) producer exponential backoff implementation

2024-02-19 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-13566.

Resolution: Duplicate

> producer exponential backoff implementation
> ---
>
> Key: KAFKA-13566
> URL: https://issues.apache.org/jira/browse/KAFKA-13566
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Luke Chen
>Priority: Major
>






[jira] [Resolved] (KAFKA-13567) adminClient exponential backoff implementation

2024-02-19 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-13567.

Resolution: Duplicate

> adminClient exponential backoff implementation
> --
>
> Key: KAFKA-13567
> URL: https://issues.apache.org/jira/browse/KAFKA-13567
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Luke Chen
>Priority: Major
>






[jira] [Resolved] (KAFKA-13565) consumer exponential backoff implementation

2024-02-19 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-13565.

Fix Version/s: 3.7.0
   Resolution: Duplicate

> consumer exponential backoff implementation
> ---
>
> Key: KAFKA-13565
> URL: https://issues.apache.org/jira/browse/KAFKA-13565
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Luke Chen
>Priority: Major
> Fix For: 3.7.0
>
>






[jira] [Resolved] (KAFKA-14576) Move ConsoleConsumer to tools

2024-02-13 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-14576.

Fix Version/s: 3.8.0
   Resolution: Fixed

> Move ConsoleConsumer to tools
> -
>
> Key: KAFKA-14576
> URL: https://issues.apache.org/jira/browse/KAFKA-14576
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Major
> Fix For: 3.8.0
>
>






[jira] [Resolved] (KAFKA-14822) Allow restricting File and Directory ConfigProviders to specific paths

2024-02-13 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-14822.

Fix Version/s: 3.8.0
 Assignee: Gantigmaa Selenge  (was: Mickael Maison)
   Resolution: Fixed

> Allow restricting File and Directory ConfigProviders to specific paths
> --
>
> Key: KAFKA-14822
> URL: https://issues.apache.org/jira/browse/KAFKA-14822
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Mickael Maison
>Assignee: Gantigmaa Selenge
>Priority: Major
>  Labels: need-kip
> Fix For: 3.8.0
>
>
> In sensitive environments, it would be interesting to be able to restrict the 
> files that can be accessed by the built-in configuration providers.
> For example:
> config.providers=directory
> config.providers.directory.class=org.apache.kafka.connect.configs.DirectoryConfigProvider
> config.providers.directory.path=/var/run
> Then if a caller tries to access another path, for example
> ssl.keystore.password=${directory:/etc/passwd:keystore-password}
> it would be rejected.
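A hedged sketch of the kind of path check such a restriction implies; the class and method names are hypothetical, and the eventual implementation may differ. The important detail is normalizing before comparing, so `..` segments cannot escape the allowed root:

```java
import java.nio.file.Path;

public class AllowedPathChecker {
    private final Path allowedRoot;

    public AllowedPathChecker(String allowedRoot) {
        this.allowedRoot = Path.of(allowedRoot).normalize();
    }

    // Normalize first so "/var/run/../etc/passwd" cannot escape the root.
    public boolean isAllowed(String requested) {
        return Path.of(requested).normalize().startsWith(allowedRoot);
    }

    public static void main(String[] args) {
        AllowedPathChecker checker = new AllowedPathChecker("/var/run");
        System.out.println(checker.isAllowed("/var/run/secrets"));       // true
        System.out.println(checker.isAllowed("/etc/passwd"));            // false
        System.out.println(checker.isAllowed("/var/run/../etc/passwd")); // false
    }
}
```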





[jira] [Created] (KAFKA-16246) Cleanups in ConsoleConsumer

2024-02-13 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16246:
--

 Summary: Cleanups in ConsoleConsumer
 Key: KAFKA-16246
 URL: https://issues.apache.org/jira/browse/KAFKA-16246
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Reporter: Mickael Maison


When rewriting ConsoleConsumer in Java, we mimicked the logic flow and types 
used in the Scala implementation in order to keep the conversion and review 
process simple.

Once the rewrite is merged, we should refactor some of the logic to make it 
more Java-like. This includes removing Optional where it makes sense and moving 
all the argument checking logic into ConsoleConsumerOptions.


See https://github.com/apache/kafka/pull/15274 for pointers.

  





[jira] [Resolved] (KAFKA-16238) ConnectRestApiTest broken after KIP-1004

2024-02-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-16238.

Fix Version/s: 3.8.0
   Resolution: Fixed

> ConnectRestApiTest broken after KIP-1004
> 
>
> Key: KAFKA-16238
> URL: https://issues.apache.org/jira/browse/KAFKA-16238
> Project: Kafka
>  Issue Type: Improvement
>  Components: connect
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Major
> Fix For: 3.8.0
>
>
> KIP-1004 introduced a new configuration for connectors: 'tasks.max.enforce'.
> The ConnectRestApiTest system test needs to be updated to expect the new 
> configuration.





[jira] [Created] (KAFKA-16238) ConnectRestApiTest broken after KIP-1004

2024-02-09 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16238:
--

 Summary: ConnectRestApiTest broken after KIP-1004
 Key: KAFKA-16238
 URL: https://issues.apache.org/jira/browse/KAFKA-16238
 Project: Kafka
  Issue Type: Improvement
  Components: connect
Reporter: Mickael Maison
Assignee: Mickael Maison


KIP-1004 introduced a new configuration for connectors: 'tasks.max.enforce'.

The ConnectRestApiTest system test needs to be updated to expect the new 
configuration.





[jira] [Resolved] (KAFKA-12937) Mirrormaker2 can only start from the beginning of a topic

2024-02-08 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-12937.

Resolution: Duplicate

> Mirrormaker2  can only start from the beginning of a topic
> --
>
> Key: KAFKA-12937
> URL: https://issues.apache.org/jira/browse/KAFKA-12937
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 2.8.0
> Environment: Dockerized environment
>Reporter: Daan Bosch
>Priority: Major
>
> *Goal:*
>  I want to replace Mirrormaker version 1 with Mirrormaker2.
>  To do this I want to:
>  start Mirrormaker2 from the latest offset of every topic
>  stop Mirrormaker1
>  There should only be a couple of duplicate messages.
> *What happened:*
>  Mirrormaker2 starts replicating from the start of all topics
> *How to reproduce:*
>  Start two Kafka clusters, A and B
>  Produce 3000 messages to cluster A on a topic (TOPIC1)
>  Kafka Connect is running and connected to cluster B
>  Start a Mirrormaker2 task in Connect to replicate messages from cluster A, 
> with the consumer option auto.offset.reset set to latest
>  Produce another 3000 messages to cluster A on the same topic (TOPIC1)
> *Expected result:*
>  Cluster B will only contain the messages produced the second time (3000 in 
> total) on TOPIC1
> *Actual result:*
>  The mirror picks up all messages from the start (6000 in total) and 
> replicates them to cluster B
> *Additional logs:*
>  Logs from the consumer of the Mirrormaker task:
> mirrormaker.log:7581:mirrormaker_1 | [2021-06-11 09:31:40,403] INFO [Consumer 
> clientId=consumer-null-4, groupId=null] Seeking to offset 0 for partition 
> perf-test-8 (org.apache.kafka.clients.consumer.KafkaConsumer:1582)
>  mirrormaker.log:7583:mirrormaker_1 | [2021-06-11 09:31:40,403] INFO 
> [Consumer clientId=consumer-null-4, groupId=null] Seeking to offset 0 for 
> partition perf-test-3 (org.apache.kafka.clients.consumer.KafkaConsumer:1582)
>  mirrormaker.log:7585:mirrormaker_1 | [2021-06-11 09:31:40,403] INFO 
> [Consumer clientId=consumer-null-4, groupId=null] Seeking to offset 0 for 
> partition perf-test-2 (org.apache.kafka.clients.consumer.KafkaConsumer:1582)
>  mirrormaker.log:7587:mirrormaker_1 | [2021-06-11 09:31:40,403] INFO 
> [Consumer clientId=consumer-null-4, groupId=null] Seeking to offset 0 for 
> partition perf-test-1 (org.apache.kafka.clients.consumer.KafkaConsumer:1582)
>  mirrormaker.log:7589:mirrormaker_1 | [2021-06-11 09:31:40,404] INFO 
> [Consumer clientId=consumer-null-4, groupId=null] Seeking to offset 0 for 
> partition perf-test-0 (org.apache.kafka.clients.consumer.KafkaConsumer:1582)
>  mirrormaker.log:7591:mirrormaker_1 | [2021-06-11 09:31:40,404] INFO 
> [Consumer clientId=consumer-null-4, groupId=null] Seeking to offset 0 for 
> partition perf-test-7 (org.apache.kafka.clients.consumer.KafkaConsumer:1582)
>  mirrormaker.log:7593:mirrormaker_1 | [2021-06-11 09:31:40,404] INFO 
> [Consumer clientId=consumer-null-4, groupId=null] Seeking to offset 0 for 
> partition perf-test-6 (org.apache.kafka.clients.consumer.KafkaConsumer:1582)
>  mirrormaker.log:7595:mirrormaker_1 | [2021-06-11 09:31:40,404] INFO 
> [Consumer clientId=consumer-null-4, groupId=null] Seeking to offset 0 for 
> partition perf-test-5 (org.apache.kafka.clients.consumer.KafkaConsumer:1582)
>  mirrormaker.log:7597:mirrormaker_1 | [2021-06-11 09:31:40,404] INFO 
> [Consumer clientId=consumer-null-4, groupId=null] Seeking to offset 0 for 
> partition perf-test-4 (org.apache.kafka.clients.consumer.KafkaConsumer:1582)
> You can see they are trying to seek to a position, thus overriding the latest 
> offset.
>  
> You can see it is doing a seek to position 0 for every partition, which is 
> not what I expected.





[jira] [Resolved] (KAFKA-8259) Build RPM for Kafka

2024-02-08 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-8259.
---
Resolution: Won't Do

> Build RPM for Kafka
> ---
>
> Key: KAFKA-8259
> URL: https://issues.apache.org/jira/browse/KAFKA-8259
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Reporter: Patrick Dignan
>Priority: Minor
>
> RPM packaging eases the installation and deployment of Kafka to make it much 
> more standard.
> I noticed in https://issues.apache.org/jira/browse/KAFKA-1324 [~jkreps] 
> closed the issue because other sources provide packaging.  I think it's 
> worthwhile for the standard, open source project to provide this as a base to 
> reduce redundant work and provide this functionality for users.  Other 
> similar open source software like Elasticsearch create an RPM 
> [https://github.com/elastic/elasticsearch/blob/0ad3d90a36529bf369813ea6253f305e11aff2e9/distribution/packages/build.gradle].
>   This also makes forking internally more maintainable by reducing the amount 
> of work to be done for each version upgrade.
> I have a patch to add this functionality that I will clean up and PR on 
> Github.





[jira] [Resolved] (KAFKA-9094) Validate the replicas for partition reassignments triggered through the /admin/reassign_partitions zNode

2024-02-08 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-9094.
---
Resolution: Won't Do

> Validate the replicas for partition reassignments triggered through the 
> /admin/reassign_partitions zNode
> 
>
> Key: KAFKA-9094
> URL: https://issues.apache.org/jira/browse/KAFKA-9094
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Stanislav Kozlovski
>Assignee: Stanislav Kozlovski
>Priority: Minor
>
> As was mentioned by [~jsancio] in 
> [https://github.com/apache/kafka/pull/7574#discussion_r337621762] , it would 
> make sense to apply the same replica validation we apply to the KIP-455 
> reassignments API.
> Namely, validate that the replicas:
> * are not empty (e.g. [])
> * are not negative (e.g. [1,2,-1])
> * are not brokers that are not part of the cluster or are otherwise unhealthy 
> (e.g. not in the /brokers zNode)
> The last liveness validation is subject to comments. We are re-evaluating 
> whether we want to enforce it for the API.
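The three checks listed above can be sketched as a standalone helper. This is illustrative only; the real controller-side validation is structured differently and the names here are hypothetical:

```java
import java.util.List;
import java.util.Set;

public class ReplicaValidator {
    // Returns null when the replica list is valid, otherwise a human-readable
    // error covering the three cases: empty, negative id, unknown broker.
    public static String validate(List<Integer> replicas, Set<Integer> liveBrokers) {
        if (replicas.isEmpty()) {
            return "replica list must not be empty";
        }
        for (int broker : replicas) {
            if (broker < 0) {
                return "invalid broker id " + broker;
            }
            if (!liveBrokers.contains(broker)) {
                return "broker " + broker + " is not a live member of the cluster";
            }
        }
        return null;
    }
}
```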





[jira] [Resolved] (KAFKA-15717) KRaft support in LeaderEpochIntegrationTest

2024-02-05 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15717.

Fix Version/s: 3.8.0
   Resolution: Fixed

> KRaft support in LeaderEpochIntegrationTest
> ---
>
> Key: KAFKA-15717
> URL: https://issues.apache.org/jira/browse/KAFKA-15717
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
> Fix For: 3.8.0
>
>
> The following tests in LeaderEpochIntegrationTest in 
> core/src/test/scala/unit/kafka/server/epoch/LeaderEpochIntegrationTest.scala 
> need to be updated to support KRaft
> 67 : def shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader(): 
> Unit = {
> 99 : def shouldSendLeaderEpochRequestAndGetAResponse(): Unit = {
> 144 : def shouldIncreaseLeaderEpochBetweenLeaderRestarts(): Unit = {
> Scanned 305 lines. Found 0 KRaft tests out of 3 tests





[jira] [Resolved] (KAFKA-15728) KRaft support in DescribeUserScramCredentialsRequestNotAuthorizedTest

2024-02-02 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15728.

Fix Version/s: 3.8.0
   Resolution: Fixed

> KRaft support in DescribeUserScramCredentialsRequestNotAuthorizedTest
> -
>
> Key: KAFKA-15728
> URL: https://issues.apache.org/jira/browse/KAFKA-15728
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Assignee: Zihao Lin
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
> Fix For: 3.8.0
>
>
> The following tests in DescribeUserScramCredentialsRequestNotAuthorizedTest 
> in 
> core/src/test/scala/unit/kafka/server/DescribeUserScramCredentialsRequestNotAuthorizedTest.scala
>  need to be updated to support KRaft
> 39 : def testDescribeNotAuthorized(): Unit = {
> Scanned 52 lines. Found 0 KRaft tests out of 1 tests





[jira] [Resolved] (KAFKA-10047) Unnecessary widening of (int to long) scope in FloatSerializer

2024-02-02 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-10047.

Fix Version/s: 3.8.0
   Resolution: Fixed

> Unnecessary widening of (int to long) scope in FloatSerializer
> --
>
> Key: KAFKA-10047
> URL: https://issues.apache.org/jira/browse/KAFKA-10047
> Project: Kafka
>  Issue Type: Task
>  Components: clients
>Reporter: Guru Tahasildar
>Priority: Trivial
> Fix For: 3.8.0
>
>
> The following code is present in FloatSerializer:
> {code}
> long bits = Float.floatToRawIntBits(data);
> return new byte[] {
> (byte) (bits >>> 24),
> (byte) (bits >>> 16),
> (byte) (bits >>> 8),
> (byte) bits
> };
> {code}
> {{Float.floatToRawIntBits()}} returns an {{int}} but the result is assigned 
> to a {{long}}, so there is a widening of scope. This is not needed for any 
> subsequent operations, hence it can be changed to use {{int}}.
> I would like to volunteer to make this change.
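As a sketch of the proposed change (not the actual committed patch), the widening disappears if the local variable is declared {{int}}; the serialized bytes are identical either way, since only the variable type changes:

```java
import java.util.Arrays;

public class FloatSerializerSketch {
    // Sketch of the suggested fix: use int, matching the return type of floatToRawIntBits.
    static byte[] serialize(float data) {
        int bits = Float.floatToRawIntBits(data);
        return new byte[] {
            (byte) (bits >>> 24),
            (byte) (bits >>> 16),
            (byte) (bits >>> 8),
            (byte) bits
        };
    }

    public static void main(String[] args) {
        // The widened (long) version from the ticket produces the same bytes.
        long bits = Float.floatToRawIntBits(1.5f);
        byte[] widened = new byte[] {
            (byte) (bits >>> 24), (byte) (bits >>> 16), (byte) (bits >>> 8), (byte) bits
        };
        System.out.println(Arrays.equals(serialize(1.5f), widened)); // true
    }
}
```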



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-5561) Java based TopicCommand to use the Admin client

2024-02-02 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-5561.
---
Resolution: Duplicate

> Java based TopicCommand to use the Admin client
> ---
>
> Key: KAFKA-5561
> URL: https://issues.apache.org/jira/browse/KAFKA-5561
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Paolo Patierno
>Assignee: Paolo Patierno
>Priority: Major
>
> Hi, 
> as suggested in https://issues.apache.org/jira/browse/KAFKA-3331, it 
> would be great to have TopicCommand use the new Admin client instead of 
> the way it works today.
> As encouraged by [~gwenshap] in the above JIRA, I'm going to work on it.
> Thanks,
> Paolo



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16204) Stray file core/00000000000000000001.snapshot created when running core tests

2024-01-30 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-16204.

Fix Version/s: 3.8.0
   Resolution: Fixed

> Stray file core/00000000000000000001.snapshot created when running core tests
> -
>
> Key: KAFKA-16204
> URL: https://issues.apache.org/jira/browse/KAFKA-16204
> Project: Kafka
>  Issue Type: Improvement
>  Components: core, unit tests
>Reporter: Mickael Maison
>Assignee: Gaurav Narula
>Priority: Major
>  Labels: newbie, newbie++
> Fix For: 3.8.0
>
>
> When running the core tests I often get a file called 
> core/00000000000000000001.snapshot created in my kafka folder. It looks like 
> one of the tests does not clean up its resources properly.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16204) Stray file core/00000000000000000001.snapshot created when running core tests

2024-01-29 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16204:
--

 Summary: Stray file core/00000000000000000001.snapshot created 
when running core tests
 Key: KAFKA-16204
 URL: https://issues.apache.org/jira/browse/KAFKA-16204
 Project: Kafka
  Issue Type: Improvement
  Components: core, unit tests
Reporter: Mickael Maison


When running the core tests I often get a file called 
core/00000000000000000001.snapshot created in my kafka folder. It looks like 
one of the tests does not clean up its resources properly.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16202) Extra dot in error message in producer

2024-01-29 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16202:
--

 Summary: Extra dot in error message in producer
 Key: KAFKA-16202
 URL: https://issues.apache.org/jira/browse/KAFKA-16202
 Project: Kafka
  Issue Type: Improvement
Reporter: Mickael Maison


If the broker hits a StorageException while handling a record from the 
producer, the producer prints the following warning:

[2024-01-29 15:33:30,722] WARN [Producer clientId=console-producer] Received 
invalid metadata error in produce request on partition topic1-0 due to 
org.apache.kafka.common.errors.KafkaStorageException: Disk error when trying to 
access log file on the disk.. Going to request metadata update now 
(org.apache.kafka.clients.producer.internals.Sender)

There's an extra dot between "disk" and "Going".
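A minimal reconstruction of how the double dot can arise (the template and method here are hypothetical, not the actual Sender code): the exception message already ends with a period, and the log template appends another sentence starting with ". ".

```java
public class ProducerWarnSketch {
    // Hypothetical log template; the real Sender builds this line differently.
    static String warn(String partition, String exceptionMessage) {
        return "Received invalid metadata error in produce request on partition "
            + partition + " due to " + exceptionMessage
            + ". Going to request metadata update now";
    }

    public static void main(String[] args) {
        String msg = warn("topic1-0",
            "org.apache.kafka.common.errors.KafkaStorageException: "
            + "Disk error when trying to access log file on the disk.");
        // The exception message ends in a period, so the output contains "disk.. Going"
        System.out.println(msg.contains(".. Going"));
    }
}
```

One possible fix would be to strip a trailing period from the exception message before appending the template's own sentence.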



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16003) The znode /config/topics is not updated during KRaft migration in "dual-write" mode

2024-01-25 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-16003.

Fix Version/s: 3.8.0
   Resolution: Fixed

> The znode /config/topics is not updated during KRaft migration in 
> "dual-write" mode
> ---
>
> Key: KAFKA-16003
> URL: https://issues.apache.org/jira/browse/KAFKA-16003
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 3.6.1
>Reporter: Paolo Patierno
>Assignee: Mickael Maison
>Priority: Major
> Fix For: 3.8.0
>
>
> I tried the following scenario ...
> I have a ZooKeeper-based cluster and create a my-topic-1 topic (without 
> specifying any specific configuration for it). The correct znodes are created 
> under /config/topics and /brokers/topics.
> I start a migration to KRaft but do not move forward from "dual write" mode. 
> While in this mode, I create a new my-topic-2 topic (still without any 
> specific config). I see that a new znode is created under /brokers/topics but 
> NOT under /config/topics. It seems that the KRaft controller is not updating 
> this information in ZooKeeper during the dual-write. The controller log shows 
> that the write to ZooKeeper was done, but apparently not everything:
> {code:java}
> 2023-12-13 10:23:26,229 TRACE [KRaftMigrationDriver id=3] Create Topic 
> my-topic-2, ID Macbp8BvQUKpzmq2vG_8dA. Transitioned migration state from 
> ZkMigrationLeadershipState{kraftControllerId=3, kraftControllerEpoch=7, 
> kraftMetadataOffset=445, kraftMetadataEpoch=7, 
> lastUpdatedTimeMs=1702462785587, migrationZkVersion=236, controllerZkEpoch=3, 
> controllerZkVersion=3} to ZkMigrationLeadershipState{kraftControllerId=3, 
> kraftControllerEpoch=7, kraftMetadataOffset=445, kraftMetadataEpoch=7, 
> lastUpdatedTimeMs=1702462785587, migrationZkVersion=237, controllerZkEpoch=3, 
> controllerZkVersion=3} 
> (org.apache.kafka.metadata.migration.KRaftMigrationDriver) 
> [controller-3-migration-driver-event-handler]
> 2023-12-13 10:23:26,229 DEBUG [KRaftMigrationDriver id=3] Made the following 
> ZK writes when handling KRaft delta: {CreateTopic=1} 
> (org.apache.kafka.metadata.migration.KRaftMigrationDriver) 
> [controller-3-migration-driver-event-handler] {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-7957) Flaky Test DynamicBrokerReconfigurationTest#testMetricsReporterUpdate

2024-01-25 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-7957.
---
Resolution: Fixed

> Flaky Test DynamicBrokerReconfigurationTest#testMetricsReporterUpdate
> -
>
> Key: KAFKA-7957
> URL: https://issues.apache.org/jira/browse/KAFKA-7957
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Assignee: Mickael Maison
>Priority: Blocker
>  Labels: flaky-test
> Fix For: 3.8.0
>
>
> To get stable nightly builds for `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/18/]
> {quote}java.lang.AssertionError: Messages not sent
> at kafka.utils.TestUtils$.fail(TestUtils.scala:356)
> at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:766)
> at kafka.server.DynamicBrokerReconfigurationTest.startProduceConsume(DynamicBrokerReconfigurationTest.scala:1270)
> at kafka.server.DynamicBrokerReconfigurationTest.testMetricsReporterUpdate(DynamicBrokerReconfigurationTest.scala:650){quote}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16188) Delete deprecated kafka.common.MessageReader

2024-01-24 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16188:
--

 Summary: Delete deprecated kafka.common.MessageReader
 Key: KAFKA-16188
 URL: https://issues.apache.org/jira/browse/KAFKA-16188
 Project: Kafka
  Issue Type: Task
Reporter: Mickael Maison
Assignee: Mickael Maison
 Fix For: 4.0.0


[KIP-641|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=158866569]
 introduced org.apache.kafka.tools.api.RecordReader and deprecated 
kafka.common.MessageReader in Kafka 3.5.0.

We should delete kafka.common.MessageReader in Kafka 4.0.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16170) Continuous never ending logs observed when running single node kafka in kraft mode with default KRaft properties in 3.7.0 RC2

2024-01-19 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-16170.

Resolution: Duplicate

Duplicate of https://issues.apache.org/jira/browse/KAFKA-16144

> Continuous never ending logs observed when running single node kafka in kraft 
> mode with default KRaft properties in 3.7.0 RC2
> -
>
> Key: KAFKA-16170
> URL: https://issues.apache.org/jira/browse/KAFKA-16170
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.7.0
>Reporter: Vedarth Sharma
>Priority: Major
> Attachments: kafka_logs.txt
>
>
> After Kafka server startup, endless logs are observed, even when the server is 
> sitting idle. This behaviour was not observed in previous versions.
> It is easy to reproduce this issue:
>  * Download the RC tarball for 3.7.0
>  * Follow the [quickstart guide|https://kafka.apache.org/quickstart] to run 
> Kafka in KRaft mode, i.e. execute the following commands:
>  ** KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"
>  ** bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c 
> config/kraft/server.properties
>  ** bin/kafka-server-start.sh config/kraft/server.properties
>  * Once the Kafka server is started, wait for a few seconds and you should see 
> endless logs coming in.
> I have attached a small section of the logs in the ticket, taken just after the 
> Kafka startup line, to showcase the nature of the endless logs observed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16163) Constant resignation/reelection of controller when starting a single node in combined mode

2024-01-18 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16163:
--

 Summary: Constant resignation/reelection of controller when 
starting a single node in combined mode
 Key: KAFKA-16163
 URL: https://issues.apache.org/jira/browse/KAFKA-16163
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.7.0
Reporter: Mickael Maison


When starting a single node in combined mode:
{noformat}
$ KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"
$ bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c 
config/kraft/server.properties
$ bin/kafka-server-start.sh config/kraft/server.properties{noformat}
 

it's constantly spamming the logs with:
{noformat}
[2024-01-18 17:37:09,065] INFO 
[broker-1-to-controller-heartbeat-channel-manager]: Recorded new controller, 
from now on will use node localhost:9093 (id: 1 rack: null) 
(kafka.server.NodeToControllerRequestThread)
[2024-01-18 17:37:11,967] INFO [RaftManager id=1] Did not receive fetch request 
from the majority of the voters within 3000ms. Current fetched voters are []. 
(org.apache.kafka.raft.LeaderState)
[2024-01-18 17:37:11,967] INFO [RaftManager id=1] Completed transition to 
ResignedState(localId=1, epoch=138, voters=[1], electionTimeoutMs=1864, 
unackedVoters=[], preferredSuccessors=[]) from Leader(localId=1, epoch=138, 
epochStartOffset=829, highWatermark=Optional[LogOffsetMetadata(offset=835, 
metadata=Optional[(segmentBaseOffset=0,relativePositionInSegment=62788)])], 
voterStates={1=ReplicaState(nodeId=1, 
endOffset=Optional[LogOffsetMetadata(offset=835, 
metadata=Optional[(segmentBaseOffset=0,relativePositionInSegment=62788)])], 
lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true)}) 
(org.apache.kafka.raft.QuorumState)
[2024-01-18 17:37:13,072] INFO [NodeToControllerChannelManager id=1 
name=heartbeat] Client requested disconnect from node 1 
(org.apache.kafka.clients.NetworkClient)
[2024-01-18 17:37:13,072] INFO 
[broker-1-to-controller-heartbeat-channel-manager]: Recorded new controller, 
from now on will use node localhost:9093 (id: 1 rack: null) 
(kafka.server.NodeToControllerRequestThread)
[2024-01-18 17:37:13,123] INFO 
[broker-1-to-controller-heartbeat-channel-manager]: Recorded new controller, 
from now on will use node localhost:9093 (id: 1 rack: null) 
(kafka.server.NodeToControllerRequestThread)
[2024-01-18 17:37:13,124] INFO [NodeToControllerChannelManager id=1 
name=heartbeat] Client requested disconnect from node 1 
(org.apache.kafka.clients.NetworkClient)
[2024-01-18 17:37:13,124] INFO 
[broker-1-to-controller-heartbeat-channel-manager]: Recorded new controller, 
from now on will use node localhost:9093 (id: 1 rack: null) 
(kafka.server.NodeToControllerRequestThread)
[2024-01-18 17:37:13,175] INFO 
[broker-1-to-controller-heartbeat-channel-manager]: Recorded new controller, 
from now on will use node localhost:9093 (id: 1 rack: null) 
(kafka.server.NodeToControllerRequestThread)
[2024-01-18 17:37:13,176] INFO [NodeToControllerChannelManager id=1 
name=heartbeat] Client requested disconnect from node 1 
(org.apache.kafka.clients.NetworkClient)
[2024-01-18 17:37:13,176] INFO 
[broker-1-to-controller-heartbeat-channel-manager]: Recorded new controller, 
from now on will use node localhost:9093 (id: 1 rack: null) 
(kafka.server.NodeToControllerRequestThread)
[2024-01-18 17:37:13,227] INFO 
[broker-1-to-controller-heartbeat-channel-manager]: Recorded new controller, 
from now on will use node localhost:9093 (id: 1 rack: null) 
(kafka.server.NodeToControllerRequestThread)
[2024-01-18 17:37:13,229] INFO [NodeToControllerChannelManager id=1 
name=heartbeat] Client requested disconnect from node 1 
(org.apache.kafka.clients.NetworkClient)
[2024-01-18 17:37:13,229] INFO 
[broker-1-to-controller-heartbeat-channel-manager]: Recorded new controller, 
from now on will use node localhost:9093 (id: 1 rack: null) 
(kafka.server.NodeToControllerRequestThread)
[2024-01-18 17:37:13,279] INFO 
[broker-1-to-controller-heartbeat-channel-manager]: Recorded new controller, 
from now on will use node localhost:9093 (id: 1 rack: null) 
(kafka.server.NodeToControllerRequestThread){noformat}
This did not happen in 3.6.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16153) kraft_upgrade_test system test is broken

2024-01-17 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16153:
--

 Summary: kraft_upgrade_test system test is broken
 Key: KAFKA-16153
 URL: https://issues.apache.org/jira/browse/KAFKA-16153
 Project: Kafka
  Issue Type: Bug
  Components: system tests
Reporter: Mickael Maison


I get the following failure from all `from_kafka_version` versions:


Command '/opt/kafka-dev/bin/kafka-features.sh --bootstrap-server 
ducker05:9092,ducker06:9092,ducker07:9092 upgrade --metadata 3.8' returned 
non-zero exit status 1. Remote error message: b'SLF4J: Class path contains 
multiple SLF4J bindings.\nSLF4J: Found binding in 
[jar:file:/opt/kafka-dev/tools/build/dependant-libs-2.13.12/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]\nSLF4J:
 Found binding in 
[jar:file:/opt/kafka-dev/trogdor/build/dependant-libs-2.13.12/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]\nSLF4J:
 See http://www.slf4j.org/codes.html#multiple_bindings for an 
explanation.\nSLF4J: Actual binding is of type 
[org.slf4j.impl.Reload4jLoggerFactory]\nUnsupported metadata version 3.8. 
Supported metadata versions are 3.3-IV0, 3.3-IV1, 3.3-IV2, 3.3-IV3, 3.4-IV0, 
3.5-IV0, 3.5-IV1, 3.5-IV2, 3.6-IV0, 3.6-IV1, 3.6-IV2, 3.7-IV0, 3.7-IV1, 
3.7-IV2, 3.7-IV3, 3.7-IV4, 3.8-IV0\n'



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15740) KRaft support in DeleteOffsetsConsumerGroupCommandIntegrationTest

2024-01-15 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15740.

Fix Version/s: 3.8.0
   Resolution: Fixed

> KRaft support in DeleteOffsetsConsumerGroupCommandIntegrationTest
> -
>
> Key: KAFKA-15740
> URL: https://issues.apache.org/jira/browse/KAFKA-15740
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Assignee: Zihao Lin
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
> Fix For: 3.8.0
>
>
> The following tests in DeleteOffsetsConsumerGroupCommandIntegrationTest in 
> core/src/test/scala/unit/kafka/admin/DeleteOffsetsConsumerGroupCommandIntegrationTest.scala
>  need to be updated to support KRaft
> 49 : def testDeleteOffsetsNonExistingGroup(): Unit = {
> 59 : def testDeleteOffsetsOfStableConsumerGroupWithTopicPartition(): Unit = {
> 64 : def testDeleteOffsetsOfStableConsumerGroupWithTopicOnly(): Unit = {
> 69 : def testDeleteOffsetsOfStableConsumerGroupWithUnknownTopicPartition(): 
> Unit = {
> 74 : def testDeleteOffsetsOfStableConsumerGroupWithUnknownTopicOnly(): Unit = 
> {
> 79 : def testDeleteOffsetsOfEmptyConsumerGroupWithTopicPartition(): Unit = {
> 84 : def testDeleteOffsetsOfEmptyConsumerGroupWithTopicOnly(): Unit = {
> 89 : def testDeleteOffsetsOfEmptyConsumerGroupWithUnknownTopicPartition(): 
> Unit = {
> 94 : def testDeleteOffsetsOfEmptyConsumerGroupWithUnknownTopicOnly(): Unit = {
> Scanned 198 lines. Found 0 KRaft tests out of 9 tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16130) Test migration rollback

2024-01-15 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16130:
--

 Summary: Test migration rollback
 Key: KAFKA-16130
 URL: https://issues.apache.org/jira/browse/KAFKA-16130
 Project: Kafka
  Issue Type: Sub-task
Reporter: Mickael Maison
 Fix For: 3.8.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16119) kraft_upgrade_test system test is broken

2024-01-12 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-16119.

Resolution: Invalid

After rebuilding my environment from scratch I don't see this error anymore

> kraft_upgrade_test system test is broken
> 
>
> Key: KAFKA-16119
> URL: https://issues.apache.org/jira/browse/KAFKA-16119
> Project: Kafka
>  Issue Type: New Feature
>Affects Versions: 3.6.0, 3.7.0, 3.6.1
>Reporter: Mickael Maison
>Priority: Major
>
> When the test attempts to restart brokers after the upgrade, brokers fail 
> with:
> [2024-01-12 13:43:40,144] ERROR Exiting Kafka due to fatal exception 
> (kafka.Kafka$)
> java.lang.NoClassDefFoundError: 
> org/apache/kafka/image/loader/MetadataLoaderMetrics
> at kafka.server.KafkaRaftServer.(KafkaRaftServer.scala:68)
> at kafka.Kafka$.buildServer(Kafka.scala:83)
> at kafka.Kafka$.main(Kafka.scala:91)
> at kafka.Kafka.main(Kafka.scala)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.kafka.image.loader.MetadataLoaderMetrics
> at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
> ... 4 more
> MetadataLoaderMetrics was moved from org.apache.kafka.image.loader to 
> org.apache.kafka.image.loader.metrics in 
> https://github.com/apache/kafka/commit/c7de30f38bfd6e2d62a0b5c09b5dc9707e58096b



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16119) kraft_upgrade_test system test is broken

2024-01-12 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16119:
--

 Summary: kraft_upgrade_test system test is broken
 Key: KAFKA-16119
 URL: https://issues.apache.org/jira/browse/KAFKA-16119
 Project: Kafka
  Issue Type: New Feature
Affects Versions: 3.6.1, 3.6.0, 3.7.0
Reporter: Mickael Maison


When the test attempts to restart brokers after the upgrade, brokers fail with:

[2024-01-12 13:43:40,144] ERROR Exiting Kafka due to fatal exception 
(kafka.Kafka$)
java.lang.NoClassDefFoundError: 
org/apache/kafka/image/loader/MetadataLoaderMetrics
at kafka.server.KafkaRaftServer.(KafkaRaftServer.scala:68)
at kafka.Kafka$.buildServer(Kafka.scala:83)
at kafka.Kafka$.main(Kafka.scala:91)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.lang.ClassNotFoundException: 
org.apache.kafka.image.loader.MetadataLoaderMetrics
at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 4 more

MetadataLoaderMetrics was moved from org.apache.kafka.image.loader to 
org.apache.kafka.image.loader.metrics in 
https://github.com/apache/kafka/commit/c7de30f38bfd6e2d62a0b5c09b5dc9707e58096b



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15747) KRaft support in DynamicConnectionQuotaTest

2024-01-10 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15747.

Fix Version/s: 3.8.0
   Resolution: Fixed

> KRaft support in DynamicConnectionQuotaTest
> ---
>
> Key: KAFKA-15747
> URL: https://issues.apache.org/jira/browse/KAFKA-15747
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
> Fix For: 3.8.0
>
>
> The following tests in DynamicConnectionQuotaTest in 
> core/src/test/scala/integration/kafka/network/DynamicConnectionQuotaTest.scala
>  need to be updated to support KRaft
> 77 : def testDynamicConnectionQuota(): Unit = {
> 104 : def testDynamicListenerConnectionQuota(): Unit = {
> 175 : def testDynamicListenerConnectionCreationRateQuota(): Unit = {
> 237 : def testDynamicIpConnectionRateQuota(): Unit = {
> Scanned 416 lines. Found 0 KRaft tests out of 4 tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15741) KRaft support in DescribeConsumerGroupTest

2024-01-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15741.

Fix Version/s: 3.8.0
   Resolution: Fixed

> KRaft support in DescribeConsumerGroupTest
> --
>
> Key: KAFKA-15741
> URL: https://issues.apache.org/jira/browse/KAFKA-15741
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Assignee: Zihao Lin
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
> Fix For: 3.8.0
>
>
> The following tests in DescribeConsumerGroupTest in 
> core/src/test/scala/unit/kafka/admin/DescribeConsumerGroupTest.scala need to 
> be updated to support KRaft
> 39 : def testDescribeNonExistingGroup(): Unit = {
> 55 : def testDescribeWithMultipleSubActions(): Unit = {
> 76 : def testDescribeWithStateValue(): Unit = {
> 97 : def testDescribeOffsetsOfNonExistingGroup(): Unit = {
> 113 : def testDescribeMembersOfNonExistingGroup(): Unit = {
> 133 : def testDescribeStateOfNonExistingGroup(): Unit = {
> 151 : def testDescribeExistingGroup(): Unit = {
> 169 : def testDescribeExistingGroups(): Unit = {
> 194 : def testDescribeAllExistingGroups(): Unit = {
> 218 : def testDescribeOffsetsOfExistingGroup(): Unit = {
> 239 : def testDescribeMembersOfExistingGroup(): Unit = {
> 272 : def testDescribeStateOfExistingGroup(): Unit = {
> 291 : def testDescribeStateOfExistingGroupWithRoundRobinAssignor(): Unit = {
> 310 : def testDescribeExistingGroupWithNoMembers(): Unit = {
> 334 : def testDescribeOffsetsOfExistingGroupWithNoMembers(): Unit = {
> 366 : def testDescribeMembersOfExistingGroupWithNoMembers(): Unit = {
> 390 : def testDescribeStateOfExistingGroupWithNoMembers(): Unit = {
> 417 : def testDescribeWithConsumersWithoutAssignedPartitions(): Unit = {
> 436 : def testDescribeOffsetsWithConsumersWithoutAssignedPartitions(): Unit = 
> {
> 455 : def testDescribeMembersWithConsumersWithoutAssignedPartitions(): Unit = 
> {
> 480 : def testDescribeStateWithConsumersWithoutAssignedPartitions(): Unit = {
> 496 : def testDescribeWithMultiPartitionTopicAndMultipleConsumers(): Unit = {
> 517 : def testDescribeOffsetsWithMultiPartitionTopicAndMultipleConsumers(): 
> Unit = {
> 539 : def testDescribeMembersWithMultiPartitionTopicAndMultipleConsumers(): 
> Unit = {
> 565 : def testDescribeStateWithMultiPartitionTopicAndMultipleConsumers(): 
> Unit = {
> 583 : def testDescribeSimpleConsumerGroup(): Unit = {
> 601 : def testDescribeGroupWithShortInitializationTimeout(): Unit = {
> 618 : def testDescribeGroupOffsetsWithShortInitializationTimeout(): Unit = {
> 634 : def testDescribeGroupMembersWithShortInitializationTimeout(): Unit = {
> 652 : def testDescribeGroupStateWithShortInitializationTimeout(): Unit = {
> 668 : def testDescribeWithUnrecognizedNewConsumerOption(): Unit = {
> 674 : def testDescribeNonOffsetCommitGroup(): Unit = {
> Scanned 699 lines. Found 0 KRaft tests out of 32 tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15719) KRaft support in OffsetsForLeaderEpochRequestTest

2024-01-08 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15719.

Fix Version/s: 3.8.0
   Resolution: Fixed

> KRaft support in OffsetsForLeaderEpochRequestTest
> -
>
> Key: KAFKA-15719
> URL: https://issues.apache.org/jira/browse/KAFKA-15719
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Assignee: Zihao Lin
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
> Fix For: 3.8.0
>
>
> The following tests in OffsetsForLeaderEpochRequestTest in 
> core/src/test/scala/unit/kafka/server/OffsetsForLeaderEpochRequestTest.scala 
> need to be updated to support KRaft
> 37 : def testOffsetsForLeaderEpochErrorCodes(): Unit = {
> 60 : def testCurrentEpochValidation(): Unit = {
> Scanned 127 lines. Found 0 KRaft tests out of 2 tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15725) KRaft support in FetchRequestTest

2024-01-08 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15725.

Fix Version/s: 3.7.0
   Resolution: Fixed

> KRaft support in FetchRequestTest
> -
>
> Key: KAFKA-15725
> URL: https://issues.apache.org/jira/browse/KAFKA-15725
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
> Fix For: 3.7.0
>
>
> The following tests in FetchRequestTest in 
> core/src/test/scala/unit/kafka/server/FetchRequestTest.scala need to be 
> updated to support KRaft
> 45 : def testBrokerRespectsPartitionsOrderAndSizeLimits(): Unit = {
> 147 : def testFetchRequestV4WithReadCommitted(): Unit = {
> 165 : def testFetchRequestToNonReplica(): Unit = {
> 195 : def testLastFetchedEpochValidation(): Unit = {
> 200 : def testLastFetchedEpochValidationV12(): Unit = {
> 247 : def testCurrentEpochValidation(): Unit = {
> 252 : def testCurrentEpochValidationV12(): Unit = {
> 295 : def testEpochValidationWithinFetchSession(): Unit = {
> 300 : def testEpochValidationWithinFetchSessionV12(): Unit = {
> 361 : def testDownConversionWithConnectionFailure(): Unit = {
> 428 : def testDownConversionFromBatchedToUnbatchedRespectsOffset(): Unit = {
> 509 : def testCreateIncrementalFetchWithPartitionsInErrorV12(): Unit = {
> 568 : def testFetchWithPartitionsWithIdError(): Unit = {
> 610 : def testZStdCompressedTopic(): Unit = {
> 657 : def testZStdCompressedRecords(): Unit = {
> Scanned 783 lines. Found 0 KRaft tests out of 15 tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15980) Add KIP-1001 CurrentControllerId metric

2023-12-19 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15980.

Fix Version/s: 3.7.0
   Resolution: Fixed

> Add KIP-1001 CurrentControllerId metric
> ---
>
> Key: KAFKA-15980
> URL: https://issues.apache.org/jira/browse/KAFKA-15980
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Colin McCabe
>Assignee: Colin McCabe
>Priority: Major
> Fix For: 3.7.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15000) High vulnerability PRISMA-2023-0067 reported in jackson-core

2023-12-19 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15000.

Resolution: Fixed

> High vulnerability PRISMA-2023-0067 reported in jackson-core
> 
>
> Key: KAFKA-15000
> URL: https://issues.apache.org/jira/browse/KAFKA-15000
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.4.0, 3.3.2, 3.5.1
>Reporter: Arushi Rai
>Assignee: Said BOUDJELDA
>Priority: Critical
> Fix For: 3.7.0
>
>
> Kafka is using jackson-core version 2.13.4, which has a high-severity 
> vulnerability reported: 
> [PRISMA-2023-0067|https://github.com/FasterXML/jackson-core/pull/827]
> This vulnerability is fixed in jackson-core 2.15.0 and Kafka should upgrade to 
> the same. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16005) ZooKeeper to KRaft migration rollback missing disabling controller and migration configuration on brokers

2023-12-14 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-16005.

Fix Version/s: 3.7.0
   Resolution: Fixed

> ZooKeeper to KRaft migration rollback missing disabling controller and 
> migration configuration on brokers
> -
>
> Key: KAFKA-16005
> URL: https://issues.apache.org/jira/browse/KAFKA-16005
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.6.1
>Reporter: Paolo Patierno
>Assignee: Paolo Patierno
>Priority: Major
> Fix For: 3.7.0
>
>
> I was following the latest documentation additions to try the rollback 
> process of a ZK cluster migrating to KRaft, while it's still in dual-write 
> mode: 
> [https://github.com/apache/kafka/pull/14160/files#diff-e4e8d893dc2a4e999c96713dd5b5857203e0756860df0e70fb0cb041aa4d347bR3786]
> The first point is just about stopping the broker, deleting the 
> __cluster_metadata folder and restarting the broker.
> I think it's missing at least the following steps:
>  * removing/disabling the ZooKeeper migration flag
>  * removing all properties related to controllers configuration (i.e. 
> controller.quorum.voters, controller.listener.names, ...)
> Without those steps, when the broker restarts, the broker re-creates 
> the __cluster_metadata folder (because it syncs with the controllers while they 
> are still running).
> Also, when the controllers stop, the broker starts to raise exceptions like 
> this:
> {code:java}
> [2023-12-13 15:22:28,437] DEBUG [BrokerToControllerChannelManager id=0 
> name=quorum] Connection with localhost/127.0.0.1 (channelId=1) disconnected 
> (org.apache.kafka.common.network.Selector)
> java.net.ConnectException: Connection refused
> at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
> at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
> at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224)
> at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526)
> at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:571)
> at org.apache.kafka.server.util.InterBrokerSendThread.pollOnce(InterBrokerSendThread.java:109)
> at kafka.server.BrokerToControllerRequestThread.doWork(BrokerToControllerChannelManager.scala:421)
> at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:130)
> [2023-12-13 15:22:28,438] INFO [BrokerToControllerChannelManager id=0 
> name=quorum] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient)
> [2023-12-13 15:22:28,438] WARN [BrokerToControllerChannelManager id=0 name=quorum] 
> Connection to node 1 (localhost/127.0.0.1:9093) could not be established. 
> Broker may not be available. (org.apache.kafka.clients.NetworkClient) {code}
> (where I have the controller running locally on port 9093)





[jira] [Created] (KAFKA-15995) Mechanism for plugins and connectors to register metrics

2023-12-12 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-15995:
--

 Summary: Mechanism for plugins and connectors to register metrics
 Key: KAFKA-15995
 URL: https://issues.apache.org/jira/browse/KAFKA-15995
 Project: Kafka
  Issue Type: New Feature
Reporter: Mickael Maison
Assignee: Mickael Maison


Ticket for 
[KIP-877|https://cwiki.apache.org/confluence/display/KAFKA/KIP-877%3A+Mechanism+for+plugins+and+connectors+to+register+metrics]





[jira] [Resolved] (KAFKA-15714) KRaft support in DynamicNumNetworkThreadsTest

2023-12-10 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15714.

Fix Version/s: 3.7.0
   Resolution: Fixed

> KRaft support in DynamicNumNetworkThreadsTest
> -
>
> Key: KAFKA-15714
> URL: https://issues.apache.org/jira/browse/KAFKA-15714
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Assignee: Sameer Tejani
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
> Fix For: 3.7.0
>
>
> The following tests in DynamicNumNetworkThreadsTest in 
> core/src/test/scala/integration/kafka/network/DynamicNumNetworkThreadsTest.scala
>  need to be updated to support KRaft
> 58 : def testDynamicNumNetworkThreads(): Unit = {
> Scanned 103 lines. Found 0 KRaft tests out of 1 tests





[jira] [Created] (KAFKA-15973) quota_test.py system tests are flaky

2023-12-05 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-15973:
--

 Summary: quota_test.py system tests are flaky
 Key: KAFKA-15973
 URL: https://issues.apache.org/jira/browse/KAFKA-15973
 Project: Kafka
  Issue Type: Bug
  Components: core, system tests
Reporter: Mickael Maison


Stacktrace:
{noformat}
    TimeoutError("Kafka server didn't finish startup in 60 seconds")
Traceback (most recent call last):
  File 
"/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", line 
186, in _do_run
    data = self.run_test()
  File 
"/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", line 
246, in run_test
    return self.test_context.function(self.test)
  File "/usr/local/lib/python3.9/dist-packages/ducktape/mark/_mark.py", line 
433, in wrapper
    return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
  File "/opt/kafka-dev/tests/kafkatest/tests/client/quota_test.py", line 139, 
in test_quota
    self.kafka.start()
  File "/opt/kafka-dev/tests/kafkatest/services/kafka/kafka.py", line 654, in 
start
    self.wait_for_start(node, monitor, timeout_sec)
  File "/opt/kafka-dev/tests/kafkatest/services/kafka/kafka.py", line 879, in 
wait_for_start
    monitor.wait_until("Kafka\s*Server.*started", timeout_sec=timeout_sec, 
backoff_sec=.25,
  File 
"/usr/local/lib/python3.9/dist-packages/ducktape/cluster/remoteaccount.py", 
line 753, in wait_until
    return wait_until(lambda: self.acct.ssh("tail -c +%d %s | grep '%s'" % 
(self.offset + 1, self.log, pattern),
  File "/usr/local/lib/python3.9/dist-packages/ducktape/utils/util.py", line 
58, in wait_until
    raise TimeoutError(err_msg() if callable(err_msg) else err_msg) from 
last_exception
ducktape.errors.TimeoutError: Kafka server didn't finish startup in 60 
seconds{noformat}





[jira] [Resolved] (KAFKA-15645) Move ReplicationQuotasTestRig to tools

2023-12-05 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15645.

Fix Version/s: 3.7.0
   Resolution: Fixed

> Move ReplicationQuotasTestRig to tools
> --
>
> Key: KAFKA-15645
> URL: https://issues.apache.org/jira/browse/KAFKA-15645
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Nikolay Izhikov
>Assignee: Nikolay Izhikov
>Priority: Minor
> Fix For: 3.7.0
>
>
> The ReplicationQuotasTestRig class is used for measuring performance.
> It contains dependencies on the `ReassignPartitionCommand` API.
> To move all commands to tools, we must also move ReplicationQuotasTestRig to 
> tools.





[jira] [Created] (KAFKA-15912) Parallelize conversion and transformation steps in Connect

2023-11-28 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-15912:
--

 Summary: Parallelize conversion and transformation steps in Connect
 Key: KAFKA-15912
 URL: https://issues.apache.org/jira/browse/KAFKA-15912
 Project: Kafka
  Issue Type: Improvement
  Components: connect
Reporter: Mickael Maison


In busy Connect pipelines, the conversion and transformation steps can 
sometimes have a very significant impact on performance. This is especially 
true with large records with complex schemas, for example with CDC connectors.

Today in order to always preserve ordering, converters and transformations are 
called on one record at a time in a single thread in the Connect worker. As 
Connect usually handles records in batches (up to max.poll.records in sink 
pipelines, for source pipelines it depends on the connector), it could be 
highly beneficial to attempt running the converters and transformation chain in 
parallel on a pool of processing threads.

It should be possible to do some of these steps in parallel and still keep 
exact ordering. I'm even considering whether an option to lose ordering but 
allow even faster processing would make sense.
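As an illustrative sketch (not Connect's actual pipeline, and independent of any decision on this ticket), an ordering-preserving parallel step can submit one task per record to a thread pool and read the futures back positionally:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Function;

public class OrderedParallelTransform {
    // Applies `transform` to every record of a batch in parallel, then
    // collects the futures positionally so the output keeps the input order.
    static <T, R> List<R> transformBatch(List<T> batch, Function<T, R> transform,
                                         ExecutorService pool) throws Exception {
        List<Future<R>> futures = new ArrayList<>();
        for (T record : batch) {
            futures.add(pool.submit(() -> transform.apply(record)));
        }
        List<R> out = new ArrayList<>();
        for (Future<R> f : futures) {
            out.add(f.get()); // waits in submission order, preserving ordering
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        System.out.println(transformBatch(List.of("a", "b", "c"), String::toUpperCase, pool));
        pool.shutdown();
    }
}
```

Because results are consumed in submission order, downstream delivery order matches the input batch even though the transforms themselves run concurrently.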





[jira] [Resolved] (KAFKA-15464) Allow dynamic reloading of certificates with different DN / SANs

2023-11-24 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15464.

Fix Version/s: 3.7.0
   Resolution: Fixed

> Allow dynamic reloading of certificates with different DN / SANs
> 
>
> Key: KAFKA-15464
> URL: https://issues.apache.org/jira/browse/KAFKA-15464
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jakub Scholz
>Assignee: Jakub Scholz
>Priority: Major
> Fix For: 3.7.0
>
>
> Kafka currently doesn't allow dynamic reloading of keystores when the new key 
> has a different DN or removes some of the SANs. While it might help to 
> prevent users from breaking their cluster, in some cases it would be great to 
> be able to bypass this validation when desired.
> More details are in the [KIP-978: Allow dynamic reloading of certificates 
> with different DN / 
> SANs|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=263429128]





[jira] [Resolved] (KAFKA-15793) Flaky test ZkMigrationIntegrationTest.testMigrateTopicDeletions

2023-11-17 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15793.

Resolution: Fixed

> Flaky test ZkMigrationIntegrationTest.testMigrateTopicDeletions
> ---
>
> Key: KAFKA-15793
> URL: https://issues.apache.org/jira/browse/KAFKA-15793
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 3.7.0, 3.6.1
>Reporter: Divij Vaidya
>Assignee: David Arthur
>Priority: Major
>  Labels: flaky-test
> Fix For: 3.7.0, 3.6.1
>
> Attachments: Screenshot 2023-11-06 at 11.30.06.png
>
>
> The tests have been flaky since they were introduced in 
> [https://github.com/apache/kafka/pull/14545] (see picture attached).
> The stack traces for the flakiness can be found at 
> [https://ge.apache.org/scans/tests?search.relativeStartTime=P28D=kafka=trunk=Europe%2FBerlin=kafka.zk.ZkMigrationIntegrationTest]
>  





[jira] [Resolved] (KAFKA-15644) Fix CVE-2023-4586 in netty:handler

2023-10-26 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15644.

Fix Version/s: 3.7.0
   Resolution: Fixed

> Fix CVE-2023-4586 in netty:handler
> --
>
> Key: KAFKA-15644
> URL: https://issues.apache.org/jira/browse/KAFKA-15644
> Project: Kafka
>  Issue Type: Bug
>Reporter: Atul Sharma
>Assignee: Atul Sharma
>Priority: Major
> Fix For: 3.7.0
>
>
> Need to remediate CVE-2023-4586 
> Ref: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-4586





[jira] [Resolved] (KAFKA-15093) Add 3.5.0 to broker/client and streams upgrade/compatibility tests

2023-10-23 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15093.

Fix Version/s: 3.5.2
   3.7.0
   3.6.1
   Resolution: Fixed

> Add 3.5.0 to broker/client and streams upgrade/compatibility tests
> --
>
> Key: KAFKA-15093
> URL: https://issues.apache.org/jira/browse/KAFKA-15093
> Project: Kafka
>  Issue Type: Task
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Major
> Fix For: 3.5.2, 3.7.0, 3.6.1
>
>
> Per the penultimate bullet on the [release 
> checklist|https://cwiki.apache.org/confluence/display/KAFKA/Release+Process#ReleaseProcess-Afterthevotepasses],
>  Kafka v3.5.0 is released. We should add this version to the system tests.
> Example PRs:
>  * Broker and clients: [https://github.com/apache/kafka/pull/6794]
>  * Streams: [https://github.com/apache/kafka/pull/6597/files]





[jira] [Resolved] (KAFKA-15664) Add 3.4.0 streams upgrade/compatibility tests

2023-10-23 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15664.

Resolution: Fixed

> Add 3.4.0 streams upgrade/compatibility tests
> -
>
> Key: KAFKA-15664
> URL: https://issues.apache.org/jira/browse/KAFKA-15664
> Project: Kafka
>  Issue Type: Task
>  Components: streams, system tests
>Affects Versions: 3.5.0
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Critical
> Fix For: 3.5.2, 3.7.0, 3.6.1
>
>
> Per the penultimate bullet on the release checklist, Kafka v3.4.0 is 
> released. We should add this version to the system tests.
> Example PR: https://github.com/apache/kafka/pull/6597/files





[jira] [Created] (KAFKA-15664) Add 3.4.0 streams upgrade/compatibility tests

2023-10-21 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-15664:
--

 Summary: Add 3.4.0 streams upgrade/compatibility tests
 Key: KAFKA-15664
 URL: https://issues.apache.org/jira/browse/KAFKA-15664
 Project: Kafka
  Issue Type: Task
Reporter: Mickael Maison
Assignee: Mickael Maison


Per the penultimate bullet on the release checklist, Kafka v3.4.0 is released. 
We should add this version to the system tests.

Example PR: https://github.com/apache/kafka/pull/6597/files





[jira] [Created] (KAFKA-15630) Improve documentation of offset.lag.max

2023-10-18 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-15630:
--

 Summary: Improve documentation of offset.lag.max
 Key: KAFKA-15630
 URL: https://issues.apache.org/jira/browse/KAFKA-15630
 Project: Kafka
  Issue Type: Improvement
  Components: docs, mirrormaker
Reporter: Mickael Maison


It would be good to expand on the role of this configuration in offset 
translation and mention that it can be set to a smaller value, or even 0, to 
help in scenarios where records may not flow constantly.
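To make the suggestion concrete, a minimal MirrorMaker 2 properties fragment might look like this (cluster aliases and topic filter are illustrative):

```properties
# mm2.properties (cluster aliases are illustrative)
clusters = source, target
source->target.enabled = true
source->target.topics = .*
# The default of 100 means offsets are only synced once a task falls at
# least 100 records behind. Setting it to 0 emits an offset sync for every
# record, which helps offset translation on low-traffic topics.
offset.lag.max = 0
```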





[jira] [Created] (KAFKA-15622) Delete configs deprecated by KIP-629

2023-10-17 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-15622:
--

 Summary: Delete configs deprecated by KIP-629
 Key: KAFKA-15622
 URL: https://issues.apache.org/jira/browse/KAFKA-15622
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 4.0.0
Reporter: Mickael Maison
Assignee: Mickael Maison


[KIP-629|https://cwiki.apache.org/confluence/display/KAFKA/KIP-629%3A+Use+racially+neutral+terms+in+our+codebase]
 deprecated a bunch of configurations. We should delete them in the next major 
release.





[jira] [Resolved] (KAFKA-14684) Replace EasyMock and PowerMock with Mockito in WorkerSinkTaskThreadedTest

2023-10-16 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-14684.

Fix Version/s: 3.7.0
   Resolution: Fixed

> Replace EasyMock and PowerMock with Mockito in WorkerSinkTaskThreadedTest
> -
>
> Key: KAFKA-14684
> URL: https://issues.apache.org/jira/browse/KAFKA-14684
> Project: Kafka
>  Issue Type: Sub-task
>  Components: KafkaConnect
>Reporter: Hector Geraldino
>Assignee: Hector Geraldino
>Priority: Minor
> Fix For: 3.7.0
>
>






[jira] [Resolved] (KAFKA-15596) Upgrade ZooKeeper to 3.8.3

2023-10-12 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15596.

Fix Version/s: 3.7.0
   3.6.1
   Resolution: Fixed

> Upgrade ZooKeeper to 3.8.3
> --
>
> Key: KAFKA-15596
> URL: https://issues.apache.org/jira/browse/KAFKA-15596
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Major
> Fix For: 3.7.0, 3.6.1
>
>
> ZooKeeper 3.8.3 fixes 
> [CVE-2023-44981|https://www.cve.org/CVERecord?id=CVE-2023-44981] as described 
> in https://lists.apache.org/thread/7o6cch0gm7hzz0zcj2zs16hnl1dxm6oy





[jira] [Created] (KAFKA-15596) Upgrade ZooKeeper to 3.8.3

2023-10-12 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-15596:
--

 Summary: Upgrade ZooKeeper to 3.8.3
 Key: KAFKA-15596
 URL: https://issues.apache.org/jira/browse/KAFKA-15596
 Project: Kafka
  Issue Type: Improvement
Reporter: Mickael Maison
Assignee: Mickael Maison


ZooKeeper 3.8.3 fixes 
[CVE-2023-44981|https://www.cve.org/CVERecord?id=CVE-2023-44981] as described 
in https://lists.apache.org/thread/7o6cch0gm7hzz0zcj2zs16hnl1dxm6oy





[jira] [Resolved] (KAFKA-15521) Refactor build.gradle to align gradle swagger plugin with swagger dependencies

2023-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15521.

Fix Version/s: 3.7.0
   Resolution: Fixed

> Refactor build.gradle to align gradle swagger plugin with swagger dependencies
> --
>
> Key: KAFKA-15521
> URL: https://issues.apache.org/jira/browse/KAFKA-15521
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Reporter: Mickael Maison
>Assignee: Atul Sharma
>Priority: Major
> Fix For: 3.7.0
>
>
> We use both the Swagger Gradle plugin 
> "io.swagger.core.v3.swagger-gradle-plugin" and 2 Swagger dependencies 
> swaggerAnnotations and swaggerJaxrs2. The version for the Gradle plugin is in 
> build.gradle while the version for the dependency is in 
> gradle/dependencies.gradle.
> When we upgrade the version of one or the other it sometimes causes build 
> breakages, for example https://github.com/apache/kafka/pull/13387 and 
> https://github.com/apache/kafka/pull/14464
> We should try to have the version defined in a single place to avoid breaking 
> the build again.





[jira] [Created] (KAFKA-15549) Bump swagger dependency version

2023-10-05 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-15549:
--

 Summary: Bump swagger dependency version
 Key: KAFKA-15549
 URL: https://issues.apache.org/jira/browse/KAFKA-15549
 Project: Kafka
  Issue Type: Improvement
Reporter: Mickael Maison








[jira] [Resolved] (KAFKA-15500) Code bug in SslPrincipalMapper.java

2023-09-29 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15500.

Fix Version/s: 3.7.0
   Resolution: Fixed

> Code bug in SslPrincipalMapper.java
> ---
>
> Key: KAFKA-15500
> URL: https://issues.apache.org/jira/browse/KAFKA-15500
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, security
>Affects Versions: 3.5.1
>Reporter: Svyatoslav
>Assignee: Svyatoslav
>Priority: Major
> Fix For: 3.7.0
>
>
> Code bug in:
> {code:java}
> if (toLowerCase && result != null) {
>     result = result.toLowerCase(Locale.ENGLISH);
> } else if (toUpperCase & result != null) {
>     result = result.toUpperCase(Locale.ENGLISH);
> }
> {code}
> The second condition uses the non-short-circuit `&` operator where `&&` was 
> intended.
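For context, the only semantic difference between the two operators on booleans is short-circuiting: `&` always evaluates both operands, while `&&` skips the right-hand side when the left is false. A standalone sketch (unrelated to the SslPrincipalMapper code itself) shows the difference:

```java
public class ShortCircuitDemo {
    static boolean touched;

    // Right-hand operand with an observable side effect.
    static boolean sideEffect() {
        touched = true;
        return true;
    }

    public static void main(String[] args) {
        touched = false;
        boolean a = false && sideEffect(); // && short-circuits: sideEffect() never runs
        System.out.println("after &&: touched=" + touched);

        boolean b = false & sideEffect(); // & always evaluates both operands
        System.out.println("after &:  touched=" + touched);
    }
}
```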





[jira] [Created] (KAFKA-15521) Refactor build.gradle to align gradle swagger plugin with swagger dependencies

2023-09-29 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-15521:
--

 Summary: Refactor build.gradle to align gradle swagger plugin with 
swagger dependencies
 Key: KAFKA-15521
 URL: https://issues.apache.org/jira/browse/KAFKA-15521
 Project: Kafka
  Issue Type: Improvement
  Components: build
Reporter: Mickael Maison


We use both the Swagger Gradle plugin 
"io.swagger.core.v3.swagger-gradle-plugin" and 2 Swagger dependencies 
swaggerAnnotations and swaggerJaxrs2. The version for the Gradle plugin is in 
build.gradle while the version for the dependency is in 
gradle/dependencies.gradle.

When we upgrade the version of one or the other it sometimes causes build 
breakages, for example https://github.com/apache/kafka/pull/13387 and 
https://github.com/apache/kafka/pull/14464

We should try to have the version defined in a single place to avoid breaking 
the build again.







[jira] [Created] (KAFKA-15517) Improve MirrorMaker logging in case of authorization errors

2023-09-28 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-15517:
--

 Summary: Improve MirrorMaker logging in case of authorization 
errors
 Key: KAFKA-15517
 URL: https://issues.apache.org/jira/browse/KAFKA-15517
 Project: Kafka
  Issue Type: Improvement
  Components: mirrormaker
Reporter: Mickael Maison


In case MirrorMaker is missing DESCRIBE_CONFIGS on the source cluster, all 
you get in the logs are lines like:

{noformat}
2023-09-27 11:56:54,989 ERROR 
[my-cluster-source->my-cluster-target.MirrorSourceConnector|worker] Scheduler 
for MirrorSourceConnector caught exception in scheduled task: refreshing topics 
(org.apache.kafka.connect.mirror.Scheduler) [Scheduler for 
MirrorSourceConnector-refreshing topics]
java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.TopicAuthorizationException: Topic authorization 
failed.
{noformat}

It would be good to report the exact call that failed and include the cluster 
as well to make it easy to figure out which permissions are missing.
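One possible shape for this (a hypothetical helper, not MirrorMaker's actual code) is to wrap each remote call so the rethrown exception names both the call and the cluster alias:

```java
import java.util.function.Supplier;

public class DescriptiveErrors {
    // Hypothetical wrapper: re-throws any failure with the call name and
    // cluster alias attached, so logs identify which permission is missing
    // on which cluster.
    static <T> T callWithContext(String call, String cluster, Supplier<T> action) {
        try {
            return action.get();
        } catch (RuntimeException e) {
            throw new RuntimeException(
                "Call '" + call + "' failed against cluster '" + cluster + "'", e);
        }
    }

    public static void main(String[] args) {
        try {
            callWithContext("describeConfigs", "my-cluster-source", () -> {
                // Stand-in for a real AdminClient call that is denied.
                throw new RuntimeException("Topic authorization failed.");
            });
        } catch (RuntimeException e) {
            System.out.println(e.getMessage() + " (" + e.getCause().getMessage() + ")");
        }
    }
}
```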






[jira] [Created] (KAFKA-15469) Document built-in configuration providers

2023-09-15 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-15469:
--

 Summary: Document built-in configuration providers
 Key: KAFKA-15469
 URL: https://issues.apache.org/jira/browse/KAFKA-15469
 Project: Kafka
  Issue Type: Task
  Components: documentation
Reporter: Mickael Maison


Kafka has 3 built-in ConfigProvider implementations:
* DirectoryConfigProvider
* EnvVarConfigProvider
* FileConfigProvider

These don't appear anywhere in the documentation. We should at least mention 
them and probably even demonstrate how to use them.
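For example, FileConfigProvider can pull secrets out of a local properties file via the `${provider:path:key}` placeholder syntax (the file path and key below are illustrative):

```properties
# worker or client configuration (file path is illustrative)
config.providers = file
config.providers.file.class = org.apache.kafka.common.config.provider.FileConfigProvider
# /etc/kafka/secrets.properties contains a line: keystore.password=changeit
ssl.keystore.password = ${file:/etc/kafka/secrets.properties:keystore.password}
```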





[jira] [Resolved] (KAFKA-14206) Upgrade zookeeper to 3.7.1 to address security vulnerabilities

2023-08-21 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-14206.

Fix Version/s: 3.5.0
   Resolution: Fixed

Kafka 3.5.0 uses ZooKeeper 3.6.4

> Upgrade zookeeper to 3.7.1 to address security vulnerabilities
> --
>
> Key: KAFKA-14206
> URL: https://issues.apache.org/jira/browse/KAFKA-14206
> Project: Kafka
>  Issue Type: Improvement
>  Components: packaging
>Affects Versions: 3.2.1
>Reporter: Valeriy Kassenbayev
>Priority: Blocker
> Fix For: 3.5.0
>
>
> Kafka 3.2.1 is using ZooKeeper, which is affected by 
> [CVE-2021-37136|https://security.snyk.io/vuln/SNYK-JAVA-IONETTY-1584064] and 
> [CVE-2021-37137:|https://www.cve.org/CVERecord?id=CVE-2021-37137]
> {code:java}
>   ✗ Denial of Service (DoS) [High 
> Severity][https://security.snyk.io/vuln/SNYK-JAVA-IONETTY-1584063] in 
> io.netty:netty-codec@4.1.63.Final
>     introduced by org.apache.kafka:kafka_2.13@3.2.1 > 
> org.apache.zookeeper:zookeeper@3.6.3 > io.netty:netty-handler@4.1.63.Final > 
> io.netty:netty-codec@4.1.63.Final
>   This issue was fixed in versions: 4.1.68.Final
>   ✗ Denial of Service (DoS) [High 
> Severity][https://security.snyk.io/vuln/SNYK-JAVA-IONETTY-1584064] in 
> io.netty:netty-codec@4.1.63.Final
>     introduced by org.apache.kafka:kafka_2.13@3.2.1 > 
> org.apache.zookeeper:zookeeper@3.6.3 > io.netty:netty-handler@4.1.63.Final > 
> io.netty:netty-codec@4.1.63.Final
>   This issue was fixed in versions: 4.1.68.Final {code}
> The issues were fixed in the next versions of ZooKeeper (starting from 
> 3.6.4). ZooKeeper 3.7.1 is the next stable 
> [release|https://zookeeper.apache.org/releases.html] at the moment.





[jira] [Reopened] (KAFKA-14595) Move ReassignPartitionsCommand to tools

2023-08-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison reopened KAFKA-14595:


> Move ReassignPartitionsCommand to tools
> ---
>
> Key: KAFKA-14595
> URL: https://issues.apache.org/jira/browse/KAFKA-14595
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Nikolay Izhikov
>Priority: Major
> Fix For: 3.6.0
>
>






[jira] [Created] (KAFKA-15258) Consider moving MockAdminClient to the public API

2023-07-26 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-15258:
--

 Summary: Consider moving MockAdminClient to the public API
 Key: KAFKA-15258
 URL: https://issues.apache.org/jira/browse/KAFKA-15258
 Project: Kafka
  Issue Type: Task
  Components: admin
Reporter: Mickael Maison


MockConsumer and MockProducer are part of the public API. They are useful for 
developers wanting to test their applications. On the other hand 
MockAdminClient is not part of the public API (it's under test). We should 
consider moving it to src so users can also easily test applications that 
depend on Admin.





[jira] [Resolved] (KAFKA-10775) DOAP has incorrect category

2023-07-18 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-10775.

Fix Version/s: 3.1.0
 Assignee: Sebb
   Resolution: Fixed

> DOAP has incorrect category
> ---
>
> Key: KAFKA-10775
> URL: https://issues.apache.org/jira/browse/KAFKA-10775
> Project: Kafka
>  Issue Type: Bug
>Reporter: Sebb
>Assignee: Sebb
>Priority: Major
> Fix For: 3.1.0
>
>
> https://github.com/apache/kafka/blob/0df461582c78449fd39e35b241a77a7acf5735e2/doap_Kafka.rdf#L36
> reads:
>  rdf:resource="https://projects.apache.org/projects.html?category#big-data" />
> This should be
>  rdf:resource="http://projects.apache.org/category/big-data" />
> c.f.
> http://svn.apache.org/repos/asf/bigtop/site/trunk/content/resources/bigtop.rdf





[jira] [Created] (KAFKA-15186) AppInfo metrics don't contain the client-id

2023-07-13 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-15186:
--

 Summary: AppInfo metrics don't contain the client-id
 Key: KAFKA-15186
 URL: https://issues.apache.org/jira/browse/KAFKA-15186
 Project: Kafka
  Issue Type: Task
  Components: metrics
Reporter: Mickael Maison


All Kafka components register AppInfo metrics to track the application start 
time or commit id.

The AppInfoParser class registers a JMX MBean with the provided client-id but 
when it adds metrics to the Metrics registry the client-id is not included. 

This means if you use a custom MetricsReporter, the metrics you get don't have 
the client-id.





[jira] [Created] (KAFKA-15172) Allow exact mirroring of ACLs between clusters

2023-07-10 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-15172:
--

 Summary: Allow exact mirroring of ACLs between clusters
 Key: KAFKA-15172
 URL: https://issues.apache.org/jira/browse/KAFKA-15172
 Project: Kafka
  Issue Type: Task
  Components: mirrormaker
Reporter: Mickael Maison


When mirroring ACLs, MirrorMaker downgrades allow ALL ACLs to allow READ. The 
rationale is to prevent other clients from producing to remote topics. 

However in disaster recovery scenarios, where the target cluster is not used 
and just a "hot standby", it would be preferable to have exactly the same ACLs 
on both clusters to speed up failover.





[jira] [Resolved] (KAFKA-15151) Missing connector-stopped-task-count metric

2023-07-06 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15151.

Resolution: Invalid

> Missing connector-stopped-task-count metric
> ---
>
> Key: KAFKA-15151
> URL: https://issues.apache.org/jira/browse/KAFKA-15151
> Project: Kafka
>  Issue Type: Task
>  Components: KafkaConnect
>Reporter: Mickael Maison
>Assignee: Yash Mayya
>Priority: Major
>
> We have task-count metrics for all other states but when adding the STOPPED 
> state we did not add the respective metric.





[jira] [Created] (KAFKA-15151) Missing connector-stopped-task-count metric

2023-07-06 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-15151:
--

 Summary: Missing connector-stopped-task-count metric
 Key: KAFKA-15151
 URL: https://issues.apache.org/jira/browse/KAFKA-15151
 Project: Kafka
  Issue Type: Task
  Components: KafkaConnect
Reporter: Mickael Maison


We have task-count metrics for all other states but when adding the STOPPED 
state we did not add the respective metric.





[jira] [Resolved] (KAFKA-15122) Moving partitions between log dirs leads to kafka.log:type=Log metrics being deleted

2023-06-26 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15122.

Fix Version/s: 3.5.0
   Resolution: Duplicate

Duplicate of https://issues.apache.org/jira/browse/KAFKA-14544 which is fixed 
in 3.5.0

> Moving partitions between log dirs leads to kafka.log:type=Log metrics being 
> deleted
> 
>
> Key: KAFKA-15122
> URL: https://issues.apache.org/jira/browse/KAFKA-15122
> Project: Kafka
>  Issue Type: Task
>Affects Versions: 3.5.0
>Reporter: Mickael Maison
>Priority: Major
> Fix For: 3.5.0
>
>
> # Start a broker with 2 log directories
> # Create a topic-partition
> Metrics with the following names are created: 
> kafka.log:type=Log,name=Size,topic=,partition=0
> # Using kafka-reassign-partitions move that partition to the other log 
> directory
> A tag is-future=true is added to the existing metrics, 
> kafka.log:type=Log,name=Size,topic=,partition=0,is-future=true
> # Using kafka-reassign-partitions move that partition back to its original 
> log directory
> The metrics are deleted!
> I don't expect the metrics to be renamed during the first reassignment. The 
> metrics should not be deleted during the second reassignment, the topic still 
> exists. Restarting the broker resolves the issue.
>  





[jira] [Created] (KAFKA-15122) Moving partitions between log dirs leads to kafka.log:type=Log metrics being deleted

2023-06-26 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-15122:
--

 Summary: Moving partitions between log dirs leads to 
kafka.log:type=Log metrics being deleted
 Key: KAFKA-15122
 URL: https://issues.apache.org/jira/browse/KAFKA-15122
 Project: Kafka
  Issue Type: Task
Affects Versions: 3.5.0
Reporter: Mickael Maison


# Start a broker with 2 log directories
# Create a topic-partition
Metrics with the following names are created: 
kafka.log:type=Log,name=Size,topic=,partition=0
# Using kafka-reassign-partitions move that partition to the other log directory
A tag isFuture=true is added to the existing metrics, 
kafka.log:type=Log,name=Size,topic=,partition=0,isFuture=true
# Using kafka-reassign-partitions move that partition back to its original log 
directory
The metrics are deleted!

I don't expect the metrics to be renamed during the first reassignment. The 
metrics should not be deleted during the second reassignment, the topic still 
exists. Restarting the broker resolves the issue.

 





[jira] [Created] (KAFKA-15093) Add 3.5.0 to broker/client and streams upgrade/compatibility tests

2023-06-15 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-15093:
--

 Summary: Add 3.5.0 to broker/client and streams 
upgrade/compatibility tests
 Key: KAFKA-15093
 URL: https://issues.apache.org/jira/browse/KAFKA-15093
 Project: Kafka
  Issue Type: Task
Reporter: Mickael Maison
Assignee: Mickael Maison


Per the penultimate bullet on the [release 
checklist|https://cwiki.apache.org/confluence/display/KAFKA/Release+Process#ReleaseProcess-Afterthevotepasses],
 Kafka v3.5.0 is released. We should add this version to the system tests.

Example PRs:
 * Broker and clients: [https://github.com/apache/kafka/pull/6794]
 * Streams: [https://github.com/apache/kafka/pull/6597/files]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15019) Improve handling of broker heartbeat timeouts

2023-06-02 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15019.

Fix Version/s: 3.5.0
   Resolution: Fixed

> Improve handling of broker heartbeat timeouts
> -
>
> Key: KAFKA-15019
> URL: https://issues.apache.org/jira/browse/KAFKA-15019
> Project: Kafka
>  Issue Type: Bug
>Reporter: Colin McCabe
>Assignee: Colin McCabe
>Priority: Major
> Fix For: 3.5.0
>
>
> Improve handling of overload situations in the KRaft controller



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14996) The KRaft controller should properly handle overly large user operations

2023-06-02 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-14996.

Fix Version/s: 3.5.0
 Assignee: Colin McCabe  (was: Edoardo Comar)
   Resolution: Fixed

> The KRaft controller should properly handle overly large user operations
> 
>
> Key: KAFKA-14996
> URL: https://issues.apache.org/jira/browse/KAFKA-14996
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 3.5.0
>Reporter: Edoardo Comar
>Assignee: Colin McCabe
>Priority: Blocker
> Fix For: 3.5.0
>
>
> If an attempt is made to create a topic with
> num partitions >= QuorumController.MAX_RECORDS_PER_BATCH  (1)
> the client receives an UnknownServerException; it should instead receive a 
> more descriptive error.
> The controller logs
> {{2023-05-12 19:25:10,018] WARN [QuorumController id=1] createTopics: failed 
> with unknown server exception IllegalStateException at epoch 2 in 21956 us.  
> Renouncing leadership and reverting to the last committed offset 174. 
> (org.apache.kafka.controller.QuorumController)}}
> {{java.lang.IllegalStateException: Attempted to atomically commit 10001 
> records, but maxRecordsPerBatch is 1}}
> {{    at 
> org.apache.kafka.controller.QuorumController.appendRecords(QuorumController.java:812)}}
> {{    at 
> org.apache.kafka.controller.QuorumController$ControllerWriteEvent.run(QuorumController.java:719)}}
> {{    at 
> org.apache.kafka.queue.KafkaEventQueue$EventContext.run(KafkaEventQueue.java:127)}}
> {{    at 
> org.apache.kafka.queue.KafkaEventQueue$EventHandler.handleEvents(KafkaEventQueue.java:210)}}
> {{    at 
> org.apache.kafka.queue.KafkaEventQueue$EventHandler.run(KafkaEventQueue.java:181)}}
> {{    at java.base/java.lang.Thread.run(Thread.java:829)}}
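One way to surface a better error is to validate the record count before attempting the atomic commit, rather than throwing IllegalStateException mid-commit and renouncing leadership. This is an illustrative sketch under assumptions: the limit value is a stand-in (the actual QuorumController constant differs) and the error type is hypothetical, not Kafka's actual fix:

```python
MAX_RECORDS_PER_BATCH = 10_000  # stand-in for QuorumController.MAX_RECORDS_PER_BATCH

class PolicyViolationError(Exception):
    """Illustrative client-facing error, instead of an opaque UnknownServerException."""

def append_records(records):
    # Validate up front, so the controller never has to abort an atomic
    # commit (and renounce leadership) over an oversized user request.
    records = list(records)
    if len(records) > MAX_RECORDS_PER_BATCH:
        raise PolicyViolationError(
            f"Request would generate {len(records)} records, but the maximum "
            f"allowed per batch is {MAX_RECORDS_PER_BATCH}")
    return len(records)

print(append_records(range(10_000)))   # → 10000
try:
    append_records(range(10_001))      # one record too many
except PolicyViolationError as e:
    print("rejected:", e)
```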



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15010) KRaft Controller doesn't reconcile with Zookeeper metadata upon becoming new controller while in dual write mode.

2023-06-02 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15010.

Resolution: Fixed

> KRaft Controller doesn't reconcile with Zookeeper metadata upon becoming new 
> controller while in dual write mode.
> -
>
> Key: KAFKA-15010
> URL: https://issues.apache.org/jira/browse/KAFKA-15010
> Project: Kafka
>  Issue Type: Bug
>  Components: kraft
>Affects Versions: 3.5.0
>Reporter: Akhilesh Chaganti
>Assignee: David Arthur
>Priority: Blocker
> Fix For: 3.5.0
>
>
> When a KRaft controller fails over, the existing migration driver (in dual 
> write mode) can fail in between Zookeeper writes and may leave Zookeeper with 
> incomplete and inconsistent data. So when a new controller becomes active 
> (and by extension a new migration driver becomes active), the first thing we 
> should do is load the in-memory snapshot and use it to write metadata to 
> Zookeeper to reach a steady state. We currently do not do this, and it may 
> leave Zookeeper in an inconsistent state.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-8713) [Connect] JsonConverter NULL Values are replaced by default values even in NULLABLE fields

2023-05-31 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-8713.
---
Resolution: Fixed

> [Connect] JsonConverter NULL Values are replaced by default values even in 
> NULLABLE fields
> --
>
> Key: KAFKA-8713
> URL: https://issues.apache.org/jira/browse/KAFKA-8713
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.0.0, 2.0.1, 2.1.0, 2.2.0, 2.3.0, 2.2.1
>Reporter: Cheng Pan
>Assignee: Mickael Maison
>Priority: Major
>  Labels: needs-kip
> Fix For: 3.5.0
>
>
> Class JsonConverter line: 582
> {code:java}
> private static JsonNode convertToJson(Schema schema, Object logicalValue) 
> {
> if (logicalValue == null) {
> if (schema == null) // Any schema is valid and we don't have a 
> default, so treat this as an optional schema
> return null;
> if (schema.defaultValue() != null)
> return convertToJson(schema, schema.defaultValue());
> if (schema.isOptional())
> return JsonNodeFactory.instance.nullNode();
> throw new DataException("Conversion error: null value for field 
> that is required and has no default value");
> }
> 
> }
> {code}
> h1.Expect:
> Value `null` is valid for an optional field, even though the field has a 
> default value.
>  Only when the field is required should the converter fall back to the 
> default value when the value is `null`.
> h1.Actual:
> The default value is always returned if `null` was given.
> h1. Example:
> I'm not sure if the current behavior is exactly what is expected, but at 
> least on MySQL, a table defined as 
> {code:sql}
> create table t1 (
>name varchar(40) not null,
>create_time datetime default '1999-01-01 11:11:11' null,
>update_time datetime default '1999-01-01 11:11:11' null
> );
> {code}
> Just insert a record:
> {code:sql}
> INSERT INTO `t1` (`name`,  `update_time`) VALUES ('kafka', null);
> {code}
> The result is:
> {code:json}
> {
> "name": "kafka",
> "create_time": "1999-01-01 11:11:11",
> "update_time": null
> }
> {code}
> But when I use debezium pull binlog and send the record to Kafka with 
> JsonConverter, the result changed to:
> {code:json}
> {
> "name": "kafka",
> "create_time": "1999-01-01 11:11:11",
> "update_time": "1999-01-01 11:11:11"
> }
> {code}
> For more details, see: https://issues.jboss.org/browse/DBZ-1064
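The expected null handling can be restated compactly. This is a hypothetical Python re-statement of the desired logic (not the actual JsonConverter code): the key point is that optionality is checked before the default value, so an explicit null on an optional field is preserved:

```python
class DataException(Exception):
    pass

def convert_null(optional, default):
    """Desired null handling for a field whose value is null."""
    if optional:
        return None          # preserve the explicit null the producer wrote
    if default is not None:
        return default       # required field: fall back to the default
    raise DataException(
        "null value for field that is required and has no default value")

# An optional field with a default keeps the null...
print(convert_null(optional=True, default="1999-01-01 11:11:11"))   # → None
# ...while a required field with a default still gets the fallback.
print(convert_null(optional=False, default="1999-01-01 11:11:11"))
```

The reported bug corresponds to checking the default before optionality, which silently replaces every null with the schema default.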



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15016) LICENSE-binary file contains dependencies not included anymore

2023-05-24 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15016.

Fix Version/s: 3.6.0
   Resolution: Fixed

> LICENSE-binary file contains dependencies not included anymore
> --
>
> Key: KAFKA-15016
> URL: https://issues.apache.org/jira/browse/KAFKA-15016
> Project: Kafka
>  Issue Type: Bug
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Major
> Fix For: 3.6.0
>
>
> While adjusting LICENSE-binary for 3.5.0 I noticed a few entries are not 
> dependencies anymore. We should resync the file properly.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15016) LICENSE-binary file contains dependencies not included anymore

2023-05-22 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-15016:
--

 Summary: LICENSE-binary file contains dependencies not included 
anymore
 Key: KAFKA-15016
 URL: https://issues.apache.org/jira/browse/KAFKA-15016
 Project: Kafka
  Issue Type: Bug
Reporter: Mickael Maison
Assignee: Mickael Maison


While adjusting LICENSE-binary for 3.5.0 I noticed a few entries are not 
dependencies anymore. We should resync the file properly.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15015) Binaries contain 2 versions of reload4j

2023-05-22 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-15015:
--

 Summary: Binaries contain 2 versions of reload4j
 Key: KAFKA-15015
 URL: https://issues.apache.org/jira/browse/KAFKA-15015
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.5.0, 3.4.1
Reporter: Mickael Maison


These releases ship 2 versions of reload4j:
- reload4j-1.2.19.jar
- reload4j-1.2.25.jar



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14980) MirrorMaker consumers don't get configs prefixed with source.cluster

2023-05-19 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-14980.

Resolution: Fixed

> MirrorMaker consumers don't get configs prefixed with source.cluster
> 
>
> Key: KAFKA-14980
> URL: https://issues.apache.org/jira/browse/KAFKA-14980
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.5.0
>Reporter: Mickael Maison
>Assignee: Chris Egerton
>Priority: Blocker
> Fix For: 3.5.0
>
>
> As part of KAFKA-14021, we made a change to 
> MirrorConnectorConfig.sourceConsumerConfig() to grab all configs that start 
> with "source.". Previously it was grabbing configs prefixed with 
> "source.cluster.". 
> This means existing connector configurations stop working, as configurations 
> such as bootstrap.servers are not passed to source consumers.
> For example, the following connector configuration was valid in 3.4 and now 
> makes the connector tasks fail:
> {code:json}
> {
> "connector.class": 
> "org.apache.kafka.connect.mirror.MirrorSourceConnector",
> "name": "source",
> "topics": "test",
> "tasks.max": "30",
> "source.cluster.alias": "one",
> "target.cluster.alias": "two",
> "source.cluster.bootstrap.servers": "localhost:9092",
>"target.cluster.bootstrap.servers": "localhost:29092"
> }
> {code}
> The connector attempts to start source consumers with bootstrap.servers = [] 
> and the tasks crash with:
> {noformat}
> org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.(KafkaConsumer.java:837)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.(KafkaConsumer.java:671)
>   at 
> org.apache.kafka.connect.mirror.MirrorUtils.newConsumer(MirrorUtils.java:59)
>   at 
> org.apache.kafka.connect.mirror.MirrorSourceTask.start(MirrorSourceTask.java:103)
>   at 
> org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.initializeAndStart(AbstractWorkerSourceTask.java:274)
>   at 
> org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:202)
>   at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:259)
>   at 
> org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.run(AbstractWorkerSourceTask.java:75)
>   at 
> org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:181)
>   at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   at java.base/java.lang.Thread.run(Thread.java:829)
> Caused by: org.apache.kafka.common.config.ConfigException: No resolvable 
> bootstrap urls given in bootstrap.servers
> {noformat}
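The regression can be illustrated with a small sketch. This is a hypothetical simplification of prefix-based config extraction (not the actual MirrorConnectorConfig code): entries starting with the prefix are kept and the prefix is stripped, so using the wrong prefix produces the wrong resulting keys:

```python
def configs_with_prefix(configs, prefix):
    # Keep entries that start with `prefix` and strip it off, mirroring
    # how prefixed client configs are typically extracted.
    return {k[len(prefix):]: v for k, v in configs.items() if k.startswith(prefix)}

connector = {"source.cluster.bootstrap.servers": "localhost:9092"}

# 3.4 behavior: stripping "source.cluster." yields the key the consumer needs.
print(configs_with_prefix(connector, "source.cluster."))
# → {'bootstrap.servers': 'localhost:9092'}

# Broken behavior: stripping only "source." yields "cluster.bootstrap.servers",
# so the consumer is built with no bootstrap.servers at all.
print(configs_with_prefix(connector, "source."))
# → {'cluster.bootstrap.servers': 'localhost:9092'}
```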



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15008) GCS Sink Connector to parse JSON having leading 0's for an integer field

2023-05-19 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15008.

Resolution: Invalid

The issue you describe does not seem to be in Kafka Connect. This connector is 
not developed by the Apache Kafka project. I suggest you report this issue to 
whoever is providing you this connector.

> GCS Sink Connector to parse JSON having leading 0's for an integer field
> 
>
> Key: KAFKA-15008
> URL: https://issues.apache.org/jira/browse/KAFKA-15008
> Project: Kafka
>  Issue Type: Bug
>Reporter: Lubna Naqvi
>Priority: Major
>
> Our Kafka data, which is in JSON format, has an attribute (gtin) that is a 
> number starting with zero, and the Kafka Connect GCS Sink Connector fails to 
> parse it. gtin is a very common attribute across any item-related Kafka 
> source, and this is causing a major (blocking) issue as we are not able to 
> parse this attribute.
>  
> We'd like to have the ability to parse an integer field in a JSON message if 
> it is prefixed with zero. Can this be investigated for consideration in a 
> release? 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-14980) MirrorMaker consumers don't get configs prefixed with source.cluster

2023-05-09 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-14980:
--

 Summary: MirrorMaker consumers don't get configs prefixed with 
source.cluster
 Key: KAFKA-14980
 URL: https://issues.apache.org/jira/browse/KAFKA-14980
 Project: Kafka
  Issue Type: Bug
  Components: mirrormaker
Affects Versions: 3.5.0
Reporter: Mickael Maison


As part of KAFKA-14021, we made a change to 
MirrorConnectorConfig.sourceConsumerConfig() to grab all configs that start 
with "source.". Previously it was grabbing configs prefixed with 
"source.cluster.". 

This means existing connector configurations stop working, as configurations 
such as bootstrap.servers are not passed to source consumers.

For example, the following connector configuration was valid in 3.4 and now 
makes the connector tasks fail:

{code:json}
{
"connector.class": "org.apache.kafka.connect.mirror.MirrorSourceConnector",
"name": "source",
"topics": "test",
"tasks.max": "30",
"source.cluster.alias": "one",
"target.cluster.alias": "two",
"source.cluster.bootstrap.servers": "localhost:9092",
   "target.cluster.bootstrap.servers": "localhost:29092"
}
{code}


The connector attempts to start source consumers with bootstrap.servers = [] 
and the tasks crash with:


{noformat}
org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
at 
org.apache.kafka.clients.consumer.KafkaConsumer.(KafkaConsumer.java:837)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.(KafkaConsumer.java:671)
at 
org.apache.kafka.connect.mirror.MirrorUtils.newConsumer(MirrorUtils.java:59)
at 
org.apache.kafka.connect.mirror.MirrorSourceTask.start(MirrorSourceTask.java:103)
at 
org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.initializeAndStart(AbstractWorkerSourceTask.java:274)
at 
org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:202)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:259)
at 
org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.run(AbstractWorkerSourceTask.java:75)
at 
org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:181)
at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.kafka.common.config.ConfigException: No resolvable 
bootstrap urls given in bootstrap.servers
{noformat}






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-14957) Default value for state.dir is confusing

2023-05-02 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-14957:
--

 Summary: Default value for state.dir is confusing
 Key: KAFKA-14957
 URL: https://issues.apache.org/jira/browse/KAFKA-14957
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Mickael Maison


The default value for state.dir is documented as 
/var/folders/0t/68svdzmx1sld0mxjl8dgmmzmgq/T//kafka-streams

This is misleading: the value will be different in each environment as it is 
computed using System.getProperty("java.io.tmpdir"). We should update the 
description to explain how the path is computed.
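For illustration, the default is derived from the JVM temporary directory; the sketch below is a rough Python equivalent (the "kafka-streams" suffix is taken from the documented value, the rest is an assumption about how the path is assembled):

```python
import os
import tempfile

def default_state_dir():
    # Rough equivalent of System.getProperty("java.io.tmpdir") + "/kafka-streams".
    # The base directory varies per OS and per user, which is why documenting
    # one literal path (e.g. a macOS per-user temp dir) is misleading.
    return os.path.join(tempfile.gettempdir(), "kafka-streams")

print(default_state_dir())  # e.g. /tmp/kafka-streams on Linux
```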



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14925) The website shouldn't load external resources

2023-04-24 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-14925.

Resolution: Fixed

> The website shouldn't load external resources
> -
>
> Key: KAFKA-14925
> URL: https://issues.apache.org/jira/browse/KAFKA-14925
> Project: Kafka
>  Issue Type: Improvement
>  Components: website
>Reporter: Mickael Maison
>Assignee: Atul Sharma
>Priority: Major
>
> In includes/_header.htm, we load a resource from fontawesome.com



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14876) Public documentation for new Kafka Connect offset management REST APIs in 3.5

2023-04-24 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-14876.

Resolution: Fixed

> Public documentation for new Kafka Connect offset management REST APIs in 3.5
> -
>
> Key: KAFKA-14876
> URL: https://issues.apache.org/jira/browse/KAFKA-14876
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Yash Mayya
>Assignee: Yash Mayya
>Priority: Major
> Fix For: 3.5.0
>
>
> Add public documentation for the new Kafka Connect offset management REST 
> APIs being introduced in 
> [KIP-875:|https://cwiki.apache.org/confluence/display/KAFKA/KIP-875%3A+First-class+offsets+support+in+Kafka+Connect]
>  in 3.5
>  * *GET* /connectors/\{connector}/offsets



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-14930) Public documentation for new Kafka Connect offset management REST APIs

2023-04-24 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-14930:
--

 Summary: Public documentation for new Kafka Connect offset 
management REST APIs
 Key: KAFKA-14930
 URL: https://issues.apache.org/jira/browse/KAFKA-14930
 Project: Kafka
  Issue Type: Sub-task
  Components: KafkaConnect
Reporter: Mickael Maison


Add public documentation for the 3 new Kafka Connect offset management REST 
APIs being introduced in 
[KIP-875|https://cwiki.apache.org/confluence/display/KAFKA/KIP-875%3A+First-class+offsets+support+in+Kafka+Connect]:
 * *PATCH* /connectors/\{connector}/offsets
 * *DELETE* /connectors/\{connector}/offsets



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-14925) The website shouldn't load external resources

2023-04-20 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-14925:
--

 Summary: The website shouldn't load external resources
 Key: KAFKA-14925
 URL: https://issues.apache.org/jira/browse/KAFKA-14925
 Project: Kafka
  Issue Type: Improvement
  Components: website
Reporter: Mickael Maison


In includes/_header.htm, we load a resource from fontawesome.com



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-14921) Avoid non numeric values for metrics

2023-04-19 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-14921:
--

 Summary: Avoid non numeric values for metrics
 Key: KAFKA-14921
 URL: https://issues.apache.org/jira/browse/KAFKA-14921
 Project: Kafka
  Issue Type: Improvement
Reporter: Mickael Maison
Assignee: Mickael Maison


Many monitoring tools, such as Prometheus and Graphite, only support numeric 
values. This makes non-numeric metrics hard to collect and monitor.

We should avoid using Gauges with arbitrary types and provide numeric 
alternatives to such existing metrics.
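One common numeric alternative for enum-like metrics (an illustrative sketch, not an approach the ticket mandates; the state names are examples) is to map each state to a code, or to expose one 0/1 gauge per state:

```python
# Hypothetical example: a state reported as a string gauge can instead be
# exported numerically, which Prometheus/Graphite can store and graph.
BROKER_STATES = ["NOT_RUNNING", "STARTING", "RECOVERY", "RUNNING"]

def state_code(state):
    # Single numeric gauge: the state's ordinal.
    return BROKER_STATES.index(state)

def state_gauges(state):
    # Alternative encoding: one 0/1 gauge per state, so dashboards can match
    # on labels instead of decoding magic numbers.
    return {f"state.{s}": int(s == state) for s in BROKER_STATES}

print(state_code("RUNNING"))                     # → 3
print(state_gauges("RUNNING")["state.RUNNING"])  # → 1
```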





--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-14893) Public API for reporting Yammer metrics

2023-04-11 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-14893:
--

 Summary: Public API for reporting Yammer metrics
 Key: KAFKA-14893
 URL: https://issues.apache.org/jira/browse/KAFKA-14893
 Project: Kafka
  Issue Type: Improvement
  Components: core, metrics
Reporter: Mickael Maison
Assignee: Mickael Maison


Server-side metrics registered via the Yammer library are currently exposed via 
the KafkaMetricsReporter interface. This is configured by setting 
kafka.metrics.reporters in the server configuration.

However, the interface is defined in Scala in the core module, so it is not 
part of the public API. This API also assumes implementations can access 
KafkaYammerMetrics.defaultRegistry(), which is also not part of the public API, 
in order to report metrics.

This API should also support reconfigurable configurations.
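A public reporter API might take roughly this shape; this is purely illustrative (the interface and method names are invented for this sketch, not a KIP proposal), showing the two properties the ticket asks for: no dependency on an internal registry, and runtime reconfiguration:

```python
from abc import ABC, abstractmethod

class MetricsReporter(ABC):
    """Hypothetical shape of a public, reconfigurable reporter interface."""

    @abstractmethod
    def init(self, metrics):
        """Receive the current metrics instead of reaching into an internal registry."""

    @abstractmethod
    def metric_added(self, name, value):
        """Called for each metric as it is registered."""

    @abstractmethod
    def reconfigure(self, configs):
        """Apply updated configs at runtime (reconfigurable configurations)."""

class LoggingReporter(MetricsReporter):
    def __init__(self):
        self.seen = []
    def init(self, metrics):
        for name, value in metrics.items():
            self.metric_added(name, value)
    def metric_added(self, name, value):
        self.seen.append((name, value))
    def reconfigure(self, configs):
        pass  # no tunables in this toy reporter

r = LoggingReporter()
r.init({"kafka.log:type=Log,name=Size": 42})
print(len(r.seen))  # → 1
```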



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14731) Upgrade ZooKeeper to 3.6.4

2023-04-11 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-14731.

Resolution: Fixed

> Upgrade ZooKeeper to 3.6.4
> --
>
> Key: KAFKA-14731
> URL: https://issues.apache.org/jira/browse/KAFKA-14731
> Project: Kafka
>  Issue Type: Task
>Affects Versions: 3.0.2, 3.1.2, 3.4.0, 3.2.3, 3.3.2, 3.5.0
>Reporter: Ron Dagostino
>Assignee: Ron Dagostino
>Priority: Major
> Fix For: 3.2.4, 3.1.3, 3.0.3, 3.5.0, 3.4.1, 3.3.3
>
>
> We have https://issues.apache.org/jira/projects/KAFKA/issues/KAFKA-14661 
> opened to upgrade ZooKeeper from 3.6.3 to 3.8.1, and that will likely be 
> actioned in time for 3.5.0.  But in the meantime, ZooKeeper 3.6.4 has been 
> released, so we should take the patch version bump in trunk now and also 
> apply the bump to the next patch releases of 3.0, 3.1, 3.2, 3.3, and 3.4.
> Note that KAFKA-14661 should *not* be applied to branches prior to trunk (and 
> presumably 3.5).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

