[jira] [Created] (KAFKA-16572) allow defining number of disks per broker in ClusterTest

2024-04-16 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-16572: -- Summary: allow defining number of disks per broker in ClusterTest Key: KAFKA-16572 URL: https://issues.apache.org/jira/browse/KAFKA-16572 Project: Kafka

[jira] [Resolved] (KAFKA-16559) allow defining number of disks per broker in TestKitNodes

2024-04-16 Thread Chia-Ping Tsai (Jira)
[ https://issues.apache.org/jira/browse/KAFKA-16559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai resolved KAFKA-16559. Fix Version/s: 3.8.0 Resolution: Fixed > allow defining number of disks per broker

Re: [PR] KAFKA-16467: Add how to integrate with kafka repo [kafka-site]

2024-04-16 Thread via GitHub
showuon commented on code in PR #596: URL: https://github.com/apache/kafka-site/pull/596#discussion_r1568111065 ## README.md: ## @@ -10,4 +10,32 @@ You can run it with the following command, note that it requires docker: Then you can open

[jira] [Created] (KAFKA-16571) reassign_partitions_test.bounce_brokers should wait for messages to be sent to every partition

2024-04-16 Thread David Mao (Jira)
David Mao created KAFKA-16571: - Summary: reassign_partitions_test.bounce_brokers should wait for messages to be sent to every partition Key: KAFKA-16571 URL: https://issues.apache.org/jira/browse/KAFKA-16571

[jira] [Created] (KAFKA-16570) FenceProducers API returns "unexpected error" when successful

2024-04-16 Thread Justine Olshan (Jira)
Justine Olshan created KAFKA-16570: -- Summary: FenceProducers API returns "unexpected error" when successful Key: KAFKA-16570 URL: https://issues.apache.org/jira/browse/KAFKA-16570 Project: Kafka

[jira] [Created] (KAFKA-16569) Target Assignment Format Change

2024-04-16 Thread Ritika Reddy (Jira)
Ritika Reddy created KAFKA-16569: Summary: Target Assignment Format Change Key: KAFKA-16569 URL: https://issues.apache.org/jira/browse/KAFKA-16569 Project: Kafka Issue Type: Sub-task

[jira] [Created] (KAFKA-16568) Add JMH Benchmarks for assignor performance testing

2024-04-16 Thread Ritika Reddy (Jira)
Ritika Reddy created KAFKA-16568: Summary: Add JMH Benchmarks for assignor performance testing Key: KAFKA-16568 URL: https://issues.apache.org/jira/browse/KAFKA-16568 Project: Kafka Issue

[jira] [Created] (KAFKA-16567) Add New Stream Metrics based on KIP-869

2024-04-16 Thread Walter Hernandez (Jira)
Walter Hernandez created KAFKA-16567: Summary: Add New Stream Metrics based on KIP-869 Key: KAFKA-16567 URL: https://issues.apache.org/jira/browse/KAFKA-16567 Project: Kafka Issue Type:

[VOTE] KIP-924: customizable task assignment for Streams

2024-04-16 Thread Rohan Desai
https://cwiki.apache.org/confluence/display/KAFKA/KIP-924%3A+customizable+task+assignment+for+Streams As this KIP has been open for a while and has gone through a couple of rounds of review/revision, I'm calling a vote to get it approved.

Re: [DISCUSS] KIP-1035: StateStore managed changelog offsets

2024-04-16 Thread Nick Telford
That does make sense. The one thing I can't figure out is how per-Task StateStore instances are constructed. It looks like we construct one StateStore instance for the whole Topology (in InternalTopologyBuilder), and pass that into ProcessorStateManager (via StateManagerUtil) for each Task, which
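For readers following the construction question above, here is a minimal sketch of the supplier/builder contract that custom stores plug into via the public Stores API. It only illustrates that each get() call on a supplier can hand back a fresh store instance; whether and when Streams invokes that per Task is exactly what is being asked in this thread. The class name is hypothetical and the in-memory store is just a stand-in for a custom implementation.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.state.KeyValueBytesStoreSupplier;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;

public class PerTaskStoreWiring {

    // Hypothetical supplier for a custom store: get() is the hook that can return a
    // fresh instance on every call; the in-memory store stands in for a custom one.
    static KeyValueBytesStoreSupplier customStoreSupplier(String name) {
        return new KeyValueBytesStoreSupplier() {
            @Override
            public String name() { return name; }

            @Override
            public KeyValueStore<Bytes, byte[]> get() {
                return Stores.inMemoryKeyValueStore(name).get();  // new instance per call
            }

            @Override
            public String metricsScope() { return "custom-store"; }
        };
    }

    // A StoreBuilder wraps the supplier; Streams calls build() when it materializes stores.
    static StoreBuilder<KeyValueStore<Bytes, byte[]>> customStoreBuilder(String name) {
        return Stores.keyValueStoreBuilder(customStoreSupplier(name), Serdes.Bytes(), Serdes.ByteArray());
    }
}
```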

Re: [DISCUSS] KIP-1036: Extend RecordDeserializationException exception

2024-04-16 Thread Sophie Blee-Goldman
Also ignore everything I said about Streams earlier. I didn't look closely enough on my first pass over the KIP and thought this was changing the DeserializationExceptionHandler in Streams. I see now that this is actually about the consumer client's DeserializationException so everything I said

Re: [DISCUSS] KIP-1036: Extend RecordDeserializationException exception

2024-04-16 Thread Sophie Blee-Goldman
Ah, thanks for the additional context. I should have looked at the code before I opened my mouth (so to speak). In that case, I fully agree that using Record instead of ConsumerRecord makes sense. It does indeed seem like, by definition, if there is a DeserializationException then there is no

[jira] [Created] (KAFKA-16566) Update static membership fencing system test to support new protocol

2024-04-16 Thread Lianet Magrans (Jira)
Lianet Magrans created KAFKA-16566: -- Summary: Update static membership fencing system test to support new protocol Key: KAFKA-16566 URL: https://issues.apache.org/jira/browse/KAFKA-16566 Project:

Re: [DISCUSS] KIP-1036: Extend RecordDeserializationException exception

2024-04-16 Thread Frédérik Rouleau
Thanks Sophie, I can write something in the KIP on how KStreams solves that issue, but as I can't create a Wiki account, I will have to find someone to do this on my behalf (if someone can work on solving that wiki account creation, it would be great). The biggest difference between Record and

[jira] [Created] (KAFKA-16565) IncrementalAssignmentConsumerEventHandler throws error when attempting to remove a partition that isn't assigned

2024-04-16 Thread Kirk True (Jira)
Kirk True created KAFKA-16565: - Summary: IncrementalAssignmentConsumerEventHandler throws error when attempting to remove a partition that isn't assigned Key: KAFKA-16565 URL:

Re: [DISCUSS] KIP-1036: Extend RecordDeserializationException exception

2024-04-16 Thread Sophie Blee-Goldman
As for the ConsumerRecord vs Record thing -- I personally think the other alternative that Kirk mentioned would make more sense here, that is, returning an Optional<ConsumerRecord<byte[], byte[]>> rather than changing the type from ConsumerRecord to Record. I'm not sure why checkstyle is saying we shouldn't use the Record
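To make the shape under discussion concrete, here is a hypothetical sketch of an exception exposing the raw record behind an Optional-returning accessor, as suggested above. The class name, fields, and constructor are illustrative only and are not the API proposed in KIP-1036.

```java
import java.util.Optional;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.SerializationException;

// Hypothetical sketch only: keeps the familiar ConsumerRecord type but makes its
// absence explicit, instead of switching the exposed type to the lower-level Record.
public class RecordDeserializationExceptionSketch extends SerializationException {
    private final TopicPartition partition;
    private final long offset;
    private final ConsumerRecord<byte[], byte[]> rawRecord;  // may be absent when nothing could be rebuilt

    public RecordDeserializationExceptionSketch(TopicPartition partition,
                                                long offset,
                                                ConsumerRecord<byte[], byte[]> rawRecord,
                                                String message,
                                                Throwable cause) {
        super(message, cause);
        this.partition = partition;
        this.offset = offset;
        this.rawRecord = rawRecord;
    }

    public TopicPartition topicPartition() { return partition; }

    public long offset() { return offset; }

    // The alternative mentioned above: absence of a deserializable record is modeled explicitly.
    public Optional<ConsumerRecord<byte[], byte[]>> consumerRecord() {
        return Optional.ofNullable(rawRecord);
    }
}
```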

Re: [DISCUSS] KIP-1036: Extend RecordDeserializationException exception

2024-04-16 Thread Sophie Blee-Goldman
I think some missing context here (which can maybe be added in the Motivation section as background) is that the deserialization is actually done within Streams, not within the Consumer. Since the consumers in Kafka Streams might be subscribed to multiple topics with different data types, it has

Re: [DISCUSS] KIP-1035: StateStore managed changelog offsets

2024-04-16 Thread Sophie Blee-Goldman
I don't think we need to *require* a constructor that accepts the TaskId, but we would definitely make sure that the RocksDB state store changes its constructor to one that accepts the TaskId (which we can do without deprecation since it's an internal API), and custom state stores can just decide for
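As an illustration of what "a constructor that accepts the TaskId" could look like for a custom store, here is a minimal sketch against the public StateStore interface. The class, its fields, and what it does with the TaskId are hypothetical assumptions; the discussion above only commits to changing the internal RocksDB store's constructor.

```java
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.StateStore;
import org.apache.kafka.streams.processor.TaskId;

// Hypothetical custom store: the only point of interest is the TaskId-accepting constructor.
public class TaskAwareStore implements StateStore {
    private final String name;
    private final TaskId taskId;  // e.g. could be used to derive a per-task checkpoint location
    private boolean open;

    public TaskAwareStore(final String name, final TaskId taskId) {
        this.name = name;
        this.taskId = taskId;
    }

    @Override
    public String name() { return name; }

    @Deprecated
    @Override
    public void init(final ProcessorContext context, final StateStore root) {
        // open resources (possibly under a taskId-specific path) and register for restoration
        context.register(root, (key, value) -> { /* restore a single key/value pair */ });
        open = true;
    }

    @Override
    public void flush() { /* persist buffered writes; KIP-1035 would also persist changelog offsets here */ }

    @Override
    public void close() { open = false; }

    @Override
    public boolean persistent() { return true; }

    @Override
    public boolean isOpen() { return open; }
}
```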

Re: Permission to contribute to Apache Kafka Project

2024-04-16 Thread Josep Prat
Hi Robin, You are now set up. Thanks for your interest in Apache Kafka. Best, On Tue, Apr 16, 2024 at 3:31 PM Robin Han wrote: > Hi there, > > My Jira ID is 'robinhan' and I'd like to ask permission to contribute to > the Apache Kafka Project. > > > I have encountered an error when upgrading

Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #2817

2024-04-16 Thread Apache Jenkins Server
See

Permission to contribute to Apache Kafka Project

2024-04-16 Thread Robin Han
Hi there, My Jira ID is 'robinhan' and I'd like to ask permission to contribute to the Apache Kafka Project. I have encountered an error when upgrading from version 3.4.0 to 3.7.0 in Kraft mode. I would like to fix this issue by submitting a Jira Ticket and a Github PR. Unfortunately, I don't

[jira] [Created] (KAFKA-16564) Apply `Xlint` to java code in core module

2024-04-16 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-16564: -- Summary: Apply `Xlint` to java code in core module Key: KAFKA-16564 URL: https://issues.apache.org/jira/browse/KAFKA-16564 Project: Kafka Issue Type:

Re: [DISCUSS] KIP-932: Queues for Kafka

2024-04-16 Thread Andrew Schofield
Hi Jun, Thanks for your reply. 42.1. That’s a sensible improvement. Done. 47,56. Done. All instances of BaseOffset changed to FirstOffset. 105. I think that would be in a future KIP. Personally, I don’t mind having a non-contiguous set of values in this KIP. 114. Done. 115. If the poll is just

[jira] [Created] (KAFKA-16563) migration to KRaft hanging after KeeperException

2024-04-16 Thread Luke Chen (Jira)
Luke Chen created KAFKA-16563: - Summary: migration to KRaft hanging after KeeperException Key: KAFKA-16563 URL: https://issues.apache.org/jira/browse/KAFKA-16563 Project: Kafka Issue Type: Bug

Re: [DISCUSS] KIP-1037: Allow WriteTxnMarkers API with Alter Cluster Permission

2024-04-16 Thread Andrew Schofield
Hi Nikhil, I agree with Christo. This is a good improvement and I think your choice of Alter permission on the cluster is the best available. Thanks, Andrew > On 15 Apr 2024, at 12:33, Christo Lolov wrote: > > Heya Nikhil, > > Thank you for raising this KIP! > > Your proposal makes sense to me.

Re: [PR] KAFKA-16467: Add how to integrate with kafka repo [kafka-site]

2024-04-16 Thread via GitHub
FrankYang0529 commented on PR #596: URL: https://github.com/apache/kafka-site/pull/596#issuecomment-2058627999 Hi @showuon, thanks for reviewing. I've addressed all comments and added you as co-author. -- This is an automated message from the Apache Git Service. To respond to the message,

[jira] [Resolved] (KAFKA-16562) Install the ginkgo to tools folder

2024-04-16 Thread Chia-Ping Tsai (Jira)
[ https://issues.apache.org/jira/browse/KAFKA-16562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai resolved KAFKA-16562. Resolution: Invalid sorry this jira is for yunikorn ... > Install the ginkgo to tools

[jira] [Created] (KAFKA-16562) Install the ginkgo to tools folder

2024-04-16 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-16562: -- Summary: Install the ginkgo to tools folder Key: KAFKA-16562 URL: https://issues.apache.org/jira/browse/KAFKA-16562 Project: Kafka Issue Type:

Re: [DISCUSS] KIP-1036: Extend RecordDeserializationException exception

2024-04-16 Thread Frédérik Rouleau
Hi Almog, I think you do not understand the behavior that was introduced with KIP-334. When you have a DeserializationException, if you issue the proper seek call to skip the faulty record, the next poll call will return the remaining records to process and not a new list of records. When the
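For context, the KIP-334 behavior described here can be sketched as a poll loop that catches RecordDeserializationException and seeks one offset past the failing record. Bootstrap server, group, topic, and serde settings below are placeholders.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.RecordDeserializationException;

public class SkipFaultyRecordLoop {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder
        props.put("group.id", "demo-group");                // placeholder
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("demo-topic"));      // placeholder topic
            while (true) {
                try {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                    records.forEach(r ->
                            System.out.printf("%s-%d@%d%n", r.topic(), r.partition(), r.offset()));
                } catch (RecordDeserializationException e) {
                    // Seek one offset past the faulty record; the next poll() then returns the
                    // remaining already-fetched records of that partition, not a fresh batch.
                    consumer.seek(e.topicPartition(), e.offset() + 1);
                }
            }
        }
    }
}
```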

[jira] [Created] (KAFKA-16561) Disable `allow.auto.create.topics` in MirrorMaker2 Consumer Config

2024-04-16 Thread Yangkun Ai (Jira)
Yangkun Ai created KAFKA-16561: -- Summary: Disable `allow.auto.create.topics` in MirrorMaker2 Consumer Config Key: KAFKA-16561 URL: https://issues.apache.org/jira/browse/KAFKA-16561 Project: Kafka

[jira] [Created] (KAFKA-16560) Refactor/cleanup BrokerNode/ControllerNode/ClusterConfig

2024-04-16 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-16560: -- Summary: Refactor/cleanup BrokerNode/ControllerNode/ClusterConfig Key: KAFKA-16560 URL: https://issues.apache.org/jira/browse/KAFKA-16560 Project: Kafka

[jira] [Created] (KAFKA-16559) Support to define the number of data folders in ClusterTest

2024-04-16 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-16559: -- Summary: Support to define the number of data folders in ClusterTest Key: KAFKA-16559 URL: https://issues.apache.org/jira/browse/KAFKA-16559 Project: Kafka

Re: [DISCUSS] KIP-936 Throttle number of active PIDs

2024-04-16 Thread Claude Warren
The difference between p.i.q.window.count and p.i.q.window.num: To be honest, I may have misunderstood your definition of window num. But here is what I have in mind: 1. p.i.q.window.size.seconds: the length of time that a window will exist. This is also the maximum time between PID uses

Re: [DISCUSS] KIP-936 Throttle number of active PIDs

2024-04-16 Thread Claude Warren
Let's put aside the CPC datasketch idea and just discuss the Bloom filter approach. I think the problem with the way the KIP is worded is that PIDs are only added if they are not seen in either of the Bloom filters. So an early PID is added to the first filter and the associated metric is
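A minimal sketch of the layered-filter reading described above: a PID counts as new only if neither the current nor the previous window's Bloom filter has seen it, and windows rotate on window.size.seconds. The class name, rotation policy, and sizing parameters are illustrative assumptions rather than the KIP's implementation; Guava's BloomFilter is used for brevity.

```java
import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;

// Illustrative sketch only: two timed Bloom filters per principal, where a PID is
// counted as "new" only if neither the current nor the previous window has seen it.
public class PidWindowTracker {
    private final long windowSizeMs;
    private BloomFilter<Long> previous;
    private BloomFilter<Long> current;
    private long currentWindowStartMs;

    public PidWindowTracker(long windowSizeMs) {
        this.windowSizeMs = windowSizeMs;
        this.previous = newFilter();
        this.current = newFilter();
        this.currentWindowStartMs = System.currentTimeMillis();
    }

    /** Returns true if this PID was not seen in either window, i.e. it should count against the quota. */
    public synchronized boolean trackAndCheckIfNew(long producerId, long nowMs) {
        maybeRotate(nowMs);
        boolean seen = current.mightContain(producerId) || previous.mightContain(producerId);
        current.put(producerId);  // record the use in the current window either way
        return !seen;
    }

    private void maybeRotate(long nowMs) {
        if (nowMs - currentWindowStartMs >= windowSizeMs) {
            previous = current;      // the old current window becomes the look-back window
            current = newFilter();   // start a fresh window
            currentWindowStartMs = nowMs;
        }
    }

    private static BloomFilter<Long> newFilter() {
        // expected insertions and false-positive rate are illustrative values
        return BloomFilter.create(Funnels.longFunnel(), 100_000, 0.01);
    }
}
```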