Re: [VOTE] KIP-1023: Follower fetch from tiered offset

2024-04-26 Thread Luke Chen
Hi Abhijeet,

Thanks for the KIP.
+1 from me.

Thanks.
Luke

On Fri, Apr 26, 2024 at 5:41 PM Omnia Ibrahim 
wrote:

> Thanks for the KIP. +1 non-binding from me
>
> > On 26 Apr 2024, at 06:29, Abhijeet Kumar 
> wrote:
> >
> > Hi All,
> >
> > I would like to start the vote for KIP-1023 - Follower fetch from tiered
> > offset
> >
> > The KIP is here:
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1023%3A+Follower+fetch+from+tiered+offset
> >
> > Regards.
> > Abhijeet.
>
>


[jira] [Resolved] (KAFKA-16621) Alter MirrorSourceConnector offsets don't work

2024-04-26 Thread yuzhou (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yuzhou resolved KAFKA-16621.

Resolution: Not A Bug

Fixed in KAFKA-15182: Normalize source connector offsets before invoking 
SourceConnector::alterOffsets (#14003)

> Alter MirrorSourceConnector offsets don't work
> -
>
> Key: KAFKA-16621
> URL: https://issues.apache.org/jira/browse/KAFKA-16621
> Project: Kafka
>  Issue Type: Bug
>  Components: connect
>Reporter: yuzhou
>Priority: Major
> Attachments: image-2024-04-25-21-28-37-375.png
>
>
> In the connect-offsets topic:
> the offset written by the connector has key
> `{"cluster":"A","partition":2,"topic":"topic"}`
> after altering offsets, the key is
> `{"partition":2,"topic":"topic","cluster":"A"}`
> !image-2024-04-25-21-28-37-375.png!
> In Worker.globalOffsetBackingStore.data, both keys exist, because they are
> different strings:
> {"cluster":"A","partition":2,"topic":"topic"} {"offset":2}
> {"partition":2,"topic":"topic","cluster":"A"} {"offset":3}
> So altering offsets is not successful, because a get from
> globalOffsetBackingStore always returns {"offset":2}.
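
To illustrate the mismatch, a minimal Jackson-based sketch (illustration only; the actual KAFKA-15182 fix normalizes offsets on the worker before invoking SourceConnector::alterOffsets):

{code:java}
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.Map;

// Illustration only: the two offset keys differ as raw strings, so a
// byte-keyed offset store holds them as two distinct entries, even though
// they parse to equal maps.
public class OffsetKeyOrder {
    public static void main(String[] args) throws Exception {
        String written = "{\"cluster\":\"A\",\"partition\":2,\"topic\":\"topic\"}";
        String altered = "{\"partition\":2,\"topic\":\"topic\",\"cluster\":\"A\"}";

        System.out.println(written.equals(altered)); // false -> two store entries

        ObjectMapper mapper = new ObjectMapper();
        Map<?, ?> a = mapper.readValue(written, Map.class);
        Map<?, ?> b = mapper.readValue(altered, Map.class);
        System.out.println(a.equals(b));             // true -> same logical partition
    }
}
{code}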



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] KIP-1023: Follower fetch from tiered offset

2024-04-26 Thread Omnia Ibrahim
Thanks for the KIP. +1 non-binding from me

> On 26 Apr 2024, at 06:29, Abhijeet Kumar  wrote:
> 
> Hi All,
> 
> I would like to start the vote for KIP-1023 - Follower fetch from tiered
> offset
> 
> The KIP is here:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1023%3A+Follower+fetch+from+tiered+offset
> 
> Regards.
> Abhijeet.



Re: [VOTE] KIP-1023: Follower fetch from tiered offset

2024-04-26 Thread Kamal Chandraprakash
+1 (non-binding). Thanks for the KIP!

--
Kamal

On Fri, Apr 26, 2024 at 11:00 AM Abhijeet Kumar 
wrote:

> Hi All,
>
> I would like to start the vote for KIP-1023 - Follower fetch from tiered
> offset
>
> The KIP is here:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1023%3A+Follower+fetch+from+tiered+offset
>
> Regards.
> Abhijeet.
>


Re: [DISCUSS] KIP-924: customizable task assignment for Streams

2024-04-26 Thread Rohan Desai
117: as Sophie laid out, there are two cases here, right:
1. cases that are considered invalid by the existing assignors but are
still valid assignments in the sense that they can be used to generate a
valid consumer group assignment (from the perspective of the consumer group
protocol). An assignment that excludes a task is one such example, and
Sophie pointed out a good use case for it. I also think it makes sense to
allow these. It's hard to predict how a user might want to use the custom
assignor, and it's reasonable to expect them to use it with care rather
than hand-holding them.
2. cases that are not valid because it is impossible to compute a valid
consumer group assignment from them. In this case it seems totally
reasonable to just throw a fatal exception that gets passed to the uncaught
exception handler. If this case happens then there is some bug in the
user's assignor and it's totally reasonable to fail the application in that
case. We _could_ try to be more graceful and default to one of the existing
assignors. But it's usually better to fail hard and fast when there is some
illegal state detected imo.
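
To make the two cases concrete, here is a minimal framework-side validation sketch (hypothetical names, not KIP-924's actual API):

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// A minimal sketch (hypothetical names, not KIP-924's actual API) of the
// two cases above: case 1 is tolerated, case 2 fails hard and fast.
final class AssignmentValidator {
    static void validate(Map<String, Set<Integer>> clientToTasks) {
        Set<Integer> seen = new HashSet<>();
        for (Set<Integer> tasks : clientToTasks.values()) {
            for (Integer task : tasks) {
                // Case 2: a task assigned to two clients can never yield a
                // valid consumer group assignment -> fatal, surfaced via the
                // uncaught exception handler.
                if (!seen.add(task)) {
                    throw new IllegalStateException("Task " + task + " assigned to multiple clients");
                }
            }
        }
        // Case 1: tasks absent from 'seen' are simply unassigned. Unusual,
        // but still a valid assignment, so no exception is thrown.
    }
}
```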

On Fri, Apr 19, 2024 at 4:18 PM Rohan Desai  wrote:

> Bruno, I've incorporated your feedback into the KIP document.
>
> On Fri, Apr 19, 2024 at 3:55 PM Rohan Desai 
> wrote:
>
>> Thanks for the feedback Bruno! For the most part I think it makes sense,
>> but leaving a couple follow-up thoughts/questions:
>>
>> re 4: I think Sophie's point was slightly different - that we might want
>> to wrap the return type for `assign` in a class so that it's easily
>> extensible. This makes sense to me. Whether we do that or not, we can have
>> the return type be a Set instead of a Map as well.
>>
>> re 6: Yes, it's a callback that's called with the final assignment. I
>> like your suggested name.
>>
>> On Fri, Apr 5, 2024 at 12:17 PM Rohan Desai 
>> wrote:
>>
>>> Thanks for the feedback Sophie!
>>>
>>> re1: Totally agree. The fact that it's related to the partition assignor
>>> is clear from just `task.assignor`. I'll update.
>>> re3: This is a good point, and something I would find useful personally.
>>> I think it's worth adding an interface that lets the plugin observe the
>>> final assignment. I'll add that.
>>> re4: I like the new `NodeAssignment` type. I'll update the KIP with that.
>>>
>>> On Thu, Nov 9, 2023 at 11:18 PM Rohan Desai 
>>> wrote:
>>>
 Thanks for the feedback so far! I think pretty much all of it is
 reasonable. I'll reply to it inline:

 > 1. All the API logic is granular at the Task level, except the
 previousOwnerForPartition func. I’m not clear what’s the motivation
 behind it, does our controller also want to change how the
 partitions->tasks mapping is formed?
 You're right that this is out of place. I've removed this method as
 it's not needed by the task assignor.

 > 2. Just on the API layering itself: it feels a bit weird to have the
 three built-in functions (defaultStandbyTaskAssignment etc) sitting in
 the ApplicationMetadata class. If we consider them as some default util
 functions, how about moving those into their own static util
 methods to separate from the ApplicationMetadata “fact objects” ?
 Agreed. Updated in the latest revision of the KIP. These have been
 moved to TaskAssignorUtils.

 > 3. I personally prefer `NodeAssignment` to be a read-only object
 containing the decisions made by the assignor, including the
 requestFollowupRebalance flag. For manipulating the half-baked results
 inside the assignor itself, maybe we can just be flexible to let users use
 whatever structs / their own classes even, if they like. WDYT?
 Agreed. Updated in the latest version of the KIP.

 > 1. For the API, thoughts on changing the method signature to return a
 (non-Optional) TaskAssignor? Then we can either have the default
 implementation return new HighAvailabilityTaskAssignor or just have a
 default implementation class that people can extend if they don't want to
 implement every method.
 Based on some other discussion, I actually decided to get rid of the
 plugin interface, and instead use config to specify individual plugin
 behaviour. So the method you're referring to is no longer part of the
 proposal.

 > 3. Speaking of ApplicationMetadata, the javadoc says it's read only
 but
 there are methods that return void on it? It's not totally clear to me how
 that interface is supposed to be used by the assignor. It'd be nice if we
 could flip that interface such that it becomes part of the output instead
 of an input to the plugin.
 I've moved those methods to a util class. They're really utility
 methods the assignor might want to call to do some default or optimized
 assignment for some cases like rack-awareness.

 > 4. We should consider wrapping UUID in a ProcessID class so that we
 control

[jira] [Created] (KAFKA-16627) Remove ClusterConfig parameter in BeforeEach and AfterEach

2024-04-26 Thread Kuan Po Tseng (Jira)
Kuan Po Tseng created KAFKA-16627:
-

 Summary: Remove ClusterConfig parameter in BeforeEach and AfterEach
 Key: KAFKA-16627
 URL: https://issues.apache.org/jira/browse/KAFKA-16627
 Project: Kafka
  Issue Type: Improvement
Reporter: Kuan Po Tseng
Assignee: Kuan Po Tseng


In the past, we modified configs such as broker properties by mutating the 
ClusterConfig reference passed to BeforeEach and AfterEach, based on the 
requirements of the tests.

After KAFKA-16560, ClusterConfig became immutable, so modifying the 
ClusterConfig reference no longer has any effect on the test cluster. Passing 
ClusterConfig to BeforeEach and AfterEach has therefore become redundant, and 
we should remove this behavior.
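
For illustration, a hypothetical sketch of the now-redundant pattern (the class shape and method names are assumptions, not the exact test-infra API):

{code:java}
import java.util.Properties;
import org.junit.jupiter.api.BeforeEach;

// Hypothetical stand-in for the test-infra class; after KAFKA-16560 it is
// immutable, so callers only ever see defensive copies of its state.
class ClusterConfig {
    private final Properties serverProperties = new Properties();
    Properties serverProperties() {
        return (Properties) serverProperties.clone(); // callers cannot mutate state
    }
}

class SomeClusterTest {
    // Parameter injection is assumed to come from the cluster test extension.
    @BeforeEach
    void setUp(ClusterConfig config) {
        // This mutation only touches a copy and is silently lost, so
        // injecting ClusterConfig into @BeforeEach is now redundant.
        config.serverProperties().put("log.segment.bytes", "1048576");
    }
}
{code}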



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-1018: Introduce max remote fetch timeout config

2024-04-26 Thread Kamal Chandraprakash
Hi all,

If there are no more comments, I'll start a vote thread by tomorrow.
Please review the KIP.

Thanks,
Kamal

On Sat, Mar 30, 2024 at 11:08 PM Kamal Chandraprakash <
kamal.chandraprak...@gmail.com> wrote:

> Hi all,
>
> Bumping the thread. Please review this KIP. Thanks!
>
> On Thu, Feb 1, 2024 at 9:11 PM Kamal Chandraprakash <
> kamal.chandraprak...@gmail.com> wrote:
>
>> Hi Jorge,
>>
>> Thanks for the review! Added your suggestions to the KIP. PTAL.
>>
>> The `fetch.max.wait.ms` config will also apply to topics with
>> remote storage enabled.
>> Updated the description to:
>>
>> ```
>> The maximum amount of time the server will block before answering the
>> fetch request
>> when it is reading near to the tail of the partition (high-watermark) and
>> there isn't
>> sufficient data to immediately satisfy the requirement given by
>> fetch.min.bytes.
>> ```
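
For context, a minimal sketch of how these two standard consumer settings interact (the KIP's new broker-side remote fetch timeout is a separate, additional setting):

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;

// A minimal sketch of the interplay described above, using the standard
// consumer configs.
public class FetchWaitExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // The broker answers the fetch once fetch.min.bytes of data is
        // available near the tail of the partition...
        props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 1_048_576);
        // ...or after fetch.max.wait.ms elapses, whichever comes first.
        props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 500);
    }
}
```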
>>
>> --
>> Kamal
>>
>> On Thu, Feb 1, 2024 at 12:12 AM Jorge Esteban Quilcate Otoya <
>> quilcate.jo...@gmail.com> wrote:
>>
>>> Hi Kamal,
>>>
>>> Thanks for this KIP! It should help to solve one of the main issues with
>>> tiered storage at the moment, namely dealing with individual consumer
>>> configurations to avoid flooding logs with interrupted exceptions.
>>>
>>> One of the topics discussed in [1][2] was on the semantics of `
>>> fetch.max.wait.ms` and how it's affected by remote storage. Should we
>>> consider within this KIP the update of `fetch.max.wait.ms` docs to
>>> clarify
>>> it only applies to local storage?
>>>
>>> Otherwise, LGTM -- looking forward to see this KIP adopted.
>>>
>>> [1] https://issues.apache.org/jira/browse/KAFKA-15776
>>> [2] https://github.com/apache/kafka/pull/14778#issuecomment-1820588080
>>>
>>> On Tue, 30 Jan 2024 at 01:01, Kamal Chandraprakash <
>>> kamal.chandraprak...@gmail.com> wrote:
>>>
>>> > Hi all,
>>> >
>>> > I have opened a KIP-1018
>>> > <
>>> >
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1018%3A+Introduce+max+remote+fetch+timeout+config+for+DelayedRemoteFetch+requests
>>> > >
>>> > to introduce dynamic max-remote-fetch-timeout broker config to give
>>> more
>>> > control to the operator.
>>> >
>>> >
>>> >
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1018%3A+Introduce+max+remote+fetch+timeout+config+for+DelayedRemoteFetch+requests
>>> >
>>> > Let me know if you have any feedback or suggestions.
>>> >
>>> > --
>>> > Kamal
>>> >
>>>
>>


Re: Confluence edit access

2024-04-26 Thread Matthias J. Sax

Thanks. You should be all set.

-Matthias

On 4/25/24 10:49 PM, Claude Warren wrote:

My Confluence ID is "claude"

On Thu, Apr 25, 2024 at 8:40 PM Matthias J. Sax  wrote:


What's your wiki ID? We can grant write access on our side if you have
already an account.

-Matthias

On 4/25/24 4:06 AM, Claude Warren wrote:

I would like to get edit access to the Kafka confluence so that I can work
on KIP-936. Can someone here do that or do I need to go through Infra?

Claude


[jira] [Resolved] (KAFKA-16609) Update parse_describe_topic to support new topic describe output

2024-04-26 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True resolved KAFKA-16609.
---
  Reviewer: Lucas Brutschy
Resolution: Fixed

> Update parse_describe_topic to support new topic describe output
> 
>
> Key: KAFKA-16609
> URL: https://issues.apache.org/jira/browse/KAFKA-16609
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, system tests
>Affects Versions: 3.8.0
>Reporter: Kirk True
>Assignee: Kirk True
>Priority: Major
>  Labels: system-test-failure
> Fix For: 3.8.0
>
>
> It appears that recent changes to the describe topic output have broken the 
> system test's ability to parse the output.
> {noformat}
> test_id:
> kafkatest.tests.core.reassign_partitions_test.ReassignPartitionsTest.test_reassign_partitions.bounce_brokers=False.reassign_from_offset_zero=True.metadata_quorum=ISOLATED_KRAFT.use_new_coordinator=True.group_protocol=consumer
> status: FAIL
> run time:   50.333 seconds
> IndexError('list index out of range')
> Traceback (most recent call last):
>   File 
> "/home/jenkins/workspace/system-test-kafka-branch-builder/kafka/venv/lib/python3.7/site-packages/ducktape/tests/runner_client.py",
>  line 184, in _do_run
> data = self.run_test()
>   File 
> "/home/jenkins/workspace/system-test-kafka-branch-builder/kafka/venv/lib/python3.7/site-packages/ducktape/tests/runner_client.py",
>  line 262, in run_test
> return self.test_context.function(self.test)
>   File 
> "/home/jenkins/workspace/system-test-kafka-branch-builder/kafka/venv/lib/python3.7/site-packages/ducktape/mark/_mark.py",
>  line 433, in wrapper
> return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
>   File 
> "/home/jenkins/workspace/system-test-kafka-branch-builder/kafka/tests/kafkatest/tests/core/reassign_partitions_test.py",
>  line 175, in test_reassign_partitions
> self.run_produce_consume_validate(core_test_action=lambda: 
> self.reassign_partitions(bounce_brokers))
>   File 
> "/home/jenkins/workspace/system-test-kafka-branch-builder/kafka/tests/kafkatest/tests/produce_consume_validate.py",
>  line 105, in run_produce_consume_validate
> core_test_action(*args)
>   File 
> "/home/jenkins/workspace/system-test-kafka-branch-builder/kafka/tests/kafkatest/tests/core/reassign_partitions_test.py",
>  line 175, in 
> self.run_produce_consume_validate(core_test_action=lambda: 
> self.reassign_partitions(bounce_brokers))
>   File 
> "/home/jenkins/workspace/system-test-kafka-branch-builder/kafka/tests/kafkatest/tests/core/reassign_partitions_test.py",
>  line 82, in reassign_partitions
> partition_info = 
> self.kafka.parse_describe_topic(self.kafka.describe_topic(self.topic))
>   File 
> "/home/jenkins/workspace/system-test-kafka-branch-builder/kafka/tests/kafkatest/services/kafka/kafka.py",
>  line 1400, in parse_describe_topic
> fields = list(map(lambda x: x.split(" ")[1], fields))
>   File 
> "/home/jenkins/workspace/system-test-kafka-branch-builder/kafka/tests/kafkatest/services/kafka/kafka.py",
>  line 1400, in 
> fields = list(map(lambda x: x.split(" ")[1], fields))
> IndexError: list index out of range
> {noformat} 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] KIP-1023: Follower fetch from tiered offset

2024-04-26 Thread Jun Rao
Hi, Abhijeet,

Thanks for the KIP. +1

Jun

On Thu, Apr 25, 2024 at 10:30 PM Abhijeet Kumar 
wrote:

> Hi All,
>
> I would like to start the vote for KIP-1023 - Follower fetch from tiered
> offset
>
> The KIP is here:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1023%3A+Follower+fetch+from+tiered+offset
>
> Regards.
> Abhijeet.
>


Re: [DISCUSS] KIP-932: Queues for Kafka

2024-04-26 Thread Andrew Schofield
Hi David,
Thanks for your response.

001: OK, I'll include converting an empty classic or consumer group to
a share group in the thinking for the future KIP on group management that
I have in mind for later in 2024.

004: Added to the KIP explicitly.

006: That description helps.

I have changed the KIP so that `group.share.enable` is only an internal
configuration that we use for tests, meaning that this configuration has
been removed entirely from the KIP. The user enables share groups by
including "share" in `group.coordinator.rebalance.protocols`. When we get
to a production-ready release of KIP-932, `group.version` will be used to
enable the new records that we persist. I hope this is an acceptable approach.

007: There are two questions here I think. First, would there ever be an
assignor that would work properly for both consumer groups and share groups?
I suppose the answer is no.

Second, is the interface for these two types of assignors the same? The
answer is that it is. I think we should exploit this in the code.

Let's say there's an entirely separate ShareGroupPartitionAssignor which
has no relationship with PartitionAssignor. Then, we need to create independent
and almost identical implementations of the machinery in
org.apache.kafka.coordinator.group.common such as TargetAssignmentBuilder.
This is undesirable code duplication.

Here's my suggestion.
a) o.a.k.coordinator.group.assignor.PartitionAssignor is no longer implemented
directly by any assignors.
b) o.a.k.coordinator.group.assignor.ConsumerGroupPartitionAssignor
and o.a.k.coordinator.group.assignor.ShareGroupPartitionAssignor extend
this interface. By implementing one of these interfaces in an assignor,
you're choosing a group type.
c) o.a.k.coordinator.group.assignor.Range/UniformAssignor are modified
to implement ConsumerGroupPartitionAssignor.
d) o.a.k.coordinator.group.share.SimpleAssignor implements
ShareGroupPartitionAssignor.
e) Wherever the broker code cares which kind of assignor it gets, it uses the
appropriate group-specific interface. But the code that calculates the
assignments is generic and uses PartitionAssignor.
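
A minimal sketch of this layering (names follow points a) to e) above; the assign signature and supporting types are placeholders, not the KIP's exact API):

```java
// Sketch of (a)-(e): names follow the discussion above, but the assign(...)
// signature and supporting types are placeholders, not the KIP's exact API.
interface PartitionAssignor {
    String name();
    GroupAssignment assign(GroupSpec spec); // placeholder signature
}

// (b) Marker sub-interfaces: implementing one of these picks a group type.
interface ConsumerGroupPartitionAssignor extends PartitionAssignor { }
interface ShareGroupPartitionAssignor extends PartitionAssignor { }

final class GroupSpec { /* inputs to the assignment (placeholder) */ }
final class GroupAssignment { /* decisions made by the assignor (placeholder) */ }

// (c) Built-in consumer group assignors implement the consumer marker.
class RangeAssignor implements ConsumerGroupPartitionAssignor {
    public String name() { return "range"; }
    public GroupAssignment assign(GroupSpec spec) { return new GroupAssignment(); }
}

// (d) The share group assignor implements the share marker.
class SimpleAssignor implements ShareGroupPartitionAssignor {
    public String name() { return "simple"; }
    public GroupAssignment assign(GroupSpec spec) { return new GroupAssignment(); }
}

// (e) Generic machinery such as TargetAssignmentBuilder only needs the base
// interface, so the assignment calculation is shared across both group types.
final class TargetAssignmentBuilder {
    GroupAssignment build(PartitionAssignor assignor, GroupSpec spec) {
        return assignor.assign(spec);
    }
}
```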

017: Yes, I agree. I've added num-partitions in the share-coordinator-metrics
group.

018/019: I have moved the share group-specific metrics recorded by the
SPL into a separate group. Please take another look at the table of
broker metrics.

021: I think it is preferable to have a fencing mechanism to protect against
zombie share-partition leaders. The question is how to do it in a robust
way without too much overhead. I think there's a way using existing concepts.

Each partition leader has a leader epoch that increments when a new leader is
elected. Reads and writes of the share-group state from the share coordinator
can use the leader epoch as a fence. When a new leader is elected, it reads
the state providing its new, higher leader epoch. The share coordinator
notices that the leader epoch has increased and will no longer honour
requests from a lower epoch. The share coordinator includes the leader epoch
in records it writes to the share-group state topic. When the state epoch is
bumped, the leader epoch is initialized to -1, which means that the leader
epoch for this state epoch is not yet set. When the SPL calls the share
coordinator, it provides the leader epoch and the share coordinator can
initialise its copy of the leader epoch.

I have updated the KIP accordingly.
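
For illustration, a minimal coordinator-side sketch of this fencing (hypothetical names, not the KIP's actual classes):

```java
// A minimal sketch (hypothetical names, not the KIP's actual classes) of
// leader-epoch fencing on the share coordinator.
class ShareGroupStateEntry {
    // -1 means no leader epoch has been recorded yet for this state epoch.
    private int leaderEpoch = -1;

    synchronized boolean tryAccept(int requestLeaderEpoch) {
        if (requestLeaderEpoch < leaderEpoch) {
            return false; // zombie share-partition leader: reject the request
        }
        // The first request from a newly elected leader records its higher
        // epoch; from here on, requests carrying lower epochs are fenced.
        leaderEpoch = requestLeaderEpoch;
        return true;
    }
}
```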

022: `share.coordinator.state.topic.*` works for me. Configs changed.

023: We assume the clients will do the right thing. If they do not, the effect
is essentially that the desired balance of consumers to partitions is not
being honoured. That's going to happen temporarily as rebalancing occurs
for a group with lots of partitions and members anyway.


Thanks,
Andrew

> On 25 Apr 2024, at 14:53, David Jacot  wrote:
> 
> Hi Andrew,
> 
> Thanks for your responses and sorry for my late reply.
> 
> 001: Makes sense. One thing that we could consider here is to allow
> converting an empty classic or consumer group to a share group. We already
> do this between empty classic and consumer groups.
> 
> 004: I see. It would be great to call it out in the KIP.
> 
> 006: I view `group.coordinator.rebalance.protocols` and `group.version` as
> two different concepts. The former is purely here to enable/disable
> protocols where the latter is mainly here to gate the versions of records
> that we persist. Both are indeed required to enable the new consumer group
> protocol. The reason is that we could imagine having a new version of
> `group.version` for different purposes (e.g. queues) but one may not want
> to enable the new consumer protocol. Unless we have a strong reason not to
> use these two, I would use them from day 1. This is actually something that
> I got wrong in KIP-848, I think.
> 
> 007: I see that you extend ShareGroupPartitionAssignor from
> PartitionAssignor. I wonder if we should separate them because it means
> that a 

Re: [VOTE] KIP-1023: Follower fetch from tiered offset

2024-04-26 Thread Christo Lolov
Heya Abhijeet,

Thanks a lot for pushing this forward, especially with the explanation of
EARLIEST_PENDING_UPLOAD_OFFSET_TIMESTAMP!
+1 from me :)

Best,
Christo

On Fri, 26 Apr 2024 at 12:50, Luke Chen  wrote:

> Hi Abhijeet,
>
> Thanks for the KIP.
> +1 from me.
>
> Thanks.
> Luke
>
> On Fri, Apr 26, 2024 at 5:41 PM Omnia Ibrahim 
> wrote:
>
> > Thanks for the KIP. +1 non-binding from me
> >
> > > On 26 Apr 2024, at 06:29, Abhijeet Kumar 
> > wrote:
> > >
> > > Hi All,
> > >
> > > I would like to start the vote for KIP-1023 - Follower fetch from
> tiered
> > > offset
> > >
> > > The KIP is here:
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1023%3A+Follower+fetch+from+tiered+offset
> > >
> > > Regards.
> > > Abhijeet.
> >
> >
>


[VOTE] KIP-1033: Add Kafka Streams exception handler for exceptions occurring during processing

2024-04-26 Thread Damien Gasparina
Hi all,

We would like to start a vote for KIP-1033: Add Kafka Streams
exception handler for exceptions occurring during processing

The KIP is available on
https://cwiki.apache.org/confluence/display/KAFKA/KIP-1033%3A+Add+Kafka+Streams+exception+handler+for+exceptions+occurring+during+processing

If you have any suggestions or feedback, feel free to participate in
the discussion thread:
https://lists.apache.org/thread/1nhhsrogmmv15o7mk9nj4kvkb5k2bx9s

Best regards,
Damien, Sebastien, and Loic


[jira] [Created] (KAFKA-16629) add broker-related tests to ConfigCommandIntegrationTest

2024-04-26 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-16629:
--

 Summary: add broker-related tests to ConfigCommandIntegrationTest
 Key: KAFKA-16629
 URL: https://issues.apache.org/jira/browse/KAFKA-16629
 Project: Kafka
  Issue Type: Improvement
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


[https://github.com/apache/kafka/pull/15645] will rewrite 
ConfigCommandIntegrationTest in Java with the new test infra. However, it still 
lacks enough tests for broker-related configs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] KIP-1023: Follower fetch from tiered offset

2024-04-26 Thread Satish Duggana
Thanks Abhijeet for the KIP.
+1 from me.

~Satish

On Fri, 26 Apr 2024 at 8:35 PM, Jun Rao  wrote:

> Hi, Abhijeet,
>
> Thanks for the KIP. +1
>
> Jun
>
> On Thu, Apr 25, 2024 at 10:30 PM Abhijeet Kumar <
> abhijeet.cse@gmail.com>
> wrote:
>
> > Hi All,
> >
> > I would like to start the vote for KIP-1023 - Follower fetch from tiered
> > offset
> >
> > The KIP is here:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1023%3A+Follower+fetch+from+tiered+offset
> >
> > Regards.
> > Abhijeet.
> >
>


[jira] [Created] (KAFKA-16628) Add system test for validating static consumer bounce and assignment when not eager

2024-04-26 Thread Lianet Magrans (Jira)
Lianet Magrans created KAFKA-16628:
--

 Summary: Add system test for validating static consumer bounce and 
assignment when not eager
 Key: KAFKA-16628
 URL: https://issues.apache.org/jira/browse/KAFKA-16628
 Project: Kafka
  Issue Type: Task
  Components: consumer, system tests
Reporter: Lianet Magrans


The existing system 
[test|https://github.com/apache/kafka/blob/e7792258df934a5c8470c2925c5d164c7d5a8e6c/tests/kafkatest/tests/client/consumer_test.py#L209]
 includes a test validating that partitions are not re-assigned when a 
static member is bounced, but the test design and setup are intended for the 
eager assignment strategy only (based on the eager protocol, where all 
dynamic members revoke their partitions when a rebalance happens). 
We should consider adding a test ensuring that partitions are not 
re-assigned when using the cooperative-sticky assignor or the new consumer 
group protocol assignments. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #2850

2024-04-26 Thread Apache Jenkins Server
See 


[jira] [Resolved] (KAFKA-6527) Transient failure in DynamicBrokerReconfigurationTest.testDefaultTopicConfig

2024-04-26 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-6527.
---
Resolution: Fixed

The test was re-enabled by https://github.com/apache/kafka/pull/15796

> Transient failure in DynamicBrokerReconfigurationTest.testDefaultTopicConfig
> 
>
> Key: KAFKA-6527
> URL: https://issues.apache.org/jira/browse/KAFKA-6527
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: TaiJuWu
>Priority: Blocker
>  Labels: flakey, flaky-test
> Fix For: 3.8.0
>
>
> {code:java}
> java.lang.AssertionError: Log segment size increase not applied
>   at kafka.utils.TestUtils$.fail(TestUtils.scala:355)
>   at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:865)
>   at 
> kafka.server.DynamicBrokerReconfigurationTest.testDefaultTopicConfig(DynamicBrokerReconfigurationTest.scala:348)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2851

2024-04-26 Thread Apache Jenkins Server
See