Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1837

2023-05-10 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-926: introducing acks=min.insync.replicas config

2023-05-10 Thread qiangLiu
Hi Luke,

It's a good point that adding this config can give better P99 latency, but does
this change the meaning of "in-sync replicas"? Consider a situation with
"replicas=3, acks=2": when two brokers fail and the only broker left is the one
that doesn't have the message, that broker is still in sync, so it will be
elected as leader. Will that cause an UNNOTICED loss of acked messages?


qiangLiu

On 2023-05-10 at 12:58, "Ismael Juma" wrote:

Hi Luke,

As discussed in the other KIP, there are some subtleties when it comes to
the semantics of the system if we don't wait for all members of the isr
before we ack. I don't understand why you say the leader election question
is out of scope - it seems to be a core aspect to me.

Ismael


On Wed, May 10, 2023, 8:50 AM Luke Chen  wrote:

> Hi Ismael,
>
> No, I didn't know about this similar KIP! I wish I'd known about it so that
> I didn't need to spend time writing it again! :(
> I checked the KIP and all the discussions (here
> ). I
> think the consensus is that adding a client config to `acks=quorum` is
> fine.
> This comment
>  from
> Guozhang pretty much concluded what I'm trying to do.
>
> *1. Add one more value to the client-side acks config:
>    0: no acks needed at all.
>    1: ack from the leader.
>    all: acks from ALL the ISR replicas.
>    quorum: this is the new value; it requires acks from a number of ISR
>    replicas no smaller than a majority of the replicas AND no smaller than
>    {min.isr}.
> 2. Clarify in the docs that if a user wants to tolerate X failures, she
>    needs to set client acks=all or acks=quorum (better tail latency than
>    "all") with broker {min.isr} set to X+1; however, "all" is not
>    necessarily stronger than "quorum".*
>
> Concerns from KIP-250 are:
> 1. It introduces a new leader election method based on LEO. This is not
> clearly specified in KIP-250 and needs more discussion.
> 2. KIP-250 also tried to optimize consumer latency by reading messages
> beyond the high watermark; there was some discussion about how to achieve
> that, but no conclusion.
>
> Both of the above concerns are out of the scope of my current KIP.
> So, I think it's good to provide this `acks=quorum` or
> `acks=min.insync.replicas` option to users to give them another choice.
>
>
> Thank you.
> Luke
>
>
> On Wed, May 10, 2023 at 8:54 AM Ismael Juma  wrote:
>
> > Hi Luke,
> >
> > Are you aware of
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-250+Add+Support+for+Quorum-based+Producer+Acknowledgment
> > ?
> >
> > Ismael
> >
> > On Tue, May 9, 2023 at 10:14 PM Luke Chen  wrote:
> >
> > > Hi all,
> > >
> > > I'd like to start a discussion for the KIP-926: introducing
> > > acks=min.insync.replicas config. This KIP is to introduce
> > > `acks=min.insync.replicas` config value in producer, to improve the
> write
> > > throughput and still guarantee high durability.
> > >
> > > Please check the link for more detail:
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-926%3A+introducing+acks%3Dmin.insync.replicas+config
> > >
> > > Any feedback is welcome.
> > >
> > > Thank you.
> > > Luke
> > >
> >
>
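
To make the thread concrete: a minimal sketch of today's producer acks values
and where the proposed value would sit. The "min.insync.replicas" value is
only what KIP-926 proposes and is NOT a valid setting in released clients.

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class AcksConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Existing values: "0" (no acks), "1" (leader only), "all"/"-1" (all ISR members).
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        // Proposed by KIP-926 (hypothetical, not supported by released clients):
        // props.put(ProducerConfig.ACKS_CONFIG, "min.insync.replicas");
    }
}
{code}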






Re: [VOTE] KIP-872: Add Serializer#serializeToByteBuffer() to reduce memory copying

2023-05-10 Thread Luke Chen
+1(binding) from me.
Thanks for the improvement!

Luke

On Sun, May 7, 2023 at 6:34 PM Divij Vaidya  wrote:

> Vote +1 (non binding)
>
> I think that this is a nice improvement as it prevents an unnecessary data
> copy for users who are using ByteBuffer serialization on the producer.
>
> --
> Divij Vaidya
>
>
>
> On Sun, May 7, 2023 at 9:24 AM ShunKang Lin 
> wrote:
>
> > Hi everyone,
> >
> > I'd like to open the vote for KIP-872, which proposes to add
> > Serializer#serializeToByteBuffer() to reduce memory copying.
> >
> > The proposal is here:
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=228495828
> >
> > The pull request is here:
> > https://github.com/apache/kafka/pull/12685
> >
> > Thanks to all who reviewed the proposal, and thanks in advance for taking
> > the time to vote!
> >
> > Best,
> > ShunKang
> >
>
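
As a rough illustration of what the KIP enables, here is a minimal sketch of a
String serializer that also exposes a ByteBuffer-returning method; the method
name follows the KIP title, but the exact signature is defined by the KIP/PR
and is only assumed here.

{code:java}
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.serialization.Serializer;

public class StringByteBufferSerializer implements Serializer<String> {
    @Override
    public byte[] serialize(String topic, String data) {
        // Classic path: allocates a fresh byte[] that the producer then copies again.
        return data == null ? null : data.getBytes(StandardCharsets.UTF_8);
    }

    // Sketch of the KIP-872 idea (signature assumed): hand the producer a
    // ByteBuffer directly so one intermediate byte[] copy can be skipped.
    public ByteBuffer serializeToByteBuffer(String topic, String data) {
        return data == null ? null : ByteBuffer.wrap(data.getBytes(StandardCharsets.UTF_8));
    }
}
{code}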


Re: [DISCUSS] KIP-927: Improve the kafka-metadata-quorum output

2023-05-10 Thread Luke Chen
Hi Fede,

Thanks for the KIP.
LGTM.

@Divij
I think the timestamp is timezone independent, so it should be fine.

Luke

On Thu, May 11, 2023 at 2:04 AM Divij Vaidya 
wrote:

> Thank you for the KIP.
>
> The current proposal has the limitation that it uses a duration syntax for
> representation of a timestamp. Also, the syntax relies on the locale and
> timezone of the caller machine. This makes it difficult to share the output
> with others. As an example, let's say you want to share with me a captured
> state of the quorum: "3s ago" gives me no information on when the command
> was executed.
>
> Alternatively, may I suggest introducing a parameter called
> "--datetime-format=" which takes as value the ISO-8601 [1] format and
> prints the timestamp based on the provided format. It would solve the
> problem of readability (epochs are hard to read!) as well as the problem of
> portability of output across machines.
>
> What do you think?
>
> [1] https://en.wikipedia.org/wiki/ISO_8601
>
> --
> Divij Vaidya
>
>
>
> On Wed, May 10, 2023 at 6:10 PM Federico Valeri 
> wrote:
>
> > Hi all, I'd like to start a new discussion thread on KIP-927: Improve
> > the kafka-metadata-quorum output.
> >
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-927%3A+Improve+the+kafka-metadata-quorum+output
> >
> > This KIP is small and proposes to add a new optional flag to have a
> > human-readable timestamp output.
> >
> > Thank you!
> >
> > Br
> > Fede
> >
>
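
To illustrate the suggestion, a minimal sketch of turning the tool's
epoch-millis values into ISO-8601 text; the --datetime-format flag itself is
only a proposal in this thread, not an existing option.

{code:java}
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class IsoTimestampSketch {
    public static void main(String[] args) {
        long lastFetchTimestamp = 1683701749161L; // epoch millis as printed today
        // Instant#toString is already ISO-8601: 2023-05-10T06:55:49.161Z
        System.out.println(Instant.ofEpochMilli(lastFetchTimestamp));
        // A user-supplied pattern could back the proposed --datetime-format flag:
        String formatted = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss")
                .withZone(ZoneOffset.UTC)
                .format(Instant.ofEpochMilli(lastFetchTimestamp));
        System.out.println(formatted);
    }
}
{code}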


Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #1836

2023-05-10 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-864: Add End-To-End Latency Metrics to Connectors

2023-05-10 Thread Jorge Esteban Quilcate Otoya
Hi everyone,

Bumping this vote thread. 2 +1 binding and 1 +1 non-binding so far.

Cheers,
Jorge.

On Mon, 27 Feb 2023 at 18:56, Knowles Atchison Jr 
wrote:

> +1 (non binding)
>
> On Mon, Feb 27, 2023 at 11:21 AM Chris Egerton 
> wrote:
>
> > Hi all,
> >
> > I could have sworn I +1'd this but I can't seem to find a record of that.
> >
> > In the hopes that this action is idempotent, +1 (binding). Thanks for the
> > KIP!
> >
> > Cheers,
> >
> > Chris
> >
> > On Mon, Feb 27, 2023 at 6:28 AM Mickael Maison  >
> > wrote:
> >
> > > Thanks for the KIP
> > >
> > > +1 (binding)
> > >
> > > On Thu, Jan 26, 2023 at 4:36 PM Jorge Esteban Quilcate Otoya
> > >  wrote:
> > > >
> > > > Hi all,
> > > >
> > > > I'd like to call for a vote on KIP-864, which proposes to add metrics
> > to
> > > > measure end-to-end latency in source and sink connectors.
> > > >
> > > > KIP:
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-864%3A+Add+End-To-End+Latency+Metrics+to+Connectors
> > > >
> > > > Discussion thread:
> > > > https://lists.apache.org/thread/k6rh2mr7pg94935fgpqw8b5fj308f2n7
> > > >
> > > > Many thanks,
> > > > Jorge.
> > >
> >
>
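
For context, the kind of measurement the KIP is about can be sketched in a
sink task as record-timestamp-to-delivery latency. This is a hedged
illustration only, not the KIP's actual metric wiring.

{code:java}
import java.util.Collection;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

public abstract class LatencySketchSinkTask extends SinkTask {
    @Override
    public void put(Collection<SinkRecord> records) {
        long now = System.currentTimeMillis();
        for (SinkRecord record : records) {
            Long ts = record.timestamp(); // can be null if the topic carries no timestamps
            if (ts != null) {
                long endToEndLatencyMs = now - ts;
                // KIP-864 would report this through Connect's metrics, not a log line.
                System.out.println("e2e latency ms: " + endToEndLatencyMs);
            }
        }
    }
}
{code}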


Re: [DISCUSS] Apache Kafka 3.5.0 release

2023-05-10 Thread Sophie Blee-Goldman
Thanks Mickael -- the fix has been merged to 3.5 now

On Wed, May 10, 2023 at 1:12 AM Mickael Maison 
wrote:

> Hi Sophie,
>
> Yes that's fine, thanks for letting me know!
>
> Mickael
>
> On Tue, May 9, 2023 at 10:54 PM Sophie Blee-Goldman
>  wrote:
> >
> > Hey Mickael, I noticed a bug in the new versioned key-value byte store
> > where it's delegating to the wrong API
> > (copy/paste error I assume). I extracted this into its own PR which I
> think
> > should be included in the 3.5 release.
> >
> > The tests are still running, but it's just a one-liner so I'll merge it
> > when they're done, and cherrypick to 3.5 if
> > that's ok with you. See https://github.com/apache/kafka/pull/13695
> >
> > Thanks for running the release!
> >
> > On Tue, May 9, 2023 at 1:28 PM Randall Hauch  wrote:
> >
> > > Thanks, Mickael.
> > >
> > > I've cherry-picked that commit to the `3.5` branch (
> > > https://issues.apache.org/jira/browse/KAFKA-14974).
> > >
> > > Best regards,
> > > Randall
> > >
> > > On Tue, May 9, 2023 at 2:13 PM Mickael Maison <
> mickael.mai...@gmail.com>
> > > wrote:
> > >
> > > > Hi Randall/Luke,
> > > >
> > > > Yes you can go ahead and merge these into 3.5. I've not started
> making
> > > > a release yet because:
> > > > - I found a regression today in MirrorMaker:
> > > > https://issues.apache.org/jira/browse/KAFKA-14980
> > > > - The 3.5 branch builder job in Jenkins has been disabled:
> > > > https://issues.apache.org/jira/browse/INFRA-24577
> > > >
> > > > Thanks,
> > > > Mickael
> > > >
> > > > On Tue, May 9, 2023 at 8:40 PM Luke Chen  wrote:
> > > > >
> > > > > Hi Mickael,
> > > > >
> > > > > Since we haven't had the CR created yet, I'm thinking we should
> > > backport
> > > > > this doc improvement to v3.5.0 to make the doc complete.
> > > > > https://github.com/apache/kafka/pull/13660
> > > > >
> > > > > What do you think?
> > > > >
> > > > > Luke
> > > > >
> > > > > On Sat, May 6, 2023 at 11:31 PM David Arthur 
> wrote:
> > > > >
> > > > > > I resolved these three:
> > > > > > * KAFKA-14840 is merged to trunk and 3.5. I removed the 3.4.1 fix
> > > > version
> > > > > > * KAFKA-14805 is merged to trunk and 3.5
> > > > > > * KAFKA-14918 is merged to trunk and 3.5
> > > > > >
> > > > > > KAFKA-14692 (docs issue) is still not done
> > > > > >
> > > > > > Looks like KAFKA-14084 is now resolved as well (it's in trunk and
> > > 3.5).
> > > > > >
> > > > > > I'll try to find out about KAFKA-14698, I think it's likely a
> > > WONTFIX.
> > > > > >
> > > > > > -David
> > > > > >
> > > > > > On Fri, May 5, 2023 at 10:43 AM Mickael Maison <
> > > > mickael.mai...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > Hi David,
> > > > > > >
> > > > > > > Thanks for the update!
> > > > > > > You still own 4 other tickets targeting 3.5: KAFKA-14840,
> > > > KAFKA-14805,
> > > > > > > KAFKA-14918, KAFKA-14692. Should I move all of them to the next
> > > > > > > release?
> > > > > > > Also KAFKA-14698 and KAFKA-14084 are somewhat related to the
> > > > > > > migration. Should I move them too?
> > > > > > >
> > > > > > > Thanks,
> > > > > > > Mickael
> > > > > > >
> > > > > > > On Fri, May 5, 2023 at 4:27 PM David Arthur
> > > > > > >  wrote:
> > > > > > > >
> > > > > > > > Hey Mickael, my two ZK migration fixes are in 3.5 now.
> > > > > > > >
> > > > > > > > Cheers,
> > > > > > > > David
> > > > > > > >
> > > > > > > > On Fri, May 5, 2023 at 9:37 AM Mickael Maison <
> > > > > > mickael.mai...@gmail.com>
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > > > Hi Divij,
> > > > > > > > >
> > > > > > > > > Some dependencies (ZooKeeper, Snappy, Swagger, zstd, etc)
> have
> > > > been
> > > > > > > > > updated since 3.4.
> > > > > > > > > Regarding your PR, I would have been in favor of bringing
> this
> > > > to 3.5
> > > > > > > > > a couple of weeks ago, but we're now a week past code
> freeze
> > > for
> > > > 3.5.
> > > > > > > > > Unless this fixes CVEs or significant bugs, I think we
> > > > > > > > > should only merge it in trunk.
> > > > > > > > >
> > > > > > > > > Thanks,
> > > > > > > > > Mickael
> > > > > > > > >
> > > > > > > > > On Fri, May 5, 2023 at 1:49 PM Divij Vaidya <
> > > > divijvaidy...@gmail.com
> > > > > > >
> > > > > > > > > wrote:
> > > > > > > > > >
> > > > > > > > > > Hey Mickael
> > > > > > > > > >
> > > > > > > > > > Should we consider performing an update of the minor
> versions
> > > > of
> > > > > > the
> > > > > > > > > > dependencies in 3.5 (per
> > > > > > https://github.com/apache/kafka/pull/13673
> > > > > > > )?
> > > > > > > > > >
> > > > > > > > > > --
> > > > > > > > > > Divij Vaidya
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > On Tue, May 2, 2023 at 5:48 PM Mickael Maison <
> > > > > > > mickael.mai...@gmail.com>
> > > > > > > > > > wrote:
> > > > > > > > > >
> > > > > > > > > > > Hi Luke,
> > > > > > > > > > >
> > > > > > > > > > > Yes I think it makes sense to backport both to 3.5.
> > > > > > > > > > >
> > > > 

Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #1835

2023-05-10 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 472274 lines...]
[2023-05-10T18:33:27.694Z] 
[2023-05-10T18:33:27.694Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorLargePartitionCount PASSED
[2023-05-10T18:33:27.694Z] 
[2023-05-10T18:33:27.694Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyThreadsPerClient STARTED
[2023-05-10T18:33:28.727Z] 
[2023-05-10T18:33:28.727Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyThreadsPerClient PASSED
[2023-05-10T18:33:28.727Z] 
[2023-05-10T18:33:28.727Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > StreamsAssignmentScaleTest > 
testStickyTaskAssignorManyStandbys STARTED
[2023-05-10T18:33:40.574Z] [Checks API] No suitable checks publisher found.
[Pipeline] echo
[2023-05-10T18:33:40.575Z] Skipping Kafka Streams archetype test for Java 11
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // timestamps
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[2023-05-10T18:33:40.739Z] 
[2023-05-10T18:33:40.739Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > StreamsAssignmentScaleTest > 
testStickyTaskAssignorManyStandbys PASSED
[2023-05-10T18:33:40.739Z] 
[2023-05-10T18:33:40.739Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > StreamsAssignmentScaleTest > 
testStickyTaskAssignorManyThreadsPerClient STARTED
[2023-05-10T18:33:42.802Z] 
[2023-05-10T18:33:42.802Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > StreamsAssignmentScaleTest > 
testStickyTaskAssignorManyThreadsPerClient PASSED
[2023-05-10T18:33:42.802Z] 
[2023-05-10T18:33:42.802Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorManyThreadsPerClient STARTED
[2023-05-10T18:33:44.866Z] 
[2023-05-10T18:33:44.866Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorManyThreadsPerClient PASSED
[2023-05-10T18:33:44.866Z] 
[2023-05-10T18:33:44.866Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargePartitionCount STARTED
[2023-05-10T18:34:30.491Z] 
[2023-05-10T18:34:30.491Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargePartitionCount PASSED
[2023-05-10T18:34:30.491Z] 
[2023-05-10T18:34:30.491Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargePartitionCount STARTED
[2023-05-10T18:35:16.544Z] 
[2023-05-10T18:35:16.544Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargePartitionCount PASSED
[2023-05-10T18:35:16.544Z] 
[2023-05-10T18:35:16.544Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorManyStandbys STARTED
[2023-05-10T18:35:23.574Z] 
[2023-05-10T18:35:23.574Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorManyStandbys PASSED
[2023-05-10T18:35:23.574Z] 
[2023-05-10T18:35:23.574Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyStandbys STARTED
[2023-05-10T18:36:10.522Z] 
[2023-05-10T18:36:10.522Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyStandbys PASSED
[2023-05-10T18:36:10.522Z] 
[2023-05-10T18:36:10.522Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargeNumConsumers STARTED
[2023-05-10T18:36:13.503Z] 
[2023-05-10T18:36:13.503Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargeNumConsumers PASSED
[2023-05-10T18:36:13.503Z] 
[2023-05-10T18:36:13.503Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargeNumConsumers STARTED
[2023-05-10T18:36:17.643Z] 
[2023-05-10T18:36:17.643Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargeNumConsumers PASSED

Re: [DISCUSS] KIP-927: Improve the kafka-metadata-quorum output

2023-05-10 Thread Divij Vaidya
Thank you for the KIP.

The current proposal has the limitation that it uses a duration syntax for
representation of a timestamp. Also, the syntax relies on the locale and
timezone of the caller machine. This makes it difficult to share the output
with others. As an example, let's say you want to share with me a captured
state of the quorum: "3s ago" gives me no information on when the command
was executed.

Alternatively, may I suggest introducing a parameter called
"--datetime-format=" which takes as value the ISO-8601 [1] format and
prints the timestamp based on the provided format. It would solve the
problem of readability (epochs are hard to read!) as well as the problem of
portability of output across machines.

What do you think?

[1] https://en.wikipedia.org/wiki/ISO_8601

--
Divij Vaidya



On Wed, May 10, 2023 at 6:10 PM Federico Valeri 
wrote:

> Hi all, I'd like to start a new discussion thread on KIP-927: Improve
> the kafka-metadata-quorum output.
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-927%3A+Improve+the+kafka-metadata-quorum+output
>
> This KIP is small and proposes to add a new optional flag to have a
> human-readable timestamp output.
>
> Thank you!
>
> Br
> Fede
>


Re: [DISCUSS] KIP-926: introducing acks=min.insync.replicas config

2023-05-10 Thread Alexandre Dupriez
Hi, Luke,

Thanks for the KIP. It clearly highlights the tradeoff between latency
and durability and proposes an approach relaxing a durability
constraint to provide lower ingestion latency. Please find a few
comments/questions.

100. The KIP makes one statement which may be considered critical:
"Note that in acks=min.insync.replicas case, the slow follower might
be easier to become out of sync than acks=all.". Would you have some
data on that behaviour when using the new ack semantic? It would be
interesting to analyse and especially look at the percentage of time
the number of replicas in ISR is reduced to the configured
min.insync.replicas. A (perhaps naive) hypothesis would be that the
new ack semantic indeed provides better produce latency, but at the
cost of precipitating the slowest replica(s) out of the ISR?

The underlying reasoning is that if a follower replica (or set of replicas)
is consistently slower to fetch than its peers, and the increased batching
the slow follower may benefit from does not offset the extra fetch time,
there is a risk that the leader LEO becomes harder to reach within the
replica max lag time for those followers. This could also potentially lead
to a higher rate of ISR shrinkage and expansion, similar to what a low
replica max lag time causes.
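
For reference, the two knobs this paragraph interacts with can be set as
follows; a hedged sketch using the Admin API, with the topic name made up.

{code:java}
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class MinIsrConfigSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-topic");
            AlterConfigOp setMinIsr = new AlterConfigOp(
                    new ConfigEntry("min.insync.replicas", "2"), AlterConfigOp.OpType.SET);
            admin.incrementalAlterConfigs(Map.of(topic, List.of(setMinIsr))).all().get();
            // replica.lag.time.max.ms (how long a follower may lag before leaving
            // the ISR) is a broker-level config, typically set in server.properties.
        }
    }
}
{code}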

101. I understand the impact on produce latency, but I am not sure
about the impact on durability. Is your durability model built against
the replication factor or the number of min insync replicas?

102. Could a new type of replica which would not be allowed to enter the
ISR be an alternative? Such a replica could attempt replication on a
best-effort basis and would provide a permanent guarantee not to interfere
with foreground traffic.

Thanks,
Alexandre



Le mer. 10 mai 2023 à 15:22, Divij Vaidya  a écrit :
>
> Thank you Luke for starting off this discussion. I have been thinking about
> this and other similar changes to the replication for a while now. The KIP
> that Ismael surfaced (where was that discussion thread hiding all this
> time!) addresses exactly the improvements that I have been wondering about.
>
> Let me state a few points here; tell me what you think about them.
>
> #1 We need to change the leader election if we introduce the new ack=min.isr
> I will expand on Ismael's comment about the necessity to change the leader
> election with an example.
> 1. {A, B, C} are in ISR, A is leader, min.insync.replicas=2
> 2. A write comes in with acks=min.insync.replicas; {A, B} receive the write
> and it gets acknowledged to the producer. C still hasn't received the write.
> 3. A fails. Leadership transfers to C.
> 4. C hasn't received the write in step 2 and hence, it will ask B to
> truncate itself and match the prefix of C.
>
> As you can observe, if we don't change the leader election strategy to
> choosing a leader with the largest LEO, we may end up in a situation where
> we are losing ACK'ed messages. This is a durability loss.
>
> #2 Now that we have established based on statement 1 above that it is
> necessary to modify the leader election, I believe we definitely should do
> it (and revive conversation from KIP-250). Determining the ISR with the
> largest LEO comes with a cost of multiple round trips with controllers.
> This is an acceptable cost because it improves the steady state scenario
> (lower latency for writes) while adding additional overhead of
> rare/exceptional scenarios (leadership failover).
> Another advantage of choosing the leader with the largest LEO is evident in
> case of an unclean leader election. We can extend this new leader election
> logic to choose the out-of-sync replica with the largest LEO in case of
> unclean leader election. This will reduce the amount of data loss in such a
> scenario. I have a draft for this here
> 
> but
> I never ended up writing a KIP for it.
>
> #3 Now, if we agree that we need to change the leader election to
> improve steady state, should we consider a raft-like quorum based algorithm
> instead of the current one? IMO, yes we should implement a quorum based
> algorithm, but not in the scope of this change. That is a bigger change and
> requires a different KIP which shouldn't block the immediate advantages of
> your proposal.
>
> #4 Changes to the replication protocol are tricky and full of edge case
> scenarios. How do we develop in the future and gain confidence about the
> changes? This is where formal models like TLA+ comes into the picture.
> Modeling Kafka's replication protocol in TLA+ helps us in demonstrating
> provable semantics AND it also helps in quick iteration of ideas. As an
> example, for your proposal, we can extend the (work in progress) model
> here:
> https://github.com/divijvaidya/kafka-specification/blob/master/Kip405.tla#L112
> and assert that the invariants hold true even after we make the change
> about ack 

[jira] [Resolved] (KAFKA-14985) ConnectionQuotasTest.testListenerConnectionRateLimitWhenActualRateAboveLimit() test is flaky

2023-05-10 Thread Divij Vaidya (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Divij Vaidya resolved KAFKA-14985.
--
Resolution: Duplicate

Resolving as duplicate of existing open JIRA.

> ConnectionQuotasTest.testListenerConnectionRateLimitWhenActualRateAboveLimit()
>  test is flaky
> 
>
> Key: KAFKA-14985
> URL: https://issues.apache.org/jira/browse/KAFKA-14985
> Project: Kafka
>  Issue Type: Test
>Reporter: Manyanda Chitimbo
>Assignee: Manyanda Chitimbo
>Priority: Major
>
> The test sometimes fails with the following error
> {code:java}
> Gradle Test Run :core:test > Gradle Test Executor 14 > ConnectionQuotasTest > 
> testListenerConnectionRateLimitWhenActualRateAboveLimit() FAILED
>     java.util.concurrent.ExecutionException: 
> org.opentest4j.AssertionFailedError: Expected rate (30 +- 7), but got 
> 37.47891810856393 (600 connections / 16.009 sec) ==> expected: <30.0> but 
> was: <37.47891810856393>
>         at 
> java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
>         at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:205)
>         at 
> kafka.network.ConnectionQuotasTest.$anonfun$testListenerConnectionRateLimitWhenActualRateAboveLimit$3(ConnectionQuotasTest.scala:412)
>         at scala.collection.immutable.List.foreach(List.scala:333)
>         at 
> kafka.network.ConnectionQuotasTest.testListenerConnectionRateLimitWhenActualRateAboveLimit(ConnectionQuotasTest.scala:412)
>         Caused by:
>         org.opentest4j.AssertionFailedError: Expected rate (30 +- 7), but got 
> 37.47891810856393 (600 connections / 16.009 sec) ==> expected: <30.0> but 
> was: <37.47891810856393>
>             at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
>             at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
>             at 
> app//org.junit.jupiter.api.AssertEquals.failNotEqual(AssertEquals.java:197)
>             at 
> app//org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:86)
>             at 
> app//org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:1021)
>             at 
> app//kafka.network.ConnectionQuotasTest.acceptConnectionsAndVerifyRate(ConnectionQuotasTest.scala:904)
>  {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: apply for permission to contribute to Apache Kafka

2023-05-10 Thread Matthias J. Sax
I just checked permissions and you should be all set. Did you try to log 
out and log in again?


-Matthias

On 5/9/23 10:04 PM, Doe John wrote:

Thanks,

After obtaining permission, I want to assign this JIRA ticket 
 to myself, but there's no 「Assign」 button for me.

[screenshot attachment: image.png]
Is there any problem here?

Best Regards,
John Doe



Luke Chen <show...@gmail.com> wrote on Wed, May 10, 2023 at 01:04:


Done.

Thanks.
Luke

On Sat, May 6, 2023 at 9:38 PM Doe John <zh2725284...@gmail.com> wrote:

 > my Jira ID: johndoe
 >
 > on email zh2725284...@gmail.com 
 >
 > Thanks!
 >



Re: Question ❓

2023-05-10 Thread Matthias J. Sax

Partitions are not for different users.

If you want to isolate users, you would do it at the topic level. You 
could use ACLs to grant access to different topics: 
https://kafka.apache.org/documentation/#security_authz
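
As a hedged sketch of that approach (principal and topic names are made up):
granting one user read access to one topic via the Admin API.

{code:java}
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

public class GrantTopicReadSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // Allow principal User:alice to consume from her own topic only.
            AclBinding binding = new AclBinding(
                    new ResourcePattern(ResourceType.TOPIC, "alice-events", PatternType.LITERAL),
                    new AccessControlEntry("User:alice", "*",
                            AclOperation.READ, AclPermissionType.ALLOW));
            admin.createAcls(List.of(binding)).all().get();
        }
    }
}
{code}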



-Matthias

On 5/9/23 11:11 AM, влад тасканов wrote:


Hi. I recently started studying Kafka and a question came up. Is it possible
to give each user a separate queue? As I understand it, there is a broker with
different topics, and each topic would have the number of partitions equal to
the number of users. If so, could you link to an example or explanation?
Google didn't help me.


[DISCUSS] KIP-927: Improve the kafka-metadata-quorum output

2023-05-10 Thread Federico Valeri
Hi all, I'd like to start a new discussion thread on KIP-927: Improve
the kafka-metadata-quorum output.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-927%3A+Improve+the+kafka-metadata-quorum+output

This KIP is small and proposes to add a new optional flag to have a
human-readable timestamp output.

Thank you!

Br
Fede


[jira] [Created] (KAFKA-14985) ConnectionQuotasTest.testListenerConnectionRateLimitWhenActualRateAboveLimit() test is flaky

2023-05-10 Thread Manyanda Chitimbo (Jira)
Manyanda Chitimbo created KAFKA-14985:
-

 Summary: 
ConnectionQuotasTest.testListenerConnectionRateLimitWhenActualRateAboveLimit() 
test is flaky
 Key: KAFKA-14985
 URL: https://issues.apache.org/jira/browse/KAFKA-14985
 Project: Kafka
  Issue Type: Test
Reporter: Manyanda Chitimbo


The test sometimes fails with the following error
{code:java}
Gradle Test Run :core:test > Gradle Test Executor 14 > ConnectionQuotasTest > 
testListenerConnectionRateLimitWhenActualRateAboveLimit() FAILED
    java.util.concurrent.ExecutionException: 
org.opentest4j.AssertionFailedError: Expected rate (30 +- 7), but got 
37.47891810856393 (600 connections / 16.009 sec) ==> expected: <30.0> but was: 
<37.47891810856393>
        at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:205)
        at 
kafka.network.ConnectionQuotasTest.$anonfun$testListenerConnectionRateLimitWhenActualRateAboveLimit$3(ConnectionQuotasTest.scala:412)
        at scala.collection.immutable.List.foreach(List.scala:333)
        at 
kafka.network.ConnectionQuotasTest.testListenerConnectionRateLimitWhenActualRateAboveLimit(ConnectionQuotasTest.scala:412)
        Caused by:
        org.opentest4j.AssertionFailedError: Expected rate (30 +- 7), but got 
37.47891810856393 (600 connections / 16.009 sec) ==> expected: <30.0> but was: 
<37.47891810856393>
            at 
app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
            at 
app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
            at 
app//org.junit.jupiter.api.AssertEquals.failNotEqual(AssertEquals.java:197)
            at 
app//org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:86)
            at 
app//org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:1021)
            at 
app//kafka.network.ConnectionQuotasTest.acceptConnectionsAndVerifyRate(ConnectionQuotasTest.scala:904)
 {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-14984) DynamicBrokerReconfigurationTest.testThreadPoolResize() test is flaky

2023-05-10 Thread Manyanda Chitimbo (Jira)
Manyanda Chitimbo created KAFKA-14984:
-

 Summary: DynamicBrokerReconfigurationTest.testThreadPoolResize() 
test is flaky 
 Key: KAFKA-14984
 URL: https://issues.apache.org/jira/browse/KAFKA-14984
 Project: Kafka
  Issue Type: Test
Reporter: Manyanda Chitimbo


The test sometimes fails with the below log 
{code:java}
kafka.server.DynamicBrokerReconfigurationTest.testThreadPoolResize() failed, 
log available in 
.../core/build/reports/testOutput/kafka.server.DynamicBrokerReconfigurationTest.testThreadPoolResize().test.stdoutGradle
 Test Run :core:test > Gradle Test Executor 6 > 
DynamicBrokerReconfigurationTest > testThreadPoolResize() FAILED
    org.opentest4j.AssertionFailedError: Invalid threads: expected 6, got 8: 
List(data-plane-kafka-socket-acceptor-ListenerName(PLAINTEXT)-PLAINTEXT-0, 
data-plane-kafka-socket-acceptor-ListenerName(PLAINTEXT)-PLAINTEXT-0, 
data-plane-kafka-socket-acceptor-ListenerName(INTERNAL)-SSL-0, 
data-plane-kafka-socket-acceptor-ListenerName(EXTERNAL)-SASL_SSL-0, 
data-plane-kafka-socket-acceptor-ListenerName(INTERNAL)-SSL-0, 
data-plane-kafka-socket-acceptor-ListenerName(INTERNAL)-SSL-0, 
data-plane-kafka-socket-acceptor-ListenerName(EXTERNAL)-SASL_SSL-0, 
data-plane-kafka-socket-acceptor-ListenerName(EXTERNAL)-SASL_SSL-0) ==> 
expected:  but was: 
        at 
app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
        at 
app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
        at app//org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
        at app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
        at app//org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:211)
        at 
app//kafka.server.DynamicBrokerReconfigurationTest.verifyThreads(DynamicBrokerReconfigurationTest.scala:1634)
        at 
app//kafka.server.DynamicBrokerReconfigurationTest.testThreadPoolResize(DynamicBrokerReconfigurationTest.scala:872)
 {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #1834

2023-05-10 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 567823 lines...]
[2023-05-10T15:11:29.020Z] 
[2023-05-10T15:11:29.020Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 180 > VersionedKeyValueStoreIntegrationTest > shouldRestore PASSED
[2023-05-10T15:11:29.020Z] 
[2023-05-10T15:11:29.020Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 180 > VersionedKeyValueStoreIntegrationTest > 
shouldPutGetAndDelete STARTED
[2023-05-10T15:11:29.020Z] 
[2023-05-10T15:11:29.020Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 180 > VersionedKeyValueStoreIntegrationTest > 
shouldPutGetAndDelete PASSED
[2023-05-10T15:11:29.020Z] 
[2023-05-10T15:11:29.020Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 180 > VersionedKeyValueStoreIntegrationTest > 
shouldManualUpgradeFromNonVersionedTimestampedToVersioned STARTED
[2023-05-10T15:12:21.677Z] 
[2023-05-10T15:12:21.677Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 180 > VersionedKeyValueStoreIntegrationTest > 
shouldManualUpgradeFromNonVersionedTimestampedToVersioned PASSED
[2023-05-10T15:12:21.677Z] 
[2023-05-10T15:12:21.677Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 180 > HandlingSourceTopicDeletionIntegrationTest > 
shouldThrowErrorAfterSourceTopicDeleted STARTED
[2023-05-10T15:12:26.358Z] 
[2023-05-10T15:12:26.358Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 180 > HandlingSourceTopicDeletionIntegrationTest > 
shouldThrowErrorAfterSourceTopicDeleted PASSED
[2023-05-10T15:12:29.463Z] 
[2023-05-10T15:12:29.463Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 180 > StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorLargeNumConsumers STARTED
[2023-05-10T15:12:57.674Z] 
[2023-05-10T15:12:57.674Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 180 > StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorLargeNumConsumers PASSED
[2023-05-10T15:12:57.674Z] 
[2023-05-10T15:12:57.674Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 180 > StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorLargePartitionCount STARTED
[2023-05-10T15:13:16.796Z] 
[2023-05-10T15:13:16.796Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 180 > StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorLargePartitionCount PASSED
[2023-05-10T15:13:16.796Z] 
[2023-05-10T15:13:16.796Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 180 > StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyThreadsPerClient STARTED
[2023-05-10T15:13:17.698Z] 
[2023-05-10T15:13:17.698Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 180 > StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyThreadsPerClient PASSED
[2023-05-10T15:13:17.698Z] 
[2023-05-10T15:13:17.698Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 180 > StreamsAssignmentScaleTest > 
testStickyTaskAssignorManyStandbys STARTED
[2023-05-10T15:13:28.215Z] 
[2023-05-10T15:13:28.215Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 180 > StreamsAssignmentScaleTest > 
testStickyTaskAssignorManyStandbys PASSED
[2023-05-10T15:13:28.215Z] 
[2023-05-10T15:13:28.215Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 180 > StreamsAssignmentScaleTest > 
testStickyTaskAssignorManyThreadsPerClient STARTED
[2023-05-10T15:13:30.169Z] 
[2023-05-10T15:13:30.169Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 180 > StreamsAssignmentScaleTest > 
testStickyTaskAssignorManyThreadsPerClient PASSED
[2023-05-10T15:13:30.169Z] 
[2023-05-10T15:13:30.169Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 180 > StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorManyThreadsPerClient STARTED
[2023-05-10T15:13:32.123Z] 
[2023-05-10T15:13:32.123Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 180 > StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorManyThreadsPerClient PASSED
[2023-05-10T15:13:32.123Z] 
[2023-05-10T15:13:32.123Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 180 > StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargePartitionCount STARTED
[2023-05-10T15:14:10.678Z] 
[2023-05-10T15:14:10.678Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 180 > StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargePartitionCount PASSED
[2023-05-10T15:14:10.678Z] 
[2023-05-10T15:14:10.678Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 180 > StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargePartitionCount STARTED
[2023-05-10T15:14:43.176Z] 
[2023-05-10T15:14:43.176Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 180 > StreamsAssignmentScaleTest > 

Re: [DISCUSS] KIP-926: introducing acks=min.insync.replicas config

2023-05-10 Thread Divij Vaidya
Thank you Luke for starting off this discussion. I have been thinking about
this and other similar changes to replication for a while now. The KIP
that Ismael surfaced (where was that discussion thread hiding all this
time!) addresses exactly the improvements that I have been wondering about.

Let me state a few points here; tell me what you think about them.

#1 We need to change the leader election if we introduce the new ack=min.isr
I will expand on Ismael's comment about the necessity to change the leader
election with an example.
1. {A, B, C} are in ISR, A is leader, min.insync.replicas=2
2. A write comes in with acks=min.insync.replicas; {A, B} receive the write
and it gets acknowledged to the producer. C still hasn't received the write.
3. A fails. Leadership transfers to C.
4. C hasn't received the write in step 2 and hence, it will ask B to
truncate itself and match the prefix of C.

As you can observe, if we don't change the leader election strategy to
choosing a leader with the largest LEO, we may end up in a situation where
we are losing ACK'ed messages. This is a durability loss.
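
The four steps can be played out in a toy simulation (assumptions only; this
is not Kafka code): the ack lands once {A, B} have the write, yet after A
fails and C becomes leader, B truncates the acknowledged record.

{code:java}
import java.util.ArrayList;
import java.util.List;

public class MinIsrLossToy {
    public static void main(String[] args) {
        List<String> a = new ArrayList<>(), b = new ArrayList<>(), c = new ArrayList<>();
        // Step 2: {A, B} receive the write; min.insync.replicas=2 is met, so the
        // producer is acknowledged while C is still empty.
        a.add("m1"); b.add("m1");
        boolean acked = true;
        // Step 3: A fails and leadership transfers to C (in ISR, but missing m1).
        // Step 4: B truncates to the new leader's log end offset.
        while (b.size() > c.size()) b.remove(b.size() - 1);
        System.out.println("acked=" + acked + ", surviving logs: B=" + b + " C=" + c);
        // Prints: acked=true, surviving logs: B=[] C=[] -> the acked write is lost.
    }
}
{code}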

#2 Now that we have established, based on point 1 above, that it is
necessary to modify the leader election, I believe we definitely should do
it (and revive the conversation from KIP-250). Determining the ISR member
with the largest LEO comes at the cost of multiple round trips to the
controllers. This is an acceptable cost because it improves the steady
state (lower latency for writes) while adding overhead only in
rare/exceptional scenarios (leadership failover).
Another advantage of choosing the leader with the largest LEO is evident in
case of an unclean leader election. We can extend this new leader election
logic to choose the out-of-sync replica with the largest LEO in case of
unclean leader election. This will reduce the amount of data loss in such a
scenario. I have a draft for this here

but
I never ended up writing a KIP for it.
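
The election rule under discussion reduces to very little code; a toy sketch
(Replica is a stand-in type here, not a Kafka class):

{code:java}
import java.util.Comparator;
import java.util.List;

public class LeoElectionSketch {
    record Replica(String id, long logEndOffset) {}

    // Pick the candidate (in-sync, or out-of-sync in the unclean case) with
    // the largest log end offset to minimize truncation and data loss.
    static Replica electLeader(List<Replica> candidates) {
        return candidates.stream()
                .max(Comparator.comparingLong(Replica::logEndOffset))
                .orElseThrow();
    }

    public static void main(String[] args) {
        System.out.println(electLeader(List.of(
                new Replica("B", 42L), new Replica("C", 41L))).id()); // prints B
    }
}
{code}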

#3 Now, if we agree that we need to change the leader election to
improve steady state, should we consider a raft-like quorum based algorithm
instead of the current one? IMO, yes we should implement a quorum based
algorithm, but not in the scope of this change. That is a bigger change and
requires a different KIP which shouldn't block the immediate advantages of
your proposal.

#4 Changes to the replication protocol are tricky and full of edge case
scenarios. How do we develop in the future and gain confidence about the
changes? This is where formal models like TLA+ comes into the picture.
Modeling Kafka's replication protocol in TLA+ helps us in demonstrating
provable semantics AND it also helps in quick iteration of ideas. As an
example, for your proposal, we can extend the (work in progress) model
here:
https://github.com/divijvaidya/kafka-specification/blob/master/Kip405.tla#L112
and assert that the invariants hold true even after we make the change
about ack (currently model doesn't support ack, it only supports changing
HW which will remain same even after this KIP).

Once we align on the above four points, I will be happy to work with you to
explore the possible solutions in detail.

--
Divij Vaidya



On Wed, May 10, 2023 at 6:59 AM Ismael Juma  wrote:

> Hi Luke,
>
> As discussed in the other KIP, there are some subtleties when it comes to
> the semantics of the system if we don't wait for all members of the isr
> before we ack. I don't understand why you say the leader election question
> is out of scope - it seems to be a core aspect to me.
>
> Ismael
>
>
> On Wed, May 10, 2023, 8:50 AM Luke Chen  wrote:
>
> > Hi Ismael,
> >
> > No, I didn't know about this similar KIP! I wish I'd known about it so
> > that I didn't need to spend time writing it again! :(
> > I checked the KIP and all the discussions (here
> > ).
> I
> > think the consensus is that adding a client config to `acks=quorum` is
> > fine.
> > This comment
> >  from
> > Guozhang pretty much concluded what I'm trying to do.
> >
> >
> > *1. Add one more value to the client-side acks config:
> >    0: no acks needed at all.
> >    1: ack from the leader.
> >    all: acks from ALL the ISR replicas.
> >    quorum: this is the new value; it requires acks from a number of ISR
> >    replicas no smaller than a majority of the replicas AND no smaller than
> >    {min.isr}.
> > 2. Clarify in the docs that if a user wants to tolerate X failures, she
> >    needs to set client acks=all or acks=quorum (better tail latency than
> >    "all") with broker {min.isr} set to X+1; however, "all" is not
> >    necessarily stronger than "quorum".*
> >
> > Concerns from KIP-250 are:
> > 1. Introducing a new leader LEO based election method. This is not clear
> in
> > the KIP-250 and needs more 

[jira] [Created] (KAFKA-14983) Upgrade jetty-server to 9.4.51

2023-05-10 Thread Beltran (Jira)
Beltran created KAFKA-14983:
---

 Summary: Upgrade jetty-server to 9.4.51
 Key: KAFKA-14983
 URL: https://issues.apache.org/jira/browse/KAFKA-14983
 Project: Kafka
  Issue Type: Task
Reporter: Beltran


The latest Kafka versions, e.g. 3.4.0, include jetty-server-9.4.48.v20220622.jar,
which contains 2 vulnerabilities: CVE-2023-26048 and CVE-2023-26049. Upgrading
to 9.4.51 would fix those issues.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14514) Implement range broker side assignor

2023-05-10 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-14514.
-
Fix Version/s: 3.6.0
   Resolution: Fixed

> Implement range broker side assignor
> 
>
> Key: KAFKA-14514
> URL: https://issues.apache.org/jira/browse/KAFKA-14514
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
> Fix For: 3.6.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #1833

2023-05-10 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 472431 lines...]
[2023-05-10T11:26:08.957Z] > Task :connect:json:testClasses UP-TO-DATE
[2023-05-10T11:26:08.957Z] > Task :connect:json:testJar
[2023-05-10T11:26:08.957Z] > Task :raft:compileTestJava UP-TO-DATE
[2023-05-10T11:26:08.957Z] > Task :raft:testClasses UP-TO-DATE
[2023-05-10T11:26:08.957Z] > Task :connect:json:testSrcJar
[2023-05-10T11:26:08.957Z] > Task :server-common:compileTestJava UP-TO-DATE
[2023-05-10T11:26:08.957Z] > Task :server-common:testClasses UP-TO-DATE
[2023-05-10T11:26:08.957Z] > Task :group-coordinator:compileTestJava UP-TO-DATE
[2023-05-10T11:26:08.957Z] > Task :group-coordinator:testClasses UP-TO-DATE
[2023-05-10T11:26:08.957Z] > Task :metadata:compileTestJava UP-TO-DATE
[2023-05-10T11:26:08.957Z] > Task :metadata:testClasses UP-TO-DATE
[2023-05-10T11:26:08.957Z] > Task 
:clients:generateMetadataFileForMavenJavaPublication
[2023-05-10T11:26:11.613Z] 
[2023-05-10T11:26:11.613Z] > Task :connect:api:javadoc
[2023-05-10T11:26:11.613Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk_2/connect/api/src/main/java/org/apache/kafka/connect/source/SourceRecord.java:44:
 warning - Tag @link: reference not found: org.apache.kafka.connect.data
[2023-05-10T11:26:13.379Z] 1 warning
[2023-05-10T11:26:14.325Z] 
[2023-05-10T11:26:14.325Z] > Task :connect:api:copyDependantLibs UP-TO-DATE
[2023-05-10T11:26:14.325Z] > Task :connect:api:jar UP-TO-DATE
[2023-05-10T11:26:14.325Z] > Task 
:connect:api:generateMetadataFileForMavenJavaPublication
[2023-05-10T11:26:14.325Z] > Task :connect:json:copyDependantLibs UP-TO-DATE
[2023-05-10T11:26:14.325Z] > Task :connect:json:jar UP-TO-DATE
[2023-05-10T11:26:14.325Z] > Task 
:connect:json:generateMetadataFileForMavenJavaPublication
[2023-05-10T11:26:14.325Z] > Task :connect:api:javadocJar
[2023-05-10T11:26:14.325Z] > Task :connect:api:compileTestJava UP-TO-DATE
[2023-05-10T11:26:14.325Z] > Task :connect:api:testClasses UP-TO-DATE
[2023-05-10T11:26:14.325Z] > Task 
:connect:json:publishMavenJavaPublicationToMavenLocal
[2023-05-10T11:26:14.325Z] > Task :connect:json:publishToMavenLocal
[2023-05-10T11:26:14.325Z] > Task :connect:api:testJar
[2023-05-10T11:26:14.325Z] > Task :connect:api:testSrcJar
[2023-05-10T11:26:14.325Z] > Task 
:connect:api:publishMavenJavaPublicationToMavenLocal
[2023-05-10T11:26:14.325Z] > Task :connect:api:publishToMavenLocal
[2023-05-10T11:26:17.147Z] > Task :streams:javadoc
[2023-05-10T11:26:17.147Z] > Task :streams:javadocJar
[2023-05-10T11:26:18.917Z] 
[2023-05-10T11:26:18.917Z] > Task :clients:javadoc
[2023-05-10T11:26:18.917Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk_2/clients/src/main/java/org/apache/kafka/clients/admin/ScramMechanism.java:32:
 warning - Tag @see: missing final '>': "https://cwiki.apache.org/confluence/display/KAFKA/KIP-554%3A+Add+Broker-side+SCRAM+Config+API;>KIP-554:
 Add Broker-side SCRAM Config API
[2023-05-10T11:26:18.917Z] 
[2023-05-10T11:26:18.917Z]  This code is duplicated in 
org.apache.kafka.common.security.scram.internals.ScramMechanism.
[2023-05-10T11:26:18.917Z]  The type field in both files must match and must 
not change. The type field
[2023-05-10T11:26:18.917Z]  is used both for passing ScramCredentialUpsertion 
and for the internal
[2023-05-10T11:26:18.917Z]  UserScramCredentialRecord. Do not change the type 
field."
[2023-05-10T11:26:19.862Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk_2/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/secured/package-info.java:21:
 warning - Tag @link: reference not found: 
org.apache.kafka.common.security.oauthbearer
[2023-05-10T11:26:20.808Z] 2 warnings
[2023-05-10T11:26:21.752Z] 
[2023-05-10T11:26:21.752Z] > Task :clients:javadocJar
[2023-05-10T11:26:22.697Z] > Task :clients:srcJar
[2023-05-10T11:26:22.697Z] > Task :clients:testJar
[2023-05-10T11:26:23.642Z] > Task :clients:testSrcJar
[2023-05-10T11:26:23.643Z] > Task 
:clients:publishMavenJavaPublicationToMavenLocal
[2023-05-10T11:26:23.643Z] > Task :clients:publishToMavenLocal
[2023-05-10T11:26:39.972Z] > Task :core:compileScala
[2023-05-10T11:28:29.376Z] > Task :core:classes
[2023-05-10T11:28:29.376Z] > Task :core:compileTestJava NO-SOURCE
[2023-05-10T11:28:48.611Z] > Task :core:compileTestScala
[2023-05-10T11:30:22.362Z] > Task :core:testClasses
[2023-05-10T11:30:22.362Z] > Task :streams:compileTestJava UP-TO-DATE
[2023-05-10T11:30:22.362Z] > Task :streams:testClasses UP-TO-DATE
[2023-05-10T11:30:22.362Z] > Task :streams:testJar
[2023-05-10T11:30:22.362Z] > Task :streams:testSrcJar
[2023-05-10T11:30:23.325Z] > Task 
:streams:publishMavenJavaPublicationToMavenLocal
[2023-05-10T11:30:23.325Z] > Task :streams:publishToMavenLocal
[2023-05-10T11:30:23.325Z] 
[2023-05-10T11:30:23.325Z] Deprecated Gradle features were used in this build, 
making it incompatible with Gradle 9.0.
[2023-05-10T11:30:23.325Z] 

Re: Question ❓

2023-05-10 Thread hudeqi
The current request queue is monolithic. In fact, many performance problems
appear once the business scenarios on a single cluster become complicated. We
could isolate requests not only per user but also per request category; this
is just my idea.

best,
hudeqi


 ----- Original Message -----
 From: "влад тасканов" 
 Sent: 2023-05-10 02:11:21 (Wednesday)
 To: dev@kafka.apache.org
 Cc: 
 Subject: Question ❓
 


[jira] [Created] (KAFKA-14982) Improve the kafka-metadata-quorum output

2023-05-10 Thread Luke Chen (Jira)
Luke Chen created KAFKA-14982:
-

 Summary: Improve the kafka-metadata-quorum output
 Key: KAFKA-14982
 URL: https://issues.apache.org/jira/browse/KAFKA-14982
 Project: Kafka
  Issue Type: Improvement
Reporter: Luke Chen
Assignee: Federico Valeri


When running the kafka-metadata-quorum script to get the quorum replication
status, I found that the LastFetchTimestamp and LastCaughtUpTimestamp output is
not human-readable. The timestamp 1683701749161 is just a random integer to me.
We should convert it into a date/time (ex: May 10, 08:00 UTC) or, if possible,
into strings like "10 seconds ago", "5 minutes ago"...

 

 
{code:java}
 % ./bin/kafka-metadata-quorum.sh --bootstrap-server localhost:9092 describe 
--replication
NodeId    LogEndOffset    Lag    LastFetchTimestamp    LastCaughtUpTimestamp    
Status      
1         166             0      1683701749161         1683701749161            
Leader      
6         166             0      1683701748776         1683701748776            
Follower    
7         166             0      1683701748773         1683701748773            
Follower    
2         166             0      1683701748766         1683701748766            
Observer {code}
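 
A minimal sketch of the "N seconds ago" rendering suggested above; purely
illustrative, not the tool's code:
{code:java}
import java.time.Duration;
import java.time.Instant;

public class RelativeTimeSketch {
    static String ago(long epochMillis) {
        Duration d = Duration.between(Instant.ofEpochMilli(epochMillis), Instant.now());
        if (d.toHours() > 0) return d.toHours() + " hours ago";
        if (d.toMinutes() > 0) return d.toMinutes() + " minutes ago";
        return d.getSeconds() + " seconds ago";
    }

    public static void main(String[] args) {
        System.out.println(ago(1683701749161L)); // e.g. "10 seconds ago" right after capture
    }
}
{code}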
 

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Question ❓

2023-05-10 Thread влад тасканов

Hi. I recently started studying Kafka and a question came up. Is it possible
to give each user a separate queue? As I understand it, there is a broker with
different topics, and each topic would have the number of partitions equal to
the number of users. If so, could you link to an example or explanation?
Google didn't help me.

Re: [DISCUSS] Apache Kafka 3.5.0 release

2023-05-10 Thread Mickael Maison
Hi Sophie,

Yes that's fine, thanks for letting me know!

Mickael

On Tue, May 9, 2023 at 10:54 PM Sophie Blee-Goldman
 wrote:
>
> Hey Mickael, I noticed a bug in the new versioned key-value byte store
> where it's delegating to the wrong API
> (copy/paste error I assume). I extracted this into its own PR which I think
> should be included in the 3.5 release.
>
> The tests are still running, but it's just a one-liner so I'll merge it
> when they're done, and cherrypick to 3.5 if
> that's ok with you. See https://github.com/apache/kafka/pull/13695
>
> Thanks for running the release!
>
> On Tue, May 9, 2023 at 1:28 PM Randall Hauch  wrote:
>
> > Thanks, Mickael.
> >
> > I've cherry-picked that commit to the `3.5` branch (
> > https://issues.apache.org/jira/browse/KAFKA-14974).
> >
> > Best regards,
> > Randall
> >
> > On Tue, May 9, 2023 at 2:13 PM Mickael Maison 
> > wrote:
> >
> > > Hi Randall/Luke,
> > >
> > > Yes you can go ahead and merge these into 3.5. I've not started making
> > > a release yet because:
> > > - I found a regression today in MirrorMaker:
> > > https://issues.apache.org/jira/browse/KAFKA-14980
> > > - The 3.5 branch builder job in Jenkins has been disabled:
> > > https://issues.apache.org/jira/browse/INFRA-24577
> > >
> > > Thanks,
> > > Mickael
> > >
> > > On Tue, May 9, 2023 at 8:40 PM Luke Chen  wrote:
> > > >
> > > > Hi Mickael,
> > > >
> > > > Since we haven't had the CR created yet, I'm thinking we should
> > backport
> > > > this doc improvement to v3.5.0 to make the doc complete.
> > > > https://github.com/apache/kafka/pull/13660
> > > >
> > > > What do you think?
> > > >
> > > > Luke
> > > >
> > > > On Sat, May 6, 2023 at 11:31 PM David Arthur  wrote:
> > > >
> > > > > I resolved these three:
> > > > > * KAFKA-14840 is merged to trunk and 3.5. I removed the 3.4.1 fix
> > > version
> > > > > * KAFKA-14805 is merged to trunk and 3.5
> > > > > * KAFKA-14918 is merged to trunk and 3.5
> > > > >
> > > > > KAFKA-14692 (docs issue) is still not done
> > > > >
> > > > > Looks like KAFKA-14084 is now resolved as well (it's in trunk and
> > 3.5).
> > > > >
> > > > > I'll try to find out about KAFKA-14698, I think it's likely a
> > WONTFIX.
> > > > >
> > > > > -David
> > > > >
> > > > > On Fri, May 5, 2023 at 10:43 AM Mickael Maison <
> > > mickael.mai...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hi David,
> > > > > >
> > > > > > Thanks for the update!
> > > > > > You still own 4 other tickets targeting 3.5: KAFKA-14840,
> > > KAFKA-14805,
> > > > > > KAFKA-14918, KAFKA-14692. Should I move all of them to the next
> > > > > > release?
> > > > > > Also KAFKA-14698 and KAFKA-14084 are somewhat related to the
> > > > > > migration. Should I move them too?
> > > > > >
> > > > > > Thanks,
> > > > > > Mickael
> > > > > >
> > > > > > On Fri, May 5, 2023 at 4:27 PM David Arthur
> > > > > >  wrote:
> > > > > > >
> > > > > > > Hey Mickael, my two ZK migration fixes are in 3.5 now.
> > > > > > >
> > > > > > > Cheers,
> > > > > > > David
> > > > > > >
> > > > > > > On Fri, May 5, 2023 at 9:37 AM Mickael Maison <
> > > > > mickael.mai...@gmail.com>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Hi Divij,
> > > > > > > >
> > > > > > > > Some dependencies (ZooKeeper, Snappy, Swagger, zstd, etc) have
> > > been
> > > > > > > > updated since 3.4.
> > > > > > > > Regarding your PR, I would have been in favor of bringing this
> > > to 3.5
> > > > > > > > a couple of weeks ago, but we're now a week past code freeze
> > for
> > > 3.5.
> > > > > > > > Unless this fixes CVEs or significant bugs, I think we should
> > > > > > > > only merge it in trunk.
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > > Mickael
> > > > > > > >
> > > > > > > > On Fri, May 5, 2023 at 1:49 PM Divij Vaidya <
> > > divijvaidy...@gmail.com
> > > > > >
> > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > Hey Mickael
> > > > > > > > >
> > > > > > > > > Should we consider performing an update of the minor versions
> > > of
> > > > > the
> > > > > > > > > dependencies in 3.5 (per
> > > > > https://github.com/apache/kafka/pull/13673
> > > > > > )?
> > > > > > > > >
> > > > > > > > > --
> > > > > > > > > Divij Vaidya
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > On Tue, May 2, 2023 at 5:48 PM Mickael Maison <
> > > > > > mickael.mai...@gmail.com>
> > > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Hi Luke,
> > > > > > > > > >
> > > > > > > > > > Yes I think it makes sense to backport both to 3.5.
> > > > > > > > > >
> > > > > > > > > > Thanks,
> > > > > > > > > > Mickael
> > > > > > > > > >
> > > > > > > > > > On Tue, May 2, 2023 at 11:38 AM Luke Chen <
> > show...@gmail.com
> > > >
> > > > > > wrote:
> > > > > > > > > > >
> > > > > > > > > > > Hi Mickael,
> > > > > > > > > > >
> > > > > > > > > > > There are 1 bug and 1 improvement that I'd like to
> > > backport to
> > > > > > 3.5.
> > > > > > > > > > > 1. A small improvement for ZK migration based on
> > >