Re: [DISCUSSION] KIP-864: Support --bootstrap-server in kafka-streams-application-reset

2022-09-08 Thread Guozhang Wang
Hello Николай,

Thanks for writing the KIP. I think it's rather straightforward, and it's
better to be consistent in tooling params. I'm +1.

Guozhang


On Mon, Sep 5, 2022 at 11:25 PM Николай Ижиков  wrote:

> Hello.
>
> Do we still want to make parameter names consistent in tools?
> If yes, please share your feedback on the KIP.
>
> > On 31 Aug 2022, at 11:50, Николай Ижиков wrote:
> >
> > Hello.
> >
> > I would like to start a discussion on a small KIP [1].
> > The goal of the KIP is to add the same --bootstrap-server parameter to
> > the `kafka-streams-application-reset.sh` tool that other tools use.
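> >
> > E.g., an illustrative invocation with the proposed flag (topic and
> > application names here are hypothetical):
> >
> > bin/kafka-streams-application-reset.sh --bootstrap-server localhost:9092 \
> >     --application-id my-streams-app --input-topics my-input-topic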
> > Please, share your feedback.
> >
> > [1]
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-864%3A+Support+--bootstrap-server+in+kafka-streams-application-reset
> >
>
>

-- 
-- Guozhang


Re: [DISCUSS] KIP-848: The Next Generation of the Consumer Rebalance Protocol

2022-09-08 Thread Guozhang Wang
Hello David,

One of Jun's comments got me thinking:

```
In this case, a new assignment is triggered by the client side
assignor. When constructing the HB, the consumer will always consult
the client side assignor and propagate the information to the group
coordinator. In other words, we don't expect users to call
Consumer#enforceRebalance anymore.
```

As I looked at the current PartitionAssignor interface, we actually do
not yet have a way to instruct how to construct the next HB request; e.g.
when the assignor wants to enforce a new rebalance with a new assignment,
we'd need some customizable API inside the PartitionAssignor to signal
this in the next HB to the broker. WDYT about adding such an API to the
PartitionAssignor?
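
For illustration, such an API could look roughly like this (a sketch only;
the names and shapes are hypothetical and not part of the KIP):

```java
// Hypothetical sketch: let the assignor influence the next heartbeat.
public interface PartitionAssignor {
    // ... existing methods ...

    /**
     * Consulted by the consumer before it builds the next heartbeat request,
     * so the assignor can ask the coordinator to start a new rebalance.
     */
    default Optional<NextHeartbeatHint> onNextHeartbeat() {
        return Optional.empty(); // by default, nothing extra to signal
    }
}
```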


Guozhang


On Tue, Sep 6, 2022 at 6:09 AM David Jacot 
wrote:

> Hi Jun,
>
> I have updated the KIP to include your feedback. I have also tried to
> clarify the parts which were not clear.
>
> Best,
> David
>
> On Fri, Sep 2, 2022 at 4:18 PM David Jacot  wrote:
> >
> > Hi Jun,
> >
> > Thanks for your feedback. Let me start by answering your questions
> > inline and I will update the KIP next week.
> >
> > > Thanks for the KIP. Overall, the main benefits of the KIP seem to be
> fewer
> > > RPCs during rebalance and more efficient support of wildcard. A few
> > > comments below.
> >
> > I would also add that the KIP removes the global sync barrier in the
> > protocol which is essential to improve group stability and
> > scalability, and the KIP also simplifies the client by moving most of
> > the logic to the server side.
> >
> > > 30. ConsumerGroupHeartbeatRequest
> > > 30.1 ServerAssignor is a singleton. Do we plan to support rolling
> > > changes of the partition assignor in the consumers?
> >
> > Definitely. The group coordinator will use the assignor used by a
> > majority of the members. This allows the group to move from one
> > assignor to another by a roll. This is explained in the Assignor
> > Selection chapter.
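> >
> > Illustratively, the selection boils down to a majority count (the types
> > and names here are hypothetical, not the actual coordinator code):
> >
> > ```java
> > // Pick the server-side assignor named by the most members.
> > static String selectAssignor(Collection<Member> members, String fallback) {
> >     return members.stream()
> >         .map(Member::serverAssignorName)
> >         .collect(Collectors.groupingBy(n -> n, Collectors.counting()))
> >         .entrySet().stream()
> >         .max(Map.Entry.comparingByValue())
> >         .map(Map.Entry::getKey)
> >         .orElse(fallback);
> > }
> > ```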
> >
> > > 30.2 For each field, could you explain whether it's required in every
> > > request or the scenarios when it needs to be filled? For example, it's
> not
> > > clear to me when TopicPartitions needs to be filled.
> >
> > The client is expected to set those fields in case of a connection
> > issue (e.g. timeout) or when the fields have changed since the last
> > HB. The server populates those fields as long as the member is not
> > fully reconciled - the member should acknowledge that it has the
> > expected epoch and assignment. I will clarify this in the KIP.
> >
> > > 31. In the current consumer protocol, the rack affinity between the
> client
> > > and the broker is only considered during fetching, but not during
> assigning
> > > partitions to consumers. Sometimes, once the assignment is made, there
> is
> > > no opportunity for read affinity because no replicas of assigned
> partitions
> > > are close to the member. I am wondering if we should use this
> opportunity
> > > to address this by including rack in GroupMember.
> >
> > That's an interesting idea. I don't see any issue with adding the rack
> > to the members. I will do so.
> >
> > > 32. On the metric side, it's often useful to know how busy a group
> > > coordinator is. By moving to the event loop model, it seems that we
> > > could add a metric that tracks the fraction of the time the event loop
> > > is doing the actual work.
> >
> > That's a great idea. I will add it. Thanks.
> >
> > > 33. Could we add a section on coordinator failover handling? For
> example,
> > > does it need to trigger the check if any group with the wildcard
> > > subscription now has a new matching topic?
> >
> > Sure. When the new group coordinator takes over, it has to:
> > * Setup the session timeouts.
> > * Trigger a new assignment if a client side assignor is used. We don't
> > store the information about the member selected to run the assignment
> > so we have to start a new one.
> > * Update the topics metadata, verify the wildcard subscriptions, and
> > trigger a rebalance if needed.
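> >
> > Illustratively (hypothetical names; not the actual implementation):
> >
> > ```java
> > // Sketch of what the new coordinator could do when it loads a group.
> > void onGroupLoaded(ConsumerGroup group) {
> >     group.members().forEach(this::scheduleSessionTimeout);
> >     if (group.usesClientSideAssignor()) {
> >         // The member selected to run the assignment is not persisted,
> >         // so a new assignment has to be started.
> >         triggerClientSideAssignment(group);
> >     }
> >     refreshTopicsMetadata(group);
> >     if (group.wildcardSubscriptionsMatchNewTopics()) {
> >         triggerRebalance(group);
> >     }
> > }
> > ```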
> >
> > > 34. ConsumerGroupMetadataValue, ConsumerGroupPartitionMetadataValue,
> > > ConsumerGroupMemberMetadataValue: Could we document what the epoch
> field
> > > reflects? For example, does the epoch in ConsumerGroupMetadataValue
> reflect
> > > the latest group epoch? What about the one in
> > > ConsumerGroupPartitionMetadataValue and
> ConsumerGroupMemberMetadataValue?
> >
> > Sure. I will clarify that but it is always the latest group epoch.
> > When the group state is updated, the group epoch is bumped so we use
> > that one for all the change records related to the update.
> >
> > > 35. "the group coordinator will ensure that the following invariants
> are
> > > met: ... All members exists." It's possible for a member not to get any
> > > assigned partitions, right?
> >
> > That's right. Here I meant that the members provided by the assignor
> > in the assignment must exist in the group. The assignor can 

Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #1213

2022-09-08 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 504935 lines...]
[2022-09-09T04:37:27.990Z] StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargePartitionCount PASSED
[2022-09-09T04:37:27.990Z] 
[2022-09-09T04:37:27.990Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorManyStandbys STARTED
[2022-09-09T04:37:31.862Z] 
[2022-09-09T04:37:31.862Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorManyStandbys PASSED
[2022-09-09T04:37:31.862Z] 
[2022-09-09T04:37:31.862Z] StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyStandbys STARTED
[2022-09-09T04:37:59.012Z] 
[2022-09-09T04:37:59.012Z] StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyStandbys PASSED
[2022-09-09T04:37:59.012Z] 
[2022-09-09T04:37:59.012Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargeNumConsumers STARTED
[2022-09-09T04:37:59.012Z] 
[2022-09-09T04:37:59.012Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargeNumConsumers PASSED
[2022-09-09T04:37:59.012Z] 
[2022-09-09T04:37:59.012Z] StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargeNumConsumers STARTED
[2022-09-09T04:38:00.799Z] 
[2022-09-09T04:38:00.799Z] StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargeNumConsumers PASSED
[2022-09-09T04:38:01.756Z] 
[2022-09-09T04:38:01.756Z] AdjustStreamThreadCountTest > 
testConcurrentlyAccessThreads() STARTED
[2022-09-09T04:38:04.999Z] 
[2022-09-09T04:38:04.999Z] AdjustStreamThreadCountTest > 
testConcurrentlyAccessThreads() PASSED
[2022-09-09T04:38:04.999Z] 
[2022-09-09T04:38:04.999Z] AdjustStreamThreadCountTest > 
shouldResizeCacheAfterThreadReplacement() STARTED
[2022-09-09T04:38:08.252Z] 
[2022-09-09T04:38:08.252Z] AdjustStreamThreadCountTest > 
shouldResizeCacheAfterThreadReplacement() PASSED
[2022-09-09T04:38:08.252Z] 
[2022-09-09T04:38:08.252Z] AdjustStreamThreadCountTest > 
shouldAddAndRemoveThreadsMultipleTimes() STARTED
[2022-09-09T04:38:14.179Z] 
[2022-09-09T04:38:14.179Z] AdjustStreamThreadCountTest > 
shouldAddAndRemoveThreadsMultipleTimes() PASSED
[2022-09-09T04:38:14.179Z] 
[2022-09-09T04:38:14.179Z] AdjustStreamThreadCountTest > 
shouldnNotRemoveStreamThreadWithinTimeout() STARTED
[2022-09-09T04:38:15.483Z] 
[2022-09-09T04:38:15.483Z] AdjustStreamThreadCountTest > 
shouldnNotRemoveStreamThreadWithinTimeout() PASSED
[2022-09-09T04:38:15.483Z] 
[2022-09-09T04:38:15.483Z] AdjustStreamThreadCountTest > 
shouldAddAndRemoveStreamThreadsWhileKeepingNamesCorrect() STARTED
[2022-09-09T04:38:35.308Z] 
[2022-09-09T04:38:35.308Z] AdjustStreamThreadCountTest > 
shouldAddAndRemoveStreamThreadsWhileKeepingNamesCorrect() PASSED
[2022-09-09T04:38:35.308Z] 
[2022-09-09T04:38:35.308Z] AdjustStreamThreadCountTest > 
shouldAddStreamThread() STARTED
[2022-09-09T04:38:39.535Z] 
[2022-09-09T04:38:39.536Z] AdjustStreamThreadCountTest > 
shouldAddStreamThread() PASSED
[2022-09-09T04:38:39.536Z] 
[2022-09-09T04:38:39.536Z] AdjustStreamThreadCountTest > 
shouldRemoveStreamThreadWithStaticMembership() STARTED
[2022-09-09T04:38:43.935Z] 
[2022-09-09T04:38:43.935Z] AdjustStreamThreadCountTest > 
shouldRemoveStreamThreadWithStaticMembership() PASSED
[2022-09-09T04:38:43.935Z] 
[2022-09-09T04:38:43.935Z] AdjustStreamThreadCountTest > 
shouldRemoveStreamThread() STARTED
[2022-09-09T04:38:48.337Z] 
[2022-09-09T04:38:48.337Z] AdjustStreamThreadCountTest > 
shouldRemoveStreamThread() PASSED
[2022-09-09T04:38:48.337Z] 
[2022-09-09T04:38:48.337Z] AdjustStreamThreadCountTest > 
shouldResizeCacheAfterThreadRemovalTimesOut() STARTED
[2022-09-09T04:38:50.449Z] 
[2022-09-09T04:38:50.449Z] AdjustStreamThreadCountTest > 
shouldResizeCacheAfterThreadRemovalTimesOut() PASSED
[2022-09-09T04:38:53.624Z] 
[2022-09-09T04:38:53.624Z] FineGrainedAutoResetIntegrationTest > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithGlobalAutoOffsetResetLatest()
 STARTED
[2022-09-09T04:38:54.681Z] 
[2022-09-09T04:38:54.681Z] FineGrainedAutoResetIntegrationTest > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithGlobalAutoOffsetResetLatest()
 PASSED
[2022-09-09T04:38:54.681Z] 
[2022-09-09T04:38:54.681Z] FineGrainedAutoResetIntegrationTest > 
shouldThrowExceptionOverlappingPattern() STARTED
[2022-09-09T04:38:54.681Z] 
[2022-09-09T04:38:54.681Z] FineGrainedAutoResetIntegrationTest > 
shouldThrowExceptionOverlappingPattern() PASSED
[2022-09-09T04:38:54.681Z] 
[2022-09-09T04:38:54.681Z] FineGrainedAutoResetIntegrationTest > 
shouldThrowExceptionOverlappingTopic() STARTED
[2022-09-09T04:38:54.681Z] 
[2022-09-09T04:38:54.681Z] FineGrainedAutoResetIntegrationTest > 
shouldThrowExceptionOverlappingTopic() PASSED
[2022-09-09T04:38:54.681Z] 
[2022-09-09T04:38:54.681Z] FineGrainedAutoResetIntegrationTest > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithInvalidCommittedOffsets() STARTED
[2022-09-09T04:39:39.565Z] 
[2022-09-09T04:39:39.565Z] 

[jira] [Resolved] (KAFKA-14210) client quota config key is sanitized in Kraft broker

2022-09-08 Thread dengziming (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dengziming resolved KAFKA-14210.

Resolution: Not A Problem

>  client quota config key is sanitized in Kraft broker
> --
>
> Key: KAFKA-14210
> URL: https://issues.apache.org/jira/browse/KAFKA-14210
> Project: Kafka
>  Issue Type: Improvement
>Reporter: dengziming
>Assignee: dengziming
>Priority: Major
>
> We sanitize the key in ZK mode but don't sanitize it in KRaft mode.
> {code:java}
> public class AdminClientExample {
>   public static void main(String[] args) throws ExecutionException, InterruptedException {
>     Properties properties = new Properties();
>     properties.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
>     Admin admin = Admin.create(properties);
>     // Alter config
>     admin.alterClientQuotas(Collections.singleton(
>         new ClientQuotaAlteration(
>             new ClientQuotaEntity(Collections.singletonMap(ClientQuotaEntity.CLIENT_ID, "default")),
>             Collections.singletonList(new ClientQuotaAlteration.Op("request_percentage", 0.02)))
>     )).all().get();
>     Map<ClientQuotaEntity, Map<String, Double>> clientQuotaEntityMapMap = admin.describeClientQuotas(
>         ClientQuotaFilter.contains(Collections.singletonList(ClientQuotaFilterComponent.ofDefaultEntity(ClientQuotaEntity.CLIENT_ID)))
>     ).entities().get();
>     System.out.println(clientQuotaEntityMapMap);
>   }
> }
> {code}
> The code should have returned request_percentage=0.02, but it returned an empty map.
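> For reference, the ZK path stores such keys sanitized via the existing
> {{Sanitizer}} utility; an illustrative round-trip (sketch only):
> {code:java}
> // URL-style encoding is applied to quota config keys on the ZK path.
> String sanitized = Sanitizer.sanitize("client@id");  // -> "client%40id"
> String original  = Sanitizer.desanitize(sanitized);  // -> "client@id"
> {code}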



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #1212

2022-09-08 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 503685 lines...]
[2022-09-09T02:05:19.464Z] 
[2022-09-09T02:05:19.464Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorManyThreadsPerClient STARTED
[2022-09-09T02:05:21.267Z] 
[2022-09-09T02:05:21.267Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorManyThreadsPerClient PASSED
[2022-09-09T02:05:21.267Z] 
[2022-09-09T02:05:21.267Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargePartitionCount STARTED
[2022-09-09T02:06:03.033Z] 
[2022-09-09T02:06:03.033Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargePartitionCount PASSED
[2022-09-09T02:06:03.033Z] 
[2022-09-09T02:06:03.033Z] StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargePartitionCount STARTED
[2022-09-09T02:06:38.771Z] 
[2022-09-09T02:06:38.771Z] StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargePartitionCount PASSED
[2022-09-09T02:06:38.771Z] 
[2022-09-09T02:06:38.771Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorManyStandbys STARTED
[2022-09-09T02:06:49.020Z] 
[2022-09-09T02:06:49.020Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorManyStandbys PASSED
[2022-09-09T02:06:49.020Z] 
[2022-09-09T02:06:49.020Z] StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyStandbys STARTED
[2022-09-09T02:07:38.047Z] 
[2022-09-09T02:07:38.047Z] StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyStandbys PASSED
[2022-09-09T02:07:38.047Z] 
[2022-09-09T02:07:38.047Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargeNumConsumers STARTED
[2022-09-09T02:07:38.048Z] 
[2022-09-09T02:07:38.048Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargeNumConsumers PASSED
[2022-09-09T02:07:38.048Z] 
[2022-09-09T02:07:38.048Z] StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargeNumConsumers STARTED
[2022-09-09T02:07:40.766Z] 
[2022-09-09T02:07:40.766Z] StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargeNumConsumers PASSED
[2022-09-09T02:07:40.766Z] 
[2022-09-09T02:07:40.766Z] AdjustStreamThreadCountTest > 
testConcurrentlyAccessThreads() STARTED
[2022-09-09T02:07:44.457Z] 
[2022-09-09T02:07:44.457Z] AdjustStreamThreadCountTest > 
testConcurrentlyAccessThreads() PASSED
[2022-09-09T02:07:44.457Z] 
[2022-09-09T02:07:44.457Z] AdjustStreamThreadCountTest > 
shouldResizeCacheAfterThreadReplacement() STARTED
[2022-09-09T02:07:48.147Z] 
[2022-09-09T02:07:48.147Z] AdjustStreamThreadCountTest > 
shouldResizeCacheAfterThreadReplacement() PASSED
[2022-09-09T02:07:48.147Z] 
[2022-09-09T02:07:48.147Z] AdjustStreamThreadCountTest > 
shouldAddAndRemoveThreadsMultipleTimes() STARTED
[2022-09-09T02:07:54.062Z] 
[2022-09-09T02:07:54.062Z] AdjustStreamThreadCountTest > 
shouldAddAndRemoveThreadsMultipleTimes() PASSED
[2022-09-09T02:07:54.062Z] 
[2022-09-09T02:07:54.062Z] AdjustStreamThreadCountTest > 
shouldnNotRemoveStreamThreadWithinTimeout() STARTED
[2022-09-09T02:07:55.032Z] 
[2022-09-09T02:07:55.033Z] AdjustStreamThreadCountTest > 
shouldnNotRemoveStreamThreadWithinTimeout() PASSED
[2022-09-09T02:07:55.033Z] 
[2022-09-09T02:07:55.033Z] AdjustStreamThreadCountTest > 
shouldAddAndRemoveStreamThreadsWhileKeepingNamesCorrect() STARTED
[2022-09-09T02:08:17.530Z] 
[2022-09-09T02:08:17.530Z] AdjustStreamThreadCountTest > 
shouldAddAndRemoveStreamThreadsWhileKeepingNamesCorrect() PASSED
[2022-09-09T02:08:17.530Z] 
[2022-09-09T02:08:17.530Z] AdjustStreamThreadCountTest > 
shouldAddStreamThread() STARTED
[2022-09-09T02:08:20.233Z] 
[2022-09-09T02:08:20.233Z] AdjustStreamThreadCountTest > 
shouldAddStreamThread() PASSED
[2022-09-09T02:08:20.233Z] 
[2022-09-09T02:08:20.233Z] AdjustStreamThreadCountTest > 
shouldRemoveStreamThreadWithStaticMembership() STARTED
[2022-09-09T02:08:23.904Z] 
[2022-09-09T02:08:23.904Z] AdjustStreamThreadCountTest > 
shouldRemoveStreamThreadWithStaticMembership() PASSED
[2022-09-09T02:08:23.904Z] 
[2022-09-09T02:08:23.904Z] AdjustStreamThreadCountTest > 
shouldRemoveStreamThread() STARTED
[2022-09-09T02:08:28.809Z] 
[2022-09-09T02:08:28.809Z] AdjustStreamThreadCountTest > 
shouldRemoveStreamThread() PASSED
[2022-09-09T02:08:28.809Z] 
[2022-09-09T02:08:28.809Z] AdjustStreamThreadCountTest > 
shouldResizeCacheAfterThreadRemovalTimesOut() STARTED
[2022-09-09T02:08:29.774Z] 
[2022-09-09T02:08:29.774Z] AdjustStreamThreadCountTest > 
shouldResizeCacheAfterThreadRemovalTimesOut() PASSED
[2022-09-09T02:08:33.445Z] 
[2022-09-09T02:08:33.445Z] FineGrainedAutoResetIntegrationTest > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithGlobalAutoOffsetResetLatest()
 STARTED
[2022-09-09T02:08:33.445Z] 
[2022-09-09T02:08:33.445Z] FineGrainedAutoResetIntegrationTest > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithGlobalAutoOffsetResetLatest()
 PASSED
[2022-09-09T02:08:33.445Z] 
[2022-09-09T02:08:33.445Z] FineGrainedAutoResetIntegrationTest 

Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.3 #62

2022-09-08 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 571971 lines...]
[2022-09-09T02:00:20.285Z] 
[2022-09-09T02:00:20.285Z] 
org.apache.kafka.streams.integration.StreamsUncaughtExceptionHandlerIntegrationTest
 > shouldShutDownClientIfGlobalStreamThreadWantsToReplaceThread PASSED
[2022-09-09T02:00:20.285Z] 
[2022-09-09T02:00:20.285Z] 
org.apache.kafka.streams.integration.StreamsUncaughtExceptionHandlerIntegrationTest
 > shouldReplaceSingleThread STARTED
[2022-09-09T02:00:22.415Z] 
[2022-09-09T02:00:22.415Z] Exception: java.lang.AssertionError thrown from the 
UncaughtExceptionHandler in thread 
"appId_StreamsUncaughtExceptionHandlerIntegrationTestnull-de23a241-e506-4dfe-88c0-3ece36693dc0-StreamThread-1"
[2022-09-09T02:00:23.656Z] 
[2022-09-09T02:00:23.656Z] 
org.apache.kafka.streams.integration.StreamsUncaughtExceptionHandlerIntegrationTest
 > shouldReplaceSingleThread PASSED
[2022-09-09T02:00:23.656Z] 
[2022-09-09T02:00:23.656Z] 
org.apache.kafka.streams.integration.StreamsUncaughtExceptionHandlerIntegrationTest
 > shouldShutdownMultipleThreadApplication STARTED
[2022-09-09T02:00:30.654Z] 
[2022-09-09T02:00:30.654Z] 
org.apache.kafka.streams.integration.StreamsUncaughtExceptionHandlerIntegrationTest
 > shouldShutdownMultipleThreadApplication PASSED
[2022-09-09T02:00:30.654Z] 
[2022-09-09T02:00:30.654Z] 
org.apache.kafka.streams.integration.StreamsUncaughtExceptionHandlerIntegrationTest
 > shouldShutdownClient STARTED
[2022-09-09T02:00:34.147Z] 
[2022-09-09T02:00:34.147Z] 
org.apache.kafka.streams.integration.StreamsUncaughtExceptionHandlerIntegrationTest
 > shouldShutdownClient PASSED
[2022-09-09T02:00:34.147Z] 
[2022-09-09T02:00:34.147Z] 
org.apache.kafka.streams.integration.StreamsUncaughtExceptionHandlerIntegrationTest
 > shouldShutdownSingleThreadApplication STARTED
[2022-09-09T02:00:41.738Z] 
[2022-09-09T02:00:41.738Z] 
org.apache.kafka.streams.integration.StreamsUncaughtExceptionHandlerIntegrationTest
 > shouldShutdownSingleThreadApplication PASSED
[2022-09-09T02:00:41.738Z] 
[2022-09-09T02:00:41.738Z] 
org.apache.kafka.streams.integration.StreamsUncaughtExceptionHandlerIntegrationTest
 > shouldEmitSameRecordAfterFailover STARTED
[2022-09-09T02:00:46.449Z] 
[2022-09-09T02:00:46.449Z] 
org.apache.kafka.streams.integration.StreamsUncaughtExceptionHandlerIntegrationTest
 > shouldEmitSameRecordAfterFailover PASSED
[2022-09-09T02:00:47.410Z] 
[2022-09-09T02:00:47.411Z] 
org.apache.kafka.streams.integration.SuppressionDurabilityIntegrationTest > 
shouldRecoverBufferAfterShutdown[at_least_once] STARTED
[2022-09-09T02:01:39.991Z] 
[2022-09-09T02:01:39.991Z] 
org.apache.kafka.streams.integration.SuppressionDurabilityIntegrationTest > 
shouldRecoverBufferAfterShutdown[at_least_once] PASSED
[2022-09-09T02:01:39.991Z] 
[2022-09-09T02:01:39.991Z] 
org.apache.kafka.streams.integration.SuppressionDurabilityIntegrationTest > 
shouldRecoverBufferAfterShutdown[exactly_once] STARTED
[2022-09-09T02:02:31.499Z] 
[2022-09-09T02:02:31.500Z] 
org.apache.kafka.streams.integration.SuppressionDurabilityIntegrationTest > 
shouldRecoverBufferAfterShutdown[exactly_once] PASSED
[2022-09-09T02:02:31.500Z] 
[2022-09-09T02:02:31.500Z] 
org.apache.kafka.streams.integration.SuppressionDurabilityIntegrationTest > 
shouldRecoverBufferAfterShutdown[exactly_once_v2] STARTED
[2022-09-09T02:03:23.910Z] 
[2022-09-09T02:03:23.911Z] 
org.apache.kafka.streams.integration.SuppressionDurabilityIntegrationTest > 
shouldRecoverBufferAfterShutdown[exactly_once_v2] PASSED
[2022-09-09T02:03:23.911Z] 
[2022-09-09T02:03:23.911Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftInner[caching enabled = true] STARTED
[2022-09-09T02:03:28.618Z] 
[2022-09-09T02:03:28.618Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftInner[caching enabled = true] PASSED
[2022-09-09T02:03:28.618Z] 
[2022-09-09T02:03:28.618Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftOuter[caching enabled = true] STARTED
[2022-09-09T02:03:36.377Z] 
[2022-09-09T02:03:36.377Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftOuter[caching enabled = true] PASSED
[2022-09-09T02:03:36.377Z] 
[2022-09-09T02:03:36.377Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftLeft[caching enabled = true] STARTED
[2022-09-09T02:03:42.077Z] 
[2022-09-09T02:03:42.077Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftLeft[caching enabled = true] PASSED
[2022-09-09T02:03:42.077Z] 
[2022-09-09T02:03:42.077Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterLeft[caching enabled = true] STARTED
[2022-09-09T02:03:49.742Z] 
[2022-09-09T02:03:49.742Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterLeft[caching enabled = true] PASSED
[2022-09-09T02:03:49.742Z] 

[jira] [Created] (KAFKA-14210) client quota config key is sanitized in Kraft broker

2022-09-08 Thread dengziming (Jira)
dengziming created KAFKA-14210:
--

 Summary:  client quota config key is sanitized in Kraft 
broker
 Key: KAFKA-14210
 URL: https://issues.apache.org/jira/browse/KAFKA-14210
 Project: Kafka
  Issue Type: Improvement
Reporter: dengziming
Assignee: dengziming


Update client quota default config successfully:

root@7604c498c154:/opt/kafka-3.2.0# bin/kafka-configs.sh --bootstrap-server 
localhost:9092 --alter --entity-type clients --entity-default --add-config 
REQUEST_PERCENTAGE_OVERRIDE_CONFIG=0.01

Describe client quota default config, but there is no output:
Only quota configs can be added for 'clients' using --bootstrap-server. 
Unexpected config names: Set(REQUEST_PERCENTAGE_OVERRIDE_CONFIG)
root@7604c498c154:/opt/kafka-3.2.0# bin/kafka-configs.sh --bootstrap-server 
localhost:9092 --describe --entity-type clients --entity-default

The reason is that we sanitize the key in ZK mode but don't sanitize it in
KRaft mode.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-857: Streaming recursion in Kafka Streams

2022-09-08 Thread Guozhang Wang
Hello Nick,

Thanks for the patient explanations. I'm now convinced that we should
indeed not exclude repartitioning inside the recursive operator, and in
fact we cannot programmatically forbid it via the UnaryOperator itself anyway.

Just a few nits on the page itself:

1) Could you still include in the example code a manual repartitioning step
inside the recursive step? It would illustrate that as long as the final
stream's key-value types match those of the starting stream, we can
include arbitrary intermediate topics in the middle.
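
For illustration (assuming the KIP's UnaryOperator-style API; the names and
exact signature here are hypothetical), such an example could look like:

```java
// Hypothetical sketch, not the KIP's final code: the recursive step
// re-keys and repartitions internally, but returns the same key/value
// types as the starting stream.
KStream<Long, Long> result = input.recursively(s ->
    s.map((k, v) -> KeyValue.pair(v % 10, v))  // the key changes here...
     .repartition()                            // ...so repartition inside
     .filter((k, v) -> v > 1)
     .mapValues(v -> v / 2));                  // back to <Long, Long>
```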

2) Regarding the requirement that the unary operator be terminating: I have
seen cases of iterative streaming where the iterations do not necessarily
require termination, and instead rely on a follow-up step (e.g. a windowed
aggregation) to emit final results. So I'm wondering whether this
requirement -- which we cannot enforce programmatically anyway -- is just
to make sure that the processing space does not explode (which seems like
an implementation detail that does not necessarily have to be exposed to
and required of users), or whether it has any correlation to the processing
semantics.

Could we, e.g., add another code example similar to this one:
https://github.com/apache/flink/blob/master/flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/iteration/IterateExample.java#L87-L117

The main purpose is to make sure, and to illustrate, that the proposed API
can cover various cases with both terminating and potentially
non-terminating operators.


Other than that, I do not have any further comments. Great job!

Guozhang


On Tue, Sep 6, 2022 at 10:25 AM Nick Telford  wrote:

> The more I think about this, the more I think that automatic repartitioning
> is not required in the "recursively" method itself. I've removed references
> to this from the KIP, which further simplifies everything.
>
> I don't see any need to restrict users from repartitioning, either before,
> after or inside the "recursively" method. I can't see a scenario where the
> recursion would cause problems with it.
>
> Nick
>
> On Tue, 6 Sept 2022 at 18:08, Nick Telford  wrote:
>
> > Hi Guozhang,
> >
> > I mentioned this in the "Rejected Alternatives" section. Repartitioning
> > gives us several significant advantages over using an explicit topic and
> > "to":
> >
> >    - Repartition topics are automatically created and managed by the
> >    Streams runtime; explicit topics have to be created and managed by the user.
> >    - Repartition topics have no retention criteria and instead purge
> >    records once consumed, which prevents data loss. Explicit topics need
> >    retention criteria, which have to be set large enough to avoid data loss,
> >    often wasting considerable resources.
> >    - The "recursively" method requires significantly less code than
> >    recursion via an explicit topic, and is significantly easier to understand.
> >
> > Ultimately, I don't think repartitioning inside the unary operator adds
> > much complexity to the implementation. Certainly no more than other DSL
> > operations.
> >
> > Regards,
> > Nick
> >
> > On Tue, 6 Sept 2022 at 17:28, Guozhang Wang  wrote:
> >
> >> Hello Nick,
> >>
> >> Thanks for the re-written KIP! I read through it again, and so far I have
> >> just one quick question off the top of my head regarding repartitioning: it
> >> seems to me that when there's an intermediate topic inside the recursion
> >> step, using this new API would basically give us the same behavior as
> >> using the existing `to` APIs. Of course, with the new API the user can
> >> make it more explicit that it is supposed to be recursive, but
> >> efficiency-wise it provides no further optimizations. Is my understanding
> >> correct? If yes, I'm wondering if it's worth the complexity to allow
> >> repartitioning inside the unary operator, or whether we should just
> >> restrict the recursion to a single sub-topology.
> >>
> >>
> >> Guozhang
> >>
> >> On Tue, Sep 6, 2022 at 9:05 AM Nick Telford 
> >> wrote:
> >>
> >> > Hi everyone,
> >> >
> >> > I've re-written the KIP, with a new design that I think resolves the
> >> issues
> >> > you highlighted, and also simplifies usage.
> >> >
> >> >
> >> >
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-857%3A+Streaming+recursion+in+Kafka+Streams
> >> >
> >> > Note: I'm still working out the "automatic repartitioning" in my head,
> >> as I
> >> > don't think it's quite right. It may turn out that the additional
> >> overload
> >> > (with the Produced argument) is not necessary.
> >> >
> >> > Thanks for all your feedback so far. Let me know what you think!
> >> >
> >> > Regards,
> >> >
> >> > Nick
> >> >
> >> > On Thu, 25 Aug 2022 at 17:46, Nick Telford 
> >> wrote:
> >> >
> >> > > Hi Sophie,
> >> > >
> >> > > The reason I chose to add a new overload of "to", instead of
> creating
> >> a
> >> > > new method, is simply because I felt that "to" was 

Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #1211

2022-09-08 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 504255 lines...]
[2022-09-08T23:23:54.645Z] 
[2022-09-08T23:23:54.645Z] KStreamAggregationIntegrationTest > 
shouldCountWithInternalStore(TestInfo) PASSED
[2022-09-08T23:23:54.645Z] 
[2022-09-08T23:23:54.645Z] KStreamAggregationIntegrationTest > 
shouldCountUnlimitedWindows() STARTED
[2022-09-08T23:23:55.710Z] 
[2022-09-08T23:23:55.710Z] > Task :clients:javadoc
[2022-09-08T23:23:55.710Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk_2/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/secured/OAuthBearerLoginCallbackHandler.java:147:
 warning - Tag @link: reference not found: 
[2022-09-08T23:23:56.403Z] 
[2022-09-08T23:23:56.403Z] KStreamAggregationIntegrationTest > 
shouldCountUnlimitedWindows() PASSED
[2022-09-08T23:23:56.403Z] 
[2022-09-08T23:23:56.403Z] KStreamAggregationIntegrationTest > 
shouldReduceWindowed(TestInfo) STARTED
[2022-09-08T23:23:56.788Z] 1 warning
[2022-09-08T23:23:58.033Z] 
[2022-09-08T23:23:58.033Z] > Task :clients:javadocJar
[2022-09-08T23:23:58.033Z] > Task :clients:testJar
[2022-09-08T23:23:58.033Z] > Task :clients:testSrcJar
[2022-09-08T23:23:58.033Z] > Task 
:clients:publishMavenJavaPublicationToMavenLocal
[2022-09-08T23:23:58.033Z] > Task :clients:publishToMavenLocal
[2022-09-08T23:23:59.101Z] 
[2022-09-08T23:23:59.101Z] Deprecated Gradle features were used in this build, 
making it incompatible with Gradle 8.0.
[2022-09-08T23:23:59.101Z] 
[2022-09-08T23:23:59.101Z] You can use '--warning-mode all' to show the 
individual deprecation warnings and determine if they come from your own 
scripts or plugins.
[2022-09-08T23:23:59.101Z] 
[2022-09-08T23:23:59.101Z] See 
https://docs.gradle.org/7.5.1/userguide/command_line_interface.html#sec:command_line_warnings
[2022-09-08T23:23:59.101Z] 
[2022-09-08T23:23:59.101Z] Execution optimizations have been disabled for 2 
invalid unit(s) of work during this build to ensure correctness.
[2022-09-08T23:23:59.101Z] Please consult deprecation warnings for more details.
[2022-09-08T23:23:59.101Z] 
[2022-09-08T23:23:59.101Z] BUILD SUCCESSFUL in 43s
[2022-09-08T23:23:59.101Z] 79 actionable tasks: 33 executed, 46 up-to-date
[Pipeline] sh
[2022-09-08T23:24:00.252Z] 
[2022-09-08T23:24:00.252Z] KStreamAggregationIntegrationTest > 
shouldReduceWindowed(TestInfo) PASSED
[2022-09-08T23:24:00.252Z] 
[2022-09-08T23:24:00.252Z] KStreamAggregationIntegrationTest > 
shouldCountSessionWindows() STARTED
[2022-09-08T23:24:01.285Z] 
[2022-09-08T23:24:01.285Z] KStreamAggregationIntegrationTest > 
shouldCountSessionWindows() PASSED
[2022-09-08T23:24:01.285Z] 
[2022-09-08T23:24:01.285Z] KStreamAggregationIntegrationTest > 
shouldAggregateWindowed(TestInfo) STARTED
[2022-09-08T23:24:02.670Z] + grep ^version= gradle.properties
[2022-09-08T23:24:02.670Z] + cut -d= -f 2
[Pipeline] dir
[2022-09-08T23:24:03.744Z] Running in 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk_2/streams/quickstart
[Pipeline] {
[Pipeline] sh
[2022-09-08T23:24:05.000Z] 
[2022-09-08T23:24:05.000Z] KStreamAggregationIntegrationTest > 
shouldAggregateWindowed(TestInfo) PASSED
[2022-09-08T23:24:06.427Z] + mvn clean install -Dgpg.skip
[2022-09-08T23:24:08.653Z] [INFO] Scanning for projects...
[2022-09-08T23:24:09.934Z] [INFO] 

[2022-09-08T23:24:09.934Z] [INFO] Reactor Build Order:
[2022-09-08T23:24:09.934Z] [INFO] 
[2022-09-08T23:24:09.934Z] [INFO] Kafka Streams :: Quickstart   
 [pom]
[2022-09-08T23:24:09.934Z] [INFO] streams-quickstart-java   
 [maven-archetype]
[2022-09-08T23:24:09.934Z] [INFO] 
[2022-09-08T23:24:09.934Z] [INFO] < 
org.apache.kafka:streams-quickstart >-
[2022-09-08T23:24:09.934Z] [INFO] Building Kafka Streams :: Quickstart 
3.4.0-SNAPSHOT[1/2]
[2022-09-08T23:24:09.934Z] [INFO] [ pom 
]-
[2022-09-08T23:24:09.934Z] [INFO] 
[2022-09-08T23:24:09.934Z] [INFO] --- maven-clean-plugin:3.0.0:clean 
(default-clean) @ streams-quickstart ---
[2022-09-08T23:24:09.934Z] [INFO] 
[2022-09-08T23:24:09.934Z] [INFO] --- maven-remote-resources-plugin:1.5:process 
(process-resource-bundles) @ streams-quickstart ---
[2022-09-08T23:24:10.900Z] [INFO] 
[2022-09-08T23:24:10.900Z] [INFO] --- maven-site-plugin:3.5.1:attach-descriptor 
(attach-descriptor) @ streams-quickstart ---
[2022-09-08T23:24:11.279Z] streams-0: SMOKE-TEST-CLIENT-CLOSED
[2022-09-08T23:24:11.279Z] streams-2: SMOKE-TEST-CLIENT-CLOSED
[2022-09-08T23:24:11.279Z] streams-1: SMOKE-TEST-CLIENT-CLOSED
[2022-09-08T23:24:11.279Z] streams-3: SMOKE-TEST-CLIENT-CLOSED
[2022-09-08T23:24:11.279Z] streams-4: SMOKE-TEST-CLIENT-CLOSED
[2022-09-08T23:24:11.976Z] [INFO] 
[2022-09-08T23:24:11.976Z] [INFO] --- 

Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.3 #61

2022-09-08 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 572655 lines...]
[2022-09-08T18:03:11.401Z] 
[2022-09-08T18:03:11.401Z] 
org.apache.kafka.streams.processor.internals.HandlingSourceTopicDeletionIntegrationTest
 > shouldThrowErrorAfterSourceTopicDeleted STARTED
[2022-09-08T18:03:16.656Z] 
[2022-09-08T18:03:16.656Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterOuter[caching enabled = false] PASSED
[2022-09-08T18:03:16.656Z] 
[2022-09-08T18:03:16.656Z] 
org.apache.kafka.streams.integration.TaskMetadataIntegrationTest > 
shouldReportCorrectEndOffsetInformation STARTED
[2022-09-08T18:03:17.703Z] 
[2022-09-08T18:03:17.703Z] 
org.apache.kafka.streams.integration.TaskMetadataIntegrationTest > 
shouldReportCorrectEndOffsetInformation PASSED
[2022-09-08T18:03:17.703Z] 
[2022-09-08T18:03:17.703Z] 
org.apache.kafka.streams.integration.TaskMetadataIntegrationTest > 
shouldReportCorrectCommittedOffsetInformation STARTED
[2022-09-08T18:03:18.401Z] 
[2022-09-08T18:03:18.402Z] 
org.apache.kafka.streams.processor.internals.HandlingSourceTopicDeletionIntegrationTest
 > shouldThrowErrorAfterSourceTopicDeleted PASSED
[2022-09-08T18:03:18.925Z] 
[2022-09-08T18:03:18.925Z] 
org.apache.kafka.streams.integration.TaskMetadataIntegrationTest > 
shouldReportCorrectCommittedOffsetInformation PASSED
[2022-09-08T18:03:18.925Z] 
[2022-09-08T18:03:18.925Z] 
org.apache.kafka.streams.processor.internals.HandlingSourceTopicDeletionIntegrationTest
 > shouldThrowErrorAfterSourceTopicDeleted STARTED
[2022-09-08T18:03:20.498Z] 
[2022-09-08T18:03:20.498Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorLargeNumConsumers STARTED
[2022-09-08T18:03:25.806Z] 
[2022-09-08T18:03:25.806Z] 
org.apache.kafka.streams.processor.internals.HandlingSourceTopicDeletionIntegrationTest
 > shouldThrowErrorAfterSourceTopicDeleted PASSED
[2022-09-08T18:03:27.927Z] 
[2022-09-08T18:03:27.927Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorLargeNumConsumers STARTED
[2022-09-08T18:03:58.798Z] 
[2022-09-08T18:03:58.798Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorLargeNumConsumers PASSED
[2022-09-08T18:03:58.798Z] 
[2022-09-08T18:03:58.798Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorLargePartitionCount STARTED
[2022-09-08T18:03:58.798Z] 
[2022-09-08T18:03:58.798Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorLargePartitionCount PASSED
[2022-09-08T18:03:58.798Z] 
[2022-09-08T18:03:58.798Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyThreadsPerClient STARTED
[2022-09-08T18:03:58.798Z] 
[2022-09-08T18:03:58.798Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyThreadsPerClient PASSED
[2022-09-08T18:03:58.798Z] 
[2022-09-08T18:03:58.798Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testStickyTaskAssignorManyStandbys STARTED
[2022-09-08T18:04:00.096Z] 
[2022-09-08T18:04:00.096Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorLargeNumConsumers PASSED
[2022-09-08T18:04:00.096Z] 
[2022-09-08T18:04:00.096Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorLargePartitionCount STARTED
[2022-09-08T18:04:01.269Z] 
[2022-09-08T18:04:01.269Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorLargePartitionCount PASSED
[2022-09-08T18:04:01.269Z] 
[2022-09-08T18:04:01.269Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyThreadsPerClient STARTED
[2022-09-08T18:04:02.500Z] 
[2022-09-08T18:04:02.500Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyThreadsPerClient PASSED
[2022-09-08T18:04:02.500Z] 
[2022-09-08T18:04:02.500Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testStickyTaskAssignorManyStandbys STARTED
[2022-09-08T18:04:03.798Z] 
[2022-09-08T18:04:03.798Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testStickyTaskAssignorManyStandbys PASSED
[2022-09-08T18:04:03.798Z] 
[2022-09-08T18:04:03.798Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testStickyTaskAssignorManyThreadsPerClient STARTED
[2022-09-08T18:04:04.971Z] 
[2022-09-08T18:04:04.971Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testStickyTaskAssignorManyThreadsPerClient PASSED

[jira] [Resolved] (KAFKA-14204) QuorumController must correctly handle overly large batches

2022-09-08 Thread Jose Armando Garcia Sancio (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jose Armando Garcia Sancio resolved KAFKA-14204.

Resolution: Fixed

> QuorumController must correctly handle overly large batches
> ---
>
> Key: KAFKA-14204
> URL: https://issues.apache.org/jira/browse/KAFKA-14204
> Project: Kafka
>  Issue Type: Bug
>  Components: controller, kraft
>Reporter: Colin McCabe
>Assignee: Colin McCabe
>Priority: Blocker
> Fix For: 3.3.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #1210

2022-09-08 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 502535 lines...]
[2022-09-08T20:58:08.436Z] 
[2022-09-08T20:58:08.436Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorManyThreadsPerClient PASSED
[2022-09-08T20:58:08.436Z] 
[2022-09-08T20:58:08.436Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargePartitionCount STARTED
[2022-09-08T20:58:32.806Z] 
[2022-09-08T20:58:32.806Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargePartitionCount PASSED
[2022-09-08T20:58:32.806Z] 
[2022-09-08T20:58:32.806Z] StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargePartitionCount STARTED
[2022-09-08T20:58:52.548Z] 
[2022-09-08T20:58:52.548Z] StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargePartitionCount PASSED
[2022-09-08T20:58:52.548Z] 
[2022-09-08T20:58:52.548Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorManyStandbys STARTED
[2022-09-08T20:58:59.286Z] 
[2022-09-08T20:58:59.286Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorManyStandbys PASSED
[2022-09-08T20:58:59.286Z] 
[2022-09-08T20:58:59.286Z] StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyStandbys STARTED
[2022-09-08T20:59:18.005Z] 
[2022-09-08T20:59:18.005Z] StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyStandbys PASSED
[2022-09-08T20:59:18.005Z] 
[2022-09-08T20:59:18.005Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargeNumConsumers STARTED
[2022-09-08T20:59:19.070Z] 
[2022-09-08T20:59:19.070Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargeNumConsumers PASSED
[2022-09-08T20:59:19.070Z] 
[2022-09-08T20:59:19.070Z] StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargeNumConsumers STARTED
[2022-09-08T20:59:20.134Z] 
[2022-09-08T20:59:20.134Z] StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargeNumConsumers PASSED
[2022-09-08T20:59:21.198Z] 
[2022-09-08T20:59:21.198Z] AdjustStreamThreadCountTest > 
testConcurrentlyAccessThreads() STARTED
[2022-09-08T20:59:23.490Z] 
[2022-09-08T20:59:23.490Z] AdjustStreamThreadCountTest > 
testConcurrentlyAccessThreads() PASSED
[2022-09-08T20:59:23.490Z] 
[2022-09-08T20:59:23.490Z] AdjustStreamThreadCountTest > 
shouldResizeCacheAfterThreadReplacement() STARTED
[2022-09-08T20:59:28.378Z] 
[2022-09-08T20:59:28.378Z] AdjustStreamThreadCountTest > 
shouldResizeCacheAfterThreadReplacement() PASSED
[2022-09-08T20:59:28.378Z] 
[2022-09-08T20:59:28.378Z] AdjustStreamThreadCountTest > 
shouldAddAndRemoveThreadsMultipleTimes() STARTED
[2022-09-08T20:59:39.934Z] 
[2022-09-08T20:59:39.934Z] AdjustStreamThreadCountTest > 
shouldAddAndRemoveThreadsMultipleTimes() PASSED
[2022-09-08T20:59:39.934Z] 
[2022-09-08T20:59:39.934Z] AdjustStreamThreadCountTest > 
shouldnNotRemoveStreamThreadWithinTimeout() STARTED
[2022-09-08T20:59:41.071Z] 
[2022-09-08T20:59:41.071Z] AdjustStreamThreadCountTest > 
shouldnNotRemoveStreamThreadWithinTimeout() PASSED
[2022-09-08T20:59:41.071Z] 
[2022-09-08T20:59:41.071Z] AdjustStreamThreadCountTest > 
shouldAddAndRemoveStreamThreadsWhileKeepingNamesCorrect() STARTED
[2022-09-08T21:00:02.791Z] 
[2022-09-08T21:00:02.791Z] AdjustStreamThreadCountTest > 
shouldAddAndRemoveStreamThreadsWhileKeepingNamesCorrect() PASSED
[2022-09-08T21:00:02.791Z] 
[2022-09-08T21:00:02.791Z] AdjustStreamThreadCountTest > 
shouldAddStreamThread() STARTED
[2022-09-08T21:00:06.060Z] 
[2022-09-08T21:00:06.060Z] AdjustStreamThreadCountTest > 
shouldAddStreamThread() PASSED
[2022-09-08T21:00:06.060Z] 
[2022-09-08T21:00:06.060Z] AdjustStreamThreadCountTest > 
shouldRemoveStreamThreadWithStaticMembership() STARTED
[2022-09-08T21:00:10.838Z] 
[2022-09-08T21:00:10.838Z] AdjustStreamThreadCountTest > 
shouldRemoveStreamThreadWithStaticMembership() PASSED
[2022-09-08T21:00:10.838Z] 
[2022-09-08T21:00:10.838Z] AdjustStreamThreadCountTest > 
shouldRemoveStreamThread() STARTED
[2022-09-08T21:00:15.568Z] 
[2022-09-08T21:00:15.568Z] AdjustStreamThreadCountTest > 
shouldRemoveStreamThread() PASSED
[2022-09-08T21:00:15.569Z] 
[2022-09-08T21:00:15.569Z] AdjustStreamThreadCountTest > 
shouldResizeCacheAfterThreadRemovalTimesOut() STARTED
[2022-09-08T21:00:16.705Z] 
[2022-09-08T21:00:16.705Z] AdjustStreamThreadCountTest > 
shouldResizeCacheAfterThreadRemovalTimesOut() PASSED
[2022-09-08T21:00:19.812Z] 
[2022-09-08T21:00:19.813Z] FineGrainedAutoResetIntegrationTest > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithGlobalAutoOffsetResetLatest()
 STARTED
[2022-09-08T21:00:24.812Z] 
[2022-09-08T21:00:24.812Z] FineGrainedAutoResetIntegrationTest > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithGlobalAutoOffsetResetLatest()
 PASSED
[2022-09-08T21:00:24.812Z] 
[2022-09-08T21:00:24.812Z] FineGrainedAutoResetIntegrationTest > 
shouldThrowExceptionOverlappingPattern() STARTED
[2022-09-08T21:00:24.812Z] 
[2022-09-08T21:00:24.812Z] FineGrainedAutoResetIntegrationTest 

Re: [VOTE] KIP-862: Self-join optimization for stream-stream joins

2022-09-08 Thread Guozhang Wang
Thanks Vicky,

I read through the KIP again and it looks good to me. Just a quick question
regarding the public config changes: you mentioned "No public interfaces
will be impacted. The config TOPOLOGY_OPTIMIZATION_CONFIG will be extended
to accept a list of optimization rule configs in addition to the global
values 'all' and 'none'". But there are no new value strings mentioned in
this KIP, so does that mean we will apply this optimization only when `all`
is specified in the config?


Guozhang


On Thu, Sep 8, 2022 at 12:02 PM Vasiliki Papavasileiou
 wrote:

> Hello everyone,
>
> I'd like to open the vote for KIP-862, which proposes to optimize
> stream-stream self-joins by using a single state store for the join.
>
> The proposal is here:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-862%3A+Self-join+optimization+for+stream-stream+joins
>
> Thanks to all who reviewed the proposal, and thanks in advance for taking
> the time to vote!
>
> Thank you,
> Vicky
>


-- 
-- Guozhang


[jira] [Resolved] (KAFKA-14143) Exactly-once source system tests

2022-09-08 Thread Chris Egerton (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Egerton resolved KAFKA-14143.
---
Resolution: Fixed

> Exactly-once source system tests
> 
>
> Key: KAFKA-14143
> URL: https://issues.apache.org/jira/browse/KAFKA-14143
> Project: Kafka
>  Issue Type: Task
>  Components: KafkaConnect
>Affects Versions: 3.3.0
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Major
> Fix For: 3.3.0
>
>
> System tests for the exactly-once source connector support introduced in 
> [KIP-618|https://cwiki.apache.org/confluence/display/KAFKA/KIP-618%3A+Exactly-Once+Support+for+Source+Connectors]
>  / KAFKA-1.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[VOTE] KIP-862: Self-join optimization for stream-stream joins

2022-09-08 Thread Vasiliki Papavasileiou
Hello everyone,

I'd like to open the vote for KIP-862, which proposes to optimize
stream-stream self-joins by using a single state store for the join.

The proposal is here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-862%3A+Self-join+optimization+for+stream-stream+joins

Thanks to all who reviewed the proposal, and thanks in advance for taking
the time to vote!

Thank you,
Vicky


[jira] [Created] (KAFKA-14209) Optimize stream stream self join to use single state store

2022-09-08 Thread Vicky Papavasileiou (Jira)
Vicky Papavasileiou created KAFKA-14209:
---

 Summary: Optimize stream stream self join to use single state store
 Key: KAFKA-14209
 URL: https://issues.apache.org/jira/browse/KAFKA-14209
 Project: Kafka
  Issue Type: Improvement
Reporter: Vicky Papavasileiou


For stream-stream joins that join the same source, we can omit one state store,
since both stores would contain the same data.
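
For illustration, a DSL self-join looks like this (a sketch; both sides read
the same stream, and topic/variable names are hypothetical):
{code:java}
KStream<String, String> clicks = builder.stream("clicks");
// Both join buffers would hold identical data, so one store can be omitted.
KStream<String, String> joined = clicks.join(
    clicks,
    (left, right) -> left + "," + right,
    JoinWindows.ofTimeDifferenceWithNoGrace(Duration.ofSeconds(10)));
{code}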



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] KIP-854 Separate configuration for producer ID expiry

2022-09-08 Thread Justine Olshan
Hi all,

Thanks for voting on this KIP!

Taking a look at this change, I realized that rolling the cluster to apply
the configuration may be difficult when the issue is ongoing. The brokers
will become overloaded and it will be hard to roll. Since the configuration
can be dynamic, it makes sense to make it so. This way, we can minimize
impact by quickly changing the expiration (and potentially reverting it
back once the issue is resolved). I've updated the KIP to reflect that the
configuration should be dynamic rather than static, and that it will be
updated cluster-wide.
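
For example, once the config is dynamic, the expiration could be lowered
cluster-wide without a roll, along these lines (illustrative command; the
config name is the one proposed in the KIP):

bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter \
    --entity-type brokers --entity-default \
    --add-config producer.id.expiration.ms=3600000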

Let me know if there are any issues with this change.

Thanks,
Justine

On Thu, Aug 11, 2022 at 4:07 PM Justine Olshan  wrote:

> Hey all,
> Thanks for the votes!
> To summarize, we have 4 binding votes:
>
>- David Jacot
>- Jason Gustafson
>- Tom Bentley
>- Luke Chen
>
> and one non-binding vote:
>
>- Sagar
>
> This meets the requirements for the KIP to be accepted! Thanks again!
> Justine
>
> On Thu, Aug 11, 2022 at 2:13 AM Luke Chen  wrote:
>
>> Hi Justine,
>>
>> Thanks for the KIP.
>> +1 (binding) from me.
>>
>> Luke
>>
>> On Wed, Aug 10, 2022 at 1:44 AM Tom Bentley  wrote:
>>
>> > Hi Justine,
>> >
>> > Thanks again for the KIP, +1 (binding).
>> >
>> >
>> >
>> > On Tue, 9 Aug 2022 at 18:09, Jason Gustafson > >
>> > wrote:
>> >
>> > > Thanks Justine, +1 from me.
>> > >
>> > > On Tue, Aug 9, 2022 at 1:12 AM Sagar 
>> wrote:
>> > >
>> > > > Thanks for the KIP.
>> > > >
>> > > > +1(non-binding)
>> > > >
>> > > > Sagar.
>> > > >
>> > > > On Tue, Aug 9, 2022 at 1:13 PM David Jacot
>> > > >
>> > > > wrote:
>> > > >
>> > > > > Thanks for the KIP, Justine. The proposal makes sense to me. I am
>> +1
>> > > > > (binding).
>> > > > >
>> > > > > Cheers,
>> > > > > David
>> > > > >
>> > > > > On Mon, Aug 8, 2022 at 6:18 PM Justine Olshan
>> > > > >  wrote:
>> > > > > >
>> > > > > > Hi all,
>> > > > > > I'd like to start a vote for KIP-854: Separate configuration for
>> > > > producer
>> > > > > > ID expiry.
>> > > > > >
>> > > > > > KIP:
>> > > > > >
>> > > > >
>> > > >
>> > >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-854+Separate+configuration+for+producer+ID+expiry
>> > > > > > JIRA: https://issues.apache.org/jira/browse/KAFKA-14097
>> > > > > > Discussion thread:
>> > > > > >
>> https://lists.apache.org/thread/cz9x90883on98k082qd0tskj6yjhox1t
>> > > > > >
>> > > > > > Thanks,
>> > > > > > Justine
>> > > > >
>> > > >
>> > >
>> >
>>
>


Bug fix for KAFKA-13927

2022-09-08 Thread Jordan Bull
Hello,

I have a relatively simple fix for the bug reported in KAFKA-13927, in
which Connect will not commit offsets for a message that a SinkTask retries
and eventually succeeds on. Would appreciate eyes on it or feedback.

Also, CI appears not to succeed for the trunk base (and other branches
right now); is that correct? I've added test assertions for this bug and
verified that the unit tests for the SinkTask do succeed. Curious what the
course of action is if tests do not pass for the base branch in general.

Thanks,
Jordan


Re: Last sprint to finish line: Replace EasyMock/Powermock with Mockito

2022-09-08 Thread Guozhang Wang
Thanks Christo for the updates. As we are adding new unit tests, we are
also keen on using the new Mockito packages, and so far I'd say it's much
easier to use :) I will chime in to help review some of the PRs as well.


Guozhang

On Tue, Sep 6, 2022 at 11:02 PM Christo Lolov
 wrote:

> Hello!
>
> This is the (roughly) bi-weekly update on the Mockito migration.
>
> Firstly, the following PRs have been merged since the last email so thank
> you to the writers (Yash and Divij) and reviewers (Dalibor, Mickael, Yash,
> Bruno and Chris):
>
> https://github.com/apache/kafka/pull/12459
> https://github.com/apache/kafka/pull/12473
> https://github.com/apache/kafka/pull/12509
>
> Secondly, this is the latest list of PRs that are in need of a review to
> get them over the line:
>
> https://github.com/apache/kafka/pull/12409
> https://github.com/apache/kafka/pull/12418 (I need to respond to the
> comments on this one, so the first action is on me)
> https://github.com/apache/kafka/pull/12465
> https://github.com/apache/kafka/pull/12492
> https://github.com/apache/kafka/pull/12505 (I need to respond to
> Dalibor’s comment on this one, but the overall PR could use some more eyes)
> https://github.com/apache/kafka/pull/12524
> https://github.com/apache/kafka/pull/12527
>
> Lastly, I am keeping https://issues.apache.org/jira/browse/KAFKA-14133 and
> https://issues.apache.org/jira/browse/KAFKA-14132 up to date, so if
> anyone has spare bandwidth and would like to assign themselves some of the
> unassigned tests, their contributions would be welcome :)
>
> Best,
> Christo



-- 
-- Guozhang


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #1209

2022-09-08 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 421951 lines...]
[2022-09-08T17:12:48.253Z] 
[2022-09-08T17:12:48.253Z] FineGrainedAutoResetIntegrationTest > 
shouldThrowStreamsExceptionNoResetSpecified() STARTED
[2022-09-08T17:12:48.253Z] 
[2022-09-08T17:12:48.253Z] FineGrainedAutoResetIntegrationTest > 
shouldThrowStreamsExceptionNoResetSpecified() PASSED
[2022-09-08T17:12:51.538Z] 
[2022-09-08T17:12:51.538Z] KStreamAggregationIntegrationTest > 
shouldAggregateSlidingWindows(TestInfo) STARTED
[2022-09-08T17:12:51.813Z] 
[2022-09-08T17:12:51.813Z] GlobalKTableIntegrationTest > shouldGetToRunningWithOnlyGlobalTopology() STARTED
[2022-09-08T17:12:52.945Z] 
[2022-09-08T17:12:52.945Z] GlobalKTableIntegrationTest > shouldGetToRunningWithOnlyGlobalTopology() PASSED
[2022-09-08T17:12:52.945Z] 
[2022-09-08T17:12:52.945Z] GlobalKTableIntegrationTest > shouldKStreamGlobalKTableLeftJoin() STARTED
[2022-09-08T17:12:56.515Z] 
[2022-09-08T17:12:56.515Z] GlobalKTableIntegrationTest > shouldKStreamGlobalKTableLeftJoin() PASSED
[2022-09-08T17:12:56.515Z] 
[2022-09-08T17:12:56.515Z] GlobalKTableIntegrationTest > shouldRestoreGlobalInMemoryKTableOnRestart() STARTED
[2022-09-08T17:12:59.021Z] 
[2022-09-08T17:12:59.021Z] GlobalKTableIntegrationTest > shouldRestoreGlobalInMemoryKTableOnRestart() PASSED
[2022-09-08T17:12:59.021Z] 
[2022-09-08T17:12:59.021Z] GlobalKTableIntegrationTest > shouldKStreamGlobalKTableJoin() STARTED
[2022-09-08T17:12:59.064Z] 
[2022-09-08T17:12:59.065Z] KStreamAggregationIntegrationTest > shouldAggregateSlidingWindows(TestInfo) PASSED
[2022-09-08T17:12:59.065Z] 
[2022-09-08T17:12:59.065Z] KStreamAggregationIntegrationTest > shouldReduceSessionWindows() STARTED
[2022-09-08T17:13:01.172Z] 
[2022-09-08T17:13:01.172Z] KStreamAggregationIntegrationTest > shouldReduceSessionWindows() PASSED
[2022-09-08T17:13:01.172Z] 
[2022-09-08T17:13:01.172Z] KStreamAggregationIntegrationTest > shouldReduceSlidingWindows(TestInfo) STARTED
[2022-09-08T17:13:03.276Z] 
[2022-09-08T17:13:03.276Z] GlobalKTableIntegrationTest > shouldKStreamGlobalKTableJoin() PASSED
[2022-09-08T17:13:06.649Z] 
[2022-09-08T17:13:06.649Z] GlobalThreadShutDownOrderTest > shouldFinishGlobalStoreOperationOnShutDown() STARTED
[2022-09-08T17:13:06.795Z] 
[2022-09-08T17:13:06.795Z] KStreamAggregationIntegrationTest > shouldReduceSlidingWindows(TestInfo) PASSED
[2022-09-08T17:13:06.795Z] 
[2022-09-08T17:13:06.795Z] KStreamAggregationIntegrationTest > shouldReduce(TestInfo) STARTED
[2022-09-08T17:13:13.678Z] 
[2022-09-08T17:13:13.678Z] GlobalThreadShutDownOrderTest > shouldFinishGlobalStoreOperationOnShutDown() PASSED
[2022-09-08T17:13:14.230Z] 
[2022-09-08T17:13:14.230Z] KStreamAggregationIntegrationTest > shouldReduce(TestInfo) PASSED
[2022-09-08T17:13:14.230Z] 
[2022-09-08T17:13:14.230Z] KStreamAggregationIntegrationTest > shouldAggregate(TestInfo) STARTED
[2022-09-08T17:13:15.694Z] 
[2022-09-08T17:13:15.694Z] IQv2IntegrationTest > shouldFailStopped() STARTED
[2022-09-08T17:13:15.694Z] 
[2022-09-08T17:13:15.694Z] IQv2IntegrationTest > shouldFailStopped() PASSED
[2022-09-08T17:13:15.694Z] 
[2022-09-08T17:13:15.694Z] IQv2IntegrationTest > shouldNotRequireQueryHandler(TestInfo) STARTED
[2022-09-08T17:13:17.765Z] 
[2022-09-08T17:13:17.765Z] IQv2IntegrationTest > shouldNotRequireQueryHandler(TestInfo) PASSED
[2022-09-08T17:13:17.765Z] 
[2022-09-08T17:13:17.765Z] IQv2IntegrationTest > shouldFailNotStarted() STARTED
[2022-09-08T17:13:17.765Z] 
[2022-09-08T17:13:17.765Z] IQv2IntegrationTest > shouldFailNotStarted() PASSED
[2022-09-08T17:13:17.765Z] 
[2022-09-08T17:13:17.765Z] IQv2IntegrationTest > shouldFetchFromPartition() STARTED
[2022-09-08T17:13:20.987Z] 
[2022-09-08T17:13:20.987Z] IQv2IntegrationTest > shouldFetchFromPartition() PASSED
[2022-09-08T17:13:20.987Z] 
[2022-09-08T17:13:20.987Z] IQv2IntegrationTest > shouldFetchExplicitlyFromAllPartitions() STARTED
[2022-09-08T17:13:21.419Z] 
[2022-09-08T17:13:21.419Z] KStreamAggregationIntegrationTest > shouldAggregate(TestInfo) PASSED
[2022-09-08T17:13:21.419Z] 
[2022-09-08T17:13:21.419Z] KStreamAggregationIntegrationTest > shouldCount(TestInfo) STARTED
[2022-09-08T17:13:25.241Z] 
[2022-09-08T17:13:25.241Z] IQv2IntegrationTest > shouldFetchExplicitlyFromAllPartitions() PASSED
[2022-09-08T17:13:25.241Z] 
[2022-09-08T17:13:25.241Z] IQv2IntegrationTest > shouldFailUnknownStore() STARTED
[2022-09-08T17:13:25.241Z] 
[2022-09-08T17:13:25.241Z] IQv2IntegrationTest > shouldFailUnknownStore() PASSED
[2022-09-08T17:13:25.241Z] 
[2022-09-08T17:13:25.241Z] IQv2IntegrationTest > shouldRejectNonRunningActive() STARTED
[2022-09-08T17:13:28.451Z] 
[2022-09-08T17:13:28.451Z] KStreamAggregationIntegrationTest > shouldCount(TestInfo) PASSED
[2022-09-08T17:13:28.451Z] 
[2022-09-08T17:13:28.451Z] KStreamAggregationIntegrationTest > shouldGroupByKey(TestInfo) STARTED

[jira] [Resolved] (KAFKA-13952) Infinite retry timeout is not working

2022-09-08 Thread Chris Egerton (Jira)


 [ https://issues.apache.org/jira/browse/KAFKA-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Egerton resolved KAFKA-13952.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

> Infinite retry timeout is not working
> -
>
> Key: KAFKA-13952
> URL: https://issues.apache.org/jira/browse/KAFKA-13952
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Jakub Malek
>Assignee: Yash Mayya
>Priority: Minor
> Fix For: 3.4.0
>
>
> The 
> [documentation|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/ConnectorConfig.java#L129]
>  for {{errors.retry.timeout}} property says:
> {noformat}
> The maximum duration in milliseconds that a failed operation will be 
> reattempted. The default is 0, which means no retries will be attempted. Use 
> -1 for infinite retries.{noformat}
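> For reference, opting into infinite retries uses the standard Connect
> error-handling properties in the connector config (the values below are just
> an example):
> {noformat}
> errors.retry.timeout=-1
> errors.retry.delay.max.ms=60000
> errors.tolerance=none{noformat}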
> But it seems that value {{-1}} is not respected by the 
> [RetryWithToleranceOperator|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/errors/RetryWithToleranceOperator.java]
>  that simply compares elapsed time until {{startTime + errorRetryTimeout}} is 
> exceeded.
> I was also not able to find any conversion of the raw config value before 
> {{RetryWithToleranceOperator}} is initialized.
> I ran a simple test with a connector using a mocked transformation plugin 
> that throws {{RetriableException}}, and it seems to confirm my claim.
> I'm not sure whether this is a documentation error or an implementation 
> error, or whether I've missed something.
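> As a minimal illustration (not the actual Kafka code), a deadline check that
> honored {{-1}} would need an explicit guard along these lines:
> {code:java}
> // Hypothetical helper: -1 must short-circuit before the deadline arithmetic;
> // otherwise startTime + (-1) yields a deadline that is already in the past,
> // so no retry is ever attempted.
> boolean withinRetryTimeout(long nowMs, long startTimeMs, long errorRetryTimeoutMs) {
>     if (errorRetryTimeoutMs == -1) {
>         return true; // infinite retries
>     }
>     return nowMs < startTimeMs + errorRetryTimeoutMs;
> }{code}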



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[DISCUSS] KIP-793: Sink Connectors: Support topic-mutating SMTs for async connectors (preCommit users)

2022-09-08 Thread Yash Mayya
Hi all,

I would like to (re)start the discussion thread on KIP-793 (Kafka
Connect), which proposes some additions to the public SinkRecord interface
in order to support topic-mutating SMTs for sink connectors that do their
own offset tracking.
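
To make the failure mode concrete, here is a rough sketch using the existing
public SinkRecord accessors (this illustrates the bug pattern, not the KIP's
proposed fix): a sink task that tracks offsets itself typically keys them by
the record's topic, but after a topic-mutating SMT such as RegexRouter that is
the rewritten name, so the offsets it reports back (e.g. via preCommit) may
not match any topic-partition the framework actually consumes from.

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.sink.SinkRecord;

public class OffsetTrackingExample {
    // Builds a preCommit-style offset map from SinkRecord fields. After a
    // topic-mutating SMT, r.topic() holds the rewritten topic name, so these
    // keys can diverge from the consumer's actual topic-partitions.
    static Map<TopicPartition, OffsetAndMetadata> trackedOffsets(Iterable<SinkRecord> records) {
        Map<TopicPartition, OffsetAndMetadata> out = new HashMap<>();
        for (SinkRecord r : records) {
            out.put(new TopicPartition(r.topic(), r.kafkaPartition()),
                    new OffsetAndMetadata(r.kafkaOffset() + 1));
        }
        return out;
    }
}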

Links:

KIP:
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=191336830

Older discussion thread:
https://lists.apache.org/thread/00kcth6057jdcsyzgy1x8nb2s1cymy8h,
https://lists.apache.org/thread/rzqkm0q5y5v3vdjhg8wqppxbkw7nyopj

Jira: https://issues.apache.org/jira/browse/KAFKA-13431


Thanks,
Yash


Re: [DISCUSS] KIP-864: Add End-To-End Latency Metrics to Connectors

2022-09-08 Thread Jorge Esteban Quilcate Otoya
Great. I have updated the KIP to reflect this.

Cheers,
Jorge.

On Thu, 8 Sept 2022 at 12:26, Yash Mayya  wrote:

> Thanks, I think it makes sense to define these metrics at a DEBUG recording
> level.
>
> On Thu, Sep 8, 2022 at 2:51 PM Jorge Esteban Quilcate Otoya <
> quilcate.jo...@gmail.com> wrote:
>
> > On Thu, 8 Sept 2022 at 05:55, Yash Mayya  wrote:
> >
> > > Hi Jorge,
> > >
> > > Thanks for the changes. With regard to having per-batch vs per-record
> > > metrics, the additional overhead I was referring to wasn't about whether
> > > or not we would need to iterate over all the records in a batch. I was
> > > referring to the potential additional overhead caused by the higher
> > > volume of calls to Sensor::record on the sensors for the new metrics (as
> > > compared to the existing batch-only metrics), especially for
> > > high-throughput connectors where batch sizes could be large. I guess we
> > > may want to do some sort of performance testing and get concrete numbers
> > > to verify whether this is a valid concern or not?
> > >
> >
> > 6.1. Got it, thanks for clarifying. I guess a benchmark of
> > `Sensor::record` could give an idea of the performance impact.
> > Regardless, the fact that these are single-record metrics, unlike the
> > existing batch-only ones, could be made explicit by setting them at a
> > DEBUG or TRACE metric recording level and leaving the existing metrics at
> > INFO.
> > wdyt?
> >
> >
> > >
> > > Thanks,
> > > Yash
> > >
> > > On Tue, Sep 6, 2022 at 4:42 PM Jorge Esteban Quilcate Otoya <
> > > quilcate.jo...@gmail.com> wrote:
> > >
> > > > Hi Sagar and Yash,
> > > >
> > > > > the way it's defined in
> > > > > https://kafka.apache.org/documentation/#connect_monitoring for the
> > > > > metrics
> > > >
> > > > 4.1. Got it. I'll add it to the KIP.
> > > >
> > > > > The only thing I would argue is do we need sink-record-latency-min?
> > > > > Maybe we could remove this min metric as well and make all of the 3
> > > > > e2e metrics consistent
> > > >
> > > > 4.2 I see. Will remove it from the KIP.
> > > >
> > > > > Probably users can track the metrics at their end to
> > > > > figure that out. Do you think that makes sense?
> > > >
> > > > 4.3. Yes, agree. With these new metrics it should be easier for users
> > > > to track this.
> > > >
> > > > > I think it makes sense to not have a min metric for either to remain
> > > > > consistent with the existing put-batch and poll-batch metrics
> > > >
> > > > 5.1. Got it. Same as 4.2
> > > >
> > > > > Another naming-related suggestion I had was with the
> > > > > "convert-time" metrics - we should probably include transformations
> > > > > in the name since SMTs could definitely be attributable to a sizable
> > > > > chunk of the latency depending on the specific transformation chain.
> > > >
> > > > 5.2. Makes sense. I'm proposing to add `sink-record-convert-transform...`
> > > > and `source-record-transform-convert...` to correctly represent the
> > > > order of operations.
> > > >
> > > > > it seems like both source and sink tasks only record metrics at a
> > > > > "batch" level, not on an individual record level. I think it might be
> > > > > additional overhead if we want to record these new metrics all at the
> > > > > record level?
> > > >
> > > > 5.3. I initially considered implementing all metrics at the batch
> > > > level, but given how the framework processes records, I fell back to
> > > > the proposed approach:
> > > > - Sink Task:
> > > >   - `WorkerSinkTask#convertMessages(msgs)` already iterates over
> > > > records, so there is no additional overhead to capture record latency
> > > > per record.
> > > > -
> > > > https://github.com/apache/kafka/blob/9841647c4fe422532f448423c92d26e4fdcb8932/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java#L490-L514
> > > >   - `WorkerSinkTask#convertAndTransformRecord(record)` actually happens
> > > > individually. Measuring this operation per batch would include
> > > > processing that is not strictly part of "convert and transform"
> > > > -
> > > > https://github.com/apache/kafka/blob/9841647c4fe422532f448423c92d26e4fdcb8932/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java#L518
> > > > - Source Task:
> > > >   - `AbstractWorkerSourceTask#sendRecords` iterates over a batch and
> > > > applies transforms and converts records individually as well:
> > > > -
> > > > https://github.com/apache/kafka/blob/9841647c4fe422532f448423c92d26e4fdcb8932/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/AbstractWorkerSourceTask.java#L389-L390
> > > >
> > > > > This might require some additional changes -
> > > > > for instance, with the "sink-record-latency" metric, we might only
> > > > > want to have a "max" metric since 

Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #1208

2022-09-08 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 505057 lines...]
[2022-09-08T12:16:10.738Z] 
[2022-09-08T12:16:10.738Z] IQv2IntegrationTest > shouldFetchExplicitlyFromAllPartitions() STARTED
[2022-09-08T12:16:12.195Z] 
[2022-09-08T12:16:12.195Z] GlobalThreadShutDownOrderTest > shouldFinishGlobalStoreOperationOnShutDown() STARTED
[2022-09-08T12:16:12.682Z] 
[2022-09-08T12:16:12.682Z] IQv2IntegrationTest > shouldFetchExplicitlyFromAllPartitions() PASSED
[2022-09-08T12:16:12.682Z] 
[2022-09-08T12:16:12.682Z] IQv2IntegrationTest > shouldFailUnknownStore() STARTED
[2022-09-08T12:16:12.682Z] 
[2022-09-08T12:16:12.682Z] IQv2IntegrationTest > shouldFailUnknownStore() PASSED
[2022-09-08T12:16:12.682Z] 
[2022-09-08T12:16:12.682Z] IQv2IntegrationTest > shouldRejectNonRunningActive() STARTED
[2022-09-08T12:16:14.624Z] 
[2022-09-08T12:16:14.624Z] IQv2IntegrationTest > shouldRejectNonRunningActive() PASSED
[2022-09-08T12:16:17.001Z] 
[2022-09-08T12:16:17.001Z] InternalTopicIntegrationTest > shouldCompactTopicsForKeyValueStoreChangelogs() STARTED
[2022-09-08T12:16:18.062Z] 
[2022-09-08T12:16:18.062Z] InternalTopicIntegrationTest > shouldCompactTopicsForKeyValueStoreChangelogs() PASSED
[2022-09-08T12:16:18.062Z] 
[2022-09-08T12:16:18.062Z] InternalTopicIntegrationTest > shouldGetToRunningWithWindowedTableInFKJ() STARTED
[2022-09-08T12:16:18.223Z] 
[2022-09-08T12:16:18.223Z] GlobalThreadShutDownOrderTest > shouldFinishGlobalStoreOperationOnShutDown() PASSED
[2022-09-08T12:16:19.282Z] 
[2022-09-08T12:16:19.282Z] IQv2IntegrationTest > shouldFailStopped() STARTED
[2022-09-08T12:16:19.282Z] 
[2022-09-08T12:16:19.282Z] IQv2IntegrationTest > shouldFailStopped() PASSED
[2022-09-08T12:16:19.282Z] 
[2022-09-08T12:16:19.282Z] IQv2IntegrationTest > shouldNotRequireQueryHandler(TestInfo) STARTED
[2022-09-08T12:16:20.392Z] 
[2022-09-08T12:16:20.392Z] IQv2IntegrationTest > shouldNotRequireQueryHandler(TestInfo) PASSED
[2022-09-08T12:16:20.392Z] 
[2022-09-08T12:16:20.392Z] IQv2IntegrationTest > shouldFailNotStarted() STARTED
[2022-09-08T12:16:20.392Z] 
[2022-09-08T12:16:20.392Z] IQv2IntegrationTest > shouldFailNotStarted() PASSED
[2022-09-08T12:16:20.392Z] 
[2022-09-08T12:16:20.392Z] IQv2IntegrationTest > shouldFetchFromPartition() STARTED
[2022-09-08T12:16:21.898Z] 
[2022-09-08T12:16:21.898Z] InternalTopicIntegrationTest > shouldGetToRunningWithWindowedTableInFKJ() PASSED
[2022-09-08T12:16:21.898Z] 
[2022-09-08T12:16:21.898Z] InternalTopicIntegrationTest > shouldCompactAndDeleteTopicsForWindowStoreChangelogs() STARTED
[2022-09-08T12:16:21.898Z] 
[2022-09-08T12:16:21.898Z] InternalTopicIntegrationTest > shouldCompactAndDeleteTopicsForWindowStoreChangelogs() PASSED
[2022-09-08T12:16:22.707Z] 
[2022-09-08T12:16:22.708Z] IQv2IntegrationTest > shouldFetchFromPartition() PASSED
[2022-09-08T12:16:22.708Z] 
[2022-09-08T12:16:22.708Z] IQv2IntegrationTest > shouldFetchExplicitlyFromAllPartitions() STARTED
[2022-09-08T12:16:24.092Z] 
[2022-09-08T12:16:24.092Z] KStreamAggregationIntegrationTest > shouldAggregateSlidingWindows(TestInfo) STARTED
[2022-09-08T12:16:24.579Z] 
[2022-09-08T12:16:24.579Z] IQv2IntegrationTest > shouldFetchExplicitlyFromAllPartitions() PASSED
[2022-09-08T12:16:24.579Z] 
[2022-09-08T12:16:24.579Z] IQv2IntegrationTest > shouldFailUnknownStore() STARTED
[2022-09-08T12:16:24.579Z] 
[2022-09-08T12:16:24.579Z] IQv2IntegrationTest > shouldFailUnknownStore() PASSED
[2022-09-08T12:16:24.579Z] 
[2022-09-08T12:16:24.579Z] IQv2IntegrationTest > shouldRejectNonRunningActive() STARTED
[2022-09-08T12:16:27.529Z] 
[2022-09-08T12:16:27.529Z] IQv2IntegrationTest > shouldRejectNonRunningActive() PASSED
[2022-09-08T12:16:28.178Z] 
[2022-09-08T12:16:28.178Z] KStreamAggregationIntegrationTest > shouldAggregateSlidingWindows(TestInfo) PASSED
[2022-09-08T12:16:28.178Z] 
[2022-09-08T12:16:28.178Z] KStreamAggregationIntegrationTest > shouldReduceSessionWindows() STARTED
[2022-09-08T12:16:28.662Z] 
[2022-09-08T12:16:28.663Z] InternalTopicIntegrationTest > shouldCompactTopicsForKeyValueStoreChangelogs() STARTED
[2022-09-08T12:16:30.120Z] 
[2022-09-08T12:16:30.120Z] KStreamAggregationIntegrationTest > shouldReduceSessionWindows() PASSED
[2022-09-08T12:16:30.120Z] 
[2022-09-08T12:16:30.120Z] KStreamAggregationIntegrationTest > shouldReduceSlidingWindows(TestInfo) STARTED
[2022-09-08T12:16:30.606Z] 
[2022-09-08T12:16:30.606Z] InternalTopicIntegrationTest > shouldCompactTopicsForKeyValueStoreChangelogs() PASSED
[2022-09-08T12:16:30.606Z] 
[2022-09-08T12:16:30.606Z] InternalTopicIntegrationTest > shouldGetToRunningWithWindowedTableInFKJ() STARTED
[2022-09-08T12:16:34.638Z] 
[2022-09-08T12:16:34.638Z] InternalTopicIntegrationTest > shouldGetToRunningWithWindowedTableInFKJ() PASSED
[2022-09-08T12:16:34.638Z] 
[2022-09-08T12:16:34.638Z] InternalTopicIntegrationTest > shouldCompactAndDeleteTopicsForWindowStoreChangelogs() 

Re: [DISCUSS] KIP-864: Add End-To-End Latency Metrics to Connectors

2022-09-08 Thread Yash Mayya
Thanks, I think it makes sense to define these metrics at a DEBUG recording
level.
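
For illustration, registering such a sensor at DEBUG level with the existing
Kafka metrics API would look roughly like this (the metric and group names are
placeholders, not the KIP's final names):

import org.apache.kafka.common.metrics.Metrics;
import org.apache.kafka.common.metrics.Sensor;
import org.apache.kafka.common.metrics.stats.Avg;
import org.apache.kafka.common.metrics.stats.Max;

public class RecordingLevelExample {
    public static void main(String[] args) {
        Metrics metrics = new Metrics();
        // A DEBUG-level sensor is effectively a no-op unless the runtime is
        // started with metrics.recording.level=DEBUG, which bounds the
        // per-record overhead for the default INFO configuration.
        Sensor latency = metrics.sensor("sink-record-latency", Sensor.RecordingLevel.DEBUG);
        latency.add(metrics.metricName("sink-record-latency-max", "sink-task-metrics"), new Max());
        latency.add(metrics.metricName("sink-record-latency-avg", "sink-task-metrics"), new Avg());
        latency.record(42.0); // one call per record
    }
}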

On Thu, Sep 8, 2022 at 2:51 PM Jorge Esteban Quilcate Otoya <
quilcate.jo...@gmail.com> wrote:

> On Thu, 8 Sept 2022 at 05:55, Yash Mayya  wrote:
>
> > Hi Jorge,
> >
> > Thanks for the changes. With regard to having per-batch vs per-record
> > metrics, the additional overhead I was referring to wasn't about whether
> > or not we would need to iterate over all the records in a batch. I was
> > referring to the potential additional overhead caused by the higher volume
> > of calls to Sensor::record on the sensors for the new metrics (as compared
> > to the existing batch-only metrics), especially for high-throughput
> > connectors where batch sizes could be large. I guess we may want to do
> > some sort of performance testing and get concrete numbers to verify
> > whether this is a valid concern or not?
> >
>
> 6.1. Got it, thanks for clarifying. I guess a benchmark of `Sensor::record`
> could give an idea of the performance impact.
> Regardless, the fact that these are single-record metrics, unlike the
> existing batch-only ones, could be made explicit by setting them at a DEBUG
> or TRACE metric recording level and leaving the existing metrics at INFO.
> wdyt?
>
>
> >
> > Thanks,
> > Yash
> >
> > On Tue, Sep 6, 2022 at 4:42 PM Jorge Esteban Quilcate Otoya <
> > quilcate.jo...@gmail.com> wrote:
> >
> > > Hi Sagar and Yash,
> > >
> > > > the way it's defined in
> > > > https://kafka.apache.org/documentation/#connect_monitoring for the
> > > > metrics
> > >
> > > 4.1. Got it. I'll add it to the KIP.
> > >
> > > > The only thing I would argue is do we need sink-record-latency-min?
> > > > Maybe we could remove this min metric as well and make all of the 3
> > > > e2e metrics consistent
> > >
> > > 4.2 I see. Will remove it from the KIP.
> > >
> > > > Probably users can track the metrics at their end to
> > > > figure that out. Do you think that makes sense?
> > >
> > > 4.3. Yes, agree. With these new metrics it should be easier for users
> > > to track this.
> > >
> > > > I think it makes sense to not have a min metric for either to remain
> > > > consistent with the existing put-batch and poll-batch metrics
> > >
> > > 5.1. Got it. Same as 4.2
> > >
> > > > Another naming-related suggestion I had was with the
> > > > "convert-time" metrics - we should probably include transformations
> > > > in the name since SMTs could definitely be attributable to a sizable
> > > > chunk of the latency depending on the specific transformation chain.
> > >
> > > 5.2. Makes sense. I'm proposing to add `sink-record-convert-transform...`
> > > and `source-record-transform-convert...` to correctly represent the
> > > order of operations.
> > >
> > > > it seems like both source and sink tasks only record metrics at a
> > > > "batch" level, not on an individual record level. I think it might be
> > > > additional overhead if we want to record these new metrics all at the
> > > > record level?
> > >
> > > 5.3. I initially considered implementing all metrics at the batch
> > > level, but given how the framework processes records, I fell back to
> > > the proposed approach:
> > > - Sink Task:
> > >   - `WorkerSinkTask#convertMessages(msgs)` already iterates over
> > > records, so there is no additional overhead to capture record latency
> > > per record.
> > > -
> > > https://github.com/apache/kafka/blob/9841647c4fe422532f448423c92d26e4fdcb8932/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java#L490-L514
> > >   - `WorkerSinkTask#convertAndTransformRecord(record)` actually happens
> > > individually. Measuring this operation per batch would include
> > > processing that is not strictly part of "convert and transform"
> > > -
> > > https://github.com/apache/kafka/blob/9841647c4fe422532f448423c92d26e4fdcb8932/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java#L518
> > > - Source Task:
> > >   - `AbstractWorkerSourceTask#sendRecords` iterates over a batch and
> > > applies transforms and converts records individually as well:
> > > -
> > > https://github.com/apache/kafka/blob/9841647c4fe422532f448423c92d26e4fdcb8932/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/AbstractWorkerSourceTask.java#L389-L390
> > >
> > > > This might require some additional changes -
> > > > for instance, with the "sink-record-latency" metric, we might only
> > > > want to have a "max" metric since "avg" would require recording a
> > > > value on the sensor for each record (whereas we can get a "max" by
> > > > only recording a metric value for the oldest record in each batch).
> > >
> > > 5.4. Recording record-latency per batch may not be as useful as there
> > > is no guarantee that the oldest record will be representative of the
> > > batch.
> > >
> > > On Sat, 3 Sept 2022 

[jira] [Created] (KAFKA-14208) KafkaConsumer#commitAsync throws unexpected WakeupException

2022-09-08 Thread Qingsheng Ren (Jira)
Qingsheng Ren created KAFKA-14208:
-

 Summary: KafkaConsumer#commitAsync throws unexpected WakeupException
 Key: KAFKA-14208
 URL: https://issues.apache.org/jira/browse/KAFKA-14208
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 3.2.1
Reporter: Qingsheng Ren


We recently encountered a bug after upgrading Kafka client to 3.2.1 in Flink 
Kafka connector (FLINK-29153). Here's the exception:
{code:java}
org.apache.kafka.common.errors.WakeupException
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.maybeTriggerWakeup(ConsumerNetworkClient.java:514)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:278)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:215)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:252)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.coordinatorUnknownAndUnready(ConsumerCoordinator.java:493)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsAsync(ConsumerCoordinator.java:1055)
at org.apache.kafka.clients.consumer.KafkaConsumer.commitAsync(KafkaConsumer.java:1573)
at org.apache.flink.streaming.connectors.kafka.internals.KafkaConsumerThread.run(KafkaConsumerThread.java:226)
 {code}
As {{WakeupException}} is not listed in the JavaDoc of 
{{KafkaConsumer#commitAsync}}, the Flink Kafka connector doesn't catch 
exceptions thrown directly from KafkaConsumer#commitAsync; it handles all 
exceptions in the callback instead.

I checked the source code and suspect this is caused by KAFKA-13563. Also, we 
never saw this exception from commitAsync when we used Kafka clients 2.4.1 and 
2.8.1. 

I'm wondering whether this breaks the public API contract, since 
{{WakeupException}} is not listed in the JavaDoc. It may be better to pass the 
{{WakeupException}} to the callback instead of throwing it directly from the 
method itself. 
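
Until that is clarified, a defensive caller-side sketch (assuming {{offsets}} 
is the map being committed; the error handler is a hypothetical application 
method, not Kafka API):
{code:java}
try {
    consumer.commitAsync(offsets, (committedOffsets, error) -> {
        if (error != null) {
            handleCommitError(error); // hypothetical application handler
        }
    });
} catch (org.apache.kafka.common.errors.WakeupException e) {
    // Currently thrown synchronously from the coordinator-readiness path
    // (see the stack trace above), before the callback is invoked.
    handleCommitError(e);
} {code}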



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-864: Add End-To-End Latency Metrics to Connectors

2022-09-08 Thread Jorge Esteban Quilcate Otoya
On Thu, 8 Sept 2022 at 05:55, Yash Mayya  wrote:

> Hi Jorge,
>
> Thanks for the changes. With regard to having per-batch vs per-record
> metrics, the additional overhead I was referring to wasn't about whether or
> not we would need to iterate over all the records in a batch. I was
> referring to the potential additional overhead caused by the higher volume
> of calls to Sensor::record on the sensors for the new metrics (as compared
> to the existing batch-only metrics), especially for high-throughput
> connectors where batch sizes could be large. I guess we may want to do some
> sort of performance testing and get concrete numbers to verify whether this
> is a valid concern or not?
>

6.1. Got it, thanks for clarifying. I guess a benchmark of `Sensor::record`
could give an idea of the performance impact.
Regardless, the fact that these are single-record metrics, unlike the
existing batch-only ones, could be made explicit by setting them at a DEBUG
or TRACE metric recording level and leaving the existing metrics at INFO.
wdyt?
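
As a starting point, a micro-benchmark along these lines could put a number on
the per-record cost (a rough JMH sketch; the harness settings are just an
example, not an agreed methodology):

import java.util.concurrent.TimeUnit;
import org.apache.kafka.common.metrics.Metrics;
import org.apache.kafka.common.metrics.Sensor;
import org.apache.kafka.common.metrics.stats.Max;
import org.openjdk.jmh.annotations.*;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class SensorRecordBenchmark {
    private Sensor sensor;

    @Setup
    public void setup() {
        Metrics metrics = new Metrics();
        sensor = metrics.sensor("bench");
        sensor.add(metrics.metricName("bench-max", "bench-group"), new Max());
    }

    @Benchmark
    public void recordOne() {
        sensor.record(1.0); // the per-record call whose overhead is in question
    }
}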


>
> Thanks,
> Yash
>
> On Tue, Sep 6, 2022 at 4:42 PM Jorge Esteban Quilcate Otoya <
> quilcate.jo...@gmail.com> wrote:
>
> > Hi Sagar and Yash,
> >
> > > the way it's defined in
> > > https://kafka.apache.org/documentation/#connect_monitoring for the
> > > metrics
> >
> > 4.1. Got it. I'll add it to the KIP.
> >
> > > The only thing I would argue is do we need sink-record-latency-min?
> > > Maybe we could remove this min metric as well and make all of the 3 e2e
> > > metrics consistent
> >
> > 4.2 I see. Will remove it from the KIP.
> >
> > > Probably users can track the metrics at their end to
> > > figure that out. Do you think that makes sense?
> >
> > 4.3. Yes, agree. With these new metrics it should be easier for users to
> > track this.
> >
> > > I think it makes sense to not have a min metric for either to remain
> > > consistent with the existing put-batch and poll-batch metrics
> >
> > 5.1. Got it. Same as 4.2
> >
> > > Another naming-related suggestion I had was with the
> > > "convert-time" metrics - we should probably include transformations in
> > > the name since SMTs could definitely be attributable to a sizable chunk
> > > of the latency depending on the specific transformation chain.
> >
> > 5.2. Makes sense. I'm proposing to add `sink-record-convert-transform...`
> > and `source-record-transform-convert...` to correctly represent the order
> > of operations.
> >
> > > it seems like both source and sink tasks only record metrics at a
> > > "batch" level, not on an individual record level. I think it might be
> > > additional overhead if we want to record these new metrics all at the
> > > record level?
> >
> > 5.3. I initially considered implementing all metrics at the batch
> > level, but given how the framework processes records, I fell back to the
> > proposed approach (see the sketch after this list):
> > - Sink Task:
> >   - `WorkerSinkTask#convertMessages(msgs)` already iterates over records,
> > so there is no additional overhead to capture record latency per record.
> > -
> > https://github.com/apache/kafka/blob/9841647c4fe422532f448423c92d26e4fdcb8932/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java#L490-L514
> >   - `WorkerSinkTask#convertAndTransformRecord(record)` actually happens
> > individually. Measuring this operation per batch would include processing
> > that is not strictly part of "convert and transform"
> > -
> > https://github.com/apache/kafka/blob/9841647c4fe422532f448423c92d26e4fdcb8932/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java#L518
> > - Source Task:
> >   - `AbstractWorkerSourceTask#sendRecords` iterates over a batch and
> > applies transforms and converts records individually as well:
> > -
> > https://github.com/apache/kafka/blob/9841647c4fe422532f448423c92d26e4fdcb8932/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/AbstractWorkerSourceTask.java#L389-L390
> >
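> > For illustration, the per-record capture in the sink path boils down to
> > something like the sketch below inside the conversion loop (the sensor
> > and time fields are assumptions for illustration, not actual
> > WorkerSinkTask members):
> >
> > for (ConsumerRecord<byte[], byte[]> msg : msgs) {
> >     // one Sensor::record call per consumed record
> >     recordLatencySensor.record(time.milliseconds() - msg.timestamp());
> >     // ...then convert and transform the record as today...
> > }
> >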
> > > This might require some additional changes -
> > > for instance, with the "sink-record-latency" metric, we might only want
> > > to have a "max" metric since "avg" would require recording a value on the
> > > sensor for each record (whereas we can get a "max" by only recording a
> > > metric value for the oldest record in each batch).
> >
> > 5.4. Recording record-latency per batch may not be as useful as there is
> > no guarantee that the oldest record will be representative of the batch.
> >
> > On Sat, 3 Sept 2022 at 16:02, Yash Mayya  wrote:
> >
> > > Hi Jorge and Sagar,
> > >
> > > I think it makes sense to not have a min metric for either to remain
> > > consistent with the existing put-batch and poll-batch metrics (it
> > > doesn't seem particularly useful either anyway). Also, the new
> > > "sink-record-latency" metric name looks fine to me, thanks for making
> the
> > > changes! Another naming related suggestion I had was with the
> > >