Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #1197

2022-09-02 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 504438 lines...]
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // timestamps
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[2022-09-03T03:11:12.290Z] 
[2022-09-03T03:11:12.290Z] StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyStandbys PASSED
[2022-09-03T03:11:12.290Z] 
[2022-09-03T03:11:12.290Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargeNumConsumers STARTED
[2022-09-03T03:11:12.290Z] 
[2022-09-03T03:11:12.290Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargeNumConsumers PASSED
[2022-09-03T03:11:12.290Z] 
[2022-09-03T03:11:12.290Z] StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargeNumConsumers STARTED
[2022-09-03T03:11:14.091Z] 
[2022-09-03T03:11:14.091Z] StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargeNumConsumers PASSED
[2022-09-03T03:11:14.915Z] 
[2022-09-03T03:11:14.915Z] AdjustStreamThreadCountTest > 
testConcurrentlyAccessThreads() STARTED
[2022-09-03T03:11:20.842Z] 
[2022-09-03T03:11:20.842Z] AdjustStreamThreadCountTest > 
testConcurrentlyAccessThreads() PASSED
[2022-09-03T03:11:20.842Z] 
[2022-09-03T03:11:20.842Z] AdjustStreamThreadCountTest > 
shouldResizeCacheAfterThreadReplacement() STARTED
[2022-09-03T03:11:25.507Z] 
[2022-09-03T03:11:25.507Z] AdjustStreamThreadCountTest > 
shouldResizeCacheAfterThreadReplacement() PASSED
[2022-09-03T03:11:25.507Z] 
[2022-09-03T03:11:25.507Z] AdjustStreamThreadCountTest > 
shouldAddAndRemoveThreadsMultipleTimes() STARTED
[2022-09-03T03:11:34.100Z] 
[2022-09-03T03:11:34.100Z] AdjustStreamThreadCountTest > 
shouldAddAndRemoveThreadsMultipleTimes() PASSED
[2022-09-03T03:11:34.100Z] 
[2022-09-03T03:11:34.100Z] AdjustStreamThreadCountTest > 
shouldnNotRemoveStreamThreadWithinTimeout() STARTED
[2022-09-03T03:11:35.038Z] 
[2022-09-03T03:11:35.038Z] AdjustStreamThreadCountTest > 
shouldnNotRemoveStreamThreadWithinTimeout() PASSED
[2022-09-03T03:11:35.038Z] 
[2022-09-03T03:11:35.038Z] AdjustStreamThreadCountTest > 
shouldAddAndRemoveStreamThreadsWhileKeepingNamesCorrect() STARTED
[2022-09-03T03:11:57.370Z] 
[2022-09-03T03:11:57.370Z] AdjustStreamThreadCountTest > 
shouldAddAndRemoveStreamThreadsWhileKeepingNamesCorrect() PASSED
[2022-09-03T03:11:57.370Z] 
[2022-09-03T03:11:57.370Z] AdjustStreamThreadCountTest > 
shouldAddStreamThread() STARTED
[2022-09-03T03:11:59.126Z] 
[2022-09-03T03:11:59.126Z] AdjustStreamThreadCountTest > 
shouldAddStreamThread() PASSED
[2022-09-03T03:11:59.126Z] 
[2022-09-03T03:11:59.126Z] AdjustStreamThreadCountTest > 
shouldRemoveStreamThreadWithStaticMembership() STARTED
[2022-09-03T03:12:03.996Z] 
[2022-09-03T03:12:03.996Z] AdjustStreamThreadCountTest > 
shouldRemoveStreamThreadWithStaticMembership() PASSED
[2022-09-03T03:12:03.996Z] 
[2022-09-03T03:12:03.996Z] AdjustStreamThreadCountTest > 
shouldRemoveStreamThread() STARTED
[2022-09-03T03:12:07.750Z] 
[2022-09-03T03:12:07.750Z] AdjustStreamThreadCountTest > 
shouldRemoveStreamThread() PASSED
[2022-09-03T03:12:07.750Z] 
[2022-09-03T03:12:07.750Z] AdjustStreamThreadCountTest > 
shouldResizeCacheAfterThreadRemovalTimesOut() STARTED
[2022-09-03T03:12:10.386Z] 
[2022-09-03T03:12:10.386Z] AdjustStreamThreadCountTest > 
shouldResizeCacheAfterThreadRemovalTimesOut() PASSED
[2022-09-03T03:12:13.192Z] 
[2022-09-03T03:12:13.192Z] FineGrainedAutoResetIntegrationTest > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithGlobalAutoOffsetResetLatest()
 STARTED
[2022-09-03T03:12:14.130Z] 
[2022-09-03T03:12:14.130Z] FineGrainedAutoResetIntegrationTest > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithGlobalAutoOffsetResetLatest()
 PASSED
[2022-09-03T03:12:14.130Z] 
[2022-09-03T03:12:14.130Z] FineGrainedAutoResetIntegrationTest > 
shouldThrowExceptionOverlappingPattern() STARTED
[2022-09-03T03:12:14.130Z] 
[2022-09-03T03:12:14.130Z] FineGrainedAutoResetIntegrationTest > 
shouldThrowExceptionOverlappingPattern() PASSED
[2022-09-03T03:12:14.130Z] 
[2022-09-03T03:12:14.130Z] FineGrainedAutoResetIntegrationTest > 
shouldThrowExceptionOverlappingTopic() STARTED
[2022-09-03T03:12:14.130Z] 
[2022-09-03T03:12:14.130Z] FineGrainedAutoResetIntegrationTest > 
shouldThrowExceptionOverlappingTopic() PASSED
[2022-09-03T03:12:14.130Z] 
[2022-09-03T03:12:14.130Z] FineGrainedAutoResetIntegrationTest > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithInvalidCommittedOffsets() STARTED
[2022-09-03T03:13:03.159Z] 
[2022-09-03T03:13:03.159Z] FineGrainedAutoResetIntegrationTest > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithInvalidCommittedOffsets() PASSED
[2022-09-03T03:13:03.159Z] 
[2022-09-03T03:13:03.159Z] FineGrainedAutoResetIntegrationTest > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithDefaultGlobalAutoOffsetResetEarliest()
 STARTED

Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.1 #130

2022-09-02 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 295040 lines...]
[2022-09-02T18:39:54.120Z] SslAdminIntegrationTest > 
testAclUpdatesUsingAsynchronousAuthorizer() PASSED
[2022-09-02T18:39:54.120Z] 
[2022-09-02T18:39:54.120Z] SslAdminIntegrationTest > 
testAclUpdatesUsingSynchronousAuthorizer() STARTED
[2022-09-02T18:39:58.838Z] 
[2022-09-02T18:39:58.838Z] SslAdminIntegrationTest > 
testAclUpdatesUsingSynchronousAuthorizer() PASSED
[2022-09-02T18:39:58.838Z] 
[2022-09-02T18:39:58.838Z] TransactionsExpirationTest > 
testBumpTransactionalEpochAfterInvalidProducerIdMapping() STARTED
[2022-09-02T18:40:10.931Z] 
[2022-09-02T18:40:10.931Z] TransactionsExpirationTest > 
testBumpTransactionalEpochAfterInvalidProducerIdMapping() PASSED
[2022-09-02T18:40:10.931Z] 
[2022-09-02T18:40:10.931Z] DescribeAuthorizedOperationsTest > 
testClusterAuthorizedOperations() STARTED
[2022-09-02T18:40:23.057Z] 
[2022-09-02T18:40:23.057Z] > Task :streams:integrationTest
[2022-09-02T18:40:23.057Z] 
[2022-09-02T18:40:23.057Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorLargeNumConsumers PASSED
[2022-09-02T18:40:23.057Z] 
[2022-09-02T18:40:23.057Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorLargePartitionCount STARTED
[2022-09-02T18:40:23.057Z] 
[2022-09-02T18:40:23.057Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorLargePartitionCount PASSED
[2022-09-02T18:40:23.057Z] 
[2022-09-02T18:40:23.057Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyThreadsPerClient STARTED
[2022-09-02T18:40:23.057Z] 
[2022-09-02T18:40:23.057Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyThreadsPerClient PASSED
[2022-09-02T18:40:23.057Z] 
[2022-09-02T18:40:23.057Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testStickyTaskAssignorManyStandbys STARTED
[2022-09-02T18:40:23.057Z] 
[2022-09-02T18:40:23.057Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testStickyTaskAssignorManyStandbys PASSED
[2022-09-02T18:40:23.057Z] 
[2022-09-02T18:40:23.057Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testStickyTaskAssignorManyThreadsPerClient STARTED
[2022-09-02T18:40:23.057Z] 
[2022-09-02T18:40:23.057Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testStickyTaskAssignorManyThreadsPerClient PASSED
[2022-09-02T18:40:23.057Z] 
[2022-09-02T18:40:23.057Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorManyThreadsPerClient STARTED
[2022-09-02T18:40:23.057Z] 
[2022-09-02T18:40:23.057Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorManyThreadsPerClient PASSED
[2022-09-02T18:40:23.057Z] 
[2022-09-02T18:40:23.057Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargePartitionCount STARTED
[2022-09-02T18:40:35.092Z] 
[2022-09-02T18:40:35.092Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargePartitionCount PASSED
[2022-09-02T18:40:35.092Z] 
[2022-09-02T18:40:35.092Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargePartitionCount STARTED
[2022-09-02T18:40:40.876Z] 
[2022-09-02T18:40:40.877Z] > Task :core:integrationTest
[2022-09-02T18:40:40.877Z] 
[2022-09-02T18:40:40.877Z] DescribeAuthorizedOperationsTest > 
testClusterAuthorizedOperations() PASSED
[2022-09-02T18:40:40.877Z] 
[2022-09-02T18:40:40.877Z] DescribeAuthorizedOperationsTest > 
testTopicAuthorizedOperations() STARTED
[2022-09-02T18:40:57.257Z] 
[2022-09-02T18:40:57.257Z] DescribeAuthorizedOperationsTest > 
testTopicAuthorizedOperations() PASSED
[2022-09-02T18:40:57.257Z] 
[2022-09-02T18:40:57.257Z] DescribeAuthorizedOperationsTest > 
testConsumerGroupAuthorizedOperations() STARTED
[2022-09-02T18:41:04.473Z] 
[2022-09-02T18:41:04.473Z] > Task :streams:integrationTest
[2022-09-02T18:41:04.473Z] 
[2022-09-02T18:41:04.473Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargePartitionCount PASSED
[2022-09-02T18:41:04.473Z] 
[2022-09-02T18:41:04.473Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorManyStandbys STARTED
[2022-09-02T18:41:04.473Z] 
[2022-09-02T18:41:04.473Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorManyStandbys PASSED
[2022-09-02T18:41:04.473Z] 
[2022-09-02T18:41:04.473Z] 

[jira] [Created] (KAFKA-14201) Consumer should not send group instance ID if committing with empty member ID

2022-09-02 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-14201:
---

 Summary: Consumer should not send group instance ID if committing 
with empty member ID
 Key: KAFKA-14201
 URL: https://issues.apache.org/jira/browse/KAFKA-14201
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson


The consumer group instance ID is used to support a notion of "static" consumer 
groups. The idea is to be able to identify the same group instance across 
restarts so that a rebalance is not needed. However, if the user sets 
`group.instance.id` in the consumer configuration, but uses "simple" assignment 
with `assign()`, then the instance ID is nevertheless sent in the OffsetCommit 
request to the coordinator. This may result in a surprising UNKNOWN_MEMBER_ID 
error. The consumer should probably be smart enough to only send the instance 
ID when committing as part of a consumer group.
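
The guard described above can be sketched as a small helper (an illustrative
assumption about the eventual fix, not the actual consumer implementation —
the function name and parameters are hypothetical):

```python
def offset_commit_group_instance_id(group_instance_id, member_id):
    """Decide which instance id, if any, to put in an OffsetCommit request.

    Sketch of the proposed behavior: only send the configured
    group.instance.id when the consumer actually joined a group, i.e. when
    it holds a non-empty member id. Standalone assign() users never join a
    group, so their member id stays empty and no instance id is sent,
    avoiding the surprising UNKNOWN_MEMBER_ID error.
    """
    if group_instance_id is not None and member_id:
        return group_instance_id
    return None
```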



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.2 #75

2022-09-02 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 380536 lines...]
[2022-09-02T18:34:55.733Z] > Task :clients:compileJava UP-TO-DATE
[2022-09-02T18:34:55.733Z] > Task :clients:classes UP-TO-DATE
[2022-09-02T18:34:55.733Z] > Task :server-common:compileJava UP-TO-DATE
[2022-09-02T18:34:55.733Z] > Task :storage:api:compileJava UP-TO-DATE
[2022-09-02T18:34:55.733Z] > Task :connect:api:compileJava UP-TO-DATE
[2022-09-02T18:34:55.733Z] > Task :clients:jar UP-TO-DATE
[2022-09-02T18:34:55.733Z] > Task :connect:api:classes UP-TO-DATE
[2022-09-02T18:34:55.733Z] > Task :clients:processTestMessages UP-TO-DATE
[2022-09-02T18:34:55.733Z] > Task :raft:compileJava UP-TO-DATE
[2022-09-02T18:34:55.733Z] > Task :raft:classes UP-TO-DATE
[2022-09-02T18:34:55.733Z] > Task :connect:json:compileJava UP-TO-DATE
[2022-09-02T18:34:55.733Z] > Task :connect:json:classes UP-TO-DATE
[2022-09-02T18:34:55.733Z] > Task :connect:json:javadoc SKIPPED
[2022-09-02T18:34:55.733Z] > Task :storage:compileJava UP-TO-DATE
[2022-09-02T18:34:55.733Z] > Task :connect:json:javadocJar
[2022-09-02T18:34:55.733Z] > Task :metadata:compileJava UP-TO-DATE
[2022-09-02T18:34:55.733Z] > Task :metadata:classes UP-TO-DATE
[2022-09-02T18:34:55.734Z] > Task :core:compileJava NO-SOURCE
[2022-09-02T18:34:55.734Z] > Task :clients:compileTestJava UP-TO-DATE
[2022-09-02T18:34:55.734Z] > Task :clients:testClasses UP-TO-DATE
[2022-09-02T18:34:55.734Z] > Task :connect:json:compileTestJava UP-TO-DATE
[2022-09-02T18:34:55.734Z] > Task :connect:json:testClasses UP-TO-DATE
[2022-09-02T18:34:55.734Z] > Task :raft:compileTestJava UP-TO-DATE
[2022-09-02T18:34:55.734Z] > Task :raft:testClasses UP-TO-DATE
[2022-09-02T18:34:55.734Z] > Task :connect:json:testJar
[2022-09-02T18:34:55.734Z] > Task :connect:json:testSrcJar
[2022-09-02T18:34:56.676Z] > Task 
:clients:generateMetadataFileForMavenJavaPublication
[2022-09-02T18:34:56.676Z] 
[2022-09-02T18:34:56.676Z] > Task :streams:processMessages
[2022-09-02T18:34:56.676Z] Execution optimizations have been disabled for task 
':streams:processMessages' to ensure correctness due to the following reasons:
[2022-09-02T18:34:56.676Z]   - Gradle detected a problem with the following 
location: 
'/home/jenkins/workspace/Kafka_kafka_3.2/streams/src/generated/java/org/apache/kafka/streams/internals/generated'.
 Reason: Task ':streams:srcJar' uses this output of task 
':streams:processMessages' without declaring an explicit or implicit 
dependency. This can lead to incorrect results being produced, depending on 
what order the tasks are executed. Please refer to 
https://docs.gradle.org/7.3.3/userguide/validation_problems.html#implicit_dependency
 for more details about this problem.
[2022-09-02T18:34:56.676Z] MessageGenerator: processed 1 Kafka message JSON 
files(s).
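
The Gradle warning above states its own remedy: declare the task dependency
explicitly. A hedged sketch in Gradle Groovy DSL (the task names come from
the log; the exact wiring in the Kafka build is an assumption):

```groovy
// streams/build.gradle (sketch): make the implicit dependency explicit so
// Gradle can order srcJar after processMessages and re-enable optimizations.
tasks.named('srcJar') {
    dependsOn tasks.named('processMessages')
}
```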
[2022-09-02T18:34:56.676Z] 
[2022-09-02T18:34:56.676Z] > Task :metadata:compileTestJava UP-TO-DATE
[2022-09-02T18:34:56.676Z] > Task :metadata:testClasses UP-TO-DATE
[2022-09-02T18:34:56.676Z] > Task :core:compileScala UP-TO-DATE
[2022-09-02T18:34:56.676Z] > Task :core:classes UP-TO-DATE
[2022-09-02T18:34:56.676Z] > Task :core:compileTestJava NO-SOURCE
[2022-09-02T18:34:56.676Z] > Task :streams:compileJava UP-TO-DATE
[2022-09-02T18:34:56.676Z] > Task :streams:classes UP-TO-DATE
[2022-09-02T18:34:56.676Z] > Task :streams:copyDependantLibs UP-TO-DATE
[2022-09-02T18:34:56.676Z] > Task :streams:jar UP-TO-DATE
[2022-09-02T18:34:56.676Z] > Task :streams:test-utils:compileJava UP-TO-DATE
[2022-09-02T18:34:56.676Z] > Task 
:streams:generateMetadataFileForMavenJavaPublication
[2022-09-02T18:34:56.676Z] > Task :core:compileTestScala UP-TO-DATE
[2022-09-02T18:34:56.676Z] > Task :core:testClasses UP-TO-DATE
[2022-09-02T18:34:58.607Z] 
[2022-09-02T18:34:58.607Z] > Task :connect:api:javadoc
[2022-09-02T18:34:58.607Z] 
/home/jenkins/workspace/Kafka_kafka_3.2/connect/api/src/main/java/org/apache/kafka/connect/source/SourceTask.java:136:
 warning - Tag @link: reference not found: ConnectorConfig
[2022-09-02T18:34:59.548Z] 1 warning
[2022-09-02T18:35:00.490Z] 
[2022-09-02T18:35:00.490Z] > Task :connect:api:copyDependantLibs UP-TO-DATE
[2022-09-02T18:35:00.490Z] > Task :connect:api:jar UP-TO-DATE
[2022-09-02T18:35:00.490Z] > Task 
:connect:api:generateMetadataFileForMavenJavaPublication
[2022-09-02T18:35:00.490Z] > Task :connect:json:copyDependantLibs UP-TO-DATE
[2022-09-02T18:35:00.490Z] > Task :connect:json:jar UP-TO-DATE
[2022-09-02T18:35:00.490Z] > Task 
:connect:json:generateMetadataFileForMavenJavaPublication
[2022-09-02T18:35:00.490Z] > Task :connect:json:signMavenJavaPublication FAILED
[2022-09-02T18:35:00.490Z] > Task :connect:api:javadocJar
[2022-09-02T18:35:02.251Z] 
[2022-09-02T18:35:02.251Z] > Task :streams:javadoc
[2022-09-02T18:35:02.251Z] 
/home/jenkins/workspace/Kafka_kafka_3.2/streams/src/main/java/org/apache/kafka/streams/TopologyConfig.java:58:
 warning - Tag @link: missing '#': 

[jira] [Created] (KAFKA-14200) kafka-features.sh must exit with non-zero error code on error

2022-09-02 Thread Colin McCabe (Jira)
Colin McCabe created KAFKA-14200:


 Summary: kafka-features.sh must exit with non-zero error code on 
error
 Key: KAFKA-14200
 URL: https://issues.apache.org/jira/browse/KAFKA-14200
 Project: Kafka
  Issue Type: Bug
Reporter: Colin McCabe
Assignee: Colin McCabe


kafka-features.sh must exit with a non-zero error code on error. We must do 
this in order to catch regressions like KAFKA-13990.
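
The requirement above — surface failures as a non-zero process exit code —
can be sketched as a minimal CLI wrapper (illustrative only; the real tool is
implemented in Java/Scala, and this helper name is hypothetical):

```python
import sys


def run_cli(command):
    """Run a CLI action and turn any failure into a non-zero exit code.

    Sketch of the fix: a tool that swallows errors and returns 0 cannot be
    used to catch regressions in scripts or CI, so errors must be printed
    to stderr and reflected in the exit status.
    """
    try:
        command()
    except Exception as err:
        print(f"error: {err}", file=sys.stderr)
        return 1
    return 0
```

A script invoking the tool can then rely on `$?` (or `set -e`) to detect
failed feature updates.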



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


RE: Re: [VOTE] KIP-821: Connect Transforms support for nested structures

2022-09-02 Thread Chris Egerton
+1 (binding). Thanks Jorge, great stuff!

We should probably verify with the people who have already cast +1 votes
that they're still on board, since the design has shifted a bit since the
last vote was cast.

On 2022/06/28 20:42:14 Jorge Esteban Quilcate Otoya wrote:
> Hi everyone,
>
> I'd like to bump this vote thread. Currently it's missing 1 +1 binding
vote
> to pass (2 +1 binding, 1 +1 non-binding).
>
> There has been additional discussions to consider array access and
> deep-scan (similar to JsonPath) but hasn't been included as part of this
> KIP.
> The only minor change since the previous votes has been the change of
> configuration name: from `field.style` to `field.syntax.version`.
>
> KIP:
>
https://cwiki.apache.org/confluence/display/KAFKA/KIP-821%3A+Connect+Transforms+support+for+nested+structures
>
> Cheers,
> Jorge.
>
> On Fri, 22 Apr 2022 at 00:01, Bill Bejeck  wrote:
>
> > Thanks for the KIP, Jorge.
> >
> > This looks like a great addition to Kafka Connect.
> >
> > +1(binding)
> >
> > -Bill
> >
> > On Thu, Apr 21, 2022 at 6:41 PM John Roesler  wrote:
> >
> > > Thanks for the KIP, Jorge!
> > >
> > > I’ve just looked over the KIP, and it looks good to me.
> > >
> > > I’m +1 (binding)
> > >
> > > Thanks,
> > > John
> > >
> > > On Thu, Apr 21, 2022, at 09:10, Chris Egerton wrote:
> > > > This is a worthwhile addition to the SMTs that ship out of the box
with
> > > > Kafka Connect. +1 non-binding
> > > >
> > > > On Thu, Apr 21, 2022, 09:51 Jorge Esteban Quilcate Otoya <
> > > > quilcate.jo...@gmail.com> wrote:
> > > >
> > > >> Hi all,
> > > >>
> > > >> I'd like to start a vote on KIP-821:
> > > >>
> > > >>
> > >
> >
https://cwiki.apache.org/confluence/display/KAFKA/KIP-821%3A+Connect+Transforms+support+for+nested+structures
> > > >>
> > > >> Thanks,
> > > >> Jorge
> > > >>
> > >
> >
>


Re: Re: [DISCUSS] KIP-821: Connect Transforms support for nested structures

2022-09-02 Thread Chris Egerton
Hi Jorge,

One tiny nit, but LGTM otherwise:

The KIP mentions backslashes as "(/)"; shouldn't this be "(\)"?

I'll cast a +1 on the vote thread anyways; I'm sure this won't block us.

Cheers, and thanks for all your hard work on this!

Chris

On Thu, Sep 1, 2022 at 1:33 PM Jorge Esteban Quilcate Otoya <
quilcate.jo...@gmail.com> wrote:

> Hi Chris,
>
> Thanks for your feedback!
>
> 1. Yes, it will be context-dependent. I have added rules and scenarios to
> the nested notation to cover the happy path and edge cases. In short,
> backticks will not be considered as part of the field name when they are
> wrapping a field name: first backtick at the beginning of the path or after
> a dot, and closing backtick before a dot or at the end of the path.
> If backticks happen to be in those positions, use backslash to escape them.
> 2. You're right, that's a typo. Fixing it.
> 3. I don't think so, I have added a scenario to clarify this.
>
> KIP is updated. Hopefully the rules and scenarios help to close any open
> gap. Let me know if you see any case that is not covered so I can address it.
>
> Cheers,
> Jorge.
>
> On Wed, 31 Aug 2022 at 20:02, Chris Egerton 
> wrote:
>
> > Hi Robert and Jorge,
> >
> > I think the backtick/backslash proposal works, but I'm a little unclear
> on
> > some of the details:
> >
> > 1. Are backticks only given special treatment when they immediately
> follow
> > a non-escaped dot? E.g., "foo.b`ar.ba`z" would refer to "foo" -> "b`ar"
> ->
> > "ba`z" instead of "foo" -> "bar.baz"? Based on the example where the name
> > "a.b`.c" refers to "a" -> "b`" -> "c", it seems like this is the case,
> but
> > I'm worried this might cause confusion since the role of the backtick and
> > the need to escape it becomes context-dependent.
> >
> > 2. In the example table near the beginning of the KIP, the name
> "a.`b\`.c`"
> > refers to "a" -> "b`c". What happened to the dot in the second part of
> the
> > name? Should it refer to "a" -> "b`.c" instead?
> >
> > 3. Is it ever necessary to escape backslashes themselves? If so, when?
> >
> > Overall, I wish we could come up with a prettier/simpler approach, but
> the
> > benefits provided by the dual backtick/dot syntax are too great to deny:
> > there are no correctness issues like the ones posed with double-dot
> > escaping that would lead to ambiguity, the most common cases are still
> very
> > simple to work with, and there's no risk of interfering with JSON escape
> > mechanisms (in most cases) or single-quote shell quoting (which may be
> > relevant when connector configurations are defined on the command line).
> > Thanks for the suggestion, Robert!
> >
> > Cheers,
> >
> > Chris
> >
>
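
The backtick quoting rules discussed in this thread can be sketched as a
small path splitter. This is an illustrative reading of the rules as stated
in the emails and their examples, not the final KIP-821 implementation:

```python
def split_field_path(path):
    """Split a dotted nested-field path into segments.

    Rules as sketched in the discussion (assumptions, not the final spec):
    - an unquoted dot separates segments;
    - a backtick opens quoting only at the start of a segment (start of
      path or right after a dot) and closes it only just before a dot or
      at the end of the path;
    - inside quotes, dots are literal and a backslash escapes a backtick;
    - backticks in any other position are ordinary characters.
    """
    segments, buf, quoted = [], [], False
    i, n = 0, len(path)
    while i < n:
        c = path[i]
        if quoted:
            if c == "\\" and i + 1 < n and path[i + 1] == "`":
                buf.append("`")  # escaped backtick inside quotes
                i += 2
                continue
            if c == "`" and (i + 1 == n or path[i + 1] == "."):
                quoted = False  # closing backtick before a dot or at end
                i += 1
                continue
            buf.append(c)
            i += 1
        elif c == "`" and not buf:
            quoted = True  # backtick at segment start opens quoting
            i += 1
        elif c == ".":
            segments.append("".join(buf))
            buf = []
            i += 1
        else:
            buf.append(c)
            i += 1
    segments.append("".join(buf))
    return segments
```

Under this reading, `a.b\`.c` splits into `a` -> `b\`` -> `c` (the mid-segment
backtick is literal), matching the example debated above.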


Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.2 #74

2022-09-02 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 457794 lines...]
[2022-09-02T15:49:56.130Z] 
[2022-09-02T15:49:56.130Z] 
org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuterRepartitioned[caching enabled = true] STARTED
[2022-09-02T15:50:03.259Z] 
[2022-09-02T15:50:03.259Z] 
org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuterRepartitioned[caching enabled = true] PASSED
[2022-09-02T15:50:03.259Z] 
[2022-09-02T15:50:03.259Z] 
org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInner[caching enabled = true] STARTED
[2022-09-02T15:50:08.034Z] 
[2022-09-02T15:50:08.034Z] 
org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInner[caching enabled = true] PASSED
[2022-09-02T15:50:08.034Z] 
[2022-09-02T15:50:08.034Z] 
org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuter[caching enabled = true] STARTED
[2022-09-02T15:50:12.343Z] 
[2022-09-02T15:50:12.343Z] 
org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuter[caching enabled = true] PASSED
[2022-09-02T15:50:12.343Z] 
[2022-09-02T15:50:12.343Z] 
org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeft[caching enabled = true] STARTED
[2022-09-02T15:50:17.114Z] 
[2022-09-02T15:50:17.114Z] 
org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeft[caching enabled = true] PASSED
[2022-09-02T15:50:17.114Z] 
[2022-09-02T15:50:17.114Z] 
org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testMultiInner[caching enabled = true] STARTED
[2022-09-02T15:50:22.704Z] 
[2022-09-02T15:50:22.704Z] 
org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testMultiInner[caching enabled = true] PASSED
[2022-09-02T15:50:22.704Z] 
[2022-09-02T15:50:22.704Z] 
org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeftRepartitioned[caching enabled = true] STARTED
[2022-09-02T15:50:31.260Z] 
[2022-09-02T15:50:31.260Z] 
org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeftRepartitioned[caching enabled = true] PASSED
[2022-09-02T15:50:31.260Z] 
[2022-09-02T15:50:31.260Z] 
org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInnerRepartitioned[caching enabled = false] STARTED
[2022-09-02T15:50:34.823Z] 
[2022-09-02T15:50:34.823Z] 
org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInnerRepartitioned[caching enabled = false] PASSED
[2022-09-02T15:50:34.823Z] 
[2022-09-02T15:50:34.823Z] 
org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuterRepartitioned[caching enabled = false] STARTED
[2022-09-02T15:50:41.841Z] 
[2022-09-02T15:50:41.841Z] 
org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuterRepartitioned[caching enabled = false] PASSED
[2022-09-02T15:50:41.841Z] 
[2022-09-02T15:50:41.841Z] 
org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInner[caching enabled = false] STARTED
[2022-09-02T15:50:44.475Z] 
[2022-09-02T15:50:44.475Z] 
org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInner[caching enabled = false] PASSED
[2022-09-02T15:50:44.475Z] 
[2022-09-02T15:50:44.475Z] 
org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuter[caching enabled = false] STARTED
[2022-09-02T15:50:48.060Z] 
[2022-09-02T15:50:48.060Z] 
org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuter[caching enabled = false] PASSED
[2022-09-02T15:50:48.060Z] 
[2022-09-02T15:50:48.060Z] 
org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeft[caching enabled = false] STARTED
[2022-09-02T15:50:52.681Z] 
[2022-09-02T15:50:52.681Z] 
org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeft[caching enabled = false] PASSED
[2022-09-02T15:50:52.681Z] 
[2022-09-02T15:50:52.681Z] 
org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testMultiInner[caching enabled = false] STARTED
[2022-09-02T15:50:59.707Z] 
[2022-09-02T15:50:59.707Z] 
org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testMultiInner[caching enabled = false] PASSED
[2022-09-02T15:50:59.707Z] 
[2022-09-02T15:50:59.707Z] 
org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeftRepartitioned[caching enabled = false] STARTED
[2022-09-02T15:51:04.650Z] 
[2022-09-02T15:51:04.650Z] 
org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeftRepartitioned[caching enabled = false] PASSED
[2022-09-02T15:51:05.850Z] 
[2022-09-02T15:51:05.850Z] 
org.apache.kafka.streams.integration.StreamTableJoinTopologyOptimizationIntegrationTest
 > shouldDoStreamTableJoinWithDifferentNumberOfPartitions[Optimization = all] 
STARTED
[2022-09-02T15:51:08.038Z] 

Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.0 #203

2022-09-02 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 354801 lines...]
[2022-09-02T15:50:18.360Z] ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionEnabled() PASSED
[2022-09-02T15:50:18.360Z] 
[2022-09-02T15:50:18.360Z] ControllerIntegrationTest > 
testControllerMoveOnPartitionReassignment() STARTED
[2022-09-02T15:50:21.906Z] 
[2022-09-02T15:50:21.906Z] ControllerIntegrationTest > 
testControllerMoveOnPartitionReassignment() PASSED
[2022-09-02T15:50:21.906Z] 
[2022-09-02T15:50:21.906Z] ControllerIntegrationTest > 
testControllerFeatureZNodeSetupWhenFeatureVersioningIsEnabledWithEnabledExistingFeatureZNode()
 STARTED
[2022-09-02T15:50:24.626Z] 
[2022-09-02T15:50:24.626Z] ControllerIntegrationTest > 
testControllerFeatureZNodeSetupWhenFeatureVersioningIsEnabledWithEnabledExistingFeatureZNode()
 PASSED
[2022-09-02T15:50:24.626Z] 
[2022-09-02T15:50:24.626Z] ControllerIntegrationTest > 
testControllerMoveOnTopicCreation() STARTED
[2022-09-02T15:50:27.233Z] 
[2022-09-02T15:50:27.233Z] ControllerIntegrationTest > 
testControllerMoveOnTopicCreation() PASSED
[2022-09-02T15:50:27.233Z] 
[2022-09-02T15:50:27.233Z] ControllerIntegrationTest > 
testControllerFeatureZNodeSetupWhenFeatureVersioningIsDisabledWithDisabledExistingFeatureZNode()
 STARTED
[2022-09-02T15:50:29.836Z] 
[2022-09-02T15:50:29.836Z] ControllerIntegrationTest > 
testControllerFeatureZNodeSetupWhenFeatureVersioningIsDisabledWithDisabledExistingFeatureZNode()
 PASSED
[2022-09-02T15:50:29.836Z] 
[2022-09-02T15:50:29.836Z] ControllerIntegrationTest > 
testControllerRejectControlledShutdownRequestWithStaleBrokerEpoch() STARTED
[2022-09-02T15:50:32.284Z] 
[2022-09-02T15:50:32.284Z] ControllerIntegrationTest > 
testControllerRejectControlledShutdownRequestWithStaleBrokerEpoch() PASSED
[2022-09-02T15:50:32.284Z] 
[2022-09-02T15:50:32.284Z] ControllerIntegrationTest > 
testBackToBackPreferredReplicaLeaderElections() STARTED
[2022-09-02T15:50:39.400Z] 
[2022-09-02T15:50:39.400Z] ControllerIntegrationTest > 
testBackToBackPreferredReplicaLeaderElections() PASSED
[2022-09-02T15:50:39.400Z] 
[2022-09-02T15:50:39.400Z] ControllerIntegrationTest > 
testTopicIdUpgradeAfterReassigningPartitions() STARTED
[2022-09-02T15:50:48.251Z] 
[2022-09-02T15:50:48.251Z] ControllerIntegrationTest > 
testTopicIdUpgradeAfterReassigningPartitions() PASSED
[2022-09-02T15:50:48.251Z] 
[2022-09-02T15:50:48.251Z] ControllerIntegrationTest > 
testTopicIdPersistsThroughControllerReelection() STARTED
[2022-09-02T15:50:55.202Z] 
[2022-09-02T15:50:55.202Z] ControllerIntegrationTest > 
testTopicIdPersistsThroughControllerReelection() PASSED
[2022-09-02T15:50:55.202Z] 
[2022-09-02T15:50:55.202Z] ControllerIntegrationTest > 
testControllerFeatureZNodeSetupWhenFeatureVersioningIsDisabledWithNonExistingFeatureZNode()
 STARTED
[2022-09-02T15:50:57.804Z] 
[2022-09-02T15:50:57.804Z] ControllerIntegrationTest > 
testControllerFeatureZNodeSetupWhenFeatureVersioningIsDisabledWithNonExistingFeatureZNode()
 PASSED
[2022-09-02T15:50:57.804Z] 
[2022-09-02T15:50:57.804Z] ControllerIntegrationTest > testEmptyCluster() 
STARTED
[2022-09-02T15:50:59.540Z] 
[2022-09-02T15:50:59.540Z] ControllerIntegrationTest > testEmptyCluster() PASSED
[2022-09-02T15:50:59.540Z] 
[2022-09-02T15:50:59.541Z] ControllerIntegrationTest > 
testControllerMoveOnPreferredReplicaElection() STARTED
[2022-09-02T15:51:02.142Z] 
[2022-09-02T15:51:02.142Z] ControllerIntegrationTest > 
testControllerMoveOnPreferredReplicaElection() PASSED
[2022-09-02T15:51:02.142Z] 
[2022-09-02T15:51:02.142Z] ControllerIntegrationTest > 
testPreferredReplicaLeaderElection() STARTED
[2022-09-02T15:51:10.493Z] 
[2022-09-02T15:51:10.493Z] ControllerIntegrationTest > 
testPreferredReplicaLeaderElection() PASSED
[2022-09-02T15:51:10.493Z] 
[2022-09-02T15:51:10.493Z] ControllerIntegrationTest > 
testMetadataPropagationOnBrokerChange() STARTED
[2022-09-02T15:51:20.431Z] 
[2022-09-02T15:51:20.431Z] ControllerIntegrationTest > 
testMetadataPropagationOnBrokerChange() PASSED
[2022-09-02T15:51:20.431Z] 
[2022-09-02T15:51:20.431Z] ControllerIntegrationTest > 
testControllerFeatureZNodeSetupWhenFeatureVersioningIsEnabledWithDisabledExistingFeatureZNode()
 STARTED
[2022-09-02T15:51:21.504Z] 
[2022-09-02T15:51:21.504Z] ControllerIntegrationTest > 
testControllerFeatureZNodeSetupWhenFeatureVersioningIsEnabledWithDisabledExistingFeatureZNode()
 PASSED
[2022-09-02T15:51:21.504Z] 
[2022-09-02T15:51:21.504Z] ControllerIntegrationTest > 
testMetadataPropagationForOfflineReplicas() STARTED
[2022-09-02T15:51:31.721Z] 
[2022-09-02T15:51:31.721Z] ControllerIntegrationTest > 
testMetadataPropagationForOfflineReplicas() PASSED
[2022-09-02T15:51:31.721Z] 
[2022-09-02T15:51:31.721Z] ControllerIntegrationTest > 
testTopicIdCreatedOnUpgrade() STARTED
[2022-09-02T15:51:36.319Z] 
[2022-09-02T15:51:36.319Z] ControllerIntegrationTest > 

Re: [VOTE] KIP-678: New Kafka Connect SMT for plainText => struct with Regex

2022-09-02 Thread Chris Egerton
+1 (binding)

Thanks for the KIP!

On Thu, Sep 1, 2022 at 9:52 PM gyejun choi  wrote:

> Hi all,
> I'd like to start a vote for KIP-678: New Kafka Connect SMT for plainText
> => struct with Regex
>
> KIP:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-678%3A+New+Kafka+Connect+SMT+for+plainText+%3D%3E+Struct%28or+Map%29+with+Regex
>
> JIRA: https://github.com/apache/kafka/pull/12219
>
> Discussion thread:
> https://lists.apache.org/thread/xb57l7j953k8dfgqvktb09y31vzpm1xx
> https://lists.apache.org/thread/7t1k0ko8l973v4oj3l983j7qpwolhyzf
>
> Thanks,
> whsoul
>


Re: [DISCUSS] KIP-848: The Next Generation of the Consumer Rebalance Protocol

2022-09-02 Thread David Jacot
Hi Jun,

Thanks for your feedback. Let me start by answering your questions
inline and I will update the KIP next week.

> Thanks for the KIP. Overall, the main benefits of the KIP seem to be fewer
> RPCs during rebalance and more efficient support of wildcard. A few
> comments below.

I would also add that the KIP removes the global sync barrier from the
protocol, which is essential for improving group stability and
scalability, and that it also simplifies the client by moving most of
the logic to the server side.

> 30. ConsumerGroupHeartbeatRequest
> 30.1 ServerAssignor is a singleton. Do we plan to support rolling changing
> of the partition assignor in the consumers?

Definitely. The group coordinator will use the assignor used by a
majority of the members. This allows the group to move from one
assignor to another via a rolling restart. This is explained in the Assignor
Selection chapter.
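A minimal sketch of that majority-based selection (illustrative only; the method and parameter names here are assumptions, not Kafka's actual group coordinator code):

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public final class AssignorSelection {
    // Illustrative sketch: choose the server-side assignor preferred by the
    // majority of members, so a rolling bounce can migrate a group from one
    // assignor to another. Not Kafka's actual implementation.
    public static String selectAssignor(List<String> memberPreferredAssignors) {
        return memberPreferredAssignors.stream()
                .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()))
                .entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElseThrow(() -> new IllegalArgumentException("no members"));
    }
}
```

During a roll, members gradually switch their advertised assignor, and the group follows once the new assignor reaches a majority.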

> 30.2 For each field, could you explain whether it's required in every
> request or the scenarios when it needs to be filled? For example, it's not
> clear to me when TopicPartitions needs to be filled.

The client is expected to set those fields in case of a connection
issue (e.g. timeout) or when the fields have changed since the last
HB. The server populates those fields as long as the member is not
fully reconciled - the member should acknowledge that it has the
expected epoch and assignment. I will clarify this in the KIP.

> 31. In the current consumer protocol, the rack affinity between the client
> and the broker is only considered during fetching, but not during assigning
> partitions to consumers. Sometimes, once the assignment is made, there is
> no opportunity for read affinity because no replicas of assigned partitions
> are close to the member. I am wondering if we should use this opportunity
> to address this by including rack in GroupMember.

That's an interesting idea. I don't see any issue with adding the rack
to the members. I will do so.

> 32. On the metric side, often, it's useful to know how busy a group
> coordinator is. By moving the event loop model, it seems that we could add
> a metric that tracks the fraction of the time the event loop is doing the
> actual work.

That's a great idea. I will add it. Thanks.

> 33. Could we add a section on coordinator failover handling? For example,
> does it need to trigger the check if any group with the wildcard
> subscription now has a new matching topic?

Sure. When the new group coordinator takes over, it has to:
* Set up the session timeouts.
* Trigger a new assignment if a client side assignor is used. We don't
store the information about the member selected to run the assignment
so we have to start a new one.
* Update the topics metadata, verify the wildcard subscriptions, and
trigger a rebalance if needed.

> 34. ConsumerGroupMetadataValue, ConsumerGroupPartitionMetadataValue,
> ConsumerGroupMemberMetadataValue: Could we document what the epoch field
> reflects? For example, does the epoch in ConsumerGroupMetadataValue reflect
> the latest group epoch? What about the one in
> ConsumerGroupPartitionMetadataValue and ConsumerGroupMemberMetadataValue?

Sure. I will clarify that but it is always the latest group epoch.
When the group state is updated, the group epoch is bumped so we use
that one for all the change records related to the update.

> 35. "the group coordinator will ensure that the following invariants are
> met: ... All members exists." It's possible for a member not to get any
> assigned partitions, right?

That's right. Here I meant that the members provided by the assignor
in the assignment must exist in the group. The assignor can not make
up new member ids.

> 36. "He can rejoin the group with a member epoch equal to 0": When would
> a consumer rejoin and what member id would be used?

A member is expected to abandon all its partitions and rejoin when it
receives the FENCED_MEMBER_EPOCH error. In this case, the group
coordinator will have removed the member from the group. The member
can rejoin the group with the same member id but with 0 as epoch. Let
me see if I can clarify this in the KIP.

> 37. "Instead, power users will have the ability to trigger a reassignment
> by either providing a non-zero reason or by updating the assignor
> metadata." Hmm, this seems to be conflicting with the deprecation of
> Consumer#enforceRebalance." Hmm, this seems to be conflicting with the deprecation of

In this case, a new assignment is triggered by the client side
assignor. When constructing the HB, the consumer will always consult
the client side assignor and propagate the information to the group
coordinator. In other words, we don't expect users to call
Consumer#enforceRebalance anymore.

> 38. The reassignment examples are nice. But the section seems to have
> multiple typos.
> 38.1 When the group transitions to epoch 2, B immediately gets into
> "epoch=1, partitions=[foo-2]", which seems incorrect.
> 38.2 When the group transitions to epoch 3, C seems to get into epoch=3,
> partitions=[foo-1] too early.
> 

Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #1196

2022-09-02 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 502943 lines...]
[2022-09-02T12:57:58.479Z] 
[2022-09-02T12:57:58.479Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargePartitionCount STARTED
[2022-09-02T12:58:30.606Z] 
[2022-09-02T12:58:30.606Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargePartitionCount PASSED
[2022-09-02T12:58:30.606Z] 
[2022-09-02T12:58:30.606Z] StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargePartitionCount STARTED
[2022-09-02T12:58:57.082Z] 
[2022-09-02T12:58:57.082Z] StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargePartitionCount PASSED
[2022-09-02T12:58:57.082Z] 
[2022-09-02T12:58:57.082Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorManyStandbys STARTED
[2022-09-02T12:59:03.032Z] 
[2022-09-02T12:59:03.032Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorManyStandbys PASSED
[2022-09-02T12:59:03.032Z] 
[2022-09-02T12:59:03.032Z] StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyStandbys STARTED
[2022-09-02T12:59:29.484Z] 
[2022-09-02T12:59:29.485Z] StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyStandbys PASSED
[2022-09-02T12:59:29.485Z] 
[2022-09-02T12:59:29.485Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargeNumConsumers STARTED
[2022-09-02T12:59:31.428Z] 
[2022-09-02T12:59:31.428Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargeNumConsumers PASSED
[2022-09-02T12:59:31.428Z] 
[2022-09-02T12:59:31.428Z] StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargeNumConsumers STARTED
[2022-09-02T12:59:33.200Z] 
[2022-09-02T12:59:33.200Z] StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargeNumConsumers PASSED
[2022-09-02T12:59:33.200Z] 
[2022-09-02T12:59:33.200Z] AdjustStreamThreadCountTest > 
testConcurrentlyAccessThreads() STARTED
[2022-09-02T12:59:35.954Z] 
[2022-09-02T12:59:35.954Z] AdjustStreamThreadCountTest > 
testConcurrentlyAccessThreads() PASSED
[2022-09-02T12:59:35.954Z] 
[2022-09-02T12:59:35.954Z] AdjustStreamThreadCountTest > 
shouldResizeCacheAfterThreadReplacement() STARTED
[2022-09-02T12:59:40.616Z] 
[2022-09-02T12:59:40.616Z] AdjustStreamThreadCountTest > 
shouldResizeCacheAfterThreadReplacement() PASSED
[2022-09-02T12:59:40.616Z] 
[2022-09-02T12:59:40.616Z] AdjustStreamThreadCountTest > 
shouldAddAndRemoveThreadsMultipleTimes() STARTED
[2022-09-02T12:59:47.854Z] 
[2022-09-02T12:59:47.854Z] AdjustStreamThreadCountTest > 
shouldAddAndRemoveThreadsMultipleTimes() PASSED
[2022-09-02T12:59:47.854Z] 
[2022-09-02T12:59:47.854Z] AdjustStreamThreadCountTest > 
shouldnNotRemoveStreamThreadWithinTimeout() STARTED
[2022-09-02T12:59:48.964Z] 
[2022-09-02T12:59:48.964Z] AdjustStreamThreadCountTest > 
shouldnNotRemoveStreamThreadWithinTimeout() PASSED
[2022-09-02T12:59:48.964Z] 
[2022-09-02T12:59:48.964Z] AdjustStreamThreadCountTest > 
shouldAddAndRemoveStreamThreadsWhileKeepingNamesCorrect() STARTED
[2022-09-02T13:00:11.440Z] 
[2022-09-02T13:00:11.440Z] AdjustStreamThreadCountTest > 
shouldAddAndRemoveStreamThreadsWhileKeepingNamesCorrect() PASSED
[2022-09-02T13:00:11.440Z] 
[2022-09-02T13:00:11.440Z] AdjustStreamThreadCountTest > 
shouldAddStreamThread() STARTED
[2022-09-02T13:00:14.078Z] 
[2022-09-02T13:00:14.078Z] AdjustStreamThreadCountTest > 
shouldAddStreamThread() PASSED
[2022-09-02T13:00:14.078Z] 
[2022-09-02T13:00:14.078Z] AdjustStreamThreadCountTest > 
shouldRemoveStreamThreadWithStaticMembership() STARTED
[2022-09-02T13:00:17.840Z] 
[2022-09-02T13:00:17.840Z] AdjustStreamThreadCountTest > 
shouldRemoveStreamThreadWithStaticMembership() PASSED
[2022-09-02T13:00:17.840Z] 
[2022-09-02T13:00:17.840Z] AdjustStreamThreadCountTest > 
shouldRemoveStreamThread() STARTED
[2022-09-02T13:00:22.643Z] 
[2022-09-02T13:00:22.643Z] AdjustStreamThreadCountTest > 
shouldRemoveStreamThread() PASSED
[2022-09-02T13:00:22.643Z] 
[2022-09-02T13:00:22.643Z] AdjustStreamThreadCountTest > 
shouldResizeCacheAfterThreadRemovalTimesOut() STARTED
[2022-09-02T13:00:24.403Z] 
[2022-09-02T13:00:24.403Z] AdjustStreamThreadCountTest > 
shouldResizeCacheAfterThreadRemovalTimesOut() PASSED
[2022-09-02T13:00:28.201Z] 
[2022-09-02T13:00:28.201Z] FineGrainedAutoResetIntegrationTest > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithGlobalAutoOffsetResetLatest()
 STARTED
[2022-09-02T13:00:29.141Z] 
[2022-09-02T13:00:29.141Z] FineGrainedAutoResetIntegrationTest > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithGlobalAutoOffsetResetLatest()
 PASSED
[2022-09-02T13:00:29.141Z] 
[2022-09-02T13:00:29.141Z] FineGrainedAutoResetIntegrationTest > 
shouldThrowExceptionOverlappingPattern() STARTED
[2022-09-02T13:00:29.141Z] 
[2022-09-02T13:00:29.141Z] FineGrainedAutoResetIntegrationTest > 
shouldThrowExceptionOverlappingPattern() PASSED
[2022-09-02T13:00:29.141Z] 
[2022-09-02T13:00:29.141Z] FineGrainedAutoResetIntegrationTest 

Re: [VOTE] KIP-837 Allow MultiCasting a Result Record.

2022-09-02 Thread Sagar
Hello Bruno/Chris,

Since these are the last set of changes (I am assuming, haha), it would be
great if you could review the 2 options from above so that we can close the
voting. Of course I am happy to incorporate any other requisite changes.

Thanks!
Sagar.
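For reference, a minimal sketch of option 1 from the quoted discussion below, where an empty set returned by the new partitions() method signifies "use the producer's default partitioning". The interface shape and default implementation here are assumptions for illustration, not the final KIP-837 API:

```java
import java.util.Collections;
import java.util.Set;

// Hypothetical sketch of KIP-837's extended StreamPartitioner contract.
public interface SketchStreamPartitioner<K, V> {

    // Existing single-partition method; null means "use default partitioning".
    @Deprecated
    Integer partition(String topic, K key, V value, int numPartitions);

    // New multi-cast method. Mapping null to an empty set preserves the old
    // null-means-default behavior, so existing implementations that only
    // override partition() keep working unchanged.
    default Set<Integer> partitions(String topic, K key, V value, int numPartitions) {
        Integer p = partition(topic, key, value, numPartitions);
        return p == null ? Collections.emptySet() : Collections.singleton(p);
    }
}
```

The point of the default implementation is backward compatibility: a legacy partitioner that returns null still ends up on the default-partitioning path rather than having its records dropped.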

On Wed, Aug 31, 2022 at 10:07 PM Sagar  wrote:

> Thanks Bruno for the great points.
>
> I see 2 options here =>
>
> 1) As Chris suggested, drop the support for dropping records in the
> partitioner. That way, an empty list could signify the usage of a default
> partitioner. Also, if the deprecated partition() method returns null
> thereby signifying the default partitioner, the partitions() can return an
> empty list i.e default partitioner.
>
> 2) OR we treat a null return type of partitions() method to signify the
> usage of the default partitioner. In the default implementation of
> partitions() method, if partition() returns null, then even partitions()
> can return null(instead of an empty list). The RecordCollectorImpl code can
> also be modified accordingly. @Chris, to your point, we can even drop the
> support of dropping of records. It came up during KIP discussion, and I
> thought it might be a useful feature. Let me know what you think.
>
> 3) Lastly about the partition number check. I wanted to avoid the throwing
> of exception so I thought adding it might be a useful feature. But as you
> pointed out, if it can break backwards compatibility, it's better to remove
> it.
>
> Thanks!
> Sagar.
>
>
> On Tue, Aug 30, 2022 at 6:32 PM Chris Egerton 
> wrote:
>
>> +1 to Bruno's concerns about backward compatibility. Do we actually need
>> support for dropping records in the partitioner? It doesn't seem necessary
>> based on the motivation for the KIP. If we remove that feature, we could
>> handle null and/or empty lists by using the default partitioning,
>> equivalent to how we handle null return values from the existing partition
>> method today.
>>
>> On Tue, Aug 30, 2022 at 8:55 AM Bruno Cadonna  wrote:
>>
>> > Hi Sagar,
>> >
>> > Thank you for the updates!
>> >
>> > I do not intend to prolong this vote thread more than needed, but I
>> > still have some points.
>> >
>> > The deprecated partition method can return null if the default
>> > partitioning logic of the producer should be used.
>> > With the new method partitions() it seems that it is not possible to use
>> > the default partitioning logic, anymore.
>> >
>> > Also, in the default implementation of method partitions(), a record
>> > that would use the default partitioning logic in method partition()
>> > would be dropped, which would break backward compatibility since Streams
>> > would always call the new method partitions() even though the users
>> > still implement the deprecated method partition().
>> >
>> > I have a last point that we should probably discuss on the PR and not on
>> > the KIP but since you added the code in the KIP I need to mention it. I
>> > do not think you should check the validity of the partition number since
>> > the ProducerRecord does the same check and throws an exception. If
>> > Streams adds the same check but does not throw, the behavior is not
>> > backward compatible.
>> >
>> > Best,
>> > Bruno
>> >
>> >
>> > On 30.08.22 12:43, Sagar wrote:
>> > > Thanks Bruno/Chris,
>> > >
>> > > Even I agree that might be better to keep it simple like the way Chris
>> > > suggested. I have updated the KIP accordingly. I made couple of minor
>> > > changes to the KIP:
>> > >
>> > > 1) One of them being the change of return type of partitions method
>> from
>> > > List to Set. This is to ensure that in case the implementation of
>> > > StreamPartitioner is buggy and ends up returning duplicate
>> > > partition numbers, we won't have duplicates thereby not trying to
>> send to
>> > > the same partition multiple times due to this.
>> > > 2) I also added a check to send the record only to valid partition
>> > numbers
>> > > and log and drop when the partition number is invalid. This is again
>> to
>> > > prevent errors for cases when the StreamPartitioner implementation has
>> > some
>> > > bugs (since there are no validations as such).
>> > > 3) I also updated the Test Plan section based on the suggestion from
>> > Bruno.
>> > > 4) I updated the default implementation of partitions method based on
>> the
>> > > great catch from Chris!
>> > >
>> > > Let me know if it looks fine now.
>> > >
>> > > Thanks!
>> > > Sagar.
>> > >
>> > >
>> > > On Tue, Aug 30, 2022 at 3:00 PM Bruno Cadonna 
>> > wrote:
>> > >
>> > >> Hi,
>> > >>
>> > >> I am in favour of discarding the sugar for broadcasting and leaving the
>> > >> broadcasting to the implementation as Chris suggests. I think that is
>> > >> the cleanest option.
>> > >>
>> > >> Best,
>> > >> Bruno
>> > >>
>> > >> On 29.08.22 19:50, Chris Egerton wrote:
>> > >>> Hi all,
>> > >>>
>> > >>> I think it'd be useful to be more explicit about broadcasting to all
>> > >> topic
>> > >>> partitions rather than add implicit behavior for 

[jira] [Created] (KAFKA-14199) Installed kafka in ubuntu and not able to access in browser. org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 1195725856 larger than 104

2022-09-02 Thread Gops (Jira)
Gops created KAFKA-14199:


 Summary: Installed kafka in ubuntu and not able to access in 
browser.  org.apache.kafka.common.network.InvalidReceiveException: Invalid 
receive (size = 1195725856 larger than 104857600)
 Key: KAFKA-14199
 URL: https://issues.apache.org/jira/browse/KAFKA-14199
 Project: Kafka
  Issue Type: Bug
  Components: admin
Reporter: Gops


I am new to Kafka. I have installed ZooKeeper and Kafka on my local Ubuntu 
machine. When I try to access Kafka in my browser at 
[http://ip:9092|http://ip:9092/] I am facing this error.

+++

[SocketServer listenerType=ZK_BROKER, nodeId=0] Unexpected error from 
/127.0.0.1; closing connection (org.apache.kafka.common.network.Selector)
org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size 
= 1195725856 larger than 104857600)
    at 
org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:105)
    at 
org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452)
    at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402)
    at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674)
    at 
org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
    at kafka.network.Processor.poll(SocketServer.scala:989)
    at kafka.network.Processor.run(SocketServer.scala:892)
    at java.base/java.lang.Thread.run(Thread.java:829)

+++

Also, I have tried updating socket.request.max.bytes=5 in the 
~/kafka/config/server.properties file, but I am still getting the same error.

 

Please figure it out. Thanks
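As an aside that likely explains the report above: the "size" in InvalidReceiveException is simply the first four bytes of the incoming request interpreted as a big-endian length prefix, and 1195725856 decodes to the ASCII bytes "GET " — which suggests the browser is sending an HTTP request to Kafka's binary protocol port, something no broker setting can fix. A quick way to check the decode (illustrative utility, not part of Kafka):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public final class DecodeReceiveSize {
    // Kafka reads the first 4 bytes of a connection as a big-endian length
    // prefix. Decoding the reported "size" back into bytes shows what the
    // client actually sent.
    public static String decode(int size) {
        byte[] bytes = ByteBuffer.allocate(4).putInt(size).array();
        return new String(bytes, StandardCharsets.US_ASCII);
    }

    public static void main(String[] args) {
        System.out.println("'" + decode(1195725856) + "'"); // prints 'GET '
    }
}
```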



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-864: Add End-To-End Latency Metrics to Connectors

2022-09-02 Thread Sagar
Hi Jorge,

Thanks for the changes.

Regarding the metrics, I meant something like this:
kafka.connect:type=sink-task-metrics,connector="{connector}",task="{task}"

the way it's defined in
https://kafka.apache.org/documentation/#connect_monitoring for the metrics.

I see what you mean by the 3 metrics and how it can be interpreted. The
only thing I would argue is do we need sink-record-latency-min? Maybe we
could remove this min metric as well and make all of the 3 e2e metrics
consistent(since put-batch also doesn't expose a min which makes sense to
me). I think this is in contrast to what Yash pointed out above so I would
like to hear his thoughts as well.

The other point Yash mentioned about the slightly flawed definition of e2e
is also true in a sense. But I have a feeling that's one the records are
polled by the connector tasks, it would be difficult to track the final leg
via the framework. Probably users can track the metrics at their end to
figure that out. Do you think that makes sense?

Thanks!
Sagar.
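As a concrete reading of the proposed sink-record e2e latency (wall-clock time when the task sees the record minus the record's Kafka timestamp), here is an illustrative aggregation over a batch. This is a sketch of the arithmetic only, not Connect's actual metrics plumbing:

```java
import java.util.List;

public final class SinkRecordLatency {
    // Illustrative sketch: per-record e2e latency is wall-clock time minus
    // the record's Kafka timestamp; a batch then contributes max and avg
    // values (the KIP discussion debates whether min is worth exposing too).
    public static long[] maxAndAvgMs(List<Long> recordTimestampsMs, long nowMs) {
        long max = 0, sum = 0;
        for (long ts : recordTimestampsMs) {
            long latency = Math.max(0, nowMs - ts); // clamp clock skew to 0
            max = Math.max(max, latency);
            sum += latency;
        }
        long avg = recordTimestampsMs.isEmpty() ? 0 : sum / recordTimestampsMs.size();
        return new long[] {max, avg};
    }
}
```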




On Thu, Sep 1, 2022 at 11:40 PM Jorge Esteban Quilcate Otoya <
quilcate.jo...@gmail.com> wrote:

> Hi Sagar and Yash,
>
> Thanks for your feedback!
>
> > 1) I am assuming the new metrics would be task level metric.
>
> 1.1 Yes, it will be a task level metric, implemented on the
> Worker[Source/Sink]Task.
>
> > Could you specify the way it's done for other sink/source connector?
>
> 1.2. Not sure what you mean by this. Could you elaborate a bit more?
>
> > 2. I am slightly confused about the e2e latency metric...
>
> 2.1. Yes, I see. I was trying to bring a similar concept as in Streams with
> KIP-613, though the e2e concept may not be translatable.
> We could keep it as `sink-record-latency` to avoid conflating concepts. A
> similar metric naming was proposed in KIP-489 but at the consumer level —
> though it seems dormant for a couple of years.
>
> > However, the put-batch time measures the
> > time to put a batch of records to external sink. So, I would assume the 2
> > can't be added as is to compute the e2e latency. Maybe I am missing
> > something here. Could you plz clarify this.
>
> 2.2. Yes, agree. Not necessarily added, but with the 3 latencies (poll,
> convert, putBatch) will be clearer where the bottleneck may be, and
> represent the internal processing.
>
> > however, as per the KIP it looks like it will be
> > the latency between when the record was written to Kafka and when the
> > record is returned by a sink task's consumer's poll?
>
> 3.1. Agree. 2.1. could help to clarify this.
>
> > One more thing - I was wondering
> > if there's a particular reason for having a min metric for e2e latency
> but
> > not for convert time?
>
> 3.2. I was following KIP-613 for e2e, where Min seems useful to compare with
> Max to get an idea of the window of results, though current latencies in
> Connector do not include Min, and that's why I haven't added it for convert
> latency. Do you think it makes sense to extend latency metrics with Min?
>
> KIP is updated to clarify some of these changes.
>
> Many thanks,
> Jorge.
>
> On Thu, 1 Sept 2022 at 18:11, Yash Mayya  wrote:
>
> > Hi Jorge,
> >
> > Thanks for the KIP! I have the same confusion with the e2e-latency
> metrics
> > as Sagar above. "e2e" would seem to indicate the latency between when the
> > record was written to Kafka and when the record was written to the sink
> > system by the connector - however, as per the KIP it looks like it will
> be
> > the latency between when the record was written to Kafka and when the
> > record is returned by a sink task's consumer's poll? I think that metric
> > will be a little confusing to interpret. One more thing - I was wondering
> > if there's a particular reason for having a min metric for e2e latency
> but
> > not for convert time?
> >
> > Thanks,
> > Yash
> >
> > On Thu, Sep 1, 2022 at 8:59 PM Sagar  wrote:
> >
> > > Hi Jorge,
> > >
> > > Thanks for the KIP. It looks like a very good addition. I skimmed
> through
> > > once and had a couple of questions =>
> > >
> > > 1) I am assuming the new metrics would be task level metric. Could you
> > > specify the way it's done for other sink/source connector?
> > > 2) I am slightly confused about the e2e latency metric. Let's consider
> > the
> > > sink connector metric. If I look at the way it's supposed to be
> > calculated,
> > > i.e the difference between the record timestamp and the wall clock
> time,
> > it
> > > looks like a per record metric. However, the put-batch time measures
> the
> > > time to put a batch of records to external sink. So, I would assume
> the 2
> > > can't be added as is to compute the e2e latency. Maybe I am missing
> > > something here. Could you plz clarify this.
> > >
> > > Thanks!
> > > Sagar.
> > >
> > > On Tue, Aug 30, 2022 at 8:43 PM Jorge Esteban Quilcate Otoya <
> > > quilcate.jo...@gmail.com> wrote:
> > >
> > > > Hi all,
> > > >
> > > > I'd like to start a discussion thread on KIP-864: Add End-To-End
> > Latency
> > > > 

[jira] [Created] (KAFKA-14198) Release package contains snakeyaml 1.30

2022-09-02 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-14198:
--

 Summary: Release package contains snakeyaml 1.30
 Key: KAFKA-14198
 URL: https://issues.apache.org/jira/browse/KAFKA-14198
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 3.3.0
Reporter: Mickael Maison


snakeyaml 1.30 is vulnerable to CVE-2022-25857: 
https://security.snyk.io/vuln/SNYK-JAVA-ORGYAML-2806360

It looks like we pull this dependency because of swagger. It's unclear how or 
even if this can be exploited in Kafka but it's flagged by scanning tools. 

I wonder if we could make the swagger dependency compile-time only and avoid 
shipping it. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-14197) Kraft broker fails to startup after topic creation failure

2022-09-02 Thread Luke Chen (Jira)
Luke Chen created KAFKA-14197:
-

 Summary: Kraft broker fails to startup after topic creation failure
 Key: KAFKA-14197
 URL: https://issues.apache.org/jira/browse/KAFKA-14197
 Project: Kafka
  Issue Type: Bug
  Components: kraft
Reporter: Luke Chen


In a KRaft ControllerWriteEvent, we start by trying to apply the record to the 
controller's in-memory state, then send out the record via the raft client. But if 
there is an error while sending the records, there is no way to revert the change 
to the controller's in-memory state [1].

The issue happened when creating topics: controller state is updated with topic 
and partition metadata (e.g. the broker-to-ISR map), but the record is not sent out 
successfully (e.g. due to a buffer allocation error). Then, when shutting down the 
node, the controlled shutdown will try to remove the broker from the ISR via [2]:
{code:java}
generateLeaderAndIsrUpdates("enterControlledShutdown[" + brokerId + "]", 
brokerId, NO_LEADER, records, 
brokersToIsrs.partitionsWithBrokerInIsr(brokerId));{code}
 

After appending the partitionChangeRecords and sending them to the metadata topic 
successfully, the brokers fail to "replay" these partition changes, since those 
topics/partitions were not created successfully previously.

Even worse, after restarting the node, all the metadata records are replayed 
again and the same error happens again, so the broker cannot start up 
successfully.

 

The error and call stack are as follows; basically, it complains that the topic 
image can't be found:
{code:java}
[2022-09-02 16:29:16,334] ERROR Encountered metadata loading fault: Error 
replaying metadata log record at offset 81 
(org.apache.kafka.server.fault.LoggingFaultHandler)
java.lang.NullPointerException
    at org.apache.kafka.image.TopicDelta.replay(TopicDelta.java:69)
    at org.apache.kafka.image.TopicsDelta.replay(TopicsDelta.java:91)
    at org.apache.kafka.image.MetadataDelta.replay(MetadataDelta.java:248)
    at org.apache.kafka.image.MetadataDelta.replay(MetadataDelta.java:186)
    at 
kafka.server.metadata.BrokerMetadataListener.$anonfun$loadBatches$3(BrokerMetadataListener.scala:239)
    at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
    at 
kafka.server.metadata.BrokerMetadataListener.kafka$server$metadata$BrokerMetadataListener$$loadBatches(BrokerMetadataListener.scala:232)
    at 
kafka.server.metadata.BrokerMetadataListener$HandleCommitsEvent.run(BrokerMetadataListener.scala:113)
    at 
org.apache.kafka.queue.KafkaEventQueue$EventContext.run(KafkaEventQueue.java:121)
    at 
org.apache.kafka.queue.KafkaEventQueue$EventHandler.handleEvents(KafkaEventQueue.java:200)
    at 
org.apache.kafka.queue.KafkaEventQueue$EventHandler.run(KafkaEventQueue.java:173)
    at java.base/java.lang.Thread.run(Thread.java:829)
{code}
 

[1] 
https://github.com/apache/kafka/blob/ef65b6e566ef69b2f9b58038c98a5993563d7a68/metadata/src/main/java/org/apache/kafka/controller/QuorumController.java#L779-L804
 

[2] 
https://github.com/apache/kafka/blob/trunk/metadata/src/main/java/org/apache/kafka/controller/ReplicationControlManager.java#L1270
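A generic illustration of the hazard described above, and one way to avoid this class of bug: stage the in-memory mutation until the log append succeeds, so there is never anything to revert. This is a sketch of the pattern, not the QuorumController's actual code:

```java
import java.util.List;
import java.util.function.Consumer;

public final class StagedApply {
    // Illustrative sketch: if in-memory state is mutated before the log
    // append succeeds, a failed append leaves state and log inconsistent
    // with no way to revert. Replaying records into state only after a
    // successful append sidesteps the revert problem entirely.
    public interface Appender<R> {
        boolean append(List<R> records); // false = append failed
    }

    public static <R> boolean applyAfterAppend(List<R> records,
                                               Appender<R> log,
                                               Consumer<R> replayIntoState) {
        if (!log.append(records)) {
            return false; // state was never touched, nothing to revert
        }
        records.forEach(replayIntoState);
        return true;
    }
}
```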



--
This message was sent by Atlassian Jira
(v8.20.10#820010)