Re: Request for permission to assign JIRA ticket (KAFKA-13403) to myself

2021-10-27 Thread Matthias J. Sax

What is your user name?

On 10/27/21 6:08 PM, Arun Mathew wrote:

Hi,
 Please give me the relevant permissions to take up tickets.
--
With Regards,
Arun Mathew



Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #544

2021-10-27 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 493797 lines...]
[2021-10-28T02:14:10.118Z] 
[2021-10-28T02:14:10.118Z] AclAuthorizerTest > testEmptyAclThrowsException() 
STARTED
[2021-10-28T02:14:11.314Z] 
[2021-10-28T02:14:11.314Z] AclAuthorizerTest > testEmptyAclThrowsException() 
PASSED
[2021-10-28T02:14:11.314Z] 
[2021-10-28T02:14:11.314Z] AclAuthorizerTest > 
testAuthorizeByResourceTypeNoAclFoundOverride() STARTED
[2021-10-28T02:14:11.314Z] 
[2021-10-28T02:14:11.314Z] AclAuthorizerTest > 
testAuthorizeByResourceTypeNoAclFoundOverride() PASSED
[2021-10-28T02:14:11.314Z] 
[2021-10-28T02:14:11.314Z] AclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess() STARTED
[2021-10-28T02:14:12.443Z] 
[2021-10-28T02:14:12.443Z] AclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess() PASSED
[2021-10-28T02:14:12.443Z] 
[2021-10-28T02:14:12.443Z] AclAuthorizerTest > 
testAuthorizeByResourceTypeWithAllOperationAce() STARTED
[2021-10-28T02:14:12.443Z] 
[2021-10-28T02:14:12.443Z] AclAuthorizerTest > 
testAuthorizeByResourceTypeWithAllOperationAce() PASSED
[2021-10-28T02:14:12.443Z] 
[2021-10-28T02:14:12.443Z] AclAuthorizerTest > 
testAllowAccessWithCustomPrincipal() STARTED
[2021-10-28T02:14:12.443Z] 
[2021-10-28T02:14:12.443Z] AclAuthorizerTest > 
testAllowAccessWithCustomPrincipal() PASSED
[2021-10-28T02:14:12.443Z] 
[2021-10-28T02:14:12.443Z] AclAuthorizerTest > 
testDeleteAclOnWildcardResource() STARTED
[2021-10-28T02:14:13.372Z] 
[2021-10-28T02:14:13.372Z] AclAuthorizerTest > 
testDeleteAclOnWildcardResource() PASSED
[2021-10-28T02:14:13.372Z] 
[2021-10-28T02:14:13.372Z] AclAuthorizerTest > 
testAuthorizerZkConfigFromKafkaConfig() STARTED
[2021-10-28T02:14:14.405Z] 
[2021-10-28T02:14:14.405Z] AclAuthorizerTest > 
testAuthorizerZkConfigFromKafkaConfig() PASSED
[2021-10-28T02:14:14.405Z] 
[2021-10-28T02:14:14.405Z] AclAuthorizerTest > testChangeListenerTiming() 
STARTED
[2021-10-28T02:14:14.405Z] 
[2021-10-28T02:14:14.405Z] AclAuthorizerTest > testChangeListenerTiming() PASSED
[2021-10-28T02:14:14.405Z] 
[2021-10-28T02:14:14.405Z] AclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions()
 STARTED
[2021-10-28T02:14:15.637Z] 
[2021-10-28T02:14:15.637Z] AclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions()
 PASSED
[2021-10-28T02:14:15.637Z] 
[2021-10-28T02:14:15.637Z] AclAuthorizerTest > 
testAuthorzeByResourceTypeSuperUserHasAccess() STARTED
[2021-10-28T02:14:15.637Z] 
[2021-10-28T02:14:15.637Z] AclAuthorizerTest > 
testAuthorzeByResourceTypeSuperUserHasAccess() PASSED
[2021-10-28T02:14:15.637Z] 
[2021-10-28T02:14:15.637Z] AclAuthorizerTest > 
testAuthorizeByResourceTypePrefixedResourceDenyDominate() STARTED
[2021-10-28T02:14:15.637Z] 
[2021-10-28T02:14:15.637Z] AclAuthorizerTest > 
testAuthorizeByResourceTypePrefixedResourceDenyDominate() PASSED
[2021-10-28T02:14:15.637Z] 
[2021-10-28T02:14:15.637Z] AclAuthorizerTest > 
testAuthorizeByResourceTypeMultipleAddAndRemove() STARTED
[2021-10-28T02:14:16.734Z] 
[2021-10-28T02:14:16.734Z] AclAuthorizerTest > 
testAuthorizeByResourceTypeMultipleAddAndRemove() PASSED
[2021-10-28T02:14:16.734Z] 
[2021-10-28T02:14:16.734Z] AclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow() STARTED
[2021-10-28T02:14:17.831Z] 
[2021-10-28T02:14:17.831Z] AclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow() PASSED
[2021-10-28T02:14:17.831Z] 
[2021-10-28T02:14:17.831Z] AclAuthorizerTest > 
testAccessAllowedIfAllowAclExistsOnPrefixedResource() STARTED
[2021-10-28T02:14:17.831Z] 
[2021-10-28T02:14:17.831Z] AclAuthorizerTest > 
testAccessAllowedIfAllowAclExistsOnPrefixedResource() PASSED
[2021-10-28T02:14:17.831Z] 
[2021-10-28T02:14:17.831Z] AclAuthorizerTest > 
testAuthorizeByResourceTypeDenyTakesPrecedence() STARTED
[2021-10-28T02:14:19.016Z] 
[2021-10-28T02:14:19.016Z] AclAuthorizerTest > 
testAuthorizeByResourceTypeDenyTakesPrecedence() PASSED
[2021-10-28T02:14:19.016Z] 
[2021-10-28T02:14:19.016Z] AclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls() STARTED
[2021-10-28T02:14:19.016Z] 
[2021-10-28T02:14:19.016Z] AclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls() PASSED
[2021-10-28T02:14:19.016Z] 
[2021-10-28T02:14:19.016Z] AclAuthorizerTest > 
testAuthorizeByResourceTypeWithAllPrincipalAce() STARTED
[2021-10-28T02:14:20.199Z] 
[2021-10-28T02:14:20.199Z] LogCleanerParameterizedIntegrationTest > 
testCleanerWithMessageFormatV0(CompressionType) > 
kafka.log.LogCleanerParameterizedIntegrationTest.testCleanerWithMessageFormatV0(CompressionType)[3]
 PASSED
[2021-10-28T02:14:20.199Z] 
[2021-10-28T02:14:20.199Z] LogCleanerParameterizedIntegrationTest > 
testCleanerWithMessageFormatV0(CompressionType) > 

Request for permission to assign JIRA ticket (KAFKA-13403) to myself

2021-10-27 Thread Arun Mathew
Hi,
Please give me the relevant permissions to take up tickets.
--
With Regards,
Arun Mathew


[jira] [Created] (KAFKA-13412) Retry of initTransactions after timeout may cause invalid transition

2021-10-27 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-13412:
---

 Summary: Retry of initTransactions after timeout may cause invalid 
transition
 Key: KAFKA-13412
 URL: https://issues.apache.org/jira/browse/KAFKA-13412
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
Assignee: Jason Gustafson


If `initTransactions()` cannot be completed before the timeout defined by 
`max.block.ms`, then the call will raise a `TimeoutException`. The user is 
expected to retry this, which is what Kafka Streams does. However, the producer 
will keep retrying the `InitProducerId` request in the background and it is 
possible for it to return before the retry call to `initTransactions()`. This
leads to the following exception:
{code}
org.apache.kafka.common.KafkaException: TransactionalId blah: Invalid 
transition attempted from state READY to state INITIALIZING

at 
org.apache.kafka.clients.producer.internals.TransactionManager.transitionTo(TransactionManager.java:1077)
at 
org.apache.kafka.clients.producer.internals.TransactionManager.transitionTo(TransactionManager.java:1070)
at 
org.apache.kafka.clients.producer.internals.TransactionManager.lambda$initializeTransactions$1(TransactionManager.java:336)
at 
org.apache.kafka.clients.producer.internals.TransactionManager.handleCachedTransactionRequestResult(TransactionManager.java:1198)
at 
org.apache.kafka.clients.producer.internals.TransactionManager.initializeTransactions(TransactionManager.java:333)
at 
org.apache.kafka.clients.producer.internals.TransactionManager.initializeTransactions(TransactionManager.java:328)
at 
org.apache.kafka.clients.producer.KafkaProducer.initTransactions(KafkaProducer.java:597)
{code}
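The race can be reproduced in miniature with a toy state machine (a sketch only: `MiniTransactionManager` and its states are illustrative stand-ins, not Kafka's actual classes). Step 1 is the first `initTransactions()` call that times out from the caller's point of view; step 2 is the background Sender retry completing `InitProducerId`; step 3 is the user-level retry hitting the READY state:

```java
// Hypothetical, simplified model of the state transitions described above.
enum State { UNINITIALIZED, INITIALIZING, READY }

class MiniTransactionManager {
    private State state = State.UNINITIALIZED;

    void initTransactions() {
        // Only UNINITIALIZED -> INITIALIZING is a legal entry transition here.
        if (state != State.UNINITIALIZED) {
            throw new IllegalStateException(
                    "Invalid transition attempted from state " + state
                    + " to state INITIALIZING");
        }
        state = State.INITIALIZING;
    }

    // Models the background thread completing InitProducerId on its own.
    void backgroundInitProducerIdCompletes() {
        state = State.READY;
    }
}

public class TransitionRace {
    public static void main(String[] args) {
        MiniTransactionManager tm = new MiniTransactionManager();
        tm.initTransactions();                  // 1. caller sees a timeout, state is INITIALIZING
        tm.backgroundInitProducerIdCompletes(); // 2. background retry succeeds -> READY
        try {
            tm.initTransactions();              // 3. user retry: READY -> INITIALIZING is invalid
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```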





--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-779: Allow Source Tasks to Handle Producer Exceptions

2021-10-27 Thread Chris Egerton
Hi Knowles,

Thanks for the KIP. I may have more to say later but there's one thing I'd
like to make sure to share now. In the Javadocs for the proposed
SourceTask::ignoreNonRetriableProducerException method,
the InvalidProducerEpochException exception class is included as an example
of a non-retriable exception that may cause the new SourceTask method to be
invoked. This exception should only arise if the source task's producer is
a transactional producer, which is currently never the case and, once
KIP-618 (https://cwiki.apache.org/confluence/display/KAFKA/KIP-618) is
merged, will only be the case when the task is running with exactly-once
support. I wonder if it's safe to allow connectors to discard this
exception when they're running with exactly-once support, or if the task
should still be unconditionally failed in that case?
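For readers following along, the hook under discussion might look roughly like this (a sketch under assumptions: the KIP was still in flux, so the signature, the stand-in `SourceRecord` type, and the default behavior shown here are illustrative, not the real Connect API):

```java
// Illustrative stand-in for org.apache.kafka.connect.source.SourceRecord.
class SourceRecord {
    final String topic;
    final Object value;
    SourceRecord(String topic, Object value) { this.topic = topic; this.value = value; }
}

abstract class SourceTaskSketch {
    // Proposed hook: return true to discard the failed record and keep running,
    // false to preserve today's behavior of failing the task.
    public boolean ignoreNonRetriableProducerException(SourceRecord preTransformRecord,
                                                       Exception error) {
        return false; // default matches current behavior: fail the task
    }
}

public class Kip779Sketch extends SourceTaskSketch {
    @Override
    public boolean ignoreNonRetriableProducerException(SourceRecord preTransformRecord,
                                                       Exception error) {
        // A connector could ack its source system here using fields from the
        // pre-transform record, then skip everything except auth-type failures.
        return !(error instanceof SecurityException);
    }

    public static void main(String[] args) {
        Kip779Sketch task = new Kip779Sketch();
        System.out.println(task.ignoreNonRetriableProducerException(
                new SourceRecord("topic", "value"), new RuntimeException("serialization")));
    }
}
```

The point Chris raises above would translate to: should the framework even consult this hook for `InvalidProducerEpochException` when the task runs with exactly-once support, or fail unconditionally?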

Cheers,

Chris

On Wed, Oct 27, 2021 at 5:39 PM John Roesler  wrote:

> Hi Knowles,
>
> Thanks for the reply! That all sounds reasonable to me, and
> that's a good catch regarding the SourceRecord.
>
> Thanks,
> -John
>
> On Wed, 2021-10-27 at 15:32 -0400, Knowles Atchison Jr
> wrote:
> > John,
> >
> > Thank you for the response and feedback!
> >
> > I originally started my first pass with the ProducerRecord<byte[], byte[]>.
> > For our connector, we need some of the information out of the
> SourceRecord
> > to ack our source system. If I had the actual ProducerRecord, I would
> have
> > to convert it back before I would be able to do anything useful with it.
> I
> > think there is merit in providing both records as parameters to this
> > callback. Then connector writers can decide which of the representations
> of
> > the data is most useful to them. I also noticed that in my PR I was
> sending
> > the SourceRecord post transformation, when we really should be sending
> the
> > preTransformRecord.
> >
> > The Streams solution to this is very interesting. Given the nature of a
> > connector, to me it makes the most sense for the api call to be part of
> > that task rather than an external class that is configurable. This allows
> > the connector to use state it may have at the time to inform decisions on
> > what to do with these producer exceptions.
> >
> > I have updated the KIP and PR.
> >
> > Knowles
> >
> > On Wed, Oct 27, 2021 at 1:03 PM John Roesler 
> wrote:
> >
> > > Good morning, Knowles,
> > >
> > > Thanks for the KIP!
> > >
> > > To address your latest questions, it is fine to call for a
> > > vote if a KIP doesn't generate much discussion. Either the
> > > KIP was just not controversial enough for anyone to comment,
> > > in which case a vote is appropriate; or no one had time to
> > > review it, in which case, calling for a vote might be more
> > > provocative and elicit a response.
> > >
> > > As far as pinging people directly, one idea would be to look
> > > at the git history (git blame/praise) for the files you're
> > > changing to see which committers have recently been
> > > involved. Those are the folks who are most likely to have
> > > valuable feedback on your proposal. It might not be
> > > appropriate to directly email them, but I have seen KIP
> > > discussions before that requested feedback from people by
> > > name. It's probably not best to lead with that, but since no
> > > one has responded so far, it might not hurt. I'm sure that
> > > the reason they haven't noticed your KIP is just that they
> > > are so busy it slipped their radar. They might actually
> > > appreciate a more direct ping at this point.
> > >
> > > I'm happy to review, but as a caveat, I don't have much
> > > experience with using or maintaining Connect, so caveat
> > > emptor as far as my review goes.
> > >
> > > First of all, thanks for the well written KIP. Without much
> > > context, I was able to understand the motivation and
> > > proposal easily just by reading your document.
> > >
> > > I think your proposal is a good one. It seems like it would
> > > be pretty obvious as a user what (if anything) to do with
> > > the proposed method.
> > >
> > > For your reference, this proposal reminds me of these
> > > capabilities in Streams:
> > >
> > >
> https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/errors/DeserializationExceptionHandler.java
> > > and
> > >
> > >
> https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/errors/ProductionExceptionHandler.java
> > > .
> > >
> > > I'm not sure if there's value in bringing your proposed
> > > interface closer to that pattern or not. Streams and Connect
> > > are quite different domains after all. At least, I wanted
> > > you to be aware of them so you could consider the
> > > alternative API strategy they present.
> > >
> > > Regardless, I do wonder if it would be helpful to also
> > > include the actual ProducerRecord we tried to send, since
> > > there's a non-trivial transformation that takes place to
> > > convert the SourceRecord into a ProducerRecord. I'm not 

Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #543

2021-10-27 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 4155 lines...]
[2021-10-27T22:26:32.483Z] 
[2021-10-27T22:26:32.483Z] See 
https://docs.gradle.org/7.2/userguide/command_line_interface.html#sec:command_line_warnings
[2021-10-27T22:26:32.483Z] 
[2021-10-27T22:26:32.483Z] Execution optimizations have been disabled for 2 
invalid unit(s) of work during this build to ensure correctness.
[2021-10-27T22:26:32.483Z] Please consult deprecation warnings for more details.
[2021-10-27T22:26:32.483Z] 
[2021-10-27T22:26:32.483Z] BUILD FAILED in 6m 43s
[2021-10-27T22:26:32.483Z] 228 actionable tasks: 186 executed, 42 up-to-date
[2021-10-27T22:26:32.483Z] 
[2021-10-27T22:26:32.483Z] See the profiling report at: 
file:///home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk/build/reports/profile/profile-2021-10-27-22-19-51.html
[2021-10-27T22:26:32.483Z] A fine-grained performance profile is available: use 
the --scan option.
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // timestamps
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Failed in branch ARM
[2021-10-27T22:26:38.303Z] > Task :streams:checkstyleTest
[2021-10-27T22:26:38.303Z] 
[2021-10-27T22:26:38.303Z] FAILURE: Build failed with an exception.
[2021-10-27T22:26:38.303Z] 
[2021-10-27T22:26:38.303Z] * What went wrong:
[2021-10-27T22:26:38.303Z] Execution failed for task ':clients:checkstyleTest'.
[2021-10-27T22:26:38.303Z] > Checkstyle rule violations were found. See the 
report at: 
file:///home/jenkins/workspace/Kafka_kafka_trunk/clients/build/reports/checkstyle/test.html
[2021-10-27T22:26:38.303Z]   Checkstyle files with violations: 1
[2021-10-27T22:26:38.303Z]   Checkstyle violations by severity: [error:1]
[2021-10-27T22:26:38.303Z] 
[2021-10-27T22:26:38.303Z] 
[2021-10-27T22:26:38.303Z] * Try:
[2021-10-27T22:26:38.303Z] Run with --stacktrace option to get the stack trace. 
Run with --info or --debug option to get more log output. Run with --scan to 
get full insights.
[2021-10-27T22:26:38.303Z] 
[2021-10-27T22:26:38.303Z] * Get more help at https://help.gradle.org
[2021-10-27T22:26:38.303Z] 
[2021-10-27T22:26:38.303Z] Deprecated Gradle features were used in this build, 
making it incompatible with Gradle 8.0.
[2021-10-27T22:26:38.303Z] 
[2021-10-27T22:26:38.303Z] You can use '--warning-mode all' to show the 
individual deprecation warnings and determine if they come from your own 
scripts or plugins.
[2021-10-27T22:26:38.303Z] 
[2021-10-27T22:26:38.303Z] See 
https://docs.gradle.org/7.2/userguide/command_line_interface.html#sec:command_line_warnings
[2021-10-27T22:26:38.303Z] 
[2021-10-27T22:26:38.303Z] Execution optimizations have been disabled for 2 
invalid unit(s) of work during this build to ensure correctness.
[2021-10-27T22:26:38.303Z] Please consult deprecation warnings for more details.
[2021-10-27T22:26:38.303Z] 
[2021-10-27T22:26:38.303Z] BUILD FAILED in 6m 43s
[2021-10-27T22:26:38.303Z] 228 actionable tasks: 186 executed, 42 up-to-date
[2021-10-27T22:26:38.303Z] 
[2021-10-27T22:26:38.303Z] See the profiling report at: 
file:///home/jenkins/workspace/Kafka_kafka_trunk/build/reports/profile/profile-2021-10-27-22-19-58.html
[2021-10-27T22:26:38.303Z] A fine-grained performance profile is available: use 
the --scan option.
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // timestamps
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Failed in branch JDK 8 and Scala 2.13
[2021-10-27T22:26:54.940Z] [Warn] 
/home/jenkins/workspace/Kafka_kafka_trunk/core/src/test/scala/unit/kafka/log/OffsetIndexTest.scala:52:
 @nowarn annotation does not suppress any warnings
[2021-10-27T22:26:54.940Z] one warning found
[2021-10-27T22:26:58.737Z] 
[2021-10-27T22:26:58.737Z] > Task :core:testClasses
[2021-10-27T22:26:58.737Z] > Task :core:checkstyleMain
[2021-10-27T22:26:58.737Z] > Task :core:checkstyleTest
[2021-10-27T22:26:58.737Z] > Task :storage:compileTestJava
[2021-10-27T22:26:58.737Z] > Task :storage:testClasses
[2021-10-27T22:27:00.522Z] > Task :storage:checkstyleTest
[2021-10-27T22:27:01.453Z] > Task :jmh-benchmarks:compileJava
[2021-10-27T22:27:01.453Z] > Task :jmh-benchmarks:classes
[2021-10-27T22:27:01.453Z] > Task :jmh-benchmarks:compileTestJava NO-SOURCE
[2021-10-27T22:27:01.453Z] > Task :jmh-benchmarks:checkstyleMain
[2021-10-27T22:27:01.453Z] > Task :jmh-benchmarks:testClasses UP-TO-DATE
[2021-10-27T22:27:01.453Z] > Task :jmh-benchmarks:checkstyleTest NO-SOURCE
[2021-10-27T22:27:02.384Z] > Task :connect:runtime:compileTestJava
[2021-10-27T22:27:02.384Z] > Task :connect:runtime:testClasses
[2021-10-27T22:27:03.313Z] > Task :connect:mirror:compileTestJava
[2021-10-27T22:27:03.313Z] > Task 

Re: [DISCUSS] KIP-779: Allow Source Tasks to Handle Producer Exceptions

2021-10-27 Thread John Roesler
Hi Knowles,

Thanks for the reply! That all sounds reasonable to me, and
that's a good catch regarding the SourceRecord.

Thanks,
-John

On Wed, 2021-10-27 at 15:32 -0400, Knowles Atchison Jr
wrote:
> John,
> 
> Thank you for the response and feedback!
> 
> I originally started my first pass with the ProducerRecord<byte[], byte[]>.
> For our connector, we need some of the information out of the SourceRecord
> to ack our source system. If I had the actual ProducerRecord, I would have
> to convert it back before I would be able to do anything useful with it. I
> think there is merit in providing both records as parameters to this
> callback. Then connector writers can decide which of the representations of
> the data is most useful to them. I also noticed that in my PR I was sending
> the SourceRecord post transformation, when we really should be sending the
> preTransformRecord.
> 
> The Streams solution to this is very interesting. Given the nature of a
> connector, to me it makes the most sense for the api call to be part of
> that task rather than an external class that is configurable. This allows
> the connector to use state it may have at the time to inform decisions on
> what to do with these producer exceptions.
> 
> I have updated the KIP and PR.
> 
> Knowles
> 
> On Wed, Oct 27, 2021 at 1:03 PM John Roesler  wrote:
> 
> > Good morning, Knowles,
> > 
> > Thanks for the KIP!
> > 
> > To address your latest questions, it is fine to call for a
> > vote if a KIP doesn't generate much discussion. Either the
> > KIP was just not controversial enough for anyone to comment,
> > in which case a vote is appropriate; or no one had time to
> > review it, in which case, calling for a vote might be more
> > provocative and elicit a response.
> > 
> > As far as pinging people directly, one idea would be to look
> > at the git history (git blame/praise) for the files you're
> > changing to see which committers have recently been
> > involved. Those are the folks who are most likely to have
> > valuable feedback on your proposal. It might not be
> > appropriate to directly email them, but I have seen KIP
> > discussions before that requested feedback from people by
> > name. It's probably not best to lead with that, but since no
> > one has responded so far, it might not hurt. I'm sure that
> > the reason they haven't noticed your KIP is just that they
> > are so busy it slipped their radar. They might actually
> > appreciate a more direct ping at this point.
> > 
> > I'm happy to review, but as a caveat, I don't have much
> > experience with using or maintaining Connect, so caveat
> > emptor as far as my review goes.
> > 
> > First of all, thanks for the well written KIP. Without much
> > context, I was able to understand the motivation and
> > proposal easily just by reading your document.
> > 
> > I think your proposal is a good one. It seems like it would
> > be pretty obvious as a user what (if anything) to do with
> > the proposed method.
> > 
> > For your reference, this proposal reminds me of these
> > capabilities in Streams:
> > 
> > https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/errors/DeserializationExceptionHandler.java
> > and
> > 
> > https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/errors/ProductionExceptionHandler.java
> > .
> > 
> > I'm not sure if there's value in bringing your proposed
> > interface closer to that pattern or not. Streams and Connect
> > are quite different domains after all. At least, I wanted
> > you to be aware of them so you could consider the
> > alternative API strategy they present.
> > 
> > Regardless, I do wonder if it would be helpful to also
> > include the actual ProducerRecord we tried to send, since
> > there's a non-trivial transformation that takes place to
> > convert the SourceRecord into a ProducerRecord. I'm not sure
> > what people would do with it, exactly, but it might be
> > helpful in deciding what to do about the exception, or maybe
> > even in understanding the exception.
> > 
> > Those are the only thoughts that come to my mind! Thanks
> > again,
> > -John
> > 
> > On Wed, 2021-10-27 at 09:16 -0400, Knowles Atchison Jr
> > wrote:
> > > Good morning,
> > > 
> > > Bumping this thread. Is there someone specific on the Connect framework
> > > team that I should ping? Is it appropriate to just call a vote? All
> > source
> > > connectors are dead in the water without a way to handle producer write
> > > exceptions. Thank you.
> > > 
> > > Knowles
> > > 
> > > On Mon, Oct 18, 2021 at 8:33 AM Christopher Shannon <
> > > christopher.l.shan...@gmail.com> wrote:
> > > 
> > > > I also would find this feature useful to handle errors better, does
> > anyone
> > > > have any comments or feedback?
> > > > 
> > > > 
> > > > On Mon, Oct 11, 2021 at 8:52 AM Knowles Atchison Jr <
> > katchiso...@gmail.com
> > > > > 
> > > > wrote:
> > > > 
> > > > > Good morning,
> > > > > 
> > > > > Bumping this for 

Re: [DISCUSS] KIP-786: Emit Metric Client Quota Values

2021-10-27 Thread Mason Legere
Hi All,

Haven't received any feedback on this yet, but as it was a small change I have
made a PR showing the functional components: pull request

Will update the related documentation outlining the new metric attributes
in a bit.

Best,
Mason Legere

On Sat, Oct 23, 2021 at 4:00 PM Mason Legere 
wrote:

> Hi All,
>
> I would like to start a discussion for my proposed KIP-786
> 
>  which
> aims to allow client quota values to be emitted as a standard jmx MBean
> attribute - if enabled in the static broker configuration.
>
> Please note that I originally misnumbered this KIP and am re-creating this
> discussion thread for clarity. The original thread can be found at: Original
> Email Thread
> 
>
> Best,
> Mason Legere
>


Re: Apache Kafka : start up scripts

2021-10-27 Thread Israel Ekpo
Start here

https://github.com/apache/kafka/blob/trunk/bin/kafka-server-start.sh

https://github.com/apache/kafka/tree/trunk/core/src/main/scala/kafka/server

Also take a look at the logs when the server starts up. That should give
you some insights.

On Wed, Oct 27, 2021 at 5:03 PM Kafka Life  wrote:

> Dear Kafka experts
>
> when a broker is started using the start script, could any of you please
> let me know the sequence of steps that happens in the background when the
> node comes up,
>
> like: when the script is initiated to start,
> 1/ is it checking indexes?
> 2/ is it checking ISR?
> 3/ is URP being brought to zero?
>
> I tried to look in the server log but could not understand the sequence of
> events performed till the node was up. Could someone please help?
>
> Thanks
>


[jira] [Created] (KAFKA-13411) Exception while connecting from kafka client consumer producers deployed in a wildfly context to a kafka broker implementing OAUTHBEARER sasl mechanism

2021-10-27 Thread Shankar Bhaskaran (Jira)
Shankar Bhaskaran created KAFKA-13411:
-

 Summary: Exception while connecting from kafka client consumer 
producers deployed in a wildfly context to a kafka broker implementing 
OAUTHBEARER sasl mechanism
 Key: KAFKA-13411
 URL: https://issues.apache.org/jira/browse/KAFKA-13411
 Project: Kafka
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0
 Environment: Windows, Linux , Wildfly Application server
Reporter: Shankar Bhaskaran
 Fix For: 3.0.1


I have set up a Kafka cluster on my Linux machine secured using Keycloak
(OAUTHBEARER mechanism). I can use the Kafka console consumers and
producers to send and receive messages.

 

I have tried to connect to Kafka from my consumers and producers deployed
as modules on the Wildfly app server (version 19, Java 11). I have set up
all the required configuration (config section at the bottom).


The SASL_JAAS_CONFIG provided as a consumer config option has details
like (org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule
required LoginStringClaim_sub='kafka-client');
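For reference, a working client configuration along these lines usually looks like the sketch below (values are assumptions: the security protocol depends on the broker listener, and note that the stock unsecured login module bundled with Kafka spells the option `unsecuredLoginStringClaim_sub`):

```properties
# Sketch of the client-side settings described above -- values illustrative.
# Use SASL_SSL instead of SASL_PLAINTEXT if the listener is TLS-protected.
security.protocol=SASL_PLAINTEXT
sasl.mechanism=OAUTHBEARER
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  unsecuredLoginStringClaim_sub="kafka-client";
```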

 

I am able to get authenticated with the broker, but in the client callback
I am getting an unsupported callback error. I have 3 modules in Wildfly:

1) the Kafka producer/consumer code, dependent on 2) the OAuth jar (for the
login callback handler and login module), dependent on 3) the kafka-clients
jar (2.8.0)

 

I can see that the client callback is ClientCredential instead of
OAuthBearerTokenCallback. The SASL client is getting set to
AbstractSaslClient instead of OAuthBearerSaslClient.

[https://www.mail-archive.com/dev@kafka.apache.org/msg120743.html]

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Apache Kafka : start up scripts

2021-10-27 Thread Kafka Life
Dear Kafka experts

when a broker is started using the start script, could any of you please
let me know the sequence of steps that happens in the background when the
node comes up,

like: when the script is initiated to start,
1/ is it checking indexes?
2/ is it checking ISR?
3/ is URP being brought to zero?

I tried to look in the server log but could not understand the sequence of
events performed till the node was up. Could someone please help?

Thanks


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #542

2021-10-27 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 494959 lines...]
[2021-10-27T19:54:48.778Z] 
[2021-10-27T19:54:48.778Z] AclAuthorizerTest > 
testAuthorizerZkConfigFromPrefixOverrides() STARTED
[2021-10-27T19:54:48.778Z] 
[2021-10-27T19:54:48.778Z] AclAuthorizerTest > 
testAuthorizerZkConfigFromPrefixOverrides() PASSED
[2021-10-27T19:54:48.778Z] 
[2021-10-27T19:54:48.778Z] AclAuthorizerTest > testAclsFilter() STARTED
[2021-10-27T19:54:49.727Z] 
[2021-10-27T19:54:49.727Z] AclAuthorizerTest > testAclsFilter() PASSED
[2021-10-27T19:54:49.727Z] 
[2021-10-27T19:54:49.727Z] AclAuthorizerTest > testAclManagementAPIs() STARTED
[2021-10-27T19:54:49.727Z] 
[2021-10-27T19:54:49.727Z] AclAuthorizerTest > testAclManagementAPIs() PASSED
[2021-10-27T19:54:49.727Z] 
[2021-10-27T19:54:49.727Z] AclAuthorizerTest > testWildCardAcls() STARTED
[2021-10-27T19:54:50.676Z] 
[2021-10-27T19:54:50.676Z] AclAuthorizerTest > testWildCardAcls() PASSED
[2021-10-27T19:54:50.676Z] 
[2021-10-27T19:54:50.676Z] AclAuthorizerTest > testCreateDeleteTiming() STARTED
[2021-10-27T19:54:51.625Z] 
[2021-10-27T19:54:51.625Z] AclAuthorizerTest > testCreateDeleteTiming() PASSED
[2021-10-27T19:54:51.625Z] 
[2021-10-27T19:54:51.625Z] AclAuthorizerTest > 
testAuthorizeByResourceTypeWithAllHostAce() STARTED
[2021-10-27T19:54:52.573Z] 
[2021-10-27T19:54:52.573Z] AclAuthorizerTest > 
testAuthorizeByResourceTypeWithAllHostAce() PASSED
[2021-10-27T19:54:52.573Z] 
[2021-10-27T19:54:52.573Z] AclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2() STARTED
[2021-10-27T19:54:52.573Z] 
[2021-10-27T19:54:52.573Z] AclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2() PASSED
[2021-10-27T19:54:52.573Z] 
[2021-10-27T19:54:52.573Z] AclAuthorizerTest > testTopicAcl() STARTED
[2021-10-27T19:54:53.522Z] 
[2021-10-27T19:54:53.522Z] AclAuthorizerTest > testTopicAcl() PASSED
[2021-10-27T19:54:53.522Z] 
[2021-10-27T19:54:53.522Z] AclAuthorizerTest > testSuperUserHasAccess() STARTED
[2021-10-27T19:54:53.522Z] 
[2021-10-27T19:54:53.522Z] AclAuthorizerTest > testSuperUserHasAccess() PASSED
[2021-10-27T19:54:53.522Z] 
[2021-10-27T19:54:53.522Z] AclAuthorizerTest > 
testDeleteAclOnPrefixedResource() STARTED
[2021-10-27T19:54:54.472Z] 
[2021-10-27T19:54:54.472Z] AclAuthorizerTest > 
testDeleteAclOnPrefixedResource() PASSED
[2021-10-27T19:54:54.472Z] 
[2021-10-27T19:54:54.472Z] AclAuthorizerTest > testDenyTakesPrecedence() STARTED
[2021-10-27T19:54:54.472Z] 
[2021-10-27T19:54:54.472Z] AclAuthorizerTest > testDenyTakesPrecedence() PASSED
[2021-10-27T19:54:54.472Z] 
[2021-10-27T19:54:54.472Z] AclAuthorizerTest > 
testAuthorizeByResourceTypeIsolationUnrelatedDenyWontDominateAllow() STARTED
[2021-10-27T19:54:55.420Z] 
[2021-10-27T19:54:55.420Z] AclAuthorizerTest > 
testAuthorizeByResourceTypeIsolationUnrelatedDenyWontDominateAllow() PASSED
[2021-10-27T19:54:55.420Z] 
[2021-10-27T19:54:55.420Z] AclAuthorizerTest > 
testSingleCharacterResourceAcls() STARTED
[2021-10-27T19:54:55.420Z] 
[2021-10-27T19:54:55.420Z] AclAuthorizerTest > 
testSingleCharacterResourceAcls() PASSED
[2021-10-27T19:54:55.420Z] 
[2021-10-27T19:54:55.420Z] AclAuthorizerTest > testNoAclFoundOverride() STARTED
[2021-10-27T19:54:56.369Z] 
[2021-10-27T19:54:56.369Z] AclAuthorizerTest > testNoAclFoundOverride() PASSED
[2021-10-27T19:54:56.369Z] 
[2021-10-27T19:54:56.369Z] AclAuthorizerTest > 
testAuthorizeByResourceTypeWildcardResourceDenyDominate() STARTED
[2021-10-27T19:54:57.318Z] 
[2021-10-27T19:54:57.318Z] AclAuthorizerTest > 
testAuthorizeByResourceTypeWildcardResourceDenyDominate() PASSED
[2021-10-27T19:54:57.318Z] 
[2021-10-27T19:54:57.318Z] AclAuthorizerTest > testEmptyAclThrowsException() 
STARTED
[2021-10-27T19:54:57.318Z] 
[2021-10-27T19:54:57.318Z] AclAuthorizerTest > testEmptyAclThrowsException() 
PASSED
[2021-10-27T19:54:57.318Z] 
[2021-10-27T19:54:57.318Z] AclAuthorizerTest > 
testAuthorizeByResourceTypeNoAclFoundOverride() STARTED
[2021-10-27T19:54:58.267Z] 
[2021-10-27T19:54:58.267Z] AclAuthorizerTest > 
testAuthorizeByResourceTypeNoAclFoundOverride() PASSED
[2021-10-27T19:54:58.267Z] 
[2021-10-27T19:54:58.267Z] AclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess() STARTED
[2021-10-27T19:54:58.267Z] 
[2021-10-27T19:54:58.267Z] AclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess() PASSED
[2021-10-27T19:54:58.267Z] 
[2021-10-27T19:54:58.267Z] AclAuthorizerTest > 
testAuthorizeByResourceTypeWithAllOperationAce() STARTED
[2021-10-27T19:54:59.216Z] 
[2021-10-27T19:54:59.216Z] AclAuthorizerTest > 
testAuthorizeByResourceTypeWithAllOperationAce() PASSED
[2021-10-27T19:54:59.216Z] 
[2021-10-27T19:54:59.216Z] AclAuthorizerTest > 
testAllowAccessWithCustomPrincipal() STARTED
[2021-10-27T19:54:59.216Z] 
[2021-10-27T19:54:59.216Z] AclAuthorizerTest > 
testAllowAccessWithCustomPrincipal() PASSED
[2021-10-27T19:54:59.216Z] 

Re: [DISCUSS] KIP-779: Allow Source Tasks to Handle Producer Exceptions

2021-10-27 Thread Knowles Atchison Jr
John,

Thank you for the response and feedback!

I originally started my first pass with the ProducerRecord<byte[], byte[]>.
For our connector, we need some of the information out of the SourceRecord
to ack our source system. If I had the actual ProducerRecord, I would have
to convert it back before I would be able to do anything useful with it. I
think there is merit in providing both records as parameters to this
callback. Then connector writers can decide which of the representations of
the data is most useful to them. I also noticed that in my PR I was sending
the SourceRecord post transformation, when we really should be sending the
preTransformRecord.

The Streams solution to this is very interesting. Given the nature of a
connector, to me it makes the most sense for the API call to be part of
that task rather than an external class that is configurable. This allows
the connector to use state it may have at the time to inform decisions on
what to do with these producer exceptions.

I have updated the KIP and PR.

Knowles
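[Editor's note: to make the shape under discussion concrete, here is a rough sketch of a task-level hook that receives both record representations. Every name below (the hook method, ErrorAction, the stand-in record types) is illustrative, not the actual KIP-779 API.]

```java
import java.util.Map;

// Illustrative sketch only: the real proposal hangs off
// org.apache.kafka.connect.source.SourceTask and uses Connect's
// SourceRecord plus the client ProducerRecord; stand-in types are
// defined here so the example is self-contained.
public class ProducerErrorSketch {

    static class SourceRecord {                 // stand-in for Connect's SourceRecord
        final Map<String, ?> sourceOffset;
        final Object value;
        SourceRecord(Map<String, ?> sourceOffset, Object value) {
            this.sourceOffset = sourceOffset;
            this.value = value;
        }
    }

    static class ProducerRecord {               // stand-in for the client ProducerRecord
        final String topic;
        final Object value;
        ProducerRecord(String topic, Object value) {
            this.topic = topic;
            this.value = value;
        }
    }

    enum ErrorAction { FAIL, IGNORE }

    static class MySourceTask {
        // Hypothetical hook: receives the pre-transform SourceRecord (so the
        // task can ack its source system) and the ProducerRecord that failed.
        ErrorAction onSendFailure(SourceRecord preTransformRecord,
                                  ProducerRecord producerRecord,
                                  Exception error) {
            System.err.println("Skipping record for " + producerRecord.topic
                    + " at offset " + preTransformRecord.sourceOffset
                    + ": " + error.getMessage());
            return ErrorAction.IGNORE;
        }
    }
}
```

The point of passing both representations is visible here: the pre-transform SourceRecord carries the source partition/offset needed to acknowledge the upstream system, while the ProducerRecord shows exactly what Kafka rejected.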

On Wed, Oct 27, 2021 at 1:03 PM John Roesler  wrote:

> Good morning, Knowles,
>
> Thanks for the KIP!
>
> To address your latest questions, it is fine to call for a
> vote if a KIP doesn't generate much discussion. Either the
> KIP was just not controversial enough for anyone to comment,
> in which case a vote is appropriate; or no one had time to
> review it, in which case, calling for a vote might be more
> provocative and elicit a response.
>
> As far as pinging people directly, one idea would be to look
> at the git history (git blame/praise) for the files you're
> changing to see which committers have recently been
> involved. Those are the folks who are most likely to have
> valuable feedback on your proposal. It might not be
> appropriate to directly email them, but I have seen KIP
> discussions before that requested feedback from people by
> name. It's probably not best to lead with that, but since no
> one has responded so far, it might not hurt. I'm sure that
> the reason they haven't noticed your KIP is just that they
> are so busy it slipped their radar. They might actually
> appreciate a more direct ping at this point.
>
> I'm happy to review, but as a caveat, I don't have much
> experience with using or maintaining Connect, so caveat
> emptor as far as my review goes.
>
> First of all, thanks for the well written KIP. Without much
> context, I was able to understand the motivation and
> proposal easily just by reading your document.
>
> I think your proposal is a good one. It seems like it would
> be pretty obvious as a user what (if anything) to do with
> the proposed method.
>
> For your reference, this proposal reminds me of these
> capabilities in Streams:
>
> https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/errors/DeserializationExceptionHandler.java
> and
>
> https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/errors/ProductionExceptionHandler.java
> .
>
> I'm not sure if there's value in bringing your proposed
> interface closer to that pattern or not. Streams and Connect
> are quite different domains after all. At least, I wanted
> you to be aware of them so you could consider the
> alternative API strategy they present.
>
> Regardless, I do wonder if it would be helpful to also
> include the actual ProducerRecord we tried to send, since
> there's a non-trivial transformation that takes place to
> convert the SourceRecord into a ProducerRecord. I'm not sure
> what people would do with it, exactly, but it might be
> helpful in deciding what to do about the exception, or maybe
> even in understanding the exception.
>
> Those are the only thoughts that come to my mind! Thanks
> again,
> -John
>
> On Wed, 2021-10-27 at 09:16 -0400, Knowles Atchison Jr
> wrote:
> > Good morning,
> >
> > Bumping this thread. Is there someone specific on the Connect framework
> > team that I should ping? Is it appropriate to just call a vote? All
> source
> > connectors are dead in the water without a way to handle producer write
> > exceptions. Thank you.
> >
> > Knowles
> >
> > On Mon, Oct 18, 2021 at 8:33 AM Christopher Shannon <
> > christopher.l.shan...@gmail.com> wrote:
> >
> > > I also would find this feature useful to handle errors better, does
> anyone
> > > have any comments or feedback?
> > >
> > >
> > > On Mon, Oct 11, 2021 at 8:52 AM Knowles Atchison Jr <
> katchiso...@gmail.com
> > > >
> > > wrote:
> > >
> > > > Good morning,
> > > >
> > > > Bumping this for visibility. I would like this to go into the next
> > > release.
> > > > KIP freeze is Friday.
> > > >
> > > > Any comments and feedback are welcome.
> > > >
> > > > Knowles
> > > >
> > > > On Tue, Oct 5, 2021 at 4:24 PM Knowles Atchison Jr <
> > > katchiso...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hello all,
> > > > >
> > > > > I would like to discuss the following KIP:
> > > > >
> > > > >
> > > > >
> > > >
> > >
> 

[jira] [Created] (KAFKA-13410) KRaft Upgrades

2021-10-27 Thread David Arthur (Jira)
David Arthur created KAFKA-13410:


 Summary: KRaft Upgrades
 Key: KAFKA-13410
 URL: https://issues.apache.org/jira/browse/KAFKA-13410
 Project: Kafka
  Issue Type: New Feature
Reporter: David Arthur
Assignee: David Arthur


This is the placeholder JIRA for KIP-778



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-779: Allow Source Tasks to Handle Producer Exceptions

2021-10-27 Thread John Roesler
Good morning, Knowles,

Thanks for the KIP!

To address your latest questions, it is fine to call for a
vote if a KIP doesn't generate much discussion. Either the
KIP was just not controversial enough for anyone to comment,
in which case a vote is appropriate; or no one had time to
review it, in which case, calling for a vote might be more
provocative and elicit a response.

As far as pinging people directly, one idea would be to look
at the git history (git blame/praise) for the files you're
changing to see which committers have recently been
involved. Those are the folks who are most likely to have
valuable feedback on your proposal. It might not be
appropriate to directly email them, but I have seen KIP
discussions before that requested feedback from people by
name. It's probably not best to lead with that, but since no
one has responded so far, it might not hurt. I'm sure that
the reason they haven't noticed your KIP is just that they
are so busy it slipped their radar. They might actually
appreciate a more direct ping at this point.

I'm happy to review, but as a caveat, I don't have much
experience with using or maintaining Connect, so caveat
emptor as far as my review goes.

First of all, thanks for the well written KIP. Without much
context, I was able to understand the motivation and
proposal easily just by reading your document.

I think your proposal is a good one. It seems like it would
be pretty obvious as a user what (if anything) to do with
the proposed method.

For your reference, this proposal reminds me of these
capabilities in Streams:
https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/errors/DeserializationExceptionHandler.java
and
https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/errors/ProductionExceptionHandler.java
.

I'm not sure if there's value in bringing your proposed
interface closer to that pattern or not. Streams and Connect
are quite different domains after all. At least, I wanted
you to be aware of them so you could consider the
alternative API strategy they present.

Regardless, I do wonder if it would be helpful to also
include the actual ProducerRecord we tried to send, since
there's a non-trivial transformation that takes place to
convert the SourceRecord into a ProducerRecord. I'm not sure
what people would do with it, exactly, but it might be
helpful in deciding what to do about the exception, or maybe
even in understanding the exception.

Those are the only thoughts that come to my mind! Thanks
again,
-John
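[Editor's note: for comparison, the Streams pattern John links to configures a standalone handler class (via `default.production.exception.handler`) rather than a method on the task. Below is a minimal mirror of that shape, using a local interface so the snippet stands alone; the real interface is `org.apache.kafka.streams.errors.ProductionExceptionHandler`.]

```java
import java.util.Map;

public class HandlerPatternSketch {

    enum Response { CONTINUE, FAIL }

    // Local mirror of the Streams handler interface, shown only to
    // illustrate the "configurable external class" style.
    interface ProductionExceptionHandler {
        Response handle(Object record, Exception exception);
        default void configure(Map<String, ?> configs) {}
    }

    // A user supplies a class like this by name in the configuration;
    // unlike a method on the task, it has no access to task state.
    static class LogAndContinueHandler implements ProductionExceptionHandler {
        @Override
        public Response handle(Object record, Exception exception) {
            System.err.println("Send failed, continuing: " + exception.getMessage());
            return Response.CONTINUE;
        }
    }
}
```

The trade-off discussed in this thread is exactly this: a configured class is reusable across applications, while a hook on the task itself can consult task state when deciding what to do.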

On Wed, 2021-10-27 at 09:16 -0400, Knowles Atchison Jr
wrote:
> Good morning,
> 
> Bumping this thread. Is there someone specific on the Connect framework
> team that I should ping? Is it appropriate to just call a vote? All source
> connectors are dead in the water without a way to handle producer write
> exceptions. Thank you.
> 
> Knowles
> 
> On Mon, Oct 18, 2021 at 8:33 AM Christopher Shannon <
> christopher.l.shan...@gmail.com> wrote:
> 
> > I also would find this feature useful to handle errors better, does anyone
> > have any comments or feedback?
> > 
> > 
> > On Mon, Oct 11, 2021 at 8:52 AM Knowles Atchison Jr  > > 
> > wrote:
> > 
> > > Good morning,
> > > 
> > > Bumping this for visibility. I would like this to go into the next
> > release.
> > > KIP freeze is Friday.
> > > 
> > > Any comments and feedback are welcome.
> > > 
> > > Knowles
> > > 
> > > On Tue, Oct 5, 2021 at 4:24 PM Knowles Atchison Jr <
> > katchiso...@gmail.com>
> > > wrote:
> > > 
> > > > Hello all,
> > > > 
> > > > I would like to discuss the following KIP:
> > > > 
> > > > 
> > > > 
> > > 
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-779%3A+Allow+Source+Tasks+to+Handle+Producer+Exceptions
> > > > 
> > > > The main purpose is to allow Source Tasks the ability to see underlying
> > > > Producer Exceptions and decide what to do rather than being killed. In
> > > our
> > > > use cases we would want to log/write off some information and continue
> > > > processing.
> > > > 
> > > > PR is here:
> > > > 
> > > > https://github.com/apache/kafka/pull/11382
> > > > 
> > > > Any comments and feedback are welcome.
> > > > 
> > > > 
> > > > Knowles
> > > > 
> > > 
> > 




Re: Issue with Kafka consumers and producers on wildfly and SASL

2021-10-27 Thread Shankar Bhaskaran
Hi ,

I have a fix for this issue. How should I submit a patch?

Regards,
Shankar

On Mon, Aug 30, 2021 at 3:40 AM Shankar Bhaskaran 
wrote:

> Hi ,
>
>
>
> I have set up a Kafka cluster on my Linux machine secured using the Keycloak
> (OAUTHBEARER) mechanism. I can use the Kafka console consumers and
> producers to send and receive messages.
>
>
>
> I have tried to connect to Kafka from my consumers and producers deployed
> as modules on the WildFly app server (version 19, Java 11). I have set up
> all the required configuration (config section at the bottom).
>
>
> The SASL_JAAS_CONFIG provided as a consumer config option has details
> like (org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule
> required LoginStringClaim_sub='kafka-client');
>
>
>
> I am able to get authenticated with the broker, but in the client
> callback I am getting an Unsupported Callback error. I have 3 modules in
> WildFly:
>
> 1) the Kafka producer/consumer code, dependent on 2) the OAuth jar (for
> the login callback handler and login module), dependent on 3) the
> kafka-client jar (2.8.0)
>
>
>
> I can see that the client callback is CLIENTCREDENTIAL instead of
> OAuthBearerTokenCallback. The SASL client is getting set as
> AbstractSaslClient instead of OAuthBearerSaslClient.
>
>
>
> Can I get any pointers on this one ?
>
>
>
> LOGS
>
>
>
> org.apache.kafka.common.errors.SaslAuthenticationException: An error:
> (java.security.PrivilegedActionException:
> javax.security.sasl.SaslException: ELY05176: Unsupported callback [Caused
> by javax.security.auth.callback.UnsupportedCallbackException]) occurred
> when evaluating SASL token received from the Kafka Broker. Kafka Client
> will go to AUTHENTICATION_FAILED state.
>
> Caused by: javax.security.sasl.SaslException: ELY05176: Unsupported
> callback [Caused by
> javax.security.auth.callback.UnsupportedCallbackException]
>
> at
> org.wildfly.security.elytron-private@1.11.4.Final//org.wildfly.security.mechanism.oauth2.OAuth2Client.getInitialResponse(OAuth2Client.java:58)
>
> at
> org.wildfly.security.elytron-private@1.11.4.Final//org.wildfly.security.sasl.oauth2.OAuth2SaslClient.evaluateMessage(OAuth2SaslClient.java:62)
>
> at
> org.wildfly.security.elytron-private@1.11.4.Final//org.wildfly.security.sasl.util.AbstractSaslParticipant.evaluateMessage(AbstractSaslParticipant.java:219)
>
> at
> org.wildfly.security.elytron-private@1.11.4.Final//org.wildfly.security.sasl.util.AbstractSaslClient.evaluateChallenge(AbstractSaslClient.java:98)
>
> at
> org.apache.kafka.clients@1.1.8.1//org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.lambda$createSaslToken$1(SaslClientAuthenticator.java:534)
>
> at
> java.base/java.security.AccessController.doPrivileged(Native Method)
>
> at
> java.base/javax.security.auth.Subject.doAs(Subject.java:423)
>
> at
> org.apache.kafka.clients@1.1.8.1//org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.createSaslToken(SaslClientAuthenticator.java:534)
>
> at
> org.apache.kafka.clients@1.1.8.1//org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.sendSaslClientToken(SaslClientAuthenticator.java:433)
>
> at
> org.apache.kafka.clients@1.1.8.1//org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.sendInitialToken(SaslClientAuthenticator.java:332)
>
> at
> org.apache.kafka.clients@1.1.8.1//org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.authenticate(SaslClientAuthenticator.java:273)
>
> at
> org.apache.kafka.clients@1.1.8.1//org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:181)
>
> at
> org.apache.kafka.clients@1.1.8.1//org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:543)
>
> at
> org.apache.kafka.clients@1.1.8.1//org.apache.kafka.common.network.Selector.poll(Selector.java:481)
>
> at
> org.apache.kafka.clients@1.1.8.1//org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:561)
>
> at
> org.apache.kafka.clients@1.1.8.1//org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:265)
>
> at
> org.apache.kafka.clients@1.1.8.1//org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
>
> at
> org.apache.kafka.clients@1.1.8.1//org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:215)
>
> at
> org.apache.kafka.clients@1.1.8.1//org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:245)
>
> at
> 
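[Editor's note: the stack trace above shows WildFly Elytron's OAuth2SaslClient, not Kafka's OAuthBearerSaslClient, evaluating the challenge — i.e. the OAUTHBEARER mechanism is being resolved from Elytron's SASL provider instead of the Kafka client's. For reference, a plain Kafka client outside WildFly would typically be configured along these lines; the callback handler class name is a placeholder for whatever retrieves tokens from Keycloak:]

```properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=OAUTHBEARER
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule \
  required unsecuredLoginStringClaim_sub="kafka-client";
# Placeholder: a custom handler that fetches real tokens from Keycloak
sasl.login.callback.handler.class=com.example.KeycloakLoginCallbackHandler
```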

Re: [DISCUSS] KIP-787 - MM2 Interface to manage Kafka resources

2021-10-27 Thread Ryanne Dolan
Well I'm convinced! Thanks for looking into it.

Ryanne

On Wed, Oct 27, 2021, 8:49 AM Omnia Ibrahim  wrote:

> I checked the difference between the number of methods in the Admin
> interface and the number of methods MM2 invokes from Admin, and this diff
> is enormous (it's more than 70 methods).
> As far as I can see, MM2 depends on the following methods (in
> MirrorSourceConnector, MirrorMaker, MirrorCheckpointTask and
> MirrorCheckpointConnector); this will leave 73 methods on the Admin
> interface that customers will need to return dummy data for:
>
>1. create(conf)
>2. close
>3. listTopics()
>4. createTopics(newTopics, createTopicsOptions)
>5. createPartitions(newPartitions)
>6. alterConfigs(configs)  // this method is marked for deprecation in
>Admin and the ConfigResource MM2 use is only TOPIC
>7. createAcls(aclBindings) // the list of bindings always filtered by
>TOPIC
>8. describeAcls(aclBindingFilter) // filter is always ANY_TOPIC_ACL
>9. describeConfigs(configResources) // Always for TOPIC resources
>10. listConsumerGroupOffsets(groupId)
>11. listConsumerGroups()
>12. alterConsumerGroupOffsets(groupId, offsets)
>13. describeCluster() // this is invoked from
> ConnectUtils.lookupKafkaClusterId(conf),
>    but MM2 isn't the one that initializes the AdminClient
>
> Going with the Admin interface in practice will make any custom Admin
> implementation humongous, even for a fringe use case, because of the number
> of methods that need to return dummy data.
>
> I am still leaning toward a new interface, as it abstracts all of MM2's
> interactions with Kafka resources in one place; this makes it easier to
> maintain, and easier for the use cases where customers wish to
> provide a different method to handle resources.
>
> Omnia
>
> On Tue, Oct 26, 2021 at 4:10 PM Ryanne Dolan 
> wrote:
>
> > I like the idea of failing-fast whenever a custom impl is provided, but I
> > suppose that that could be done for Admin as well. I agree your proposal
> is
> > more ergonomic, but maybe it's okay to have a little friction in such
> > fringe use-cases.
> >
> > Ryanne
> >
> >
> > On Tue, Oct 26, 2021, 6:23 AM Omnia Ibrahim 
> > wrote:
> >
> > > Hey Ryanne, Thanks for the quick feedback.
> > > Using the Admin interface would make everything easier, as MM2 will
> need
> > > only to configure the classpath for the new implementation and use it
> > > instead of AdminClient.
> > > However, I have two concerns
> > > 1. The Admin interface is enormous, and MM2 users will need to know the
> > > list of methods MM2 depends on and override only these, instead of
> > > implementing the whole Admin interface.
> > > 2. MM2 users will need to keep an eye on any changes to the Admin
> > > interface that impact MM2, for example deprecated methods.
> > > I am not sure whether putting these concerns on users is acceptable.
> > > One solution could be adding some checks to make sure the methods MM2
> > > uses from the Admin interface exist, so it fails faster.
> > > What do you think?
> > >
> > > Omnia
> > >
> > >
> > > On Mon, Oct 25, 2021 at 11:24 PM Ryanne Dolan 
> > > wrote:
> > >
> > > > Thanks Omnia, neat idea. I wonder if we could use the existing Admin
> > > > interface instead of defining a new one?
> > > >
> > > > Ryanne
> > > >
> > > > On Mon, Oct 25, 2021, 12:54 PM Omnia Ibrahim <
> o.g.h.ibra...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hey everyone,
> > > > > Please take a look at KIP-787
> > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-787%3A+MM2+Interface+to+manage+Kafka+resources
> > > > > <
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-787%3A+MM2+Interface+to+manage+Kafka+resources
> > > > > >
> > > > >
> > > > > Thanks for the feedback and support
> > > > > Omnia
> > > > >
> > > >
> > >
> >
>


[jira] [Created] (KAFKA-13409) JUnit test runs often end with "non-zero exit value 1"

2021-10-27 Thread Colin McCabe (Jira)
Colin McCabe created KAFKA-13409:


 Summary: JUnit test runs often end with "non-zero exit value 1"
 Key: KAFKA-13409
 URL: https://issues.apache.org/jira/browse/KAFKA-13409
 Project: Kafka
  Issue Type: Bug
Reporter: Colin McCabe
Assignee: Colin McCabe


JUnit test runs often end with "non-zero exit value 1". Here is an example 
message:
{code}
[2021-10-27T00:04:23.451Z] > Process 'Gradle Test Executor 130' finished with 
non-zero exit value 1
[2021-10-27T00:04:23.451Z]   This problem might be caused by incorrect test 
process configuration.
[2021-10-27T00:04:23.451Z]   Please refer to the test execution section in the 
User Manual at 
https://docs.gradle.org/7.2/userguide/java_testing.html#sec:test_execution
{code}

This ends the whole test run, not just the failing test. We should be more 
aggressive about preventing tests from calling "exit" to avoid this problem.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-787 - MM2 Interface to manage Kafka resources

2021-10-27 Thread Omnia Ibrahim
I checked the difference between the number of methods in the Admin
interface and the number of methods MM2 invokes from Admin, and this diff
is enormous (it's more than 70 methods).
As far as I can see, MM2 depends on the following methods (in
MirrorSourceConnector, MirrorMaker, MirrorCheckpointTask and
MirrorCheckpointConnector); this will leave 73 methods on the Admin
interface that customers will need to return dummy data for:

   1. create(conf)
   2. close
   3. listTopics()
   4. createTopics(newTopics, createTopicsOptions)
   5. createPartitions(newPartitions)
   6. alterConfigs(configs)  // this method is marked for deprecation in
   Admin and the ConfigResource MM2 use is only TOPIC
   7. createAcls(aclBindings) // the list of bindings always filtered by
   TOPIC
   8. describeAcls(aclBindingFilter) // filter is always ANY_TOPIC_ACL
   9. describeConfigs(configResources) // Always for TOPIC resources
   10. listConsumerGroupOffsets(groupId)
   11. listConsumerGroups()
   12. alterConsumerGroupOffsets(groupId, offsets)
   13. describeCluster() // this is invoked from
ConnectUtils.lookupKafkaClusterId(conf),
   but MM2 isn't the one that initializes the AdminClient

Going with the Admin interface in practice will make any custom Admin
implementation humongous, even for a fringe use case, because of the number
of methods that need to return dummy data.

I am still leaning toward a new interface, as it abstracts all of MM2's
interactions with Kafka resources in one place; this makes it easier to
maintain, and easier for the use cases where customers wish to
provide a different method to handle resources.

Omnia
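[Editor's note: to make the trade-off concrete, here is a sketch of roughly what a narrowed interface could look like, next to a small custom implementation. All names and signatures are illustrative stand-ins; the actual KIP-787 proposal uses the Admin types such as NewTopic and AclBinding.]

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class Kip787Sketch {

    // The narrow surface MM2 actually needs (illustrative subset and types).
    interface MirrorAdmin extends AutoCloseable {
        Set<String> listTopics();
        void createTopics(Collection<String> topics);
        Map<String, Long> listConsumerGroupOffsets(String groupId);
        void alterConsumerGroupOffsets(String groupId, Map<String, Long> offsets);
        @Override default void close() {}
    }

    // A custom implementation stays small: no 70+ dummy methods required.
    static class InMemoryMirrorAdmin implements MirrorAdmin {
        final Set<String> topics = new HashSet<>();
        final Map<String, Map<String, Long>> offsets = new HashMap<>();

        public Set<String> listTopics() { return topics; }
        public void createTopics(Collection<String> t) { topics.addAll(t); }
        public Map<String, Long> listConsumerGroupOffsets(String groupId) {
            return offsets.getOrDefault(groupId, Map.of());
        }
        public void alterConsumerGroupOffsets(String groupId, Map<String, Long> o) {
            offsets.put(groupId, new HashMap<>(o));
        }
    }
}
```

The contrast with implementing the full Admin interface is the point: a fringe-use-case implementation covers only the dozen-or-so methods MM2 calls, instead of stubbing out the entire Admin surface.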

On Tue, Oct 26, 2021 at 4:10 PM Ryanne Dolan  wrote:

> I like the idea of failing-fast whenever a custom impl is provided, but I
> suppose that that could be done for Admin as well. I agree your proposal is
> more ergonomic, but maybe it's okay to have a little friction in such
> fringe use-cases.
>
> Ryanne
>
>
> On Tue, Oct 26, 2021, 6:23 AM Omnia Ibrahim 
> wrote:
>
> > Hey Ryanne, Thanks for the quick feedback.
> > Using the Admin interface would make everything easier, as MM2 will need
> > only to configure the classpath for the new implementation and use it
> > instead of AdminClient.
> > However, I have two concerns
> > 1. The Admin interface is enormous, and MM2 users will need to know the
> > list of methods MM2 depends on and override only these, instead of
> > implementing the whole Admin interface.
> > 2. MM2 users will need to keep an eye on any changes to the Admin
> > interface that impact MM2, for example deprecated methods.
> > I am not sure whether putting these concerns on users is acceptable.
> > One solution could be adding some checks to make sure the methods MM2
> > uses from the Admin interface exist, so it fails faster.
> > What do you think?
> >
> > Omnia
> >
> >
> > On Mon, Oct 25, 2021 at 11:24 PM Ryanne Dolan 
> > wrote:
> >
> > > Thanks Omnia, neat idea. I wonder if we could use the existing Admin
> > > interface instead of defining a new one?
> > >
> > > Ryanne
> > >
> > > On Mon, Oct 25, 2021, 12:54 PM Omnia Ibrahim 
> > > wrote:
> > >
> > > > Hey everyone,
> > > > Please take a look at KIP-787
> > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-787%3A+MM2+Interface+to+manage+Kafka+resources
> > > > <
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-787%3A+MM2+Interface+to+manage+Kafka+resources
> > > > >
> > > >
> > > > Thanks for the feedback and support
> > > > Omnia
> > > >
> > >
> >
>


Re: [VOTE] KIP-784: Add top-level error code field to DescribeLogDirsResponse

2021-10-27 Thread Luke Chen
Hi Mickael,
Thanks for the KIP.
It's good to keep it consistent with the others and have a top-level error field.

+1 (non-binding)

Thank you.
Luke

On Wed, Oct 27, 2021 at 9:01 PM Mickael Maison 
wrote:

> Hi all,
>
> I'd like to start the vote on this minor KIP.
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-784%3A+Add+top-level+error+code+field+to+DescribeLogDirsResponse
>
> Please take a look, vote or let me know if you have any feedback.
>
> Thanks
>


Re: [DISCUSS] KIP-779: Allow Source Tasks to Handle Producer Exceptions

2021-10-27 Thread Knowles Atchison Jr
Good morning,

Bumping this thread. Is there someone specific on the Connect framework
team that I should ping? Is it appropriate to just call a vote? All source
connectors are dead in the water without a way to handle producer write
exceptions. Thank you.

Knowles

On Mon, Oct 18, 2021 at 8:33 AM Christopher Shannon <
christopher.l.shan...@gmail.com> wrote:

> I also would find this feature useful to handle errors better, does anyone
> have any comments or feedback?
>
>
> On Mon, Oct 11, 2021 at 8:52 AM Knowles Atchison Jr  >
> wrote:
>
> > Good morning,
> >
> > Bumping this for visibility. I would like this to go into the next
> release.
> > KIP freeze is Friday.
> >
> > Any comments and feedback are welcome.
> >
> > Knowles
> >
> > On Tue, Oct 5, 2021 at 4:24 PM Knowles Atchison Jr <
> katchiso...@gmail.com>
> > wrote:
> >
> > > Hello all,
> > >
> > > I would like to discuss the following KIP:
> > >
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-779%3A+Allow+Source+Tasks+to+Handle+Producer+Exceptions
> > >
> > > The main purpose is to allow Source Tasks the ability to see underlying
> > > Producer Exceptions and decide what to do rather than being killed. In
> > our
> > > use cases we would want to log/write off some information and continue
> > > processing.
> > >
> > > PR is here:
> > >
> > > https://github.com/apache/kafka/pull/11382
> > >
> > > Any comments and feedback are welcome.
> > >
> > >
> > > Knowles
> > >
> >
>


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #541

2021-10-27 Thread Apache Jenkins Server
See 




[VOTE] KIP-784: Add top-level error code field to DescribeLogDirsResponse

2021-10-27 Thread Mickael Maison
Hi all,

I'd like to start the vote on this minor KIP.
https://cwiki.apache.org/confluence/display/KAFKA/KIP-784%3A+Add+top-level+error+code+field+to+DescribeLogDirsResponse

Please take a look, vote or let me know if you have any feedback.

Thanks
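[Editor's note: for context, Kafka protocol schemas are JSON documents, and this KIP amounts to adding one top-level field next to the per-log-dir results. The shape is roughly as below; the field list is abbreviated, and the version number is illustrative:]

```json
{
  "name": "DescribeLogDirsResponse",
  "type": "response",
  "fields": [
    { "name": "ThrottleTimeMs", "type": "int32", "versions": "0+" },
    { "name": "ErrorCode", "type": "int16", "versions": "3+",
      "about": "The top-level error code, or 0 if there was no error." },
    { "name": "Results", "type": "[]DescribeLogDirsResult", "versions": "0+" }
  ]
}
```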


Re: [VOTE] KIP-780: Support fine-grained compression options

2021-10-27 Thread Luke Chen
Hi Dongjin,
Thanks for the KIP.
+1 (non-binding)

Luke

On Wed, Oct 27, 2021 at 8:44 PM Dongjin Lee  wrote:

> Bumping up the voting thread.
>
> If you have any questions or opinions, don't hesitate to leave them in the
> discussion thread.
>
> Best,
> Dongjin
>
> On Thu, Oct 14, 2021 at 3:02 AM Dongjin Lee  wrote:
>
> > Hi, Kafka dev,
> >
> > I'd like to open a vote for KIP-780: Support fine-grained compression
> > options:
> >
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-780%3A+Support+fine-grained+compression+options
> >
> > Please note that this feature mutually complements KIP-390: Support
> > Compression Level (accepted, targeted to 3.1.0.). It was initially
> planned
> > as part of KIP-390 but was spun off due to performance concerns.
> >
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-390%3A+Support+Compression+Level
> >
> > Best,
> > Dongjin
> >
> > --
> > *Dongjin Lee*
> >
> > *A hitchhiker in the mathematical world.*
> >
> >
> >
> > *github:  github.com/dongjinleekr
> > keybase:
> https://keybase.io/dongjinleekr
> > linkedin:
> kr.linkedin.com/in/dongjinleekr
> > speakerdeck:
> speakerdeck.com/dongjin
> > *
> >
>
>
> --
> *Dongjin Lee*
>
> *A hitchhiker in the mathematical world.*
>
>
>
> *github:  github.com/dongjinleekr
> keybase: https://keybase.io/dongjinleekr
> linkedin: kr.linkedin.com/in/dongjinleekr
> speakerdeck:
> speakerdeck.com/dongjin
> *
>


Re: [VOTE] KIP-780: Support fine-grained compression options

2021-10-27 Thread Dongjin Lee
Bumping up the voting thread.

If you have any questions or opinions, don't hesitate to leave them in the
discussion thread.

Best,
Dongjin

On Thu, Oct 14, 2021 at 3:02 AM Dongjin Lee  wrote:

> Hi, Kafka dev,
>
> I'd like to open a vote for KIP-780: Support fine-grained compression
> options:
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-780%3A+Support+fine-grained+compression+options
>
> Please note that this feature mutually complements KIP-390: Support
> Compression Level (accepted, targeted to 3.1.0.). It was initially planned
> as part of KIP-390 but was spun off due to performance concerns.
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-390%3A+Support+Compression+Level
>
> Best,
> Dongjin
>
> --
> *Dongjin Lee*
>
> *A hitchhiker in the mathematical world.*
>
>
>
> *github:  github.com/dongjinleekr
> keybase: https://keybase.io/dongjinleekr
> linkedin: kr.linkedin.com/in/dongjinleekr
> speakerdeck: speakerdeck.com/dongjin
> *
>


-- 
*Dongjin Lee*

*A hitchhiker in the mathematical world.*



*github:  github.com/dongjinleekr
keybase: https://keybase.io/dongjinleekr
linkedin: kr.linkedin.com/in/dongjinleekr
speakerdeck: speakerdeck.com/dongjin
*
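[Editor's note: for readers new to this pair of KIPs, KIP-390 proposes a single `compression.level` knob for whichever codec is selected, and KIP-780 extends this to per-codec options. Illustratively — the fine-grained option name below is a hypothetical placeholder; see the KIP pages for the actual proposal:]

```properties
compression.type=zstd
# KIP-390: one level knob for the selected codec (proposal)
compression.level=3
# KIP-780: codec-specific tuning (hypothetical placeholder name)
compression.zstd.window=27
```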


Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #540

2021-10-27 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-13408) Add a new metric to track invalid task provided offset

2021-10-27 Thread Nicolas Guyomar (Jira)
Nicolas Guyomar created KAFKA-13408:
---

 Summary: Add a new metric to track invalid task provided offset
 Key: KAFKA-13408
 URL: https://issues.apache.org/jira/browse/KAFKA-13408
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Nicolas Guyomar


Hi team,

Whenever a task provides an invalid offset back to the framework, we log a
warning message, but we do not yet track a dedicated metric for those.

In some situations, where the sink connector task implements the precommit
method and has a bug in the way it tracks assigned topic/partitions, this can
lead to a task not reporting offsets at all. No offset is then tracked on the
consumer group in Kafka, and this can go undetected if we rely on the existing
JMX beans for offset commits.

This improvement Jira is to add a dedicated JMX metric, so we can alert if
such invalid offsets are provided too often or for too long.

 

Thank you



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-13407) Kafka controller out of service after ZK leader restart

2021-10-27 Thread Daniel (Jira)
Daniel created KAFKA-13407:
--

 Summary: Kafka controller out of service after ZK leader restart
 Key: KAFKA-13407
 URL: https://issues.apache.org/jira/browse/KAFKA-13407
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.8.1, 2.8.0
 Environment: Ubuntu 20.04
Reporter: Daniel


When the Zookeeper leader disappears and a new instance becomes the leader,
the Kafka instances need to reconnect to Zookeeper, but the Kafka "Controller"
gets stuck in a limbo state after re-establishing the connection.

See below for how I manage to reproduce this over and over.

*Prerequisites*

Have a Kafka cluster with 3 instances running version 2.8.1. Figure out which 
one is the Controller. I'm using Kafkacat 1.5.0 and get this info using the 
`-L` flag.

Zookeeper runs with 3 instances on version 3.5.9. Figure out which one is the
leader by checking:

 
{code:java}
echo stat | nc -v localhost 2181
{code}
 

 

*Reproduce*

1. Stop the leader Zookeeper service.

2. Watch the logs of the Kafka Controller and ensure that it reconnects and 
registers again.

 
{code:java}
Oct 27 09:13:08 ip-10-10-85-215 kafka[62961]: [2021-10-27 09:13:08,882] INFO 
Unable to read additional data from server sessionid 0x1f2a12870003, likely 
server has closed socket, closing socket connection and attempting reconnect 
(org.apache.zookeeper.ClientCnxn)
Oct 27 09:13:10 ip-10-10-85-215 kafka[62961]: [2021-10-27 09:13:10,548] WARN 
SASL configuration failed: javax.security.auth.login.LoginException: No JAAS 
configuration section named 'Client' was found in specified JAAS configuration 
file: '/opt/kafka/config/kafka_server_jaas.conf'. Will continue connection to 
Zookeeper server without SASL authentication, if Zookeeper server allows it. 
(org.apache.zookeeper.ClientCnxn)
Oct 27 09:13:10 ip-10-10-85-215 kafka[62961]: [2021-10-27 09:13:10,548] INFO 
Opening socket connection to server 
zookeeper-kafka.service.consul.lab.aws.blue.example.net/10.10.84.12:2181 
(org.apache.zookeeper.ClientCnxn)
Oct 27 09:13:10 ip-10-10-85-215 kafka[62961]: [2021-10-27 09:13:10,548] ERROR 
[ZooKeeperClient Kafka server] Auth failed. (kafka.zookeeper.ZooKeeperClient)
Oct 27 09:13:10 ip-10-10-85-215 kafka[62961]: [2021-10-27 09:13:10,549] INFO 
Socket connection established, initiating session, client: /10.10.85.215:39338, 
server: 
zookeeper-kafka.service.consul.lab.aws.blue.example.net/10.10.84.12:2181 
(org.apache.zookeeper.ClientCnxn)
Oct 27 09:13:10 ip-10-10-85-215 kafka[62961]: [2021-10-27 09:13:10,569] INFO 
Session establishment complete on server 
zookeeper-kafka.service.consul.lab.aws.blue.example.net/10.10.84.12:2181, 
sessionid = 0x1f2a12870003, negotiated timeout = 18000 
(org.apache.zookeeper.ClientCnxn)
Oct 27 09:13:11 ip-10-10-85-215 kafka[62961]: [2021-10-27 09:13:11,548] INFO 
[ZooKeeperClient Kafka server] Reinitializing due to auth failure. 
(kafka.zookeeper.ZooKeeperClient)
Oct 27 09:13:11 ip-10-10-85-215 kafka[62961]: [2021-10-27 09:13:11,550] INFO 
[PartitionStateMachine controllerId=1003] Stopped partition state machine 
(kafka.controller.ZkPartitionStateMachine)
Oct 27 09:13:11 ip-10-10-85-215 kafka[62961]: [2021-10-27 09:13:11,550] INFO 
[ReplicaStateMachine controllerId=1003] Stopped replica state machine 
(kafka.controller.ZkReplicaStateMachine)
Oct 27 09:13:11 ip-10-10-85-215 kafka[62961]: [2021-10-27 09:13:11,551] INFO 
[RequestSendThread controllerId=1003] Shutting down 
(kafka.controller.RequestSendThread)
Oct 27 09:13:11 ip-10-10-85-215 kafka[62961]: [2021-10-27 09:13:11,551] INFO 
[RequestSendThread controllerId=1003] Stopped 
(kafka.controller.RequestSendThread)
Oct 27 09:13:11 ip-10-10-85-215 kafka[62961]: [2021-10-27 09:13:11,551] INFO 
[RequestSendThread controllerId=1003] Shutdown completed 
(kafka.controller.RequestSendThread)
Oct 27 09:13:11 ip-10-10-85-215 kafka[62961]: [2021-10-27 09:13:11,552] INFO 
[RequestSendThread controllerId=1003] Shutting down 
(kafka.controller.RequestSendThread)
Oct 27 09:13:11 ip-10-10-85-215 kafka[62961]: [2021-10-27 09:13:11,552] INFO 
[RequestSendThread controllerId=1003] Stopped 
(kafka.controller.RequestSendThread)
Oct 27 09:13:11 ip-10-10-85-215 kafka[62961]: [2021-10-27 09:13:11,552] INFO 
[RequestSendThread controllerId=1003] Shutdown completed 
(kafka.controller.RequestSendThread)
Oct 27 09:13:11 ip-10-10-85-215 kafka[62961]: [2021-10-27 09:13:11,554] INFO 
[RequestSendThread controllerId=1003] Shutting down 
(kafka.controller.RequestSendThread)
Oct 27 09:13:11 ip-10-10-85-215 kafka[62961]: [2021-10-27 09:13:11,554] INFO 
[RequestSendThread controllerId=1003] Stopped 
(kafka.controller.RequestSendThread)
Oct 27 09:13:11 ip-10-10-85-215 kafka[62961]: [2021-10-27 09:13:11,554] INFO 
[RequestSendThread controllerId=1003] Shutdown completed 
(kafka.controller.RequestSendThread)
Oct 27 09:13:11 ip-10-10-85-215 kafka[62961]: [2021-10-27 09:13:11,556] INFO 
Processing notification(s) to 
{code}

[jira] [Created] (KAFKA-13406) Cooperative sticky assignor got stuck due to assignment validation failed

2021-10-27 Thread Luke Chen (Jira)
Luke Chen created KAFKA-13406:
-

 Summary: Cooperative sticky assignor got stuck due to assignment 
validation failed
 Key: KAFKA-13406
 URL: https://issues.apache.org/jira/browse/KAFKA-13406
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 3.0.0
Reporter: Luke Chen
Assignee: Luke Chen


We perform validateCooperativeAssignment for the cooperative assignor, where we 
validate that no previously owned partitions are transferred directly to other 
consumers without a "revoke" step in between. However, the "ownedPartitions" 
field in the subscription might contain out-of-date data, which can cause the 
validation to fail every time.

We should consider fixing this by deserializing the subscription userData for 
the generation info in validateCooperativeAssignment.
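As a rough illustration of the proposed direction, here is a hypothetical Python sketch of generation-aware filtering of ownedPartitions before validation: claims from consumers at a stale generation are discarded so they cannot trip the "no direct transfer without revoke" check. The real change belongs in the Java consumer client; all names and data here are illustrative:

```python
def filter_owned_partitions(subscriptions):
    """subscriptions: {consumer: (generation, [owned partitions])}.
    Keep ownedPartitions only for consumers at the highest generation;
    stale-generation claims are treated as empty."""
    max_gen = max(gen for gen, _ in subscriptions.values())
    return {
        consumer: parts if gen == max_gen else []
        for consumer, (gen, parts) in subscriptions.items()
    }

# Illustrative data: consumer-2 reports an out-of-date generation, so its
# ownedPartitions claim is dropped rather than failing validation.
subs = {
    "consumer-1": (5, ["topic-0"]),
    "consumer-2": (4, ["topic-0", "topic-1"]),  # stale generation
}
print(filter_owned_partitions(subs))
# -> {'consumer-1': ['topic-0'], 'consumer-2': []}
```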



--
This message was sent by Atlassian Jira
(v8.3.4#803005)