[jira] [Resolved] (KAFKA-10729) KIP-482: Bump remaining RPC's to use tagged fields

2020-12-01 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-10729.
-
Fix Version/s: 2.8.0
   Resolution: Fixed

> KIP-482: Bump remaining RPC's to use tagged fields
> --
>
> Key: KAFKA-10729
> URL: https://issues.apache.org/jira/browse/KAFKA-10729
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Gardner Vickers
>Assignee: Gardner Vickers
>Priority: Major
> Fix For: 2.8.0
>
>
> With 
> [KIP-482|https://cwiki.apache.org/confluence/display/KAFKA/KIP-482%3A+The+Kafka+Protocol+should+Support+Optional+Tagged+Fields],
>  the Kafka protocol gained support for tagged fields.
> Not all RPCs were bumped to use flexible versioning and tagged fields. We
> should bump the remaining RPCs and provide a new inter-broker protocol (IBP)
> version to take advantage of tagged fields via the flexible versioning
> mechanism.
> 
> The RPCs that need to be bumped are:
>  
> {code:java}
> AddOffsetsToTxnRequest
> AddOffsetsToTxnResponse
> AddPartitionsToTxnRequest
> AddPartitionsToTxnResponse
> AlterClientQuotasRequest
> AlterClientQuotasResponse
> AlterConfigsRequest
> AlterConfigsResponse
> AlterReplicaLogDirsRequest
> AlterReplicaLogDirsResponse
> DescribeClientQuotasRequest
> DescribeClientQuotasResponse
> DescribeConfigsRequest
> DescribeConfigsResponse
> EndTxnRequest
> EndTxnResponse
> ListOffsetRequest
> ListOffsetResponse
> OffsetForLeaderEpochRequest
> OffsetForLeaderEpochResponse
> ProduceRequest
> ProduceResponse
> WriteTxnMarkersRequest
> WriteTxnMarkersResponse 
> {code}
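Bumping one of these RPCs to a flexible version is a change to the message's JSON schema definition. A hedged sketch of what such a bump looks like (the version numbers and the tagged field are illustrative, not the actual AddOffsetsToTxn schema):

```json
{
  "apiKey": 25,
  "type": "request",
  "name": "AddOffsetsToTxnRequest",
  "validVersions": "0-2",
  "flexibleVersions": "2+",
  "fields": [
    { "name": "TransactionalId", "type": "string", "versions": "0+",
      "about": "The transactional id corresponding to the transaction." },
    { "name": "ExampleTaggedField", "type": "int32", "versions": "2+",
      "tag": 0, "taggedVersions": "2+", "ignorable": true,
      "about": "A hypothetical optional field, serialized only when set." }
  ]
}
```

Declaring `flexibleVersions` switches those versions to the compact wire format and permits fields carrying a `tag`/`taggedVersions` pair, which older readers can skip without breaking compatibility.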



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #297

2020-12-01 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Increase unit test coverage of method 
ProcessorTopology#updateSourceTopics() (#9654)

[github] MINOR: Small cleanups in `AlterIsr` handling logic (#9663)

[github] KAFKA-6687: restrict DSL to allow only Streams from the same source 
topics (#9609)


--
[...truncated 6.95 MB...]


Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #274

2020-12-01 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10729; Bump remaining RPC's to use tagged fields. (#9601)


--
[...truncated 3.47 MB...]


Re: [DISCUSS] KIP-689: Extend `StreamJoined` to allow more store configs

2020-12-01 Thread Walker Carlson
Thanks for making these changes. It makes more sense now to me. Overall LGTM

walker

On Tue, Dec 1, 2020 at 3:39 PM Sophie Blee-Goldman 
wrote:

> Thanks for the KIP! I'm happy with the state of things after your latest
> update,
> LGTM
>
> Sophie
>
> On Tue, Dec 1, 2020 at 2:26 PM Leah Thomas  wrote:
>
> > Hi Matthias,
> >
> > Yeah I think it should, good catch. That should also answer Walker's
> > question about why we have an option for `withLoggingEnabled()` even
> though
> > that's the default. Passing in a new map of configs could allow the user
> to
> > configure the log differently than the default. I've updated the KIP to
> > reflect the added parameter and an added variable, `topicConfig`, to
> store
> > the map of configs.
> >
> > Best,
> > Leah
> >
> > On Mon, Nov 30, 2020 at 5:35 PM Matthias J. Sax 
> wrote:
> >
> > > Thanks for the KIP Leah.
> > >
> > > Should `withLoggingEnabled()` take a `Map config`
> > > similar to the one from `Materialized`?
> > >
> > >
> > > -Matthias
> > >
> > > On 11/30/20 12:22 PM, Walker Carlson wrote:
> > > > Ah. That makes sense. Thank you for fixing that.
> > > >
> > > > One minor question. If the default is enabled is there any case
> where a
> > > > user would turn logging off then back on again? I can see having the
> > > > enableLogging for completeness, so it's probably not that important.
> > > >
> > > > Anyways other than that it looks good!
> > > >
> > > > Walker
> > > >
> > > > On Mon, Nov 30, 2020 at 12:06 PM Leah Thomas 
> > > wrote:
> > > >
> > > >> Hey Walker,
> > > >>
> > > >> Thanks for your response.
> > > >>
> > > >> 1. Ah yeah thanks for the catch, that was held over from copy/paste.
> > > Both
> > > >> functions should take no parameters, as they just set `loggingEnabled`
> to
> > > true
> > > >> or false. I've removed the `WindowBytesStoreSupplier
> > otherStoreSupplier`
> > > >> from the methods in the KIP
> > > >> 2. I think the fix to 1 answers this question, otherwise, I'm not
> > quite
> > > >> sure what you're asking. With the updated method calls, there
> > shouldn't
> > > be
> > > >> any duplication.
> > > >>
> > > >> Cheers,
> > > >> Leah
> > > >>
> > > >> On Mon, Nov 30, 2020 at 12:14 PM Walker Carlson <
> > wcarl...@confluent.io>
> > > >> wrote:
> > > >>
> > > >>> Hello Leah,
> > > >>>
> > > >>> Thank you for the KIP.
> > > >>>
> > > >>> I had a couple questions that maybe you can expand on from what is
> on
> > > the
> > > >>> KIP.
> > > >>>
> > > >>> 1) Why are we enabling/disabling the logging by passing in a
> > > >>> `WindowBytesStoreSupplier`?
> > > >>> It seems to me that these two things should be separate.
> > > >>>
> > > >>> 2) There is already `withThisStoreSupplier(final
> > > WindowBytesStoreSupplier
> > > >>> otherStoreSupplier)` and `withOtherStoreSupplier(final
> > > >>> WindowBytesStoreSupplier otherStoreSupplier)`. Why do we need to
> > > >> duplicate
> > > >>> them when the `retentionPeriod` can be set through them?
> > > >>>
> > > >>> Thanks,
> > > >>> Walker
> > > >>>
> > > >>> On Mon, Nov 30, 2020 at 8:53 AM Leah Thomas 
> > > >> wrote:
> > > >>>
> > >  After reading through
> > > https://issues.apache.org/jira/browse/KAFKA-9921
> > > >> I
> > >  removed the option to enable/disable caching for `StreamJoined`,
> as
> > > >>> caching
> > >  will always be disabled because we retain duplicates.
> > > 
> > >  I updated the KIP accordingly, it now adds only `enableLogging`
> as a
> > >  config.
> > > 
> > >  On Mon, Nov 30, 2020 at 9:54 AM Leah Thomas  >
> > > >>> wrote:
> > > 
> > > > Hi all,
> > > >
> > > > I'd like to kick-off the discussion for KIP-689: Extend
> > > >> `StreamJoined`
> > > >>> to
> > > > allow more store configs. This builds off the work of KIP-479
> > > > <
> > > 
> > > >>>
> > > >>
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-479%3A+Add+StreamJoined+config+object+to+Join
> > > 
> > >  to
> > > > add options to enable/disable both logging and caching for stream
> > > >> join
> > > > stores.
> > > >
> > > > KIP is here:
> > > >
> > > 
> > > >>>
> > > >>
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-689%3A+Extend+%60StreamJoined%60+to+allow+more+store+configs
> > > >
> > > >
> > > > Looking forward to hearing your thoughts,
> > > > Leah
> > > >
> > > 
> > > >>>
> > > >>
> > > >
> > >
> >
>
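The thread above converges on `StreamJoined` gaining a logging toggle whose enabling variant, like `Materialized#withLoggingEnabled(Map)`, accepts changelog topic configs. A minimal self-contained Java sketch of that shape (the class and method names model the discussion and are assumptions, not the real `org.apache.kafka.streams` API):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical model of the API shape discussed in KIP-689 -- NOT the real
// org.apache.kafka.streams.kstream.StreamJoined class. Names and defaults
// are assumptions based on the thread above.
public class StreamJoinedSketch {
    private boolean loggingEnabled = true;  // changelogging defaults to on
    private final Map<String, String> topicConfig = new HashMap<>();

    // Turn changelogging off for the join stores.
    public StreamJoinedSketch withLoggingDisabled() {
        this.loggingEnabled = false;
        this.topicConfig.clear();
        return this;
    }

    // Turn changelogging on and override changelog topic configs,
    // mirroring Materialized#withLoggingEnabled(Map). The config map is
    // why the method exists even though logging is already the default.
    public StreamJoinedSketch withLoggingEnabled(final Map<String, String> config) {
        this.loggingEnabled = true;
        this.topicConfig.putAll(config);
        return this;
    }

    public boolean loggingEnabled() { return loggingEnabled; }

    public Map<String, String> topicConfig() { return topicConfig; }

    public static void main(final String[] args) {
        final Map<String, String> cfg = new HashMap<>();
        cfg.put("retention.ms", "86400000");
        final StreamJoinedSketch joined = new StreamJoinedSketch().withLoggingEnabled(cfg);
        System.out.println(joined.loggingEnabled() + " " + joined.topicConfig());
    }
}
```

As the thread notes, accepting a config map justifies keeping an explicit enable method alongside the enabled-by-default behavior: callers can tune the changelog topic (e.g. retention) rather than merely re-enable logging.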



Re: [DISCUSS] KIP-689: Extend `StreamJoined` to allow more store configs

2020-12-01 Thread Sophie Blee-Goldman
Thanks for the KIP! I'm happy with the state of things after your latest
update,
LGTM

Sophie



Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #298

2020-12-01 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10729; Bump remaining RPC's to use tagged fields. (#9601)

[github] KAFKA-9263 The new hw is added to incorrect log when 
ReplicaAlterLogDirsThread is replacing log (fix 
PlaintextAdminIntegrationTest.testAlterReplicaLogDirs) (#9423)


--
[...truncated 3.48 MB...]

[jira] [Created] (KAFKA-10793) Race condition in FindCoordinatorFuture permanently severs connection to group coordinator

2020-12-01 Thread A. Sophie Blee-Goldman (Jira)
A. Sophie Blee-Goldman created KAFKA-10793:
--

 Summary: Race condition in FindCoordinatorFuture permanently 
severs connection to group coordinator
 Key: KAFKA-10793
 URL: https://issues.apache.org/jira/browse/KAFKA-10793
 Project: Kafka
  Issue Type: Bug
  Components: consumer, streams
Affects Versions: 2.5.0
Reporter: A. Sophie Blee-Goldman


Pretty much as soon as we started actively monitoring the 
_last-rebalance-seconds-ago_ metric in our Kafka Streams test environment, we 
started seeing something weird. Every so often one of the StreamThreads (i.e. a 
single Consumer instance) would appear to permanently fall out of the group, as 
evidenced by a monotonically increasing _last-rebalance-seconds-ago._ We inject 
artificial network failures every few hours at most, so the group rebalances 
quite often. But the one consumer never rejoins, with no other symptoms 
(besides a slight drop in throughput since the remaining threads had to take 
over this member's work). We're confident that the problem exists in the client 
layer, since the logs confirmed that the unhealthy consumer was still calling 
poll. It was also calling Consumer#committed in its main poll loop, which was 
consistently failing with a TimeoutException.

When I attached a remote debugger to an instance experiencing this issue, the 
network client's connection to the group coordinator (the one that uses 
MAX_VALUE - node.id as the coordinator id) was in the DISCONNECTED state. But 
for some reason it never tried to re-establish this connection, although it did 
successfully connect to that same broker through the "normal" connection (i.e. 
the one that just uses node.id).

The tl;dr is that the AbstractCoordinator's FindCoordinatorRequest has failed 
(presumably due to a disconnect), but the _findCoordinatorFuture_ is non-null 
so a new request is never sent. This shouldn't be possible since the 
FindCoordinatorResponseHandler is supposed to clear the _findCoordinatorFuture_ 
when the future is completed. But somehow that didn't happen, so the consumer 
continues to assume there's still a FindCoordinator request in flight and never 
even notices that it's dropped out of the group.

These are the only confirmed findings so far, however we have some guesses 
which I'll leave in the comments. Note that we only noticed this due to the 
newly added _last-rebalance-seconds-ago_ metric, and there's no reason to 
believe this bug hasn't been flying under the radar since the Consumer's 
inception.
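The guard described above can be sketched in isolation: the consumer only issues a new lookup when its cached future is null, so a completed-but-never-cleared future permanently suppresses retries. A simplified, self-contained Java sketch (method names follow the real AbstractCoordinator, but the logic is a hypothetical model, not the actual client code):

```java
import java.util.concurrent.CompletableFuture;

// Simplified, hypothetical model of the guard described above -- not the
// actual AbstractCoordinator code. A lookup is only sent when the cached
// future is null; the response handler is expected to clear it on completion.
public class CoordinatorLookupSketch {
    private CompletableFuture<Void> findCoordinatorFuture;  // non-null => lookup "in flight"
    private int requestsSent = 0;

    // Called from the poll loop whenever the coordinator is unknown.
    public void ensureCoordinatorReady() {
        if (findCoordinatorFuture != null) {
            return;  // the bug: if a dead future is never cleared, we return here forever
        }
        findCoordinatorFuture = new CompletableFuture<>();
        requestsSent++;
        // The completion handler clears the cached future so the next poll can retry.
        findCoordinatorFuture.whenComplete((v, t) -> findCoordinatorFuture = null);
    }

    // Simulates the broker response (or a disconnect) for the in-flight lookup.
    public void completeInFlight(final Throwable error) {
        final CompletableFuture<Void> f = findCoordinatorFuture;
        if (f == null) {
            return;
        }
        if (error == null) {
            f.complete(null);
        } else {
            f.completeExceptionally(error);
        }
    }

    public int requestsSent() { return requestsSent; }

    public static void main(final String[] args) {
        final CoordinatorLookupSketch c = new CoordinatorLookupSketch();
        c.ensureCoordinatorReady();   // sends lookup #1
        c.ensureCoordinatorReady();   // no-op while the future is pending
        c.completeInFlight(new RuntimeException("disconnected"));  // handler clears the future
        c.ensureCoordinatorReady();   // sends lookup #2, only possible because the future was cleared
        System.out.println(c.requestsSent());
    }
}
```

If the `whenComplete` handler ever fails to run after a failure, `findCoordinatorFuture` stays non-null and every subsequent `ensureCoordinatorReady()` call becomes a no-op, matching the observed symptom of a consumer that polls forever but never rejoins the group.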





Jenkins build is back to normal : Kafka » kafka-trunk-jdk15 #296

2020-12-01 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-10780) Rewrite ControllerZNode struct with auto-generated protocol

2020-12-01 Thread dengziming (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dengziming resolved KAFKA-10780.

Resolution: Won't Do

KIP-500 will replace all this code.

> Rewrite ControllerZNode struct with auto-generated protocol
> 
>
> Key: KAFKA-10780
> URL: https://issues.apache.org/jira/browse/KAFKA-10780
> Project: Kafka
>  Issue Type: Sub-task
>  Components: protocol
>Reporter: dengziming
>Assignee: dengziming
>Priority: Major
>
> Use the auto-generated protocol to rewrite the ZK controller node





[jira] [Created] (KAFKA-10794) Replica leader election is too slow in the case of too many partitions

2020-12-01 Thread limeng (Jira)
limeng created KAFKA-10794:
--

 Summary: Replica leader election is too slow in the case of too 
many partitions
 Key: KAFKA-10794
 URL: https://issues.apache.org/jira/browse/KAFKA-10794
 Project: Kafka
  Issue Type: Improvement
  Components: core
Affects Versions: 2.5.1, 2.6.0
Reporter: limeng








[jira] [Resolved] (KAFKA-9263) The new hw is added to incorrect log when ReplicaAlterLogDirsThread is replacing log (fix PlaintextAdminIntegrationTest.testAlterReplicaLogDirs)

2020-12-01 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-9263.
---
Fix Version/s: 2.8.0
   Resolution: Fixed

> The new hw is added to incorrect log when ReplicaAlterLogDirsThread is 
> replacing log (fix PlaintextAdminIntegrationTest.testAlterReplicaLogDirs)
> --
>
> Key: KAFKA-9263
> URL: https://issues.apache.org/jira/browse/KAFKA-9263
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.4.0
>Reporter: John Roesler
>Assignee: Chia-Ping Tsai
>Priority: Major
>  Labels: flaky-test
> Fix For: 2.8.0
>
>
> This test has failed for me on 
> https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/9691/testReport/junit/kafka.api/AdminClientIntegrationTest/testAlterReplicaLogDirs/
> {noformat}
> Error Message
> org.scalatest.exceptions.TestFailedException: only 0 messages are produced 
> within timeout after replica movement. Producer future 
> Some(Failure(java.util.concurrent.TimeoutException: Timeout after waiting for 
> 1 ms.))
> Stacktrace
> org.scalatest.exceptions.TestFailedException: only 0 messages are produced 
> within timeout after replica movement. Producer future 
> Some(Failure(java.util.concurrent.TimeoutException: Timeout after waiting for 
> 1 ms.))
>   at 
> org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:530)
>   at 
> org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:529)
>   at 
> org.scalatest.Assertions$.newAssertionFailedException(Assertions.scala:1389)
>   at org.scalatest.Assertions.fail(Assertions.scala:1091)
>   at org.scalatest.Assertions.fail$(Assertions.scala:1087)
>   at org.scalatest.Assertions$.fail(Assertions.scala:1389)
>   at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:842)
>   at 
> kafka.api.AdminClientIntegrationTest.testAlterReplicaLogDirs(AdminClientIntegrationTest.scala:459)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:288)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:282)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at java.base/java.lang.Thread.run(Thread.java:834)
> Standard Output
> [2019-12-03 04:54:16,111] ERROR [ReplicaFetcher replicaId=2, leaderId=1, 
> fetcherId=0] Error for partition unclean-test-topic-1-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-12-03 04:54:21,711] ERROR [ReplicaFetcher replicaId=0, leaderId=1, 
> fetcherId=0] Error for partition topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-12-03 04:54:21,712] ERROR [ReplicaFetcher replicaId=2, leaderId=1, 
> fetcherId=0] Error for partition topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-12-03 04:54:27,092] ERROR [ReplicaFetcher replicaId=2, leaderId=1, 
> fetcherId=0] Error for partition unclean-test-topic-1-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-12-03 04:54:27,091] ERROR [ReplicaFetcher replicaId=0, leaderId=1, 
> fetcherId=0] Error for partition unclean-test-topic-1-1 at offset 0 
> 

[jira] [Created] (KAFKA-10792) Source tasks can block herder thread by hanging during stop

2020-12-01 Thread Chris Egerton (Jira)
Chris Egerton created KAFKA-10792:
-

 Summary: Source tasks can block herder thread by hanging during 
stop
 Key: KAFKA-10792
 URL: https://issues.apache.org/jira/browse/KAFKA-10792
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.5.1, 2.6.0, 2.4.1, 2.5.0, 2.4.0, 2.4.2, 2.7.0
Reporter: Chris Egerton
Assignee: Chris Egerton


If a source task blocks during its {{stop}} method, the herder thread will also 
block, which can cause issues with detecting rebalances, reconfiguring 
connectors, and other vital functions of a Connect worker.

This occurs because the call to {{SourceTask::stop}} occurs on the herder's 
thread, instead of on the source task's own dedicated thread. This can be fixed 
by moving the call to {{SourceTask::stop}} onto the source task's dedicated 
thread and aligning with the current approach for {{Connector}}s and 
{{SinkTask}}s.
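The pattern described above can be illustrated with a self-contained sketch using plain {{java.util.concurrent}} (this is not actual Connect code; {{SlowTask}} and the timeout values are hypothetical): the potentially blocking {{stop()}} runs on the task's own thread, so the caller can bound its wait instead of hanging.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class StopWithTimeout {

    // Hypothetical stand-in for a misbehaving SourceTask whose stop() hangs.
    static class SlowTask {
        void stop() throws InterruptedException {
            Thread.sleep(10_000); // simulates a stop() that blocks far too long
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService taskThread = Executors.newSingleThreadExecutor();

        // Run the potentially blocking stop() on the task's own thread...
        Future<?> stopFuture = taskThread.submit(() -> {
            try {
                new SlowTask().stop();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // cancelled by the caller
            }
        });

        try {
            // ...so the caller (the herder, in Connect terms) waits a bounded time.
            stopFuture.get(200, TimeUnit.MILLISECONDS);
            System.out.println("task stopped cleanly");
        } catch (TimeoutException e) {
            stopFuture.cancel(true); // interrupt the hung task
            System.out.println("stop timed out; caller not blocked");
        } finally {
            taskThread.shutdownNow();
        }
    }
}
```

With the blocking call isolated this way, the calling thread stays free to handle rebalances and reconfiguration even when a task misbehaves.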





Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #272

2020-12-01 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: fix reading SSH output in Streams system tests (#9665)


--
[...truncated 3.47 MB...]
org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@426d2efa, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@171423f6, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@171423f6, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@7c503dba, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@7c503dba, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2175855e, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2175855e, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@ac47574, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@ac47574, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3c79ec49, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3c79ec49, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@4139ab91, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@4139ab91, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@4c6b899d, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@4c6b899d, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@652d3ab1, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@652d3ab1, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@164589f9, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@164589f9, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@1aaccd83, 
timestamped = false, caching = false, logging = true] STARTED


Re: [VOTE] 2.6.1 RC2

2020-12-01 Thread Guozhang Wang
Hello Gary,

Thanks for pointing this out. That change was meant to make other exceptions
easier to debug, but I think MemberIdRequiredException was overlooked. I can
provide a hotfix to separate this exception from the others in this log entry,
to be included in a future release.


Guozhang

On Sat, Nov 28, 2020 at 11:54 AM Ismael Juma  wrote:

> Thoughts Guozhang?
>
> On Wed, Nov 25, 2020, 11:16 AM Gary Russell  wrote:
>
> > I am seeing this on every consumer start:
> >
> > 2020-11-25 13:54:34.858  INFO 42250 --- [ntainer#0-0-C-1]
> > o.a.k.c.c.internals.AbstractCoordinator  : [Consumer
> > clientId=consumer-ktest26int-1, groupId=ktest26int] Rebalance failed.
> >
> > org.apache.kafka.common.errors.MemberIdRequiredException: The group
> member
> > needs to have a valid member id before actually entering a consumer
> group.
> >
> >
> > Due to this change [1].
> >
> > I understand what a MemberIdRequiredException is, but the previous
> (2.6.0)
> > log (with exception.getMessage()) didn't stand out like the new one does
> > because it was all on one line.
> >
> > Probably not a blocker, but I suspect it will cause some angst when users
> > start seeing it since it stands out so much. It will be worse if/when the
> > lack of stack trace for ApiExceptions is ever fixed.
> >
> > I am not sure I understand why it's logged at INFO at all, since it's a
> > normal state during initial handshaking.
> >
> >
> >
> > [1]:
> >
> https://github.com/apache/kafka/commit/16ec1793d53700623c9cb43e711f585aafd44dd4#diff-15efe9b844f78b686393b6c2e2ad61306c3473225742caed05c7edab9a138832R468
> >
> >
> > 
> > From: Mickael Maison 
> > Sent: Wednesday, November 25, 2020 1:41 PM
> > To: dev ; Users ;
> > kafka-clients 
> > Subject: [VOTE] 2.6.1 RC2
> >
> > Hello Kafka users, developers and client-developers,
> >
> > This is the third candidate for release of Apache Kafka 2.6.1.
> >
> > Since RC1, the following JIRAs have been fixed: KAFKA-10758
> >
> > Release notes for the 2.6.1 release:
> >
> >
> > https://home.apache.org/~mimaison/kafka-2.6.1-rc2/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Wednesday, December 2, 5PM PT
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> >
> >
> > https://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> >
> >
> > https://home.apache.org/~mimaison/kafka-2.6.1-rc2/
> >
> > * Maven artifacts to be voted upon:
> >
> >
> > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >
> > * Javadoc:
> >
> >
> > https://home.apache.org/~mimaison/kafka-2.6.1-rc2/javadoc/
> >
> > * Tag to be voted upon (off 2.6 branch) is the 2.6.1 tag:
> >
> >
> > https://github.com/apache/kafka/releases/tag/2.6.1-rc2
> >
> > * Documentation:
> >
> >
> 

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #250

2020-12-01 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: fix reading SSH output in Streams system tests (#9665)

[github] MINOR: Increase unit test coverage of method 
ProcessorTopology#updateSourceTopics() (#9654)

[github] MINOR: Small cleanups in `AlterIsr` handling logic (#9663)

[github] KAFKA-6687: restrict DSL to allow only Streams from the same source 
topics (#9609)


--
[...truncated 3.45 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED


Jenkins build is back to normal : Kafka » kafka-2.7-jdk8 #65

2020-12-01 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-689: Extend `StreamJoined` to allow more store configs

2020-12-01 Thread Leah Thomas
Hi Matthias,

Yeah, I think it should, good catch. That should also answer Walker's
question about why we have an option for `withLoggingEnabled()` even though
that's the default: passing in a new map of configs lets the user
configure the changelog differently than the default. I've updated the KIP to
reflect the added parameter and a new variable, `topicConfig`, to store
the map of configs.
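To make the discussion concrete, here is a sketch of how the proposed parameter might look in use, modeled on the analogous `Materialized#withLoggingEnabled(Map)`. This follows the API as proposed in the KIP (not compilable against released Kafka at the time of this thread), and the topic configs, stream names, and join logic are illustrative assumptions:

```java
// Illustrative changelog overrides; any topic-level configs could go here.
Map<String, String> topicConfig = new HashMap<>();
topicConfig.put("retention.ms", "86400000");
topicConfig.put("min.insync.replicas", "2");

StreamJoined<String, Long, Long> joined =
    StreamJoined.<String, Long, Long>with(Serdes.String(), Serdes.Long(), Serdes.Long())
                .withLoggingEnabled(topicConfig); // proposed: enable logging with custom configs

// `left` and `right` are assumed KStream<String, Long> instances.
KStream<String, Long> joinedStream = left.join(
    right,
    (leftValue, rightValue) -> leftValue + rightValue,
    JoinWindows.of(Duration.ofMinutes(5)),
    joined);
```

This is why `withLoggingEnabled()` is useful even though logging defaults to on: the call carries the changelog topic configuration, not just the on/off flag.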

Best,
Leah

On Mon, Nov 30, 2020 at 5:35 PM Matthias J. Sax  wrote:

> Thanks for the KIP Leah.
>
> Should `withLoggingEnabled()` take a `Map config`
> similar to the one from `Materialized`?
>
>
> -Matthias
>
> On 11/30/20 12:22 PM, Walker Carlson wrote:
> > Ah. That makes sense. Thank you for fixing that.
> >
> > One minor question. If the default is enabled, is there any case where a
> > user would turn logging off and then back on again? I can see keeping
> > enableLogging for completeness, so it's probably not that important.
> >
> > Anyways other than that it looks good!
> >
> > Walker
> >
> > On Mon, Nov 30, 2020 at 12:06 PM Leah Thomas 
> wrote:
> >
> >> Hey Walker,
> >>
> >> Thanks for your response.
> >>
> >> 1. Ah yeah, thanks for the catch, that was held over from copy/paste. Both
> >> functions should take no parameters, as they just set `loggingEnabled` to
> >> true or false. I've removed the `WindowBytesStoreSupplier otherStoreSupplier`
> >> from the methods in the KIP.
> >> 2. I think the fix to 1 answers this question, otherwise, I'm not quite
> >> sure what you're asking. With the updated method calls, there shouldn't
> be
> >> any duplication.
> >>
> >> Cheers,
> >> Leah
> >>
> >> On Mon, Nov 30, 2020 at 12:14 PM Walker Carlson 
> >> wrote:
> >>
> >>> Hello Leah,
> >>>
> >>> Thank you for the KIP.
> >>>
> >>> I had a couple questions that maybe you can expand on from what is on
> the
> >>> KIP.
> >>>
> >>> 1) Why are we enabling/disabling the logging by passing in a
> >>> `WindowBytesStoreSupplier`?
> >>> It seems to me that these two things should be separate.
> >>>
> >>> 2) There is already `withThisStoreSupplier(final
> WindowBytesStoreSupplier
> >>> otherStoreSupplier)` and `withOtherStoreSupplier(final
> >>> WindowBytesStoreSupplier otherStoreSupplier)`. Why do we need to
> >> duplicate
> >>> them when the `retentionPeriod` can be set through them?
> >>>
> >>> Thanks,
> >>> Walker
> >>>
> >>> On Mon, Nov 30, 2020 at 8:53 AM Leah Thomas 
> >> wrote:
> >>>
>  After reading through
> https://issues.apache.org/jira/browse/KAFKA-9921
> >> I
>  removed the option to enable/disable caching for `StreamJoined`, as
> >>> caching
>  will always be disabled because we retain duplicates.
> 
>  I updated the KIP accordingly, it now adds only `enableLogging` as a
>  config.
> 
>  On Mon, Nov 30, 2020 at 9:54 AM Leah Thomas 
> >>> wrote:
> 
> > Hi all,
> >
> > I'd like to kick-off the discussion for KIP-689: Extend
> >> `StreamJoined`
> >>> to
> > allow more store configs. This builds off the work of KIP-479
> > <
> 
> >>>
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-479%3A+Add+StreamJoined+config+object+to+Join
> 
>  to
> > add options to enable/disable both logging and caching for stream
> >> join
> > stores.
> >
> > KIP is here:
> >
> 
> >>>
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-689%3A+Extend+%60StreamJoined%60+to+allow+more+store+configs
> >
> >
> > Looking forward to hearing your thoughts,
> > Leah
> >
> 
> >>>
> >>
> >
>


Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #273

2020-12-01 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Increase unit test coverage of method 
ProcessorTopology#updateSourceTopics() (#9654)

[github] MINOR: Small cleanups in `AlterIsr` handling logic (#9663)

[github] KAFKA-6687: restrict DSL to allow only Streams from the same source 
topics (#9609)


--
[...truncated 3.47 MB...]
org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@311ea83d,
 timestamped = true, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@311ea83d,
 timestamped = true, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@73ffef0b,
 timestamped = true, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@73ffef0b,
 timestamped = true, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@2b34c5bd,
 timestamped = true, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@2b34c5bd,
 timestamped = true, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@148241b0,
 timestamped = true, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@148241b0,
 timestamped = true, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@50194c5d,
 timestamped = true, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@50194c5d,
 timestamped = true, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@5c1276e2,
 timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@5c1276e2,
 timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@518659f0,
 timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@518659f0,
 timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@578f1db1,
 timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@578f1db1,
 timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@4d6ed8b9,
 timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@4d6ed8b9,
 timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 

Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #295

2020-12-01 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10770: Remove duplicate defination of Metrics#getTags (#9659)


--
[...truncated 3.48 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 

Jenkins build is back to normal : Kafka » kafka-trunk-jdk8 #249

2020-12-01 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : Kafka » kafka-trunk-jdk11 #275

2020-12-01 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-680: TopologyTestDriver should not require a Properties argument

2020-12-01 Thread Rohit Deshpande
Hello all,
I am closing the vote as there are 3 binding votes.
Summary of the change:
Proposing to add two new constructors to TopologyTestDriver class.
1. One with only topology as parameter
2. Second one with topology and wall clock time as parameter
Additionally, we want to set a randomized application id in the stream
config to avoid conflicts between tests running in parallel.
Wiki for this change: link

Pull request: link 

Thanks,
Rohit

On Mon, Nov 23, 2020 at 9:58 AM Rohit Deshpande 
wrote:

> Thanks John and Matthias.
> Waiting for 1 more binding vote.
> Thanks,
> Rohit
>
> On Sat, Nov 21, 2020 at 11:01 AM Matthias J. Sax  wrote:
>
>> +1 (binding)
>>
>> On 11/20/20 7:43 PM, John Roesler wrote:
>> > Thanks again for the KIP, Rohit.
>> >
>> > I’m +1 (binding)
>> >
>> > Sorry, I missed your vote thread.
>> >
>> > -John
>> >
>> > On Fri, Nov 20, 2020, at 21:35, Rohit Deshpande wrote:
>> >> Thanks Guozhang.
>> >> Waiting for binding votes.
>> >> Thanks,
>> >> Rohit
>> >>
>> >> On Tue, Nov 17, 2020 at 10:13 AM Guozhang Wang 
>> wrote:
>> >>
>> >>> +1, thanks Rohit.
>> >>>
>> >>>
>> >>> Guozhang
>> >>>
>> >>> On Sun, Nov 15, 2020 at 11:53 AM Rohit Deshpande <
>> rohitdesh...@gmail.com>
>> >>> wrote:
>> >>>
>>  Hello all,
>>  I would like to start voting on KIP-680: TopologyTestDriver should
>> not
>>  require a Properties argument.
>> 
>> 
>> >>>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-680%3A+TopologyTestDriver+should+not+require+a+Properties+argument
>> 
>>  Discuss thread:
>> 
>> 
>> >>>
>> https://lists.apache.org/thread.html/r5d3d0afc6feb5e18ade47aefbd88534f1b19b2f550a14d33cbc7a0dd%40%3Cdev.kafka.apache.org%3E
>> 
>>  Jira for the KIP:
>>  https://issues.apache.org/jira/browse/KAFKA-10629
>> 
>>  If we end up making changes, they will look like this:
>>  https://github.com/apache/kafka/compare/trunk...rohitrmd:KAFKA-10629
>> 
>>  Thanks,
>>  Rohit
>> 
>> >>>
>> >>>
>> >>> --
>> >>> -- Guozhang
>> >>>
>> >>
>>
>


Re: [DISCUSS] KIP-405: Kafka Tiered Storage

2020-12-01 Thread Satish Duggana
Hi,
We updated the KIP with the points mentioned in the earlier mail
except for KIP-516 related changes. You can go through them and let us
know if you have any comments. We will update the KIP with the
remaining todo items and KIP-516 related changes by end of this
week (5th Dec).

Thanks,
Satish.

On Tue, Nov 10, 2020 at 8:26 PM Satish Duggana  wrote:
>
> Hi Jun,
> Thanks for your comments. Please find the inline replies below.
>
> 605.2 "Build the local leader epoch cache by cutting the leader epoch
> sequence received from remote storage to [LSO, ELO]." I mentioned an issue
> earlier. Suppose the leader's local start offset is 100. The follower finds
> a remote segment covering offset range [80, 120). The producerState with
> this remote segment is up to offset 120. To trim the producerState to
> offset 100 requires more work since one needs to download the previous
> producerState up to offset 80 and then replay the messages from 80 to 100.
> It seems that it's simpler in this case for the follower just to take the
> remote segment as it is and start fetching from offset 120.
>
> We chose that approach to avoid edge cases here. The remote log
> segment that is received may not have the same leader epoch sequence
> from 100-120 as the leader has (this can happen due to an unclean
> leader election). It is safe to start from what the leader returns
> here. Another way is to find the remote
> log segment
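For illustration, the "cutting" of the leader epoch sequence to [LSO, ELO] discussed above can be modeled in a few lines of plain Java. This is only a sketch under assumed names (`EpochEntry`, `trim` are invented here, not KIP-405 code), and it assumes the epoch sequence is ordered oldest-first:

```java
import java.util.ArrayList;
import java.util.List;

public class EpochCacheTrimSketch {
    // An epoch cache entry: a leader epoch and the first offset produced in it.
    record EpochEntry(int epoch, long startOffset) {}

    // Keep only the entries that cover offsets in [lso, elo), clamping the
    // first surviving entry so it starts no earlier than lso.
    static List<EpochEntry> trim(List<EpochEntry> entries, long lso, long elo) {
        List<EpochEntry> out = new ArrayList<>();
        for (int i = 0; i < entries.size(); i++) {
            EpochEntry e = entries.get(i);
            // An entry's range ends where the next entry begins.
            long end = (i + 1 < entries.size())
                    ? entries.get(i + 1).startOffset() : Long.MAX_VALUE;
            if (end <= lso || e.startOffset() >= elo) continue; // fully outside
            out.add(new EpochEntry(e.epoch(), Math.max(e.startOffset(), lso)));
        }
        return out;
    }

    public static void main(String[] args) {
        // The thread's example: follower's local range is [100, 120); the
        // remote segment's epoch sequence starts well before offset 100.
        List<EpochEntry> remote = List.of(
                new EpochEntry(0, 0), new EpochEntry(1, 90), new EpochEntry(2, 110));
        System.out.println(trim(remote, 100, 120));
    }
}
```

With these inputs, epoch 0 is dropped entirely and epoch 1 is clamped to start at offset 100, matching the follower's local log start offset.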
>
> 5016. Just to echo what Kowshik was saying. It seems that
> RLMM.onPartitionLeadershipChanges() is only called on the replicas for a
> partition, not on the replicas for the __remote_log_segment_metadata
> partition. It's not clear how the leader of __remote_log_segment_metadata
> obtains the metadata for remote segments for deletion.
>
> RLMM will always receive the callback for the remote log metadata
> topic partitions hosted on the local broker and these will be
> subscribed. I will make this clear in the KIP.
>
> 5100. KIP-516 has been accepted and is being implemented now. Could you
> update the KIP based on topicID?
>
> We mentioned KIP-516 and how it helps. We will update this KIP with
> all the changes it brings with KIP-516.
>
> 5101. RLMM: It would be useful to clarify how the following two APIs are
> used. According to the wiki, the former is used for topic deletion and the
> latter is used for retention. It seems that retention should use the former
> since remote segments without a matching epoch in the leader (potentially
> due to unclean leader election) also need to be garbage collected. The
> latter seems to be used for the new leader to determine the last tiered
> segment.
> default Iterator<RemoteLogSegmentMetadata>
> listRemoteLogSegments(TopicPartition topicPartition)
> Iterator<RemoteLogSegmentMetadata> listRemoteLogSegments(TopicPartition
> topicPartition, long leaderEpoch);
>
> Right, that is what we are currently doing. We will update the
> javadocs and wiki accordingly. Earlier, we did not want to remove the
> segments whose leader epochs do not match those of the leader
> partition, as they may be used later by a replica which becomes the
> leader (unclean leader election) and refers to those segments. But that
> may leak these segments in remote storage for the topic's lifetime. We
> decided to also clean up the oldest such segments in the case of
> size-based retention.
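The retention decision settled on above can be sketched as follows. This is an illustrative model only, not the KIP's implementation; the `Segment` record, the `segmentsToDelete` helper, and the oldest-first iteration order are assumptions made for the example:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class RemoteRetentionSketch {
    // A minimal stand-in for RemoteLogSegmentMetadata: just offset and size.
    record Segment(long baseOffset, long sizeBytes) {}

    // Walk all remote segments for the partition (including those whose
    // leader epochs no longer match the current leader's lineage) and pick
    // the oldest ones to delete until the total size fits under maxBytes.
    static List<Segment> segmentsToDelete(Iterator<Segment> oldestFirst, long maxBytes) {
        List<Segment> all = new ArrayList<>();
        long total = 0;
        while (oldestFirst.hasNext()) {
            Segment s = oldestFirst.next();
            all.add(s);
            total += s.sizeBytes();
        }
        List<Segment> doomed = new ArrayList<>();
        for (Segment s : all) {
            if (total <= maxBytes) break;
            doomed.add(s);          // oldest segments are removed first
            total -= s.sizeBytes();
        }
        return doomed;
    }

    public static void main(String[] args) {
        List<Segment> segs = List.of(
                new Segment(0, 40), new Segment(100, 40), new Segment(200, 40));
        // 120 bytes total against an 80-byte budget: only the oldest goes.
        System.out.println(segmentsToDelete(segs.iterator(), 80));
    }
}
```

Iterating all segments rather than only the current epoch chain is what keeps segments orphaned by an unclean leader election from leaking in remote storage.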
>
> 5102. RSM:
> 5102.1 For methods like fetchLogSegmentData(), it seems that they can
> use RemoteLogSegmentId instead of RemoteLogSegmentMetadata.
>
> It is useful for RSM to have the full metadata when fetching a log
> segment. It may construct the location/path from the id along with
> other metadata.
>
> 5102.2 In fetchLogSegmentData(), should we use long instead of Long?
>
> Wanted to keep endPosition as optional to read till the end of the
> segment and avoid sentinels.
>
> 5102.3 Why only some of the methods have default implementation and others
> Don't?
>
> Actually,  RSM will not have any default implementations. Those 3
> methods were made default earlier for tests etc. Updated the wiki.
>
> 5102.4. Could we define RemoteLogSegmentMetadataUpdate
> and DeletePartitionUpdate?
>
> Sure, they will be added.
>
>
> 5102.5 LogSegmentData: It seems that it's easier to pass
> in leaderEpochIndex as a ByteBuffer or byte array than a file since it will
> be generated in memory.
>
> Right, this is in plan.
>
> 5102.6 RemoteLogSegmentMetadata: It seems that it needs both baseOffset and
> startOffset. For example, deleteRecords() could move the startOffset to the
> middle of a segment. If we copy the full segment to remote storage, the
> baseOffset and the startOffset will be different.
>
> Good point. startOffset is baseOffset by default, if not set explicitly.
>
> 5102.7 Could we define all the public methods for RemoteLogSegmentMetadata
> and LogSegmentData?
>
> Sure, updated the wiki.
>
> 5102.8 Could we document whether endOffset in RemoteLogSegmentMetadata is
> inclusive/exclusive?
>
> It is inclusive, will update.
>
> 5103. configs:
> 5103.1 Could 

Re: Contributor permissions request

2020-12-01 Thread Omnia Ibrahim
Great I have access now. Thanks!
Regards,
Omnia Ibrahim
 Cloud Infrastructure - Kafka

> On 30 Nov 2020, at 23:00, Matthias J. Sax  wrote:
> 
> Hi,
> 
> maybe Bill forgot to hit the "save" button? Added you. Let us know if
> there are still issues.
> 
> 
> -Matthias
> 
> On 11/30/20 3:34 AM, Bruno Cadonna wrote:
>> Hi Omnia,
>> 
>> I forwarded you Bill's reply.
>> 
>> Unfortunately, I do not have permissions to check your permissions.
>> Somebody with committer status needs to check.
>> 
>> Usually getting permissions to write a KIP is quite fast. I am sorry for
>> the inconvenience.
>> 
>> Best,
>> Bruno
>> 
>> 
>> On 30.11.20 11:57, Omnia Ibrahim wrote:
>>> Hi Bruno,
>>> 
>>> Thanks for getting back to me. I can't see Bill's email in the spam
>>> folder, and I don't believe I have access to create KIPs. I'm seeing
>>> this message when I try to click on `Create KIP` here:
>>> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
>>> 
>>> 
>>> 
>>> Not sure what is the issue!
>>> 
>>> Regards,
>>> Omnia Ibrahim
>>>  Cloud Infrastructure - Kafka
>>> 
 On 30 Nov 2020, at 10:37, Bruno Cadonna >>> > wrote:
 
 Hi Omnia,
 
 Bill has already replied to your request and should have already
 granted you permissions.
 
 Maybe his reply went to your spam folder. I put your e-mail address
 in cc and hope you will get this e-mail.
 
 Best,
 Bruno
 
 On 30.11.20 11:24, Omnia Ibrahim wrote:
> Hi
> Any idea how long it takes to get a response on a contributor
> permission request? I want to create a KIP for MM2 but am blocked on
> this request.
> Regards,
> Omnia Ibrahim
>  Cloud Infrastructure - Kafka
>> On 23 Nov 2020, at 10:29, Omnia Ibrahim
>> > > wrote:
>> 
>> JIRA username: (omnia_h_ibrahim)
>> GitHub username: (OmniaGM)
>> Wiki username: (omnia)
>> 
>> 
>> Regards,
>> Omnia Ibrahim
>>  Cloud Infrastructure - Kafka
>> 
>>> 



[DISCUSS] KIP-690: Add additional configuration to control MirrorMaker 2 internal topics naming convention

2020-12-01 Thread Omnia Ibrahim
Hi everyone
I want to start discussion of the KIP 690, the proposal is here
https://cwiki.apache.org/confluence/display/KAFKA/KIP-690%3A+Add+additional+configuration+to+control+MirrorMaker+2+internal+topics+naming+convention

Thanks for your time and feedback.
Omnia


[jira] [Created] (KAFKA-10789) Streamlining Tests in ChangeLoggingKeyValueBytesStoreTest

2020-12-01 Thread Sagar Rao (Jira)
Sagar Rao created KAFKA-10789:
-

 Summary: Streamlining Tests in ChangeLoggingKeyValueBytesStoreTest
 Key: KAFKA-10789
 URL: https://issues.apache.org/jira/browse/KAFKA-10789
 Project: Kafka
  Issue Type: Improvement
Reporter: Sagar Rao


While reviewing KIP-614, it was decided that the tests in 
[CachingInMemoryKeyValueStoreTest.java|https://github.com/apache/kafka/pull/9508/files/899b79781d3412658293b918dce16709121accf1#diff-fdfe70d8fa0798642f0ed54785624aa9850d5d86afff2285acdf12f2775c3588]
 need to be streamlined to use a mocked underlying store.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10788) Streamlining Tests in CachingInMemoryKeyValueStoreTest

2020-12-01 Thread Sagar Rao (Jira)
Sagar Rao created KAFKA-10788:
-

 Summary: Streamlining Tests in CachingInMemoryKeyValueStoreTest
 Key: KAFKA-10788
 URL: https://issues.apache.org/jira/browse/KAFKA-10788
 Project: Kafka
  Issue Type: Improvement
Reporter: Sagar Rao


While reviewing KIP-614, it was decided that the tests in 
[CachingInMemoryKeyValueStoreTest.java|https://github.com/apache/kafka/pull/9508/files/899b79781d3412658293b918dce16709121accf1#diff-fdfe70d8fa0798642f0ed54785624aa9850d5d86afff2285acdf12f2775c3588]
 need to be streamlined to use a mocked underlying store.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-622 Add currentSystemTimeMs and currentStreamTimeMs to ProcessorContext

2020-12-01 Thread Bill Bejeck
Sorry for jumping into this so late,

Thanks for the KIP, I'm a +1 (binding)

-Bill

On Sun, Jul 26, 2020 at 11:06 AM John Roesler  wrote:

> Thanks William,
>
> I’m +1 (binding)
>
> Thanks,
> John
>
> On Fri, Jul 24, 2020, at 20:22, Sophie Blee-Goldman wrote:
> > Thanks all, +1 (non-binding)
> >
> > Cheers,
> > Sophie
> >
> > On Wed, Jul 8, 2020 at 4:02 AM Bruno Cadonna  wrote:
> >
> > > Thanks Will and Piotr,
> > >
> > > +1 (non-binding)
> > >
> > > Best,
> > > Bruno
> > >
> > > On Wed, Jul 8, 2020 at 8:12 AM Matthias J. Sax 
> wrote:
> > > >
> > > > Thanks for the KIP.
> > > >
> > > > +1 (binding)
> > > >
> > > >
> > > > -Matthias
> > > >
> > > > On 7/7/20 11:48 AM, William Bottrell wrote:
> > > > > Hi everyone,
> > > > >
> > > > > I'd like to start a vote for adding two new time API's to
> > > ProcessorContext.
> > > > >
> > > > > Add currentSystemTimeMs and currentStreamTimeMs to ProcessorContext
> > > > > <
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-622%3A+Add+currentSystemTimeMs+and+currentStreamTimeMs+to+ProcessorContext
> > > >
> > > > >
> > > > >  Thanks everyone for the initial feedback and thanks for your time.
> > > > >
> > > >
> > >
> >
>


[jira] [Created] (KAFKA-10791) Kafka Metadata older epoch problem

2020-12-01 Thread KRISHNA SARVEPALLI (Jira)
KRISHNA SARVEPALLI created KAFKA-10791:
--

 Summary: Kafka Metadata older epoch problem
 Key: KAFKA-10791
 URL: https://issues.apache.org/jira/browse/KAFKA-10791
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 2.2.0
 Environment: Kubernetes cluster,
Reporter: KRISHNA SARVEPALLI
 Attachments: Kafka-Client-Issue.png, zookeeper-leader-epoch.png, 
zookeeper-state.png

We are using Kafka in production with 5 brokers and 3 ZooKeeper nodes. We are 
running Kafka and ZooKeeper in Kubernetes, and storage is managed by PVCs using 
NFS. We are using a topic with 60 partitions.

The cluster had been running successfully for almost 50 days since the last 
restart. Last week (11/28) two brokers went down. The team is still researching 
the root cause of the broker failures. 

Since we are using K8s, the brokers came back up immediately (in less than 
5 minutes). But the producer and consumer applications then had issues 
downloading the metadata. Please check the attached images.

We enabled debug logs for one of the applications, and it seems the Kafka 
brokers are returning metadata with a leader_epoch value of 0, whereas the 
client metadata cache had it at 6 for most of the partitions. 
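This matches the client's leader-epoch validation: the Java client discards metadata whose leader epoch is older than the one it has cached. The sketch below is a simplified model of that check, not the actual `org.apache.kafka.clients.Metadata` code:

```java
import java.util.HashMap;
import java.util.Map;

public class MetadataEpochCheckSketch {
    private final Map<String, Integer> lastSeenEpoch = new HashMap<>();

    // Apply a metadata update only if its leader epoch is not older than the
    // cached one. Stale updates are discarded, which is why a broker that
    // reverted to epoch 0 can never refresh a client stuck at epoch 6.
    boolean maybeApply(String topicPartition, int newLeaderEpoch) {
        Integer cached = lastSeenEpoch.get(topicPartition);
        if (cached != null && newLeaderEpoch < cached) {
            return false; // stale: keep the cached view
        }
        lastSeenEpoch.put(topicPartition, newLeaderEpoch);
        return true;
    }

    public static void main(String[] args) {
        MetadataEpochCheckSketch cache = new MetadataEpochCheckSketch();
        System.out.println(cache.maybeApply("topic-0", 6)); // first sighting: accepted
        System.out.println(cache.maybeApply("topic-0", 0)); // older epoch: rejected
    }
}
```

It also explains why restarting the producers "fixed" things: a fresh client has no cached epoch, so the epoch-0 metadata is accepted.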

Eventually we were forced to restart all the producer apps (around 35-40 
microservices). Since a fresh client fetches metadata for the first time, they 
all downloaded it without any issue and were able to produce messages.

As part of troubleshooting, we checked the ZooKeeper key/value pairs 
registered by Kafka and saw that leader_epoch was set back to 0 for almost all 
partitions. We also checked another topic used by other apps: its leader_epoch 
was in good shape, and ctime and mtime were updated correctly. Please check 
the attached screenshots.

Please refer to the Stack Overflow issue that we reported:

https://stackoverflow.com/questions/65055299/kafka-producer-not-able-to-download-refresh-metadata-after-brokers-were-restar

 

+*Broker Configs:*+

--override zookeeper.connect=zookeeper:2181 
 --override advertised.listeners=PLAINTEXT://kafka,SASL_SSL://kafka
 --override log.dirs=/opt/kafka/data/logs 
 --override broker.id=kafka
 --override num.network.threads=3 
 --override num.io.threads=8 
 --override default.replication.factor=3 
 --override auto.create.topics.enable=true 
 --override delete.topic.enable=true 
 --override socket.send.buffer.bytes=102400 
 --override socket.receive.buffer.bytes=102400 
 --override socket.request.max.bytes=104857600 
 --override num.partitions=30 
 --override num.recovery.threads.per.data.dir=1 
 --override offsets.topic.replication.factor=3 
 --override transaction.state.log.replication.factor=3 
 --override transaction.state.log.min.isr=1 
 --override log.retention.hours=48 
 --override log.segment.bytes=1073741824 
 --override log.retention.check.interval.ms=30 
 --override zookeeper.connection.timeout.ms=6000 
 --override confluent.support.metrics.enable=true 
 --override group.initial.rebalance.delay.ms=0 
 --override confluent.support.customer.id=anonymous 
 --override ssl.truststore.location=kafka.broker.truststore.jks 
 --override ssl.truststore.password=changeit 
 --override ssl.keystore.location=kafka.broker.keystore.jks 
 --override ssl.keystore.password=changeit 
 --override ssl.keystore.type=PKCS12 
 --override ssl.key.password=changeit 
 --override listeners=SASL_SSL://0.0.0.0:9093,PLAINTEXT://0.0.0.0:9092 
 --override authorizer_class_name=kafka.security.auth.SimpleAclAuthorizer 
 --override ssl.endpoint.identification.algorithm 
 --override ssl.client.auth=requested 
 --override sasl.enabled.mechanisms=SCRAM-SHA-512 
 --override sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512 
 --override security.inter.broker.protocol=SASL_SSL 
 --override super.users=test:test
 --override zookeeper.set.acl=false

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10790) Detect/Prevent Deadlock on Producer Network Thread

2020-12-01 Thread Gary Russell (Jira)
Gary Russell created KAFKA-10790:


 Summary: Detect/Prevent Deadlock on Producer Network Thread
 Key: KAFKA-10790
 URL: https://issues.apache.org/jira/browse/KAFKA-10790
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Affects Versions: 2.6.0, 2.7.0
Reporter: Gary Russell


I realize this is contrived, but I stumbled across the problem while testing 
some library code with 2.7.0 RC3 (although the issue is not limited to 2.7).

For example, calling flush() from within the producer callback deadlocks the 
network thread (and blocks any attempt to close the producer thereafter).
{code:java}
producer.send(new ProducerRecord<>("foo", "bar"), (rm, ex) -> {
    producer.flush();
});
Thread.sleep(1000);
producer.close();
{code}
It took some time to figure out why the close was blocked.

There is existing logic in close() to avoid it blocking if called from the 
callback; perhaps similar logic could be added to flush() (and any other 
methods that might block), even if it means throwing an exception to make it 
clear that you can't call flush() from the callback. 
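A guard like the one suggested could look roughly like the sketch below. This is a hypothetical illustration, not the KafkaProducer internals; the class and field names are invented, and a real implementation would hold the producer's actual sender thread:

```java
public class FlushGuardSketch {
    private final Thread ioThread;

    FlushGuardSketch(Thread ioThread) {
        this.ioThread = ioThread;
    }

    // Fail fast instead of deadlocking: if flush() runs on the network
    // thread itself, no other thread can complete the outstanding batches
    // that flush() would wait for.
    void flush() {
        if (Thread.currentThread() == ioThread) {
            throw new IllegalStateException(
                    "flush() must not be called from the producer network thread");
        }
        // ... a real implementation would await outstanding batches here ...
    }

    public static void main(String[] args) {
        // Treat the current thread as the "network thread" to show the guard trips.
        FlushGuardSketch guard = new FlushGuardSketch(Thread.currentThread());
        try {
            guard.flush();
            System.out.println("no exception");
        } catch (IllegalStateException e) {
            System.out.println("guard tripped: " + e.getMessage());
        }
    }
}
```

Turning the silent deadlock into an immediate IllegalStateException mirrors what close() already does with a zero timeout from the callback: the misuse becomes visible at the call site instead of hanging the application.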

These stack traces are with the 2.6.0 client.
{noformat}
"main" #1 prio=5 os_prio=31 cpu=1333.10ms elapsed=13.05s tid=0x7ff259012800 
nid=0x2803 in Object.wait()  [0x7fda5000]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(java.base@14.0.2/Native Method)
- waiting on <0x000700d0> (a 
org.apache.kafka.common.utils.KafkaThread)
at java.lang.Thread.join(java.base@14.0.2/Thread.java:1297)
- locked <0x000700d0> (a 
org.apache.kafka.common.utils.KafkaThread)
at 
org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:1205)
at 
org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:1182)
at 
org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:1158)
at com.example.demo.Rk1Application.lambda$2(Rk1Application.java:55)


"kafka-producer-network-thread | producer-1" #24 daemon prio=5 os_prio=31 
cpu=225.80ms elapsed=11.64s tid=0x7ff256963000 nid=0x7103 waiting on 
condition  [0x700011d04000]
   java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@14.0.2/Native Method)
- parking to wait for  <0x0007020b27e0> (a 
java.util.concurrent.CountDownLatch$Sync)
at 
java.util.concurrent.locks.LockSupport.park(java.base@14.0.2/LockSupport.java:211)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(java.base@14.0.2/AbstractQueuedSynchronizer.java:714)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(java.base@14.0.2/AbstractQueuedSynchronizer.java:1046)
at 
java.util.concurrent.CountDownLatch.await(java.base@14.0.2/CountDownLatch.java:232)
at 
org.apache.kafka.clients.producer.internals.ProduceRequestResult.await(ProduceRequestResult.java:76)
at 
org.apache.kafka.clients.producer.internals.RecordAccumulator.awaitFlushCompletion(RecordAccumulator.java:712)
at 
org.apache.kafka.clients.producer.KafkaProducer.flush(KafkaProducer.java:)
at com.example.demo.Rk1Application.lambda$3(Rk1Application.java:52)
at 
com.example.demo.Rk1Application$$Lambda$528/0x000800e28840.onCompletion(Unknown
 Source)
at 
org.apache.kafka.clients.producer.KafkaProducer$InterceptorCallback.onCompletion(KafkaProducer.java:1363)
at 
org.apache.kafka.clients.producer.internals.ProducerBatch.completeFutureAndFireCallbacks(ProducerBatch.java:228)
at 
org.apache.kafka.clients.producer.internals.ProducerBatch.done(ProducerBatch.java:197)
at 
org.apache.kafka.clients.producer.internals.Sender.completeBatch(Sender.java:653)
at 
org.apache.kafka.clients.producer.internals.Sender.completeBatch(Sender.java:634)
at 
org.apache.kafka.clients.producer.internals.Sender.handleProduceResponse(Sender.java:554)
at 
org.apache.kafka.clients.producer.internals.Sender.lambda$sendProduceRequest$0(Sender.java:743)
at 
org.apache.kafka.clients.producer.internals.Sender$$Lambda$642/0x000800ea2040.onComplete(Unknown
 Source)
at 
org.apache.kafka.clients.ClientResponse.onComplete(ClientResponse.java:109)
at 
org.apache.kafka.clients.NetworkClient.completeResponses(NetworkClient.java:566)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:558)
at 
org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:325)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:240)
at java.lang.Thread.run(java.base@14.0.2/Thread.java:832)
{noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)