Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #464

2021-01-27 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10866: Add metadata to ConsumerRecords (#9836)

[github] MINOR: Upgrade to Scala 2.12.13 (#9981)


--
[...truncated 3.57 MB...]
TopologyTestDriverAtLeastOnceTest > shouldNotRequireParameters() PASSED

TopologyTestDriverAtLeastOnceTest > shouldPunctuateIfWallClockTimeAdvances() 
STARTED

TopologyTestDriverAtLeastOnceTest > shouldPunctuateIfWallClockTimeAdvances() 
PASSED

TestTopicsTest > testNonUsedOutputTopic() STARTED

TestTopicsTest > testNonUsedOutputTopic() PASSED

TestTopicsTest > testEmptyTopic() STARTED

TestTopicsTest > testEmptyTopic() PASSED

TestTopicsTest > testStartTimestamp() STARTED

TestTopicsTest > testStartTimestamp() PASSED

TestTopicsTest > testNegativeAdvance() STARTED

TestTopicsTest > testNegativeAdvance() PASSED

TestTopicsTest > shouldNotAllowToCreateWithNullDriver() STARTED

TestTopicsTest > shouldNotAllowToCreateWithNullDriver() PASSED

TestTopicsTest > testDuration() STARTED

TestTopicsTest > testDuration() PASSED

TestTopicsTest > testOutputToString() STARTED

TestTopicsTest > testOutputToString() PASSED

TestTopicsTest > testValue() STARTED

TestTopicsTest > testValue() PASSED

TestTopicsTest > testTimestampAutoAdvance() STARTED

TestTopicsTest > testTimestampAutoAdvance() PASSED

TestTopicsTest > testOutputWrongSerde() STARTED

TestTopicsTest > testOutputWrongSerde() PASSED

TestTopicsTest > shouldNotAllowToCreateOutputTopicWithNullTopicName() STARTED

TestTopicsTest > shouldNotAllowToCreateOutputTopicWithNullTopicName() PASSED

TestTopicsTest > testWrongSerde() STARTED

TestTopicsTest > testWrongSerde() PASSED

TestTopicsTest > testKeyValuesToMapWithNull() STARTED

TestTopicsTest > testKeyValuesToMapWithNull() PASSED

TestTopicsTest > testNonExistingOutputTopic() STARTED

TestTopicsTest > testNonExistingOutputTopic() PASSED

TestTopicsTest > testMultipleTopics() STARTED

TestTopicsTest > testMultipleTopics() PASSED

TestTopicsTest > testKeyValueList() STARTED

TestTopicsTest > testKeyValueList() PASSED

TestTopicsTest > shouldNotAllowToCreateOutputWithNullDriver() STARTED

TestTopicsTest > shouldNotAllowToCreateOutputWithNullDriver() PASSED

TestTopicsTest > testValueList() STARTED

TestTopicsTest > testValueList() PASSED

TestTopicsTest > testRecordList() STARTED

TestTopicsTest > testRecordList() PASSED

TestTopicsTest > testNonExistingInputTopic() STARTED

TestTopicsTest > testNonExistingInputTopic() PASSED

TestTopicsTest > testKeyValuesToMap() STARTED

TestTopicsTest > testKeyValuesToMap() PASSED

TestTopicsTest > testRecordsToList() STARTED

TestTopicsTest > testRecordsToList() PASSED

TestTopicsTest > testKeyValueListDuration() STARTED

TestTopicsTest > testKeyValueListDuration() PASSED

TestTopicsTest > testInputToString() STARTED

TestTopicsTest > testInputToString() PASSED

TestTopicsTest > testTimestamp() STARTED

TestTopicsTest > testTimestamp() PASSED

TestTopicsTest > testWithHeaders() STARTED

TestTopicsTest > testWithHeaders() PASSED

TestTopicsTest > testKeyValue() STARTED

TestTopicsTest > testKeyValue() PASSED

TestTopicsTest > shouldNotAllowToCreateTopicWithNullTopicName() STARTED

TestTopicsTest > shouldNotAllowToCreateTopicWithNullTopicName() PASSED

> Task :streams:upgrade-system-tests-0110:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0110:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0110:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0110:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0110:compileTestJava
> Task :streams:upgrade-system-tests-0110:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0110:testClasses
> Task :streams:upgrade-system-tests-0110:checkstyleTest
> Task :streams:upgrade-system-tests-0110:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0110:test
> Task :streams:upgrade-system-tests-10:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-10:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-10:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-10:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-10:compileTestJava
> Task :streams:upgrade-system-tests-10:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-10:testClasses
> Task :streams:upgrade-system-tests-10:checkstyleTest
> Task :streams:upgrade-system-tests-10:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-10:test
> Task :streams:upgrade-system-tests-11:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-11:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-11:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-11:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-11:compileTestJava
> Task :streams:upgrade-system-tests-11:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-11:testClasses
> Task 

[jira] [Resolved] (KAFKA-10658) ErrantRecordReporter.report always return completed future even though the record is not sent to DLQ topic yet

2021-01-27 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-10658.

Fix Version/s: 2.6.2, 2.7.1, 2.8.0
    Resolution: Fixed

> ErrantRecordReporter.report always return completed future even though the 
> record is not sent to DLQ topic yet 
> ---
>
> Key: KAFKA-10658
> URL: https://issues.apache.org/jira/browse/KAFKA-10658
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 2.8.0, 2.7.1, 2.6.2
>
>
> This issue happens when both the DLQ and the error log are enabled. An 
> incorrect filter in the handling of multiple reports causes the 
> uncompleted future to be filtered out. Hence, users always receive a 
> completed future even though the record is still in the producer buffer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Requesting review of a long pending Kafka Connect balanced task distribution PR

2021-01-27 Thread Konstantine Karantasis
I'm on it Ramesh. Thanks for the contribution and the notification.
Expect my second round of comments within the next few days.

Konstantine

On Tue, Jan 19, 2021 at 8:44 AM ramesh krishnan muthusamy <
ramkrish1...@gmail.com> wrote:

> Hi Team,
>
> This is a major fix required to use incremental cooperative consumer
> rebalancing in Kafka Connect. Please help in reviewing the PR.
>
> https://github.com/apache/kafka/pull/9319
>
> Thanks,
> Ramesh
>


Re: [DISCUSS] KIP-691: Transactional Producer Exception Handling

2021-01-27 Thread Boyang Chen
Thanks Jason for the thoughts.

On Wed, Jan 27, 2021 at 11:52 AM Jason Gustafson  wrote:

> Hi Boyang,
>
> Thanks for the iterations here. I think this is something we should have
> done a long time ago. It sounds like there are two API changes here:
>
> 1. We are introducing the `CommitFailedException` to wrap abortable
> errors that are raised from `commitTransaction`. This sounds fine to me. As
> far as I know, the only case we might need this is when we add support to
> let producers recover from coordinator timeouts. Are there any others?
>
> I think the purpose here is to ensure non-fatal exceptions are unified
under the same exception umbrella, to make it easier to proceed with
aborting any ongoing transaction. I don't think coordinator timeouts are
the only recoverable case here, since we have other non-fatal exceptions
such as UnknownPid.
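[Editor's note] The handling being discussed — abort on a unified non-fatal exception inside the processing loop, fail fast on fatal ones outside it — can be sketched in plain Java. The exception classes and helpers here are illustrative stand-ins, not the Kafka API:

```java
// Illustrative stand-ins for the unified abortable error and a fatal one.
class CommitFailedException extends RuntimeException {}
class FatalKafkaException extends RuntimeException {}

public class TxnLoopSketch {
    static int committed = 0;
    static int aborted = 0;
    static int batch = 0;

    // Simulated processing: the second batch hits an abortable failure.
    static void processAndCommit() {
        batch++;
        if (batch == 2) throw new CommitFailedException();
        committed++;
    }

    static void abortTransaction() { aborted++; }

    public static void main(String[] args) {
        try {
            while (batch < 3) {
                try {
                    processAndCommit();
                } catch (CommitFailedException abortable) {
                    // Non-fatal: the producer stays in an abortable state,
                    // so abort the transaction and continue with the next batch.
                    abortTransaction();
                }
            }
        } catch (FatalKafkaException fatal) {
            // Fatal: no recovery possible; close producer/consumer here.
        }
        System.out.println("committed=" + committed + ", aborted=" + aborted);
    }
}
```

Because every non-fatal error surfaces as the single abortable type, the inner catch stays small; only the outer catch needs to deal with teardown.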

2. We are wrapping non-fatal errors raised from `send` as `KafkaException`.
> The motivation for this is less clear to me and it doesn't look like the
> example from the KIP depends on it. My concern here is compatibility.
> Currently we have the following documentation for the `Callback` API:
>
> ```
>  *  Non-Retriable exceptions (fatal, the message will
> never be sent):
>  *
>  *  InvalidTopicException
>  *  OffsetMetadataTooLargeException
>  *  RecordBatchTooLargeException
>  *  RecordTooLargeException
>  *  UnknownServerException
>  *  UnknownProducerIdException
>  *  InvalidProducerEpochException
>  *
>  *  Retriable exceptions (transient, may be covered by
> increasing #.retries):
>  *
>  *  CorruptRecordException
>  *  InvalidMetadataException
>  *  NotEnoughReplicasAfterAppendException
>  *  NotEnoughReplicasException
>  *  OffsetOutOfRangeException
>  *  TimeoutException
>  *  UnknownTopicOrPartitionException
> ```
>
> If we wrap all the retriable exceptions documented here as
> `KafkaException`, wouldn't that break any error handling that users might
> already have? It would introduce a compatibility issue.
>
> The original intention was to simplify `send` callback error handling by
doing exception wrapping, as at the Streams level we have to prepare an
exhaustive list of exceptions to catch as fatal, and the same lengthy list
to catch as non-fatal. It would be much easier if we got hints from the
callback. However, I agree there is a compatibility concern. What about
deprecating the existing:

void onCompletion(RecordMetadata metadata, Exception exception)

and replace it with:

default void onCompletion(RecordMetadata metadata, Exception exception,
boolean isFatal) {
  this.onCompletion(metadata, exception);
}

so that new users can tell whether an error is fatal based on the info
presented by the producer?
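[Editor's note] The backward compatibility of this proposal rests on Java's default-method dispatch. A self-contained sketch with a simplified stand-in interface (`RecordMetadata` replaced by a plain String; not the actual producer API):

```java
// Simplified stand-in for the producer Callback; metadata is a String
// here instead of RecordMetadata to keep the sketch self-contained.
interface Callback {
    void onCompletion(String metadata, Exception exception);

    // Proposed overload: defaults to the old method, so existing
    // implementations compile and behave unchanged.
    default void onCompletion(String metadata, Exception exception, boolean isFatal) {
        this.onCompletion(metadata, exception);
    }
}

public class CallbackEvolutionSketch {
    static String lastSeen;

    public static void main(String[] args) {
        // A "legacy" implementation that only knows the two-argument method.
        Callback legacy = (metadata, exception) -> lastSeen = "legacy:" + metadata;

        // The producer can invoke the new three-argument method; the call
        // falls through to the legacy two-argument implementation.
        legacy.onCompletion("record-0", null, false);
        System.out.println(lastSeen); // legacy:record-0
    }
}
```

New implementations override the three-argument method to read the fatality hint; old ones keep working without recompilation.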

Thanks,
> Jason
>
>
> On Sat, Jan 23, 2021 at 3:31 AM Hiringuru  wrote:
>
> > Why  we are receiving all emails kindly remove us from
> > dev@kafka.apache.org we don't want to receive emails anymore.
> >
> > Thanks
> > > On 01/23/2021 4:14 AM Guozhang Wang  wrote:
> > >
> > >
> > > Thanks Boyang, yes I think I was confused about the different handling
> of
> > > two abortTxn calls, and now I get it was not intentional. I think I do
> > not
> > > have more concerns.
> > >
> > > On Fri, Jan 22, 2021 at 1:12 PM Boyang Chen <
> reluctanthero...@gmail.com>
> > > wrote:
> > >
> > > > Thanks for the clarification Guozhang, I got your point that we want
> to
> > > > have a consistent handling of fatal exceptions being thrown from the
> > > > abortTxn. I modified the current template to move the fatal exception
> > > > try-catch outside of the processing loop to make sure we could get a
> > chance
> > > > to close consumer/producer modules. Let me know what you think.
> > > >
> > > > Best,
> > > > Boyang
> > > >
> > > > On Fri, Jan 22, 2021 at 11:05 AM Boyang Chen <
> > reluctanthero...@gmail.com>
> > > > wrote:
> > > >
> > > > > My understanding is that abortTransaction would only throw when the
> > > > > producer is in fatal state. For CommitFailed, the producer should
> > still
> > > > be
> > > > > in the abortable error state, so that abortTransaction call would
> not
> > > > throw.
> > > > >
> > > > > On Fri, Jan 22, 2021 at 11:02 AM Guozhang Wang  >
> > > > wrote:
> > > > >
> > > > >> Or are you going to maintain some internal state such that, the
> > > > >> `abortTransaction` in the catch block would never throw again?
> > > > >>
> > > > >> On Fri, Jan 22, 2021 at 11:01 AM Guozhang Wang <
> wangg...@gmail.com>
> > > > >> wrote:
> > > > >>
> > > > >> > Hi Boyang/Jason,
> > > > >> >
> > > > >> > I've also thought about this (i.e. using CommitFailed for all
> > > > >> non-fatal),
> > > > >> > but what I'm 

Jenkins build is back to normal : Kafka » kafka-trunk-jdk8 #415

2021-01-27 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #446

2021-01-27 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10866: Add metadata to ConsumerRecords (#9836)


--
[...truncated 3.58 MB...]
TestTopicsTest > testKeyValue() STARTED

TestTopicsTest > testKeyValue() PASSED

TestTopicsTest > shouldNotAllowToCreateTopicWithNullTopicName() STARTED

TestTopicsTest > shouldNotAllowToCreateTopicWithNullTopicName() PASSED

TopologyTestDriverEosTest > shouldReturnCorrectPersistentStoreTypeOnly() PASSED

TopologyTestDriverEosTest > shouldRespectTaskIdling() STARTED

TopologyTestDriverEosTest > shouldRespectTaskIdling() PASSED

TopologyTestDriverEosTest > shouldUseSourceSpecificDeserializers() STARTED

TopologyTestDriverEosTest > shouldUseSourceSpecificDeserializers() PASSED

TopologyTestDriverEosTest > shouldReturnAllStores() STARTED

TopologyTestDriverEosTest > shouldReturnAllStores() PASSED

TopologyTestDriverEosTest > shouldNotCreateStateDirectoryForStatelessTopology() 
STARTED

TopologyTestDriverEosTest > shouldNotCreateStateDirectoryForStatelessTopology() 
PASSED

TopologyTestDriverEosTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies() STARTED

TopologyTestDriverEosTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies() PASSED

TopologyTestDriverEosTest > shouldReturnAllStoresNames() STARTED

TopologyTestDriverEosTest > shouldReturnAllStoresNames() PASSED

TopologyTestDriverEosTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers() STARTED

TopologyTestDriverEosTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers() PASSED

TopologyTestDriverEosTest > shouldProcessConsumerRecordList() STARTED

TopologyTestDriverEosTest > shouldProcessConsumerRecordList() PASSED

TopologyTestDriverEosTest > shouldUseSinkSpecificSerializers() STARTED

TopologyTestDriverEosTest > shouldUseSinkSpecificSerializers() PASSED

TopologyTestDriverEosTest > shouldFlushStoreForFirstInput() STARTED

TopologyTestDriverEosTest > shouldFlushStoreForFirstInput() PASSED

TopologyTestDriverEosTest > shouldProcessFromSourceThatMatchPattern() STARTED

TopologyTestDriverEosTest > shouldProcessFromSourceThatMatchPattern() PASSED

TopologyTestDriverEosTest > shouldCaptureSinkTopicNamesIfWrittenInto() STARTED

TopologyTestDriverEosTest > shouldCaptureSinkTopicNamesIfWrittenInto() PASSED

TopologyTestDriverEosTest > shouldUpdateStoreForNewKey() STARTED

TopologyTestDriverEosTest > shouldUpdateStoreForNewKey() PASSED

TopologyTestDriverEosTest > shouldSendRecordViaCorrectSourceTopicDeprecated() 
STARTED

TopologyTestDriverEosTest > shouldSendRecordViaCorrectSourceTopicDeprecated() 
PASSED

TopologyTestDriverEosTest > shouldPunctuateOnWallClockTime() STARTED

TopologyTestDriverEosTest > shouldPunctuateOnWallClockTime() PASSED

TopologyTestDriverEosTest > shouldSetRecordMetadata() STARTED

TopologyTestDriverEosTest > shouldSetRecordMetadata() PASSED

TopologyTestDriverEosTest > shouldNotUpdateStoreForLargerValue() STARTED

TopologyTestDriverEosTest > shouldNotUpdateStoreForLargerValue() PASSED

TopologyTestDriverEosTest > shouldReturnCorrectInMemoryStoreTypeOnly() STARTED

TopologyTestDriverEosTest > shouldReturnCorrectInMemoryStoreTypeOnly() PASSED

TopologyTestDriverEosTest > shouldThrowForMissingTime() STARTED

TopologyTestDriverEosTest > shouldThrowForMissingTime() PASSED

TopologyTestDriverEosTest > shouldCaptureInternalTopicNamesIfWrittenInto() 
STARTED

TopologyTestDriverEosTest > shouldCaptureInternalTopicNamesIfWrittenInto() 
PASSED

TopologyTestDriverEosTest > shouldPunctuateOnWallClockTimeDeprecated() STARTED

TopologyTestDriverEosTest > shouldPunctuateOnWallClockTimeDeprecated() PASSED

TopologyTestDriverEosTest > shouldProcessRecordForTopic() STARTED

TopologyTestDriverEosTest > shouldProcessRecordForTopic() PASSED

TopologyTestDriverEosTest > shouldForwardRecordsFromSubtopologyToSubtopology() 
STARTED

TopologyTestDriverEosTest > shouldForwardRecordsFromSubtopologyToSubtopology() 
PASSED

TopologyTestDriverEosTest > shouldNotUpdateStoreForSmallerValue() STARTED

TopologyTestDriverEosTest > shouldNotUpdateStoreForSmallerValue() PASSED

TopologyTestDriverEosTest > shouldCreateStateDirectoryForStatefulTopology() 
STARTED

TopologyTestDriverEosTest > shouldCreateStateDirectoryForStatefulTopology() 
PASSED

TopologyTestDriverEosTest > shouldNotRequireParameters() STARTED

TopologyTestDriverEosTest > shouldNotRequireParameters() PASSED

TopologyTestDriverEosTest > shouldPunctuateIfWallClockTimeAdvances() STARTED

TopologyTestDriverEosTest > shouldPunctuateIfWallClockTimeAdvances() PASSED

> Task :streams:upgrade-system-tests-0110:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0110:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0110:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0110:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0110:compileTestJava
> Task 

Re: Impact of setting max.inflight.requests.per.connection > 1 in Kafka connect

2021-01-27 Thread Matthias J. Sax
There should not be any data loss.

However, if a request fails and is retried, it may lead to reordering of
sends. Thus, records would not be ordered based on the `send()` calls
any longer.

If you would enable idempotent writes, ordering is guaranteed even with
multiple in-flight requests per connection though.
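[Editor's note] The configuration Matthias describes — idempotent writes preserving per-partition ordering with multiple in-flight requests — looks roughly like this. The property keys are the standard producer configs; in a Connect worker they would carry the appropriate producer prefix, and plain `java.util.Properties` is used here to keep the sketch dependency-free:

```java
import java.util.Properties;

public class IdempotentProducerConfigSketch {
    static Properties producerOverrides() {
        Properties props = new Properties();
        // Idempotent writes: ordering is preserved per partition even with
        // more than one in-flight request per connection.
        props.setProperty("enable.idempotence", "true");
        // With idempotence enabled, up to 5 in-flight requests are allowed
        // while ordering is still guaranteed.
        props.setProperty("max.in.flight.requests.per.connection", "5");
        // Idempotence requires acks=all.
        props.setProperty("acks", "all");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerOverrides());
    }
}
```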



-Matthias

On 1/27/21 11:35 AM, nitin agarwal wrote:
> Hi All,
> 
> I see that max.inflight.requests.per.connection is set to 1 explicitly in
> Kafka Connect but there is a way to override it. I want to understand the
> impact of setting its value > 1.
> As per my understanding, it can lead to data loss in some cases. Is that
> correct?
> 
> 
> Thank you,
> Nitin
> 


Re: [DISCUSS] Apache Kafka 2.6.2 release

2021-01-27 Thread Bill Bejeck
Thanks for taking this on Sophie. +1

Bill

On Wed, Jan 27, 2021 at 5:59 PM Ismael Juma  wrote:

> Thanks Sophie! +1
>
> Ismael
>
> On Wed, Jan 27, 2021 at 2:45 PM Sophie Blee-Goldman 
> wrote:
>
> > Hi all,
> >
> > I'd like to volunteer as release manager for a 2.6.2 release. This is
> being
> > accelerated
> > to address a critical regression in Kafka Streams for Windows users.
> >
> > You can find the release plan on the wiki:
> > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.6.2
> >
> > Thanks,
> > Sophie
> >
>


Re: [DISCUSS] Apache Kafka 2.6.2 release

2021-01-27 Thread Ismael Juma
Thanks Sophie! +1

Ismael

On Wed, Jan 27, 2021 at 2:45 PM Sophie Blee-Goldman 
wrote:

> Hi all,
>
> I'd like to volunteer as release manager for a 2.6.2 release. This is being
> accelerated
> to address a critical regression in Kafka Streams for Windows users.
>
> You can find the release plan on the wiki:
> https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.6.2
>
> Thanks,
> Sophie
>


Re: 2.6.1 Critical Regression

2021-01-27 Thread Ismael Juma
Thanks Sophie!

Ismael

On Wed, Jan 27, 2021 at 2:45 PM Sophie Blee-Goldman 
wrote:

> Yes, the change is in Kafka Streams, so the kafka-clients jar would not be
> sufficient.
>
> I've just sent out a note to the mailing list volunteering to be the
> release manager for a
> 2.6.2 release. I think we can get this out in a timely manner
>
> Thanks Gary
>
> On Tue, Jan 26, 2021 at 12:43 PM Ismael Juma  wrote:
>
> > Hi Gary,
> >
> > Thanks for raising this. Looking at the PR, the change is in kafka
> streams,
> > not clients?
> >
> > Ismael
> >
> > On Tue, Jan 26, 2021 at 12:39 PM Gary Russell 
> wrote:
> >
> > > A critical regression was included in kafka-clients 2.6.1, which is now
> > > fixed and cherry picked [1].
> > >
> > > For some devs, it is not easy (or they are not allowed by their
> > > enterprise) to move up to a new release, even a minor release (2.7).
> Devs
> > > are often allowed to upgrade to patch releases without approval,
> though.
> > >
> > > There seems to be some reticence to getting 2.6.2 out ASAP, due to the
> > > effort needed; I understand that a full release of everything is not
> > > trivial, but could consideration be given to releasing, say, 2.6.1.1
> for
> > > just the kafka-clients jar with this fix?
> > >
> > > Thanks.
> > >
> > >
> > > [1]: https://github.com/apache/kafka/pull/9947#issuecomment-767798579
> > >
> > >
> > >
> > >
> > >
> > >
> >
>


Re: 2.6.1 Critical Regression

2021-01-27 Thread Gary Russell
My apologies; I blame keyboard memory - I mostly work with kafka-clients.

Sophie, many thanks for stepping up to the plate on this one.

From: Sophie Blee-Goldman 
Sent: Wednesday, January 27, 2021 5:45 PM
To: dev 
Subject: Re: 2.6.1 Critical Regression

Yes, the change is in Kafka Streams, so the kafka-clients jar would not be
sufficient.

I've just sent out a note to the mailing list volunteering to be the
release manager for a
2.6.2 release. I think we can get this out in a timely manner

Thanks Gary

On Tue, Jan 26, 2021 at 12:43 PM Ismael Juma  wrote:

> Hi Gary,
>
> Thanks for raising this. Looking at the PR, the change is in kafka streams,
> not clients?
>
> Ismael
>
> On Tue, Jan 26, 2021 at 12:39 PM Gary Russell  wrote:
>
> > A critical regression was included in kafka-clients 2.6.1, which is now
> > fixed and cherry picked [1].
> >
> > For some devs, it is not easy (or they are not allowed by their
> > enterprise) to move up to a new release, even a minor release (2.7). Devs
> > are often allowed to upgrade to patch releases without approval, though.
> >
> > There seems to be some reticence to getting 2.6.2 out ASAP, due to the
> > effort needed; I understand that a full release of everything is not
> > trivial, but could consideration be given to releasing, say, 2.6.1.1 for
> > just the kafka-clients jar with this fix?
> >
> > Thanks.
> >
> >
> > [1]: https://github.com/apache/kafka/pull/9947#issuecomment-767798579
> >
> >
> >
> >
> >
> >
>


Re: 2.6.1 Critical Regression

2021-01-27 Thread Sophie Blee-Goldman
Yes, the change is in Kafka Streams, so the kafka-clients jar would not be
sufficient.

I've just sent out a note to the mailing list volunteering to be the
release manager for a
2.6.2 release. I think we can get this out in a timely manner

Thanks Gary

On Tue, Jan 26, 2021 at 12:43 PM Ismael Juma  wrote:

> Hi Gary,
>
> Thanks for raising this. Looking at the PR, the change is in kafka streams,
> not clients?
>
> Ismael
>
> On Tue, Jan 26, 2021 at 12:39 PM Gary Russell  wrote:
>
> > A critical regression was included in kafka-clients 2.6.1, which is now
> > fixed and cherry picked [1].
> >
> > For some devs, it is not easy (or they are not allowed by their
> > enterprise) to move up to a new release, even a minor release (2.7). Devs
> > are often allowed to upgrade to patch releases without approval, though.
> >
> > There seems to be some reticence to getting 2.6.2 out ASAP, due to the
> > effort needed; I understand that a full release of everything is not
> > trivial, but could consideration be given to releasing, say, 2.6.1.1 for
> > just the kafka-clients jar with this fix?
> >
> > Thanks.
> >
> >
> > [1]: https://github.com/apache/kafka/pull/9947#issuecomment-767798579
> >
> >
> >
> >
> >
> >
>


[DISCUSS] Apache Kafka 2.6.2 release

2021-01-27 Thread Sophie Blee-Goldman
Hi all,

I'd like to volunteer as release manager for a 2.6.2 release. This is being
accelerated
to address a critical regression in Kafka Streams for Windows users.

You can find the release plan on the wiki:
https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.6.2

Thanks,
Sophie


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #414

2021-01-27 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-8930: MirrorMaker v2 documentation (#324) (#9983)


--
[...truncated 3.57 MB...]
MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@16574c90, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@16574c90, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3609e34c, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3609e34c, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@6539dfe1, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@6539dfe1, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@b0aa830, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@b0aa830, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@229a9949, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@229a9949, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@7abf1b7e, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@7abf1b7e, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4d1e651c, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4d1e651c, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1eb4f1b1, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1eb4f1b1, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@7a35a292, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@7a35a292, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@33e5f23a, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@33e5f23a, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@47d26193, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@47d26193, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@284edf16, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@284edf16, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@1870ec1a, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@1870ec1a, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 

Re: [VOTE] KIP-663: API to Start and Shut Down Stream Threads and to Request Closing of Kafka Streams Clients

2021-01-27 Thread Sophie Blee-Goldman
Thanks Bruno, that sounds like a good addition. +1

On Wed, Jan 27, 2021 at 12:34 PM Bruno Cadonna  wrote:

> Hi all,
>
> During the implementation, we noticed that the method removeStreamThread()
> may block indefinitely when the stream thread chosen for removal is
> blocked and cannot be shut down. Thus, we will add an overload that
> takes a timeout. The newly added method will throw a TimeoutException
> when the timeout is exceeded.
>
> We updated the KIP accordingly.
>
> KIP: https://cwiki.apache.org/confluence/x/FDd4CQ
>
> Best,
> Bruno
>
> On 30.09.20 13:51, Bruno Cadonna wrote:
> > Thank you all for voting!
> >
> > This KIP is accepted with 3 binding +1 (Guozhang, John, Matthias).
> >
> > Best,
> > Bruno
> >
> > On 29.09.20 22:24, Matthias J. Sax wrote:
> >> +1 (binding)
> >>
> >> I am not super happy with the impact on the client state. For example, I
> >> don't understand why it's ok to scale out if we lose one thread out of
> >> four, but why it's not ok to scale out if we lose one thread out of one
> >> (for this case, we would enter ERROR state and cannot add new threads
> >> afterwards).
> >>
> >> However, this might be an issue for a follow up KIP.
> >>
> >>
> >> -Matthias
> >>
> >> On 9/29/20 7:20 AM, John Roesler wrote:
> >>> Thanks, Bruno, this sounds good to me.
> >>> -John
> >>>
> >>> On Tue, Sep 29, 2020, at 03:13, Bruno Cadonna wrote:
>  Hi all,
> 
>  I did two minor modifications to the KIP.
> 
>  - I removed the rather strict guarantee "Dead stream threads are
>  removed
>  from a Kafka Streams client at latest after the next call to
>  KafkaStreams#addStreamThread() or KafkaStreams#removeStreamThread()
>  following the transition to state DEAD."
>  Dead stream threads will be still removed, but the behavior will be
>  less
>  strict.
> 
>  - Added a sentence that states that the Kafka Streams client will
>  transit to ERROR if the last alive stream thread dies exceptionally.
>  This corresponds to the current behavior.
> 
>  I will not restart voting and keep the votes so far.
> 
>  Best,
>  Bruno
> 
>  On 22.09.20 01:19, John Roesler wrote:
> > I’m +1 also. Thanks, Bruno!
> > -John
> >
> > On Mon, Sep 21, 2020, at 17:08, Guozhang Wang wrote:
> >> Thanks Bruno. I'm +1 on the KIP.
> >>
> >> On Mon, Sep 21, 2020 at 2:49 AM Bruno Cadonna 
> >> wrote:
> >>
> >>> Hi,
> >>>
> >>> I would like to restart from zero the voting on KIP-663 that
> >>> proposes to
> >>> add methods to the Kafka Streams client to add and remove stream
> >>> threads
> >>> during execution.
> >>>
> >>>
> >>>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-663%3A+API+to+Start+and+Shut+Down+Stream+Threads
> >>>
> >>>
> >>> Matthias, if you are still +1, please vote again.
> >>>
> >>> Best,
> >>> Bruno
> >>>
> >>> On 04.09.20 23:12, John Roesler wrote:
>  Hi Sophie,
> 
>  Uh, oh, it's never a good sign when the discussion moves
>  into the vote thread :)
> 
>  I agree with you, it seems like a good touch for
>  removeStreamThread() to return the name of the thread that
>  got removed, rather than a boolean flag. Maybe the return
>  value would be `null` if there is no thread to remove.
> 
>  If we go that way, I'd suggest that addStreamThread() also
>  return the name of the newly created thread, or null if no
>  thread can be created right now.
> 
>  I'm not completely sure if I think that callers of this
>  method would know exactly how many threads there are. Sure,
>  if a human being is sitting there looking at the metrics or
>  logs and decides to call the method, it would work out, but
>  I'd expect this kind of method to find its way into
>  automated tooling that reacts to things like current system
>  load or resource saturation. Those kinds of toolchains often
>  are part of a distributed system, and it's probably not that
>  easy to guarantee that the thread count they observe is
>  fully consistent with the number of threads that are
>  actually running. Therefore, an in-situ `int
>  numStreamThreads()` method might not be a bad idea. Then
>  again, it seems sort of optional. A caller can catch an
>  exception or react to a `null` return value just the same
>  either way. Having both add/remove methods behave similarly
>  is probably more valuable.
> 
>  Thanks,
>  -John
> 
> 
>  On Thu, 2020-09-03 at 12:15 -0700, Sophie Blee-Goldman
>  wrote:
> > Hey, sorry for the late reply, I just have one minor
> > suggestion. Since we don't
> > make any guarantees about which 

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #445

2021-01-27 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-8930: MirrorMaker v2 documentation (#324) (#9983)


--
[...truncated 3.57 MB...]

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord()
 PASSED

OutputVerifierTest > shouldNotAllowNullProducerRecordForCompareKeyValue() 
STARTED

OutputVerifierTest > shouldNotAllowNullProducerRecordForCompareKeyValue() PASSED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord()
 PASSED

OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp() STARTED

OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp() PASSED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord()
 PASSED

OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord() 
STARTED

OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord() 
PASSED

OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord() 
STARTED

OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord() 
PASSED

OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord()
 PASSED

OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestamp() 
STARTED

OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestamp() 
PASSED

OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord()
 PASSED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp() STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp() PASSED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord()
 PASSED

KeyValueStoreFacadeTest > shouldReturnIsOpen() STARTED

KeyValueStoreFacadeTest > shouldReturnIsOpen() PASSED

KeyValueStoreFacadeTest > shouldDeleteAndReturnPlainValue() STARTED

KeyValueStoreFacadeTest > shouldDeleteAndReturnPlainValue() PASSED

KeyValueStoreFacadeTest > shouldReturnName() STARTED

KeyValueStoreFacadeTest > shouldReturnName() PASSED

KeyValueStoreFacadeTest > shouldPutWithUnknownTimestamp() STARTED

KeyValueStoreFacadeTest > shouldPutWithUnknownTimestamp() PASSED

KeyValueStoreFacadeTest > shouldPutAllWithUnknownTimestamp() STARTED

KeyValueStoreFacadeTest > shouldPutAllWithUnknownTimestamp() PASSED

KeyValueStoreFacadeTest > shouldReturnIsPersistent() STARTED

KeyValueStoreFacadeTest > shouldReturnIsPersistent() PASSED

KeyValueStoreFacadeTest > shouldForwardDeprecatedInit() STARTED

KeyValueStoreFacadeTest > shouldForwardDeprecatedInit() PASSED

KeyValueStoreFacadeTest > shouldPutIfAbsentWithUnknownTimestamp() STARTED

KeyValueStoreFacadeTest > shouldPutIfAbsentWithUnknownTimestamp() PASSED

KeyValueStoreFacadeTest > shouldForwardClose() STARTED

KeyValueStoreFacadeTest > shouldForwardClose() PASSED

KeyValueStoreFacadeTest > shouldForwardFlush() STARTED

KeyValueStoreFacadeTest > shouldForwardFlush() PASSED

KeyValueStoreFacadeTest > shouldForwardInit() STARTED

KeyValueStoreFacadeTest > shouldForwardInit() PASSED

> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:testClasses
> Task :streams:upgrade-system-tests-0101:checkstyleTest
> Task 

Re: [VOTE] KIP-663: API to Start and Shut Down Stream Threads and to Request Closing of Kafka Streams Clients

2021-01-27 Thread Bruno Cadonna

Hi all,

During the implementation, we noticed that the method removeStreamThread() 
may block indefinitely when the stream thread chosen for removal is 
blocked and cannot be shut down. Thus, we will add an overload that 
takes a timeout. The newly added method will throw a TimeoutException 
when the timeout is exceeded.


We updated the KIP accordingly.

KIP: https://cwiki.apache.org/confluence/x/FDd4CQ

Best,
Bruno
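To illustrate the calling pattern the overload enables, here is a minimal, self-contained sketch. The `StreamsClientStub` class and its thread-naming scheme are stand-ins invented for this example, and `java.util.concurrent.TimeoutException` stands in for Kafka's own `TimeoutException`; the real method lives on `org.apache.kafka.streams.KafkaStreams` as specified in the KIP.

```java
import java.time.Duration;
import java.util.Optional;
import java.util.concurrent.TimeoutException;

public class RemoveThreadWithTimeout {

    // Minimal stand-in for the Kafka Streams client; only the shape of the
    // proposed API matters here, not the real implementation.
    static class StreamsClientStub {
        private int threads = 2;

        // Proposed overload: give up after the timeout if the chosen thread
        // is blocked and cannot be shut down.
        Optional<String> removeStreamThread(Duration timeout) throws TimeoutException {
            if (threads == 0) {
                return Optional.empty();  // no thread left to remove
            }
            threads--;
            return Optional.of("StreamThread-" + (threads + 1));
        }
    }

    public static void main(String[] args) {
        StreamsClientStub streams = new StreamsClientStub();
        try {
            // Bound the wait instead of blocking indefinitely.
            Optional<String> removed = streams.removeStreamThread(Duration.ofSeconds(30));
            System.out.println(removed.orElse("none"));
        } catch (TimeoutException e) {
            // The thread chosen for removal did not shut down in time.
            System.out.println("timed out");
        }
    }
}
```

The point is only the shape of the interaction: bound the wait, and treat an empty result as "no thread left to remove" rather than an error.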

On 30.09.20 13:51, Bruno Cadonna wrote:

Thank you all for voting!

This KIP is accepted with 3 binding +1 (Guozhang, John, Matthias).

Best,
Bruno

On 29.09.20 22:24, Matthias J. Sax wrote:

+1 (binding)

I am not super happy with the impact on the client state. For example, I
don't understand why it's ok to scale out if we lose one thread out of
four, but why it's not ok to scale out if we lose one thread out of one
(for this case, we would enter ERROR state and cannot add new threads
afterwards).

However, this might be an issue for a follow-up KIP.


-Matthias

On 9/29/20 7:20 AM, John Roesler wrote:

Thanks, Bruno, this sounds good to me.
-John

On Tue, Sep 29, 2020, at 03:13, Bruno Cadonna wrote:

Hi all,

I did two minor modifications to the KIP.

- I removed the rather strict guarantee "Dead stream threads are 
removed

from a Kafka Streams client at latest after the next call to
KafkaStreams#addStreamThread() or KafkaStreams#removeStreamThread()
following the transition to state DEAD."
Dead stream threads will still be removed, but the behavior will be 
less strict.

- Added a sentence that states that the Kafka Streams client will
transition to ERROR if the last alive stream thread dies exceptionally.
This corresponds to the current behavior.

I will not restart voting and keep the votes so far.

Best,
Bruno

On 22.09.20 01:19, John Roesler wrote:

I’m +1 also. Thanks, Bruno!
-John

On Mon, Sep 21, 2020, at 17:08, Guozhang Wang wrote:

Thanks Bruno. I'm +1 on the KIP.

On Mon, Sep 21, 2020 at 2:49 AM Bruno Cadonna  
wrote:



Hi,

I would like to restart from zero the voting on KIP-663 that 
proposes to
add methods to the Kafka Streams client to add and remove stream 
threads

during execution.


https://cwiki.apache.org/confluence/display/KAFKA/KIP-663%3A+API+to+Start+and+Shut+Down+Stream+Threads 



Matthias, if you are still +1, please vote again.

Best,
Bruno

On 04.09.20 23:12, John Roesler wrote:

Hi Sophie,

Uh, oh, it's never a good sign when the discussion moves
into the vote thread :)

I agree with you, it seems like a good touch for
removeStreamThread() to return the name of the thread that
got removed, rather than a boolean flag. Maybe the return
value would be `null` if there is no thread to remove.

If we go that way, I'd suggest that addStreamThread() also
return the name of the newly created thread, or null if no
thread can be created right now.

I'm not completely sure if I think that callers of this
method would know exactly how many threads there are. Sure,
if a human being is sitting there looking at the metrics or
logs and decides to call the method, it would work out, but
I'd expect this kind of method to find its way into
automated tooling that reacts to things like current system
load or resource saturation. Those kinds of toolchains often
are part of a distributed system, and it's probably not that
easy to guarantee that the thread count they observe is
fully consistent with the number of threads that are
actually running. Therefore, an in-situ `int
numStreamThreads()` method might not be a bad idea. Then
again, it seems sort of optional. A caller can catch an
exception or react to a `null` return value just the same
either way. Having both add/remove methods behave similarly
is probably more valuable.

Thanks,
-John
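The automated-tooling scenario above can be sketched end to end. Everything below is a stub invented for illustration (the 4-thread cap, the naming scheme, the `Optional<String>` returns); it only shows how a load-driven controller can react to empty returns instead of tracking an exact thread count:

```java
import java.util.Optional;

public class ThreadAutoscalerSketch {

    // Stand-in for the Streams client; caps threads at 4 to mimic a
    // resource limit. The real return types/names are defined by the KIP.
    static class StreamsClientStub {
        private int threads = 1;

        Optional<String> addStreamThread() {
            if (threads >= 4) {
                return Optional.empty();  // cannot create a thread right now
            }
            threads++;
            return Optional.of("StreamThread-" + threads);
        }

        Optional<String> removeStreamThread() {
            if (threads == 0) {
                return Optional.empty();  // nothing left to remove
            }
            return Optional.of("StreamThread-" + threads--);
        }
    }

    // React to an observed load signal without assuming we know the exact
    // thread count -- just interpret the empty return value.
    static String scale(StreamsClientStub streams, double load) {
        Optional<String> result = load > 0.8 ? streams.addStreamThread()
                                             : streams.removeStreamThread();
        return result.map(name -> "adjusted: " + name)
                     .orElse("no-op: nothing to adjust");
    }

    public static void main(String[] args) {
        StreamsClientStub streams = new StreamsClientStub();
        System.out.println(scale(streams, 0.95)); // adjusted: StreamThread-2
        System.out.println(scale(streams, 0.10)); // adjusted: StreamThread-2
        System.out.println(scale(streams, 0.10)); // adjusted: StreamThread-1
        System.out.println(scale(streams, 0.10)); // no-op: nothing to adjust
    }
}
```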


On Thu, 2020-09-03 at 12:15 -0700, Sophie Blee-Goldman
wrote:
Hey, sorry for the late reply, I just have one minor suggestion. Since we
don't make any guarantees about which thread gets removed or allow the
user to specify, I think we should return either the index or full name
of the thread that does get removed by removeThread().

I know you just updated the KIP to return true/false if there are/aren't
any threads to be removed, but I think this would be more appropriate as
an exception than as a return type. I think it's reasonable to expect
users to have some sense of how many threads are remaining, and not try
to remove a thread when there is none left. To me, that indicates
something wrong with the user application code and should be treated as
an exceptional case.

I don't think the same code clarity argument applies here as to the
addStreamThread() case, as there's no reason for an application to be
looping and retrying removeStreamThread(), since if that fails, it's
because there are no threads left and thus it will continue to always
fail. And if the user actually wants to shut down all threads, they
should just close the whole application rather than call
removeStreamThread() in a loop.

While I generally think 

Re: [VOTE] KIP-698: Add Explicit User Initialization of Broker-side State to Kafka Streams

2021-01-27 Thread Leah Thomas
Hi Bruno,
I'm still +1, non-binding. Thanks for the updates!

Leah

On Wed, Jan 27, 2021 at 1:56 PM Matthias J. Sax  wrote:

> Thanks for updating the KIP.
>
> +1 (binding)
>
>
> -Matthias
>
> On 1/27/21 10:19 AM, Bruno Cadonna wrote:
> > Hi all,
> >
> > Thanks for voting!
> >
> > I updated the KIP with some additional feedback I got.
> >
> > If I do not hear anything from folks that have already voted in the next
> > couple of days, I will assume their vote is still valid. You can also
> > confirm your vote if you want.
> >
> > KIP: https://cwiki.apache.org/confluence/x/7CnZCQ
> >
> > Best,
> > Bruno
> >
> > On 26.01.21 02:19, Sophie Blee-Goldman wrote:
> >> Thanks for the KIP Bruno, +1 (binding)
> >>
> >> Sophie
> >>
> >> On Mon, Jan 25, 2021 at 11:23 AM Guozhang Wang 
> >> wrote:
> >>
> >>> Hey Bruno,
> >>>
> >>> Thanks for your response!
> >>>
> >>> 1) Yup I'm good with option a) as well.
> >>> 2) Thanks!
> >>> 3) Sounds good to me. I think it would not change any StreamThread
> >>> implementation regarding capturing exceptions from consumer.poll()
> >>> since it
> >>> captures StreamsException as fatal.
> >>>
> >>>
> >>> Guozhang
> >>>
> >>> On Wed, Dec 16, 2020 at 4:43 AM Bruno Cadonna 
> >>> wrote:
> >>>
>  Hi Guozhang,
> 
>  Thanks for the feedback!
> 
>  Please find my answers inline.
> 
>  Best,
>  Bruno
> 
> 
>  On 14.12.20 23:33, Guozhang Wang wrote:
> > Hello Bruno,
> >
> > Just a few more questions about the KIP:
> >
> > 1) If the internal topics exist but the calculated num.partitions do
> >>> not
> > match the existing topics, what would Streams do;
> 
>  Good point! I failed to explicitly consider misconfigurations in the
>  KIP.
> 
>  I propose to throw a fatal error in this case during manual and
>  automatic initialization. For the fatal error, we have two options:
>  a) introduce a second exception besides MissingInternalTopicException,
>  e.g. MisconfiguredInternalTopicException
>  b) rename MissingInternalTopicException to
>  MissingOrMisconfiguredInternalTopicException and throw that in both
> >>> cases.
> 
>  Since the process to react to such an exception user-side should be
>  similar, I am fine with option b). However, IMO option a) is a bit
>  cleaner. WDYT?
> 
> > 2) Since `init()` is a blocking call (we only return after all topics
> >>> are
> > confirmed to be created), should we have a timeout for this call as
> >>> well
>  or
> > not;
> 
>  I will add an overload with a timeout to the KIP.
> 
> > 3) If the config is set to `MANUAL_SETUP`, then during rebalance
> do
> >>> we
> > still check if number of partitions of the existing topic match or
> > not;
>  if
> > not, do we throw the newly added exception or throw a fatal
> > StreamsException? Today we would throw the StreamsException from
> >>> assign()
> > which would be then thrown from consumer.poll() as a fatal error.
> >
> 
>  Yes, I think we should check if the number of partitions matches. I
>  propose to throw the newly added exception in the same way as we throw
>  now the MissingSourceTopicException, i.e., throw it from
>  consumer.poll(). WDYT?
> 
> > Guozhang
> >
> >
> > On Mon, Dec 14, 2020 at 12:47 PM John Roesler 
>  wrote:
> >
> >> Thanks, Bruno!
> >>
> >> I'm +1 (binding)
> >>
> >> -John
> >>
> >> On Mon, 2020-12-14 at 09:57 -0600, Leah Thomas wrote:
> >>> Thanks for the KIP Bruno, LGTM. +1 (non-binding)
> >>>
> >>> Cheers,
> >>> Leah
> >>>
> >>> On Mon, Dec 14, 2020 at 4:29 AM Bruno Cadonna 
> >> wrote:
> >>>
>  Hi,
> 
>  I'd like to start the voting on KIP-698 that proposes an explicit
> >>> user
>  initialization of broker-side state for Kafka Streams instead of
> >> letting
>  Kafka Streams set up the broker-side state automatically
> during
>  rebalance. Such an explicit initialization avoids possible data
>  loss
>  issues due to automatic initialization.
> 
>  https://cwiki.apache.org/confluence/x/7CnZCQ
> 
>  Best,
>  Bruno
> 
> >>
> >>
> >>
> >
> 
> >>>
> >>>
> >>> --
> >>> -- Guozhang
> >>>
> >>
>


Build failed in Jenkins: Kafka » kafka-2.7-jdk8 #108

2021-01-27 Thread Apache Jenkins Server
See 


Changes:

[Bill Bejeck] KAFKA-8930: MirrorMaker v2 documentation (#324) (#9983)


--
[...truncated 3.44 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED


Re: [VOTE] KIP-698: Add Explicit User Initialization of Broker-side State to Kafka Streams

2021-01-27 Thread Matthias J. Sax
Thanks for updating the KIP.

+1 (binding)


-Matthias

On 1/27/21 10:19 AM, Bruno Cadonna wrote:
> Hi all,
> 
> Thanks for voting!
> 
> I updated the KIP with some additional feedback I got.
> 
> If I do not hear anything from folks that have already voted in the next
> couple of days, I will assume their vote is still valid. You can also
> confirm your vote if you want.
> 
> KIP: https://cwiki.apache.org/confluence/x/7CnZCQ
> 
> Best,
> Bruno
> 
> On 26.01.21 02:19, Sophie Blee-Goldman wrote:
>> Thanks for the KIP Bruno, +1 (binding)
>>
>> Sophie
>>
>> On Mon, Jan 25, 2021 at 11:23 AM Guozhang Wang 
>> wrote:
>>
>>> Hey Bruno,
>>>
>>> Thanks for your response!
>>>
>>> 1) Yup I'm good with option a) as well.
>>> 2) Thanks!
>>> 3) Sounds good to me. I think it would not change any StreamThread
>>> implementation regarding capturing exceptions from consumer.poll()
>>> since it
>>> captures StreamsException as fatal.
>>>
>>>
>>> Guozhang
>>>
>>> On Wed, Dec 16, 2020 at 4:43 AM Bruno Cadonna 
>>> wrote:
>>>
 Hi Guozhang,

 Thanks for the feedback!

 Please find my answers inline.

 Best,
 Bruno


 On 14.12.20 23:33, Guozhang Wang wrote:
> Hello Bruno,
>
> Just a few more questions about the KIP:
>
> 1) If the internal topics exist but the calculated num.partitions do
>>> not
> match the existing topics, what would Streams do;

 Good point! I failed to explicitly consider misconfigurations in the
 KIP.

 I propose to throw a fatal error in this case during manual and
 automatic initialization. For the fatal error, we have two options:
 a) introduce a second exception besides MissingInternalTopicException,
 e.g. MisconfiguredInternalTopicException
 b) rename MissingInternalTopicException to
 MissingOrMisconfiguredInternalTopicException and throw that in both
>>> cases.

 Since the process to react to such an exception user-side should be
 similar, I am fine with option b). However, IMO option a) is a bit
 cleaner. WDYT?

> 2) Since `init()` is a blocking call (we only return after all topics
>>> are
> confirmed to be created), should we have a timeout for this call as
>>> well
 or
> not;

 I will add an overload with a timeout to the KIP.

> 3) If the config is set to `MANUAL_SETUP`, then during rebalance do
>>> we
> still check if number of partitions of the existing topic match or
> not;
 if
> not, do we throw the newly added exception or throw a fatal
> StreamsException? Today we would throw the StreamsException from
>>> assign()
> which would be then thrown from consumer.poll() as a fatal error.
>

 Yes, I think we should check if the number of partitions matches. I
 propose to throw the newly added exception in the same way as we throw
 now the MissingSourceTopicException, i.e., throw it from
 consumer.poll(). WDYT?

> Guozhang
>
>
> On Mon, Dec 14, 2020 at 12:47 PM John Roesler 
 wrote:
>
>> Thanks, Bruno!
>>
>> I'm +1 (binding)
>>
>> -John
>>
>> On Mon, 2020-12-14 at 09:57 -0600, Leah Thomas wrote:
>>> Thanks for the KIP Bruno, LGTM. +1 (non-binding)
>>>
>>> Cheers,
>>> Leah
>>>
>>> On Mon, Dec 14, 2020 at 4:29 AM Bruno Cadonna 
>> wrote:
>>>
 Hi,

 I'd like to start the voting on KIP-698 that proposes an explicit
>>> user
 initialization of broker-side state for Kafka Streams instead of
>> letting
 Kafka Streams set up the broker-side state automatically during
 rebalance. Such an explicit initialization avoids possible data
 loss
 issues due to automatic initialization.

 https://cwiki.apache.org/confluence/x/7CnZCQ

 Best,
 Bruno

>>
>>
>>
>

>>>
>>>
>>> -- 
>>> -- Guozhang
>>>
>>


Re: [DISCUSS] KIP-691: Transactional Producer Exception Handling

2021-01-27 Thread Jason Gustafson
Hi Boyang,

Thanks for the iterations here. I think this is something we should have
done a long time ago. It sounds like there are two API changes here:

1. We are introducing the `CommitFailedException` to wrap abortable
errors that are raised from `commitTransaction`. This sounds fine to me. As
far as I know, the only case we might need this is when we add support to
let producers recover from coordinator timeouts. Are there any others?

2. We are wrapping non-fatal errors raised from `send` as `KafkaException`.
The motivation for this is less clear to me and it doesn't look like the
example from the KIP depends on it. My concern here is compatibility.
Currently we have the following documentation for the `Callback` API:

```
 *  Non-Retriable exceptions (fatal, the message will never be sent):
 *
 *  InvalidTopicException
 *  OffsetMetadataTooLargeException
 *  RecordBatchTooLargeException
 *  RecordTooLargeException
 *  UnknownServerException
 *  UnknownProducerIdException
 *  InvalidProducerEpochException
 *
 *  Retriable exceptions (transient, may be covered by increasing #.retries):
 *
 *  CorruptRecordException
 *  InvalidMetadataException
 *  NotEnoughReplicasAfterAppendException
 *  NotEnoughReplicasException
 *  OffsetOutOfRangeException
 *  TimeoutException
 *  UnknownTopicOrPartitionException
```

If we wrap all the retriable exceptions documented here as
`KafkaException`, wouldn't that break any error handling that users might
already have?

Thanks,
Jason
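To make the compatibility concern concrete, here is a self-contained sketch. The exception classes are local stand-ins invented for this example (the real ones live in `org.apache.kafka.common.errors`): a handler that branches on `instanceof RetriableException`, as the javadoc above invites, would stop recognizing a retriable error once it arrives wrapped in a plain `KafkaException`.

```java
public class CallbackCompatSketch {

    // Local stand-ins for the producer exception hierarchy; the real classes
    // live in org.apache.kafka.common.errors.
    static class KafkaException extends RuntimeException {
        KafkaException(String msg) { super(msg); }
        KafkaException(String msg, Throwable cause) { super(msg, cause); }
    }
    static class RetriableException extends KafkaException {
        RetriableException(String msg) { super(msg); }
    }
    static class TimeoutException extends RetriableException {
        TimeoutException(String msg) { super(msg); }
    }

    // The kind of user error handling the current Callback javadoc invites:
    // branch on whether the exception is retriable.
    static String classify(Exception e) {
        if (e instanceof RetriableException) {
            return "retry";
        }
        return "fatal";
    }

    public static void main(String[] args) {
        Exception raw = new TimeoutException("request timed out");
        // Today the callback sees the retriable exception directly.
        System.out.println(classify(raw));                                // retry
        // If the producer started wrapping it in a plain KafkaException,
        // the same handler would misclassify it as fatal.
        System.out.println(classify(new KafkaException("wrapped", raw))); // fatal
    }
}
```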


On Sat, Jan 23, 2021 at 3:31 AM Hiringuru  wrote:

> Why are we receiving all these emails? Kindly remove us from
> dev@kafka.apache.org; we don't want to receive emails anymore.
>
> Thanks
> > On 01/23/2021 4:14 AM Guozhang Wang  wrote:
> >
> >
> > Thanks Boyang, yes I think I was confused about the different handling of
> > two abortTxn calls, and now I get it was not intentional. I think I do
> not
> > have more concerns.
> >
> > On Fri, Jan 22, 2021 at 1:12 PM Boyang Chen 
> > wrote:
> >
> > > Thanks for the clarification Guozhang, I got your point that we want to
> > > have a consistent handling of fatal exceptions being thrown from the
> > > abortTxn. I modified the current template to move the fatal exception
> > > try-catch outside of the processing loop to make sure we could get a
> chance
> > > to close consumer/producer modules. Let me know what you think.
> > >
> > > Best,
> > > Boyang
> > >
> > > On Fri, Jan 22, 2021 at 11:05 AM Boyang Chen <
> reluctanthero...@gmail.com>
> > > wrote:
> > >
> > > > My understanding is that abortTransaction would only throw when the
> > > > producer is in fatal state. For CommitFailed, the producer should
> still
> > > be
> > > > in the abortable error state, so that abortTransaction call would not
> > > throw.
> > > >
> > > > On Fri, Jan 22, 2021 at 11:02 AM Guozhang Wang 
> > > wrote:
> > > >
> > > >> Or are you going to maintain some internal state such that, the
> > > >> `abortTransaction` in the catch block would never throw again?
> > > >>
> > > >> On Fri, Jan 22, 2021 at 11:01 AM Guozhang Wang 
> > > >> wrote:
> > > >>
> > > >> > Hi Boyang/Jason,
> > > >> >
> > > >> > I've also thought about this (i.e. using CommitFailed for all
> > > >> non-fatal),
> > > >> > but what I'm pondering is that, in the catch (CommitFailed) block,
> > > what
> > > >> > would happen if the `producer.abortTransaction();` throws again?
> > > should
> > > >> > that be captured as a fatal and cause the client to close again.
> > > >> >
> > > >> > If yes, then naively the pattern would be:
> > > >> >
> > > >> > ...
> > > >> > catch (CommitFailedException e) {
> > > >> >     // Transaction commit failed with an abortable error; the user
> > > >> >     // could reset the application state and resume with a new
> > > >> >     // transaction. The root cause is wrapped in the thrown exception.
> > > >> >     resetToLastCommittedPositions(consumer);
> > > >> >     try {
> > > >> >         producer.abortTransaction();
> > > >> >     } catch (KafkaException fatal) {
> > > >> >         producer.close();
> > > >> >         consumer.close();
> > > >> >         throw fatal;
> > > >> >     }
> > > >> > } catch (KafkaException e) {
> > > >> >     producer.close();
> > > >> >     consumer.close();
> > > >> >     throw e;
> > > >> > }
> > > >> > ...
> > > >> >
> > > >> > Guozhang
> > > >> >
> > > >> > On Fri, Jan 22, 2021 at 10:47 AM Boyang Chen <
> > > >> reluctanthero...@gmail.com>
> > > >> > wrote:
> > > >> >
> > > >> >> Hey Guozhang,
> > > >> >>
> > > >> >> Jason and I were discussing the new API offline and decided to
> take
> 

Impact of setting max.in.flight.requests.per.connection > 1 in Kafka Connect

2021-01-27 Thread nitin agarwal
Hi All,

I see that max.in.flight.requests.per.connection is set to 1 explicitly in
Kafka Connect, but there is a way to override it. I want to understand the
impact of setting its value > 1.
As per my understanding, it will lead to data loss in some cases. Is that
correct?


Thank you,
Nitin
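For reference, a sketch of what such an override could look like in a Connect worker properties file (values illustrative; whether raising it is safe depends on idempotence and retries — with retries and without idempotence, values above 1 mainly risk record reordering and duplicates rather than outright loss):

```properties
# Worker-level override of the internal producer used by source tasks,
# applied via the "producer." prefix. Connect pins this to 1 by default;
# raising it can improve throughput but, with retries, allows batches
# to be committed out of order on the broker.
producer.max.in.flight.requests.per.connection=5

# With idempotence enabled, per-partition ordering is preserved for
# values up to 5.
producer.enable.idempotence=true
```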


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #462

2021-01-27 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Update zookeeper to 3.5.9 (#9977)


--
[...truncated 3.59 MB...]

TestTopicsTest > shouldNotAllowToCreateWithNullDriver() PASSED

TestTopicsTest > testDuration() STARTED

TestTopicsTest > testDuration() PASSED

TestTopicsTest > testOutputToString() STARTED

TestTopicsTest > testOutputToString() PASSED

TestTopicsTest > testValue() STARTED

TestTopicsTest > testValue() PASSED

TestTopicsTest > testTimestampAutoAdvance() STARTED

TestTopicsTest > testTimestampAutoAdvance() PASSED

TestTopicsTest > testOutputWrongSerde() STARTED

TestTopicsTest > testOutputWrongSerde() PASSED

TestTopicsTest > shouldNotAllowToCreateOutputTopicWithNullTopicName() STARTED

TestTopicsTest > shouldNotAllowToCreateOutputTopicWithNullTopicName() PASSED

TestTopicsTest > testWrongSerde() STARTED

TestTopicsTest > testWrongSerde() PASSED

TestTopicsTest > testKeyValuesToMapWithNull() STARTED

TestTopicsTest > testKeyValuesToMapWithNull() PASSED

TestTopicsTest > testNonExistingOutputTopic() STARTED

TestTopicsTest > testNonExistingOutputTopic() PASSED

TestTopicsTest > testMultipleTopics() STARTED

TestTopicsTest > testMultipleTopics() PASSED

TestTopicsTest > testKeyValueList() STARTED

TestTopicsTest > testKeyValueList() PASSED

TestTopicsTest > shouldNotAllowToCreateOutputWithNullDriver() STARTED

TestTopicsTest > shouldNotAllowToCreateOutputWithNullDriver() PASSED

TestTopicsTest > testValueList() STARTED

TestTopicsTest > testValueList() PASSED

TestTopicsTest > testRecordList() STARTED

TestTopicsTest > testRecordList() PASSED

TestTopicsTest > testNonExistingInputTopic() STARTED

TestTopicsTest > testNonExistingInputTopic() PASSED

TestTopicsTest > testKeyValuesToMap() STARTED

TestTopicsTest > testKeyValuesToMap() PASSED

TestTopicsTest > testRecordsToList() STARTED

TestTopicsTest > testRecordsToList() PASSED

TestTopicsTest > testKeyValueListDuration() STARTED

TestTopicsTest > testKeyValueListDuration() PASSED

TestTopicsTest > testInputToString() STARTED

TestTopicsTest > testInputToString() PASSED

TestTopicsTest > testTimestamp() STARTED

TestTopicsTest > testTimestamp() PASSED

TestTopicsTest > testWithHeaders() STARTED

TestTopicsTest > testWithHeaders() PASSED

TestTopicsTest > testKeyValue() STARTED

TestTopicsTest > testKeyValue() PASSED

TestTopicsTest > shouldNotAllowToCreateTopicWithNullTopicName() STARTED

TestTopicsTest > shouldNotAllowToCreateTopicWithNullTopicName() PASSED

> Task :streams:upgrade-system-tests-0102:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0102:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0102:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:compileTestJava
> Task :streams:upgrade-system-tests-0102:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:testClasses
> Task :streams:upgrade-system-tests-0102:checkstyleTest
> Task :streams:upgrade-system-tests-0102:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:test
> Task :streams:upgrade-system-tests-0110:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0110:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0110:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0110:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0110:compileTestJava
> Task :streams:upgrade-system-tests-0110:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0110:testClasses
> Task :streams:upgrade-system-tests-0110:checkstyleTest
> Task :streams:upgrade-system-tests-0110:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0110:test
> Task :streams:upgrade-system-tests-10:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-10:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-10:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-10:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-10:compileTestJava
> Task :streams:upgrade-system-tests-10:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-10:testClasses
> Task :streams:upgrade-system-tests-10:checkstyleTest
> Task :streams:upgrade-system-tests-10:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-10:test
> Task :streams:upgrade-system-tests-11:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-11:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-11:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-11:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-11:compileTestJava
> Task :streams:upgrade-system-tests-11:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-11:testClasses
> Task :streams:upgrade-system-tests-11:checkstyleTest
> Task :streams:upgrade-system-tests-11:spotbugsMain NO-SOURCE
> Task 

[GitHub] [kafka-site] bbejeck merged pull request #326: KAFKA-8930: Port changes for MM2 docs to 2.6 docs

2021-01-27 Thread GitBox


bbejeck merged pull request #326:
URL: https://github.com/apache/kafka-site/pull/326


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka-site] bbejeck edited a comment on pull request #326: KAFKA-8930: Port changes for MM2 docs to 2.6 docs

2021-01-27 Thread GitBox


bbejeck edited a comment on pull request #326:
URL: https://github.com/apache/kafka-site/pull/326#issuecomment-768481852


   > I didn't test this PR locally myself to ensure proper HTML rendering etc.
   
   FWIW I rendered it locally and it seemed fine







[GitHub] [kafka-site] bbejeck commented on pull request #326: KAFKA-8930: Port changes for MM2 docs to 2.6 docs

2021-01-27 Thread GitBox


bbejeck commented on pull request #326:
URL: https://github.com/apache/kafka-site/pull/326#issuecomment-768481852


   > I didn't test this PR locally myself to ensure proper HTML rendering etc.
   FWIW I rendered it locally and it seemed fine







Re: [VOTE] KIP-698: Add Explicit User Initialization of Broker-side State to Kafka Streams

2021-01-27 Thread Bruno Cadonna

Hi all,

Thanks for voting!

I updated the KIP with some additional feedback I got.

If I do not hear anything from folks who have already voted in the next 
couple of days, I will assume their votes are still valid. You can also 
confirm your vote if you want.


KIP: https://cwiki.apache.org/confluence/x/7CnZCQ

Best,
Bruno

On 26.01.21 02:19, Sophie Blee-Goldman wrote:

Thanks for the KIP Bruno, +1 (binding)

Sophie

On Mon, Jan 25, 2021 at 11:23 AM Guozhang Wang  wrote:


Hey Bruno,

Thanks for your response!

1) Yup I'm good with option a) as well.
2) Thanks!
3) Sounds good to me. I think it would not change any StreamThread
implementation regarding capturing exceptions from consumer.poll() since it
captures StreamsException as fatal.


Guozhang

On Wed, Dec 16, 2020 at 4:43 AM Bruno Cadonna  wrote:


Hi Guozhang,

Thanks for the feedback!

Please find my answers inline.

Best,
Bruno


On 14.12.20 23:33, Guozhang Wang wrote:

Hello Bruno,

Just a few more questions about the KIP:

1) If the internal topics exist but the calculated num.partitions do not
match the existing topics, what would Streams do;


Good point! I missed explicitly considering misconfigurations in the KIP.

I propose to throw a fatal error in this case during manual and
automatic initialization. For the fatal error, we have two options:
a) introduce a second exception besides MissingInternalTopicException,
e.g. MisconfiguredInternalTopicException
b) rename MissingInternalTopicException to
MissingOrMisconfiguredInternalTopicException and throw that in both cases.


Since the process to react on such an exception user-side should be
similar, I am fine with option b). However, IMO option a) is a bit
cleaner. WDYT?
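To make the two options concrete: under option a) the caller can distinguish the two failure modes by exception type, while under option b) a single merged type covers both. A minimal, self-contained Java sketch (the exception names are the hypothetical ones from this thread, not an existing Kafka Streams API):

```java
// Sketch of option a): two distinct fatal exceptions. The class names
// mirror the hypothetical ones proposed in this thread; none of this is
// released Kafka Streams API.
public class InitExceptionSketch {

    static class StreamsException extends RuntimeException {
        StreamsException(String msg) { super(msg); }
    }
    static class MissingInternalTopicException extends StreamsException {
        MissingInternalTopicException(String msg) { super(msg); }
    }
    static class MisconfiguredInternalTopicException extends StreamsException {
        MisconfiguredInternalTopicException(String msg) { super(msg); }
    }

    // User-side reaction is similar for both failure modes, which is why
    // option b) (one merged exception type) would also work.
    static String handle(StreamsException e) {
        if (e instanceof MissingInternalTopicException) {
            return "run explicit init()";
        } else if (e instanceof MisconfiguredInternalTopicException) {
            return "fix topic config, then init()";
        }
        return "fatal: " + e.getMessage();
    }

    public static void main(String[] args) {
        System.out.println(handle(new MissingInternalTopicException("missing topic")));
        System.out.println(handle(new MisconfiguredInternalTopicException("partition count mismatch")));
    }
}
```

Under option b) the two branches collapse into one catch block, so the user-side handling is indeed similar either way; option a) just makes the cause explicit in the type.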


2) Since `init()` is a blocking call (we only return after all topics are
confirmed to be created), should we have a timeout for this call as well
or not;


I will add an overload with a timeout to the KIP.
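One possible shape for such an overload, as a self-contained sketch (`init()` is the API proposed by the KIP, not a released one; the interface and stand-in implementation below are purely illustrative):

```java
import java.time.Duration;
import java.util.concurrent.TimeoutException;

// Hypothetical shape of the proposed blocking init() call plus the
// timeout overload discussed here; names are illustrative only.
public class InitApiSketch {

    interface BrokerStateInitializer {
        // Blocks until all internal topics are confirmed created.
        void init();

        // Same, but gives up after the timeout so the caller can retry.
        void init(Duration timeout) throws TimeoutException;
    }

    // A trivial in-memory stand-in so the sketch is self-contained.
    static class FakeInitializer implements BrokerStateInitializer {
        private final Duration timeToCreate;
        FakeInitializer(Duration timeToCreate) { this.timeToCreate = timeToCreate; }

        @Override public void init() { /* pretend creation always succeeds */ }

        @Override public void init(Duration timeout) throws TimeoutException {
            if (timeToCreate.compareTo(timeout) > 0) {
                throw new TimeoutException("internal topics not ready within " + timeout);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        BrokerStateInitializer ok = new FakeInitializer(Duration.ofSeconds(1));
        ok.init(Duration.ofSeconds(5)); // completes within the timeout

        BrokerStateInitializer slow = new FakeInitializer(Duration.ofMinutes(2));
        try {
            slow.init(Duration.ofSeconds(5));
        } catch (TimeoutException e) {
            System.out.println("timed out as expected");
        }
    }
}
```

The timeout overload keeps the no-argument variant simple for callers who are happy to block indefinitely, while giving scripted setups a bounded wait.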


3) If the config is set to `MANUAL_SETUP`, then during rebalance do we
still check if the number of partitions of the existing topic matches or
not; if not, do we throw the newly added exception or throw a fatal
StreamsException? Today we would throw the StreamsException from assign(),
which would then be thrown from consumer.poll() as a fatal error.



Yes, I think we should check if the number of partitions matches. I
propose to throw the newly added exception in the same way as we now
throw the MissingSourceTopicException, i.e., from consumer.poll(). WDYT?


Guozhang


On Mon, Dec 14, 2020 at 12:47 PM John Roesler wrote:



Thanks, Bruno!

I'm +1 (binding)

-John

On Mon, 2020-12-14 at 09:57 -0600, Leah Thomas wrote:

Thanks for the KIP Bruno, LGTM. +1 (non-binding)

Cheers,
Leah

On Mon, Dec 14, 2020 at 4:29 AM Bruno Cadonna wrote:



Hi,

I'd like to start the voting on KIP-698, which proposes an explicit user
initialization of broker-side state for Kafka Streams instead of letting
Kafka Streams set up the broker-side state automatically during
rebalance. Such an explicit initialization avoids possible data loss
issues due to automatic initialization.

https://cwiki.apache.org/confluence/x/7CnZCQ

Best,
Bruno












--
-- Guozhang





Re: [DISCUSS] Apache Kafka 2.8.0 release

2021-01-27 Thread John Roesler
Hello again, all.

This is a reminder that *today* is the KIP freeze for Apache
Kafka 2.8.0.

The next checkpoint is the Feature Freeze on Feb 3rd.

When considering any last-minute KIPs today, please be
mindful of the scope, since we have only one week to merge a
stable implementation of the KIP.

For those whose KIPs have been accepted already, please work
closely with your reviewers so that your features can be
merged in a stable form before the Feb 3rd cutoff. Also,
don't forget to update the documentation as part of your
feature.

Finally, a gentle reminder to all contributors: there seems
to have been a recent increase in test and system test
failures. Please take some time starting now to stabilize
the codebase so we can ensure a high quality and timely
2.8.0 release!

Thanks to all of you for your contributions,
John

On Sat, 2021-01-23 at 18:15 +0300, Ivan Ponomarev wrote:
> Hi John,
> 
> KIP-418 is already implemented and reviewed, but I don't see it in the 
> release plan. Can it be added?
> 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-418%3A+A+method-chaining+way+to+branch+KStream
> 
> Regards,
> 
> Ivan
> 
> On 22.01.2021 21:49, John Roesler wrote:
> > Sure thing, Leah!
> > -John
> > On Thu, Jan 21, 2021, at 07:54, Leah Thomas wrote:
> > > Hi John,
> > > 
> > > KIP-659 was just accepted as well, can it be added to the release plan?
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-659%3A+Improve+TimeWindowedDeserializer+and+TimeWindowedSerde+to+handle+window+size
> > > 
> > > Thanks,
> > > Leah
> > > 
> > > On Thu, Jan 14, 2021 at 9:36 AM John Roesler  wrote:
> > > 
> > > > Hi David,
> > > > 
> > > > Thanks for the heads-up; it's added.
> > > > 
> > > > -John
> > > > 
> > > > On Thu, 2021-01-14 at 08:43 +0100, David Jacot wrote:
> > > > > Hi John,
> > > > > 
> > > > > KIP-700 just got accepted. Can we add it to the release plan?
> > > > > 
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-700%3A+Add+Describe+Cluster+API
> > > > > 
> > > > > Thanks,
> > > > > David
> > > > > 
> > > > > On Wed, Jan 13, 2021 at 11:22 PM John Roesler 
> > > > wrote:
> > > > > 
> > > > > > Thanks, Gary! Sorry for the oversight.
> > > > > > -John
> > > > > > 
> > > > > > On Wed, 2021-01-13 at 21:25 +, Gary Russell wrote:
> > > > > > > Can you add a link to the summary page [1]?
> > > > > > > 
> > > > > > > I always start there.
> > > > > > > 
> > > > > > > Thanks
> > > > > > > 
> > > > > > > [1]:
> > > > > > https://cwiki.apache.org/confluence/display/KAFKA/Future+release+plan
> > > > > > > 
> > > > > > > 
> > > > > > > From: John Roesler 
> > > > > > > Sent: Wednesday, January 13, 2021 4:11 PM
> > > > > > > To: dev@kafka.apache.org 
> > > > > > > Subject: Re: [DISCUSS] Apache Kafka 2.8.0 release
> > > > > > > 
> > > > > > > Hello again, all,
> > > > > > > 
> > > > > > > I have published a release plan at
> > > > > > > 
> > > > > > 
> > > > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=173081737
> > > > > > > 
> > > > > > > I have included all the KIPs that are currently approved,
> > > > > > > but I am happy to make adjustments as necessary.
> > > > > > > 
> > > > > > > The KIP freeze is Jan 27th.
> > > > > > > 
> > > > > > > Please let me know if you have any objections.
> > > > > > > 
> > > > > > > Thanks,
> > > > > > > -John
> > > > > > > 
> > > > > > > On Wed, 2021-01-06 at 23:30 -0600, John Roesler wrote:
> > > > > > > > Hello All,
> > > > > > > > 
> > > > > > > > I'd like to volunteer to be the release manager for our next
> > > > > > > > feature release, 2.8.0. If there are no objections, I'll
> > > > > > > > send out the release plan soon.
> > > > > > > > 
> > > > > > > > Thanks,
> > > > > > > > John Roesler
> > > > > > > > 
> > > > > > > 
> > > > > > > 
> > > > > > 
> > > > > > 
> > > > > > 
> > > > 
> > > > 
> > > > 
> > > 
> 




Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #413

2021-01-27 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Update zookeeper to 3.5.9 (#9977)


--
[...truncated 3.56 MB...]

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5141955d, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@31c2aaf6, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@31c2aaf6, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@74438cad, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@74438cad, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5b099620, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5b099620, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@7449d27b, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@7449d27b, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@202ee817, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@202ee817, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@228fdf97, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@228fdf97, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@37057787, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@37057787, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@5bd93d90, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@5bd93d90, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@2bb24168, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@2bb24168, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@18941dfe, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@18941dfe, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@4f80b3b2, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@4f80b3b2, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@608f5a32, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@608f5a32, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@77abe61b, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 

[GitHub] [kafka-site] mimaison merged pull request #322: KAFKA-6223: Use archive.apache.org for older releases

2021-01-27 Thread GitBox


mimaison merged pull request #322:
URL: https://github.com/apache/kafka-site/pull/322


   







Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #444

2021-01-27 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Update zookeeper to 3.5.9 (#9977)


--
[...truncated 3.57 MB...]
TopologyTestDriverEosTest > shouldFlushStoreForFirstInput() STARTED

TopologyTestDriverEosTest > shouldFlushStoreForFirstInput() PASSED

TopologyTestDriverEosTest > shouldProcessFromSourceThatMatchPattern() STARTED

TopologyTestDriverEosTest > shouldProcessFromSourceThatMatchPattern() PASSED

TopologyTestDriverEosTest > shouldCaptureSinkTopicNamesIfWrittenInto() STARTED

TopologyTestDriverEosTest > shouldCaptureSinkTopicNamesIfWrittenInto() PASSED

TopologyTestDriverEosTest > shouldUpdateStoreForNewKey() STARTED

TopologyTestDriverEosTest > shouldUpdateStoreForNewKey() PASSED

TopologyTestDriverEosTest > shouldSendRecordViaCorrectSourceTopicDeprecated() 
STARTED

TopologyTestDriverEosTest > shouldSendRecordViaCorrectSourceTopicDeprecated() 
PASSED

TopologyTestDriverEosTest > shouldPunctuateOnWallClockTime() STARTED

TopologyTestDriverEosTest > shouldPunctuateOnWallClockTime() PASSED

TopologyTestDriverEosTest > shouldSetRecordMetadata() STARTED

TopologyTestDriverEosTest > shouldSetRecordMetadata() PASSED

TopologyTestDriverEosTest > shouldNotUpdateStoreForLargerValue() STARTED

TopologyTestDriverEosTest > shouldNotUpdateStoreForLargerValue() PASSED

TopologyTestDriverEosTest > shouldReturnCorrectInMemoryStoreTypeOnly() STARTED

TopologyTestDriverEosTest > shouldReturnCorrectInMemoryStoreTypeOnly() PASSED

TopologyTestDriverEosTest > shouldThrowForMissingTime() STARTED

TopologyTestDriverEosTest > shouldThrowForMissingTime() PASSED

TopologyTestDriverEosTest > shouldCaptureInternalTopicNamesIfWrittenInto() 
STARTED

TopologyTestDriverEosTest > shouldCaptureInternalTopicNamesIfWrittenInto() 
PASSED

TopologyTestDriverEosTest > shouldPunctuateOnWallClockTimeDeprecated() STARTED

TopologyTestDriverEosTest > shouldPunctuateOnWallClockTimeDeprecated() PASSED

TopologyTestDriverEosTest > shouldProcessRecordForTopic() STARTED

TopologyTestDriverEosTest > shouldProcessRecordForTopic() PASSED

TopologyTestDriverEosTest > shouldForwardRecordsFromSubtopologyToSubtopology() 
STARTED

TopologyTestDriverEosTest > shouldForwardRecordsFromSubtopologyToSubtopology() 
PASSED

TopologyTestDriverEosTest > shouldNotUpdateStoreForSmallerValue() STARTED

TopologyTestDriverEosTest > shouldNotUpdateStoreForSmallerValue() PASSED

TopologyTestDriverEosTest > shouldCreateStateDirectoryForStatefulTopology() 
STARTED

TopologyTestDriverEosTest > shouldCreateStateDirectoryForStatefulTopology() 
PASSED

TopologyTestDriverEosTest > shouldNotRequireParameters() STARTED

TopologyTestDriverEosTest > shouldNotRequireParameters() PASSED

TopologyTestDriverEosTest > shouldPunctuateIfWallClockTimeAdvances() STARTED

TopologyTestDriverEosTest > shouldPunctuateIfWallClockTimeAdvances() PASSED

WindowStoreFacadeTest > shouldReturnIsOpen() STARTED

WindowStoreFacadeTest > shouldReturnIsOpen() PASSED

WindowStoreFacadeTest > shouldReturnName() STARTED

WindowStoreFacadeTest > shouldReturnName() PASSED

WindowStoreFacadeTest > shouldPutWithUnknownTimestamp() STARTED

WindowStoreFacadeTest > shouldPutWithUnknownTimestamp() PASSED

WindowStoreFacadeTest > shouldPutWindowStartTimestampWithUnknownTimestamp() 
STARTED

WindowStoreFacadeTest > shouldPutWindowStartTimestampWithUnknownTimestamp() 
PASSED

WindowStoreFacadeTest > shouldReturnIsPersistent() STARTED

WindowStoreFacadeTest > shouldReturnIsPersistent() PASSED

WindowStoreFacadeTest > shouldForwardDeprecatedInit() STARTED

WindowStoreFacadeTest > shouldForwardDeprecatedInit() PASSED

WindowStoreFacadeTest > shouldForwardClose() STARTED

WindowStoreFacadeTest > shouldForwardClose() PASSED

WindowStoreFacadeTest > shouldForwardFlush() STARTED

WindowStoreFacadeTest > shouldForwardFlush() PASSED

WindowStoreFacadeTest > shouldForwardInit() STARTED

WindowStoreFacadeTest > shouldForwardInit() PASSED

> Task :streams:upgrade-system-tests-0102:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0102:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0102:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:compileTestJava
> Task :streams:upgrade-system-tests-0102:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:testClasses
> Task :streams:upgrade-system-tests-0102:checkstyleTest
> Task :streams:upgrade-system-tests-0102:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:test
> Task :streams:upgrade-system-tests-0110:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0110:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0110:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0110:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0110:compileTestJava
> Task 

[GitHub] [kafka-site] miguno commented on pull request #326: KAFKA-8930: Port changes for MM2 docs to 2.6 docs

2021-01-27 Thread GitBox


miguno commented on pull request #326:
URL: https://github.com/apache/kafka-site/pull/326#issuecomment-768466468


   This LGTM, though (1) there were some minor HTML changes not directly 
related to the original PR and (2) I didn't test this PR locally myself to 
ensure proper HTML rendering etc.







[GitHub] [kafka-site] bbejeck commented on pull request #326: KAFKA-8930: Port changes for MM2 docs to 2.6 docs

2021-01-27 Thread GitBox


bbejeck commented on pull request #326:
URL: https://github.com/apache/kafka-site/pull/326#issuecomment-768461055


   ping @miguno for a +1







[GitHub] [kafka-site] bbejeck opened a new pull request #326: KAFKA-8930: Port changes for MM2 docs to 2.6 docs

2021-01-27 Thread GitBox


bbejeck opened a new pull request #326:
URL: https://github.com/apache/kafka-site/pull/326


   The MM2 docs are already in for 2.7 via 
https://github.com/apache/kafka-site/pull/324; this PR adds them to 2.6







[GitHub] [kafka-site] miguno commented on pull request #324: KAFKA-8930: MirrorMaker v2 documentation

2021-01-27 Thread GitBox


miguno commented on pull request #324:
URL: https://github.com/apache/kafka-site/pull/324#issuecomment-768416513


   kafka/docs PR is up at https://github.com/apache/kafka/pull/9983







[GitHub] [kafka-site] bbejeck commented on pull request #324: KAFKA-8930: MirrorMaker v2 documentation

2021-01-27 Thread GitBox


bbejeck commented on pull request #324:
URL: https://github.com/apache/kafka-site/pull/324#issuecomment-768404929


   > We should also add these docs to kafka/docs repo
   
   @omkreddy, yes a PR for kafka/docs is coming soon







Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #461

2021-01-27 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12233: Align the length passed to FileChannel by 
`FileRecords.writeTo` (#9970)

[github] MINOR: Remove redundant apostrophe in doc (#9976)

[github] MINOR: Remove outdated comment in Connect's WorkerCoordinator (#9805)

[github] MINOR: Remove redundant casting and if condition from ConnectSchema 
(#9959)


--
[...truncated 7.16 MB...]

TopologyTestDriverEosTest > shouldUseSinkSpecificSerializers() PASSED

TopologyTestDriverEosTest > shouldFlushStoreForFirstInput() STARTED

TopologyTestDriverEosTest > shouldFlushStoreForFirstInput() PASSED

TopologyTestDriverEosTest > shouldProcessFromSourceThatMatchPattern() STARTED

TopologyTestDriverEosTest > shouldProcessFromSourceThatMatchPattern() PASSED

TopologyTestDriverEosTest > shouldCaptureSinkTopicNamesIfWrittenInto() STARTED

TopologyTestDriverEosTest > shouldCaptureSinkTopicNamesIfWrittenInto() PASSED

TopologyTestDriverEosTest > shouldUpdateStoreForNewKey() STARTED

TopologyTestDriverEosTest > shouldUpdateStoreForNewKey() PASSED

TopologyTestDriverEosTest > shouldSendRecordViaCorrectSourceTopicDeprecated() 
STARTED

TopologyTestDriverEosTest > shouldSendRecordViaCorrectSourceTopicDeprecated() 
PASSED

TopologyTestDriverEosTest > shouldPunctuateOnWallClockTime() STARTED

TopologyTestDriverEosTest > shouldPunctuateOnWallClockTime() PASSED

TopologyTestDriverEosTest > shouldSetRecordMetadata() STARTED

TopologyTestDriverEosTest > shouldSetRecordMetadata() PASSED

TopologyTestDriverEosTest > shouldNotUpdateStoreForLargerValue() STARTED

TopologyTestDriverEosTest > shouldNotUpdateStoreForLargerValue() PASSED

TopologyTestDriverEosTest > shouldReturnCorrectInMemoryStoreTypeOnly() STARTED

TopologyTestDriverEosTest > shouldReturnCorrectInMemoryStoreTypeOnly() PASSED

TopologyTestDriverEosTest > shouldThrowForMissingTime() STARTED

TopologyTestDriverEosTest > shouldThrowForMissingTime() PASSED

TopologyTestDriverEosTest > shouldCaptureInternalTopicNamesIfWrittenInto() 
STARTED

TopologyTestDriverEosTest > shouldCaptureInternalTopicNamesIfWrittenInto() 
PASSED

TopologyTestDriverEosTest > shouldPunctuateOnWallClockTimeDeprecated() STARTED

TopologyTestDriverEosTest > shouldPunctuateOnWallClockTimeDeprecated() PASSED

TopologyTestDriverEosTest > shouldProcessRecordForTopic() STARTED

TopologyTestDriverEosTest > shouldProcessRecordForTopic() PASSED

TopologyTestDriverEosTest > shouldForwardRecordsFromSubtopologyToSubtopology() 
STARTED

TopologyTestDriverEosTest > shouldForwardRecordsFromSubtopologyToSubtopology() 
PASSED

TopologyTestDriverEosTest > shouldNotUpdateStoreForSmallerValue() STARTED

TopologyTestDriverEosTest > shouldNotUpdateStoreForSmallerValue() PASSED

TopologyTestDriverEosTest > shouldCreateStateDirectoryForStatefulTopology() 
STARTED

TopologyTestDriverEosTest > shouldCreateStateDirectoryForStatefulTopology() 
PASSED

TopologyTestDriverEosTest > shouldNotRequireParameters() STARTED

TopologyTestDriverEosTest > shouldNotRequireParameters() PASSED

TopologyTestDriverEosTest > shouldPunctuateIfWallClockTimeAdvances() STARTED

TopologyTestDriverEosTest > shouldPunctuateIfWallClockTimeAdvances() PASSED

> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:testClasses
> Task :streams:upgrade-system-tests-0101:checkstyleTest
> Task :streams:upgrade-system-tests-0101:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:test
> Task :streams:upgrade-system-tests-0102:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0102:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0102:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:compileTestJava
> Task :streams:upgrade-system-tests-0102:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:testClasses
> Task :streams:upgrade-system-tests-0102:checkstyleTest
> Task :streams:upgrade-system-tests-0102:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:test
> Task :streams:upgrade-system-tests-0110:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0110:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0110:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0110:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0110:compileTestJava
> Task :streams:upgrade-system-tests-0110:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0110:testClasses
> Task 

[GitHub] [kafka-site] omkreddy commented on pull request #324: KAFKA-8930: MirrorMaker v2 documentation

2021-01-27 Thread GitBox


omkreddy commented on pull request #324:
URL: https://github.com/apache/kafka-site/pull/324#issuecomment-768372236


   We should also add these docs to `kafka/docs` repo







[GitHub] [kafka-site] bbejeck merged pull request #324: KAFKA-8930: MirrorMaker v2 documentation

2021-01-27 Thread GitBox


bbejeck merged pull request #324:
URL: https://github.com/apache/kafka-site/pull/324


   







Jenkins build is back to normal : Kafka » kafka-trunk-jdk8 #412

2021-01-27 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-708: Rack aware Kafka Streams with pluggable StandbyTask assignor

2021-01-27 Thread John Roesler
Hi Levani,

Thanks for this KIP! I think this is really high value; it was something I was 
disappointed I didn’t get to do as part of KIP-441.

Rack awareness is a feature provided by other distributed systems as well. I 
wonder if your KIP could devote a section to summarizing what rack awareness 
looks like in other distributed systems, to help us put this design in context. 

Thanks!
John


On Tue, Jan 26, 2021, at 16:46, Levani Kokhreidze wrote:
> Hello all,
> 
> I’d like to start a discussion on KIP-708 [1], which aims to introduce 
> rack-aware standby task distribution in Kafka Streams.
> In addition to changes mentioned in the KIP, I’d like to get some ideas 
> on additional change I have in mind. 
> Assuming the KIP moves forward, I was wondering if it makes sense to 
> configure Kafka Streams consumer instances with the rack ID passed via 
> the new StreamsConfig#RACK_ID_CONFIG property. 
> In practice, that would mean that when “rack.id” is 
> configured in Kafka Streams, it will automatically translate into the 
> ConsumerConfig#CLIENT_RACK_CONFIG config for all the KafkaConsumer 
> clients that are used by Kafka Streams internally.
> 
> [1] 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-708%3A+Rack+aware+Kafka+Streams+with+pluggable+StandbyTask+assignor
>  
> 
> 
> P.S 
> I have draft PR ready, if it helps the discussion moving forward, I can 
> provide the draft PR link in this thread.
> 
> Regards, 
> Levani
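
The config pass-through Levani proposes above can be sketched as follows. This is a hypothetical illustration of the KIP-708 proposal, not released Kafka API: the "rack.id" key is the one proposed in the KIP, "client.rack" is the existing consumer config key, and the function name is invented for this sketch.

```python
# Sketch of KIP-708's proposed config pass-through (hypothetical key names):
# a Streams-level "rack.id", when present, is copied into the "client.rack"
# config of every consumer that Streams creates internally.
RACK_ID_CONFIG = "rack.id"          # proposed StreamsConfig key (KIP-708)
CLIENT_RACK_CONFIG = "client.rack"  # existing ConsumerConfig key

def internal_consumer_config(streams_config: dict) -> dict:
    """Build the config handed to an internal consumer from the Streams config."""
    consumer_config = dict(streams_config)
    rack_id = consumer_config.pop(RACK_ID_CONFIG, None)
    if rack_id is not None:
        # Only set client.rack if the user has not overridden it explicitly.
        consumer_config.setdefault(CLIENT_RACK_CONFIG, rack_id)
    return consumer_config

config = {"application.id": "my-app", "rack.id": "eu-central-1a"}
print(internal_consumer_config(config))
# -> {'application.id': 'my-app', 'client.rack': 'eu-central-1a'}
```

If "rack.id" is absent, the consumer config is passed through unchanged, so the setting stays strictly opt-in.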


Jenkins build is back to normal : Kafka » kafka-trunk-jdk11 #443

2021-01-27 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-709: Extend OffsetFetch requests to accept multiple group ids

2021-01-27 Thread Thomas Scott
Hi Magnus,

  Thanks for the review, I've added the // MOVED marker and an explanation as requested.

Thanks

  Tom


On Wed, Jan 27, 2021 at 12:05 PM Magnus Edenhill  wrote:

> Hey Thomas,
>
> I'm late to the game.
>
> It looks like the "top level" ErrorCode moved from the top-level to the
> Group array, which makes sense,
> but it would be good if it was marked as // MOVED in the KIP and also a
> note that top level errors that
> are unrelated to the group will be returned as per-group errors.
>
>
> Regards,
> Magnus
>
>
> Den tis 26 jan. 2021 kl 15:42 skrev Thomas Scott :
>
> > Thanks David I've updated it.
> >
> > On Tue, Jan 26, 2021 at 1:55 PM David Jacot  wrote:
> >
> > > Great. That answers my question!
> > >
> > > Thomas, I suggest adding a Related/Future Work section in the
> > > KIP to link KIP-699 more explicitly.
> > >
> > > Thanks,
> > > David
> > >
> > > On Tue, Jan 26, 2021 at 1:30 PM Thomas Scott  wrote:
> > >
> > > > Hi Mickael/David,
> > > >
> > > >   I feel like the combination of these 2 KIPs gives the complete
> > solution
> > > > but they can be implemented independently. I have added a description
> > and
> > > > links to KIP-699 to KIP-709 to this effect.
> > > >
> > > > Thanks
> > > >
> > > >   Tom
> > > >
> > > >
> > > > On Tue, Jan 26, 2021 at 11:44 AM Mickael Maison <
> > > mickael.mai...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi Thomas,
> > > > > Thanks, the KIP looks good.
> > > > >
> > > > > David,
> > > > > I started working on exactly that a few weeks ago:
> > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-699%3A+FindCoordinators
> > > > > I hope to complete my draft and start a discussion later on this
> > week.
> > > > >
> > > > > Thanks
> > > > >
> > > > > On Tue, Jan 26, 2021 at 10:06 AM David Jacot 
> > > > wrote:
> > > > > >
> > > > > > Hi Thomas,
> > > > > >
> > > > > > Thanks for the KIP. Overall, the KIP looks good to me.
> > > > > >
> > > > > > I have only one question: The FindCoordinator API only supports
> > > > > > resolving one group id at the time. If we want to get the offsets
> > for
> > > > > > say N groups, that means that we have to first issue N
> > > FindCoordinator
> > > > > > requests, wait for the responses, group by coordinators, and then
> > > > > > send a OffsetFetch request per coordinator. I wonder if we should
> > > > > > also extend the FindCoordinator API to support resolving multiple
> > > > > > groups as well. This would make the implementation in the admin
> > > > > > client a bit easier and would ensure that we can handle multiple
> > > > > > groups end-to-end. Have you thought about this?
> > > > > >
> > > > > > Best,
> > > > > > David
> > > > > >
> > > > > > On Tue, Jan 26, 2021 at 10:13 AM Rajini Sivaram <
> > > > rajinisiva...@gmail.com
> > > > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Hi Thomas,
> > > > > > >
> > > > > > > Thanks for the KIP, this is a useful addition for admin use
> > cases.
> > > It
> > > > > may
> > > > > > > be worth starting the voting thread soon if we want to get this
> > > into
> > > > > 2.8.0.
> > > > > > >
> > > > > > > Regards,
> > > > > > >
> > > > > > > Rajini
> > > > > > >
> > > > > > >
> > > > > > > On Mon, Jan 25, 2021 at 1:52 PM Thomas Scott  >
> > > > wrote:
> > > > > > >
> > > > > > > > Thanks Ismael, that's a lot better. I've updated the KIP with
> > > this
> > > > > > > > behaviour instead.
> > > > > > > >
> > > > > > > > On Mon, Jan 25, 2021 at 11:42 AM Ismael Juma <
> > ism...@juma.me.uk>
> > > > > wrote:
> > > > > > > >
> > > > > > > > > Thanks for the KIP, Thomas. One question below:
> > > > > > > > >
> > > > > > > > > Should an Admin client with this new functionality be used
> > > > against
> > > > > an
> > > > > > > old
> > > > > > > > > > broker that cannot handle these requests then the methods
> > > will
> > > > > throw
> > > > > > > > > > UnsupportedVersionException as per the usual pattern.
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Did we consider automatically falling back to the single
> > group
> > > id
> > > > > > > request
> > > > > > > > > if the more efficient one is not supported?
> > > > > > > > >
> > > > > > > > > Ismael
> > > > > > > > >
> > > > > > > > > On Mon, Jan 25, 2021 at 3:34 AM Thomas Scott <
> > t...@confluent.io
> > > >
> > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Hi,
> > > > > > > > > >
> > > > > > > > > > I'm starting this thread to discuss KIP-709 to extend
> > > > OffsetFetch
> > > > > > > > > requests
> > > > > > > > > > to accept multiple group ids. Please check out the KIP
> > here:
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=173084258
> > > > > > > > > >
> > > > > > > > > > Any comments much appreciated.
> > > > > > > > > >
> > > > > > > > > > thanks,
> > > > > > > > > >
> > > > > > > > > > Tom
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > 
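
The request flow David describes in the quoted thread above (one FindCoordinator round trip per group, then a batched OffsetFetch per coordinator under KIP-709) can be sketched schematically. The helper names below are hypothetical stand-ins, not Kafka client API:

```python
from collections import defaultdict

def fetch_offsets_for_groups(group_ids, find_coordinator, offset_fetch):
    """Hypothetical admin-client flow for KIP-709.

    find_coordinator(group_id) -> coordinator   (one request per group today)
    offset_fetch(coordinator, group_ids) -> {group_id: offsets}   (batched)
    """
    # Step 1: resolve a coordinator per group id (N FindCoordinator requests).
    groups_by_coordinator = defaultdict(list)
    for group_id in group_ids:
        groups_by_coordinator[find_coordinator(group_id)].append(group_id)

    # Step 2: one batched OffsetFetch per coordinator instead of one per group.
    offsets = {}
    for coordinator, gids in groups_by_coordinator.items():
        offsets.update(offset_fetch(coordinator, gids))
    return offsets
```

With KIP-699's batched FindCoordinator, step 1 could collapse into a single request as well, which is the end-to-end combination discussed in the thread.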

Re: [VOTE] KIP-709: Extend OffsetFetch requests to accept multiple group ids.

2021-01-27 Thread Tom Bentley
+1 (non-binding),

Thanks for the KIP.

On Wed, Jan 27, 2021 at 10:33 AM David Jacot  wrote:

> Thanks for the KIP, Tom.
>
> +1 (binding)
>
> Best,
> David
>
> On Wed, Jan 27, 2021 at 11:12 AM Rajini Sivaram 
> wrote:
>
> > +1 (binding)
> >
> > Thanks for the KIP, Tom.
> >
> > Regards,
> >
> > Rajini
> >
> >
> > On Tue, Jan 26, 2021 at 9:51 AM Thomas Scott  wrote:
> >
> > > Hey all,
> > >
> > >  I'd like to start the vote for
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=173084258
> > >
> > > Thanks
> > >
> > >   Tom
> > >
> >
>


Re: [DISCUSS] KIP-709: Extend OffsetFetch requests to accept multiple group ids

2021-01-27 Thread Magnus Edenhill
Hey Thomas,

I'm late to the game.

It looks like the "top level" ErrorCode moved from the top-level to the
Group array, which makes sense,
but it would be good if it was marked as // MOVED in the KIP and also a
note that top level errors that
are unrelated to the group will be returned as per-group errors.


Regards,
Magnus




Re: [VOTE] KIP-709: Extend OffsetFetch requests to accept multiple group ids.

2021-01-27 Thread David Jacot
Thanks for the KIP, Tom.

+1 (binding)

Best,
David



Re: [VOTE] KIP-709: Extend OffsetFetch requests to accept multiple group ids.

2021-01-27 Thread Rajini Sivaram
+1 (binding)

Thanks for the KIP, Tom.

Regards,

Rajini




[jira] [Created] (KAFKA-12244) Deleting a topic within metadata.max.idle after last message floods log with warnings

2021-01-27 Thread Bart van Deenen (Jira)
Bart van Deenen created KAFKA-12244:
---

 Summary: Deleting a topic within metadata.max.idle after last 
message floods log with warnings
 Key: KAFKA-12244
 URL: https://issues.apache.org/jira/browse/KAFKA-12244
 Project: Kafka
  Issue Type: Improvement
  Components: producer 
Affects Versions: 2.5.0
 Environment: Linux, Confluent 5.5.0
Reporter: Bart van Deenen


In a test we produce to a topic, then stop producing and delete the topic ca. 
30 seconds later.

Several minutes later, this leads to a flood of WARN messages (ca. 10 per 
second) that itself lasts for several minutes.

WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-2] 
Error while fetching metadata with correlation id 141 : 
\{test-topic=UNKNOWN_TOPIC_OR_PARTITION}

Investigation has shown that the issue can be avoided by setting the 
metadata.max.idle.ms producer property to a value shorter than the interval 
between the last produced message and the topic deletion.

The issue itself is not critical, but it can heavily pollute your log, thereby 
obscuring (or possibly losing) important messages.
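
A minimal sketch of the workaround described in the report, assuming the standard producer property name metadata.max.idle.ms (default 300000 ms, i.e. 5 minutes); the bootstrap address is a placeholder:

```python
# Workaround sketch for KAFKA-12244: expire producer metadata for idle topics
# sooner than the gap between the last send and the topic delete, so the
# producer stops refreshing metadata before the topic disappears and never
# sees UNKNOWN_TOPIC_OR_PARTITION for it.
producer_config = {
    "bootstrap.servers": "localhost:9092",  # placeholder address
    # Default is 300000 (5 minutes); the test deletes the topic ~30 s after
    # the last message, so expire idle-topic metadata well before that.
    "metadata.max.idle.ms": 20_000,
}
```

This only suppresses the warning flood for this test setup; the underlying behavior (the producer warning repeatedly for a deleted-but-still-cached topic) is what the ticket asks to improve.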

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #442

2021-01-27 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Call logSegments.toBuffer only when required (#9971)

[github] KAFKA-12233: Align the length passed to FileChannel by 
`FileRecords.writeTo` (#9970)


--
[...truncated 3.57 MB...]
OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord() 
STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord() 
PASSED

OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord() 
STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord() 
PASSED

OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp() 
STARTED

OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp() 
PASSED

OutputVerifierTest > shouldNotAllowNullExpectedRecordForCompareValue() STARTED

OutputVerifierTest > shouldNotAllowNullExpectedRecordForCompareValue() PASSED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord()
 PASSED

OutputVerifierTest > shouldNotAllowNullProducerRecordForCompareKeyValue() 
STARTED

OutputVerifierTest > shouldNotAllowNullProducerRecordForCompareKeyValue() PASSED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord()
 PASSED

OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp() STARTED

OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp() PASSED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord()
 PASSED

OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord() 
STARTED

OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord() 
PASSED

OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord() 
STARTED

OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord() 
PASSED

OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord()
 PASSED

OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestamp() 
STARTED

OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestamp() 
PASSED

OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord()
 PASSED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp() STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp() PASSED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord()
 PASSED

KeyValueStoreFacadeTest > shouldReturnIsOpen() STARTED

KeyValueStoreFacadeTest > shouldReturnIsOpen() PASSED

KeyValueStoreFacadeTest > shouldDeleteAndReturnPlainValue() STARTED

KeyValueStoreFacadeTest > shouldDeleteAndReturnPlainValue() PASSED

KeyValueStoreFacadeTest > shouldReturnName() STARTED

KeyValueStoreFacadeTest > shouldReturnName() PASSED

KeyValueStoreFacadeTest > shouldPutWithUnknownTimestamp() STARTED

KeyValueStoreFacadeTest > shouldPutWithUnknownTimestamp() PASSED

KeyValueStoreFacadeTest > shouldPutAllWithUnknownTimestamp() STARTED

KeyValueStoreFacadeTest > shouldPutAllWithUnknownTimestamp() PASSED

KeyValueStoreFacadeTest > shouldReturnIsPersistent() STARTED

KeyValueStoreFacadeTest > shouldReturnIsPersistent() PASSED

KeyValueStoreFacadeTest > shouldForwardDeprecatedInit() STARTED

KeyValueStoreFacadeTest > shouldForwardDeprecatedInit() PASSED

KeyValueStoreFacadeTest > shouldPutIfAbsentWithUnknownTimestamp() STARTED

KeyValueStoreFacadeTest > shouldPutIfAbsentWithUnknownTimestamp() PASSED

KeyValueStoreFacadeTest > shouldForwardClose() STARTED

KeyValueStoreFacadeTest > shouldForwardClose() PASSED

KeyValueStoreFacadeTest > shouldForwardFlush() STARTED

KeyValueStoreFacadeTest > shouldForwardFlush() PASSED


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #411

2021-01-27 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10793: move handling of FindCoordinatorFuture to fix race 
condition (#9671)

[github] MINOR: Fix meaningless message in assertNull validation (#9965)


--
[...truncated 7.10 MB...]
MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@25aded4c, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@25aded4c, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2cf14c14, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2cf14c14, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@12c0b857, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@12c0b857, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@42070ccc, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@42070ccc, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5aff30ec, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5aff30ec, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@39c9e73e, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@39c9e73e, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@234aec30, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@234aec30, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4e27e15b, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4e27e15b, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@107e47b7, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@107e47b7, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4b68d368, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4b68d368, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@68110e06, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@68110e06, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@3fad2474, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@3fad2474, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@6ef165e, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@6ef165e, 
timestamped =