Contributor access

2020-09-19 Thread Piotr Rżysko
Hello,

I would like to contribute to Kafka. Can you please add me to the
contributor list?
My JIRA username: piotr.rzysko

Best regards,
Piotr Rżysko


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #73

2020-09-19 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Streams docs fixes (#9308)


--
[...truncated 6.53 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED


Re: [DISCUSSION] Upgrade system tests to python 3

2020-09-19 Thread Guozhang Wang
I've triggered a system test on top of your branch.

Maybe you could also re-run the Jenkins unit tests: currently all of
them fail even though you've only touched the system tests, so I'd like to
confirm at least one successful run.

On Wed, Sep 16, 2020 at 3:37 AM Nikolay Izhikov  wrote:

> Hello, Guozhang.
>
> > I can help run the test suite once your PR is cleanly rebased to verify
> the whole suite works
>
> Thank you for joining the review.
>
> 1. PR rebased on the current trunk.
>
> 2. I triggered all tests in my private environment to verify them after the
> rebase.
> I will inform you once the tests pass in my environment.
>
> 3. We need a new ducktape release [1] to be able to merge the PR [2].
> For now, the PR is based on the ducktape trunk branch [3], not a
> specific release.
> If the ducktape team needs any help with the release, please let me
> know.
>
> [1] https://github.com/confluentinc/ducktape/issues/245
> [2] https://github.com/apache/kafka/pull/9196
> [3]
> https://github.com/apache/kafka/pull/9196/files#diff-9235a7bdb1ca9268681c0e56f3f3609bR39
>
> > On 16 Sep 2020, at 07:32, Guozhang Wang wrote:
> >
> > Hello Nikolay,
> >
> > I can help run the test suite once your PR is cleanly rebased to verify
> the
> > whole suite works and then I can merge (I'm trusting Ivan and Magnus here
> > for their reviews :)
> >
> > Guozhang
> >
> > On Mon, Sep 14, 2020 at 3:56 AM Nikolay Izhikov 
> wrote:
> >
> >> Hello!
> >>
> >> I got 2 approvals from Ivan Daschinskiy and Magnus Edenhill.
> >> Committers, please, join the review.
> >>
> >>> On 3 Sep 2020, at 11:06, Nikolay Izhikov wrote:
> >>>
> >>> Hello!
> >>>
> >>> Just a friendly reminder.
> >>>
> >>> Patch to resolve some kind of technical debt - python2 in system tests
> >> is ready!
> >>> Can someone, please, take a look?
> >>>
> >>> https://github.com/apache/kafka/pull/9196
> >>>
> On 28 Aug 2020, at 11:19, Nikolay Izhikov wrote:
> 
>  Hello!
> 
>  Any feedback on this?
> What should I do additionally to prepare the system tests migration?
> 
> > On 24 Aug 2020, at 11:17, Nikolay Izhikov wrote:
> >
> > Hello.
> >
> > PR [1] is ready.
> > Please, review.
> >
> > But, I need help with the two following questions:
> >
> > 1. We need a new release of ducktape which includes fixes [2], [3]
> for
> >> python3.
> > I created the issue in ducktape repo [4].
> > Can someone help me with the release?
> >
> > 2. I know that some companies run system tests for trunk on a regular
> > basis.
> > Can someone show me some results of these runs?
> > Then I can compare failures in my PR with those on trunk.
> >
> > Results [5] of running all tests for my PR are available in the ticket [6].
> >
> > ```
> > SESSION REPORT (ALL TESTS)
> > ducktape version: 0.8.0
> > session_id:   2020-08-23--002
> > run time: 1010 minutes 46.483 seconds
> > tests run:684
> > passed:   505
> > failed:   9
> > ignored:  170
> > ```
> >
> > [1] https://github.com/apache/kafka/pull/9196
> > [2]
> >>
> https://github.com/confluentinc/ducktape/commit/23bd5ab53802e3a1e1da1ddf3630934f33b02305
> > [3]
> >>
> https://github.com/confluentinc/ducktape/commit/bfe53712f83b025832d29a43cde3de3d7803106f
> > [4] https://github.com/confluentinc/ducktape/issues/245
> > [5]
> >> https://issues.apache.org/jira/secure/attachment/13010366/report.txt
> > [6] https://issues.apache.org/jira/browse/KAFKA-10402
> >
> >> On 14 Aug 2020, at 21:26, Ismael Juma wrote:
> >>
> >> +1
> >>
> >> On Fri, Aug 14, 2020 at 7:42 AM John Roesler 
> >> wrote:
> >>
> >>> Thanks Nikolay,
> >>>
> >>> No objection. This would be very nice to have.
> >>>
> >>> Thanks,
> >>> John
> >>>
> >>> On Fri, Aug 14, 2020, at 09:18, Nikolay Izhikov wrote:
>  Hello.
> 
> > If anyone's interested in porting it to Python 3 it would be a
> good
> >>> change.
> 
>  I’ve created a ticket [1] to upgrade system tests to python3.
>  Does someone have any additional inputs or objections for this
> >> change?
> 
>  [1] https://issues.apache.org/jira/browse/KAFKA-10402
> 
> 
> > On 1 July 2020, at 00:26, Gokul Ramanan Subramanian <
> >>> gokul24...@gmail.com> wrote:
> >
> > Thanks Colin.
> >
> > While on the subject of system tests, I occasionally see tests time out
> > (even on a large machine such as an m5.4xlarge EC2 instance running
> > Linux). Are there any knobs that the system tests provide to control
> > timeouts / throughputs across all tests?
> > Thanks.
> >
> > On Tue, Jun 30, 2020 at 6:32 PM Colin McCabe  >
> >>> wrote:
> 

Re: [DISCUSS] KIP-671: Shutdown Streams Application when appropriate exception is thrown

2020-09-19 Thread Guozhang Wang
Sounds good to me. I also feel that this call should be non-blocking, but I
was confused by the discussion thread, which suggested the API is designed in
a blocking fashion; that contradicted my perspective, hence I asked for
clarification :)

Guozhang


On Wed, Sep 16, 2020 at 12:32 PM Walker Carlson 
wrote:

> Hello Guozhang,
>
> As for the logging, I plan on having three logs: first, the client logs that
> it is requesting an application shutdown; second, the leader logs the
> processId of the invoker; third, the StreamRebalanceListener logs that it is
> closing because of a `stream.appShutdown`. Hopefully this will be enough
> to make the cause of the close clear.
>
> I see what you mean about the name being dependent on the behavior of the
> method, so I will try to clarify. This is how I currently envision the call
> working.
>
> It is not an option to directly initiate a shutdown through a StreamThread
> object from a KafkaStreams object because "KafkaConsumer is not safe for
> multi-threaded access". Instead, the method in KafkaStreams finds the first
> alive thread and sets a flag on that StreamThread. The StreamThread will
> observe the flag in its run loop, then set the error code and trigger a
> rebalance, after which it will stop processing. Once KafkaStreams has set
> the flag it will return true and continue running. If there are no alive
> threads the shutdown will fail and return false.
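The flag-based handoff described above can be sketched as a tiny, self-contained model. All names here (StreamThreadModel, requestShutdown) are invented for illustration; this is not the actual Kafka Streams implementation, which additionally sets an error code and triggers a rebalance.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical minimal model of the non-blocking shutdown handoff.
class ShutdownSketch {

    static class StreamThreadModel {
        final AtomicBoolean shutdownRequested = new AtomicBoolean(false);
        volatile boolean alive;

        StreamThreadModel(boolean alive) { this.alive = alive; }

        // The real run loop would set the error code and trigger a
        // rebalance before stopping; here we only model "stop processing".
        void runLoopIteration() {
            if (shutdownRequested.get()) {
                alive = false;
            }
        }
    }

    // Non-blocking: sets the flag on the first alive thread and returns
    // immediately; false means no alive thread could accept the request.
    static boolean requestShutdown(List<StreamThreadModel> threads) {
        for (StreamThreadModel t : threads) {
            if (t.alive) {
                t.shutdownRequested.set(true);
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        List<StreamThreadModel> threads =
            List.of(new StreamThreadModel(false), new StreamThreadModel(true));
        System.out.println(requestShutdown(threads)); // one thread is alive
        threads.get(1).runLoopIteration();            // thread observes the flag
        System.out.println(threads.get(1).alive);     // it stopped processing
        System.out.println(requestShutdown(threads)); // no alive threads left
    }
}
```

The caller keeps running after setting the flag, which is exactly why the false return value matters: it is the only signal that no thread was alive to carry out the shutdown.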
>
> What do you think the blocking behavior should be? I think that the
> StreamThread should definitely stop to prevent any of the corruption we are
> trying to avoid by shutting down, but I don't see any advantage of the
> KafkaStreams call blocking.
>
> You are correct to be concerned about the uncaught exception handler. If
> there are no live StreamThreads the rebalance will not be started at all,
> and this would be a problem. However, the user should be aware of this
> because of the false return value and can react appropriately. This would
> also be fixed if we implemented our own handler, so we could rebalance
> before the StreamThread closes.
>
> With that in mind I believe that `initiateClosingAllClients` would be an
> appropriate name. WDYT?
>
> Walker
>
>
> On Wed, Sep 16, 2020 at 11:43 AM Guozhang Wang  wrote:
>
> > Hello Walker,
> >
> > Thanks for the updated KIP. Previously I was also a bit hesitant about
> > the newly added public exception to communicate a user-requested
> > whole-application shutdown, but the reason I did not bring this up is
> > that I feel there is still an operational need to differentiate the
> > scenario where an instance is closed because of a) a local
> > `streams.close()`, or b) a remote instance's `stream.shutdownApp`. So if
> > we are going to remove that exception (which I'm also in favor of), we
> > should at least differentiate the two at the log4j level.
> >
> > Regarding the semantics that "It should wait to receive the shutdown
> > request in the rebalance it triggers": I'm not sure I fully understand,
> > since this may be triggered from the stream thread's uncaught exception
> > handler, and if that thread is already dead then a rebalance listener
> > might not be fired at all. Although I know this is an implementation
> > detail that you probably abstracted away from the proposal, I'd like to
> > make sure that we are on the same page regarding its blocking behavior,
> > since it is quite crucial to users as well. Could you elaborate a bit
> > more?
> >
> > Regarding the function name, I guess my personal preference would depend
> on
> > its actual blocking behavior as above :)
> >
> >
> > Guozhang
> >
> >
> >
> >
> > On Wed, Sep 16, 2020 at 10:52 AM Walker Carlson 
> > wrote:
> >
> > > Hello all again,
> > >
> > > I have updated the kip to no longer use an exception and instead add a
> > > method to the KafkaStreams class, this seems to satisfy everyone's
> > concerns
> > > about how and when the functionality will be invoked.
> > >
> > > There is still a question over the name. We must decide between
> > > "shutdownApplication", "initiateCloseAll", "closeAllInstances" or some
> > > variation.
> > >
> > > I am rather indifferent to the name; I think they all get the point
> > > across. The clearest to me would be shutdownApplication or
> > > closeAllInstances, but WDYT?
> > >
> > > Walker
> > >
> > >
> > >
> > > On Wed, Sep 16, 2020 at 9:30 AM Walker Carlson 
> > > wrote:
> > >
> > > > Hello Guozhang and Bruno,
> > > >
> > > > Thanks for the feedback.
> > > >
> > > > I will respond in two parts, but I would like to clarify that I am
> > > > not tied down to any of these names; since we are still deciding
> > > > whether we want to have an exception or not, I would rather not get
> > > > tripped up on choosing a name just yet.
> > > >
> > > > Guozhang:
> > > > 1) You are right about the INCOMPLETE_SOURCE_TOPIC_METADATA error.
> > > > I am not planning on changing the behavior of 

Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #73

2020-09-19 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Streams docs fixes (#9308)


--
[...truncated 3.29 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #74

2020-09-19 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Streams docs fixes (#9308)


--
[...truncated 3.29 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithPersistentStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithPersistentStore PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED


Re: [DISCUSS] KIP-663: API to Start and Shut Down Stream Threads and to Request Closing of Kafka Streams Clients

2020-09-19 Thread Guozhang Wang
I'm fine with always removing threads in the DEAD state from
localThreadsMetadata().

On the other hand, if we want to give users more info for debugging, we can
consider eating the complexity ourselves by guaranteeing that, within the
user-registered uncaught exception handler, localThreadsMetadata()
would include the thread that is dying / throwing at that moment.
Implementation-wise we can always register our own internal handler, which
calls the user-registered one if it is set and only after that removes the
thread metadata from the instance cache. Personally I think it is a bit
too much, so I'm in favor of "adding it later when people complain" :)


Guozhang
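The "internal handler wraps the user handler" idea can be illustrated with a short sketch. All names here are invented stand-ins, not actual Kafka Streams internals; the point is only the ordering: delegate to the user handler first, then evict the thread's metadata, so the user handler still sees the dying thread.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical model: the library-owned handler runs on thread death,
// calls the user-registered handler (if any) while the metadata is still
// cached, and removes the metadata only after delegation.
class HandlerWrappingSketch {
    interface Handler { void handle(String threadName, Throwable t); }

    static final Map<String, String> threadMetadata = new ConcurrentHashMap<>();
    static Handler userHandler; // may be null if the user registered none

    static void onThreadDeath(String threadName, Throwable t) {
        if (userHandler != null) {
            userHandler.handle(threadName, t); // metadata still visible here
        }
        threadMetadata.remove(threadName);     // evicted only afterwards
    }

    public static void main(String[] args) {
        threadMetadata.put("stream-thread-1", "state=DEAD, tasks=[]");
        userHandler = (name, t) ->
            System.out.println("handler sees: " + threadMetadata.get(name));
        onThreadDeath("stream-thread-1", new RuntimeException("boom"));
        System.out.println("after: " + threadMetadata.containsKey("stream-thread-1"));
    }
}
```

If the eviction happened before delegation, the user handler would observe null metadata, which is exactly the debugging gap discussed above.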


On Fri, Sep 18, 2020 at 5:17 PM Sophie Blee-Goldman 
wrote:

> Makes sense to me :)
>
> On Thu, Sep 17, 2020 at 9:34 AM Bruno Cadonna  wrote:
>
> > Hi Sophie,
> >
> > Thank you for the feedback! I replied inline.
> >
> > Best,
> > Bruno
> >
> > On 16.09.20 19:19, Sophie Blee-Goldman wrote:
> > >>
> > >> We guarantee that the metadata of the dead stream threads will be
> > >> returned by KafkaStreams#localThreadsMetadata() at least until the
> > >> next call to KafkaStreams#addStreamThread() or
> > >> KafkaStreams#removeStreamThread() after the stream thread
> > >> transitioned to DEAD
> > >
> > >
> > > This seems kind of tricky... personally I would find it pretty odd if
> > > I queried the local thread metadata and found two threads, A (alive)
> > > and B (dead), then called removeStreamThread() and suddenly had zero.
> > > Or if I called addStreamThread() and still had two threads.
> > >
> >
> > The behavior might be unusual, but it is well defined and not random by
> > any means.
> >
> > > Both of those results seem to indicate that only live threads "count"
> and
> > > are returned
> > > by localThreadsMetadata(). But in reality we do temporarily keep the
> dead
> > > thread,
> > > but only for the arbitrary amount of time until the next time you want
> to
> > > add or
> > > remove some other stream thread? That seems like a weird side effect of
> > the
> > > add/removeStreamThread APIs.
> > >
> >
> > This is not a side effect that just happens to occur; it is a
> > guarantee that users get. It gives users the possibility to retrieve the
> > metadata of the dead stream threads since the last call to
> > add/removeStreamThread. Admittedly, this guarantee overlaps with the
> > current/planned implementation, but that is more of a coincidence.
> >
> > I would be more concerned about add/removeStreamThread being called
> > from different threads, which could happen if an uncaught exception
> > handler that wants to replace a stream thread is invoked while a thread
> > responsible for automated scaling up is running.
> >
> > > If we really think users might want to log the metadata of dead
> threads,
> > > then
> > > let's just do that for them or give them a way to do exactly that.
> > >
> >
> > Logging the metadata of dead stream threads for the user is a valid
> > alternative. Giving users a way to do exactly that is hard because the
> > StreamThread class is not part of the public API; they would always need
> > to call a method on the KafkaStreams object, where we already have
> > localThreadsMetadata().
> >
> > > I'm not that concerned about the backwards compatibility of removing
> > > dead threads from localThreadsMetadata, because I find it hard to
> > > believe that users do anything other than skip over them in the list
> > > (set?) that gets returned. But maybe someone can chime in with an
> > > example use case.
> > >
> >
> > I am also not too much concerned about backwards compatibility. That
> > would indeed be a side effect of the current proposal.
> >
> > > I'm actually even a little skeptical that any users might want to log
> > > the metadata of a dead thread, since all of the metadata is either
> > > only useful for IQ on live threads or already covered by other easily
> > > discoverable logging elsewhere, or both.
> > >
> >
> > All of that said, I actually agree with you that there is not much
> > interesting information in the metadata of a dead stream thread. The
> > name of the stream thread is known in the uncaught exception handler.
> > The names of the clients, like the consumer etc., used by the stream
> > thread can be derived from the name of the stream thread. Finally, the
> > sets of active and standby tasks should be empty for a dead stream
> > thread.
> >
> > Hence, I backpedal and propose to filter out dead stream threads from
> > localThreadsMetadata(). WDYT?
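The filtering proposed above amounts to a one-line predicate on the thread state. A minimal sketch, with an invented ThreadInfo record standing in for the real thread metadata class:

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch: localThreadsMetadata() returning only threads
// that are not in the DEAD state. ThreadInfo is an illustrative stand-in.
class FilterDeadThreadsSketch {
    record ThreadInfo(String name, String state) {}

    static List<ThreadInfo> localThreadsMetadata(List<ThreadInfo> all) {
        return all.stream()
                  .filter(t -> !"DEAD".equals(t.state()))
                  .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<ThreadInfo> all = List.of(
            new ThreadInfo("thread-A", "RUNNING"),
            new ThreadInfo("thread-B", "DEAD"));
        // Only the RUNNING thread survives the filter.
        System.out.println(localThreadsMetadata(all).size());
    }
}
```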
> >
> > > On Wed, Sep 16, 2020 at 2:07 AM Bruno Cadonna 
> > wrote:
> > >
> > >> Hi again,
> > >>
> > >> I just realized that if we filter out DEAD stream threads in
> > >> localThreadsMetadata(), users cannot log the metadata of dying stream
> > >> threads in the uncaught exception handler.
> > >>
> > >> I realized this thanks to the example Guozhang requested in the KIP.
> > >> 

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #73

2020-09-19 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Generator config-specific HTML ids (#8878)

[github] MINOR: Log warn message with details when there's kerberos login issue 
(#9236)


--
[...truncated 6.58 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #72

2020-09-19 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Log warn message with details when there's kerberos login issue 
(#9236)


--
[...truncated 3.29 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #72

2020-09-19 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Generator config-specific HTML ids (#8878)

[github] MINOR: Log warn message with details when there's kerberos login issue 
(#9236)


--
[...truncated 3.27 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #71

2020-09-19 Thread Apache Jenkins Server
See 


Changes:

[manikumar.reddy] KAFKA-8098: fix the flaky test by disabling the auto commit 
to avoid member rejoining

[github] MINOR: Generator config-specific HTML ids (#8878)


--
[...truncated 6.58 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 

Jenkins build is back to normal : Kafka » kafka-trunk-jdk8 #71

2020-09-19 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-10504) It will not work to skip to InitProducerId as lastError is always null

2020-09-19 Thread HaiyuanZhao (Jira)
HaiyuanZhao created KAFKA-10504:
---

 Summary: It will not work to skip to InitProducerId as lastError 
is always null
 Key: KAFKA-10504
 URL: https://issues.apache.org/jira/browse/KAFKA-10504
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Reporter: HaiyuanZhao
Assignee: HaiyuanZhao


KAFKA-8805 introduced an optimization for the txn abort process: if the last 
error is an INVALID_PRODUCER_ID_MAPPING error, skip directly to InitProducerId.

However, this optimization does not work, because the variable lastError is 
always null at that point: the txn state transitions from ABORTABLE_ERROR to 
ABORTING_TRANSACTION when beginAbort is called, and lastError is reset to null 
during that transition.

As a result, EndTxn is always called before InitProducerId.
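The failure mode described above can be sketched with a minimal, hypothetical 
model of the state handling (this is not the real Kafka client code; class and 
method names here are illustrative only):

{code:java}
// Toy model of the transaction state transition: beginAbort() clears
// lastError before the skip-to-InitProducerId decision is ever made,
// so the check below can never succeed.
public class TxnStateSketch {
    enum State { ABORTABLE_ERROR, ABORTING_TRANSACTION }

    static class TransactionManager {
        State state = State.ABORTABLE_ERROR;
        RuntimeException lastError =
                new RuntimeException("INVALID_PRODUCER_ID_MAPPING");

        void beginAbort() {
            // Transitioning out of ABORTABLE_ERROR resets lastError to null...
            state = State.ABORTING_TRANSACTION;
            lastError = null;
        }

        boolean shouldSkipToInitProducerId() {
            // ...so by the time this runs, lastError is always null.
            return lastError != null
                    && "INVALID_PRODUCER_ID_MAPPING".equals(lastError.getMessage());
        }
    }

    public static void main(String[] args) {
        TransactionManager tm = new TransactionManager();
        tm.beginAbort();
        // The optimization never kicks in: EndTxn is always sent first.
        System.out.println(tm.shouldSkipToInitProducerId()); // prints "false"
    }
}
{code}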

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10503) MockProducer doesn't throw ClassCastException when no partition for topic

2020-09-19 Thread Jira
Gonzalo Muñoz Fernández created KAFKA-10503:
---

 Summary: MockProducer doesn't throw ClassCastException when no 
partition for topic
 Key: KAFKA-10503
 URL: https://issues.apache.org/jira/browse/KAFKA-10503
 Project: Kafka
  Issue Type: Improvement
  Components: clients, producer 
Affects Versions: 2.6.0
Reporter: Gonzalo Muñoz Fernández


Though {{MockProducer}} accepts serializers in its constructors, it does not 
check during {{send}} that those serializers can actually serialize the 
key/value included in the {{ProducerRecord}}.

[This 
check|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/producer/MockProducer.java#L499-L500]
 is only done if there is a partition assigned for that topic.

It would be an enhancement if these serialize methods were also invoked in 
simple scenarios, where no partition is assigned to the topic.

e.g.:
{code:java}
@Test
public void shouldThrowClassCastException() {
    MockProducer<Integer, String> producer = new MockProducer<>(true,
            new IntegerSerializer(), new StringSerializer());
    ProducerRecord record = new ProducerRecord(TOPIC, "key1", "value1");
    try {
        producer.send(record);
        fail("Should have thrown ClassCastException because the record "
                + "cannot be serialized with the given serializers");
    } catch (ClassCastException e) {}
}
{code}

Currently, to obtain the ClassCastException, the topic must be assigned to a 
partition:

{code:java}
PartitionInfo partitionInfo = new PartitionInfo(TOPIC, 0, null, null, null);
Cluster cluster = new Cluster(null, emptyList(), asList(partitionInfo),
        emptySet(), emptySet());
producer = new MockProducer(cluster,
        true,
        new DefaultPartitioner(),
        new IntegerSerializer(),
        new StringSerializer());
{code}
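The partition-gated behavior described above can be modeled with a small, 
self-contained sketch (this is a toy stand-in, not the real {{MockProducer}}; 
all names here are illustrative):

{code:java}
import java.util.HashSet;
import java.util.Set;

// Toy model: the key serializer (and hence the ClassCastException for a
// wrongly typed key) is only reached when a partition exists for the topic.
interface Serializer<T> { byte[] serialize(String topic, T data); }

class ToyMockProducer {
    private final Set<String> topicsWithPartitions = new HashSet<>();
    private final Serializer<Integer> keySerializer =
            (topic, data) -> new byte[]{data.byteValue()};

    void addPartitionFor(String topic) { topicsWithPartitions.add(topic); }

    void send(String topic, Object key) {
        if (topicsWithPartitions.contains(topic)) {
            // The cast mirrors what happens at the generic serializer call
            // site: a String key with an Integer serializer blows up here.
            keySerializer.serialize(topic, (Integer) key);
        }
        // Without a partition, the badly typed key slips through unchecked.
    }
}
{code}

With no partition registered, {{send("t", "not-an-int")}} silently succeeds; 
after {{addPartitionFor("t")}}, the same call throws ClassCastException.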





--
This message was sent by Atlassian Jira
(v8.3.4#803005)