Build failed in Jenkins: kafka-2.3-jdk8 #171

2020-02-07 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-9507; AdminClient should check for missing committed offsets


--
[...truncated 3.03 MB...]

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithNoLiveIsr PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElection STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElection PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElection STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElection PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElection STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElection PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithEmptyIsr STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithEmptyIsr PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionAllIsrSimultaneouslyShutdown 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionAllIsrSimultaneouslyShutdown PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionEnabled 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionEnabled 
PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrNotLive 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrNotLive 
PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionDisabled 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionDisabled 
PASSED

kafka.controller.ControllerChannelManagerTest > 
testUpdateMetadataInterBrokerProtocolVersion STARTED

kafka.controller.ControllerChannelManagerTest > 
testUpdateMetadataInterBrokerProtocolVersion PASSED

kafka.controller.ControllerChannelManagerTest > testLeaderAndIsrRequestIsNew 
STARTED

kafka.controller.ControllerChannelManagerTest > testLeaderAndIsrRequestIsNew 
PASSED

kafka.controller.ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicQueuedForDeletion STARTED

kafka.controller.ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicQueuedForDeletion PASSED

kafka.controller.ControllerChannelManagerTest > 
testLeaderAndIsrRequestSentToLiveOrShuttingDownBrokers STARTED

kafka.controller.ControllerChannelManagerTest > 
testLeaderAndIsrRequestSentToLiveOrShuttingDownBrokers PASSED

kafka.controller.ControllerChannelManagerTest > 
testStopReplicaInterBrokerProtocolVersion STARTED

kafka.controller.ControllerChannelManagerTest > 
testStopReplicaInterBrokerProtocolVersion PASSED

kafka.controller.ControllerChannelManagerTest > 
testStopReplicaSentOnlyToLiveAndShuttingDownBrokers STARTED

kafka.controller.ControllerChannelManagerTest > 
testStopReplicaSentOnlyToLiveAndShuttingDownBrokers PASSED

kafka.controller.ControllerChannelManagerTest > testStopReplicaGroupsByBroker 
STARTED

kafka.controller.ControllerChannelManagerTest > testStopReplicaGroupsByBroker 
PASSED

kafka.controller.ControllerChannelManagerTest > 
testUpdateMetadataDoesNotIncludePartitionsWithoutLeaderAndIsr STARTED

kafka.controller.ControllerChannelManagerTest > 
testUpdateMetadataDoesNotIncludePartitionsWithoutLeaderAndIsr PASSED

kafka.controller.ControllerChannelManagerTest > 
testMixedDeleteAndNotDeleteStopReplicaRequests STARTED

kafka.controller.ControllerChannelManagerTest > 
testMixedDeleteAndNotDeleteStopReplicaRequests PASSED

kafka.controller.ControllerChannelManagerTest > 
testLeaderAndIsrInterBrokerProtocolVersion STARTED

kafka.controller.ControllerChannelManagerTest > 
testLeaderAndIsrInterBrokerProtocolVersion PASSED

kafka.controller.ControllerChannelManagerTest > testUpdateMetadataRequestSent 
STARTED

kafka.controller.ControllerChannelManagerTest > testUpdateMetadataRequestSent 
PASSED

kafka.controller.ControllerChannelManagerTest > 
testUpdateMetadataRequestDuringTopicDeletion STARTED

kafka.controller.ControllerChannelManagerTest > 
testUpdateMetadataRequestDuringTopicDeletion PASSED

kafka.controller.ControllerChannelManagerTest > 
testUpdateMetadataIncludesLiveOrShuttingDownBrokers STARTED

kafka.controller.ControllerChannelManagerTest > 
testUpdateMetadataIncludesLiveOrSh

Re: [DISCUSS] KIP-552: Add interface to handle unused config

2020-02-07 Thread John Roesler
Looking at where the log message comes from:
org.apache.kafka.common.config.AbstractConfig#logUnused
it seems like maybe the warning just happens when you pass
extra configs to a client that it has no knowledge of (and therefore
doesn't "use").
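Concretely, that check is just bookkeeping. Below is a minimal sketch under a much-simplified model of AbstractConfig (illustrative only, not the real implementation): every key that was supplied but never read back by the client counts as unused.

```java
import java.util.*;

// Simplified sketch of the used-vs-supplied bookkeeping behind
// AbstractConfig#logUnused (illustrative only, not Kafka's actual code).
class UnusedConfigSketch {
    private final Map<String, Object> originals;
    private final Set<String> used = new HashSet<>();

    UnusedConfigSketch(Map<String, Object> originals) {
        this.originals = new HashMap<>(originals);
    }

    // Reading a key marks it as "used" by the client.
    Object get(String key) {
        used.add(key);
        return originals.get(key);
    }

    // Everything supplied but never read is what logUnused warns about.
    Set<String> unused() {
        Set<String> keys = new HashSet<>(originals.keySet());
        keys.removeAll(used);
        return keys;
    }
}
```

Under this model, a consumer handed a Streams-style prefixed key such as main.consumer.max.poll.records would never call get() on it, so the key surfaces as unused even though Streams itself handled it.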

I'm now wondering whether Streams is actually sending extra configs to the
clients, although it seems like we _don't_ see these warnings in other cases.

Maybe some of the folks who actually see these messages can try to pinpoint
where exactly the rogue configs are coming from?

I might have overlooked a message at some point, but it wasn't clear to
me that we were talking about warnings that were actually caused by Streams.
I thought the unknown configs were something user-specified.

Thanks,
-John
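For anyone who wants these messages quieter today without a code change, a logging-config workaround is possible. This is a sketch under assumptions: Kafka clients log via SLF4J/log4j, and because AbstractConfig creates its logger from the concrete class, the message may actually appear under the concrete config class (e.g. ConsumerConfig) rather than AbstractConfig itself, so the logger names below may need adjusting for your setup:

```properties
# Assumption: demote the "supplied but isn't a known config" messages by
# raising the level of the logger(s) that emit them. Note this also hides
# other WARN/INFO output from the same loggers.
log4j.logger.org.apache.kafka.common.config.AbstractConfig=ERROR
log4j.logger.org.apache.kafka.clients.consumer.ConsumerConfig=ERROR
```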

On Fri, Feb 7, 2020, at 13:10, Gwen Shapira wrote:
> Ah, got it! I am indeed curious why they do this :)
> 
> Maybe John can shed more light. But if we can't find a better fix, 
> perhaps the nice thing to do is really a separate logger, so users who 
> are not worried about shooting themselves in the foot can make those 
> warnings go away.
> 
> Gwen Shapira
> Engineering Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
> 
> On Fri, Feb 07, 2020 at 4:13 AM, Patrik Kleindl < pklei...@gmail.com > wrote:
> 
> > 
> > 
> > 
> > Hi Gwen
> > 
> > 
> > 
> > Kafka Streams is not a third party library and produces a lot of these
> > warnings, e.g.
> > 
> > 
> > 
> > *The configuration 'main.consumer.max.poll.records' was supplied but isn't
> > a known config.*
> > *The configuration 'admin.retries' was supplied but isn't a known config.*
> > and various others if you try to fine-tune the restoration consumer or
> > inject parameters for state stores.
> > This results in a lot of false positives; it worries new users at first,
> > and then teaches everyone to ignore the warnings altogether.
> > 
> > 
> > 
> > Unless this is taken care of, Kafka Streams users at least will
> > probably be better off having this at debug level.
> > 
> > 
> > 
> > Best regards
> > 
> > 
> > 
> > Patrik
> > 
> > 
> > 
> > On Thu, 6 Feb 2020 at 16:55, Gwen Shapira < g...@confluent.io > wrote:
> > 
> > 
> >> 
> >> 
> >> INFO is the default log level, and while it looks less "alarming" than
> >> WARN, users will still see it and in my experience, they will worry that
> >> something is wrong anyway. Or if INFO isn't the default, users won't see
> >> it, so it is no different from debug and we are left with no way of
> >> warning users that they misconfigured something.
> >> 
> >> 
> >> 
> >> The point is that "known configs" exist in Kafka as a validation step. It
> >> is there to protect users. So anything that makes the concerns about
> >> unknown configs invisible to users, makes the validation step useless and
> >> we may as well remove it. I'm against that - I think users should be made
> >> aware of misconfigs as much as possible - especially since if you misspell
> >> 
> >> "retention", you will lose data.
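The point above is that the known-config check is essentially a spell-checker for configuration. As a hedged sketch of what that protection could look like if taken one step further, with an illustrative known-config set and a textbook edit distance (nothing below is actual Kafka code):

```java
import java.util.*;

// Hedged sketch: a validation step could go beyond a flat warning and suggest
// the nearest known config for a misspelled key. The edit distance is a plain
// textbook Levenshtein implementation; the known-config set is illustrative.
class ConfigSpellCheck {
    static int distance(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++)
            for (int j = 1; j <= b.length(); j++)
                d[i][j] = Math.min(
                    Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                    d[i - 1][j - 1] + (a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1));
        return d[a.length()][b.length()];
    }

    // Return the closest known key, or null if nothing is plausibly a typo.
    static String suggest(String unknown, Set<String> known) {
        String best = null;
        int bestDist = Integer.MAX_VALUE;
        for (String k : known) {
            int dist = distance(unknown, k);
            if (dist < bestDist) { bestDist = dist; best = k; }
        }
        return bestDist <= 3 ? best : null; // only suggest near misses
    }
}
```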
> >> 
> >> 
> >> 
> >> If we look away from the symptom and go back to the actual cause:
> >> 
> >> 
> >> 
> >> I think Kafka had a way (and maybe it still does) for 3rd party developers
> >> who create client plugins (mostly interceptors) to make their configs
> >> "known". 3rd party developers should be responsible for the good
> >> experience of their users. Now it is possible that you'll pick a 3rd party
> >> library that didn't do it and have a worse user experience, but I am not
> >> sure it is the job of Apache Kafka to protect users from their choice of
> >> libraries
> >> (and as long as those libraries are OSS, users can fix them). Especially
> >> not at the expense of someone who doesn't use 3rd party libs.
> >> 
> >> 
> >> 
> >> Gwen
> >> 
> >> 
> >> 
> >> Gwen Shapira
> >> Engineering Manager | Confluent
> >> 650.450.2760 | @gwenshap
> >> Follow us: Twitter | blog
> >> 
> >> 
> >> 
> >> On Thu, Feb 06, 2020 at 2:06 AM, Artur Burtsev < artj...@gmail.com > wrote:
> >> 
> >> 
> >>> 
> >>> 
> >>> Hi John,
> >>> 
> >>> 
> >>> 
> >>> In our case it won't help, since we are running an instance per partition,
> >>> and even with summary only we get 32 warnings per rollout.
> >>> 
> >>> 
> >>> 
> >>> Hi Gwen,
> >>> 
> >>> 
> >>> 
> >>> Thanks for your reply. I understand and share your concern; I also
> >>> mentioned it earlier in the thread. Do you think it will work if we
> >>> change DEBUG to INFO?
> >>> 
> >>> 
> >>> 
> >>> Thanks,
> >>> Artur
> >>> 
> >>> 
> >>> 
> >>> On Thu, Feb 6, 2020 at 4:21 AM Gwen Shapira < g...@confluent.io > wrote:
> >>> 
>  Sorry for the late response. The reason that unused configs is at WARN is
>  that if you misspell a config, it means that it will not apply. In

Build failed in Jenkins: kafka-2.5-jdk8 #7

2020-02-07 Thread Apache Jenkins Server
See 


Changes:

[cmccabe] KAFKA-9519: Deprecate the --zookeeper flag in ConfigCommand (#8056)

[jason] KAFKA-9507; AdminClient should check for missing committed offsets


--
[...truncated 2.85 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo PASSED

org.apache.kafka.streams.MockProcessorContextTest > shouldCaptureOutputRecords 
STARTED

org.apache.kafka.streams.MockProcessorContextTest > shouldCaptureOutputRecords 
PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
fullConstructorShouldSetAllExpectedAttributes STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
fullConstructorShouldSetAllExpectedAttributes PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureCommitsAndAllowReset STARTED

org.apache.kafka.stream

Re: [DISCUSS] KIP-568: Explicit rebalance triggering on the Consumer

2020-02-07 Thread John Roesler
Hi all,

Thanks for the well-motivated KIP, Sophie. I had some alternatives in mind,
which I won't even bother to relate, because I feel the motivation makes a
compelling argument for the API as proposed.

One very minor point you might as well fix is that the API change is targeted at
KafkaConsumer (the implementation), but should be targeted at
Consumer (the interface).

I agree with your discomfort about the name. Adding a "rejoin" method seems
strange, since there's no "join" method; instead, the way you join the group
the first time is just by calling "subscribe". But "resubscribe" seems too
indirect from what we're really trying to do, which is to trigger a rebalance
by sending a new JoinGroup request.

Another angle is that we don't want the method to sound like something you 
should
be calling in normal circumstances, or people will be "tricked" into calling it 
unnecessarily.

So, I think "rejoinGroup" is fine, although a person _might_ be forgiven for
thinking they need to call it periodically or something. Did you consider
"triggerRebalance", which sounds pretty advanced-ish and accurately describes
what happens when you call it?

All in all, the KIP sounds good to me, and I'm in favor.

Thanks,
-John
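For concreteness, here is a hypothetical sketch of the shape being discussed. The method name is the KIP's working name, and the flag mechanics are an assumption about how a consumer could implement it, not actual Kafka code:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch of KIP-568's surface. Per the discussion above, the
// real change would land on the Consumer interface, not just KafkaConsumer.
interface RebalanceTriggering {
    void rejoinGroup();
}

class SketchConsumer implements RebalanceTriggering {
    final AtomicBoolean rejoinRequested = new AtomicBoolean(false);

    // Setting this flag would cause the next poll() to send a fresh
    // JoinGroup request, which is what actually triggers the rebalance.
    @Override
    public void rejoinGroup() {
        rejoinRequested.set(true);
    }
}
```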

On Fri, Feb 7, 2020, at 21:22, Anna McDonald wrote:
> This situation was discussed at length after a recent talk I gave. This KIP
> would be a great step towards increased availability and in facilitating
> lightweight rebalances.
> 
> anna
> 
> 
> 
> On Fri, Feb 7, 2020, 9:38 PM Sophie Blee-Goldman 
> wrote:
> 
> > Hi all,
> >
> > In light of some recent and upcoming rebalancing and availability
> > improvements, it seems we have a need for explicitly triggering a consumer
> > group rebalance. Therefore I'd like to propose adding a new
> > rejoinGroup() method
> > to the Consumer client (better method name suggestions are very welcome).
> >
> > Please take a look at the KIP and let me know what you think!
> >
> > KIP document:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-568%3A+Explicit+rebalance+triggering+on+the+Consumer
> >
> > JIRA: https://issues.apache.org/jira/browse/KAFKA-9525
> >
> > Cheers,
> > Sophie
> >
>


Build failed in Jenkins: kafka-trunk-jdk8 #4216

2020-02-07 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9519: Deprecate the --zookeeper flag in ConfigCommand (#8056)

[github] KAFKA-9507; AdminClient should check for missing committed offsets


--
[...truncated 2.84 MB...]
org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:testClasses
> Task :streams:upgrade-system-tests-0101:checkstyleTest
> Task :streams:upgrade-system-tests-0101:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:test
> Task :streams:upgrade-system-tests-0102:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0102:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0102:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:compileTestJava
> 

Re: [DISCUSS] KIP-568: Explicit rebalance triggering on the Consumer

2020-02-07 Thread Anna McDonald
This situation was discussed at length after a recent talk I gave. This KIP
would be a great step towards increased availability and in facilitating
lightweight rebalances.

anna



On Fri, Feb 7, 2020, 9:38 PM Sophie Blee-Goldman 
wrote:

> Hi all,
>
> In light of some recent and upcoming rebalancing and availability
> improvements, it seems we have a need for explicitly triggering a consumer
> group rebalance. Therefore I'd like to propose adding a new
> rejoinGroup() method
> to the Consumer client (better method name suggestions are very welcome).
>
> Please take a look at the KIP and let me know what you think!
>
> KIP document:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-568%3A+Explicit+rebalance+triggering+on+the+Consumer
>
> JIRA: https://issues.apache.org/jira/browse/KAFKA-9525
>
> Cheers,
> Sophie
>


Build failed in Jenkins: kafka-trunk-jdk11 #1139

2020-02-07 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9519: Deprecate the --zookeeper flag in ConfigCommand (#8056)

[github] KAFKA-9507; AdminClient should check for missing committed offsets


--
[...truncated 2.85 MB...]
org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-

[DISCUSS] KIP-568: Explicit rebalance triggering on the Consumer

2020-02-07 Thread Sophie Blee-Goldman
Hi all,

In light of some recent and upcoming rebalancing and availability
improvements, it seems we have a need for explicitly triggering a consumer
group rebalance. Therefore I'd like to propose adding a new rejoinGroup() method
to the Consumer client (better method name suggestions are very welcome).

Please take a look at the KIP and let me know what you think!

KIP document:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-568%3A+Explicit+rebalance+triggering+on+the+Consumer

JIRA: https://issues.apache.org/jira/browse/KAFKA-9525

Cheers,
Sophie


[jira] [Created] (KAFKA-9526) Augment topology description with serdes

2020-02-07 Thread Guozhang Wang (Jira)
Guozhang Wang created KAFKA-9526:


 Summary: Augment topology description with serdes
 Key: KAFKA-9526
 URL: https://issues.apache.org/jira/browse/KAFKA-9526
 Project: Kafka
  Issue Type: Improvement
Reporter: Guozhang Wang
Assignee: Guozhang Wang


Today we have multiple ways to infer and inherit serdes along the topology, and 
only fall back to the configured serde when inference does not apply. So it is 
a bit hard for users to reason about which operators inside the topology still 
lack a serde specification.

So I'd propose we augment the topology description with serde information on 
source / sink and state store operators.
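A hedged sketch of what an augmented description line might look like; the class and field names below are illustrative assumptions, not the real TopologyDescription API:

```java
// Hypothetical sketch of an augmented topology description line for a source
// node (names are illustrative, not Kafka Streams' actual API).
class SourceNodeDescription {
    final String topic;
    final String keySerde;   // null = falls back to the configured default
    final String valueSerde; // null = falls back to the configured default

    SourceNodeDescription(String topic, String keySerde, String valueSerde) {
        this.topic = topic;
        this.keySerde = keySerde;
        this.valueSerde = valueSerde;
    }

    // Make the fallback visible, so users can spot operators that still rely
    // on the configured serde rather than an explicit or inferred one.
    String describe() {
        String k = keySerde == null ? "<configured default>" : keySerde;
        String v = valueSerde == null ? "<configured default>" : valueSerde;
        return "Source: " + topic + " (keySerde: " + k + ", valueSerde: " + v + ")";
    }
}
```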



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk8 #4215

2020-02-07 Thread Apache Jenkins Server
See 


Changes:

[github] HOTFIX: Fix two test failures in JDK11 (#8063)

[github] KAFKA-9509; Fixing flakiness of


--
[...truncated 5.72 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled 

Build failed in Jenkins: kafka-trunk-jdk11 #1138

2020-02-07 Thread Apache Jenkins Server
See 


Changes:

[github] HOTFIX: Fix two test failures in JDK11 (#8063)

[github] KAFKA-9509; Fixing flakiness of


--
[...truncated 2.84 MB...]
org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org

Build failed in Jenkins: kafka-2.5-jdk8 #6

2020-02-07 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-9509; Fixing flakiness of


--
[...truncated 2.85 MB...]

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tes

[jira] [Resolved] (KAFKA-9507) AdminClient should check for missing committed offsets

2020-02-07 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-9507.

Fix Version/s: 2.4.1
   2.3.2
   2.5.0
   Resolution: Fixed

> AdminClient should check for missing committed offsets
> --
>
> Key: KAFKA-9507
> URL: https://issues.apache.org/jira/browse/KAFKA-9507
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: David Mao
>Priority: Major
>  Labels: newbie
> Fix For: 2.5.0, 2.3.2, 2.4.1
>
>
> I noticed this exception getting raised:
> {code}
> Caused by: java.lang.IllegalArgumentException: Invalid negative offset
>   at 
> org.apache.kafka.clients.consumer.OffsetAndMetadata.<init>(OffsetAndMetadata.java:50)
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClient$24$1.handleResponse(KafkaAdminClient.java:2832)
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.handleResponses(KafkaAdminClient.java:1032)
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1160)
> {code}
> The AdminClient should check for negative offsets in OffsetFetch responses in 
> the api `listConsumerGroupOffsets`.
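A minimal sketch of the guard described above, as standalone Java. This is a hypothetical helper for illustration, not the actual KafkaAdminClient code: partitions whose fetched offset is the negative "no committed offset" sentinel are skipped instead of being used to construct an OffsetAndMetadata.

```java
import java.util.HashMap;
import java.util.Map;

public class OffsetFilter {
    // Hypothetical sketch of the KAFKA-9507 fix: skip partitions whose
    // fetched offset is negative (the broker's "no committed offset"
    // sentinel) rather than passing it to OffsetAndMetadata, which
    // rejects negative offsets with IllegalArgumentException.
    static Map<String, Long> filterCommitted(Map<String, Long> fetched) {
        Map<String, Long> committed = new HashMap<>();
        for (Map.Entry<String, Long> entry : fetched.entrySet()) {
            if (entry.getValue() >= 0) {   // negative offsets are dropped
                committed.put(entry.getKey(), entry.getValue());
            }
        }
        return committed;
    }

    public static void main(String[] args) {
        Map<String, Long> fetched = new HashMap<>();
        fetched.put("events-0", 42L);
        fetched.put("events-1", -1L);      // no committed offset
        System.out.println(OffsetFilter.filterCommitted(fetched).keySet());
    }
}
```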



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-544: Make metrics exposed via JMX configurable

2020-02-07 Thread Xavier Léauté
Based on PR feedback I've updated the KIP to align the configs between
clients and brokers.
Broker configs now start with "metrics.xxx" instead of "kafka.metrics.xxx",
in line with client configs.
This is also more consistent with newer broker configs.

On Fri, Nov 8, 2019 at 12:23 PM Alexandre Dupriez <
alexandre.dupr...@gmail.com> wrote:

> Hello,
>
> This can be very handy when dealing with large numbers of partitions on a
> broker.
>
> I was recently experimenting with a third-party monitoring framework which
> provides a JMX collector [1] with the same mechanism to filter out the JMX
> beans retrieved from Kafka.
> When running a couple of tests with all filters removed, the time to fetch
> all beans could quickly become prohibitive as the number of partitions on
> the tested broker increased.
>
> After some investigation, the main source of "friction" was found in the
> (too) many RMI RPCs required to fetch the names and attributes of the JMX
> beans.
> Configuring the same JMX collector to run as a JVM agent, and taking care
> of unplugging the JMX-RMI connector, yielded significant gains (*).
>
> Note that this was obtained by fetching the beans via HTTP, with all values
> sent in a batch.
> I find one of the potential follow-up mentioned (exposing the beans via an
> alternative API) also very interesting from a performance perspective.
>
> [1] https://github.com/prometheus/jmx_exporter
> (*) On a 4-cores Xeon 8175M broker, hosting 1,000 replicas, the time to
> fetch all beans dropped from 13 seconds to ~400 ms.
>
> On Fri, Nov 8, 2019 at 5:29 PM, Guozhang Wang  wrote:
>
> > Sounds good, thanks.
> >
> > Guozhang
> >
> > On Fri, Nov 8, 2019 at 9:26 AM Xavier Léauté 
> wrote:
> >
> > > >
> > > > 1. I do feel there're similar needs for clients make JMX
> configurable.
> > > Some
> > > > context: in modules like Connect and Streams we have added /
> > refactored a
> > > > large number of metrics so far [0, 1], and although we've added a
> > > reporting
> > > > level config [2] to clients, this is statically defined at code and
> > > cannot
> > > > be dynamically changed either.
> > > >
> > >
> > > Thanks for providing some context there, I have updated the KIP to add
> > > equivalent configs for clients, streams, and connect
> > >
> > >
> > > > 2. This may be out of the scope of this KIP, but have you thought
> about
> > > how
> > > > to make the metrics collection to be configurable (i.e. basically for
> > > those
> > > > metrics which we know would not be exposed, we do not collect them
> > > either)
> > > > dynamically?
> > >
> > >
> > > Yes, given what you described above, it would make sense to look into
> > this.
> > > One difficulty though, is that we'd probably want to define this at the
> > > sensor level,
> > > which does not always map to the metric names users understand.
> > >
> > > There are also cases where someone may want to expose different sets of
> > > metrics
> > > using different reporters, so I think a reporting level config is still
> > > useful.
> > > For this KIP, I am proposing we stick to making reporting configurable,
> > > independent of the underlying collection mechanism.
> > >
> >
> >
> > --
> > -- Guozhang
> >
>
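The reporting-level filtering discussed in this thread can be pictured as an include/exclude regex pair applied to metric names before they are registered with a reporter. The sketch below is a standalone illustration of that idea only; it is not JmxReporter's implementation, and the bean names are made up for the example.

```java
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class MetricFilterSketch {
    // Hypothetical include/exclude filter over JMX bean names, in the
    // spirit of KIP-544's configurable metric reporting. A name is kept
    // only if it matches the include pattern and not the exclude pattern.
    static List<String> filter(List<String> names, Pattern include, Pattern exclude) {
        return names.stream()
                .filter(n -> include.matcher(n).matches())
                .filter(n -> !exclude.matcher(n).matches())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> names = Arrays.asList(
                "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec",
                "kafka.server:type=ReplicaManager,name=PartitionCount",
                "kafka.log:type=Log,name=Size,topic=foo,partition=0");
        Pattern include = Pattern.compile("kafka\\.server:.*");
        Pattern exclude = Pattern.compile(".*ReplicaManager.*");
        List<String> kept = filter(names, include, exclude);
        System.out.println(kept.size());                              // 1
        System.out.println(kept.get(0).contains("BrokerTopicMetrics")); // true
    }
}
```

Filtering at registration time, rather than in the collector, is what avoids the per-bean RMI round-trips Alexandre measured above.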


Build failed in Jenkins: kafka-trunk-jdk11 #1137

2020-02-07 Thread Apache Jenkins Server
See 


Changes:

[github] DOCS - clarify transactionalID and idempotent behavior (#7821)


--
[...truncated 2.85 MB...]

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.stream

[jira] [Created] (KAFKA-9525) Allow explicit rebalance triggering on the Consumer

2020-02-07 Thread Sophie Blee-Goldman (Jira)
Sophie Blee-Goldman created KAFKA-9525:
--

 Summary: Allow explicit rebalance triggering on the Consumer
 Key: KAFKA-9525
 URL: https://issues.apache.org/jira/browse/KAFKA-9525
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Reporter: Sophie Blee-Goldman


Currently the only way to explicitly trigger a rebalance is by unsubscribing 
the consumer. This has two drawbacks: it does not work with static membership, 
and it causes the consumer to revoke all its currently owned partitions. 
Streams relies on being able to enforce a rebalance for its version probing 
upgrade protocol and the upcoming KIP-441, both of which should be able to work 
with static membership and be able to leverage the improvements of KIP-429 to 
no longer revoke all owned partitions.

We should add an API that will allow users to explicitly trigger a rebalance 
without going through #unsubscribe.
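The difference between the two triggers can be sketched with a toy model. This is not the Consumer API: `enforceRebalance` is the proposed behavior, and the internal bookkeeping here is invented purely to contrast "revoke everything up front" with "keep partitions until the new assignment arrives" (the KIP-429 style).

```java
import java.util.ArrayList;
import java.util.List;

public class RebalanceSketch {
    // Toy model of a consumer's owned partitions, contrasting the two
    // rebalance triggers described above.
    List<String> owned = new ArrayList<>();

    // Today's workaround: unsubscribe() revokes all owned partitions
    // before the consumer rejoins the group.
    void rebalanceViaUnsubscribe() {
        owned.clear();
        rejoinGroup();
    }

    // Proposed: explicitly rejoin while retaining owned partitions
    // until the new assignment is received.
    void enforceRebalance() {
        rejoinGroup();
    }

    private void rejoinGroup() { /* group coordinator round-trip elided */ }

    public static void main(String[] args) {
        RebalanceSketch consumer = new RebalanceSketch();
        consumer.owned.add("topic-0");
        consumer.enforceRebalance();
        System.out.println(consumer.owned.size()); // 1: still owned
        consumer.rebalanceViaUnsubscribe();
        System.out.println(consumer.owned.size()); // 0: all revoked
    }
}
```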





Build failed in Jenkins: kafka-trunk-jdk8 #4214

2020-02-07 Thread Apache Jenkins Server
See 


Changes:

[github] DOCS - clarify transactionalID and idempotent behavior (#7821)


--
[...truncated 2.83 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources 

[jira] [Created] (KAFKA-9524) Default window retention does not consider grace period

2020-02-07 Thread Michael Bingham (Jira)
Michael Bingham created KAFKA-9524:
--

 Summary: Default window retention does not consider grace period
 Key: KAFKA-9524
 URL: https://issues.apache.org/jira/browse/KAFKA-9524
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.4.0
Reporter: Michael Bingham


In a windowed aggregation, if you specify a window size larger than the default 
window retention (1 day), Streams will implicitly set retention accordingly to 
accommodate windows of that size. For example,

 
{code:java}
.windowedBy(TimeWindows.of(Duration.ofDays(20))) 
{code}
In this case, Streams will implicitly set window retention to 20 days, and no 
exceptions will occur.

However, if you also include a non-zero grace period on the window, such as:

 
{code:java}
.windowedBy(TimeWindows.of(Duration.ofDays(20)).grace(Duration.ofMinutes(5)))
{code}
In this case, Streams will still implicitly set the window retention to 20 days 
(not 20 days + 5 minutes grace), and an exception will be thrown:
{code}
Exception in thread "main" java.lang.IllegalArgumentException: The retention
period of the window store KSTREAM-KEY-SELECT-02 must be no smaller
than its window size plus the grace period. Got size=[172800],
grace=[30], retention=[172800]
{code}
Ideally, Streams should include grace period when implicitly setting window 
retention.
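The rule the exception enforces is simply that retention must be at least window size plus grace. A minimal sketch of that arithmetic with java.time (the helper name is invented; this is not the Streams implementation):

```java
import java.time.Duration;

public class RetentionSketch {
    // The constraint from the error message above: a window store's
    // retention must be no smaller than its window size plus grace.
    static Duration minRetention(Duration windowSize, Duration grace) {
        return windowSize.plus(grace);
    }

    public static void main(String[] args) {
        Duration size = Duration.ofDays(20);     // 1,728,000,000 ms
        Duration grace = Duration.ofMinutes(5);  //       300,000 ms
        Duration retention = minRetention(size, grace);
        // The implicit default (size only) is too small once grace > 0:
        System.out.println(retention.toMillis() > size.toMillis()); // true
        System.out.println(retention.toMillis()); // 1728300000
    }
}
```

Until the default accounts for grace, setting the retention explicitly (e.g. via `Materialized#withRetention`) to at least size plus grace avoids the exception.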





[jira] [Created] (KAFKA-9523) Reduce flakiness of BranchedMultiLevelRepartitionConnectedTopologyTest

2020-02-07 Thread Boyang Chen (Jira)
Boyang Chen created KAFKA-9523:
--

 Summary: Reduce flakiness of 
BranchedMultiLevelRepartitionConnectedTopologyTest
 Key: KAFKA-9523
 URL: https://issues.apache.org/jira/browse/KAFKA-9523
 Project: Kafka
  Issue Type: Test
Reporter: Boyang Chen


KAFKA-9335 introduced an integration test to verify that the topology builder 
can survive building a complex topology. This test is sometimes flaky due to 
the stream client's connection to the broker, so we should consider making it 
less flaky by either converting it to a unit test or focusing on making the 
test logic more robust.





[jira] [Resolved] (KAFKA-9248) Foreign key join does not pickup default serdes and dies with NPE

2020-02-07 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-9248.

Resolution: Duplicate

Agreed. Closing this ticket as duplicate.

> Foreign key join does not pickup default serdes and dies with NPE
> -
>
> Key: KAFKA-9248
> URL: https://issues.apache.org/jira/browse/KAFKA-9248
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.4.0
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Major
>
> The foreign key join operator only works if `Serdes` are passed in by the 
> user via corresponding API methods.
> If one tries to fall back to default Serdes from `StreamsConfig` the operator 
> does not pick up those Serdes but dies with an NPE.





Re: [DISCUSS] KIP-552: Add interface to handle unused config

2020-02-07 Thread Gwen Shapira
Ah, got it! I am indeed curious why they do this :)

Maybe John can shed more light. But if we can't find a better fix, perhaps the 
nice thing to do is really a separate logger, so users who are not worried 
about shooting themselves in the foot can make those warnings go away.

Gwen Shapira
Engineering Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog

On Fri, Feb 07, 2020 at 4:13 AM, Patrik Kleindl < pklei...@gmail.com > wrote:

> 
> 
> 
> Hi Gwen
> 
> 
> 
> Kafka Streams is not a third party library and produces a lot of these
> warnings, e.g.
> 
> 
> 
> *The configuration 'main.consumer.max.poll.records' was supplied but isn't
> a known config.*
> *The configuration 'admin.retries' was supplied but isn't a known config.*
> and various others if you try to fine-tune the restoration consumer or
> inject parameters for state stores.
> This results in a lot of false positives and only makes new users worried
> at first and then leads them to ignore the warnings altogether.
> 
> 
> 
> Unless this is taken care of, at least Kafka Streams users will
> probably be better off having this at debug level.
> 
> 
> 
> Best regards
> 
> 
> 
> Patrik
> 
> 
> 
> On Thu, 6 Feb 2020 at 16:55, Gwen Shapira <g...@confluent.io> wrote:
> 
> 
>> 
>> 
>> INFO is the default log level, and while it looks less "alarming" than
>> WARN, users will still see it and in my experience, they will worry that
>> something is wrong anyway. Or if INFO isn't the default, users won't see
>> it, so it is no different from debug and we are left with no way of
>> warning users that they misconfigured something.
>> 
>> 
>> 
>> The point is that "known configs" exist in Kafka as a validation step. It
>> is there to protect users. So anything that makes the concerns about
>> unknown configs invisible to users, makes the validation step useless and
>> we may as well remove it. I'm against that - I think users should be made
>> aware of misconfigs as much as possible - especially since if you misspell
>> 
>> "retention", you will lose data.
>> 
>> 
>> 
>> If we look away from the symptom and go back to the actual cause
>> 
>> 
>> 
>> I think Kafka had a way (and maybe it still does) for 3rd party developers
>> who create client plugins (mostly interceptors) to make their configs
>> "known". 3rd party developers should be responsible for the good
>> experience of their users. Now it is possible that you'll pick a 3rd party
>> library that didn't do it and have a worse user experience, but I am not
>> sure it is the job of Apache Kafka to protect users from their choice of
>> libraries
>> (and as long as those libraries are OSS, users can fix them). Especially
>> not at the expense of someone who doesn't use 3rd party libs.
>> 
>> 
>> 
>> Gwen
>> 
>> 
>> 
>> Gwen Shapira
>> Engineering Manager | Confluent
>> 650.450.2760 | @gwenshap
>> Follow us: Twitter | blog
>> 
>> 
>> 
>> On Thu, Feb 06, 2020 at 2:06 AM, Artur Burtsev <artj...@gmail.com> wrote:
>> 
>>> Hi John,
>>> 
>>> In our case it won't help, since we are running an instance per partition and
>>> even with summary only we get 32 warnings per rollout.
>>> 
>>> Hi Gwen,
>>> 
>>> Thanks for your reply, I understand and share your concern; I also
>>> mentioned it earlier in the thread. Do you think it will work if we change
>>> DEBUG to INFO?
>>> 
>>> Thanks,
>>> Artur
>>> 
>>> On Thu, Feb 6, 2020 at 4:21 AM Gwen Shapira <g...@confluent.io> wrote:
>>> 
 Sorry for the late response. The reason that unused configs is at WARN is
 that if you misspell a config, it means that it will not apply. In some
 cases (default retention) you won't know until it is too late. We wanted
 to warn admins about possible misconfigurations.
 
 In the context of a company supporting Kafka - customers run logs at INFO
 level normally, so if we suspect a misconfiguration, we don't want to ask
 the customer to change the level to DEBUG and bounce the broker. It is
 time consuming and can be risky.
 
 *Gwen Shapira*
 Product Manager | Confluent
 650.450.2760 | @gwenshap
 Follow us: Twitter (https://twitter.com/ConfluentInc) | blog
 (http://www.confluent.io/blog)
 
 Sent via Superhuman (https://sprh.mn/?vip=gwe
[jira] [Resolved] (KAFKA-9177) Pause completed partitions on restore consumer

2020-02-07 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-9177.
--
Fix Version/s: 2.6.0
 Assignee: Guozhang Wang
   Resolution: Fixed

As part of the KAFKA-9113 fix, we will pause the restore consumer once the 
corresponding partition has completed restoration, so I'm resolving this ticket 
now.

> Pause completed partitions on restore consumer
> --
>
> Key: KAFKA-9177
> URL: https://issues.apache.org/jira/browse/KAFKA-9177
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Sophie Blee-Goldman
>Assignee: Guozhang Wang
>Priority: Major
> Fix For: 2.6.0
>
>
> The StoreChangelogReader is responsible for tracking and restoring active 
> tasks, but once a store has finished restoring it will continue polling for 
> records on that partition.
> Ordinarily this doesn't make a difference as a store is not completely 
> restored until its entire changelog has been read, so there are no more 
> records for poll to return anyway. But if the restoring state is actually an 
> optimized source KTable, the changelog is just the source topic and poll will 
> keep returning records for that partition until all stores have been restored.
> Note that this isn't a correctness issue since it's just the restore 
> consumer, but it is wasteful to be polling for records and throwing them 
> away. We should pause completed partitions in StoreChangelogReader so we 
> don't slow down the restore consumer in reading from the unfinished changelog 
> topics, and avoid wasted network.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk11 #1136

2020-02-07 Thread Apache Jenkins Server
See 


Changes:

[github] HOTFIX: Fix spotsbug failure in Kafka examples (#8051)

[github] MINOR: Add timer for update limit offsets (#8047)

[github] MINOR: further InternalTopologyBuilder cleanup  (#8046)


--
[...truncated 2.85 MB...]

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:testClasses
> Task :streams:upgrade-system-tests-0101:checkstyleTest
> Task :streams:upgrade-system-tests-0101:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:test
> Task :streams:upgrade-system-tests-0102:compileJava NO-

Build failed in Jenkins: kafka-trunk-jdk8 #4213

2020-02-07 Thread Apache Jenkins Server
See 


Changes:

[github] HOTFIX: Fix spotsbug failure in Kafka examples (#8051)

[github] MINOR: Add timer for update limit offsets (#8047)

[github] MINOR: further InternalTopologyBuilder cleanup  (#8046)


--
[...truncated 2.83 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE

Re: [DISCUSS] KIP-552: Add interface to handle unused config

2020-02-07 Thread Patrik Kleindl
Hi Gwen

Kafka Streams is not a third party library and produces a lot of these
warnings, e.g.

*The configuration 'main.consumer.max.poll.records' was supplied but isn't
a known config.*
*The configuration 'admin.retries' was supplied but isn't a known config.*
and various others if you try to fine-tune the restoration consumer or
inject parameters for state stores.
This results in a lot of false positives and only makes new users worried
at first and then leads them to ignore the warnings altogether.

Unless this is taken care of, at least Kafka Streams users will probably
be better off having this at debug level.

Best regards

Patrik

On Thu, 6 Feb 2020 at 16:55, Gwen Shapira  wrote:

> INFO is the default log level, and while it looks less "alarming" than
> WARN, users will still see it and in my experience, they will worry that
> something is wrong anyway.  Or if INFO isn't the default, users won't see
> it, so it is no different from debug and we are left with no way of warning
> users that they misconfigured something.
>
> The point is that "known configs" exist in Kafka as a validation step. It
> is there to protect users. So anything that makes the concerns about
> unknown configs invisible to users, makes the validation step useless and
> we may as well remove it. I'm against that - I think users should be made
> aware of misconfigs as much as possible - especially since if you misspell
> "retention", you will lose data.
>
> If we look away from the symptom and go back to the actual cause
>
> I think Kafka had a way (and maybe it still does) for 3rd party developers
> who create client plugins (mostly interceptors) to make their configs
> "known". 3rd party developers should be responsible for the good experience
> of their users.  Now it is possible that you'll pick a 3rd party library
> that didn't do it and have a worse user experience, but I am not sure it is
> the job of Apache Kafka to protect users from their choice of libraries
> (and as long as those libraries are OSS, users can fix them). Especially
> not at the expense of someone who doesn't use 3rd party libs.
>
> Gwen
>
> Gwen Shapira
> Engineering Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>
> On Thu, Feb 06, 2020 at 2:06 AM, Artur Burtsev < artj...@gmail.com >
> wrote:
>
> >
> >
> >
> > Hi John,
> >
> >
> >
> > In our case it won't help, since we are running an instance per partition and
> > even with summary only we get 32 warnings per rollout.
> >
> >
> >
> > Hi Gwen,
> >
> >
> >
> > Thanks for your reply, I understand and share your concern; I also
> > mentioned it earlier in the thread. Do you think it will work if we
> > change DEBUG to INFO?
> >
> >
> >
> > Thanks,
> > Artur
> >
> >
> >
> > On Thu, Feb 6, 2020 at 4:21 AM Gwen Shapira <g...@confluent.io> wrote:
> >
> >
> >>
> >>
> >> Sorry for the late response. The reason that unused configs is at WARN is
> >> that if you misspell a config, it means that it will not apply. In some
> >> cases (default retention) you won't know until it is too late. We wanted
> >> to warn admins about possible misconfigurations.
> >>
> >> In the context of a company supporting Kafka - customers run logs at
> >> INFO level normally, so if we suspect a misconfiguration, we don't want
> >> to ask the customer to change the level to DEBUG and bounce the broker.
> >> It is time consuming and can be risky.
> >>
> >>
> >>
> >> *Gwen Shapira*
> >> Product Manager | Confluent
> >> 650.450.2760 | @gwenshap
> >> Follow us: Twitter (https://twitter.com/ConfluentInc) | blog
> >> (http://www.confluent.io/blog)
> >>
> >>
> >>
> >> Sent via Superhuman (https://sprh.mn/?vip=g...@confluent.io)
> >>
> >>
> >>
> >> On Mon, Jan 06, 2020 at 4:21 AM, Stanislav Kozlovski
> >> <stanis...@confluent.io> wrote:
> >>
> >>
> >>>
> >>>
> >>> Hey Artur,
> >>>
> >>>
> >>>
> >>> Perhaps changing the log level to DEBUG is the simplest approach.
> >>>
> >>>
> >>>
> >>> I wonder if other people know what the motivation behind the WARN log
> >>> was? I'm struggling to think of a scenario where I'd like to see unused
> >>> values printed at anything above DEBUG.
> >>>
> >>>
> >>>
> >>> Best,
> >>> Stanislav
> >>>
> >>>
> >>>
> >>> On Mon, Dec 30, 2019 at 12:52 PM Artur Burtsev <artj...@gmail.com>
> >>> wrote:
> >>>
> >>>
> 
> 
>  Hi,
> 
> 
> 
>  Indeed changing the log level for the whole AbstractConfig is not an
>  option, because logAll is extremely useful.
> 
> 
> 
>  Grouping warnings into one (with the count of unused only) will not be a
>  good option for us either. It will still be pretty noisy. Imagine we have
>  32 partitions and scaled up the application to 32 instances; then we still
> >
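The mechanism Gwen mentions above - plugins making their configs "known" - is 
essentially what `ConfigDef` provides: a plugin can back its own settings with an 
`AbstractConfig` subclass so they are declared rather than unrecognized. A sketch 
with an invented config key:

```java
import java.util.Collections;
import java.util.Map;
import org.apache.kafka.common.config.AbstractConfig;
import org.apache.kafka.common.config.ConfigDef;

public class MyInterceptorConfig extends AbstractConfig {

    // Declaring the key in a ConfigDef makes it a "known" config for this
    // config object, with type checking and a documented default.
    static final ConfigDef DEF = new ConfigDef()
            .define("my.plugin.flag", ConfigDef.Type.BOOLEAN, false,
                    ConfigDef.Importance.LOW, "Example plugin setting.");

    public MyInterceptorConfig(Map<String, ?> props) {
        super(DEF, props);
    }

    public static void main(String[] args) {
        MyInterceptorConfig cfg = new MyInterceptorConfig(
                Collections.singletonMap("my.plugin.flag", "true"));
        System.out.println(cfg.getBoolean("my.plugin.flag"));
    }
}
```

Note this only makes the key known to the plugin's own config object; whether the 
enclosing client still counts the key as unused depends on how that client tracks 
which keys were accessed.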

[jira] [Created] (KAFKA-9522) kafka-connect failed when the topic can not be accessed

2020-02-07 Thread Deblock (Jira)
Deblock created KAFKA-9522:
--

 Summary: kafka-connect failed when the topic can not be accessed
 Key: KAFKA-9522
 URL: https://issues.apache.org/jira/browse/KAFKA-9522
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Affects Versions: 2.3.1
Reporter: Deblock


Kafka Connect fails if the topic cannot be accessed (a permission issue, or the 
topic does not exist).

 

This issue happened using Debezium CDC: 
[https://issues.redhat.com/browse/DBZ-1770]

 

The topic can be chosen using a column in the database. If the topic for a 
database row has an issue (a permission issue, or the topic does not exist), the 
connector stops working and new events will not be sent.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)