Jenkins build is back to normal : Kafka » kafka-trunk-jdk15 #19

2020-08-17 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » kafka-2.6-jdk8 #7

2020-08-17 Thread Apache Jenkins Server
See 


Changes:

[manikumar.reddy] MINOR: Use new version of ducktape


--
[...truncated 3.15 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:testClasses
> Task :streams:upgrade-system-tests-0101:checkstyleTest
> Task :streams:upgrade-system-tests-0101:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:test
> Task :streams:upgrade-system-tests-0102:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0102:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0102:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:compileTestJava
> Task :streams:upgrade-system-tests-0102:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:testClasses
> Task :streams:upgrade-system-tests-0102:checkstyleTest
> Task :streams:upgrade-system-tests-0102:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:test
> Task :streams:upgrade-system-tests-0110:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0110:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0110:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0110:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0110:compileTestJava
> Task 

Consumer side help requirement

2020-08-17 Thread Preethkumar Thirupathi
Hi Team,

I want to develop a consumer program that consumes 1 million records per
second. I have written one in Java, but it is not consuming anywhere near that
efficiently. Your help is much appreciated.

Thanks,
PREETHKUMAR T
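
A common starting point for this kind of throughput problem is to increase the
consumer's fetch batching and keep the poll loop lean. Below is a minimal sketch;
the topic name and property values are illustrative assumptions, not benchmarked
recommendations.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class HighThroughputConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "high-throughput-demo");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
        // Fetch larger batches per broker round trip (illustrative values).
        props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 1_048_576);            // 1 MB
        props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 10_485_760); // 10 MB
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 10_000);

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("input-topic"));
            while (true) {
                ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<byte[], byte[]> record : records) {
                    // Keep per-record work minimal; hand heavy processing off to another thread pool.
                }
            }
        }
    }
}

Running enough consumer instances (up to the topic's partition count) in one
consumer group is usually what gets you to millions of records per second, more
than any single-consumer setting.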


Re: [VOTE] KIP-657: Add Customized Kafka Streams Logo

2020-08-17 Thread Sven Erik Knop
Please use Design A

On 2020/08/17 18:18:12, Abraham Leal wrote:
> I'd like to cast a non-binding vote to Design A.
> Love the otter.
>
> On 2020/08/15 14:05:10, Adam Bellemare wrote:
> > I prefer Design B, but given that I missed the discussion thread, I think
> > it would be better without the Otter obscuring any part of the Kafka logo.
> >
> > On Thu, Aug 13, 2020 at 6:31 PM Boyang Chen wrote:
> > >
> > > Hello everyone,
> > >
> > > I would like to start a vote thread for KIP-657:
> > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-657%3A+Add+Customized+Kafka+Streams+Logo
> > >
> > > This KIP is aiming to add a new logo for the Kafka Streams library. And we
> > > prepared two candidates with a cute otter. You could look up the KIP to
> > > find those logos.
> > >
> > > Please post your vote against these two customized logos. For example, I
> > > would vote for *design-A (binding)*.
> > >
> > > This vote thread shall be open for one week to collect enough votes to call
> > > for a winner. Still, feel free to post any question you may have regarding
> > > this KIP, thanks!
> > >

Cheers

Sven Erik
sek...@gmail.com




Re: [VOTE] KIP-657: Add Customized Kafka Streams Logo

2020-08-17 Thread Laurent Domenech-Cabaud
I'd like to cast a vote to Design A.

Great idea, the otter...

Thanks,
Laurent



[jira] [Created] (KAFKA-10410) OnRestoreStart disappeared from StateRestoreCallback in 2.6.0 and reappeared in a useless place

2020-08-17 Thread Mark Shelton (Jira)
Mark Shelton created KAFKA-10410:


 Summary: OnRestoreStart disappeared from StateRestoreCallback  in 
2.6.0 and reappeared in a useless place
 Key: KAFKA-10410
 URL: https://issues.apache.org/jira/browse/KAFKA-10410
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.6.0
Reporter: Mark Shelton


In version 2.5.0 and earlier there are "onRestoreStart" and "onRestoreEnd" 
methods available to a StateRestoreCallback.

Version 2.6.0 removed these methods and moved them to StateRestoreListener, 
which has to be registered via KafkaStreams#setGlobalStateRestoreListener.

This makes it impossible for the actual StateRestoreCallback implementation to 
receive the start and end indications, and it is blocking me from moving to 2.6.0.

See:

[https://kafka.apache.org/25/javadoc/index.html?org/apache/kafka/streams/processor/AbstractNotifyingRestoreCallback.html]
 

Related JIRA:

https://issues.apache.org/jira/browse/KAFKA-4322 
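
For context, a minimal sketch (assuming the 2.6.0 listener API) of how restore 
start/end notifications are now wired through a global StateRestoreListener; the 
class name and logging below are illustrative only:

{code:java}
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.streams.processor.StateRestoreListener;

public class LoggingRestoreListener implements StateRestoreListener {
    @Override
    public void onRestoreStart(TopicPartition partition, String storeName,
                               long startingOffset, long endingOffset) {
        System.out.printf("Restore started for %s (%s): %d -> %d%n",
                storeName, partition, startingOffset, endingOffset);
    }

    @Override
    public void onBatchRestored(TopicPartition partition, String storeName,
                                long batchEndOffset, long numRestored) {
        // no-op in this sketch
    }

    @Override
    public void onRestoreEnd(TopicPartition partition, String storeName,
                             long totalRestored) {
        System.out.printf("Restore finished for %s: %d records%n", storeName, totalRestored);
    }
}

// registered once on the KafkaStreams instance, before streams.start():
// streams.setGlobalStateRestoreListener(new LoggingRestoreListener());
{code}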



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[DISCUSS] KIP-640 Add log compression analysis tool

2020-08-17 Thread Christopher Beard (BLOOMBERG/ 919 3RD A)
Hi everyone,

I would like to start a discussion on KIP-640:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-640%3A+Add+log+compression+analysis+tool

This KIP outlines a new CLI tool that helps compare how the various 
compression types supported by Kafka reduce the size of a log (and, more 
broadly, of a topic).

I've put together a PR that might help serve as a starting point for comments 
and suggestions.
[WIP] PR: https://github.com/apache/kafka/pull/9193
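
As a rough illustration of the kind of comparison the tool would automate, here 
is a JDK-only sketch that compares raw versus GZIP-compressed size of a made-up 
record batch (Kafka's other codecs, snappy, lz4 and zstd, would need their own 
libraries):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class CompressionComparison {
    public static void main(String[] args) throws IOException {
        // Build a synthetic, repetitive batch of JSON-ish records.
        StringBuilder batch = new StringBuilder();
        for (int i = 0; i < 1000; i++) {
            batch.append("{\"user\":").append(i % 10).append(",\"event\":\"click\"}\n");
        }
        byte[] raw = batch.toString().getBytes(StandardCharsets.UTF_8);

        // Compress the whole batch with GZIP and measure the output size.
        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(compressed)) {
            gzip.write(raw);
        }

        System.out.printf("raw=%d bytes, gzip=%d bytes, ratio=%.2f%n",
                raw.length, compressed.size(), (double) raw.length / compressed.size());
    }
}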

Thanks,
Chris Beard

Jenkins build is back to normal : Kafka » kafka-trunk-jdk11 #17

2020-08-17 Thread Apache Jenkins Server
See 




[GitHub] [kafka-site] scott-confluent opened a new pull request #295: fix for text wrapping in some tables in docs

2020-08-17 Thread GitBox


scott-confluent opened a new pull request #295:
URL: https://github.com/apache/kafka-site/pull/295


   Someone opened a JIRA ticket for this: 
https://issues.apache.org/jira/browse/KAFKA-10406



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Resolved] (KAFKA-10158) Fix flaky kafka.admin.TopicCommandWithAdminClientTest#testDescribeUnderReplicatedPartitionsWhenReassignmentIsInProgress

2020-08-17 Thread Brian Byrne (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brian Byrne resolved KAFKA-10158.
-
Resolution: Fixed

> Fix flaky 
> kafka.admin.TopicCommandWithAdminClientTest#testDescribeUnderReplicatedPartitionsWhenReassignmentIsInProgress
> ---
>
> Key: KAFKA-10158
> URL: https://issues.apache.org/jira/browse/KAFKA-10158
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Reporter: Chia-Ping Tsai
>Assignee: Brian Byrne
>Priority: Minor
> Fix For: 2.7.0
>
>
> Altering the assignments is an async request, so it is possible that the 
> reassignment is still in progress when we start to verify 
> "under-replicated-partitions". In order to make the test stable, it needs to 
> wait for the reassignment to complete before verifying the topic command with 
> "under-replicated-partitions".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #18

2020-08-17 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10023: Enforce broker-wide and per-listener connection creation… 
(#8768)


--
[...truncated 6.45 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED


Re: [VOTE] KIP-657: Add Customized Kafka Streams Logo

2020-08-17 Thread Abraham Leal
I'd like to cast a non-binding vote to Design A.
Love the otter.

On 2020/08/15 14:05:10, Adam Bellemare wrote:
> I prefer Design B, but given that I missed the discussion thread, I think
> it would be better without the Otter obscuring any part of the Kafka logo.
>
> On Thu, Aug 13, 2020 at 6:31 PM Boyang Chen wrote:
> >
> > Hello everyone,
> >
> > I would like to start a vote thread for KIP-657:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-657%3A+Add+Customized+Kafka+Streams+Logo
> >
> > This KIP is aiming to add a new logo for the Kafka Streams library. And we
> > prepared two candidates with a cute otter. You could look up the KIP to
> > find those logos.
> >
> > Please post your vote against these two customized logos. For example, I
> > would vote for *design-A (binding)*.
> >
> > This vote thread shall be open for one week to collect enough votes to call
> > for a winner. Still, feel free to post any question you may have regarding
> > this KIP, thanks!
> >


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #16

2020-08-17 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10023: Enforce broker-wide and per-listener connection creation… 
(#8768)


--
[...truncated 3.20 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldEnqueueLaterOutputsAfterEarlierOnes[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldEnqueueLaterOutputsAfterEarlierOnes[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED


[VOTE] KIP-656: MirrorMaker2 Exactly-once Semantics

2020-08-17 Thread Ning Zhang
Hello everyone,

I'd like to start a vote on KIP-656. This KIP aims to make MirrorMaker 2 
replicate messages exactly once across clusters:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-656%3A+MirrorMaker2+Exactly-once+Semantics


Re: [DISCUSS] KIP-406: GlobalStreamThread should honor custom reset policy

2020-08-17 Thread John Roesler
Hi Navinder,

I see what you mean about the global consumer being similar
to the restore consumer.

I also agree that automatically performing the recovery
steps should be strictly an improvement over the current
situation.

Also, yes, it would be a good idea to make it clear that the
global topic should be compacted in order to ensure correct
semantics. It's the same way with input topics for KTables;
we rely on users to ensure the topics are compacted, and if
they aren't, then the execution semantics will be broken.

Thanks,
-John
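
A rough sketch of the recovery path under discussion, assuming the global 
consumer is a plain KafkaConsumer and wipeGlobalStore is a placeholder for 
clearing local state (this is not the actual GlobalStreamThread code):

import java.time.Duration;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.InvalidOffsetException;

final class GlobalRestoreSketch {
    static ConsumerRecords<byte[], byte[]> pollWithRecovery(Consumer<byte[], byte[]> globalConsumer,
                                                            Runnable wipeGlobalStore) {
        try {
            return globalConsumer.poll(Duration.ofMillis(100));
        } catch (InvalidOffsetException e) {
            // The offsets we held no longer exist (e.g. segments were deleted):
            // clear local state and rebuild from the beginning of the affected partitions.
            wipeGlobalStore.run();
            globalConsumer.seekToBeginning(e.partitions());
            return globalConsumer.poll(Duration.ofMillis(100));
        }
    }
}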

On Sun, 2020-08-16 at 11:44 +, Navinder Brar wrote:
> Hi John,
> 
> Thanks for your inputs. Since global topics are in a way their own 
> changelog, wouldn’t the global consumers be more akin to restore consumers 
> than the main consumer? 
> 
> I am also +1 on catching the exception and setting it to the earliest for 
> now. Whenever an instance starts, the global stream thread (if available) 
> currently goes to RUNNING before the stream threads are started, which means 
> the global state is available when processing by the stream threads starts. 
> So, with the new change of catching the exception, cleaning the store and 
> resetting to earliest would probably be “stop the world” as you said, John, as 
> I think we will have to pause the stream threads till the whole global state 
> is recovered. I assume it is "stop the world" right now as well, since 
> currently if an InvalidOffsetException comes, we throw a streams exception and 
> the user has to clean up and handle all this manually, and when that instance 
> starts, it will restore the global state first.
> 
> I had an additional thought on this whole problem: would it be helpful to 
> educate users that global topics should have their cleanup policy set to 
> compact, so that this invalid offset exception never arises for them? Assume, 
> for example, that the cleanup policy on a global topic is "delete" and it has 
> deleted keys k1 and k2 (via retention.ms), although all the instances had 
> already consumed them, so they are in all global stores and all other 
> instances are up to date on the global data (so no InvalidOffsetException). 
> Now, a new instance is added to the cluster, and we have already lost k1 and 
> k2 from the global topic, so it will start consuming from the earliest point 
> in the global topic. So, wouldn’t this global store on the new instance have 
> two fewer keys than all the other global stores already available in the 
> cluster? Please let me know if I am missing something. Thanks.
> 
> Regards,
> 
> Navinder
> 
> 
> On Friday, 14 August, 2020, 10:03:42 am IST, John Roesler 
>  wrote:  
>  
>  Hi all,
> 
> It seems like the main motivation for this proposal is satisfied if we just 
> implement some recovery mechanism instead of crashing. If the mechanism is 
> going to be pausing all the threads until the state is recovered, then it 
> still seems like a big enough behavior change to warrant a KIP. 
> 
> I have to confess I’m a little unclear on why a custom reset policy for a 
> global store, table, or even consumer might be considered wrong. It’s clearly 
> wrong for the restore consumer, but the global consumer seems more 
> semantically akin to the main consumer than the restore consumer. 
> 
> In other words, if it’s wrong to reset a GlobalKTable from latest, shouldn’t 
> it also be wrong for a KTable, for exactly the same reason? It certainly 
> seems like it would be an odd choice, but I’ve seen many choices I thought 
> were odd turn out to have perfectly reasonable use cases. 
> 
> As far as the PAPI global store goes, I could see adding the option to 
> configure it, since as Matthias pointed out, there’s really no specific 
> semantics for the PAPI. But if automatic recovery is really all Navinder 
> wanted, then I could also see deferring this until someone specifically wants 
> it.
> 
> So the tl;dr is, if we just want to catch the exception and rebuild the store 
> by seeking to earliest with no config or API changes, then I’m +1.
> 
> I’m wondering if we can improve on the “stop the world” effect of rebuilding 
> the global store, though. It seems like we could put our heads together and 
> come up with a more fine-grained approach to maintaining the right semantics 
> during recovery while still making some progress.  
> 
> Thanks,
> John
> 
> 
> On Sun, Aug 9, 2020, at 02:04, Navinder Brar wrote:
> > Hi Matthias,
> > 
> > IMHO, as you explained, using ‘global.consumer.auto.offset.reset’ is not as 
> > straightforward as it seems, and it might change the existing behavior for 
> > users without them realizing it. I also 
> > 
> > think that we should change the behavior inside global stream thread to 
> > not die on 
> > 
> > InvalidOffsetException and instead clean and rebuild the state from the 
> > earliest. On this, as you 
> > 
> > mentioned that we would need to pause the stream threads till the 
> > global 

[jira] [Resolved] (KAFKA-10023) Enforce broker-wide and per-listener connection creation rate (KIP-612, part 1)

2020-08-17 Thread Rajini Sivaram (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-10023.

Fix Version/s: 2.7.0
 Reviewer: Rajini Sivaram
   Resolution: Fixed

> Enforce broker-wide and per-listener connection creation rate (KIP-612, part 
> 1)
> ---
>
> Key: KAFKA-10023
> URL: https://issues.apache.org/jira/browse/KAFKA-10023
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Anna Povzner
>Assignee: Anna Povzner
>Priority: Major
>  Labels: features
> Fix For: 2.7.0
>
>
> This JIRA is for the first part of KIP-612 – Add an ability to configure and 
> enforce broker-wide and per-listener connection creation rate. 
> As described here: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-612%3A+Ability+to+Limit+Connection+Creation+Rate+on+Brokers]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10409) Refactor Kafka Streams RocksDb iterators

2020-08-17 Thread Jorge Esteban Quilcate Otoya (Jira)
Jorge Esteban Quilcate Otoya created KAFKA-10409:


 Summary: Refactor Kafka Streams RocksDb iterators 
 Key: KAFKA-10409
 URL: https://issues.apache.org/jira/browse/KAFKA-10409
 Project: Kafka
  Issue Type: Improvement
Reporter: Jorge Esteban Quilcate Otoya


From [https://github.com/apache/kafka/pull/9137#discussion_r470345513]:

[~ableegoldman] :  

> Kind of unrelated, but WDYT about renaming {{RocksDBDualCFIterator}} to 
> {{RocksDBDualCFAllIterator}} or something on the side? I feel like these 
> iterators could be cleaned up a bit in general to be more understandable -- 
> for example, it's weird that we do the {{iterator#seek}}-ing in the actual 
> {{all()}} method but for range queries we do the seeking inside the iterator 
> constructor.
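
For illustration only, a generic sketch of the consistency being suggested: seek 
eagerly in the constructor for both the all() and range() paths. The class and 
method names below are made up, not the actual Kafka Streams iterators.

{code:java}
import org.rocksdb.RocksIterator;

final class SeekingIterator {
    private final RocksIterator inner;

    // Full-scan variant: seek to the first key up front.
    SeekingIterator(RocksIterator inner) {
        this.inner = inner;
        inner.seekToFirst();
    }

    // Range variant: seek to the lower bound up front.
    SeekingIterator(RocksIterator inner, byte[] from) {
        this.inner = inner;
        inner.seek(from);
    }

    boolean hasNext() {
        return inner.isValid();
    }

    byte[] nextValue() {
        byte[] value = inner.value();
        inner.next();
        return value;
    }
}
{code}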



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10408) Calendar based windows

2020-08-17 Thread Antony Stubbs (Jira)
Antony Stubbs created KAFKA-10408:
-

 Summary: Calendar based windows
 Key: KAFKA-10408
 URL: https://issues.apache.org/jira/browse/KAFKA-10408
 Project: Kafka
  Issue Type: New Feature
  Components: streams
Affects Versions: 2.6.0
Reporter: Antony Stubbs
Assignee: Bruno Cadonna


A calendar-based window, for example aggregating all payments made up to the 
15th of each month, or all payments made each year up to April 1st.

It should handle time zones "properly", e.g. allow the user to specify which 
time zone to base the window on.

Example implementation of a specific aggregator, with a window implementation 
implicitly embedded:

https://github.com/astubbs/ks-tributary/blob/denormalisation-base-cp-libs/streams-module/src/main/java/io/confluent/ps/streams/processors/YearlyAggregator.java
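
As a rough illustration of the boundary computation such a window needs, using 
plain java.time (the names and the choice of a local-midnight boundary are 
illustrative assumptions, not part of any proposed API):

{code:java}
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;

final class CalendarWindowBoundaries {
    // Returns the first "15th of the month at local midnight" at or after the record timestamp.
    static Instant windowEndOnThe15th(Instant recordTimestamp, ZoneId zone) {
        ZonedDateTime time = recordTimestamp.atZone(zone);
        ZonedDateTime boundary = time.withDayOfMonth(15).toLocalDate().atStartOfDay(zone);
        if (!boundary.isAfter(time)) {
            boundary = boundary.plusMonths(1);
        }
        return boundary.toInstant();
    }
}
{code}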



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-2.6-jdk8 #6

2020-08-17 Thread Apache Jenkins Server
See 


Changes:

[konstantine] KAFKA-10387: Fix inclusion of transformation configs when topic 
creation is enabled in Connect (#9172)


--
[...truncated 3.15 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #15

2020-08-17 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10387: Fix inclusion of transformation configs when topic 
creation is enabled in Connect (#9172)


--
[...truncated 3.19 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #16

2020-08-17 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10387: Fix inclusion of transformation configs when topic 
creation is enabled in Connect (#9172)


--
[...truncated 3.22 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED


[jira] [Resolved] (KAFKA-10387) Cannot include SMT configs with source connector that uses topic.creation.* properties

2020-08-17 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis resolved KAFKA-10387.

Resolution: Fixed

> Cannot include SMT configs with source connector that uses topic.creation.* 
> properties
> --
>
> Key: KAFKA-10387
> URL: https://issues.apache.org/jira/browse/KAFKA-10387
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.6.0
>Reporter: Arjun Satish
>Assignee: Konstantine Karantasis
>Priority: Critical
> Fix For: 2.7.0, 2.6.1
>
>
> Let's say we try to create a connector with the following config:
> {code:java}
> {
>   "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
>   "tasks.max": "1",
>   "database.hostname": "localhost",
>   "database.port": 32843,
>   "database.user": "test",
>   "database.password": "test",
>   "database.dbname": "test",
>   "database.server.name": "tcpsql",
>   "table.whitelist": "public.numerics",
>   "transforms": "abc",
>   "transforms.abc.type": "io.debezium.transforms.ExtractNewRecordState",
>   "topic.creation.default.partitions": "1",
>   "topic.creation.default.replication.factor": "1"
> }
> {code}
> this fails with the following error in the Connector worker:
> {code:java}
> [2020-08-11 02:47:05,908] ERROR Failed to start task deb-0 
> (org.apache.kafka.connect.runtime.Worker:560)
> org.apache.kafka.connect.errors.ConnectException: 
> org.apache.kafka.common.config.ConfigException: Unknown configuration 
> 'transforms.abc.type'
>   at 
> org.apache.kafka.connect.runtime.ConnectorConfig.transformations(ConnectorConfig.java:296)
>   at 
> org.apache.kafka.connect.runtime.Worker.buildWorkerTask(Worker.java:605)
>   at org.apache.kafka.connect.runtime.Worker.startTask(Worker.java:555)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.startTask(DistributedHerder.java:1251)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.access$1700(DistributedHerder.java:127)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder$10.call(DistributedHerder.java:1266)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder$10.call(DistributedHerder.java:1262)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
>   at java.base/java.lang.Thread.run(Thread.java:832)
> Caused by: org.apache.kafka.common.config.ConfigException: Unknown 
> configuration 'transforms.abc.type'
>   at 
> org.apache.kafka.common.config.AbstractConfig.get(AbstractConfig.java:159)
>   at 
> org.apache.kafka.connect.runtime.SourceConnectorConfig$EnrichedSourceConnectorConfig.get(SourceConnectorConfig.java:57)
>   at 
> org.apache.kafka.connect.runtime.SourceConnectorConfig.get(SourceConnectorConfig.java:141)
>   at 
> org.apache.kafka.common.config.AbstractConfig.getClass(AbstractConfig.java:216)
>   at 
> org.apache.kafka.connect.runtime.ConnectorConfig.transformations(ConnectorConfig.java:281)
>   ... 10 more
> {code}
> Connector creation works fine if we remove the topic.creation properties 
> above. 
> I'm not entirely sure, but it looks like the piece of code that might need a 
> fix is here (it does not add the {{transforms.*}} configs into the returned 
> ConfigDef instances): 
> https://github.com/apache/kafka/blob/2.6.0/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/SourceConnectorConfig.java#L94



--
This message was sent by Atlassian Jira
(v8.3.4#803005)