Build failed in Jenkins: kafka-trunk-jdk8 #4200

2020-01-31 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9437; Make the Kafka Protocol Friendlier with L7 Proxies 
[KIP-559]


--
[...truncated 2.84 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task 

Build failed in Jenkins: kafka-trunk-jdk8 #4199

2020-01-31 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9375: Add names to all Connect threads (#7901)


--
[...truncated 1.63 MB...]
kafka.coordinator.group.GroupMetadataManagerTest > testReadFromOldGroupMetadata 
PASSED

kafka.coordinator.group.GroupMetadataManagerTest > 
testTransactionalCommitOffsetAppendFailure STARTED

kafka.coordinator.group.GroupMetadataManagerTest > 
testTransactionalCommitOffsetAppendFailure PASSED

kafka.coordinator.group.GroupMetadataManagerTest > testCommittedOffsetParsing 
STARTED

kafka.coordinator.group.GroupMetadataManagerTest > testCommittedOffsetParsing 
PASSED

kafka.coordinator.group.GroupMetadataManagerTest > 
testOffsetExpirationOfSimpleConsumer STARTED

kafka.coordinator.group.GroupMetadataManagerTest > 
testOffsetExpirationOfSimpleConsumer PASSED

kafka.coordinator.group.GroupMetadataManagerTest > testGroupMetadataRemoval 
STARTED

kafka.coordinator.group.GroupMetadataManagerTest > testGroupMetadataRemoval 
PASSED

kafka.coordinator.group.GroupMetadataManagerTest > testLoadGroupWithTombstone 
STARTED

kafka.coordinator.group.GroupMetadataManagerTest > testLoadGroupWithTombstone 
PASSED

kafka.coordinator.group.GroupMetadataManagerTest > 
testOffsetExpirationOfActiveGroupSemantics STARTED

kafka.coordinator.group.GroupMetadataManagerTest > 
testOffsetExpirationOfActiveGroupSemantics PASSED

kafka.coordinator.group.GroupMetadataManagerTest > 
testLoadWithCommittedAndAbortedAndPendingTransactionalOffsetCommits STARTED

kafka.coordinator.group.GroupMetadataManagerTest > 
testLoadWithCommittedAndAbortedAndPendingTransactionalOffsetCommits PASSED

kafka.coordinator.group.GroupMetadataManagerTest > 
testSerdeOffsetCommitValueWithExpireTimestamp STARTED

kafka.coordinator.group.GroupMetadataManagerTest > 
testSerdeOffsetCommitValueWithExpireTimestamp PASSED

kafka.coordinator.group.GroupMetadataManagerTest > testLoadOffsetsAndGroup 
STARTED

kafka.coordinator.group.GroupMetadataManagerTest > testLoadOffsetsAndGroup 
PASSED

kafka.coordinator.group.GroupMetadataManagerTest > 
testLoadGroupAndOffsetsWithCorruptedLog STARTED

kafka.coordinator.group.GroupMetadataManagerTest > 
testLoadGroupAndOffsetsWithCorruptedLog PASSED

kafka.coordinator.group.GroupMetadataManagerTest > 
testGroupLoadedWithPendingCommits STARTED

kafka.coordinator.group.GroupMetadataManagerTest > 
testGroupLoadedWithPendingCommits PASSED

kafka.coordinator.group.GroupMetadataManagerTest > testStoreGroupErrorMapping 
STARTED

kafka.coordinator.group.GroupMetadataManagerTest > testStoreGroupErrorMapping 
PASSED

kafka.coordinator.group.GroupMetadataManagerTest > testCommitOffsetFailure 
STARTED

kafka.coordinator.group.GroupMetadataManagerTest > testCommitOffsetFailure 
PASSED

kafka.coordinator.group.GroupMetadataManagerTest > 
testLoadGroupAndOffsetsFromDifferentSegments STARTED

kafka.coordinator.group.GroupMetadataManagerTest > 
testLoadGroupAndOffsetsFromDifferentSegments PASSED

kafka.coordinator.group.GroupMetadataManagerTest > 
testOffsetExpirationSemantics STARTED

kafka.coordinator.group.GroupMetadataManagerTest > 
testOffsetExpirationSemantics PASSED

kafka.coordinator.group.GroupMetadataManagerTest > testExpireOffset STARTED

kafka.coordinator.group.GroupMetadataManagerTest > testExpireOffset PASSED

kafka.coordinator.group.GroupMetadataManagerTest > 
testExpireGroupWithOffsetsOnly STARTED

kafka.coordinator.group.GroupMetadataManagerTest > 
testExpireGroupWithOffsetsOnly PASSED

kafka.coordinator.group.GroupMetadataManagerTest > 
testDoNotLoadAbortedTransactionalOffsetCommits STARTED

kafka.coordinator.group.GroupMetadataManagerTest > 
testDoNotLoadAbortedTransactionalOffsetCommits PASSED

kafka.coordinator.group.GroupMetadataManagerTest > testStoreEmptyGroup STARTED

kafka.coordinator.group.GroupMetadataManagerTest > testStoreEmptyGroup PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer STARTED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.tools.ConsumerPerformanceTest > testDetailedHeaderMatchBody STARTED

kafka.tools.ConsumerPerformanceTest > testDetailedHeaderMatchBody PASSED

kafka.tools.ConsumerPerformanceTest > testConfigWithUnrecognizedOption STARTED

kafka.tools.ConsumerPerformanceTest > testConfigWithUnrecognizedOption PASSED

kafka.tools.ConsumerPerformanceTest > testConfig STARTED

kafka.tools.ConsumerPerformanceTest > testConfig PASSED

kafka.tools.ConsumerPerformanceTest > testNonDetailedHeaderMatchBody STARTED

kafka.tools.ConsumerPerformanceTest > testNonDetailedHeaderMatchBody PASSED

kafka.tools.ReplicaVerificationToolTest > testReplicaBufferVerifyChecksum 
STARTED

kafka.tools.ReplicaVerificationToolTest > testReplicaBufferVerifyChecksum PASSED

kafka.tools.MirrorMakerIntegrationTest > testCommaSeparatedRegex STARTED

kafka.tools.MirrorMakerIntegrationTest > testCommaSeparatedRegex PASSED

kafka.tools.MirrorMakerIntegrationTest > 

Build failed in Jenkins: kafka-trunk-jdk11 #1124

2020-01-31 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9437; Make the Kafka Protocol Friendlier with L7 Proxies 
[KIP-559]


--
[...truncated 2.87 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED


Re: [KAFKA-557] Add emit on change support for Kafka Streams

2020-01-31 Thread Matthias J. Sax
I did not read the updated KIP itself yet. However, I do have concerns
about the idea of having different behavior for different operators.


(1) If there is a KStream aggregation for which neither the
aggregation value nor the result timestamp changes, there is no reason
to emit under emit-on-change semantics. Hence, why would we need to
stay on an emit-on-update model?


(2) Whether or not a KTable is materialized into a local state store is
semantically irrelevant and an implementation detail IMHO. Hence, I
think we need to ensure that we have the same behavior in both cases:

Example:

stream.groupByKey()
  .count()
  .filter(...)
  .toStream().to(...);

stream.groupByKey()
  .count()
  .filter(..., Materialized.as("filtered-table"))
  .toStream().to(...);

It would be rather confusing for users if the two produced different
results.

However, I actually believe we can achieve emit-on-change semantics for
both cases. Note that internally, the output of `count()` is a
`<key, <oldValue, newValue>>` changelog. Atm, we don't enable "emit
old value" for all cases, but I think if we always enable it when there
is no downstream state store, the downstream operator can actually
recompute its "current result" (that would otherwise be in the store)
based on the old value, the new result based on the new value, compare
old and new result, and make the correct decision to emit or not.
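
To make this concrete, here is a rough sketch of such a downstream
stateless operator (class and method names are illustrative only, this
is not the actual Streams internals):

import java.util.Objects;
import java.util.function.Function;

// Receives <oldValue, newValue> change pairs and suppresses no-ops.
class EmitOnChangeMapper<V, R> {
    private final Function<V, R> mapper;

    EmitOnChangeMapper(final Function<V, R> mapper) {
        this.mapper = mapper;
    }

    // Returns the new result, or null for a no-op update that need
    // not be forwarded downstream.
    R apply(final V oldValue, final V newValue) {
        final R oldResult = oldValue == null ? null : mapper.apply(oldValue);
        final R newResult = newValue == null ? null : mapper.apply(newValue);
        return Objects.equals(oldResult, newResult) ? null : newResult;
    }
}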

However, we should verify that this really works as expected before we
decide on this KIP.


(3) I think we also need to think a little bit about the handling of
out-of-order data. Atm, I don't see any issue in particular, but it
would be great if everybody could think about out-of-order handling and
if/how it affects emit-on-change behavior. Also note that KIP-280
allows a timestamp-based compaction that might allow us to fix a
potential issue (in case there is one).


-Matthias


On 1/31/20 5:30 PM, John Roesler wrote:
> Hi Thomas and yuzhihong,
> 
> That’s an interesting idea. Can you help think of a use case that isn’t also 
> served by filtering or mapping beforehand?
> 
> Thanks for helping to design this feature!
> -John
> 
> On Fri, Jan 31, 2020, at 18:56, yuzhih...@gmail.com wrote:
>> I think this is a good idea. 
>>
>>> On Jan 31, 2020, at 4:49 PM, Thomas Becker  wrote:
>>>
>>> How do folks feel about allowing the mechanism by which no-ops are 
>>> detected to be pluggable? Meaning use something like a hash by default, but 
>>> you could optionally provide an implementation of something to use instead, 
>>> like a ChangeDetector. This could be useful for example to ignore changes 
>>> to certain fields, which may not be relevant to the operation being 
>>> performed.
>>> 
>>> From: John Roesler 
>>> Sent: Friday, January 31, 2020 4:51 PM
>>> To: dev@kafka.apache.org 
>>> Subject: Re: [KAFKA-557] Add emit on change support for Kafka Streams
>>>
>>> [EXTERNAL EMAIL] Attention: This email was sent from outside TiVo. DO NOT 
>>> CLICK any links or attachments unless you expected them.
>>> 
>>>
>>>
>>> Hello all,
>>>
>>> Sorry for my silence. It seems like we are getting close to consensus.
>>> Hopefully, we could move to a vote soon!
>>>
>>> All of the reasoning from Matthias and Bruno around timestamp is 
>>> compelling. I
>>> would be strongly in favor of stating a few things very clearly in the KIP:
>>> 1. Streams will drop no-op updates only for KTable operations.
>>>
>>>   That is, we won't make any changes to KStream aggregations at the moment. 
>>> It
>>>   does seem like we can potentially revisit the time semantics of that 
>>> operation
>>>   in the future, but we don't need to do it now.
>>>
>>>   On the other hand, the proposed semantics for KTable timestamps (marking 
>>> the
>>>   beginning of the validity of that record) makes sense to me.
>>>
>>> 2. Streams will only drop no-op updates for _stateful_ KTable operations.
>>>
>>>   We don't want to add a hard guarantee that Streams will _never_ emit a 
>>> no-op
>>>   table update because it would require adding state to otherwise stateless
>>>   operations. If someone is really concerned about a particular stateless
>>>   operation producing a lot of no-op results, all they have to do is
>>>   materialize it, and Streams would automatically drop the no-ops.
>>>
>>> Additionally, I'm +1 on not adding an opt-out at this time.
>>>
>>> Regarding the KIP itself, I would clean it up a bit before calling for a 
>>> vote.
>>> There is a lot of "discussion"-type language there, which is very natural to
>>> read, but makes it a bit hard to see what _exactly_ the kip is proposing.
>>>
>>> Richard, would you mind just making the "proposed behavior change" a simple 
>>> and
>>> succinct list of bullet points? I.e., please drop glue phrases like "there 
>>> has
>>> been some discussion" or "possibly we could do X". For the final version of 
>>> the
>>> KIP, it should just say, "Streams will do X, Streams will do Y". 

Build failed in Jenkins: kafka-2.5-jdk8 #1

2020-01-31 Thread Apache Jenkins Server
See 

Changes:


--
[...truncated 2.85 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 

Re: [KAFKA-557] Add emit on change support for Kafka Streams

2020-01-31 Thread John Roesler
Hi Thomas and yuzhihong,

That’s an interesting idea. Can you help think of a use case that isn’t also 
served by filtering or mapping beforehand?

Thanks for helping to design this feature!
-John

On Fri, Jan 31, 2020, at 18:56, yuzhih...@gmail.com wrote:
> I think this is a good idea. 
> 
> > On Jan 31, 2020, at 4:49 PM, Thomas Becker  wrote:
> > 
> > How do folks feel about allowing the mechanism by which no-ops are 
> > detected to be pluggable? Meaning use something like a hash by default, but 
> > you could optionally provide an implementation of something to use instead, 
> > like a ChangeDetector. This could be useful for example to ignore changes 
> > to certain fields, which may not be relevant to the operation being 
> > performed.
> > 
> > From: John Roesler 
> > Sent: Friday, January 31, 2020 4:51 PM
> > To: dev@kafka.apache.org 
> > Subject: Re: [KAFKA-557] Add emit on change support for Kafka Streams
> > 
> > [EXTERNAL EMAIL] Attention: This email was sent from outside TiVo. DO NOT 
> > CLICK any links or attachments unless you expected them.
> > 
> > 
> > 
> > Hello all,
> > 
> > Sorry for my silence. It seems like we are getting close to consensus.
> > Hopefully, we could move to a vote soon!
> > 
> > All of the reasoning from Matthias and Bruno around timestamp is 
> > compelling. I
> > would be strongly in favor of stating a few things very clearly in the KIP:
> > 1. Streams will drop no-op updates only for KTable operations.
> > 
> >   That is, we won't make any changes to KStream aggregations at the moment. 
> > It
> >   does seem like we can potentially revisit the time semantics of that 
> > operation
> >   in the future, but we don't need to do it now.
> > 
> >   On the other hand, the proposed semantics for KTable timestamps (marking 
> > the
> >   beginning of the validity of that record) makes sense to me.
> > 
> > 2. Streams will only drop no-op updates for _stateful_ KTable operations.
> > 
> >   We don't want to add a hard guarantee that Streams will _never_ emit a 
> > no-op
> >   table update because it would require adding state to otherwise stateless
> >   operations. If someone is really concerned about a particular stateless
> >   operation producing a lot of no-op results, all they have to do is
> >   materialize it, and Streams would automatically drop the no-ops.
> > 
> > Additionally, I'm +1 on not adding an opt-out at this time.
> > 
> > Regarding the KIP itself, I would clean it up a bit before calling for a 
> > vote.
> > There is a lot of "discussion"-type language there, which is very natural to
> > read, but makes it a bit hard to see what _exactly_ the kip is proposing.
> > 
> > Richard, would you mind just making the "proposed behavior change" a simple 
> > and
> > succinct list of bullet points? I.e., please drop glue phrases like "there 
> > has
> > been some discussion" or "possibly we could do X". For the final version of 
> > the
> > KIP, it should just say, "Streams will do X, Streams will do Y". Feel free 
> > to
> > add an elaboration section to explain more about what X and Y mean, but we 
> > don't
> > need to talk about possibilities or alternatives except in the "rejected
> > alternatives" section.
> > 
> > Accordingly, can you also move the options you presented in the intro to the
> > "rejected alternatives" section and only mention the final proposal itself?
> > 
> > This just really helps reviewers to know what they are voting for, and it 
> > helps
> > everyone after the fact when they are trying to get clarity on what exactly 
> > the
> > proposal is, versus all the things it could have been.
> > 
> > Thanks,
> > -John
> > 
> > 
> >> On Mon, Jan 27, 2020, at 18:14, Richard Yu wrote:
> >> Hello to all,
> >> 
> >> I've finished making some initial modifications to the KIP.
> >> I have decided to keep the implementation section in the KIP for
> >> record-keeping purposes.
> >> 
> >> For now, we should focus on only the proposed behavior changes instead.
> >> 
> >> See if you have any comments!
> >> 
> >> Cheers,
> >> Richard
> >> 
> >> On Sat, Jan 25, 2020 at 11:12 AM Richard Yu 
> >> wrote:
> >> 
> >>> Hi all,
> >>> 
> >>> Thanks for all the discussion!
> >>> 
> >>> @John and @Bruno I will survey other possible systems and see what I can
> >>> do.
> >>> Just a question, by systems, I suppose you would mean the pros and cons of
> >>> different reporting strategies?
> >>> 
> >>> I'm not completely certain on this point, so it would be great if you can
> >>> clarify on that.
> >>> 
> >>> So here's what I got from all the discussion so far:
> >>> 
> >>>   - Since both Matthias and John seem to have come to a consensus on
> >>>   this, then we will go for an all-round behavioral change for KTables. 
> >>> After
> >>>   some thought, I decided that for now, an opt-out config will not be 
> >>> added.
> >>>   As John has pointed out, no-op changes tend to explode further down the
> >>> 

Re: [KAFKA-557] Add emit on change support for Kafka Streams

2020-01-31 Thread yuzhihong
I think this is a good idea. 

> On Jan 31, 2020, at 4:49 PM, Thomas Becker  wrote:
> 
> How do folks feel about allowing the mechanism by which no-ops are detected 
> to be pluggable? Meaning use something like a hash by default, but you could 
> optionally provide an implementation of something to use instead, like a 
> ChangeDetector. This could be useful for example to ignore changes to certain 
> fields, which may not be relevant to the operation being performed.
> 
> From: John Roesler 
> Sent: Friday, January 31, 2020 4:51 PM
> To: dev@kafka.apache.org 
> Subject: Re: [KAFKA-557] Add emit on change support for Kafka Streams
> 
> [EXTERNAL EMAIL] Attention: This email was sent from outside TiVo. DO NOT 
> CLICK any links or attachments unless you expected them.
> 
> 
> 
> Hello all,
> 
> Sorry for my silence. It seems like we are getting close to consensus.
> Hopefully, we could move to a vote soon!
> 
> All of the reasoning from Matthias and Bruno around timestamp is compelling. I
> would be strongly in favor of stating a few things very clearly in the KIP:
> 1. Streams will drop no-op updates only for KTable operations.
> 
>   That is, we won't make any changes to KStream aggregations at the moment. It
>   does seem like we can potentially revisit the time semantics of that 
> operation
>   in the future, but we don't need to do it now.
> 
>   On the other hand, the proposed semantics for KTable timestamps (marking the
>   beginning of the validity of that record) makes sense to me.
> 
> 2. Streams will only drop no-op updates for _stateful_ KTable operations.
> 
>   We don't want to add a hard guarantee that Streams will _never_ emit a no-op
>   table update because it would require adding state to otherwise stateless
>   operations. If someone is really concerned about a particular stateless
>   operation producing a lot of no-op results, all they have to do is
>   materialize it, and Streams would automatically drop the no-ops.
> 
> Additionally, I'm +1 on not adding an opt-out at this time.
> 
> Regarding the KIP itself, I would clean it up a bit before calling for a vote.
> There is a lot of "discussion"-type language there, which is very natural to
> read, but makes it a bit hard to see what _exactly_ the kip is proposing.
> 
> Richard, would you mind just making the "proposed behavior change" a simple 
> and
> succinct list of bullet points? I.e., please drop glue phrases like "there has
> been some discussion" or "possibly we could do X". For the final version of 
> the
> KIP, it should just say, "Streams will do X, Streams will do Y". Feel free to
> add an elaboration section to explain more about what X and Y mean, but we 
> don't
> need to talk about possibilities or alternatives except in the "rejected
> alternatives" section.
> 
> Accordingly, can you also move the options you presented in the intro to the
> "rejected alternatives" section and only mention the final proposal itself?
> 
> This just really helps reviewers to know what they are voting for, and it 
> helps
> everyone after the fact when they are trying to get clarity on what exactly 
> the
> proposal is, versus all the things it could have been.
> 
> Thanks,
> -John
> 
> 
>> On Mon, Jan 27, 2020, at 18:14, Richard Yu wrote:
>> Hello to all,
>> 
>> I've finished making some initial modifications to the KIP.
>> I have decided to keep the implementation section in the KIP for
>> record-keeping purposes.
>> 
>> For now, we should focus on only the proposed behavior changes instead.
>> 
>> See if you have any comments!
>> 
>> Cheers,
>> Richard
>> 
>> On Sat, Jan 25, 2020 at 11:12 AM Richard Yu 
>> wrote:
>> 
>>> Hi all,
>>> 
>>> Thanks for all the discussion!
>>> 
>>> @John and @Bruno I will survey other possible systems and see what I can
>>> do.
>>> Just a question, by systems, I suppose you would mean the pros and cons of
>>> different reporting strategies?
>>> 
>>> I'm not completely certain on this point, so it would be great if you can
>>> clarify on that.
>>> 
>>> So here's what I got from all the discussion so far:
>>> 
>>>   - Since both Matthias and John seem to have come to a consensus on
>>>   this, then we will go for an all-round behavioral change for KTables. 
>>> After
>>>   some thought, I decided that for now, an opt-out config will not be added.
>>>   As John has pointed out, no-op changes tend to explode further down the
>>>   topology as they are forwarded to more and more processor nodes 
>>> downstream.
>>>   - About using hash codes, after some explanation from John, it looks
>>>   like hash codes might not be as ideal (for implementation). For now, we
>>>   will omit that detail, and save it for the PR.
>>>   - @Bruno You do have valid concerns. Though, I am not completely
>>>   certain if we want to do emit-on-change only for materialized KTables. I
>>>   will put it down in the KIP regardless.
>>> 
>>> I will do my best to address all 

Re: [KAFKA-557] Add emit on change support for Kafka Streams

2020-01-31 Thread Thomas Becker
How do folks feel about allowing the mechanism by which no-ops are detected to 
be pluggable? Meaning use something like a hash by default, but you could 
optionally provide an implementation of something to use instead, like a 
ChangeDetector. This could be useful for example to ignore changes to certain 
fields, which may not be relevant to the operation being performed.
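
For illustration, such a hook could be as small as a single-method
interface. A rough sketch (the interface name and shape below are
assumptions, not a worked-out API):

import java.util.Objects;

// Hypothetical pluggable no-op detector.
interface ChangeDetector<V> {
    // Return true if newValue is a material change relative to oldValue.
    boolean isChanged(V oldValue, V newValue);
}

// A possible default: plain equality (a hash comparison would work too).
class EqualityChangeDetector<V> implements ChangeDetector<V> {
    @Override
    public boolean isChanged(final V oldValue, final V newValue) {
        return !Objects.equals(oldValue, newValue);
    }
}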

From: John Roesler 
Sent: Friday, January 31, 2020 4:51 PM
To: dev@kafka.apache.org 
Subject: Re: [KAFKA-557] Add emit on change support for Kafka Streams

[EXTERNAL EMAIL] Attention: This email was sent from outside TiVo. DO NOT CLICK 
any links or attachments unless you expected them.



Hello all,

Sorry for my silence. It seems like we are getting close to consensus.
Hopefully, we could move to a vote soon!

All of the reasoning from Matthias and Bruno around timestamp is compelling. I
would be strongly in favor of stating a few things very clearly in the KIP:
1. Streams will drop no-op updates only for KTable operations.

   That is, we won't make any changes to KStream aggregations at the moment. It
   does seem like we can potentially revisit the time semantics of that 
operation
   in the future, but we don't need to do it now.

   On the other hand, the proposed semantics for KTable timestamps (marking the
   beginning of the validity of that record) makes sense to me.

2. Streams will only drop no-op updates for _stateful_ KTable operations.

   We don't want to add a hard guarantee that Streams will _never_ emit a no-op
   table update because it would require adding state to otherwise stateless
   operations. If someone is really concerned about a particular stateless
   operation producing a lot of no-op results, all they have to do is
   materialize it, and Streams would automatically drop the no-ops.

Additionally, I'm +1 on not adding an opt-out at this time.

Regarding the KIP itself, I would clean it up a bit before calling for a vote.
There is a lot of "discussion"-type language there, which is very natural to
read, but makes it a bit hard to see what _exactly_ the kip is proposing.

Richard, would you mind just making the "proposed behavior change" a simple and
succinct list of bullet points? I.e., please drop glue phrases like "there has
been some discussion" or "possibly we could do X". For the final version of the
KIP, it should just say, "Streams will do X, Streams will do Y". Feel free to
add an elaboration section to explain more about what X and Y mean, but we don't
need to talk about possibilities or alternatives except in the "rejected
alternatives" section.

Accordingly, can you also move the options you presented in the intro to the
"rejected alternatives" section and only mention the final proposal itself?

This just really helps reviewers to know what they are voting for, and it helps
everyone after the fact when they are trying to get clarity on what exactly the
proposal is, versus all the things it could have been.

Thanks,
-John


On Mon, Jan 27, 2020, at 18:14, Richard Yu wrote:
> Hello to all,
>
> I've finished making some initial modifications to the KIP.
> I have decided to keep the implementation section in the KIP for
> record-keeping purposes.
>
> For now, we should focus on only the proposed behavior changes instead.
>
> See if you have any comments!
>
> Cheers,
> Richard
>
> On Sat, Jan 25, 2020 at 11:12 AM Richard Yu 
> wrote:
>
> > Hi all,
> >
> > Thanks for all the discussion!
> >
> > @John and @Bruno I will survey other possible systems and see what I can
> > do.
> > Just a question, by systems, I suppose you would mean the pros and cons of
> > different reporting strategies?
> >
> > I'm not completely certain on this point, so it would be great if you can
> > clarify on that.
> >
> > So here's what I got from all the discussion so far:
> >
> >- Since both Matthias and John seem to have come to a consensus on
> >this, then we will go for an all-round behavioral change for KTables. 
> > After
> >some thought, I decided that for now, an opt-out config will not be 
> > added.
> >As John has pointed out, no-op changes tend to explode further down the
> >topology as they are forwarded to more and more processor nodes 
> > downstream.
> >- About using hash codes, after some explanation from John, it looks
> >like hash codes might not be as ideal (for implementation). For now, we
> >will omit that detail, and save it for the PR.
> >- @Bruno You do have valid concerns. Though, I am not completely
> >certain if we want to do emit-on-change only for materialized KTables. I
> >will put it down in the KIP regardless.
> >
> > I will do my best to address all points raised so far in the discussion.
> > Hope we can keep this going!
> >
> > Best,
> > Richard
> >
> > On Fri, Jan 24, 2020 at 6:07 PM Bruno Cadonna  wrote:
> >
> >> Thank you Matthias for the use cases!
> >>
> >> Looking at both 

Re: New release branch 2.5

2020-01-31 Thread David Arthur
To clarify one point: all bug fixes are welcome on the release branch until
the code freeze on Feb 12th. After that, only blocker bugs should be merged
to the release branch. I will not be moving JIRAs to 2.6 until after the
code freeze.

-David


On Fri, Jan 31, 2020 at 5:21 PM David Arthur  wrote:

> Hello Kafka developers and friends,
>
> We now have a release branch for the 2.5 release. The branch name is "2.5"
> and the version will be "2.5.0". Trunk will shortly be bumped to the
> next snapshot version 2.6.0-SNAPSHOT
> https://github.com/apache/kafka/pull/8026
>
> I'll be going over the JIRAs to move every non-blocker from this release to
> the next release.
>
> From this point, most changes should go to trunk.
>
> Blockers (existing and new that we discover while testing the release)
> will be committed to both trunk and the 2.5 release branch.
>
> Please discuss with your reviewer whether your PR should go to trunk or to
> trunk+release so they can merge accordingly.
>
> As always, please help us test the release!
>
> Thanks!
> David Arthur
>


-- 
David Arthur


[jira] [Created] (KAFKA-9490) Some factory methods in Grouped are missing generic parameters

2020-01-31 Thread Dariusz Kordonski (Jira)
Dariusz Kordonski created KAFKA-9490:


 Summary: Some factory methods in Grouped are missing generic 
parameters
 Key: KAFKA-9490
 URL: https://issues.apache.org/jira/browse/KAFKA-9490
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.4.0
Reporter: Dariusz Kordonski


The following methods in the {{Grouped}} class seem to be missing generic 
parameters {{<K, V>}} in the declared return type:
{code:java}
public static <K> Grouped keySerde(final Serde<K> keySerde) {
    return new Grouped<>(null, keySerde, null);
}

public static <V> Grouped valueSerde(final Serde<V> valueSerde) {
    return new Grouped<>(null, null, valueSerde);
} {code}
I think in both cases it should be:
{code:java}
public static <K, V> Grouped<K, V> ...() {code}
This causes "unchecked call" compiler warnings when called by clients.
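
For example, a call site like the following (types and serdes are
illustrative) only compiles with an unchecked-conversion warning:
{code:java}
// Illustrative caller: the raw Grouped return type forces an
// unchecked conversion to the parameterized type.
Grouped<String, Long> grouped = Grouped.keySerde(Serdes.String());
{code}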



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


New release branch 2.5

2020-01-31 Thread David Arthur
Hello Kafka developers and friends,

We now have a release branch for the 2.5 release. The branch name is "2.5"
and the version will be "2.5.0". Trunk will shortly be bumped to the
next snapshot version 2.6.0-SNAPSHOT
https://github.com/apache/kafka/pull/8026

I'll be going over the JIRAs to move every non-blocker from this release to
the next release.

From this point, most changes should go to trunk.

Blockers (existing and new that we discover while testing the release) will
be committed to both trunk and the 2.5 release branch.

Please discuss with your reviewer whether your PR should go to trunk or to
trunk+release so they can merge accordingly.

As always, please help us test the release!

Thanks!
David Arthur


Re: [DISCUSS] Apache Kafka 2.5.0 release

2020-01-31 Thread David Arthur
Thanks! I've updated the list.

On Thu, Jan 30, 2020 at 5:48 PM Konstantine Karantasis <
konstant...@confluent.io> wrote:

> Hi David,
>
> thanks for driving the release.
>
> Please also remove KIP-158 from the list of KIPs that you plan to include
> in 2.5.
> KIP-158 has been accepted, but the implementation is not yet final. It
> will be included in the release that follows 2.5.
>
> Regards,
> Konstantine
>
> On 1/30/20, Matthias J. Sax  wrote:
> > Hi David,
> >
> > the following KIP from the list did not make it:
> >
> >  - KIP-216 (no PR yet)
> >  - KIP-399 (no PR yet)
> >  - KIP-401 (PR not merged yet)
> >
> >
> > KIP-444 should be included as we did make progress, but it is still not
> > fully implemented and we need to finish it in the 2.6 release.
> >
> > KIP-447 is partially implemented in 2.5 (i.e., broker and
> > consumer/producer changes -- the Kafka Streams parts slipped)
> >
> >
> > -Matthias
> >
> >
> > On 1/29/20 9:05 AM, David Arthur wrote:
> >> Hey everyone, just a quick update on the 2.5 release.
> >>
> >> I have updated the list of planned KIPs on the release wiki page
> >>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=143428858
> .
> >> If I have missed anything, or there are KIPs included in this list which
> >> should *not* be included in 2.5, please let me know.
> >>
> >> Based on the release schedule, the feature freeze is today, Jan 29th.
> Any
> >> major feature work that is not already complete will need to push out to
> >> 2.6. I will work on cutting the release branch during the day tomorrow
> >> (Jan
> >> 30th).
> >>
> >> If you have any questions, please feel free to reach out to me directly
> >> or
> >> in this thread.
> >>
> >> Thanks!
> >> David
> >>
> >> On Mon, Jan 13, 2020 at 1:35 PM Colin McCabe 
> wrote:
> >>
> >>> +1.  Thanks for volunteering, David.
> >>>
> >>> best,
> >>> Colin
> >>>
> >>> On Fri, Dec 20, 2019, at 10:59, David Arthur wrote:
>  Greetings!
> 
>  I'd like to volunteer to be release manager for the next time-based
> >>> feature
>  release which will be 2.5. If there are no objections, I'll send out
>  the
>  release plan in the next few days.
> 
>  Thanks,
>  David Arthur
> 
> >>>
> >>
> >>
> >
> >
>


-- 
David Arthur


[jira] [Resolved] (KAFKA-9437) KIP-559: Make the Kafka Protocol Friendlier with L7 Proxies

2020-01-31 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-9437.

Fix Version/s: 2.5.0
   Resolution: Fixed

> KIP-559: Make the Kafka Protocol Friendlier with L7 Proxies
> ---
>
> Key: KAFKA-9437
> URL: https://issues.apache.org/jira/browse/KAFKA-9437
> Project: Kafka
>  Issue Type: Improvement
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
> Fix For: 2.5.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [KAFKA-557] Add emit on change support for Kafka Streams

2020-01-31 Thread John Roesler
Hello all,

Sorry for my silence. It seems like we are getting close to consensus.
Hopefully, we could move to a vote soon!

All of the reasoning from Matthias and Bruno around timestamp is compelling. I
would be strongly in favor of stating a few things very clearly in the KIP:
1. Streams will drop no-op updates only for KTable operations.

   That is, we won't make any changes to KStream aggregations at the moment. It
   does seem like we can potentially revisit the time semantics of that 
operation
   in the future, but we don't need to do it now.

   On the other hand, the proposed semantics for KTable timestamps (marking the
   beginning of the validity of that record) makes sense to me.

2. Streams will only drop no-op updates for _stateful_ KTable operations.
   
   We don't want to add a hard guarantee that Streams will _never_ emit a no-op
   table update because it would require adding state to otherwise stateless
   operations. If someone is really concerned about a particular stateless
   operation producing a lot of no-op results, all they have to do is
   materialize it, and Streams would automatically drop the no-ops.
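
   As a rough illustration (topology and store name are made up),
   materializing such an operator could look like:

   // Assuming: KStream<String, Long> input
   KTable<String, Long> counts = input.groupByKey().count();

   // Materializing the filter gives Streams a prior value to compare
   // against, so no-op updates can be dropped automatically.
   KTable<String, Long> filtered = counts.filter(
       (key, value) -> value > 10,
       Materialized.as("filtered-store"));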

Additionally, I'm +1 on not adding an opt-out at this time.

Regarding the KIP itself, I would clean it up a bit before calling for a vote.
There is a lot of "discussion"-type language there, which is very natural to
read, but makes it a bit hard to see what _exactly_ the kip is proposing.

Richard, would you mind just making the "proposed behavior change" a simple and
succinct list of bullet points? I.e., please drop glue phrases like "there has
been some discussion" or "possibly we could do X". For the final version of the
KIP, it should just say, "Streams will do X, Streams will do Y". Feel free to
add an elaboration section to explain more about what X and Y mean, but we don't
need to talk about possibilities or alternatives except in the "rejected
alternatives" section.

Accordingly, can you also move the options you presented in the intro to the
"rejected alternatives" section and only mention the final proposal itself?

This just really helps reviewers to know what they are voting for, and it helps
everyone after the fact when they are trying to get clarity on what exactly the
proposal is, versus all the things it could have been.

Thanks,
-John


On Mon, Jan 27, 2020, at 18:14, Richard Yu wrote:
> Hello to all,
> 
> I've finished making some initial modifications to the KIP.
> I have decided to keep the implementation section in the KIP for
> record-keeping purposes.
> 
> For now, we should focus on only the proposed behavior changes instead.
> 
> See if you have any comments!
> 
> Cheers,
> Richard
> 
> On Sat, Jan 25, 2020 at 11:12 AM Richard Yu 
> wrote:
> 
> > Hi all,
> >
> > Thanks for all the discussion!
> >
> > @John and @Bruno I will survey other possible systems and see what I can
> > do.
> > Just a question, by systems, I suppose you would mean the pros and cons of
> > different reporting strategies?
> >
> > I'm not completely certain on this point, so it would be great if you can
> > clarify on that.
> >
> > So here's what I got from all the discussion so far:
> >
> >- Since both Matthias and John seem to have come to a consensus on
> >this, then we will go for an all-round behavioral change for KTables. 
> > After
> >some thought, I decided that for now, an opt-out config will not be 
> > added.
> >As John has pointed out, no-op changes tend to explode further down the
> >topology as they are forwarded to more and more processor nodes 
> > downstream.
> >- About using hash codes, after some explanation from John, it looks
> >like hash codes might not be as ideal (for implementation). For now, we
> >will omit that detail, and save it for the PR.
> >- @Bruno You do have valid concerns. Though, I am not completely
> >certain if we want to do emit-on-change only for materialized KTables. I
> >will put it down in the KIP regardless.
> >
> > I will do my best to address all points raised so far in the discussion.
> > Hope we can keep this going!
> >
> > Best,
> > Richard
> >
> > On Fri, Jan 24, 2020 at 6:07 PM Bruno Cadonna  wrote:
> >
> >> Thank you Matthias for the use cases!
> >>
> >> Looking at both use cases, I think you need to elaborate on them in
> >> the KIP, Richard.
> >>
> >> Emit from plain KTable:
> >> I agree with Matthias that the lower timestamp makes sense because it
> >> marks the start of the validity of the record. Idempotent records with
> >> a higher timestamp can be safely ignored. A corner case that I
> >> discussed with Matthias offline is when we do not materialize a KTable
> >> due to optimization. Then we cannot avoid the idempotent records
> >> because we do not keep the first record with the lower timestamp to
> >> compare to.
> >>
> >> Emit from KTable with aggregations:
> >> If we specify that an aggregation result should have the highest
> >> timestamp of the records that 

Build failed in Jenkins: kafka-trunk-jdk11 #1123

2020-01-31 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Introduce 2.5-IV0 IBP (#8010)

[github] KAFKA-9375: Add names to all Connect threads (#7901)


--
[...truncated 2.86 MB...]
org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 

Build failed in Jenkins: kafka-trunk-jdk8 #4198

2020-01-31 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Introduce 2.5-IV0 IBP (#8010)


--
[...truncated 2.84 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task 

Build failed in Jenkins: kafka-2.4-jdk8 #133

2020-01-31 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-8764: LogCleanerManager endless loop while compacting/cleaning


--
[...truncated 5.49 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED


[jira] [Resolved] (KAFKA-9375) Add thread names to kafka connect threads

2020-01-31 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-9375.
---
Fix Version/s: 2.5.0
   Resolution: Fixed

> Add thread names to kafka connect threads
> -
>
> Key: KAFKA-9375
> URL: https://issues.apache.org/jira/browse/KAFKA-9375
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 2.4.0
>Reporter: karan kumar
>Assignee: karan kumar
>Priority: Minor
> Fix For: 2.5.0
>
>
> Taking stack dumps reveals that some threads in the Kafka Connect framework
> are not named.
> This change adds names for those threads.
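
A minimal sketch of the kind of change the issue describes, assuming a standard executor setup: naming pool threads through a ThreadFactory so that stack dumps show meaningful names. The class and thread names below are illustrative, not the actual patch.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedThreadFactoryExample {
    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(1);
        // Give each pool thread a descriptive name so that stack dumps show
        // e.g. "connect-task-thread-1" instead of the default "pool-1-thread-1".
        ThreadFactory factory = runnable -> {
            Thread t = new Thread(runnable);
            t.setName("connect-task-thread-" + counter.getAndIncrement());
            return t;
        };
        ExecutorService executor = Executors.newFixedThreadPool(2, factory);
        executor.submit(() -> System.out.println(Thread.currentThread().getName()));
        executor.shutdown();
    }
}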



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9489) Remove @InterfaceStability.Evolving from KafkaAdminClient

2020-01-31 Thread Raymond Ng (Jira)
Raymond Ng created KAFKA-9489:
-

 Summary: Remove @InterfaceStability.Evolving from KafkaAdminClient
 Key: KAFKA-9489
 URL: https://issues.apache.org/jira/browse/KAFKA-9489
 Project: Kafka
  Issue Type: Task
  Components: admin
Reporter: Raymond Ng
Assignee: Raymond Ng


KafkaAdminClient is currently marked as @InterfaceStability.Evolving.

It was marked as such in June 2017. At this point the interface is considered
stable, hence the removal of this annotation.
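
For context, a simplified sketch of what the annotation looks like on the class, assuming Kafka's org.apache.kafka.common.annotation.InterfaceStability annotation; the real class lives in org.apache.kafka.clients.admin and its body is elided here. The change the issue proposes is simply deleting the @InterfaceStability.Evolving line.

import org.apache.kafka.common.annotation.InterfaceStability;

// Before KAFKA-9489 (simplified sketch): the Evolving marker warns callers
// that this API may change incompatibly between minor releases. Removing
// the annotation declares the interface stable.
@InterfaceStability.Evolving
public class KafkaAdminClient {
    // ... implementation elided ...
}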



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-2.3-jdk8 #167

2020-01-31 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-8764: LogCleanerManager endless loop while compacting/cleaning


--
[...truncated 2.98 MB...]

kafka.log.ProducerStateManagerTest > testOldEpochForControlRecord PASSED

kafka.log.ProducerStateManagerTest > 
testTruncateAndReloadRemovesOutOfRangeSnapshots STARTED

kafka.log.ProducerStateManagerTest > 
testTruncateAndReloadRemovesOutOfRangeSnapshots PASSED

kafka.log.ProducerStateManagerTest > testStartOffset STARTED

kafka.log.ProducerStateManagerTest > testStartOffset PASSED

kafka.log.ProducerStateManagerTest > testProducerSequenceInvalidWrapAround 
STARTED

kafka.log.ProducerStateManagerTest > testProducerSequenceInvalidWrapAround 
PASSED

kafka.log.ProducerStateManagerTest > testTruncateHead STARTED

kafka.log.ProducerStateManagerTest > testTruncateHead PASSED

kafka.log.ProducerStateManagerTest > 
testNonTransactionalAppendWithOngoingTransaction STARTED

kafka.log.ProducerStateManagerTest > 
testNonTransactionalAppendWithOngoingTransaction PASSED

kafka.log.ProducerStateManagerTest > testSkipSnapshotIfOffsetUnchanged STARTED

kafka.log.ProducerStateManagerTest > testSkipSnapshotIfOffsetUnchanged PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[20] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[20] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[21] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[21] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[22] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[22] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[23] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[23] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[24] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[24] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[25] STARTED

kafka.log.BrokerCompressionTest > 

Jenkins build is back to normal : kafka-2.2-jdk8 #23

2020-01-31 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-9488) Dangling CountDownLatch.await(timeout)

2020-01-31 Thread Roman Leventov (Jira)
Roman Leventov created KAFKA-9488:
-

 Summary: Dangling CountDownLatch.await(timeout)
 Key: KAFKA-9488
 URL: https://issues.apache.org/jira/browse/KAFKA-9488
 Project: Kafka
  Issue Type: Bug
Reporter: Roman Leventov


There are 12 occurrences in the codebase (11 in tests and 1 in production code,
in the WorkerSourceTask class) where the result of CountDownLatch.await(timeout,
TimeUnit) is not checked. This is like not checking the result of File.delete().
The common fix is to wrap the CountDownLatch.await() call in assertTrue().

All of these places can be found using the following structural search in
IntelliJ:

$x$.await($y$, $z$);

with a "CountDownLatch" type constraint on the $x$ variable.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-2.2-jdk8-old #204

2020-01-31 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-8764: LogCleanerManager endless loop while compacting/cleaning


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H40 (ubuntu xenial) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/2.2^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/2.2^{commit} # timeout=10
Checking out Revision aa6d325e40509504fad84b69047371c9b9cccf9f 
(refs/remotes/origin/2.2)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f aa6d325e40509504fad84b69047371c9b9cccf9f
Commit message: "KAFKA-8764: LogCleanerManager endless loop while 
compacting/cleaning (#7932)"
 > git rev-list --no-walk b706d93de720b3ad8a8478aae85c61ff001c7e64 # timeout=10
ERROR: No tool found matching GRADLE_4_8_1_HOME
[kafka-2.2-jdk8-old] $ /bin/bash -xe /tmp/jenkins3575101357368039285.sh
+ rm -rf 
+ /bin/gradle
/tmp/jenkins3575101357368039285.sh: line 4: /bin/gradle: No such file or 
directory
Build step 'Execute shell' marked build as failure
[FINDBUGS] Collecting findbugs analysis files...
ERROR: No tool found matching GRADLE_4_8_1_HOME
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
ERROR: No tool found matching GRADLE_4_8_1_HOME
No credentials specified
ERROR: No tool found matching GRADLE_4_8_1_HOME
 Using GitBlamer to create author and commit information for all 
warnings.
 GIT_COMMIT=aa6d325e40509504fad84b69047371c9b9cccf9f, 
workspace=
[FINDBUGS] Computing warning deltas based on reference build #175
Recording test results
ERROR: No tool found matching GRADLE_4_8_1_HOME
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
ERROR: No tool found matching GRADLE_4_8_1_HOME
Not sending mail to unregistered user wangg...@gmail.com
Not sending mail to unregistered user ism...@juma.me.uk
Not sending mail to unregistered user j...@confluent.io
Not sending mail to unregistered user nore...@github.com
Not sending mail to unregistered user b...@confluent.io


Build failed in Jenkins: kafka-trunk-jdk8 #4197

2020-01-31 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-8503; Add default api timeout to AdminClient (KIP-533) (#8011)


--
[...truncated 5.75 MB...]
org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:testClasses
> Task :streams:upgrade-system-tests-0101:checkstyleTest
> Task :streams:upgrade-system-tests-0101:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:test
> Task :streams:upgrade-system-tests-0102:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0102:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0102:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:compileTestJava
> Task :streams:upgrade-system-tests-0102:processTestResources NO-SOURCE
> 

[jira] [Created] (KAFKA-9487) Followup : KAFKA-9445

2020-01-31 Thread Navinder Brar (Jira)
Navinder Brar created KAFKA-9487:


 Summary: Followup : KAFKA-9445
 Key: KAFKA-9487
 URL: https://issues.apache.org/jira/browse/KAFKA-9487
 Project: Kafka
  Issue Type: New Feature
  Components: streams
Reporter: Navinder Brar
Assignee: Navinder Brar






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk11 #1122

2020-01-31 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-8503; Add default api timeout to AdminClient (KIP-533) (#8011)


--
[...truncated 2.86 MB...]
org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp PASSED


Re: [VOTE] KIP-373: Allow users to create delegation tokens for other users

2020-01-31 Thread Viktor Somogyi-Vass
Hi All,

As a few days have passed and we have the required number of binding votes,
the KIP has passed.
Thank you to all who have voted; I'll post the PR for this soon!
Binding votes: Manikumar, Harsha, Jun
Non-binding ones: Ryanne

Thanks,
Viktor
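
For reference, a sketch of what creating a delegation token for another user could look like under this KIP's proposal. The owner(...) option is the KIP's proposed addition to CreateDelegationTokenOptions, not a released API, and the principal name is illustrative; the rest uses the existing AdminClient delegation-token calls.

import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.CreateDelegationTokenOptions;
import org.apache.kafka.common.security.auth.KafkaPrincipal;
import org.apache.kafka.common.security.token.delegation.DelegationToken;

public class CreateTokenForOtherUser {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // owner(...) is the option this KIP proposes for creating a token
            // on behalf of another user; it is not part of the released API.
            CreateDelegationTokenOptions options = new CreateDelegationTokenOptions()
                    .owner(new KafkaPrincipal(KafkaPrincipal.USER_TYPE, "alice"));
            DelegationToken token = admin.createDelegationToken(options)
                    .delegationToken()
                    .get();
            System.out.println("Token ID: " + token.tokenInfo().tokenId());
        }
    }
}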

On Tue, Jan 28, 2020 at 10:56 AM Viktor Somogyi-Vass <
viktorsomo...@gmail.com> wrote:

> Hi Rajini,
>
> I rebased my older PR and double checked it. It'll work with a new
> resource type without adding new fields to the ACL admin client APIs. As I
> mentioned, though, it would be good to increment their version to allow
> more graceful handling of protocol compatibility: an older broker won't
> know about the User resource type and will probably fail with a
> serialization error, whereas if the protocol versions match, the client
> can detect that it's an older broker and reject the request up front. I'll
> append this to the KIP.
> Please let me know if we're good to continue with this.
>
> Best,
> Viktor
>
> On Mon, Jan 20, 2020 at 5:45 PM Viktor Somogyi-Vass <
> viktorsomo...@gmail.com> wrote:
>
>> Hi Rajini,
>>
>> 1) I think we can keep the conventions in the tool. As a bonus, we
>> wouldn't have to reserve certain characters (for building the list).
>> 2) Yes, so based on 1) and this, --users changes to --user-principal (and
>> accepts a single user principal).
>> 3) Looking at it again, we'll probably want to increase the version of the
>> ACL protocols, as new resource and operation types are being added and
>> currently sending such requests to old brokers would result in
>> serialization errors; it would be nicer to handle them in the API
>> handshake. Beyond this I don't see anything else we need to do, as these
>> operations should be able to handle the changes at the code level. I'll
>> make sure to test this ACL scenario and report back on it (although I
>> need a few days, as the code I have is very old and contains a lot of
>> conflicts with the current trunk). Please let me know if I'm missing
>> something here.
>>
>> Thanks,
>> Viktor
>>
>> On Fri, Jan 17, 2020 at 5:23 PM Rajini Sivaram 
>> wrote:
>>
>>> Hi Viktor,
>>>
>>> Thanks for the KIP. A few questions:
>>>
>>> 1) kafka-acls.sh has options like *--topic* that specify a single
>>> topic.
>>> Is there a reason why we want to have *--users* instead of *--user* with
>>> a single user?
>>> 2) We use user principal rather than just the name everywhere else. Can
>>> we
>>> do the same here, or do we not want to treat this as a principal?
>>> 3) If we update AclCommand, don't we also need equivalent AdminClient
>>> changes to configure this ACL? I believe we are deprecating ZK-based ACL
>>> updates, so we need to add this to AdminClient?
>>>
>>> Regards,
>>>
>>> Rajini
>>>
>>> On Fri, Jan 17, 2020 at 3:15 PM Viktor Somogyi-Vass <
>>> viktorsomo...@gmail.com>
>>> wrote:
>>>
>>> > Hi Jun & Richard,
>>> >
>>> > Jun, thanks for your feedback and vote.
>>> >
>>> > 100. Thanks, I'll correct that.
>>> >
>>> > 101. (@Richard) in this case the principal names will be something like
>>> > "CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown"
>>> > unless a principal mapping or builder is defined (refer to [1]). I think
>>> > Jun was referring to this case, which is correct; a semicolon seems to
>>> > be a better fit here.
>>> >
>>> > Viktor
>>> >
>>> > [1] https://docs.confluent.io/current/kafka/authorization.html
>>> >
>>> > On Thu, Jan 16, 2020 at 11:45 PM Richard Yu <
>>> > yohan.richard...@gmail.com>
>>> > wrote:
>>> >
>>> > > Hi Jun,
>>> > >
>>> > > Can the SSL username really include a comma?
>>> > >
>>> > > From what I could tell, when I searched it up, I couldn't find
>>> > > anything that indicated a comma can be a delimiter.
>>> > > A related doc below:
>>> > > https://knowledge.digicert.com/solution/SO12401.html
>>> > >
>>> > > Cheers,
>>> > > Richard
>>> > >
>>> > >
>>> > >
>>> > >
>>> > > On Thu, Jan 16, 2020 at 1:37 PM Jun Rao  wrote:
>>> > >
>>> > > > Hi, Viktor,
>>> > > >
>>> > > > Thanks for the KIP. +1 from me. Just a couple of minor comments
>>> > > > below.
>>> > > >
>>> > > > 100. CreateDelegationTokenResponse/DescribeDelegationTokenResponse.
>>> > > > It seems that "validVersions" should be "0-2".
>>> > > >
>>> > > > 101. The option --users "owner1,owner2" in AclCommand. Since an SSL
>>> > > > user name can include a comma, perhaps we could use a semicolon as
>>> > > > the separator.
>>> > > >
>>> > > > Jun
>>> > > >
>>> > > > On Wed, Jan 15, 2020 at 2:11 AM Viktor Somogyi-Vass <
>>> > > > viktorsomo...@gmail.com>
>>> > > > wrote:
>>> > > >
>>> > > > > Hey folks, bumping this again as KIP freeze is nearing and I hope
>>> > > > > to get this into the next release.
>>> > > > > We need only one binding vote.
>>> > > > >
>>> > > > > Thanks,
>>> > > > > Viktor
>>> > > > >
>>> > > > > On Thu, Jan 9, 2020 at 1:56 PM Viktor Somogyi-Vass <
>>> > > > > viktorsomo...@gmail.com>
>>> > > > > wrote:
>>> > > > >
>>> > > > > > Bumping this in the hope of