[jira] [Resolved] (KAFKA-9343) Add ps command for Kafka and zookeeper process on z/OS.

2020-07-21 Thread Shuo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shuo Zhang resolved KAFKA-9343.
---

> Add ps command for Kafka and zookeeper process on z/OS.
> ---
>
> Key: KAFKA-9343
> URL: https://issues.apache.org/jira/browse/KAFKA-9343
> Project: Kafka
>  Issue Type: Task
>  Components: tools
>Affects Versions: 2.4.0
> Environment: z/OS, OS/390
>Reporter: Shuo Zhang
>Priority: Major
>  Labels: OS/390, z/OS
> Fix For: 2.4.2, 2.5.0
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> +Note: since the final change scope changed, I changed the summary and 
> description.+ 
> The existing method of checking for the Kafka process on other platforms is 
> not applicable to z/OS; there, the best keyword we can use is the JOBNAME. 
> PIDS=$(ps ax | grep -i 'kafka\.Kafka' | grep java | grep -v grep | awk 
> '{print $1}') 
> --> 
> PIDS=$(ps -A -o pid,jobname,comm | grep -i $JOBNAME | grep java | grep -v 
> grep | awk '{print $1}') 
> The same applies to the ZooKeeper process.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk8 #4732

2020-07-21 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9274: Mark `retries` config as deprecated and add new


--
[...truncated 6.35 MB...]
(Gradle test log: every listed org.apache.kafka.streams.test.ConsumerRecordFactoryTest and org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest test STARTED and PASSED; output truncated mid-line at "> Task :streams:upgrade-syste".)

Build failed in Jenkins: kafka-trunk-jdk14 #307

2020-07-21 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9274: Mark `retries` config as deprecated and add new


--
[...truncated 3.20 MB...]
(Gradle test log: every listed org.apache.kafka.streams.TopologyTestDriverTest [Eos enabled = false] test STARTED and PASSED; output truncated mid-line.)

Build failed in Jenkins: kafka-trunk-jdk11 #1658

2020-07-21 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9274: Mark `retries` config as deprecated and add new


--
[...truncated 3.20 MB...]
(Gradle test log: every listed org.apache.kafka.streams.TopologyTestDriverTest [Eos enabled = false] test STARTED and PASSED; output truncated mid-line.)

Jenkins build is back to normal : kafka-trunk-jdk14 #306

2020-07-21 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-645: Replace abstract class Windows with a proper interface

2020-07-21 Thread Sophie Blee-Goldman
Hey John,

Thanks for the KIP. I know this has been bugging you :)

That said, I think the KIP is missing some elaboration in the Motivation
section. You mention a number of problems we've had and lived with in the
past -- could you give an example of one, and how it would be solved by
your proposal?

By the way, I assume we would also need to deprecate all APIs that accept
a Windows parameter in favor of new ones that accept a
FixedSizeWindowDefinition? Off the top of my head, that would be the
windowedBy methods in KGroupedStream and CogroupedKStream.

On Tue, Jul 21, 2020 at 1:46 PM John Roesler  wrote:

> Hello all,
>
> I'd like to propose KIP-645, to correct a small API mistake in Streams.
> Fixing this now allows us to avoid perpetuating the mistake in new work.
> For example, it will allow us to implement KIP-450 cleanly.
>
> The change itself should be seamless for users.
>
> Please see https://cwiki.apache.org/confluence/x/6SN4CQ for details.
>
> Thanks,
> -John
>


[DISCUSS] KIP-645: Replace abstract class Windows with a proper interface

2020-07-21 Thread John Roesler
Hello all,

I'd like to propose KIP-645, to correct a small API mistake in Streams.
Fixing this now allows us to avoid perpetuating the mistake in new work.
For example, it will allow us to implement KIP-450 cleanly.

The change itself should be seamless for users.

Please see https://cwiki.apache.org/confluence/x/6SN4CQ for details.

Thanks,
-John


Build failed in Jenkins: kafka-trunk-jdk11 #1657

2020-07-21 Thread Apache Jenkins Server
See 


Changes:

[manikumar.reddy] KAFKA-9432:(follow-up) Set `configKeys` to null in 
`describeConfigs()`

[github] MINOR; Move quota integration tests to using the new quota API. (#8954)


--
[...truncated 6.40 MB...]
(Gradle test log: every listed org.apache.kafka.streams.TopologyTestDriverTest [Eos enabled = false] test STARTED and PASSED; output truncated mid-line.)

[jira] [Created] (KAFKA-10298) Replace Windows with a proper interface

2020-07-21 Thread John Roesler (Jira)
John Roesler created KAFKA-10298:


 Summary: Replace Windows with a proper interface
 Key: KAFKA-10298
 URL: https://issues.apache.org/jira/browse/KAFKA-10298
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: John Roesler
Assignee: John Roesler


See POC: [https://github.com/apache/kafka/pull/9031]

 

Presently, windowed aggregations in KafkaStreams fall into two categories:
 * Windows
 ** TimeWindows
 ** UnlimitedWindows
 ** JoinWindows
 * SessionWindows

Unfortunately, Windows is an abstract class instead of an interface, and it 
forces some fields onto its implementations. This has led to a number of 
problems over the years, but so far we have been able to live with them.

However, as we consider adding new implementations to this side of the 
hierarchy, the damage will spread. See KIP-450, for example.

We should take the opportunity now to correct the issue by introducing an 
interface and deprecating Windows itself. Then, we can implement new features 
cleanly and maybe remove Windows in the 3.0 release.
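For illustration, the change amounts to something like the following sketch.
The interface name is borrowed from the mailing-list discussion and is
illustrative only; the final KIP may choose different names.

import java.util.Map;
import org.apache.kafka.streams.kstream.Window;

// Hypothetical sketch: an interface, unlike the existing abstract class
// Windows, cannot force fields onto implementations such as TimeWindows,
// UnlimitedWindows, or JoinWindows.
public interface FixedSizeWindowDefinition<W extends Window> {

    // All windows the given record timestamp falls into, keyed by window
    // start time.
    Map<Long, W> windowsFor(final long timestamp);

    // The fixed window size in milliseconds.
    long size();
}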



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk8 #4731

2020-07-21 Thread Apache Jenkins Server
See 


Changes:

[manikumar.reddy] KAFKA-9432:(follow-up) Set `configKeys` to null in 
`describeConfigs()`

[github] MINOR; Move quota integration tests to using the new quota API. (#8954)


--
[...truncated 3.17 MB...]
(Gradle test log: every listed org.apache.kafka.streams.test.OutputVerifierTest and org.apache.kafka.streams.test.ConsumerRecordFactoryTest test STARTED and PASSED; output truncated mid-line.)

Re: [DISCUSS] KIP-450: Sliding Windows

2020-07-21 Thread Matthias J. Sax
Thanks for updating the KIP.

Couple of follow up comments:

1) For the mandatory grace period, we should use a static builder method
that takes two parameters. This provides a better API, as users cannot
forget to set the grace period. Throwing a runtime exception seems not
to be the best way to handle this case; see the sketch below.
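Something along these lines (a hypothetical sketch; all names are
illustrative only, not from the KIP):

import java.time.Duration;

// Hypothetical sketch (names illustrative only): a static builder that
// makes the grace period mandatory by requiring it as a second parameter,
// instead of throwing a runtime exception when it is missing.
public final class SlidingWindowsSketch {
    private final long timeDifferenceMs;
    private final long graceMs;

    private SlidingWindowsSketch(final long timeDifferenceMs, final long graceMs) {
        this.timeDifferenceMs = timeDifferenceMs;
        this.graceMs = graceMs;
    }

    // Both parameters are required, so callers cannot forget the grace period.
    public static SlidingWindowsSketch withTimeDifferenceAndGrace(
            final Duration timeDifference, final Duration grace) {
        return new SlidingWindowsSketch(timeDifference.toMillis(), grace.toMillis());
    }
}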



2) In Fig.2 you list 10 hopping windows. I believe it should actually be
more? The first hopping window would be [-6,-4[ and the last one would
be [19,29[ -- hence, the cost savings are actually much higher.



3a) IQ: you are saying that the user needs to compute the start time as

> windowSize+the time they're looking at

Should this be "targetTime - windowSize" instead?



3b) IQ: in your example you say "window size of 10 minutes" with an
incident at 9:15.

> they're looking for a window with the start time of 8:15.

The example does not seem to add up?



4) For "Processing Windows": you describe a three step approach: I just
want to point out, that step (1) is not necessary for each input record,
because timestamps are not guaranteed to be unique and thus a previous
record with the same key and timestamp might have create the windows
already.

Nit: I am also not exactly sure what you mean by step (3) as you use the
word "send". I guess you mean "put"?

It seem there are actually more details in the sub-page:

> A new record for SlidingWindows will always create two new windows. If either 
> of those windows already exist in the windows store, their aggregation will 
> simply be updated to include the new record, but no duplicate window will be 
> added to the WindowStore.

However, the first and second sentences contradict each other a little
bit. I think the first sentence is not correct.

Nit:

> For in-order records, the left window will always be empty.

This should be "right window" ?
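For orientation, here is a minimal sketch of the per-record window bounds
as I understand them from this thread -- illustrative helpers only,
assuming inclusive millisecond bounds, not the actual implementation:

// A record at timestamp t with window size S defines a "left" window
// [t - S, t], which contains the record itself, and a "right" window
// [t + 1, t + S + 1], which is created empty for in-order records.
final class SlidingWindowBounds {
    static long[] leftWindow(final long t, final long sizeMs) {
        return new long[] {t - sizeMs, t};
    }

    static long[] rightWindow(final long t, final long sizeMs) {
        return new long[] {t + 1, t + sizeMs + 1};
    }
}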



5) "Emitting Results": it might be worth to point out, that a
second/future window of a new record is create with no records, and
thus, even if it's initialized it won't be emitted. Only if a
consecutive record falls into the window, the window would be updates
and the window result (for a window content of one record) would be sent
downstream.

Again, the sub-page contains this details. Might still be worth to add
to the top level page, too.

Btw: this implementation actually raises an issue for IQ: those empty
windows would be returned. Thus I am wondering if we need to use two
stores internally? One store for actual windows and one store for empty
windows? If an empty window is updated, it's moved to the other store?
For IQ, we would only allow querying the non-empty-window store?



6) On the sub-page:

> The left window of in-order records and both windows for out-of-order records 
> need to be updated with the values of records that have already been 
> processed.

Why "both windows for out-of-order records"? IMHO, we don't know how
many existing windows needs to be updated when processing an
out-of-order record. Of course, an out-of-order record could not fall
into any existing window but create two new windows, too.

>  Because each record creates one new window that includes itself and one 
> window that does not

As stated above, this does not seem to hold. I understand what you mean,
but it would be good to be exact.

Figure 2: You use the term "late" but you mean "out-of-order", I guess --
a record is _late_ if it is no longer processed because the grace period
has already passed.

Figure 2: "Late" should be "out-of-order". The example text says a window
[16,26] should be created, but the figure shows the green window as [15,20].

About the blue window: maybe add a note that the blue window contains the
aggregate we need for the green window, _before_ the new record `a` is
added to the blue window.



7) I am not really happy to extend TimeWindows, and I think the argument
about JoinWindows is not the best (IMHO, JoinWindows already does it wrong,
and we would just repeat the same mistake). However, it seems our window
hierarchy is "broken" already, and it might be out of scope for this KIP
to fix it. Hence, I am ok that we bite the bullet for now and clean it
up later.



-Matthias


On 7/20/20 5:18 PM, Guozhang Wang wrote:
> Hi Leah,
> 
> Thanks for the updated KIP. I agree that extending SlidingWindows from
> Windows is fine for the sake of not introducing more public APIs (and their
> internal xxxImpl classes), and its cons are small enough for me to tolerate.
> 
> 
> Guozhang
> 
> 
> On Mon, Jul 20, 2020 at 1:49 PM Leah Thomas  wrote:
> 
>> Hi all,
>>
>> Thanks for the feedback on the KIP. I've updated the KIP page
>> <
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-450%3A+Sliding+Window+Aggregations+in+the+DSL
>>>
>> to address these points and have created a child page
>> <
>> https://cwiki.apache.org/confluence/display/KAFKA/Aggregation+details+for+KIP-450
>>>
>> to go more in depth on certain implementation details.
>>
>> *Grace Period:*
>> I think Sophie raises a good point that the default grace perio

Re: [VOTE] KIP-554: Add Broker-side SCRAM Config API

2020-07-21 Thread Colin McCabe
Hi all,

With binding +1s from Rajini Sivaram, David Arthur, and Boyang Chen, and 
non-binding +1s from Tom Bentley, the vote passes.

Thanks to everyone who commented and helped to improve the proposal, especially 
Ron Dagostino, David, and Boyang.

best,
Colin
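As a rough illustration of the API under vote, using the upsertion/deletion
class names from Ron's note below (the draft PR's final signatures may
differ):

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.ScramCredentialInfo;
import org.apache.kafka.clients.admin.ScramMechanism;
import org.apache.kafka.clients.admin.UserScramCredentialDeletion;
import org.apache.kafka.clients.admin.UserScramCredentialUpsertion;

public class AlterScramExample {
    public static void main(final String[] args) throws Exception {
        final Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // Upsert (update-or-insert) a credential for "alice" and delete
            // "bob"'s SCRAM-SHA-256 credential in a single request.
            admin.alterUserScramCredentials(Arrays.asList(
                new UserScramCredentialUpsertion("alice",
                    new ScramCredentialInfo(ScramMechanism.SCRAM_SHA_256, 8192),
                    "alice-secret"),
                new UserScramCredentialDeletion("bob", ScramMechanism.SCRAM_SHA_256)
            )).all().get();
        }
    }
}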


On Thu, Jul 16, 2020, at 16:02, Ron Dagostino wrote:
> Hi Colin.  I updated the KIP with various renames.  I've also created a
> draft PR at https://github.com/apache/kafka/pull/9032 that still needs a
> bunch of real implementation but nonetheless represents the renames in code.
> 
> The biggest changes are that there are now derived classes public class
> UserScramCredentialUpsertion and public class
> UserScramCredentialDeletion.  I don't know what the reaction to the use of
> the term "upsertion" will be, but that's the best thing I could come up
> with to reflect that these requests are "upserts" (update if there,
> otherwise insert).  It was referred to as an "Addition" before, which I
> felt was not technically correct.  If you diff the most recent two versions
> of the KIP it diffs pretty cleanly and makes the changes pretty apparent.
> 
> Ron
> 
> 
> On Thu, Jul 16, 2020 at 11:38 AM Colin McCabe  wrote:
> 
> > On Thu, Jul 16, 2020, at 08:06, Ron Dagostino wrote:
> > > Thanks, Colin.  The standard "about" message for ThrottleTimeMs seems
> > > to be "The duration in milliseconds for which the request was throttled
> > > due to a quota violation, or zero if the request did not violate any
> > quota."
> > > as opposed to "The time spent waiting for quota." Should we adjust to
> > > match the typical definition?
> > >
> >
> > Hi Ron,
> >
> > Good point.  Let's keep the "about" text consistent.  I changed it.
> >
> > >
> > > I'm wondering if describing Scram credentials should require READ
> > privilege
> > > rather than ALTER on the cluster?   Altering SCRAM credentials of course
> > > requires ALTER privilege, and I can see the argument for requiring ALTER
> > > privilege to describe them as well, but it did catch my eye as something
> > > worth questioning/confirming.
> > >
> >
> > Also a good point.  I spoke with Rajini about this offline, and she
> > pointed out that we can already see user names in ACLs if we have DESCRIBE
> > on CLUSTER.  So it should be fine to have describeScramUsers require
> > DESCRIBE on CLUSTER as well.
> >
> > >
> > > I'm also now thinking that "UNKNOWN" shouldn't be listed in the
> > > ScramMechanism enum.  I thought maybe it was a catch-all so we will
> > always
> > > be able to deserialize something regardless of what key actually appears,
> > > but I just realized that SCRAM credentials and Client Quotas are mixed
> > > together in the same JSON, so it will be up to the corresponding API to
> > > ignore what it doesn't recognize -- i.e. if both client quotas and SCRAM
> > > credentials are defined for a user, then invoking DescribeClientQuotas
> > must
> > > only describe the quota configs and invoking DescribeScramUsers must only
> > > describe the SCRAM configs.
> > >
> >
> > The reason to have the UNKNOWN enum is so that we can add new SCRAM
> > mechanisms in the future.  If we don't have it, then we're basically saying
> > we can never add new mechanisms.
> >
> > I agree that the decision to put SCRAM users under the same ZK path as
> > client quotas makes this more complex than we'd like it to be, but all is
> > not lost.  For one thing, we could always just add a new ZK path for SCRAM
> > users in the future if we really need to.  With a new path we wouldn't have
> > to worry about namespace collisions.  For another thing, in the
> > post-KIP-500 world this won't be an issue.
> >
> > In the short term, a simpler solution might work here.  For example, can
> > we just assume that any key that starts with "SCRAM-" is not a quota, but a
> > scram user?  (Or look at some other aspect of the key).
> >
> > >
> > >  Also, note that invoking kafka-configs.sh
> > > --bootstrap-server ... --entity-type user --describe will require the
> > > invocation of two separate APIs -- one to describe quotas and one to
> > > describe SCRAM credentials; I don't think this is a problem, but I did
> > want
> > > to call it out explicitly.
> > >
> >
> > That should be OK.
> >
> > >
> > > Finally, there is a lack of consistency regarding how we name things.
> > For
> > > example, we are calling these APIs {Describe,Alter}ScramUsers and we have
> > > declared the classes {Describe,Alter}ScramUserOptions, which matches up
> > > fine.  We also have public class DescribeScramUsersResult, which also
> > > matches.  But then we have public class AlterScramCredentialsResult,
> > > interface ScramCredentialAlteration, public class
> > ScramCredentialAddition,
> > > and public class ScramCredentialDeletion, none of which match up in terms
> > > of consistency of naming.  I wonder if we should always use
> > > "UserScramCredential" everywhere since that is technically what these
> > APIs
> > > are about: describing/altering User

Re: [DISCUSS] KIP-607: Add Metrics to Record the Memory Used by RocksDB to Kafka Streams

2020-07-21 Thread John Roesler
Thanks for the update, Bruno!

In addition to Guozhang's feedback, I'm a little concerned
about the change to the RocksDBConfigSetter. If I understand
the proposal, people would have to separately supply
their Cache to the Options parameter in setConfig() and also
save it in a field so it can be returned in cache(). If they don't
return the same object, then the metrics won't be accurate,
but otherwise the mistake will be undetectable. Also, the
method is defaulted to return null, so existing implementations
would have no indication that they need to change, except that
users who want to read the new metrics would see inaccurate
values. They probably don't have a priori knowledge that would
let them identify that the reported metrics aren't accurate, and
even if they do notice something is wrong, it would probably take
quite a bit of effort to get all the way to the root cause.
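To make the concern concrete, here is a sketch of what an implementation
would have to do under the proposal. The cache() method is the KIP's
proposed addition and is shown here hypothetically; everything else
follows the existing RocksDBConfigSetter interface.

import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Cache;
import org.rocksdb.LRUCache;
import org.rocksdb.Options;

public class BoundedMemoryConfigSetter implements RocksDBConfigSetter {
    // The same instance must be supplied to Options AND returned from
    // cache(); nothing enforces this, which is the concern raised above.
    private final Cache cache = new LRUCache(16 * 1024 * 1024L);

    @Override
    public void setConfig(final String storeName, final Options options,
                          final Map<String, Object> configs) {
        final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
        tableConfig.setBlockCache(cache); // hand the cache to RocksDB
        options.setTableFormatConfig(tableConfig);
    }

    // Proposed (hypothetical) method: metrics are only accurate if this
    // returns exactly the instance passed into Options above.
    public Cache cache() {
        return cache;
    }

    @Override
    public void close(final String storeName, final Options options) {
        cache.close();
    }
}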

I'm wondering if we can instead avoid the new method and pass
to the ConfigSetter our own subclass of Options (which is non-final 
and has only public constructors) that would enable us to capture
and retrieve the Cache later. Or even just use reflection to get the
Cache out of the existing Options object after calling the ConfigSetter.

What do you think?
Thanks again for the update,
-John

On Mon, Jul 20, 2020, at 17:39, Guozhang Wang wrote:
> Hello Bruno,
> 
> Thanks for the updated KIP. I made a pass and here are some comments:
> 
> 1) What's the motivation of keeping it as INFO while KIP-471 metrics are
> defined in DEBUG?
> 
> 2) Some namings are a bit inconsistent with others and with KIP-471, for
> example:
> 
> 2.a) KIP-471 uses "memtable" while in this KIP we use "mem-table", also the
> "memtable" is prefixed and then the metric name. I'd suggest we keep them
> consistent. e.g. "num-immutable-mem-table" => "immutable-memtable-count",
> "cur-size-active-mem-table" => "active-memable-bytes"
> 
> 2.b) "immutable" are abbreviated as "imm" in some names but not in others,
> I'd suggest we do not use abbreviations across all names,
> e.g. "num-entries-imm-mem-tables" => "immutable-memtable-num-entries".
> 
> 2.c) "-size" "-num" semantics is usually a bit unclear, and I'd suggest we
> just more concrete terms, e.g. "total-sst-files-size" =>
> "total-sst-files-bytes", "num-live-versions" => "live-versions-count",
> "background-errors" => "background-errors-count".
> 
> 3) Some metrics are a bit confusing, e.g.
> 
> 3.a) What's the difference between "cur-size-all-mem-tables" and
> "size-all-mem-tables"?
> 
> 3.b) And the explanation of "estimate-table-readers-mem" does not read very
> clear to me either, does it refer to "estimate-sst-file-read-buffer-bytes"?
> 
> 3.c) How does "estimate-oldest-key-time" help with memory usage debugging?
> 
> 4) For my own education, does "estimate-pending-compaction-bytes" capture
> all the memory usage for compaction buffers?
> 
> 5) This is just of a nit comment to help readers better understand rocksDB:
> maybe we can explain in the wiki doc which part of rocksDB uses memory
> (block cache, OS cache, memtable, compaction buffer, read buffer), and
> which of them are on-heap and which of them are off-heap, which can be hard
> bounded and which can only be soft bounded and which cannot be bounded at
> all, etc.
> 
> 
> Guozhang
> 
> 
> On Mon, Jul 20, 2020 at 11:00 AM Bruno Cadonna  wrote:
> 
> > Hi,
> >
> > During the implementation of this KIP and after some discussion about
> > RocksDB metrics, I decided to make some major modifications to this KIP
> > and kick off discussion again.
> >
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-607%3A+Add+Metrics+to+Kafka+Streams+to+Report+Properties+of+RocksDB
> >
> > Best,
> > Bruno
> >
> > On 15.05.20 17:11, Bill Bejeck wrote:
> > > Thanks for the KIP, Bruno. Having sensible, easy to access RocksDB memory
> > > reporting will be a welcomed addition.
> > >
> > > FWIW I also agree to have the metrics reported on a store level. I'm glad
> > > you changed the KIP to that effect.
> > >
> > > -Bill
> > >
> > >
> > >
> > > On Wed, May 13, 2020 at 6:24 PM Guozhang Wang 
> > wrote:
> > >
> > >> Hi Bruno,
> > >>
> > >> Sounds good to me.
> > >>
> > >> I think I'm just a bit more curious to see its impact on performance: as
> > >> long as we have one INFO level rocksDB metrics, then we'd have to turn
> > on
> > >> the scheduled rocksdb metrics recorder whereas previously, we can
> > decide to
> > >> not turn on the recorder at all if all are set as DEBUG and we
> > configure at
> > >> INFO level in production. But this is an implementation detail anyways
> > and
> > >> maybe the impact is negligible after all. We can check and re-discuss
> > this
> > >> afterwards :)
> > >>
> > >>
> > >> Guozhang
> > >>
> > >>
> > >> On Wed, May 13, 2020 at 9:34 AM Sophie Blee-Goldman <
> > sop...@confluent.io>
> > >> wrote:
> > >>
> > >>> Thanks Bruno! I took a look at the revised KIP and it looks good to me.
> > >>>
> > >>> Sophie
> > >>>
> > >>> On Wed, May 13,

Jenkins build is back to normal : kafka-trunk-jdk8 #4730

2020-07-21 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk14 #305

2020-07-21 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10279; Allow dynamic update of certificates with additional


--
[...truncated 6.39 MB...]
(Gradle test log: every listed org.apache.kafka.streams.TopologyTestDriverTest [Eos enabled = false] test STARTED and PASSED; output truncated mid-line.)

Re: Re: [DISCUSS] KIP-308: GetOffsetShell: new KafkaConsumer API, support for multiple topics, minimize the number of requests to server

2020-07-21 Thread Dániel Urbán
Hi,

I've updated the PR based on the discussion and the comments on the PR.
If there are no more issues, I'll start a vote in a few days.

Thanks,
Daniel

wang120445...@sina.com  wrote (on Wed, Jul 1, 2020, 3:26):

> maybe it is just like RBAC's "show tables";
>
>
>
> wang120445...@sina.com
>
> From: Hu Xi
> Sent: 2020-06-30 23:04
> To: dev@kafka.apache.org
> Subject: Re: [DISCUSS] KIP-308: GetOffsetShell: new KafkaConsumer API, support
> for multiple topics, minimize the number of requests to server
> That's a great KIP for the GetOffsetShell tool. I have a question about the
> multiple-topic lookup situation.
>
> In a secured environment, what does the tool output if it has DESCRIBE
> privileges for some topics but not for others?
>
> 
> From: Dániel Urbán 
> Sent: June 30, 2020, 22:15
> To: dev@kafka.apache.org 
> 主题: Re: [DISCUSS] KIP-308: GetOffsetShell: new KafkaConsumer API, support
> for multiple topics, minimize the number of requests to server
>
> Hi Manikumar,
> Thanks, went ahead and assigned a new ID, it is KIP-635 now:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-635%3A+GetOffsetShell%3A+support+for+multiple+topics+and+consumer+configuration+override
> Daniel
>
> Manikumar  wrote (on Tue, Jun 30, 2020, 16:03):
>
> > Hi,
> >
> > Yes, we can assign new id to this KIP.
> >
> > Thanks.
> >
> > On Tue, Jun 30, 2020 at 6:59 PM Dániel Urbán 
> > wrote:
> >
> > > Hi,
> > >
> > > To help with the discussion, I also have a PR for this KIP now.
> > reflecting
> > > the current state of the KIP:
> https://github.com/apache/kafka/pull/8957.
> > > I would like to ask a committer to start the test job on it.
> > >
> > > One thing I realised though is that there is a KIP id collision, there
> is
> > > another KIP with the same id:
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=85474993
> > > What is the protocol in this case? Should I acquire a new id for the
> > > GetOffsetShell KIP, and update it?
> > >
> > > Thanks in advance,
> > > Daniel
> > >
> > > Dániel Urbán  wrote (on Tue, Jun 30, 2020, 9:23):
> > >
> > > > Hi Manikumar,
> > > >
> > > > Thanks for the comments.
> > > > 1. Will change this - thought that "command-config" is used for admin
> > > > clients.
> > > > 2. It's not necessary, just felt like a nice quality-of-life feature
> -
> > > will
> > > > remove it.
> > > >
> > > > Thanks,
> > > > Daniel
> > > >
> > > > On Tue, Jun 30, 2020 at 4:16 AM Manikumar  >
> > > > wrote:
> > > >
> > > > > Hi Daniel,
> > > > >
> > > > > Thanks for working on this KIP.  Proposed changes looks good to me,
> > > > >
> > > > > minor comments:
> > > > > 1. We use "command-config" option name in most of the cmdline tools
> > to
> > > > pass
> > > > > config
> > > > > properties file. We can use the same name here.
> > > > >
> > > > > 2. Not sure, if we need a separate option to pass an consumer
> > property.
> > > > > fewer options are better.
> > > > >
> > > > > Thanks,
> > > > > Manikumar
> > > > >
> > > > > On Wed, Jun 24, 2020 at 8:53 PM Dániel Urbán <
> urb.dani...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hi,
> > > > > >
> > > > > > I see that this KIP turned somewhat inactive - I'd like to pick
> it
> > up
> > > > and
> > > > > > work on it if it is okay.
> > > > > > Part of the work is done, as switching to the Consumer API is
> > already
> > > > in
> > > > > > trunk, but some functionality is still missing.
> > > > > >
> > > > > > I've seen the current PR and the discussion so far, only have a
> few
> > > > > things
> > > > > > to add:
> > > > > > - I like the idea of the topic-partition argument, it would be
> > useful
> > > > to
> > > > > > filter down to specific partitions.
> > > > > > - Instead of a topic list arg, a pattern would be more powerful,
> > and
> > > > also
> > > > > > fit better with the other tools (e.g. how the kafka-topics tool
> > > works).
> > > > > >
> > > > > > Regards,
> > > > > > Daniel
> > > > > >
> > > > >
> > > >
> > >
> >
>


Re: [VOTE] KIP-578: Add configuration to limit number of partitions

2020-07-21 Thread Gokul Ramanan Subramanian
Hi.

Can we resume the voting process for KIP-578? I have addressed additional
comments by Boyang and Ismael.

Thanks.

On Mon, Jun 8, 2020 at 9:09 AM Gokul Ramanan Subramanian <
gokul24...@gmail.com> wrote:

> Hi. Can we resume the voting process for KIP-578? Thanks.
>
> On Mon, Jun 1, 2020 at 11:09 AM Gokul Ramanan Subramanian <
> gokul24...@gmail.com> wrote:
>
>> Thanks Colin. Have updated the KIP per your recommendations. Let me know
>> what you think.
>>
>> Thanks Harsha for the vote.
>>
>> On Wed, May 27, 2020 at 8:17 PM Colin McCabe  wrote:
>>
>>> Hi Gokul Ramanan Subramanian,
>>>
>>> Thanks for the KIP.
>>>
>>> Can you please modify the KIP to remove the reference to the deprecated
>>> --zookeeper flag?  This is not how kafka-configs.sh is supposed to be used
>>> in new versions of Kafka.  You get a warning message if you do use this
>>> deprecated flag.  As described in KIP-604, we are removing the --zookeeper
>>> flag in the Kafka 3.0 release.  It also causes problems when people use the
> deprecated access mode -- for example, as you note in this KIP, it bypasses
>>> resource limits such as the ones described here.
>>>
>>> Instead of WILL_EXCEED_PARTITION_LIMITS, how about
>>> RESOURCE_LIMIT_REACHED?  Then the error string can contain the detailed
>>> message about which resource limit was hit (per broker limit, per cluster
>>> limit, whatever.)  It would also be good to spell out that
>>> CreateTopicsPolicy plugins can also throw this exception, for consistency.
>>>
>>> I realize that 2 billion partitions seems like a very big number.
>>> However, filesystems have had to transition to 64 bit inode numbers as time
>>> has gone on.  There doesn't seem to be any performance reason why this
>>> should be a 31 bit number, so let's just make these configurations longs,
>>> not ints.
>>>
>>> best,
>>> Colin
>>>
>>>
>>> On Wed, May 27, 2020, at 09:48, Harsha Chintalapani wrote:
>>> > Thanks for the KIP Gokul. This will be really useful for our use cases
>>> as
>>> > well.
>>> > +1 (binding).
>>> >
>>> > -Harsha
>>> >
>>> >
>>> > On Tue, May 26, 2020 at 12:33 AM, Gokul Ramanan Subramanian <
>>> > gokul24...@gmail.com> wrote:
>>> >
>>> > > Hi.
>>> > >
>>> > > Any votes for this?
>>> > >
>>> > > Thanks.
>>> > >
>>> > > On Tue, May 12, 2020 at 11:36 AM Gokul Ramanan Subramanian <
>>> gokul2411s@
>>> > > gmail.com> wrote:
>>> > >
>>> > > Hello,
>>> > >
>>> > > I'd like to initialize voting on KIP-578:
>>> > > https://cwiki.apache.org/confluence/display/KAFKA/
>>> > > KIP-578%3A+Add+configuration+to+limit+number+of+partitions
>>> > > .
>>> > >
>>> > > Got some good feedback from Stanislav Kozlovski, Alexandre Dupriez
>>> and Tom
>>> > > Bentley on the discussion thread. I have addressed their comments. I
>>> want
>>> > > to thank them for their time.
>>> > >
>>> > > If there are any more concerns about the KIP, I am happy to discuss
>>> them
>>> > > further.
>>> > >
>>> > > Thanks.
>>> > >
>>> > >
>>> >
>>>
>>


[jira] [Resolved] (KAFKA-10279) Allow dynamic update of certificates with additional SubjectAltNames

2020-07-21 Thread Rajini Sivaram (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-10279.

  Reviewer: Manikumar
Resolution: Fixed

> Allow dynamic update of certificates with additional SubjectAltNames
> 
>
> Key: KAFKA-10279
> URL: https://issues.apache.org/jira/browse/KAFKA-10279
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 2.7.0
>
>
> At the moment, we don't allow dynamic keystore updates in brokers if the DN and 
> SubjectAltNames don't match exactly. This is to ensure that existing clients 
> and inter-broker communication don't break. Since adding new entries to 
> SubjectAltNames will not break any authentication, we should allow that and 
> just verify that the new SubjectAltNames set is a superset of the old one.
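A minimal sketch of the superset rule, for illustration only (not Kafka's
actual validation code):

import java.security.cert.CertificateParsingException;
import java.security.cert.X509Certificate;
import java.util.Collection;
import java.util.HashSet;
import java.util.List;

final class SanCheck {
    // The new certificate is acceptable if its SubjectAltNames contain
    // every entry of the old certificate's SubjectAltNames.
    static boolean isSanSuperset(final X509Certificate oldCert, final X509Certificate newCert)
            throws CertificateParsingException {
        final Collection<List<?>> oldSans = oldCert.getSubjectAlternativeNames();
        final Collection<List<?>> newSans = newCert.getSubjectAlternativeNames();
        if (oldSans == null) return true;   // nothing to preserve
        if (newSans == null) return false;  // all old entries would be lost
        return new HashSet<>(newSans).containsAll(oldSans);
    }
}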



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk8 #4729

2020-07-21 Thread Apache Jenkins Server
See 

Changes:


--
[...truncated 6.34 MB...]
(Gradle test log: every listed org.apache.kafka.streams.test.OutputVerifierTest and org.apache.kafka.streams.internals.WindowStoreFacadeTest test STARTED and PASSED; output truncated mid-line.)