[GitHub] [kafka-site] ankit-kumar-25 commented on pull request #220: KAFKA-8360: Docs do not mention RequestQueueSize JMX metric

2020-09-21 Thread GitBox


ankit-kumar-25 commented on pull request #220:
URL: https://github.com/apache/kafka-site/pull/220#issuecomment-696188603


   Hey @viktorsomogyi, 
   
   Thank you for the pointers. I have created a PR against the ops.html 
available in the Kafka project: https://github.com/apache/kafka/pull/9314



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (KAFKA-10510) Reassigning partitions should not allow increasing RF of a partition unless configured with it

2020-09-21 Thread Stanislav Kozlovski (Jira)
Stanislav Kozlovski created KAFKA-10510:
---

 Summary: Reassigning partitions should not allow increasing RF of 
a partition unless configured with it
 Key: KAFKA-10510
 URL: https://issues.apache.org/jira/browse/KAFKA-10510
 Project: Kafka
  Issue Type: Improvement
Reporter: Stanislav Kozlovski


Kafka should have validations in place against increasing the replication 
factor (RF) of a partition through a reassignment. Otherwise, users can shoot 
themselves in the foot: reassigning a topic's partitions onto extra replicas 
silently raises the RF of those partitions, while any partitions created 
later still use the lower, configured replication factor.

Ideally, our tools should detect when a reassignment increases the RF 
inconsistently with the topic's config and require a separate command to 
change the config.
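
To make the foot-gun concrete, here is a minimal sketch against the Java
AdminClient (the topic name, replica ids, and bootstrap address are
hypothetical, not from the ticket):

import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitionReassignment;
import org.apache.kafka.common.TopicPartition;

public class RaiseRfViaReassignment {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // Suppose orders-0 currently has RF=2 with replicas [1, 2].
            // Reassigning it onto [1, 2, 3] silently raises its RF to 3,
            // while partitions created later still get the configured RF=2 --
            // exactly the inconsistency this ticket wants tooling to flag.
            TopicPartition tp = new TopicPartition("orders", 0);
            NewPartitionReassignment target =
                new NewPartitionReassignment(List.of(1, 2, 3));
            admin.alterPartitionReassignments(Map.of(tp, Optional.of(target)))
                 .all().get();
        }
    }
}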



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #78

2020-09-21 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10459: Document IQ APIs where order does not hold between stores 
(#9251)


--
[...truncated 3.29 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #76

2020-09-21 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Use `Map.forKeyValue` to avoid tuple allocation in Scala 2.13 
(#9299)


--
[...truncated 6.53 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 

Jenkins build is back to normal : Kafka » kafka-trunk-jdk11 #77

2020-09-21 Thread Apache Jenkins Server
See 




[GitHub] [kafka-site] viktorsomogyi edited a comment on pull request #220: KAFKA-8360: Docs do not mention RequestQueueSize JMX metric

2020-09-21 Thread GitBox


viktorsomogyi edited a comment on pull request #220:
URL: https://github.com/apache/kafka-site/pull/220#issuecomment-696132637


   Otherwise it looks good, I can approve it once you create the PR in the 
kafka repo (and perhaps get someone to merge it).



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka-site] viktorsomogyi commented on pull request #220: KAFKA-8360: Docs do not mention RequestQueueSize JMX metric

2020-09-21 Thread GitBox


viktorsomogyi commented on pull request #220:
URL: https://github.com/apache/kafka-site/pull/220#issuecomment-696132637


   Otherwise it looks good, I can approve it once you create the PR in the 
kafka repo.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #77

2020-09-21 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Use `Map.forKeyValue` to avoid tuple allocation in Scala 2.13 
(#9299)


--
[...truncated 6.58 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED


[jira] [Created] (KAFKA-10509) Add metric to track throttle time due to hitting connection rate quota

2020-09-21 Thread Anna Povzner (Jira)
Anna Povzner created KAFKA-10509:


 Summary: Add metric to track throttle time due to hitting 
connection rate quota
 Key: KAFKA-10509
 URL: https://issues.apache.org/jira/browse/KAFKA-10509
 Project: Kafka
  Issue Type: Improvement
  Components: core
Reporter: Anna Povzner
Assignee: Anna Povzner
 Fix For: 2.7.0


See KIP-612.

 

kafka.network:type=socket-server-metrics,name=connection-accept-throttle-time,listener={listenerName}
 * Type: SampledStat.Avg
 * Description: Average throttle time due to violating the per-listener or 
broker-wide connection acceptance rate quota on a given listener.
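
As a rough illustration, the metric could be read over JMX along these lines
(a sketch assuming the ObjectName pattern quoted above and a standard JMX
remote endpoint on the broker; the exact registration may differ):

import java.util.Set;
import javax.management.MBeanAttributeInfo;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ReadAcceptThrottleTime {
    public static void main(String[] args) throws Exception {
        // Hypothetical JMX endpoint; enabled e.g. via JMX_PORT on the broker.
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            // Wildcard over listeners, following the pattern quoted above.
            ObjectName pattern = new ObjectName(
                "kafka.network:type=socket-server-metrics,"
                    + "name=connection-accept-throttle-time,listener=*");
            Set<ObjectName> names = conn.queryNames(pattern, null);
            for (ObjectName name : names) {
                for (MBeanAttributeInfo attr :
                        conn.getMBeanInfo(name).getAttributes()) {
                    System.out.printf("%s %s = %s%n", name, attr.getName(),
                        conn.getAttribute(name, attr.getName()));
                }
            }
        } finally {
            connector.close();
        }
    }
}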



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10458) Need a way to update quota for TokenBucket registered with Sensor

2020-09-21 Thread Anna Povzner (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anna Povzner resolved KAFKA-10458.
--
Resolution: Fixed

> Need a way to update quota for TokenBucket registered with Sensor
> -
>
> Key: KAFKA-10458
> URL: https://issues.apache.org/jira/browse/KAFKA-10458
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Reporter: Anna Povzner
>Assignee: David Jacot
>Priority: Major
> Fix For: 2.7.0
>
>
> For a Rate() metric with a quota config, we update the quota by updating the 
> config of the KafkaMetric. However, this is not enough for TokenBucket, 
> because it uses the quota config on record() to properly calculate the 
> number of tokens. Sensor passes the config stored in the corresponding 
> StatAndConfig, which currently never changes. This means that after updating 
> the quota via KafkaMetric.config (our current and only method), Sensor will 
> record the value using the old quota but then measure the value to check for 
> quota violation using the new quota value.
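
To make the failure mode concrete, a toy sketch (not Kafka's real classes):
record() replenishes tokens from a config snapshot taken at registration,
while the violation check reads the freshly updated quota, so the two sides
disagree after a quota change.

public class StaleQuotaSketch {
    static final class Config {
        final double quota;
        Config(double quota) { this.quota = quota; }
    }

    static final class TokenBucketStat {
        private final Config atRegistration; // snapshot Sensor keeps; never updated
        private double tokens = 0;

        TokenBucketStat(Config config) { this.atRegistration = config; }

        void record(double cost) {
            // Credits are computed from the OLD quota captured at registration...
            tokens += atRegistration.quota - cost;
        }

        boolean violates(Config current) {
            // ...while the violation check uses the NEW quota value.
            return tokens < -current.quota;
        }
    }

    public static void main(String[] args) {
        TokenBucketStat stat = new TokenBucketStat(new Config(100.0));
        Config updated = new Config(1.0); // quota lowered via KafkaMetric.config
        stat.record(50.0);                // still credited at the old rate of 100
        // Prints false: the stale credit masks what should be a violation.
        System.out.println("violates? " + stat.violates(updated));
    }
}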



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is back to normal : Kafka » kafka-trunk-jdk8 #75

2020-09-21 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #76

2020-09-21 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10401; Ensure `currentStateTimeStamp` is set correctly by group 
coordinator (#9202)


--
[...truncated 6.58 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED


Re: [DISCUSS] KIP-631: The Quorum-based Kafka Controller

2020-09-21 Thread Jun Rao
Hi, Colin,

Sorry for the late reply. A few more comments below.

50. Configurations
50.1 controller.listeners: It seems that a controller just needs one
listener. Why do we need to have a list here? Also, could you provide an
example of how this is set, and what its relationship is to existing
configs such as "security.inter.broker.protocol" and
"inter.broker.listener.name"?
50.2 registration.heartbeat.interval.ms and registration.lease.timeout.ms.
Should we match their default value with the corresponding default for ZK?
50.3 controller.connect: Could you provide an example? I am wondering how it
differs from quorum.voters=1@kafka-1:9092, 2@kafka-2:9092, 3@kafka-3:9092.
50.4 controller.id: I am still not sure how this is being used. Could you
explain this in more detail?

51. BrokerHeartbeat: It seems a bit wasteful to include Listeners in every
heartbeat request since it typically doesn't change. Could we make that an
optional field?

52. KIP-584 adds a new ZK node /features. Should we add a corresponding
metadata record?

53. TopicRecord and DeleteTopic: Both DeleteTopic and TopicRecord.Deleting
indicate topic deletion. Could we outline the flow when each will be set?
In particular, which one indicates the intention to delete and which one
indicates the completion of the deletion.

54. "The controller can generate a new broker epoch by using the latest log
offset." Which offset is that? Is it the offset of the metadata topic for
the corresponding BrokerRecord? Is it guaranteed to be different on each
broker restart?

55. "Thereafter, it may lose subsequent conflicts if its broker epoch is
stale.  (See KIP-380 for some background on broker epoch.)  The reason for
favoring new processes is to accommodate the common case where a process is
killed with kill -9 and then restarted. " Are you saying that if there is
an active broker registered in the controller, a new broker heartbeat
request with the INITIAL state will fence the current broker session? Not
sure about this. For example, this will allow a broker with incorrectly
configured broker id to kill an existing valid broker.

56. kafka-storage.sh:
56.1 In the info mode, what other information does it show in addition to
"kip.500.mode=enabled"?
56.2 Should the format mode take the config file as the input too like the
info mode?

Thanks,

Jun

On Thu, Sep 17, 2020 at 4:12 PM Colin McCabe  wrote:

> Hi Unmesh,
>
> That's a fair point.  I have moved the lease duration to the broker
> heartbeat response.  That way lease durations can be changed just by
> reconfiguring the controllers.
>
> best,
> Colin
>
> On Wed, Sep 16, 2020, at 07:40, Unmesh Joshi wrote:
> > Thanks Colin, the changes look good to me. One small thing.
> > registration.lease.timeout.ms is the configuration on the controller
> side.
> > It will be good to comment on how brokers know about it, to be able to
> > send LeaseDurationMs
> > in the heartbeat request,
> > or else it can be added in the heartbeat response for brokers to know
> about
> > it.
> >
> > Thanks,
> > Unmesh
> >
> > On Fri, Sep 11, 2020 at 10:32 PM Colin McCabe 
> wrote:
> >
> > > Hi Unmesh,
> > >
> > > I think you're right that we should use a duration here rather than a
> > > time.  As you said, the clock on the controller will probably not
> match the
> > > one on the broker.  I have updated the KIP.
> > >
> > > > > It's important to keep in mind that messages may be delayed in the
> > > > > network, or arrive out of order.  When this happens, we will use
> the
> > > start
> > > > > time specified in the request to determine if the request is stale.
> > > > I am assuming that there will be a single TCP connection maintained
> > > between
> > > > broker and active controller. So, there won't be any out of order
> > > requests?
> > > > There will be a scenario of a broker GC pause, which might cause a
> > > > connection timeout, and the broker might need to reestablish the
> > > > connection. If the pause is too long, the lease will expire and the
> > > > heartbeat sent after the pause will be treated as a new registration
> > > > (similar to the restart case), and a new epoch number will be assigned
> > > > to the broker.
> > >
> > > I agree with the end of this paragraph, but not with the start :)
> > >
> > > There can be out-of-order requests, since the broker will simply use a
> new
> > > TCP connection if the old one has problems.  This can happen for a
> variety
> > > of reasons.  I don't think GC pauses are the most common reason for this
> > > to happen.  It's more common to see issues in the network itself that
> > > result in connections getting dropped from time to time.
> > >
> > > So we have to assume that messages may arrive out of order, and
> possibly
> > > be delayed.  I added a note that heartbeat requests should be ignored
> if
> > > the metadata log offset they contain is smaller than a previous
> heartbeat.
> > >
> > > > When the active controller fails, the new active controller needs to
> be
> > > 
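
For illustration, the staleness rule above amounts to a controller-side check
along these lines (hypothetical names; the actual KIP-631 implementation may
differ):

import java.util.HashMap;
import java.util.Map;

public class HeartbeatStalenessCheck {
    // Highest metadata log offset seen so far, per broker id.
    private final Map<Integer, Long> lastSeenOffset = new HashMap<>();

    /** Returns true if the heartbeat should be processed, false if stale. */
    public synchronized boolean accept(int brokerId, long metadataLogOffset) {
        Long previous = lastSeenOffset.get(brokerId);
        if (previous != null && metadataLogOffset < previous) {
            return false; // delayed or reordered request: ignore it
        }
        lastSeenOffset.put(brokerId, metadataLogOffset);
        return true;
    }
}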

Build failed in Jenkins: Kafka » kafka-2.5-jdk8 #8

2020-09-21 Thread Apache Jenkins Server
See 


Changes:

[Jason Frantz] KAFKA-10401; Ensure `currentStateTimeStamp` is set correctly by 
group coordinator (#9202)


--
[...truncated 2.93 MB...]
org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 

Build failed in Jenkins: Kafka » kafka-2.4-jdk8 #5

2020-09-21 Thread Apache Jenkins Server
See 


Changes:

[Jason Frantz] KAFKA-10401; Ensure `currentStateTimeStamp` is set correctly by 
group coordinator (#9202)


--
[...truncated 2.77 MB...]
org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 

Build failed in Jenkins: Kafka » kafka-2.3-jdk8 #4

2020-09-21 Thread Apache Jenkins Server
See 


Changes:

[Jason Frantz] KAFKA-10401; Ensure `currentStateTimeStamp` is set correctly by 
group coordinator (#9202)


--
[...truncated 1.70 MB...]
kafka.zookeeper.ZooKeeperClientTest > testConnectionTimeout STARTED

kafka.zookeeper.ZooKeeperClientTest > testConnectionTimeout PASSED

kafka.zookeeper.ZooKeeperClientTest > 
testBlockOnRequestCompletionFromStateChangeHandler STARTED

kafka.zookeeper.ZooKeeperClientTest > 
testBlockOnRequestCompletionFromStateChangeHandler PASSED

kafka.zookeeper.ZooKeeperClientTest > testUnresolvableConnectString STARTED

kafka.zookeeper.ZooKeeperClientTest > testUnresolvableConnectString PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testPipelinedGetData STARTED

kafka.zookeeper.ZooKeeperClientTest > testPipelinedGetData PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChange 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChange 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetDataExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetDataExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testMixedPipeline STARTED

kafka.zookeeper.ZooKeeperClientTest > testMixedPipeline PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetDataExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetDataExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testDeleteExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testDeleteExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testSessionExpiry STARTED

kafka.zookeeper.ZooKeeperClientTest > testSessionExpiry PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetDataNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetDataNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testDeleteNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testDeleteNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testExistsExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testExistsExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperStateChangeRateMetrics 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperStateChangeRateMetrics PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDeletion STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDeletion PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetAclNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetAclNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testStateChangeHandlerForAuthFailure 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testStateChangeHandlerForAuthFailure 
PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer STARTED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testSessionExpireListenerMetrics STARTED

kafka.metrics.MetricsTest > testSessionExpireListenerMetrics PASSED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testClusterIdMetric STARTED

kafka.metrics.MetricsTest > testClusterIdMetric PASSED

kafka.metrics.MetricsTest > testControllerMetrics STARTED

kafka.metrics.MetricsTest > testControllerMetrics PASSED

kafka.metrics.MetricsTest > testWindowsStyleTagNames STARTED

kafka.metrics.MetricsTest > testWindowsStyleTagNames PASSED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut STARTED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut PASSED

> Task :kafka-2.3-jdk8:generator:compileJava UP-TO-DATE
> Task :kafka-2.3-jdk8:generator:processResources NO-SOURCE
> Task :kafka-2.3-jdk8:generator:classes UP-TO-DATE
> Task :kafka-2.3-jdk8:clients:processMessages UP-TO-DATE
> Task :kafka-2.3-jdk8:clients:compileJava UP-TO-DATE
> Task :kafka-2.3-jdk8:clients:processResources UP-TO-DATE
> Task :kafka-2.3-jdk8:clients:classes UP-TO-DATE
> Task :kafka-2.3-jdk8:clients:determineCommitId UP-TO-DATE
> Task :kafka-2.3-jdk8:clients:createVersionFile
> Task :kafka-2.3-jdk8:clients:jar UP-TO-DATE
> Task :kafka-2.3-jdk8:clients:compileTestJava UP-TO-DATE
> Task :kafka-2.3-jdk8:clients:processTestResources UP-TO-DATE
> Task :kafka-2.3-jdk8:clients:testClasses UP-TO-DATE
> Task 

Build failed in Jenkins: Kafka » kafka-2.6-jdk8 #16

2020-09-21 Thread Apache Jenkins Server
See 


Changes:

[Jason Frantz] KAFKA-10401; Ensure `currentStateTimeStamp` is set correctly by 
group coordinator (#9202)


--
[...truncated 3.15 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull 

Re: [VOTE] KIP-666: Add Instant-based methods to ReadOnlySessionStore

2020-09-21 Thread John Roesler
Thanks, Jorge!

I’m +1 (binding)

-John

On Mon, Sep 21, 2020, at 12:35, Sophie Blee-Goldman wrote:
> Thanks for pointing out the vote in the discussion thread, this email
> somehow skipped my inbox  ¯\_(ツ)_/¯
> 
> I'm +1 (non-binding)
> 
> -Sophie
> 
> On Mon, Sep 7, 2020 at 4:18 AM Jorge Esteban Quilcate Otoya <
> quilcate.jo...@gmail.com> wrote:
> 
> > Hi everyone,
> >
> > I'd like to start a thread to vote for KIP-666 and align instant-based
> > operations on Interactive Query APIs between Window and Session stores:
> >
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-666%3A+Add+Instant-based+methods+to+ReadOnlySessionStore
> >
> > Discussion thread:
> >
> > https://lists.apache.org/thread.html/r21193d599159b0e0002b95e427659fee8833ea0f8577d6f2e098ca8f%40%3Cdev.kafka.apache.org%3E
> >
> > Thanks!
> > Jorge.
> >
>
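
For context, a sketch of the alignment the KIP proposes: session stores gain
Instant-based overloads mirroring the window stores (simplified signatures;
see the KIP page above for the authoritative API):

import java.time.Instant;
import org.apache.kafka.streams.kstream.Windowed;
import org.apache.kafka.streams.state.KeyValueIterator;

interface SessionStoreInstantSketch<K, AGG> {
    // Existing long-based lookup, simplified from SessionStore.
    KeyValueIterator<Windowed<K>, AGG> findSessions(
        K key, long earliestSessionEndTime, long latestSessionStartTime);

    // The kind of Instant-based counterpart the KIP aligns on.
    KeyValueIterator<Windowed<K>, AGG> findSessions(
        K key, Instant earliestSessionEndTime, Instant latestSessionStartTime);
}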


Re: [VOTE] KIP-663: API to Start and Shut Down Stream Threads and to Request Closing of Kafka Streams Clients

2020-09-21 Thread John Roesler
I’m +1 also. Thanks, Bruno!
-John

On Mon, Sep 21, 2020, at 17:08, Guozhang Wang wrote:
> Thanks Bruno. I'm +1 on the KIP.
> 
> On Mon, Sep 21, 2020 at 2:49 AM Bruno Cadonna  wrote:
> 
> > Hi,
> >
> > I would like to restart from zero the voting on KIP-663 that proposes to
> > add methods to the Kafka Streams client to add and remove stream threads
> > during execution.
> >
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-663%3A+API+to+Start+and+Shut+Down+Stream+Threads
> >
> > Matthias, if you are still +1, please vote again.
> >
> > Best,
> > Bruno
> >
> > On 04.09.20 23:12, John Roesler wrote:
> > > Hi Sophie,
> > >
> > > Uh, oh, it's never a good sign when the discussion moves
> > > into the vote thread :)
> > >
> > > I agree with you, it seems like a good touch for
> > > removeStreamThread() to return the name of the thread that
> > > got removed, rather than a boolean flag. Maybe the return
> > > value would be `null` if there is no thread to remove.
> > >
> > > If we go that way, I'd suggest that addStreamThread() also
> > > return the name of the newly created thread, or null if no
> > > thread can be created right now.
> > >
> > > I'm not completely sure if I think that callers of this
> > > method would know exactly how many threads there are. Sure,
> > > if a human being is sitting there looking at the metrics or
> > > logs and decides to call the method, it would work out, but
> > > I'd expect this kind of method to find its way into
> > > automated tooling that reacts to things like current system
> > > load or resource saturation. Those kinds of toolchains often
> > > are part of a distributed system, and it's probably not that
> > > easy to guarantee that the thread count they observe is
> > > fully consistent with the number of threads that are
> > > actually running. Therefore, an in-situ `int
> > > numStreamThreads()` method might not be a bad idea. Then
> > > again, it seems sort of optional. A caller can catch an
> > > exception or react to a `null` return value just the same
> > > either way. Having both add/remove methods behave similarly
> > > is probably more valuable.
> > >
> > > Thanks,
> > > -John
> > >
> > >
> > > On Thu, 2020-09-03 at 12:15 -0700, Sophie Blee-Goldman
> > > wrote:
> > >> Hey, sorry for the late reply, I just have one minor suggestion. Since
> > we
> > >> don't
> > >> make any guarantees about which thread gets removed or allow the user to
> > >> specify, I think we should return either the index or full name of the
> > >> thread
> > >> that does get removed by removeThread().
> > >>
> > >> I know you just updated the KIP to return true/false if there
> > are/aren't any
> > >> threads to be removed, but I think this would be more appropriate as an
> > >> exception than as a return type. I think it's reasonable to expect
> > users to
> > >> have some sense of how many threads are remaining, and not try to remove
> > >> a thread when there is none left. To me, that indicates something wrong
> > >> with the user application code and should be treated as an exceptional
> > case.
> > >> I don't think the same code clarity argument applies here as to the
> > >> addStreamThread() case, as there's no reason for an application to be
> > >> looping and retrying removeStreamThread()  since if that fails, it's
> > because
> > >> there are no threads left and thus it will continue to always fail. And
> > if
> > >> the
> > >> user actually wants to shut down all threads, they should just close the
> > >> whole application rather than call removeStreamThread() in a loop.
> > >>
> > >> While I generally think it should be straightforward for users to track
> > how
> > >> many stream threads they have running, maybe it would be nice to add
> > >> a small utility method that does this for them. Something like
> > >>
> > >> // Returns the number of currently alive threads
> > >> int runningStreamThreads();
> > >>
> > >> On Thu, Sep 3, 2020 at 7:41 AM Matthias J. Sax 
> > wrote:
> > >>
> > >>> +1 (binding)
> > >>>
> > >>> On 9/3/20 6:16 AM, Bruno Cadonna wrote:
> >  Hi,
> > 
> >  I would like to start the voting on KIP-663 that proposes to add
> > methods
> >  to the Kafka Streams client to add and remove stream threads during
> >  execution.
> > 
> > 
> > >>>
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-663%3A+API+to+Start+and+Shut+Down+Stream+Threads
> > 
> >  Best,
> >  Bruno
> > >
> >
> 
> 
> -- 
> -- Guozhang
>
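
For reference, a sketch of the API shape converging in this thread, assuming
the methods land on KafkaStreams roughly as discussed, with an Optional
thread name in place of null (the final KIP-663 surface may differ):

import java.util.Optional;
import org.apache.kafka.streams.KafkaStreams;

public class ScaleStreamThreads {
    /** Tries to start one more stream thread; reports the outcome. */
    public static void scaleUp(KafkaStreams streams) {
        Optional<String> added = streams.addStreamThread();
        added.ifPresentOrElse(
            name -> System.out.println("started stream thread " + name),
            () -> System.out.println("could not add a stream thread right now"));
    }

    /** Removes one stream thread, if any is alive; reports the outcome. */
    public static void scaleDown(KafkaStreams streams) {
        streams.removeStreamThread().ifPresentOrElse(
            name -> System.out.println("removed stream thread " + name),
            () -> System.out.println("no stream thread left to remove"));
    }
}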


Re: [DISCUSS] KIP-671: Shutdown Streams Application when appropriate exception is thrown

2020-09-21 Thread Walker Carlson
The error code right now is the assignor error, 2 is coded for shutdown
but it could be expanded to encode the causes or for other errors that need
to be communicated. For example we can add error code 3 to close the thread
but leave the client in an error state if we choose to do so in the future.
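
A sketch of how such codes might be laid out (hypothetical constants; per the
discussion, only code 2 for shutdown is actually proposed so far):

public final class AssignorErrorCodes {
    public static final short NONE = 0;               // no error
    public static final short SHUTDOWN_REQUESTED = 2; // whole-app shutdown, per the KIP
    public static final short SHUTDOWN_THREAD = 3;    // possible future code: close the
                                                      // thread, leave client in error state
    private AssignorErrorCodes() {}
}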

On Mon, Sep 21, 2020 at 3:43 PM Boyang Chen 
wrote:

> Thanks for the KIP Walker.
>
> In the KIP we mentioned "In order to communicate the shutdown request from
> one client to the others we propose to update the SubscriptionInfoData to
> include a short field which will encode an error code." Is there a
> dedicated error code that we should define here, or is it case-by-case?
>
> On Mon, Sep 21, 2020 at 1:38 PM Walker Carlson 
> wrote:
>
> > I am changing the name to "Add method to Shutdown entire Streams
> > Application" since we are no longer using an Exception, it seems more
> > appropriate.
> >
> > Also it looks like the discussion is pretty much finished so I will be
> > calling it to a vote.
> >
> > thanks,
> > Walker
> >
> > On Sat, Sep 19, 2020 at 7:49 PM Guozhang Wang 
> wrote:
> >
> > > Sounds good to me. I also feel that this call should be non-blocking,
> > > but I guess I was confused by the discussion thread into thinking that
> > > the API is designed in a blocking fashion, which contradicts my
> > > perspective, and hence I asked for clarification :)
> > >
> > > Guozhang
> > >
> > >
> > > On Wed, Sep 16, 2020 at 12:32 PM Walker Carlson  >
> > > wrote:
> > >
> > > > Hello Guozhang,
> > > >
> > > > As for the logging I plan on having three logs. First, the client logs
> > > > that it is requesting an application shutdown; second, the leader logs
> > > > the processId of the invoker; third, the StreamRebalanceListener logs
> > > > that it is closing because of a `stream.appShutdown`. Hopefully this
> > > > will be enough to make the cause of the close clear.
> > > >
> > > > I see what you mean about the name being dependent on the behavior of
> > the
> > > > method so I will try to clarify.  This is how I currently envision
> the
> > > call
> > > > working.
> > > >
> > > > It is not an option to directly initiate a shutdown through a
> > > StreamThread
> > > > object from a KafkaStreams object because "KafkaConsumer is not safe
> > for
> > > > multi-threaded access". So how it works is that the method in
> > > KafkaStreams
> > > > finds the first alive thread and sets a flag in the StreamThread. The
> > > > StreamThread will receive the flag in its runloop then set the error
> > code
> > > > and trigger a rebalance, afterwards it will stop processing. After
> the
> > > > KafkaStreams has set the flag it will return true and continue
> running.
> > > If
> > > > there are no alive threads the shutdown will fail and return false.
> > > >
> > > > What do you think the blocking behavior should be? I think that the
> > > > StreamThread should definitely stop to prevent any of the corruption
> we
> > > are
> > > > trying to avoid by shutting down, but I don't see any advantage of
> the
> > > > KafkaStreams call blocking.
> > > >
> > > > You are correct to be concerned about the uncaught exception handler.
> > If
> > > > there are no live StreamThreads the rebalance will not be started at
> > all
> > > > and this would be a problem. However the user should be aware of this
> > > > because of the return of false and react appropriately. This would
> also
> > > be
> > > > fixed if we implemented our own handler so we can rebalance before
> the
> > > > StreamThread closes.
> > > >
> > > > With that in mind I believe that `initiateClosingAllClients` would be
> > an
> > > > appropriate name. WDYT?
> > > >
> > > > Walker
> > > >
> > > >
> > > > On Wed, Sep 16, 2020 at 11:43 AM Guozhang Wang 
> > > wrote:
> > > >
> > > > > Hello Walker,
> > > > >
> > > > > Thanks for the updated KIP. Previously I'm also a bit hesitant on
> the
> > > > newly
> > > > > added public exception to communicate user-requested whole app
> > > shutdown,
> > > > > but the reason I did not bring this up is that I feel there's
> still a
> > > > need
> > > > > from operational aspects that we can differentiate the scenario
> where
> > > an
> > > > > instance is closed because of a) local `streams.close()` triggered,
> > or
> > > > b) a
> > > > > remote instance's `stream.shutdownApp` triggered. So if we are
> going
> > to
> > > > > remove that exception (which I'm also in favor), we should at least
> > > > > differentiate from the log4j levels.
> > > > >
> > > > > Regarding the semantics that "It should wait to receive the
> shutdown
> > > > > request in the rebalance it triggers." I'm not sure I fully
> > understand,
> > > > > since this may be triggered from the stream thread's uncaught
> > exception
> > > > > handler, if that thread is already dead then maybe a rebalance
> > listener
> > > > > would not even be fired at all. Although I know this is some
> > > > implementation
> > > > > details that you 

Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #76

2020-09-21 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10401; Ensure `currentStateTimeStamp` is set correctly by group 
coordinator (#9202)


--
[...truncated 3.29 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo PASSED

org.apache.kafka.streams.MockProcessorContextTest > shouldCaptureOutputRecords 
STARTED

org.apache.kafka.streams.MockProcessorContextTest > shouldCaptureOutputRecords 
PASSED


Re: [DISCUSS] KIP-671: Shutdown Streams Application when appropriate exception is thrown

2020-09-21 Thread Boyang Chen
Thanks for the KIP Walker.

In the KIP we mentioned "In order to communicate the shutdown request from
one client to the others we propose to update the SubscriptionInfoData to
include a short field which will encode an error code.", is there a
dedicated error code that we should define here, or is it case-by-case?

On Mon, Sep 21, 2020 at 1:38 PM Walker Carlson 
wrote:

> I am changing the name to "Add method to Shutdown entire Streams
> Application" since we are no longer using an Exception, it seems more
> appropriate.
>
> Also it looks like the discussion is pretty much finished so I will be
> calling it to a vote.
>
> thanks,
> Walker
>
> On Sat, Sep 19, 2020 at 7:49 PM Guozhang Wang  wrote:
>
> > Sounds good to me. I also feel that this call should be non-blocking,
> > but I got the impression from the discussion thread that the API is
> > designed in a blocking fashion, which contradicts my perspective, and
> > hence I asked for clarification :)
> >
> > Guozhang
> >
> >
> > On Wed, Sep 16, 2020 at 12:32 PM Walker Carlson 
> > wrote:
> >
> > > Hello Guozhang,
> > >
> > > As for the logging I plan on having three logs. First, the client logs
> > > that it is requesting an application shutdown; second, the leader logs
> > > the processId of the invoker; third, the StreamRebalanceListener logs
> > > that it is closing because of a `stream.appShutdown`. Hopefully this
> > > will be enough to make the cause of the close clear.
> > >
> > > I see what you mean about the name being dependent on the behavior of
> the
> > > method so I will try to clarify.  This is how I currently envision the
> > call
> > > working.
> > >
> > > It is not an option to directly initiate a shutdown through a
> > StreamThread
> > > object from a KafkaStreams object because "KafkaConsumer is not safe
> for
> > > multi-threaded access". So how it works is that the method in
> > KafkaStreams
> > > finds the first alive thread and sets a flag in the StreamThread. The
> > > StreamThread will receive the flag in its runloop then set the error
> code
> > > and trigger a rebalance, afterwards it will stop processing. After the
> > > KafkaStreams has set the flag it will return true and continue running.
> > If
> > > there are no alive threads the shutdown will fail and return false.
> > >
> > > What do you think the blocking behavior should be? I think that the
> > > StreamThread should definitely stop to prevent any of the corruption we
> > are
> > > trying to avoid by shutting down, but I don't see any advantage of the
> > > KafkaStreams call blocking.
> > >
> > > You are correct to be concerned about the uncaught exception handler.
> If
> > > there are no live StreamThreads the rebalance will not be started at
> all
> > > and this would be a problem. However the user should be aware of this
> > > because of the return of false and react appropriately. This would also
> > be
> > > fixed if we implemented our own handler so we can rebalance before the
> > > StreamThread closes.
> > >
> > > With that in mind I believe that `initiateClosingAllClients` would be
> an
> > > appropriate name. WDYT?
> > >
> > > Walker
> > >
> > >
> > > On Wed, Sep 16, 2020 at 11:43 AM Guozhang Wang 
> > wrote:
> > >
> > > > Hello Walker,
> > > >
> > > > Thanks for the updated KIP. Previously I'm also a bit hesitant on the
> > > newly
> > > > added public exception to communicate user-requested whole app
> > shutdown,
> > > > but the reason I did not bring this up is that I feel there's still a
> > > need
> > > > from operational aspects that we can differentiate the scenario where
> > an
> > > > instance is closed because of a) local `streams.close()` triggered,
> or
> > > b) a
> > > > remote instance's `stream.shutdownApp` triggered. So if we are going
> to
> > > > remove that exception (which I'm also in favor), we should at least
> > > > differentiate from the log4j levels.
> > > >
> > > > Regarding the semantics that "It should wait to receive the shutdown
> > > > request in the rebalance it triggers." I'm not sure I fully
> understand,
> > > > since this may be triggered from the stream thread's uncaught
> exception
> > > > handler, if that thread is already dead then maybe a rebalance
> listener
> > > > would not even be fired at all. Although I know this is some
> > > implementation
> > > > details that you probably abstract away from the proposal, I'd like
> to
> > > make
> > > > sure that we are on the same page regarding its blocking behavior
> since
> > > it
> > > > is quite crucial to users as well. Could you elaborate a bit more?
> > > >
> > > > Regarding the function name, I guess my personal preference would
> > depend
> > > on
> > > > its actual blocking behavior as above :)
> > > >
> > > >
> > > > Guozhang
> > > >
> > > >
> > > >
> > > >
> > > > On Wed, Sep 16, 2020 at 10:52 AM Walker Carlson <
> wcarl...@confluent.io
> > >
> > > > wrote:
> > > >
> > > > > Hello all again,
> > > > >
> > > > > I have updated the KIP 

Re: [DISCUSS] Apache Kafka 2.7.0 release

2020-09-21 Thread Boyang Chen
Hey Bill,

Unfortunately KIP-590 will not be in the 2.7 release; could you move it to
the postponed KIPs?

Best,
Boyang

On Thu, Sep 10, 2020 at 2:41 PM Bill Bejeck  wrote:

> Hi Gary,
>
> It's been added.
>
> Regards,
> Bill
>
> On Thu, Sep 10, 2020 at 4:14 PM Gary Russell  wrote:
>
> > Can someone add a link to the release plan page [1] to the Future
> Releases
> > page [2]?
> >
> > I have the latter bookmarked.
> >
> > Thanks.
> >
> > [1]:
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=158872629
> > [2]:
> https://cwiki.apache.org/confluence/display/KAFKA/Future+release+plan
> > 
> > From: Bill Bejeck 
> > Sent: Wednesday, September 9, 2020 4:35 PM
> > To: dev 
> > Subject: Re: [DISCUSS] Apache Kafka 2.7.0 release
> >
> > Hi Dongjin,
> >
> > I've moved both KIPs to the release plan.
> >
> > Keep in mind the cutoff for KIP acceptance is September 30th. If the KIP
> > discussions are completed, I'd recommend starting a vote for them.
> >
> > Regards,
> > Bill
> >
> > On Wed, Sep 9, 2020 at 8:39 AM Dongjin Lee  wrote:
> >
> > > Hi Bill,
> > >
> > > Could you add the following KIPs to the plan?
> > >
> > > - KIP-508: Make Suppression State Queriable
> > > <https://cwiki.apache.org/confluence/display/KAFKA/KIP-508%3A+Make+Suppression+State+Queriable>
> > > - KIP-653: Upgrade log4j to log4j2
> > > <https://cwiki.apache.org/confluence/display/KAFKA/KIP-653%3A+Upgrade+log4j+to+log4j2>
> > >
> > > Both KIPs are completely implemented with passing all tests, but not
> got
> > > reviewed by the committers. Could anyone have a look?
> > >
> > > Thanks,
> > > Dongjin
> > >
> > > On Wed, Sep 9, 2020 at 8:38 AM Leah Thomas 
> wrote:
> > >
> > > > Hi Bill,
> > > >
> > > > Could you also add KIP-450 to the release plan? It's been merged.
> > > >
> > > >
> > > > <https://cwiki.apache.org/confluence/display/KAFKA/KIP-450%3A+Sliding+Window+Aggregations+in+the+DSL>
> > > > Cheers,
> > > > Leah
> > > >
> > > > On Tue, Sep 8, 2020 at 9:32 AM Bill Bejeck 
> wrote:
> > > >
> > > > > Hi Bruno,
> > > > >
> > > > > Thanks for letting me know, I've added KIP-662 to the release plan.
> > > > >
> > > > > -Bill
> > > > >
> > > > > On Mon, Sep 7, 2020 at 11:33 AM Bruno Cadonna 
> > > > wrote:
> > > > >
> > > > > > Hi Bill,
> > > > > >
> > > > > > Could you add KIP-662 [1] to the release plan. The KIP has been
> > > already
> > > > > > implemented.
> > > > > >
> > > > > > Best,
> > > > > > Bruno
> > > > > >
> > > > > >
> > > > > > [1]
> > > > > > <https://cwiki.apache.org/confluence/display/KAFKA/KIP-662%3A+Throw+Exception+when+Source+Topics+of+a+Streams+App+are+Deleted>
> > > > > > On 26.08.20 16:54, Bill Bejeck wrote:
> > > > > > > Greetings All!
> > > > > > >
> > > > > > > I've published a release plan at
> > > > > > > <https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=158872629>.
> > > > > > > I have included all of the KIPs that are currently approved,
> but
> > > I'm
> > > > > > happy
> > > > > > > to make any adjustments as necessary.
> > > > > > >
> > > > > > > The KIP freeze is on September 30 with a target release date of
> > > > > November
> > > > > > 6.
> > > > > > >
> > > > > > > Let me know if there are any objections.
> > > > > > >
> > > > > > > Thanks,
> > > > > > > Bill Bejeck
> > > > > > >
> > > > > > > On Fri, Aug 14, 2020 at 4:01 PM John Roesler <
> > vvcep...@apache.org>
> > > > > > wrote:
> > > > > > >
> > > > > > >> Thanks, Bill!
> > > > 

Re: [VOTE] KIP-663: API to Start and Shut Down Stream Threads and to Request Closing of Kafka Streams Clients

2020-09-21 Thread Guozhang Wang
Thanks Bruno. I'm +1 on the KIP.

On Mon, Sep 21, 2020 at 2:49 AM Bruno Cadonna  wrote:

> Hi,
>
> I would like to restart from zero the voting on KIP-663 that proposes to
> add methods to the Kafka Streams client to add and remove stream threads
> during execution.
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-663%3A+API+to+Start+and+Shut+Down+Stream+Threads
>
> Matthias, if you are still +1, please vote again.
>
> Best,
> Bruno
>
> On 04.09.20 23:12, John Roesler wrote:
> > Hi Sophie,
> >
> > Uh, oh, it's never a good sign when the discussion moves
> > into the vote thread :)
> >
> > I agree with you, it seems like a good touch for
> > removeStreamThread() to return the name of the thread that
> > got removed, rather than a boolean flag. Maybe the return
> > value would be `null` if there is no thread to remove.
> >
> > If we go that way, I'd suggest that addStreamThread() also
> > return the name of the newly created thread, or null if no
> > thread can be created right now.
> >
> > I'm not completely sure if I think that callers of this
> > method would know exactly how many threads there are. Sure,
> > if a human being is sitting there looking at the metrics or
> > logs and decides to call the method, it would work out, but
> > I'd expect this kind of method to find its way into
> > automated tooling that reacts to things like current system
> > load or resource saturation. Those kinds of toolchains often
> > are part of a distributed system, and it's probably not that
> > easy to guarantee that the thread count they observe is
> > fully consistent with the number of threads that are
> > actually running. Therefore, an in-situ `int
> > numStreamThreads()` method might not be a bad idea. Then
> > again, it seems sort of optional. A caller can catch an
> > exception or react to a `null` return value just the same
> > either way. Having both add/remove methods behave similarly
> > is probably more valuable.
> >
> > Thanks,
> > -John
> >
> >
> > On Thu, 2020-09-03 at 12:15 -0700, Sophie Blee-Goldman
> > wrote:
> >> Hey, sorry for the late reply, I just have one minor suggestion. Since
> we
> >> don't
> >> make any guarantees about which thread gets removed or allow the user to
> >> specify, I think we should return either the index or full name of the
> >> thread
> >> that does get removed by removeStreamThread().
> >>
> >> I know you just updated the KIP to return true/false if there
> are/aren't any
> >> threads to be removed, but I think this would be more appropriate as an
> >> exception than as a return type. I think it's reasonable to expect
> users to
> >> have some sense of how many threads are remaining, and not try to remove
> >> a thread when there is none left. To me, that indicates something wrong
> >> with the user application code and should be treated as an exceptional
> case.
> >> I don't think the same code clarity argument applies here as to the
> >> addStreamThread() case, as there's no reason for an application to be
> >> looping and retrying removeStreamThread()  since if that fails, it's
> because
> >> there are no threads left and thus it will continue to always fail. And
> if
> >> the
> >> user actually wants to shut down all threads, they should just close the
> >> whole application rather than call removeStreamThread() in a loop.
> >>
> >> While I generally think it should be straightforward for users to track
> how
> >> many stream threads they have running, maybe it would be nice to add
> >> a small utility method that does this for them. Something like
> >>
> >> // Returns the number of currently alive threads
> >> int runningStreamThreads();
> >>
> >> On Thu, Sep 3, 2020 at 7:41 AM Matthias J. Sax 
> wrote:
> >>
> >>> +1 (binding)
> >>>
> >>> On 9/3/20 6:16 AM, Bruno Cadonna wrote:
>  Hi,
> 
>  I would like to start the voting on KIP-663 that proposes to add
> methods
>  to the Kafka Streams client to add and remove stream threads during
>  execution.
> 
> 
> >>>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-663%3A+API+to+Start+and+Shut+Down+Stream+Threads
> 
>  Best,
>  Bruno
> >
>


-- 
-- Guozhang
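
For reference, a compact sketch of the API shape debated in this thread: the
add/remove methods returning the affected thread's name (empty when nothing
could be added or removed), plus the thread-count utility mentioned above.
This is a non-authoritative illustration; the KIP page is the source of truth
for the final signatures.

    import java.util.Optional;

    // Illustrative surface only; not the actual KafkaStreams methods.
    public interface StreamThreadControlSketch {

        // Returns the name of the newly created stream thread, or empty
        // if no thread could be created in the current state.
        Optional<String> addStreamThread();

        // Returns the name of the removed stream thread, or empty if
        // there was no thread left to remove.
        Optional<String> removeStreamThread();

        // Utility discussed above: the number of currently alive threads.
        int numStreamThreads();
    }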


[jira] [Created] (KAFKA-10508) Consider moving ForwardRequestHandler to a separate class

2020-09-21 Thread Boyang Chen (Jira)
Boyang Chen created KAFKA-10508:
---

 Summary: Consider moving ForwardRequestHandler to a separate class
 Key: KAFKA-10508
 URL: https://issues.apache.org/jira/browse/KAFKA-10508
 Project: Kafka
  Issue Type: Improvement
Reporter: Boyang Chen


With the new redirection template merged in
[https://github.com/apache/kafka/pull/9103], the KafkaApis file has grown to
3,500+ lines, which is fairly large. We should consider moving the redirection
template out into a separate file to reduce the size of the main class.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-671: Add method to Shutdown entire Streams Application

2020-09-21 Thread Guozhang Wang
Thanks for finalizing the KIP. +1 (binding)


Guozhang

On Mon, Sep 21, 2020 at 1:38 PM Walker Carlson 
wrote:

> Hello all,
>
> I would like to start a thread to vote for KIP-671 to add a method to close
> all clients in a kafka streams application.
>
> KIP:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-671%3A+Shutdown+Streams+Application+when+appropriate+exception+is+thrown
>
> Discussion thread: *here
> <
> https://mail-archives.apache.org/mod_mbox/kafka-dev/202009.mbox/%3CCAC55fuh3HAGCxz-PzxTJraczy6T-os2oiCV328PBeuJQSVYASg%40mail.gmail.com%3E
> >*
>
> Thanks,
> -Walker
>


-- 
-- Guozhang


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #75

2020-09-21 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10438: Lazy initialization of record header to reduce memory 
usage (#9223)


--
[...truncated 6.58 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 

Re: [DISCUSS] KIP-516: Topic Identifiers

2020-09-21 Thread Justine Olshan
Hi all,

After thinking about it, I've decided to remove the topic name from the
Fetch Request and Response after all. Since there are so many of these
requests per second, it is worth removing the extra information. I've
updated the KIP to reflect this change.

Please let me know if there is anything else we should discuss before
voting.

Thank you,
Justine
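
As a rough illustration of the saving: a partition can be identified by a
fixed 16-byte topic ID instead of a variable-length topic name repeated on
every fetch. The class below is a hypothetical sketch, not the actual request
schema:

    import java.util.UUID;

    // Sketch only: identifies a partition by topic ID rather than name,
    // so the name need not be repeated in each Fetch request/response.
    public final class FetchablePartitionSketch {
        private final UUID topicId;  // fixed 16 bytes, replaces the String name
        private final int partition;

        public FetchablePartitionSketch(final UUID topicId, final int partition) {
            this.topicId = topicId;
            this.partition = partition;
        }

        public UUID topicId() {
            return topicId;
        }

        public int partition() {
            return partition;
        }
    }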

On Fri, Sep 18, 2020 at 9:46 AM Justine Olshan  wrote:

> Hi Jun,
>
> I see what you are saying. For now we can remove the extra information.
> I'll leave the option to add more fields to the file in the future. The KIP
> has been updated to reflect this change.
>
> Thanks,
> Justine
>
> On Fri, Sep 18, 2020 at 8:46 AM Jun Rao  wrote:
>
>> Hi, Justine,
>>
>> Thanks for the reply.
>>
>> 13. If the log directory is the source of truth, it means that the
>> redundant info in the metadata file will be ignored. Then the question is
>> why do we need to put the redundant info in the metadata file now?
>>
>> Thanks,
>>
>> Jun
>>
>> On Thu, Sep 17, 2020 at 5:07 PM Justine Olshan 
>> wrote:
>>
>> > Hi Jun,
>> > Thanks for the quick response!
>> >
>> > 12. I've decided to bump up the versions on the requests and updated the
>> > KIP. I think it's good we thoroughly discussed the options here, so we
>> know
>> > we made a good choice. :)
>> >
>> > 13. This is an interesting situation. I think if this does occur we
>> should
>> > give a warning. I agree that it's hard to know the source of truth for
>> sure
>> > since the directory or the file could be manually modified. I guess the
>> > directory could be used as the source of truth. To be honest, I'm not
>> > really sure what happens in kafka when the log directory is renamed
>> > manually in such a way. I'm also wondering if the situation is
>> recoverable
>> > in this scenario.
>> >
>> > Thanks,
>> > Justine
>> >
>> > On Thu, Sep 17, 2020 at 4:28 PM Jun Rao  wrote:
>> >
>> > > Hi, Justine,
>> > >
>> > > Thanks for the reply.
>> > >
>> > > 12. I don't have a strong preference either. However, if we need IBP
>> > > anyway, maybe it's easier to just bump up the version for all
>> > > inter-broker
>> > > requests and add the topic id field as a regular field. A regular
>> field
>> > is
>> > > a bit more concise in wire transfer than a flexible field.
>> > >
>> > > 13. The confusion that I was referring to is between the topic name
>> and
>> > > partition number between the log dir and the metadata file. For
>> example,
>> > if
>> > > the log dir is topicA-1 and the metadata file in it has topicB and
>> > > partition 0 (say due to a bug or manual modification), which one do we
>> > use
>> > > as the source of truth?
>> > >
>> > > Jun
>> > >
>> > > On Thu, Sep 17, 2020 at 3:43 PM Justine Olshan 
>> > > wrote:
>> > >
>> > > > Hi Jun,
>> > > > Thanks for the comments.
>> > > >
>> > > > 12. I bumped the LeaderAndIsrRequest because I removed the topic
>> name
>> > > field
>> > > > in the response. It may be possible to avoid bumping the version
>> > without
>> > > > that change, but I may be missing something.
>> > > > I believe StopReplica is actually on version 3 now, but because
>> > version 2
>> > > > is flexible, I kept that listed as version 2 on the KIP page.
>> However,
>> > > you
>> > > > may be right in that we may need to bump the version on StopReplica
>> to
>> > > deal
>> > > > with deletion differently as mentioned above. I don't know if I
>> > > > have a big preference over using tagged fields or not.
>> > > >
>> > > > 13. I was thinking that in the case where the file and the request
>> > topic
>> > > > ids don't match, it means that the broker's topic/the one in the
>> file
>> > has
>> > > > been deleted. In that case, we would need to delete the old topic
>> and
>> > > start
>> > > > receiving the new version. If the topic name were to change, but the
>> > ids
>> > > > still matched, the file would also need to update. Am I missing a
>> case
>> > > > where the file would be correct and not the request?
>> > > >
>> > > > Thanks,
>> > > > Justine
>> > > >
>> > > > On Thu, Sep 17, 2020 at 3:18 PM Jun Rao  wrote:
>> > > >
>> > > > > Hi, Justine,
>> > > > >
>> > > > > Thanks for the reply. A couple of more comments below.
>> > > > >
>> > > > > 12. ListOffset and OffsetForLeader currently don't support
>> flexible
>> > > > fields.
>> > > > > So, we have to bump up the version number and use IBP at least for
>> > > these
>> > > > > two requests. Note that it seems 2.7.0 will require IBP anyway
>> > because
>> > > of
>> > > > > changes in KAFKA-10435. Also, it seems that the version for
>> > > > > LeaderAndIsrRequest and StopReplica are bumped even though we only
>> > > added
>> > > > a
>> > > > > tagged field. But since IBP is needed anyway, we may want to
>> revisit
>> > > the
>> > > > > overall tagged field choice.
>> > > > >
>> > > > > 13. The only downside is the potential confusion on which one is
>> the
>> > > > source
>> > > > > of truth if they don't match. Another option is to include those

[VOTE] KIP-671: Add method to Shutdown entire Streams Application

2020-09-21 Thread Walker Carlson
Hello all,

I would like to start a thread to vote for KIP-671 to add a method to close
all clients in a kafka streams application.

KIP:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-671%3A+Shutdown+Streams+Application+when+appropriate+exception+is+thrown

Discussion thread: *here
<https://mail-archives.apache.org/mod_mbox/kafka-dev/202009.mbox/%3CCAC55fuh3HAGCxz-PzxTJraczy6T-os2oiCV328PBeuJQSVYASg%40mail.gmail.com%3E>*

Thanks,
-Walker
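
A minimal sketch of the flag-based handoff described in the discussion thread:
the KafkaStreams-level call must not touch another thread's consumer directly,
so it only flags the first alive stream thread, which then triggers the
rebalance from its own run loop. All names below are simplified illustrations,
not the actual implementation.

    import java.util.List;
    import java.util.concurrent.atomic.AtomicBoolean;

    class AppShutdownSketch {

        static final class StreamThreadStub {
            final AtomicBoolean shutdownRequested = new AtomicBoolean(false);
            volatile boolean alive = true;

            // Executed inside the stream thread's own run loop.
            void runLoopIteration() {
                if (shutdownRequested.get()) {
                    // set error code 2 in the subscription and rejoin the
                    // group so every instance observes the shutdown request
                    alive = false;
                }
            }
        }

        // Non-blocking: returns true once a thread is flagged, or false
        // if no thread is alive to carry the request.
        static boolean requestApplicationShutdown(final List<StreamThreadStub> threads) {
            for (final StreamThreadStub thread : threads) {
                if (thread.alive) {
                    thread.shutdownRequested.set(true);
                    return true;
                }
            }
            return false;
        }
    }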


Re: [DISCUSS] KIP-671: Shutdown Streams Application when appropriate exception is thrown

2020-09-21 Thread Walker Carlson
I am changing the name to "Add method to Shutdown entire Streams
Application" since we are no longer using an Exception, it seems more
appropriate.

Also it looks like the discussion is pretty much finished so I will be
calling it to a vote.

thanks,
Walker

On Sat, Sep 19, 2020 at 7:49 PM Guozhang Wang  wrote:

> Sounds good to me. I also feel that this call should be non-blocking, but I
> got the impression from the discussion thread that the API is designed in a
> blocking fashion, which contradicts my perspective, and hence I asked for
> clarification :)
>
> Guozhang
>
>
> On Wed, Sep 16, 2020 at 12:32 PM Walker Carlson 
> wrote:
>
> > Hello Guozhang,
> >
> > As for the logging I plan on having three logs. First, the client logs
> > that it is requesting an application shutdown; second, the leader logs
> > the processId of the invoker; third, the StreamRebalanceListener logs
> > that it is closing because of a `stream.appShutdown`. Hopefully this
> > will be enough to make the cause of the close clear.
> >
> > I see what you mean about the name being dependent on the behavior of the
> > method so I will try to clarify.  This is how I currently envision the
> call
> > working.
> >
> > It is not an option to directly initiate a shutdown through a
> StreamThread
> > object from a KafkaStreams object because "KafkaConsumer is not safe for
> > multi-threaded access". So how it works is that the method in
> KafkaStreams
> > finds the first alive thread and sets a flag in the StreamThread. The
> > StreamThread will receive the flag in its runloop then set the error code
> > and trigger a rebalance, afterwards it will stop processing. After the
> > KafkaStreams has set the flag it will return true and continue running.
> If
> > there are no alive threads the shutdown will fail and return false.
> >
> > What do you think the blocking behavior should be? I think that the
> > StreamThread should definitely stop to prevent any of the corruption we
> are
> > trying to avoid by shutting down, but I don't see any advantage of the
> > KafkaStreams call blocking.
> >
> > You are correct to be concerned about the uncaught exception handler. If
> > there are no live StreamThreads the rebalance will not be started at all
> > and this would be a problem. However the user should be aware of this
> > because of the return of false and react appropriately. This would also
> be
> > fixed if we implemented our own handler so we can rebalance before the
> > StreamThread closes.
> >
> > With that in mind I believe that `initiateClosingAllClients` would be an
> > appropriate name. WDYT?
> >
> > Walker
> >
> >
> > On Wed, Sep 16, 2020 at 11:43 AM Guozhang Wang 
> wrote:
> >
> > > Hello Walker,
> > >
> > > Thanks for the updated KIP. Previously I'm also a bit hesitant on the
> > newly
> > > added public exception to communicate user-requested whole app
> shutdown,
> > > but the reason I did not bring this up is that I feel there's still a
> > need
> > > from operational aspects that we can differentiate the scenario where
> an
> > > instance is closed because of a) local `streams.close()` triggered, or
> > b) a
> > > remote instance's `stream.shutdownApp` triggered. So if we are going to
> > > remove that exception (which I'm also in favor), we should at least
> > > differentiate from the log4j levels.
> > >
> > > Regarding the semantics that "It should wait to receive the shutdown
> > > request in the rebalance it triggers." I'm not sure I fully understand,
> > > since this may be triggered from the stream thread's uncaught exception
> > > handler, if that thread is already dead then maybe a rebalance listener
> > > would not even be fired at all. Although I know this is some
> > implementation
> > > details that you probably abstract away from the proposal, I'd like to
> > make
> > > sure that we are on the same page regarding its blocking behavior since
> > it
> > > is quite crucial to users as well. Could you elaborate a bit more?
> > >
> > > Regarding the function name, I guess my personal preference would
> depend
> > on
> > > its actual blocking behavior as above :)
> > >
> > >
> > > Guozhang
> > >
> > >
> > >
> > >
> > > On Wed, Sep 16, 2020 at 10:52 AM Walker Carlson  >
> > > wrote:
> > >
> > > > Hello all again,
> > > >
> > > > I have updated the KIP to no longer use an exception and instead add
> > > > a method to the KafkaStreams class; this seems to satisfy everyone's
> > > > concerns about how and when the functionality will be invoked.
> > > >
> > > > There is still a question over the name. We must decide between
> > > > "shutdownApplication", "initateCloseAll", "closeAllInstaces" or some
> > > > variation.
> > > >
> > > > I am rather indifferent to the name. I think that they all get the
> > > > point across. The clearest to me would be shutdownApplication or
> > > > closeAllInstances, but WDYT?
> > > >
> > > > Walker
> > > >
> > > >
> > > >
> > > > On Wed, Sep 16, 2020 at 9:30 AM Walker 

[jira] [Created] (KAFKA-10507) Limit the set of APIs returned in pre-authentication ApiVersions

2020-09-21 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-10507:
---

 Summary: Limit the set of APIs returned in pre-authentication 
ApiVersions 
 Key: KAFKA-10507
 URL: https://issues.apache.org/jira/browse/KAFKA-10507
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson


We use the ApiVersions RPC to check whether the SaslHandshake and 
SaslAuthenticate APIs are supported before authenticating with the broker. 
Currently the response contains all APIs supported by the broker. It seems like 
a good idea to reduce the set of APIs returned at this level to only those 
which are supported prior to authentication. 
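
A minimal sketch of the proposed filtering; the enum and helper below are
illustrative stand-ins, not Kafka's actual ApiKeys type:

    import java.util.EnumSet;
    import java.util.Set;

    // Illustrative only.
    enum ApiKeySketch { API_VERSIONS, SASL_HANDSHAKE, SASL_AUTHENTICATE, PRODUCE, FETCH }

    class PreAuthApiFilterSketch {
        private static final Set<ApiKeySketch> PRE_AUTH_APIS =
            EnumSet.of(ApiKeySketch.API_VERSIONS,
                       ApiKeySketch.SASL_HANDSHAKE,
                       ApiKeySketch.SASL_AUTHENTICATE);

        // Before authentication, advertise only the pre-auth subset.
        static Set<ApiKeySketch> advertisedApis(final boolean authenticated,
                                                final Set<ApiKeySketch> allSupported) {
            if (authenticated) {
                return allSupported;
            }
            final EnumSet<ApiKeySketch> result = EnumSet.noneOf(ApiKeySketch.class);
            result.addAll(allSupported);
            result.retainAll(PRE_AUTH_APIS);
            return result;
        }
    }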



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-664: Provide tooling to detect and abort hanging transactions

2020-09-21 Thread Jason Gustafson
Thanks everyone! The vote passes. Here is the tally:

+3 (binding): Me, Guozhang Wang, Boyang Chen
+2 (non-binding): Robert Barrett, Ron Dagostino

-Jason


On Fri, Sep 18, 2020 at 6:42 PM Boyang Chen 
wrote:

> Thanks Jason, +1 (binding) from me
>
> Boyang
>
> On Fri, Sep 11, 2020 at 10:12 AM Robert Barrett 
> wrote:
>
> > +1 (non-binding)
> >
> > Thanks Jason!
> >
> > On Tue, Sep 8, 2020 at 5:28 PM Guozhang Wang  wrote:
> >
> > > +1. Thanks!
> > >
> > > Guozhang
> > >
> > > On Tue, Sep 8, 2020 at 3:04 PM Ron Dagostino 
> wrote:
> > >
> > > > +1 (non-binding) -- Thanks, Jason!
> > > >
> > > > Ron
> > > >
> > > > On Tue, Sep 8, 2020 at 2:04 PM Jason Gustafson 
> > > wrote:
> > > >
> > > > > Hi All,
> > > > >
> > > > > I'd like to start a vote on KIP-664:
> > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-664%3A+Provide+tooling+to+detect+and+abort+hanging+transactions
> > > > > .
> > > > > Thanks for all the feedback!
> > > > >
> > > > > Best,
> > > > > Jason
> > > > >
> > > >
> > >
> > >
> > > --
> > > -- Guozhang
> > >
> >
>


Re: [VOTE] KIP-666: Add Instant-based methods to ReadOnlySessionStore

2020-09-21 Thread Sophie Blee-Goldman
Thanks for pointing out the vote in the discussion thread, this email
somehow skipped my inbox  ¯\_(ツ)_/¯

I'm +1 (non-binding)

-Sophie

On Mon, Sep 7, 2020 at 4:18 AM Jorge Esteban Quilcate Otoya <
quilcate.jo...@gmail.com> wrote:

> Hi everyone,
>
> I'd like to start a thread to vote for KIP-666 and align instant-based
> operations on Interactive Query APIs between Window and Session stores:
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-666%3A+Add+Instant-based+methods+to+ReadOnlySessionStore
>
> Discussion thread:
>
> https://lists.apache.org/thread.html/r21193d599159b0e0002b95e427659fee8833ea0f8577d6f2e098ca8f%40%3Cdev.kafka.apache.org%3E
>
> Thanks!
> Jorge.
>


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #74

2020-09-21 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Replace Java 14 with Java 15 in the README (#9298)

[github] KAFKA-10438: Lazy initialization of record header to reduce memory 
usage (#9223)


--
[...truncated 3.26 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #75

2020-09-21 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Replace Java 14 with Java 15 in the README (#9298)

[github] KAFKA-10438: Lazy initialization of record header to reduce memory 
usage (#9223)


--
[...truncated 3.29 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithPersistentStore STARTED


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #74

2020-09-21 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Replace Java 14 with Java 15 in the README (#9298)


--
[...truncated 3.29 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 


Re: [DISCUSS] KIP-508: Make Suppression State Queriable - rebooted.

2020-09-21 Thread Dongjin Lee
Hi John,

I updated the PR applying the API changes we discussed above. I am now
updating the KIP document.

Thanks,
Dongjin
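
For anyone following along, a minimal sketch of how the discussed API would be
used (illustrative only: enableQuery() is the flag proposed in this thread;
withName() and untilWindowCloses() are the existing API):

    // counts is a KTable<Windowed<String>, Long> from a windowed count()
    KTable<Windowed<String>, Long> suppressed =
        counts.suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded())
                                  .withName("suppressed-counts") // processor name (KIP-307)
                                  .enableQuery());               // proposed: make the buffer queriable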

On Sat, Sep 19, 2020 at 10:42 AM John Roesler  wrote:

> Hi Dongjin,
>
> Yes, that’s right. By the time of KIP-307, we had no choice but to add a
> second name. But we do have a choice with Suppress.
>
> Thanks!
> -John
>
> On Thu, Sep 17, 2020, at 13:14, Dongjin Lee wrote:
> > Hi John,
> >
> > I just reviewed KIP-307. As far as I understood, ...
> >
> > 1. There was Materialized name initially.
> > 2. With KIP-307, Named Operations were added.
> > 3. Now we have two options for materializing suppression. If we take
> > Materialized name here, we have two names for the same operation, which
> is
> > not feasible.
> >
> > Do I understand correctly?
> >
> > > Do you have a use case in mind for having two separate names for the
> > operation and the view?
> >
> > No. I am now entirely convinced with your suggestion.
> >
> > I just started to update the draft implementation. If I understand
> > correctly, please notify me; I will update the KIP by adding the
> discussion
> > above.
> >
> > Best,
> > Dongjin
> >
> > On Thu, Sep 17, 2020 at 11:06 AM John Roesler 
> wrote:
> >
> > > Hi Dongjin,
> > >
> > > Thanks for the reply. Yes, that’s correct, we added that method to name
> > > the operation. But the operation seems synonymous with the view produced
> > > by the operation, right?
> > >
> > > During KIP-307, I remember thinking that it’s unfortunate that we had to
> > > have two different “name” concepts for the same thing just because setting
> > > the name on Materialized is equivalent both to making it queriable and
> > > actually materializing it.
> > >
> > > If we were to reconsider the API, it would be nice to treat these three
> > > as orthogonal:
> > > * specify a name
> > > * flag to make the view queriable
> > > * flag to materialize the view
> > >
> > > That was the context behind my suggestion. Do you have a use case in
> > > mind for having two separate names for the operation and the view?
> > >
> > > Thanks,
> > > John
> > >
> > > On Wed, Sep 16, 2020, at 11:43, Dongjin Lee wrote:
> > > > Hi John,
> > > >
> > > > It seems like the available alternatives at this point are clear:
> > > >
> > > > 1. Pass a queriable name as a separate parameter (i.e.,
> > > > `KTable#suppress(Suppressed, String)`)
> > > > 2. Make use of the Suppression processor name as a queriable name by
> > > > adding an `enableQuery` optional flag to `Suppressed`.
> > > >
> > > > However, I have some doubts about the second approach; as far as I know,
> > > > the processor name was introduced in KIP-307[^1] to make debugging a
> > > > topology easy and understandable. Since the processor name is a concept
> > > > independent of materialization, I feel the first approach is more
> > > > natural and consistent. Is there any specific reason that you prefer
> > > > the second approach?
> > > >
> > > > Thanks,
> > > > Dongjin
> > > >
> > > > [^1]:
> > > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-307%3A+Allow+to+define+custom+processor+names+with+KStreams+DSL
> > > >
> > > >
> > > >
> > > > On Wed, Sep 16, 2020 at 11:48 PM John Roesler 
> > > wrote:
> > > >
> > > > > Hi Dongjin,
> > > > >
> > > > > Yes, that's where I was leaning. Although, I'd prefer adding
> > > > > the option to Suppressed instead of adding a new argument to
> > > > > the method call.
> > > > >
> > > > > What do you think about:
> > > > >
> > > > > class Suppressed {
> > > > > +  public Suppressed enableQuery();
> > > > > }
> > > > >
> > > > > Since Suppressed already has `withName(String)`, it seems
> > > > > like all we need to add is a boolean flag.
> > > > >
> > > > > Does that seem sensible to you?
> > > > >
> > > > > Thanks,
> > > > > -John
> > > > >
> > > > > On Wed, 2020-09-16 at 21:50 +0900, Dongjin Lee wrote:
> > > > > > Hi John,
> > > > > >
> > > > > > > Although it's not great to have "special snowflakes" in the API,
> > > > > > > Choice B does seem safer in the short term. We would basically be
> > > > > > > proposing a temporary API to make the suppressed view queriable
> > > > > > > without a Materialized argument.
> > > > > >
> > > > > > Then, it seems like you prefer `KTable#suppress(Suppressed, String)`
> > > > > > (i.e., queriable name only as a parameter) for this time, and refine
> > > > > > the API with the other related KIPs later.
> > > > > >
> > > > > > Do I understand correctly?
> > > > > >
> > > > > > Thanks,
> > > > > > Dongjin
> > > > > >
> > > > > > On Wed, Sep 16, 2020 at 2:17 AM John Roesler <vvcep...@apache.org> wrote:
> > > > > >
> > > > > > > Hi Dongjin,
> > > > > > >
> > > > > > > Thanks for presenting these options. The concern that
> > > > > > > Matthias brought up is a very deep problem that afflicts all
> > > > > > > operations downstream of windowing operations. It's the same
> > > > > > > thing that 

[GitHub] [kafka-site] viktorsomogyi commented on pull request #220: KAFKA-8360: Docs do not mention RequestQueueSize JMX metric

2020-09-21 Thread GitBox


viktorsomogyi commented on pull request #220:
URL: https://github.com/apache/kafka-site/pull/220#issuecomment-696132637


   Otherwise it looks good, I can approve it once you create the PR in the 
kafka repo.







[GitHub] [kafka-site] viktorsomogyi edited a comment on pull request #220: KAFKA-8360: Docs do not mention RequestQueueSize JMX metric

2020-09-21 Thread GitBox


viktorsomogyi edited a comment on pull request #220:
URL: https://github.com/apache/kafka-site/pull/220#issuecomment-696132637


   Otherwise it looks good, I can approve it once you create the PR in the 
kafka repo (and perhaps get someone to merge it).







Re: [DISCUSS] KIP-665 Kafka Connect Hash SMT

2020-09-21 Thread Brandon Brown
Hi Tom,

The reason I went with a fixed set was to simplify the configuration: for
example, you can say sha256 instead of having to remember that it's SHA-256.
Admittedly, if other digests are implemented later, this list would need
updating as well.

I’m flexible on changing it to a string and letting it be configured with the 
exact name. What do you think, Mickael?

Brandon Brown
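
For reference, a self-contained sketch of what Tom's suggestion would boil
down to (a hypothetical helper, not the actual PR code): pass the configured
name straight to the JVM's MessageDigest, so any supported algorithm works.

    import java.math.BigInteger;
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    // Hypothetical helper, not the SMT itself: any algorithm name the JVM's
    // MessageDigest supports ("MD5", "SHA-1", "SHA-256", ...) is accepted as-is.
    final class HashSketch {
        static String hash(String algorithm, String value) throws NoSuchAlgorithmException {
            MessageDigest md = MessageDigest.getInstance(algorithm);
            byte[] digest = md.digest(value.getBytes(StandardCharsets.UTF_8));
            // Hex-encode, left-padded to the full digest width
            return String.format("%0" + (digest.length * 2) + "x", new BigInteger(1, digest));
        }

        public static void main(String[] args) throws Exception {
            // Unlike MaskField's empty-string replacement, a hashed password is
            // unambiguously "a value was present":
            System.out.println(hash("SHA-256", "secret"));
        }
    }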

> On Sep 21, 2020, at 3:42 AM, Tom Bentley  wrote:
> 
> Hi Brandon and Mickael,
> 
> Is it necessary to fix the supported digest? We could just support whatever
> the JVM's MessageDigest supports?
> 
> Kind regards,
> 
> Tom
> 
>> On Fri, Sep 18, 2020 at 6:00 PM Brandon Brown 
>> wrote:
>> 
>> Thanks Mickael! So the proposed hash functions would be MD5, SHA1, SHA256.
>> 
>> I can expand the motivation on the KIP but here’s where my head is at.
>> MaskField would completely remove the value by setting it to an equivalent
>> null value. One problem with this would be that you’d not be able to know
>> in the case of say a password going through the mask transform it would
>> become “” which could mean that no password was present in the message, or
>> it was removed. However this hash transformer would remove this ambiguity
>> if that makes sense.
>> 
>> Do you think there are other hash functions that should be supported as
>> well?
>> 
>> Thanks,
>> Brandon Brown
>> 
>>> On Sep 18, 2020, at 12:00 PM, Mickael Maison 
>> wrote:
>>> 
>>> Thanks Brandon for the KIP.
>>> 
>>> There's already a built-in transformation (MaskField) that can
>>> obfuscate fields. In the motivation section, it would be nice to
>>> explain the use cases when MaskField is not suitable and when users
>>> would need the proposed transformation.
>>> 
>>> The KIP exposes a "function" configuration to select the hash function
>>> to use. Which hash functions do you propose supporting?
>>> 
 On Thu, Aug 27, 2020 at 10:43 PM  wrote:
 
 
 
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-665%3A+Kafka+Connect+Hash+SMT
 
 The current pr with the proposed changes
 https://github.com/apache/kafka/pull/9057 and the original 3rd party
 contribution which initiated this change
 
>> https://github.com/aiven/aiven-kafka-connect-transforms/issues/9#issuecomment-662378057
>> .
 
 I'm interested in any suggestions for ways to improve this as I think
 it would make a nice addition to the existing SMTs provided by Kafka
 Connect out of the box.
 
 Thanks,
 Brandon
 
 
 
>> 


Re: [DISCUSS] KIP-567: Kafka Cluster Audit (new discussion)

2020-09-21 Thread Viktor Somogyi-Vass
Hi Daniel,

> If the auditor needs access to the details of the action, one could argue
that even the response should be passed down to the auditor.
At this point I don't think we need to include responses in the interface,
but if you have a use case we can consider doing that.

> Is it feasible to convert the Java requests and responses to public API?
Well I think that in this case we would need to actually transform a lot of
classes and that might be a bit too invasive. Although since the protocol
itself *is* a public API it might make sense to have some kind of Java
representation as a public API as well.

> If not, do we have another option to access this info in the auditor?
I think one option would be to do what the original KIP-567 implemented.
Basically we could have an AuditEvent interface that would contain
request-specific data. Its obvious drawback is that it has to be implemented
for most of the 40-something protocols, but on the upside these classes
shouldn't be complicated. I can try to do a PoC with this to see what it
looks like and whether it solves the problem. To be honest I think it would
be better than publishing the request classes as an API, because here we're
restricting access to only what is necessary.

Thanks,
Viktor
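
To make that a bit more concrete, a rough sketch of the shape such an
interface could take (names are illustrative only, pending the PoC):

    // Illustrative, not a committed API. One small, protocol-specific event
    // type exposes just the request fields an auditor needs, instead of the
    // whole AbstractRequest.
    public interface AuditEvent {
        String requestType(); // e.g. "CREATE_TOPICS"
    }

    // Example of a request-specific event:
    public interface TopicCreateAuditEvent extends AuditEvent {
        String topicName();
        int numPartitions();
        short replicationFactor();
    }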



On Fri, Sep 18, 2020 at 8:37 AM Dániel Urbán  wrote:

> Hi,
>
> Thanks for the KIP.
>
> If the auditor needs access to the details of the action, one could argue
> that even the response should be passed down to the auditor.
> Is it feasible to convert the Java requests and responses to public API?
> If not, do we have another option to access this info in the auditor?
> I know that the auditor could just send proper requests through the API to
> the brokers, but that seems like an awful lot of overhead, and could
> introduce timing issues as well.
>
> Daniel
>
>
> Viktor Somogyi-Vass wrote (on Wed, Sep 16, 2020, 17:17):
>
> > One more afterthought on your second point (AbstractRequest): the reason I
> > introduced it in the first place was that this way implementers can access
> > request data. A use case can be if they want to audit a change in
> > configuration or client quotas and not just acknowledge the fact that such
> > an event happened, but also capture the change itself by peeking into the
> > request. Sometimes it can be useful, especially when people want to trace
> > back the order of events and what happened when, not just acknowledge
> > that there was an event of a certain kind. I also recognize that this might
> > be a very loose interpretation of auditing, as it's not strictly related to
> > authorization but rather a way of tracing the admin actions within the
> > cluster. It could even be a different API therefore, but because of the
> > variety of the Kafka APIs it's very hard to give a method that fits all, so
> > it's easier to pass down the AbstractRequest and let the implementation do
> > the extraction of valuable info. So that's why I added this in the first
> > place and I'm interested in your thoughts.
> >
> > On Wed, Sep 16, 2020 at 4:41 PM Viktor Somogyi-Vass <viktorsomo...@gmail.com> wrote:
> >
> > > Hi Mickael,
> > >
> > > Thanks for reviewing the KIP.
> > >
> > > 1.) I just wanted to follow the conventions used with the Authorizer, as
> > > it is built in a similar fashion, although it's true that in KafkaServer
> > > we call the configure() method and then start() on the next line. This
> > > would be the same in Auditor and even simpler, as there aren't any
> > > parameters to start(), so I can remove it. If it turns out there is a
> > > need for it, we can add it later.
> > >
> > > 2.) Yes, this is a very good point, I will remove it; however, in this
> > > case I don't think we need to add the ApiKey as it is already available
> > > in AuthorizableRequestContext.requestType(). One less parameter :).
> > >
> > > 3.) I'll add it. It will simply log important changes in the cluster,
> > > like topic events (create, update, delete, partition or replication
> > > factor change), ACL events, config changes, reassignment, altering log
> > > dirs, offset delete, and group delete, with the authorization info: who
> > > initiated the call, was it authorized, were there any errors. Let me
> > > know if you think there are other APIs I should include.
> > >
> > > 4.) The builder is there mostly for easier usability, but thinking about
> > > it, it doesn't help much, so I removed it. AuditInfo is also a helper
> > > class, so I don't see any value in transforming it into an interface,
> > > but if I simplify it (by removing the builder) it will be cleaner. Would
> > > that work?
> > >
> > > I'll update the KIP to reflect my answers.
> > >
> > > Viktor
> > >
> > >
> > > On Mon, Sep 14, 2020 at 6:02 PM Mickael Maison <mickael.mai...@gmail.com> wrote:
> > >
> > >> Hi Viktor,
> > >>
> > >> Thanks for restarting the discussion on this KIP. Being able to 

Re: [DISCUSSION] Upgrade system tests to python 3

2020-09-21 Thread Nikolay Izhikov
Hello.

I also fixed two system tests that fail in trunk:

streams_upgrade_test.py::StreamsUpgradeTest.test_version_probing_upgrade
streams_static_membership_test.py

Please, take a look at my PR [1]

[1] https://github.com/apache/kafka/pull/9312

> On Sep 20, 2020, at 06:11, Guozhang Wang wrote:
> 
> I've triggered a system test on top of your branch.
> 
> Maybe you could also re-run the Jenkins unit tests, since currently all of
> them fail but you've only touched system tests, so I'd like to confirm
> at least one successful run.
> 
> On Wed, Sep 16, 2020 at 3:37 AM Nikolay Izhikov  wrote:
> 
>> Hello, Guozhang.
>> 
>>> I can help run the test suite once your PR is cleanly rebased to verify
>> the whole suite works
>> 
>> Thank you for joining to the review.
>> 
>> 1. PR rebased on the current trunk.
>> 
>> 2. I triggered all tests in my private environment to verify them after
>> rebase.
>>    I will let you know once the tests pass in my environment.
>> 
>> 3. We need a new ducktape release [1] to be able to merge PR [2].
>>    For now, the PR is based on the ducktape trunk branch [3], not a
>> specific release.
>>    If the ducktape team needs any help with the release, please let me
>> know.
>> 
>> [1] https://github.com/confluentinc/ducktape/issues/245
>> [2] https://github.com/apache/kafka/pull/9196
>> [3]
>> https://github.com/apache/kafka/pull/9196/files#diff-9235a7bdb1ca9268681c0e56f3f3609bR39
>> 
>>> On Sep 16, 2020, at 07:32, Guozhang Wang wrote:
>>> 
>>> Hello Nikolay,
>>> 
>>> I can help run the test suite once your PR is cleanly rebased to verify
>> the
>>> whole suite works and then I can merge (I'm trusting Ivan and Magnus here
>>> for their reviews :)
>>> 
>>> Guozhang
>>> 
>>> On Mon, Sep 14, 2020 at 3:56 AM Nikolay Izhikov 
>> wrote:
>>> 
 Hello!
 
 I got 2 approvals from Ivan Daschinskiy and Magnus Edenhill.
 Committers, please, join the review.
 
> On Sep 3, 2020, at 11:06, Nikolay Izhikov wrote:
> 
> Hello!
> 
> Just a friendly reminder.
> 
> The patch to resolve some technical debt (python2 in system tests) is ready!
> Can someone, please, take a look?
> 
> https://github.com/apache/kafka/pull/9196
> 
>> On Aug 28, 2020, at 11:19, Nikolay Izhikov wrote:
>> 
>> Hello!
>> 
>> Any feedback on this?
>> What I should additionally do to prepare system tests migration?
>> 
>>> On Aug 24, 2020, at 11:17, Nikolay Izhikov wrote:
>>> 
>>> Hello.
>>> 
>>> PR [1] is ready.
>>> Please, review.
>>> 
>>> But, I need help with the two following questions:
>>> 
>>> 1. We need a new release of ducktape which includes fixes [2], [3]
>> for
 python3.
>>> I created the issue in ducktape repo [4].
>>> Can someone help me with the release?
>>> 
>>> 2. I know that some companies run system tests for the trunk on a
 regular bases.
>>> Can someone show me some results of these runs?
>>> So, I can compare failures in my PR and in the trunk.
>>> 
>>> Results [5] of run all for my PR available in the ticket [6]
>>> 
>>> ```
>>> SESSION REPORT (ALL TESTS)
>>> ducktape version: 0.8.0
>>> session_id:   2020-08-23--002
>>> run time: 1010 minutes 46.483 seconds
>>> tests run:684
>>> passed:   505
>>> failed:   9
>>> ignored:  170
>>> ```
>>> 
>>> [1] https://github.com/apache/kafka/pull/9196
>>> [2]
 
>> https://github.com/confluentinc/ducktape/commit/23bd5ab53802e3a1e1da1ddf3630934f33b02305
>>> [3]
 
>> https://github.com/confluentinc/ducktape/commit/bfe53712f83b025832d29a43cde3de3d7803106f
>>> [4] https://github.com/confluentinc/ducktape/issues/245
>>> [5]
 https://issues.apache.org/jira/secure/attachment/13010366/report.txt
>>> [6] https://issues.apache.org/jira/browse/KAFKA-10402
>>> 
 On Aug 14, 2020, at 21:26, Ismael Juma wrote:
 
 +1
 
 On Fri, Aug 14, 2020 at 7:42 AM John Roesler 
 wrote:
 
> Thanks Nikolay,
> 
> No objection. This would be very nice to have.
> 
> Thanks,
> John
> 
> On Fri, Aug 14, 2020, at 09:18, Nikolay Izhikov wrote:
>> Hello.
>> 
>>> If anyone's interested in porting it to Python 3 it would be a
>> good
> change.
>> 
>> I’ve created a ticket [1] to upgrade system tests to python3.
>> Does someone have any additional inputs or objections for this
 change?
>> 
>> [1] https://issues.apache.org/jira/browse/KAFKA-10402
>> 
>> 
>>> On Jul 1, 2020, at 00:26, Gokul Ramanan Subramanian <gokul24...@gmail.com> wrote:
>>> 
>>> Thanks Colin.
>>> 
>>> While at the subject of 

Re: [VOTE] KIP-663: API to Start and Shut Down Stream Threads and to Request Closing of Kafka Streams Clients

2020-09-21 Thread Bruno Cadonna

Hi,

I would like to restart the voting on KIP-663 from scratch. The KIP proposes
to add methods to the Kafka Streams client to add and remove stream threads
during execution.


https://cwiki.apache.org/confluence/display/KAFKA/KIP-663%3A+API+to+Start+and+Shut+Down+Stream+Threads

Matthias, if you are still +1, please vote again.

Best,
Bruno
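
For readers skimming the quoted discussion below, a sketch of the method
shapes being debated (illustrative only; the KIP page above is authoritative):

    import java.util.Optional;

    // Illustrative rendering of the return-type discussion below: return the
    // affected thread's name, or empty if no thread could be added/removed.
    public interface StreamThreadScaling {
        Optional<String> addStreamThread();
        Optional<String> removeStreamThread();
    }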

On 04.09.20 23:12, John Roesler wrote:

Hi Sophie,

Uh, oh, it's never a good sign when the discussion moves
into the vote thread :)

I agree with you, it seems like a good touch for
removeStreamThread() to return the name of the thread that
got removed, rather than a boolean flag. Maybe the return
value would be `null` if there is no thread to remove.

If we go that way, I'd suggest that addStreamThread() also
return the name of the newly created thread, or null if no
thread can be created right now.

I'm not completely sure if I think that callers of this
method would know exactly how many threads there are. Sure,
if a human being is sitting there looking at the metrics or
logs and decides to call the method, it would work out, but
I'd expect this kind of method to find its way into
automated tooling that reacts to things like current system
load or resource saturation. Those kinds of toolchains often
are part of a distributed system, and it's probably not that
easy to guarantee that the thread count they observe is
fully consistent with the number of threads that are
actually running. Therefore, an in-situ `int
numStreamThreads()` method might not be a bad idea. Then
again, it seems sort of optional. A caller can catch an
exception or react to a `null` return value just the same
either way. Having both add/remove methods behave similarly
is probably more valuable.

Thanks,
-John


On Thu, 2020-09-03 at 12:15 -0700, Sophie Blee-Goldman
wrote:

Hey, sorry for the late reply, I just have one minor suggestion. Since we
don't make any guarantees about which thread gets removed or allow the user
to specify, I think we should return either the index or full name of the
thread that does get removed by removeThread().

I know you just updated the KIP to return true/false if there are/aren't any
threads to be removed, but I think this would be more appropriate as an
exception than as a return type. I think it's reasonable to expect users to
have some sense to how many threads are remaining, and not try to remove
a thread when there is none left. To me, that indicates something wrong
with the user application code and should be treated as an exceptional case.
I don't think the same code clarity argument applies here as to the
addStreamThread() case, as there's no reason for an application to be
looping and retrying removeStreamThread()  since if that fails, it's because
there are no threads left and thus it will continue to always fail. And if
the
user actually wants to shut down all threads, they should just close the
whole application rather than call removeStreamThread() in a loop.

While I generally think it should be straightforward for users to track how
many stream threads they have running, maybe it would be nice to add
a small utility method that does this for them. Something like

// Returns the number of currently alive threads
int runningStreamThreads();

On Thu, Sep 3, 2020 at 7:41 AM Matthias J. Sax  wrote:


+1 (binding)

On 9/3/20 6:16 AM, Bruno Cadonna wrote:

Hi,

I would like to start the voting on KIP-663 that proposes to add methods
to the Kafka Streams client to add and remove stream threads during
execution.



https://cwiki.apache.org/confluence/display/KAFKA/KIP-663%3A+API+to+Start+and+Shut+Down+Stream+Threads


Best,
Bruno




[jira] [Created] (KAFKA-10506) Ssl connectors and tasks have incorrect statuses

2020-09-21 Thread Lobashin Denis (Jira)
Lobashin Denis created KAFKA-10506:
--

 Summary: Ssl connectors and tasks have incorrect statuses
 Key: KAFKA-10506
 URL: https://issues.apache.org/jira/browse/KAFKA-10506
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.4.0
Reporter: Lobashin Denis


When connecting to a broker via SSL and the producer.* or consumer.* properties
are not set, connectors and tasks incorrectly report the RUNNING status. They
should be FAILED, because there are SSL errors in the logs.

For example, the FILE-TEST connector has the RUNNING status, although there are
SSL errors in the producer logs and the file file.txt is missing.
errors in the logs of the ssl producer and the lack of a file file.txt

 

GET https://host:8084/connectors/FILE-TEST/status
{noformat}
{
  "name": "FILE-TEST",
  "connector": {
    "state": "RUNNING",
    "worker_id": "host:8084"
  },
  "tasks": [
    {
      "id": 0,
      "state": "RUNNING",
      "worker_id": "host:8084"
    }
  ],
  "type": "source"
}
{noformat}
 
{noformat}
connect.log

[2020-09-21 09:56:15,794] DEBUG [Producer 
clientId=connector-producer-FILE-TEST-0] Connection with host/1.2.3.4 
disconnected (org.apache.kafka.common.network.Selector)[2020-09-21 
09:56:15,794] DEBUG [Producer clientId=connector-producer-FILE-TEST-0] 
Connection with host/1.2.3.4 disconnected 
(org.apache.kafka.common.network.Selector)java.io.EOFException at 
org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:119)
 at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424) 
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385) at 
org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651) at 
org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572) 
at org.apache.kafka.common.network.Selector.poll(Selector.java:483) at 
org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:540) at 
org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:335) at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:244) at 
java.lang.Thread.run(Thread.java:748)


[2020-09-21 09:56:16,648] DEBUG [Producer 
clientId=connector-producer-FILE-TEST-0] Give up sending metadata request since 
no node is available (org.apache.kafka.clients.NetworkClient)
[2020-09-21 09:56:16,690] DEBUG [Consumer clientId=consumer-connect-cluster-1, 
groupId=connect-cluster] Node 1 sent an incremental fetch response for session 
542278996 with 0 response partition(s), 1 implied partition(s) 
(org.apache.kafka.clients.FetchSessionHandler)
[2020-09-21 09:56:16,691] DEBUG [Consumer clientId=consumer-connect-cluster-1, 
groupId=connect-cluster] Added READ_UNCOMMITTED fetch request for partition 
connect-offsets-0 at position FetchPosition{offset=0, 
offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=host:9093 (id: 
1 rack: null), epoch=0}} to node host:9093 (id: 1 rack: null) 
(org.apache.kafka.clients.consumer.internals.Fetcher)
[2020-09-21 09:56:16,691] DEBUG [Consumer clientId=consumer-connect-cluster-1, 
groupId=connect-cluster] Built incremental fetch (sessionId=542278996, 
epoch=512) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 
partition(s) out of 1 partition(s) 
(org.apache.kafka.clients.FetchSessionHandler)
[2020-09-21 09:56:16,691] DEBUG [Consumer clientId=consumer-connect-cluster-1, 
groupId=connect-cluster] Sending READ_UNCOMMITTED 
IncrementalFetchRequest(toSend=(), toForget=(), implied=(connect-offsets-0)) to 
broker tkli-host:9093 (id: 1 rack: null) 
(org.apache.kafka.clients.consumer.internals.Fetcher)


messages:
Sep 21 09:56:46 host kafka-server-start: [2020-09-21 09:56:46,987] INFO 
[SocketServer brokerId=1] Failed authentication with /1.2.3.4 (SSL handshake 
failed) (org.apache.kafka.common.network.Selector)

{noformat}
 

 

connect-distributed.properties

 
{noformat}
bootstrap.servers=host:9093
config.storage.replication.factor=1
config.storage.topic=connect-configs
group.id=connect-cluster
key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
listeners=https://host:8084
listeners.https.ssl.client.auth=required
listeners.https.ssl.enabled.protocols=TLSv1.2
listeners.https.ssl.key.password=q1w2e3r4
listeners.https.ssl.keystore.location=connect.keystore.jks
listeners.https.ssl.keystore.password=q1w2e3r4
listeners.https.ssl.truststore.location=connect.truststore.jks
listeners.https.ssl.truststore.password=q1w2e3r4
offset.flush.interval.ms=1
offset.storage.replication.factor=1
offset.storage.topic=connect-offsets
plugin.path=share/java
rest.advertised.listener=https
security.protocol=SSL
ssl.client.auth=required
ssl.enabled.protocols=TLSv1.2
ssl.endpoint.identification.algorithm=https
ssl.key.password=q1w2e3r4
ssl.keystore.location=connect.keystore.jks
ssl.keystore.password=q1w2e3r4
ssl.truststore.location=connect.truststore.jks
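
# A likely workaround, assuming (as the logs above suggest) that the
# worker-level ssl.* settings are not applied to the connector tasks'
# producers and consumers: duplicate them with the producer. (and, for sink
# connectors, consumer.) prefixes, e.g.
producer.security.protocol=SSL
producer.ssl.keystore.location=connect.keystore.jks
producer.ssl.keystore.password=q1w2e3r4
producer.ssl.key.password=q1w2e3r4
producer.ssl.truststore.location=connect.truststore.jks
producer.ssl.truststore.password=q1w2e3r4
{noformat}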

Re: [DISCUSS] KIP-660: Pluggable ReplicaAssignor

2020-09-21 Thread Tom Bentley
Hi Mickael,

A few thoughts about the ReplicaAssignor contract:

1. What happens if a ReplicaAssignor impl returns a Map where some
assignments don't meet the given replication factor?
2. Fixing the signature of assignReplicasToBrokers() as you have would make
it hard to pass extra information in the future (e.g. maybe someone comes
up with a use case where passing the clientId would be needed) because it
would require the interface to be changed. If you factored all the parameters
into some new type then the signature could be
assignReplicasToBrokers(RequiredReplicaAssignment) and adding any new
properties to RequiredReplicaAssignment wouldn't break the contract.
3. When an assignor throws ReplicaAssignorException, what error code will be
returned to the client?

Also, this sentence got me thinking:

> If multiple topics are present in the request, AdminManager will update
the Cluster object so the ReplicaAssignor class has access to the up to
date cluster metadata.

Previously I've looked at how we can improve Kafka's pluggable policy
support to pass more of the cluster state to policy implementations. A
similar problem exists there, but the more cluster state you pass the
harder it is to incrementally change it as you iterate through the topics
to be created/modified. This likely isn't a problem here and now, but it
could limit any future changes to the pluggable assignors. Did you consider
the alternative of the assignor just being passed a Set of assignments?
That means you can just pass the cluster state as it exists at the time. It
also gives the implementation more information to work with to find more
optimal assignments. For example, it could perform a bin packing type
assignment which found a better optimum for the whole collection of topics
than one which was only told about all the topics in the request
sequentially.

Otherwise this looks like a valuable feature to me.

Kind regards,

Tom
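
To make point 2 concrete, a rough sketch (all names are hypothetical, not
KIP text):

    import java.util.List;
    import java.util.Map;

    // Hypothetical shape for point 2 above: fold the inputs into one
    // parameter object, so new fields (clientId, cluster state, ...) can be
    // added later without breaking existing implementations.
    public interface ReplicaAssignor {
        // partition id -> ordered list of broker ids (first entry = preferred leader)
        Map<Integer, List<Integer>> assignReplicasToBrokers(RequiredReplicaAssignment assignment);
    }

    interface RequiredReplicaAssignment {
        String topic();
        int numPartitions();
        short replicationFactor();
    }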





On Fri, Sep 11, 2020 at 6:19 PM Robert Barrett 
wrote:

> Thanks Mickael, I think adding the new Exception resolves my concerns.
>
> On Thu, Sep 3, 2020 at 9:47 AM Mickael Maison 
> wrote:
>
> > Thanks Robert and Ryanne for the feedback.
> >
> > ReplicaAssignor implementations can throw an exception to indicate an
> > assignment can't be computed. This is already what the current round
> > robin assignor does. Unfortunately at the moment, there are no generic
> > error codes if it fails, it's either INVALID_PARTITIONS,
> > INVALID_REPLICATION_FACTOR or worse UNKNOWN_SERVER_ERROR.
> >
> > So I think it would be nice to introduce a new Exception/Error code to
> > cover any failures in the assignor and avoid UNKNOWN_SERVER_ERROR.
> >
> > I've updated the KIP accordingly, let me know if you have more questions.
> >
> > On Fri, Aug 28, 2020 at 4:49 PM Ryanne Dolan 
> > wrote:
> > >
> > > Thanks Mickael, the KIP makes sense to me, esp for cases where an
> > external
> > > system (like cruise control or an operator) knows more about the target
> > > cluster state than the broker does.
> > >
> > > Ryanne
> > >
> > > On Thu, Aug 20, 2020, 10:46 AM Mickael Maison <
> mickael.mai...@gmail.com>
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > I've created KIP-660 to make the replica assignment logic pluggable.
> > > >
> > > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-660%3A+Pluggable+ReplicaAssignor
> > > >
> > > > Please take a look and let me know if you have any feedback.
> > > >
> > > > Thanks
> > > >
> >
>


[jira] [Created] (KAFKA-10505) [SystemTests] streams_static_membership.py and streams_upgradet_test.py fails

2020-09-21 Thread Nikolay Izhikov (Jira)
Nikolay Izhikov created KAFKA-10505:
---

 Summary: [SystemTests] streams_static_membership.py and 
streams_upgradet_test.py fails
 Key: KAFKA-10505
 URL: https://issues.apache.org/jira/browse/KAFKA-10505
 Project: Kafka
  Issue Type: Improvement
Reporter: Nikolay Izhikov


Two system tests fail for the same reason:

streams_static_membership_test.py:

{noformat}
[INFO:2020-09-21 00:33:45,803]: RunnerClient: Loading test {'directory': 
'/opt/kafka-dev/tests/kafkatest/tests/streams', 'file_name': 
'streams_static_membership_test.py', 'cls_name': 'StreamsStaticMembershipTest', 
'method_name': 
'test_rolling_bounces_will_not_trigger_rebalance_under_static_membership', 
'injected_args': None}
[INFO:2020-09-21 00:33:45,818]: RunnerClient: 
kafkatest.tests.streams.streams_static_membership_test.StreamsStaticMembershipTest.test_rolling_bounces_will_not_trigger_rebalance_under_static_membership:
 Setting up...
[INFO:2020-09-21 00:33:45,819]: RunnerClient: 
kafkatest.tests.streams.streams_static_membership_test.StreamsStaticMembershipTest.test_rolling_bounces_will_not_trigger_rebalance_under_static_membership:
 Running...
[INFO:2020-09-21 00:35:24,906]: RunnerClient: 
kafkatest.tests.streams.streams_static_membership_test.StreamsStaticMembershipTest.test_rolling_bounces_will_not_trigger_rebalance_under_static_membership:
 FAIL: invalid literal for int() with base 10: 
"Generation{generationId=5,memberId='consumer-A-3-a9a925b2-2875-4756-8649-49f516b6cae1',protocol='stream'}\n"
Traceback (most recent call last):
  File 
"/usr/local/lib/python3.7/dist-packages/ducktape/tests/runner_client.py", line 
134, in run
data = self.run_test()
  File 
"/usr/local/lib/python3.7/dist-packages/ducktape/tests/runner_client.py", line 
192, in run_test
return self.test_context.function(self.test)
  File 
"/opt/kafka-dev/tests/kafkatest/tests/streams/streams_static_membership_test.py",
 line 88, in 
test_rolling_bounces_will_not_trigger_rebalance_under_static_membership
generation = int(generation)
ValueError: invalid literal for int() with base 10: 
"Generation{generationId=5,memberId='consumer-A-3-a9a925b2-2875-4756-8649-49f516b6cae1',protocol='stream'}\n"
{noformat}

streams_upgrade_test.py::StreamsUpgradeTest.test_version_probing_upgrade

{noformat}
test_id:
kafkatest.tests.streams.streams_upgrade_test.StreamsUpgradeTest.test_version_probing_upgrade
status: FAIL
run time:   1 minute 3.362 seconds


invalid literal for int() with base 10: 
"Generation{generationId=6,memberId='StreamsUpgradeTest-8a6ac110-1c65-40eb-af05-8bee270f1701-StreamThread-1-consumer-207de872-6588-407a-8485-101a19ba2bf0',protocol='stream'}\n"
Traceback (most recent call last):
  File 
"/usr/local/lib/python3.7/dist-packages/ducktape/tests/runner_client.py", line 
134, in run
data = self.run_test()
  File 
"/usr/local/lib/python3.7/dist-packages/ducktape/tests/runner_client.py", line 
192, in run_test
return self.test_context.function(self.test)
  File "/opt/kafka-dev/tests/kafkatest/tests/streams/streams_upgrade_test.py", 
line 273, in test_version_probing_upgrade
current_generation = self.do_rolling_bounce(p, counter, current_generation)
  File "/opt/kafka-dev/tests/kafkatest/tests/streams/streams_upgrade_test.py", 
line 511, in do_rolling_bounce
processor_generation = self.extract_highest_generation(processor_found)
  File "/opt/kafka-dev/tests/kafkatest/tests/streams/streams_upgrade_test.py", 
line 533, in extract_highest_generation
return int(found_generations[-1])
ValueError: invalid literal for int() with base 10: 
"Generation{generationId=6,memberId='StreamsUpgradeTest-8a6ac110-1c65-40eb-af05-8bee270f1701-StreamThread-1-consumer-207de872-6588-407a-8485-101a19ba2bf0',protocol='stream'}\n"
{noformat}


To fix this, we need to extract the generationId number from the log string.
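
A minimal sketch of such a fix, assuming the log format shown in the
tracebacks above (the helper name is illustrative):

{noformat}
import re

def extract_generation_id(line):
    # Pull the number out of "Generation{generationId=5,memberId='...',protocol='stream'}"
    match = re.search(r"generationId=(\d+)", line)
    if match is None:
        raise ValueError("no generationId found in: %r" % line)
    return int(match.group(1))

assert extract_generation_id(
    "Generation{generationId=5,memberId='consumer-A-3',protocol='stream'}\n") == 5
{noformat}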



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-666: Add Instant-based methods to ReadOnlySessionStore

2020-09-21 Thread Jorge Esteban Quilcate Otoya
Thanks John!

PS. vote thread started
https://lists.apache.org/thread.html/r5863299712063d4cb4be139ce4ab0533ce66efe04c1f7e993666b09c%40%3Cdev.kafka.apache.org%3E
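
For reference, the default-implementation approach discussed below amounts to
something like this (sketch only; the exact signatures live in the KIP):

```
import java.time.Instant;
import org.apache.kafka.common.utils.Bytes;

// Sketch only: an Instant-based default that delegates to the existing
// long-based method, mirroring what was done for the WindowStore methods.
interface SessionFetchExample {
    byte[] fetchSession(Bytes key, long sessionStartTime, long sessionEndTime);

    default byte[] fetchSession(Bytes key, Instant sessionStartTime, Instant sessionEndTime) {
        return fetchSession(key, sessionStartTime.toEpochMilli(), sessionEndTime.toEpochMilli());
    }
}
```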

On Sat, Sep 19, 2020 at 2:46 AM John Roesler  wrote:

> Thanks for the KIP, Jorge!
>
> Sorry it took me so long. I just reviewed the KIP, and it looks good to
> me.
>
> Thanks,
> John
>
> On Tue, Sep 1, 2020, at 12:35, Jorge Esteban Quilcate Otoya wrote:
> > Thanks Sophie!
> >
> > > one nit: you missed updating the startTime long to Instant in both
> > appearances of the fetchSession(key, startTime, sessionEndTime) method.
> >
> > Agreed. I'm fixing this on the KIP.
> >
> > > Also, I think by "startTime" you actually meant
> "earliestSessionEndTime".
> >
> > Given the implementation usage of these variables, I think it refers to
> > session start and end time, e.g. in `InMemorySessionStore`:
> >
> > ```
> > public byte[] fetchSession(final Bytes key, final long startTime, final long endTime) {
> >     removeExpiredSegments();
> >
> >     Objects.requireNonNull(key, "key cannot be null");
> >
> >     // Only need to search if the record hasn't expired yet
> >     if (endTime > observedStreamTime - retentionPeriod) {
> >         final ConcurrentNavigableMap<Bytes, ConcurrentNavigableMap<Long, byte[]>> keyMap = endTimeMap.get(endTime);
> >         if (keyMap != null) {
> >             final ConcurrentNavigableMap<Long, byte[]> startTimeMap = keyMap.get(key);
> >             if (startTimeMap != null) {
> >                 return startTimeMap.get(startTime);
> >             }
> >         }
> >     }
> >     return null;
> > }
> > ```
> >
> > > One question I do have is whether we really need to provide a default
> > implementation
> > > that throws UnsupportedOperationException? Actually I'm wondering if we
> > shouldn't
> > > do something similar to the WindowStore methods, and provide a default
> > implementation
> > > on the SessionStore interface which then just calls the corresponding
> > long-based method.
> >
> > I'll be happy with this approach. I was trying to align it with the
> > approach we followed on KIP-617, but you're right in the case of
> > WindowStore we didn't add default implementations. I'll update the KIP
> > accordingly and wait for more feedback.
> >
> > Cheers,
> > Jorge.
> >
> >
> >
> > On Tue, Sep 1, 2020 at 4:51 AM Sophie Blee-Goldman 
> > wrote:
> >
> > > Thanks for bringing the IQ API into alignment -- the proposal looks
> good,
> > > although
> > > one nit: you missed updating the startTime long to Instant in both
> > > appearances of
> > > the fetchSession(key, startTime, sessionEndTime) method. Also, I think
> by
> > > "startTime"
> > > you actually meant "earliestSessionEndTime".
> > >
> > > One question I do have is whether we really need to provide a default
> > > implementation
> > > that throws UnsupportedOperationException? Actually I'm wondering if we
> > > shouldn't
> > > do something similar to the WindowStore methods, and provide a default
> > > implementation
> > > on the SessionStore interface which then just calls the corresponding
> > > long-based method.
> > > WDYT?
> > >
> > > -Sophie
> > >
> > > On Fri, Aug 28, 2020 at 11:31 AM Jorge Esteban Quilcate Otoya <
> > > quilcate.jo...@gmail.com> wrote:
> > >
> > > > Hi everyone,
> > > >
> > > > I'd like to discuss the following proposal to align IQ Session Store
> API
> > > > with the Window Store one.
> > > >
> > > >
> > > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-666%3A+Add+Instant-based+methods+to+ReadOnlySessionStore
> > > >
> > > > Looking forward to your feedback.
> > > >
> > > > Cheers,
> > > > Jorge.
> > > >
> > >
> >
>


Re: [VOTE] KIP-578: Add configuration to limit number of partitions

2020-09-21 Thread Gokul Ramanan Subramanian
Hi, at this point, this KIP has one binding vote from Harsha.

Does anyone else have an opinion on this KIP? Do you think we should go
forward with this KIP once we release Kafka 3.0, since there wouldn't be
the ZK backdoor in 3.0, making the partition limits properly enforceable?

Thanks.

On Wed, Aug 5, 2020 at 5:59 PM Harsha Ch  wrote:

> Thanks for the updates Gokul.  +1 (binding).
>
> Thanks,
>
> Harsha
>
> On Wed, Aug 05, 2020 at 8:18 AM, Gokul Ramanan Subramanian <
> gokul24...@gmail.com > wrote:
>
> >
> >
> >
> > Hi.
> >
> >
> >
> > I request the community to kindly resume the voting process for KIP-578.
> > If you have further comments, we can address those as well.
> >
> >
> >
> > Thanks and cheers.
> >
> >
> >
> > On Tue, Jul 21, 2020 at 2:07 PM Gokul Ramanan Subramanian <gokul2411s@gmail.com> wrote:
> >
> >
> >>
> >>
> >> Hi.
> >>
> >>
> >>
> >> Can we resume the voting process for KIP-578? I have addressed
> additional
> >> comments by Boyang and Ismael.
> >>
> >>
> >>
> >> Thanks.
> >>
> >>
> >>
> >> On Mon, Jun 8, 2020 at 9:09 AM Gokul Ramanan Subramanian <gokul2411s@gmail.com> wrote:
> >>
> >>
> >>>
> >>>
> >>> Hi. Can we resume the voting process for KIP-578? Thanks.
> >>>
> >>>
> >>>
> >>> On Mon, Jun 1, 2020 at 11:09 AM Gokul Ramanan Subramanian < gokul2411s@
> gmail.
> >>> com ( gokul24...@gmail.com ) > wrote:
> >>>
> >>>
> 
> 
>  Thanks Colin. Have updated the KIP per your recommendations. Let me
> know
>  what you think.
> 
> 
> 
>  Thanks Harsha for the vote.
> 
> 
> 
>  On Wed, May 27, 2020 at 8:17 PM Colin McCabe <cmccabe@apache.org> wrote:
> 
> 
> >
> >
> > Hi Gokul Ramanan Subramanian,
> >
> >
> >
> > Thanks for the KIP.
> >
> >
> >
> > Can you please modify the KIP to remove the reference to the
> deprecated
> > --zookeeper flag? This is not how kafka-configs. sh (
> > http://kafka-configs.sh/ ) is supposed to be used in new versions
> of Kafka.
> > You get a warning message if you do use this deprecated flag. As
> described
> > in KIP-604, we are removing the --zookeeper flag in the Kafka 3.0
> release.
> > It also causes problems when people use the deprecated access mode--
> for
> > example, as you note in this KIP, it bypasses resource limits such
> as the
> > ones described here.
> >
> >
> >
> > Instead of WILL_EXCEED_PARTITION_LIMITS, how about RESOURCE_LIMIT_REACHED?
> > Then the error string can contain the detailed message about which
> > resource limit was hit (per-broker limit, per-cluster limit, whatever). It
> > would also be good to spell out that CreateTopicsPolicy plugins can also
> > throw this exception, for consistency.
> >
> >
> >
> > I realize that 2 billion partitions seems like a very big number. However,
> > filesystems have had to transition to 64-bit inode numbers as time has
> > gone on. There doesn't seem to be any performance reason why this should
> > be a 31-bit number, so let's just make these configurations longs, not
> > ints.
> >
> >
> >
> > best,
> > Colin
> >
> >
> >
> > On Wed, May 27, 2020, at 09:48, Harsha Chintalapani wrote:
> >
> >> Thanks for the KIP Gokul. This will be really useful for our use cases as
> >> well.
> >> +1 (binding).
> >>
> >> -Harsha
> >> -Harsha
> >>
> >>
> >>
> >> On Tue, May 26, 2020 at 12:33 AM, Gokul Ramanan Subramanian <gokul2411s@gmail.com> wrote:
> >>
> >>
> >>>
> >>>
> >>> Hi.
> >>>
> >>>
> >>>
> >>> Any votes for this?
> >>>
> >>>
> >>>
> >>> Thanks.
> >>>
> >>>
> >>>
> >>> On Tue, May 12, 2020 at 11:36 AM Gokul Ramanan Subramanian <gokul2411s@gmail.com> wrote:
> >>> gmail. com ( http://gmail.com/ ) > wrote:
> >>>
> >>>
> >>>
> >>> Hello,
> >>>
> >>>
> >>>
> >>> I'd like to initialize voting on KIP-578:
> >>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-578%3A+Add+configuration+to+limit+number+of+partitions
> >>>
> >>>
> >>>
> >>>
> >>> Got some good feedback from Stanislav Kozlovski, Alexandre Dupriez and Tom
> >>> Bentley on the discussion thread. I have addressed their comments.

Re: [DISCUSS] KIP-665 Kafka Connect Hash SMT

2020-09-21 Thread Tom Bentley
Hi Brandon and Mickael,

Is it necessary to fix the supported digest? We could just support whatever
the JVM's MessageDigest supports?

Kind regards,

Tom

On Fri, Sep 18, 2020 at 6:00 PM Brandon Brown 
wrote:

> Thanks Mickael! So the proposed hash functions would be MD5, SHA1, SHA256.
>
> I can expand the motivation on the KIP but here’s where my head is at.
> MaskField would completely remove the value by setting it to an equivalent
> null value. One problem with this would be that you’d not be able to know
> in the case of say a password going through the mask transform it would
> become “” which could mean that no password was present in the message, or
> it was removed. However this hash transformer would remove this ambiguity
> if that makes sense.
>
> Do you think there are other hash functions that should be supported as
> well?
>
> Thanks,
> Brandon Brown
>
> > On Sep 18, 2020, at 12:00 PM, Mickael Maison 
> wrote:
> >
> > Thanks Brandon for the KIP.
> >
> > There's already a built-in transformation (MaskField) that can
> > obfuscate fields. In the motivation section, it would be nice to
> > explain the use cases when MaskField is not suitable and when users
> > would need the proposed transformation.
> >
> > The KIP exposes a "function" configuration to select the hash function
> > to use. Which hash functions do you propose supporting?
> >
> >> On Thu, Aug 27, 2020 at 10:43 PM  wrote:
> >>
> >>
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-665%3A+Kafka+Connect+Hash+SMT
> >>
> >> The current pr with the proposed changes
> >> https://github.com/apache/kafka/pull/9057 and the original 3rd party
> >> contribution which initiated this change
> >>
> https://github.com/aiven/aiven-kafka-connect-transforms/issues/9#issuecomment-662378057
> .
> >>
> >> I'm interested in any suggestions for ways to improve this as I think
> >> it would make a nice addition to the existing SMTs provided by Kafka
> >> Connect out of the box.
> >>
> >> Thanks,
> >> Brandon
> >>
> >>
> >>
>


Re: Contributor access

2020-09-21 Thread Mickael Maison
Done! Welcome to the Kafka community

On Sun, Sep 20, 2020 at 7:51 AM Piotr Rżysko  wrote:
>
> Hello,
>
> I would like to contribute to Kafka. Can you please add me to the
> contributor list?
> My JIRA username: piotr.rzysko
>
> Best regards,
> Piotr Rżysko